
Searched refs: can_use_mem_efficient_attention (Results 1 – 3 of 3), sorted by relevance

/aosp_15_r20/external/pytorch/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp
    602  bool can_use_mem_efficient_attention(sdp_params const& params, bool debug) {    [in can_use_mem_efficient_attention()]
    702  if (sdp::can_use_mem_efficient_attention(kernel_params, print_debug)) {         [in select_sdp_backend()]
    724  sdp::can_use_mem_efficient_attention(kernel_params, print_debug);               [in select_sdp_backend()]

/aosp_15_r20/external/pytorch/aten/src/ATen/native/transformers/cuda/sdp_utils.h
     14  C10_EXPORT bool can_use_mem_efficient_attention(sdp_params const& params, bool debug);
/aosp_15_r20/external/pytorch/torch/csrc/Module.cpp
   2047  return sdp::can_use_mem_efficient_attention(params, debug);                     [in initModule()]