
Search results for refs:split_key_device (1 result, sorted by relevance)

/aosp_15_r20/external/pytorch/aten/src/ATen/native/transformers/cuda/mem_eff_attention/kernel_backward.h

 733   CUTLASS_HOST_DEVICE int16_t split_key_device() const {            in split_key_device()
 754   workspace_gv += workspace_elements_gv() * split_key_device() /    in advance_to_block()
 756   workspace += workspace_elements_gk() * split_key_device() /       in advance_to_block()
1310   int32_t key_start = p.split_key_device() * kBlockSizeJ;           in attention_kernel()
2066   p.split_key_device() + 1,                                         in processBlockIJ()
2265   return (p.split_key_device() * kBlockSizeI) % getQueryEnd(p);     in getQueryStartShift()