
Searched full:compositeexplicitautograd (Results 1 – 25 of 79) sorted by relevance


/aosp_15_r20/external/pytorch/aten/src/ATen/native/
native_functions.yaml
99 CompositeExplicitAutograd: _fw_primal
104 CompositeExplicitAutograd: _make_dual
125 # - needs to be CompositeExplicitAutograd for jvp support in functorch.
127 # CompositeExplicitAutograd makes sure the TensorWrapper is unwrapped.
133 CompositeExplicitAutograd: _new_zeros_with_same_feature_meta
145 CompositeExplicitAutograd: _has_same_storage_numel
180 CompositeExplicitAutograd: _assert_scalar
184 CompositeExplicitAutograd: _functional_assert_scalar
194 CompositeExplicitAutograd: _print
198 CompositeExplicitAutograd: sym_constrain_range
[all …]
README.md
315 - `CompositeExplicitAutograd` (previously known as `DefaultBackend`):
321 kernel to `CompositeExplicitAutograd` is equivalent to registering that
323 DispatchStub should NOT be registered as CompositeExplicitAutograd, as
327 Similar to CompositeExplicitAutograd, but this key should be used if:
331 We would like to distinguish between "ordinary" CompositeExplicitAutograd kernels
370 …ance or better numerical stability, you should register the kernel with `CompositeExplicitAutograd`
569 CompositeExplicitAutograd: kernel
589 CompositeExplicitAutograd: kernel
613 … alias keywords to the same op: alias keywords have precedence `CompositeExplicitAutograd > Compos…
614 …e.g. adding both `CompositeImplicitAutograd` and `CompositeExplicitAutograd` kernels for one op wi…
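The README.md excerpts above describe declaring a kernel under the `CompositeExplicitAutograd` alias key. A minimal, hypothetical `native_functions.yaml`-style entry (the op and kernel names here are invented for illustration) would look like:

```yaml
# One registration covers the inference path of every backend at once;
# autograd support (a derivative formula) must be provided separately.
- func: my_op(Tensor self) -> Tensor
  dispatch:
    CompositeExplicitAutograd: my_op_kernel
```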
/aosp_15_r20/external/pytorch/aten/src/ATen/core/dispatch/
OperatorEntry.cpp
272 // (2.2) Use kernel from DispatchKey::CompositeExplicitAutograd if available. in registerKernel()
277 … to its corresponding backend key or CompositeExplicitAutograd. See Note [CompositeExplici… in registerKernel()
281 …// A CompositeExplicitAutograd kernel prevents CompositeImplicitAutograd kernel being use… in registerKernel()
291 …// CompositeExplicitAutogradNonFunctional > CompositeExplicitAutograd > CompositeImplicitAutograd… in registerKernel()
292 // Note [CompositeExplicitAutograd and CompositeImplicitAutograd] in registerKernel()
293 …egistrations to both CompositeExplicitAutograd & CompositeImplicitAutograd & Autograd, from (2.2) … in registerKernel()
295 …// This is fine and in practice CompositeExplicitAutograd and CompositeImplicitAutograd shouldn'… in registerKernel()
311 // 2.2 Use CompositeExplicitAutograd kernel if available. in registerKernel()
313 …ispatchKey::Undefined || isIncludedInAlias(dispatch_key, DispatchKey::CompositeExplicitAutograd)) { in registerKernel()
314 …o default_backend_registration = getKernelForDispatchKey(DispatchKey::CompositeExplicitAutograd)) { in registerKernel()
[all …]
/aosp_15_r20/external/pytorch/torch/csrc/distributed/c10d/
Functional.cpp
313 c10::DispatchKey::CompositeExplicitAutograd, ::all_reduce), in TORCH_LIBRARY()
319 c10::DispatchKey::CompositeExplicitAutograd, ::all_reduce_), in TORCH_LIBRARY()
325 c10::DispatchKey::CompositeExplicitAutograd, ::all_reduce_coalesced), in TORCH_LIBRARY()
331 c10::DispatchKey::CompositeExplicitAutograd, ::all_reduce_coalesced_), in TORCH_LIBRARY()
337 c10::DispatchKey::CompositeExplicitAutograd, in TORCH_LIBRARY()
344 c10::DispatchKey::CompositeExplicitAutograd, in TORCH_LIBRARY()
351 c10::DispatchKey::CompositeExplicitAutograd, in TORCH_LIBRARY()
358 c10::DispatchKey::CompositeExplicitAutograd, ::reduce_scatter_tensor), in TORCH_LIBRARY()
364 c10::DispatchKey::CompositeExplicitAutograd, in TORCH_LIBRARY()
375 c10::DispatchKey::CompositeExplicitAutograd, ::all_to_all_single), in TORCH_LIBRARY()
[all …]
/aosp_15_r20/external/pytorch/torch/ao/quantization/fx/
_decomposed.py
50 @impl(quantized_decomposed_lib, "quantize_per_tensor", "CompositeExplicitAutograd")
111 quantized_decomposed_lib, "quantize_per_tensor.tensor", "CompositeExplicitAutograd"
168 quantized_decomposed_lib, "quantize_per_tensor.tensor2", "CompositeExplicitAutograd"
223 @impl(quantized_decomposed_lib, "dequantize_per_tensor", "CompositeExplicitAutograd")
299 "CompositeExplicitAutograd",
369 "CompositeExplicitAutograd",
425 @impl(quantized_decomposed_lib, "choose_qparams.tensor", "CompositeExplicitAutograd")
474 "CompositeExplicitAutograd",
557 @impl(quantized_decomposed_lib, "quantize_per_channel", "CompositeExplicitAutograd")
635 @impl(quantized_decomposed_lib, "dequantize_per_channel", "CompositeExplicitAutograd")
[all …]
/aosp_15_r20/external/executorch/backends/vulkan/_passes/
custom_ops_defs.py
23 lib.impl(name, prepack_impl, "CompositeExplicitAutograd")
65 lib.impl(name, conv_with_clamp_impl, "CompositeExplicitAutograd")
107 lib.impl(name, conv_with_clamp_out_impl, "CompositeExplicitAutograd")
134 lib.impl(name, grid_priors_impl, "CompositeExplicitAutograd")
153 lib.impl(name, grid_priors_out_impl, "CompositeExplicitAutograd")
184 lib.impl(name, linear_weight_int4_impl, "CompositeExplicitAutograd")
234 lib.impl(name, apply_rotary_emb_impl, "CompositeExplicitAutograd")
/aosp_15_r20/external/executorch/backends/vulkan/
custom_ops_lib.py
23 lib.impl(name, prepack_impl, "CompositeExplicitAutograd")
65 lib.impl(name, conv_with_clamp_impl, "CompositeExplicitAutograd")
107 lib.impl(name, conv_with_clamp_out_impl, "CompositeExplicitAutograd")
134 lib.impl(name, grid_priors_impl, "CompositeExplicitAutograd")
153 lib.impl(name, grid_priors_out_impl, "CompositeExplicitAutograd")
184 lib.impl(name, linear_weight_int4_impl, "CompositeExplicitAutograd")
234 lib.impl(name, apply_rotary_emb_impl, "CompositeExplicitAutograd")
/aosp_15_r20/external/pytorch/torch/
_python_dispatcher.py
27 - CompositeExplicitAutograd: alias key mapped to inference kernels of all backends like CPU, CUDA, …
31 - CompositeImplicitAutograd: alias key CompositeImplicitAutograd = CompositeExplicitAutograd + Auto…
35 you shouldn't register a CompositeImplicitAutograd or CompositeExplicitAutograd
67 "CompositeExplicitAutograd",
103 and "CompositeExplicitAutograd" in dispatchKeys
106 … "Registration to both CompositeImplicitAutograd and CompositeExplicitAutograd is not allowed."
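The `_python_dispatcher.py` snippets above describe `CompositeExplicitAutograd` as an alias key that maps to the inference kernels of all backends. A minimal sketch of such a registration via `torch.library` (assumes PyTorch is installed; the `demo_dispatch` namespace and the `my_add` op are invented here for illustration):

```python
import torch
from torch.library import Library

# Hypothetical library namespace and schema for this sketch.
lib = Library("demo_dispatch", "DEF")
lib.define("my_add(Tensor a, Tensor b) -> Tensor")

def my_add_impl(a, b):
    # Plain inference kernel; valid for any backend's tensors.
    return a + b

# A single registration under the CompositeExplicitAutograd alias key
# fills the inference slot for every backend (CPU, CUDA, ...). Autograd
# support is NOT implied and would need a separate Autograd registration.
lib.impl("my_add", my_add_impl, "CompositeExplicitAutograd")

out = torch.ops.demo_dispatch.my_add(torch.ones(2), torch.ones(2))
```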
library.py
583 # DispatchKeys are included in CompositeExplicitAutograd,
584 # not everything in CompositeExplicitAutograd is associated with a
586 return "CompositeExplicitAutograd"
679 device_types = "CompositeExplicitAutograd"
_ops.py
187 # 2.2 Use CompositeExplicitAutograd kernel if available
188 cand = DispatchKey.CompositeExplicitAutograd
195 ) or op.has_kernel_for_dispatch_key(DispatchKey.CompositeExplicitAutograd)
/aosp_15_r20/external/pytorch/tools/test/
test_codegen.py
392 CompositeExplicitAutograd: op
422 backend_metadata = self.backend_indices[DispatchKey.CompositeExplicitAutograd][
436 backend_metadata = self.backend_indices[DispatchKey.CompositeExplicitAutograd][
451 CompositeExplicitAutograd: op
459 dispatch_key = DispatchKey.CompositeExplicitAutograd
480 out, "return at::compositeexplicitautograd::op_out(out, self);"
497 out, "return at::compositeexplicitautograd::op_out(out, self);"
/aosp_15_r20/external/pytorch/test/
test_dispatch.py
707 "foo", "CompositeExplicitAutograd", debug="fn_defaultbackend"
720 CompositeExplicitAutograd[alias]: fn_defaultbackend :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
755 "foo", "CompositeExplicitAutograd", debug="fn_defaultbackend"
769 CompositeExplicitAutograd[alias]: fn_defaultbackend :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
809 "foo", "CompositeExplicitAutograd", debug="fn_defaultbackend"
824 CompositeExplicitAutograd[alias]: fn_defaultbackend :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
1049 ["CPU", "XLA", "Lazy", "CompositeExplicitAutograd", "AutogradCPU"]
1080 CompositeExplicitAutograd[alias] fn_CompositeExplicitAutograd
1129 … r"Registration to both CompositeImplicitAutograd and CompositeExplicitAutograd is not allowed",
1132 ["CompositeExplicitAutograd", "CompositeImplicitAutograd"]
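The `test_dispatch.py` matches above assert that registering one op to both alias keys is rejected. A sketch of that behavior using torch's `PythonDispatcher` simulator, which the quoted `_python_dispatcher.py` check backs (assumes PyTorch is installed):

```python
# PythonDispatcher is a teaching/simulation tool shipped with PyTorch;
# its register() takes a list of dispatch key names for one simulated op.
from torch._python_dispatcher import PythonDispatcher

dispatcher = PythonDispatcher()
try:
    # Registering both CompositeImplicitAutograd and
    # CompositeExplicitAutograd for the same op is disallowed.
    dispatcher.register(
        ["CompositeImplicitAutograd", "CompositeExplicitAutograd"]
    )
    rejected = False
except RuntimeError:
    rejected = True
```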
/aosp_15_r20/external/executorch/exir/passes/
_quant_patterns_and_replacements.py
80 @impl(quantized_decomposed_lib, "embedding_byte", "CompositeExplicitAutograd")
126 @impl(quantized_decomposed_lib, "embedding_byte.dtype", "CompositeExplicitAutograd")
196 @impl(quantized_decomposed_lib, "embedding_2bit", "CompositeExplicitAutograd")
250 @impl(quantized_decomposed_lib, "embedding_2bit.dtype", "CompositeExplicitAutograd")
328 @impl(quantized_decomposed_lib, "embedding_4bit", "CompositeExplicitAutograd")
380 @impl(quantized_decomposed_lib, "embedding_4bit.dtype", "CompositeExplicitAutograd")
/aosp_15_r20/external/pytorch/c10/core/
DispatchKeySet.cpp
7 // Alias key DispatchKey::CompositeExplicitAutograd maps to
41 // included in CompositeExplicitAutograd kernels. in isBackendDispatchKey()
78 case DispatchKey::CompositeExplicitAutograd: in getRuntimeDispatchKeySet()
98 case DispatchKey::CompositeExplicitAutograd: in runtimeDispatchKeySetHas()
DispatchKey.cpp
204 case DispatchKey::CompositeExplicitAutograd: in toString()
205 return "CompositeExplicitAutograd"; in toString()
376 {"CompositeExplicitAutograd", in parseDispatchKey()
377 c10::DispatchKey::CompositeExplicitAutograd}, in parseDispatchKey()
/aosp_15_r20/external/executorch/exir/dialects/backend/
_ops.py
34 1. The backend op contains either a CompositeExplicitAutograd or a meta kernel.
64 self._op.has_kernel_for_dispatch_key(DispatchKey.CompositeExplicitAutograd)
73 …), "A backend op must contain either CompositeExplicitAutograd or Meta or CompositeImplicitAutogra…
/aosp_15_r20/external/pytorch/torch/csrc/inductor/
inductor_ops.cpp
96 dispatch(c10::DispatchKey::CompositeExplicitAutograd, _mm_plus_mm), in TORCH_LIBRARY_FRAGMENT()
105 c10::DispatchKey::CompositeExplicitAutograd, _reinterpret_tensor), in TORCH_LIBRARY_FRAGMENT()
109 dispatch(c10::DispatchKey::CompositeExplicitAutograd, accumulate_grad_), in TORCH_LIBRARY_FRAGMENT()
/aosp_15_r20/external/executorch/exir/dialects/
_ops.py
30 …* If the backend op is registered with a CompositeExplicitAutograd (or Meta) kernel, once the g…
58 # we can't have both CompositeExplicitAutograd and CompositeImplicitAutograd kernel,
61 DispatchKey.CompositeExplicitAutograd,
/aosp_15_r20/external/executorch/examples/portable/custom_ops/
custom_ops_1.py
20 @impl(my_op_lib, "mul3", dispatch_key="CompositeExplicitAutograd")
31 @impl(my_op_lib, "mul3.out", dispatch_key="CompositeExplicitAutograd")
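The `custom_ops_1.py` result above registers `mul3` with `dispatch_key="CompositeExplicitAutograd"`. A self-contained sketch in the same style (assumes PyTorch is installed; the `my_ops` namespace is invented here, and the real file also registers a `mul3.out` overload, omitted in this sketch):

```python
import torch
from torch.library import Library, impl

# Hypothetical namespace; the AOSP file defines its own library object.
my_op_lib = Library("my_ops", "DEF")
my_op_lib.define("mul3(Tensor x) -> Tensor")

# A single CompositeExplicitAutograd registration serves the inference
# path of all backends.
@impl(my_op_lib, "mul3", dispatch_key="CompositeExplicitAutograd")
def mul3_impl(x):
    return x * 3

result = torch.ops.my_ops.mul3(torch.tensor([1.0, 2.0]))
```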
/aosp_15_r20/external/pytorch/torchgen/
gen_aoti_c_shim.py
329 elif backend_indices[DispatchKey.CompositeExplicitAutograd].has_kernel(func):
330 # We need to create C shim wrappers for CompositeExplicitAutograd kernels
331 backend_index = backend_indices[DispatchKey.CompositeExplicitAutograd]
model.py
130 CompositeExplicitAutograd = auto() variable in DispatchKey
288 DispatchKey.CompositeExplicitAutograd,
307 DispatchKey.CompositeExplicitAutograd,
799 f"expected {name} to have a CompositeExplicitAutograd "
811 if d == DispatchKey.CompositeExplicitAutograd
828 …"cannot specify more than one of CompositeExplicitAutograd, CompositeExplicitAutogradNonFunctional…
831 …"implementation, specify CompositeExplicitAutograd; otherwise specify CompositeImplicitAutograd on…
900 DispatchKey.CompositeExplicitAutograd in dispatch
/aosp_15_r20/external/executorch/extension/llm/custom_ops/
preprocess_custom_ops.py
23 @impl(preprocess_op_lib, "tile_crop", dispatch_key="CompositeExplicitAutograd")
43 @impl(preprocess_op_lib, "tile_crop.out", dispatch_key="CompositeExplicitAutograd")
/aosp_15_r20/external/pytorch/torch/csrc/autograd/
VariableTypeManual.cpp
333 // (3) CompositeExplicitAutograd kernels and additionally Autograd kernels
336 // kernels for Autograd, so we register them to both CompositeExplicitAutograd
341 // - Ops registered to CompositeImplicitAutograd or CompositeExplicitAutograd
/aosp_15_r20/external/pytorch/torch/_custom_op/
impl.py
339 # If the user's operator is CompositeExplicitAutograd,
341 # (existing custom ops may have CompositeExplicitAutograd
345 _C._dispatch_has_kernel_for_dispatch_key(self._qualname, "CompositeExplicitAutograd")
/aosp_15_r20/external/executorch/exir/tests/
test_passes.py
102 @impl(lib, "foo", "CompositeExplicitAutograd")
112 @impl(lib, "foo.out", "CompositeExplicitAutograd")
885 …# decorator registers this pattern as a CompositeExplicitAutograd kernel, since there's no kernel …
926 …# If not backend op retrace will error out because no CPU/CompositeExplicitAutograd kernel registe…
