Searched full:compositeimplicitautograd (Results 1 – 25 of 61) sorted by relevance

/aosp_15_r20/external/pytorch/aten/src/ATen/core/dispatch/
OperatorEntry.cpp
144 // Redirect catchAll registrations to CompositeImplicitAutograd. in registerKernel()
145 …patch_key.has_value() ? kernels_[*dispatch_key] : kernels_[DispatchKey::CompositeImplicitAutograd]; in registerKernel()
189 // Redirect catchAll deregistrations to CompositeImplicitAutograd. in registerKernel()
190 …DispatchKey dk = dispatch_key.has_value() ? *dispatch_key : DispatchKey::CompositeImplicitAutograd; in registerKernel()
275 // (2.3) Use kernel from DispatchKey::CompositeImplicitAutograd if available. in registerKernel()
276 …// For autograd keys, we only use kernel from CompositeImplicitAutograd when there's no d… in registerKernel()
277 …y or CompositeExplicitAutograd. See Note [CompositeExplicitAutograd and CompositeImplicitAutograd]. in registerKernel()
281 …// A CompositeExplicitAutograd kernel prevents CompositeImplicitAutograd kernel being use… in registerKernel()
291 …ositExplicitAutogradNonFunctional > CompositeExplicitAutograd > CompositeImplicitAutograd > Autogr… in registerKernel()
292 // Note [CompositeExplicitAutograd and CompositeImplicitAutograd] in registerKernel()
[all …]
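The precedence note quoted above (CompositeExplicitAutogradNonFunctional > CompositeExplicitAutograd > CompositeImplicitAutograd > Autograd backends) can be observed from Python via torch.library: an alias-key kernel fills every slot that lacks a more specific registration, while a direct backend kernel wins on its own backend. A minimal sketch, assuming a made-up `demo::my_op` operator:

```python
import torch
from torch.library import Library

# "demo" namespace and my_op are hypothetical, purely for illustration.
lib = Library("demo", "DEF")
lib.define("my_op(Tensor x) -> Tensor")

# Alias-key kernel: used wherever no more specific kernel is registered.
lib.impl("my_op", lambda x: x * 2, "CompositeImplicitAutograd")

# A direct backend registration takes precedence over the alias key on CPU.
lib.impl("my_op", lambda x: x * 3, "CPU")

print(torch.ops.demo.my_op(torch.ones(2)))  # tensor([3., 3.]) -- the CPU kernel wins
```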
/aosp_15_r20/external/pytorch/test/
test_dispatch.py
277 CompositeImplicitAutograd[alias]: impl_t_t :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
330 CompositeImplicitAutograd[alias]: default_def_name_t_t :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
374 CompositeImplicitAutograd[alias]: impl_t_t :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
406 CompositeImplicitAutograd[alias]: default_def_name_t_t :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
449 CompositeImplicitAutograd[alias]: default_def_name_t_t :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
480 lambda m: m.impl_t_t("foo", "CompositeImplicitAutograd"),
491 CompositeImplicitAutograd[alias]: impl_t_t :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
525 "foo", "CompositeImplicitAutograd", debug="fn_math"
538 CompositeImplicitAutograd[alias]: fn_math :: (Tensor _0) -> Tensor _0 [ boxed unboxed ]
599 # Now that catchAll maps to CompositeImplicitAutograd, registering to both
[all …]
/aosp_15_r20/external/pytorch/aten/src/ATen/native/
README.md
298 CompositeImplicitAutograd: func
305 CompositeImplicitAutograd: func_out
338 - `CompositeImplicitAutograd` (previously known as `Math`): implementations of
342 registering your kernel as `CompositeImplicitAutograd`. Explicitly adding
360 you can just register `my_op` to `CompositeImplicitAutograd` and both inference & autograd will jus…
368 1. If you can, always start with a `CompositeImplicitAutograd` kernel that's composable from existi…
369 2. If you don't want to use the derived gradient formula from `CompositeImplicitAutograd` kernel fo…
377 **Important**: because a `CompositeImplicitAutograd` kernel is implicitly registered for ops with n…
379 add a `CompositeImplicitAutograd:` entry that names the old kernel implementation (it's named after…
388 CompositeImplicitAutograd or a (Python or C++) function that consists of PyTorch
[all …]
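To make the README's first recommendation concrete: if the new kernel is written purely in terms of existing differentiable ops and registered to CompositeImplicitAutograd, both inference and autograd work with no extra gradient formula. A minimal sketch (the `demo2` namespace and op are assumptions, not real PyTorch operators):

```python
import torch
from torch.library import Library

lib = Library("demo2", "DEF")  # hypothetical namespace for illustration
lib.define("log1p_mul(Tensor x, Tensor y) -> Tensor")

def log1p_mul(x, y):
    # Composed entirely of existing differentiable ops, so the gradient
    # formula is derived automatically from the decomposition.
    return torch.log1p(x) * y

lib.impl("log1p_mul", log1p_mul, "CompositeImplicitAutograd")

x = torch.rand(3, requires_grad=True)
torch.ops.demo2.log1p_mul(x, torch.rand(3)).sum().backward()
print(x.grad)  # inference and autograd both just work
```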
native_functions.yaml
1367 CompositeImplicitAutograd: broadcast_to_symint
1466 CompositeImplicitAutograd: chunk
1472 CompositeImplicitAutograd: tensor_split_sections_symint
1477 CompositeImplicitAutograd: tensor_split_indices_symint
1699 CompositeImplicitAutograd: _convolution_mode_symint
1705 CompositeImplicitAutograd: conv1d_symint
1709 CompositeImplicitAutograd: conv2d_symint
1713 CompositeImplicitAutograd: conv3d_symint
1718 CompositeImplicitAutograd: conv1d_padding_symint
1723 CompositeImplicitAutograd: conv2d_padding_symint
[all …]
/aosp_15_r20/external/pytorch/torch/
_python_dispatcher.py
31 - CompositeImplicitAutograd: alias key CompositeImplicitAutograd = CompositeExplicitAutograd + Auto…
35 you shouldn't register a CompositeImplicitAutograd or CompositeExplicitAutograd
42 dispatcher.register(["CPU", "XLA", "CompositeImplicitAutograd"])
69 "CompositeImplicitAutograd",
102 "CompositeImplicitAutograd" in dispatchKeys
106 … "Registration to both CompositeImplicitAutograd and CompositeExplicitAutograd is not allowed."
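The `dispatcher.register` snippet above comes from a runnable toy: torch._python_dispatcher.PythonDispatcher simulates how the dispatch table is computed from a mix of backend keys and alias keys:

```python
from torch._python_dispatcher import PythonDispatcher

dispatcher = PythonDispatcher()
# Backend keys plus the CompositeImplicitAutograd alias key; adding
# CompositeExplicitAutograd as well would raise the "not allowed" error
# quoted above.
dispatcher.register(["CPU", "XLA", "CompositeImplicitAutograd"])
print(dispatcher.dispatchTable())  # which kernel each runtime key resolves to
```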
_ops.py
196 # 2.3. Use CompositeImplicitAutograd kernel if available
204 cand = DispatchKey.CompositeImplicitAutograd
745 dk = DispatchKey.CompositeImplicitAutograd
751 dk = DispatchKey.CompositeImplicitAutograd
754 # apply Python CompositeImplicitAutograd *before* tracing
/aosp_15_r20/external/pytorch/test/functorch/
test_vmap_registrations.py
252 get_registrations_for_dispatch_key("CompositeImplicitAutograd")
288 @dispatch_registrations("CompositeImplicitAutograd", xfail_functorch_batched)
293 … f"You've added a batching rule for a CompositeImplicitAutograd operator {registration}. "
295 "reuse the CompositeImplicitAutograd decomposition"
303 f"The registrations in BatchedDecompositions.cpp must be for CompositeImplicitAutograd "
304 f"operations. If your operation {registration} is not CompositeImplicitAutograd, "
309 "CompositeImplicitAutograd", xfail_not_implemented, filter_vmap_implementable
/aosp_15_r20/external/pytorch/torch/_custom_op/
impl.py
315 if _C._dispatch_has_kernel_for_dispatch_key(self._qualname, "CompositeImplicitAutograd"):
319 f"pre-existing registration to DispatchKey::CompositeImplicitAutograd."
320 f"CompositeImplicitAutograd operators do not need an autograd formula; "
351 # op is CompositeImplicitAutograd or some other alias dispatch key,
354 # Special case for CompositeImplicitAutograd
355 if _C._dispatch_has_kernel_for_dispatch_key(self._qualname, "CompositeImplicitAutograd"):
359 f"pre-existing registration to DispatchKey::CompositeImplicitAutograd."
360 f"CompositeImplicitAutograd operators do not need an abstract impl; "
/aosp_15_r20/external/pytorch/torch/_decomp/
decompositions.py
1173 @aten.dropout.default.py_impl(DispatchKey.CompositeImplicitAutograd)
1474 DispatchKey.CompositeImplicitAutograd
1903 @aten.native_batch_norm.default.py_impl(DispatchKey.CompositeImplicitAutograd)
1939 @aten.unsafe_chunk.default.py_impl(DispatchKey.CompositeImplicitAutograd)
2634 @aten.pad_sequence.default.py_impl(DispatchKey.CompositeImplicitAutograd)
2767 @aten.upsample_nearest1d.vec.py_impl(DispatchKey.CompositeImplicitAutograd)
2769 @aten.upsample_nearest2d.vec.py_impl(DispatchKey.CompositeImplicitAutograd)
2771 @aten.upsample_nearest3d.vec.py_impl(DispatchKey.CompositeImplicitAutograd)
2788 @aten._upsample_nearest_exact1d.vec.py_impl(DispatchKey.CompositeImplicitAutograd)
2790 @aten._upsample_nearest_exact2d.vec.py_impl(DispatchKey.CompositeImplicitAutograd)
[all …]
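The decorators above all follow one pattern: attach a Python decomposition at an overload's CompositeImplicitAutograd key with py_impl. A simplified sketch of that pattern (the kernel body is illustrative, not the exact decomposition from decompositions.py):

```python
import torch
from torch._C import DispatchKey

aten = torch.ops.aten

# Register a Python-level CompositeImplicitAutograd kernel on an existing
# ATen overload; the body below is a simplified stand-in.
@aten.dropout.default.py_impl(DispatchKey.CompositeImplicitAutograd)
def dropout(input, p, train):
    if train:
        return aten.native_dropout(input, p, train)[0]
    return input.clone()
```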
/aosp_15_r20/external/pytorch/torch/_refs/
__init__.py
637 # CompositeImplicitAutograd - don't register decomp
689 aten_op=None, # CompositeImplicitAutograd
740 aten_op=None, # CompositeImplicitAutograd
781 # CompositeImplicitAutograd - don't register decomp
861 # CompositeImplicitAutograd - don't register decomp
952 aten_op=None, # CompositeImplicitAutograd,
1223 # CompositeImplicitAutograd - don't register decomp
1467 # CompositeImplicitAutograd - don't register decomp
1749 aten_op=None, # CompositeImplicitAutograd
1783 aten_op=None, # CompositeImplicitAutograd
[all …]
/aosp_15_r20/external/pytorch/aten/src/ATen/core/boxing/
BoxedKernel.h
25 // dispatch table when there is both a CompositeImplicitAutograd kernel and a
35 // over CompositeImplicitAutograd.
42 // n CompositeImplicitAutograd takes precedence.
50 // decide whether or not to use the CompositeImplicitAutograd kernel or the
57 // but unimplemented backends would prefer CompositeImplicitAutograd. Rather
KernelFunction.cpp
24 …op.operator_name(), " has kernels registered to both CompositeImplicitAutograd and a backend mappe… in ambiguous_autogradother_kernel()
25 …ckend kernel unreachable; the dispatcher will always prefer the CompositeImplicitAutograd lowering… in ambiguous_autogradother_kernel()
27 … "If you want to override CompositeImplicitAutograd, please open an issue to request a dedicated " in ambiguous_autogradother_kernel()
/aosp_15_r20/external/pytorch/torchgen/
model.py
128 CompositeImplicitAutograd = auto() variable in DispatchKey
286 DispatchKey.CompositeImplicitAutograd,
309 DispatchKey.CompositeImplicitAutograd,
762 dispatch_key is DispatchKey.CompositeImplicitAutograd
780 or dispatch.keys() != {DispatchKey.CompositeImplicitAutograd}
781 or dispatch[DispatchKey.CompositeImplicitAutograd].supports_symint()
784 …f"unexpected name for singleton CompositeImplicitAutograd dispatch entry: expected {cpp.name(func)…
785 …f"but got {dispatch[DispatchKey.CompositeImplicitAutograd]}. Rename your implementation to the ex…
804 dispatch[DispatchKey.CompositeImplicitAutograd] = BackendMetadata(
813 or d == DispatchKey.CompositeImplicitAutograd
[all …]
/aosp_15_r20/external/pytorch/torch/_refs/linalg/
__init__.py
172 # CompositeImplicitAutograd
254 # CompositeImplicitAutograd
289 # CompositeImplicitAutograd
295 # CompositeImplicitAutograd
301 # CompositeImplicitAutograd
/aosp_15_r20/external/pytorch/torch/testing/_internal/optests/
autograd_registration.py
37 DispatchKey::CompositeImplicitAutograd
67 # CompositeImplicitAutograd or not an op) or if the user invokes
101 op.name(), "CompositeImplicitAutograd"
129 f"or registering your operator as CompositeImplicitAutograd. If you have "
/aosp_15_r20/external/executorch/exir/dialects/
_ops.py
58 # we can't have both CompositeExplicitAutograd and CompositeImplicitAutograd kernel,
59 # we can't have both Meta and CompositeImplicitAutograd kernel either.
62 DispatchKey.CompositeImplicitAutograd,
66 library.impl(opname, f, "CompositeImplicitAutograd")
/aosp_15_r20/external/pytorch/torch/export/
exported_program.py
149 kernel.name(), torch._C.DispatchKey.CompositeImplicitAutograd
153 torch._C.DispatchKey.CompositeImplicitAutograd, *args, **kwargs
178 # This function overrides CompositeImplicitAutograd decomp for
189 # replace their CompositeImplicitAutograd kernels with NotImplemented.
246 if torch._C.DispatchKey.CompositeImplicitAutograd in op_overload.py_kernels:
247 del op_overload.py_kernels[torch._C.DispatchKey.CompositeImplicitAutograd]
254 op_overload.py_impl(torch._C.DispatchKey.CompositeImplicitAutograd)(_)
281 # and their CompositeImplicitAutograd kernels will not become NotImplemented.
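The override described above can be sketched directly: remove an overload's Python CompositeImplicitAutograd kernel and install one that returns NotImplemented, so dispatch falls through to the underlying C++ kernel and the op is preserved rather than decomposed (the overload chosen here is an arbitrary example):

```python
import torch
from torch._C import DispatchKey

op = torch.ops.aten.upsample_nearest2d.vec  # arbitrary example overload

# Drop any existing Python-level CompositeImplicitAutograd kernel...
if DispatchKey.CompositeImplicitAutograd in op.py_kernels:
    del op.py_kernels[DispatchKey.CompositeImplicitAutograd]

# ...then register one that declines, so the dispatcher moves on to the
# C++ kernel instead of running the decomposition.
@op.py_impl(DispatchKey.CompositeImplicitAutograd)
def _(*args, **kwargs):
    return NotImplemented
```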
/aosp_15_r20/external/pytorch/c10/core/
DispatchKey.cpp
200 case DispatchKey::CompositeImplicitAutograd: in toString()
201 return "CompositeImplicitAutograd"; in toString()
372 {"CompositeImplicitAutograd", in parseDispatchKey()
373 c10::DispatchKey::CompositeImplicitAutograd}, in parseDispatchKey()
DispatchKeySet.cpp
46 // autograd_dispatch_keyset Alias key DispatchKey::CompositeImplicitAutograd
74 case DispatchKey::CompositeImplicitAutograd: in getRuntimeDispatchKeySet()
92 case DispatchKey::CompositeImplicitAutograd: in runtimeDispatchKeySetHas()
/aosp_15_r20/external/pytorch/torch/_library/
fake_impl.py
41 self.qualname, "CompositeImplicitAutograd"
47 f"DispatchKey::CompositeImplicitAutograd."
48 f"CompositeImplicitAutograd operators do not need an fake "
/aosp_15_r20/external/executorch/exir/serde/
upgrade.py
156 …<name>_<valid_from_ver>_<valid_till_ver>. Register upgraders as CompositeImplicitAutograd kernels.…
162 impl_lib.impl("div__Scalar_0_3", div__Scalar_0_3, "CompositeImplicitAutograd")
178 impl_lib.impl(name, locals()[name], "CompositeImplicitAutograd")
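A hedged sketch of the upgrader pattern above, following the <name>_<valid_from_ver>_<valid_till_ver> naming convention (the namespace, schema, and body are assumptions based on the snippet, not the real upgrader):

```python
import torch
from torch.library import Library

impl_lib = Library("upgraders", "DEF")  # namespace assumed for illustration
impl_lib.define("div__Scalar_0_3(Tensor self, Scalar other) -> Tensor")

def div__Scalar_0_3(self, other):
    # Re-express the old (valid for versions 0-3) semantics with current
    # ops; the real upgrader body lives in upgrade.py.
    return torch.div(self, other)

impl_lib.impl("div__Scalar_0_3", div__Scalar_0_3, "CompositeImplicitAutograd")
```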
/aosp_15_r20/external/pytorch/torch/_refs/nn/functional/
__init__.py
423 # CompositeImplicitAutograd - don't register decomp
438 # CompositeImplicitAutograd - don't register decomp
564 # CompositeImplicitAutograd - don't register decomp
622 # CompositeImplicitAutograd - don't register decomp
923 # CompositeImplicitAutograd - don't register decomp
1072 # CompositeImplicitAutograd - don't register decomp
/aosp_15_r20/external/pytorch/aten/src/ATen/core/op_registration/
op_registration_test.cpp
547 // catchAll now maps to CompositeImplicitAutograd which has in TEST()
1364 // CatchAll now maps to CompositeImplicitAutograd and has higher precedence than backend fallback. in TEST()
1386 …m.def("fn", torch::dispatch(c10::DispatchKey::CompositeImplicitAutograd, [&](const Tensor& x) { ma… in TEST()
1432 …m.def("fn", torch::dispatch(c10::DispatchKey::CompositeImplicitAutograd, [&](const Tensor& x) { ma… in TEST()
1438 // CompositeImplicitAutograd has higher precedence than Autograd in TEST()
1458 …m.def("fn", torch::dispatch(c10::DispatchKey::CompositeImplicitAutograd, [&](const Tensor& x) { ma… in TEST()
1464 …// catchAll now maps to CompositeImplicitAutograd, which means we have two registrations to Compos… in TEST()
1485 …m.def("fn", torch::dispatch(c10::DispatchKey::CompositeImplicitAutograd, [&](const Tensor& x) { ma… in TEST()
1524 …m.def("fn", torch::dispatch(c10::DispatchKey::CompositeImplicitAutograd, [&](const Tensor& x) { ma… in TEST()
1613 …m.impl("fn", c10::DispatchKey::CompositeImplicitAutograd, [&](const Tensor& x) { math_called = tru… in TEST()
[all …]
/aosp_15_r20/external/executorch/exir/passes/
dim_order_ops_registry.py
32 @impl(lib, "_to_dim_order_copy", "CompositeImplicitAutograd")
37 @impl(lib, "_to_dim_order_copy.out", "CompositeImplicitAutograd")
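The @impl decorator seen above is the decorator form of Library.impl. A minimal sketch with a made-up op (namespace and op name are assumptions):

```python
import torch
from torch.library import Library, impl

lib = Library("demo3", "DEF")  # hypothetical namespace
lib.define("identity_copy(Tensor x) -> Tensor")

# Decorator form of Library.impl, as used for _to_dim_order_copy above.
@impl(lib, "identity_copy", "CompositeImplicitAutograd")
def identity_copy(x):
    return x.clone()

print(torch.ops.demo3.identity_copy(torch.ones(2)))
```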
/aosp_15_r20/external/executorch/exir/dialects/backend/
_ops.py
67 self._op.has_kernel_for_dispatch_key(DispatchKey.CompositeImplicitAutograd)
73 …kend op must contain either CompositeExplicitAutograd or Meta or CompositeImplicitAutograd kernel."
