Searched full:autogradnestedtensor (Results 1 – 13 of 13) sorted by relevance
132 case DispatchKey::AutogradNestedTensor: in toString()
133 return "AutogradNestedTensor"; in toString()
293 {"AutogradNestedTensor", c10::DispatchKey::AutogradNestedTensor}, in parseDispatchKey()
61 {DispatchKey::AutogradNestedTensor, DispatchKey::NestedTensor}) |
140 case DispatchKey::AutogradNestedTensor: in getBackendKeySetFromAutograd()
653 DispatchKey::AutogradNestedTensor,
771 DispatchKeySet(DispatchKey::AutogradNestedTensor);
348 AutogradNestedTensor, enumerator
155 "AutogradNestedTensor": {177 differentiability_info["AutogradNestedTensor"],
1474 AutogradNestedTensor:
1574 AutogradNestedTensor:
1594 AutogradNestedTensor:
1602 AutogradNestedTensor:
1655 AutogradNestedTensor:
1875 AutogradNestedTensor:
1933 AutogradNestedTensor:
2849 AutogradNestedTensor:
2934 AutogradNestedTensor:
2941 AutogradNestedTensor:
197 c10::DispatchKey::AutogradNestedTensor}; in generate_buffer_key_set()
82 ks = ks.add(DispatchKey.AutogradNestedTensor)
70 AUTOGRAD_KEYS = ["AutogradNestedTensor"] + [
110 AutogradNestedTensor = auto() variable in DispatchKey
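The two hits above show the key on the Python-side codegen model: it is declared as an enum member and prepended to the list of autograd keys. A minimal sketch of looking that member up by name, assuming these hits come from torchgen's DispatchKey enum (the listing does not show file names) and that torchgen ships with the torch install:

    # Minimal sketch: resolve the key by name on the Python-side DispatchKey enum.
    # Assumes the hits above are torchgen.model.DispatchKey (file names are not
    # shown in this listing).
    from torchgen.model import DispatchKey

    key = DispatchKey["AutogradNestedTensor"]   # standard Enum name lookup
    print(key.name)                             # -> AutogradNestedTensor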
709 DEF_ONE(AutogradNestedTensor) in initDispatchBindings()
1324 torch._C._dispatch_keys(x).has(DispatchKey.AutogradNestedTensor)
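The hit above is the runtime-facing check used in tests: ask a tensor for its dispatch key set and test for AutogradNestedTensor. A minimal sketch of the same check on a nested tensor, relying on the private torch._C._dispatch_keys binding and the DispatchKey enum exposed by the bindings shown above (private APIs, so subject to change):

    # Minimal sketch mirroring the test hit above (private bindings).
    import torch
    from torch._C import DispatchKey

    x = torch.nested.nested_tensor(
        [torch.randn(2, 3), torch.randn(4, 3)], requires_grad=True
    )
    ks = torch._C._dispatch_keys(x)
    print(ks.has(DispatchKey.AutogradNestedTensor))  # expected True, per the test above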
13870 # test registered AutogradNestedTensor formula
13889 # bogus gradient for AutogradNestedTensor is grad * grad
13904 # test registered AutogradNestedTensor formula
13923 # bogus gradient for AutogradNestedTensor is grad * grad + grad
7369 # By adding the AutogradNestedTensor this makes this function CompositeImplicit-like for nested ten…