# Defines derivative formulas and Python signatures of methods on Variable
#
# Note about possibly confusing nomenclature: An 'output gradient' is the
# gradient of an output of a forward function. Output gradients are used as
# the inputs to backward functions. `grads` is a vector of output gradients,
# and `grad == grads[0]`, in all the derivative formulas in this file.
# An 'input gradient' is the gradient of an input to a forward function.
# Input gradients are the outputs of backward functions, corresponding to the
# input names included in the derivative formulas defined in this file.
# Also, every time we talk about computing a "gradient" we actually mean
# computing the vector-Jacobian product using the given 'output gradient' as
# the vector.
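#
# For example (an illustrative sketch, not an actual entry in this file),
# for an element-wise product out = a * b, the entries would compute the
# vector-Jacobian products grad * b.conj() (for a) and grad * a.conj()
# (for b):
#
#   - name: my_mul(Tensor a, Tensor b) -> Tensor
#     a: grad * b.conj()
#     b: grad * a.conj()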
#
# Each entry consists of (see the illustrative sketch after this list):
#   - A 'name', which specifies the ATen name of the function you
#     are defining derivatives for, and an argument specification.
#   - An optional 'dispatch' entry which can be used to specify
#     per-autograd dispatch key derivatives. If this entry is not
#     specified, then the gradient entries will be taken as the
#     default gradients (i.e. registered for every backward dispatch
#     key). (see _test_autograd_multiple_dispatch for an example
#     of how to register separate derivatives for different dispatch keys).
#     The list of allowed dispatch keys (in addition to 'Default', which
#     represents the Autograd alias key) is torchgen/model.py:AUTOGRAD_KEYS.
#   - One or more gradients entries, mapping each differentiable input
#     name to a formula specifying how to compute its gradient.
#     Note that a single gradient entry can specify the gradient
#     formula for multiple input names, by specifying a key
#     "input1, input2" (see atan2 for an example).
#   - An argument can be flagged as 'non_differentiable'.
#   - An optional entry with key 'output_differentiability' and, as its value,
#     a list of the same length as the number of outputs from the forward
#     function. The list should contain only booleans, specifying whether each
#     of the output Tensors is differentiable.
#     If it is not specified for a function that returns multiple elements but
#     uses `grad` instead of `grads[idx]`, then all but the first output will
#     be marked as non-differentiable.
#     If none of the outputs is differentiable, you can also add the function
#     name to `gen_variable_type.py`'s `DONT_REQUIRE_DERIVATIVE` list.
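#
#   Putting these pieces together, a hypothetical entry (the op and its
#   formulas are made up for illustration) could look like:
#
#     - name: my_op(Tensor self, Tensor other) -> (Tensor out0, Tensor out1)
#       output_differentiability: [True, False]
#       self: grad * other.conj()
#       other: grad * self.conj()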
#
# There are two cases for Tensor and TensorList arguments here (a combined
# sketch follows this list):
#   - If the argument is differentiable, in the sense that a gradient with
#     respect to it could exist, you should either:
#       - Specify the formula for that gradient, or
#       - Specify not_implemented("function_name") as a formula to say that this is not
#         implemented yet (but might be in the future, and the user can request it in an issue).
#   - If the argument is not differentiable (because, for example, it is not a
#     floating point dtype, or the function is not differentiable with respect
#     to it), you should either:
#       - Not specify any formula for this argument, or
#       - Specify explicitly that this argument is "non_differentiable". Note that in this case,
#         we trust you that this argument will never have requires_grad=True, and it will be
#         silently ignored if it does.
#
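# For instance, a sketch of a hypothetical entry mixing these options
# (`my_index_add` and its schema are made up; `index` is an integer tensor):
#
#   - name: my_index_add(Tensor self, Tensor index, Tensor src) -> Tensor
#     self: grad
#     index: non_differentiable
#     src: not_implemented("my_index_add wrt src")
#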
# If a function has out-of-place and in-place variants, then the derivative
# definition for the in-place variant is optional. It will default to the
# definition for the out-of-place variant (for example, `abs_` needs no entry
# of its own; it reuses the formulas defined for `abs` below). Note that _out
# variants are never differentiable.
#
# Gradient expressions are standard C++ expressions operating on ATen
# variables.  In a gradient expression, the following variables/functions
# are in scope:
#
#   - 'grad', the gradient of the output (often spelled grad_output
#     in Python) which we are going to left-multiply.
#
#     When a function returns multiple *differentiable* outputs,
#     you can refer to the gradient of each output using 'grads',
#     e.g., 'grads[0]', 'grads[1]'.
#
#     When a function returns multiple *differentiable* outputs that
#     are named, you can refer to the gradient of each output using
#     'grad_{name}', e.g., 'grad_x', 'grad_y'.
#
#     When a function returns *one* differentiable output (the
#     first output) and some more nondifferentiable outputs,
#     you MUST refer to the gradient of the differentiable output with
#     'grad' (this case is special-cased in our code generation).
#
#     Note that the number of differentiable outputs can be modified by the
#     'output_differentiability' entry (see above).
#
#     Across a differentiable function's derivatives set, it is not
#     permitted to mix the use of "grad", "grads", and
#     "grad_{name}". You must be consistent for that differentiable
#     function.
#
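#     For example, a sketch using 'grad_{name}' for a hypothetical op that
#     splits `self` into two halves along dim 0 (see _linalg_slogdet below
#     for a real use of named gradients):
#
#       - name: my_split(Tensor self) -> (Tensor x, Tensor y)
#         self: at::cat({grad_x, grad_y}, 0)
#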
#   - Any of the input arguments, tensor or non-tensor, including
#     argument names that only appear in Declarations.yaml, e.g. 'output'.
#
#   - 'result', representing the result of evaluating the forward
#     expression for ATen native function declarations. If the forward
#     expression outputs a tuple, use 'resultX' instead to access the
#     X-th entry.
#
#   - 'grad_input_mask', a std::array<bool, n>, specifies which input
#     gradients are actually needed.  For example, in the entry
#     `input0, input1: foo(grad_input_mask)`, `grad_input_mask` is a size
#     two array, where `grad_input_mask[0]` is true if `input0` requires
#     grad, and `grad_input_mask[1]` is true if `input1` requires grad.
#
#     (NB: if your function computes the gradient for a list of tensors,
#     `grad_input_mask` will only have a single entry for the list,
#     specifying whether at least one tensor from the list requires
#     grad. If we want to support more fine-grained signalling,
#     we'll need some alternate variable which is not a std::array.)
#
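#     For example, a sketch of a combined entry whose backward helper can
#     skip work for inputs that don't need gradients (the helper name is
#     hypothetical; atan2 below uses this pattern for real):
#
#       - name: my_op(Tensor a, Tensor b) -> Tensor
#         a, b: my_op_backward(grad, a, b, grad_input_mask)
#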
#   - 'retain_variables', a bool which is true if a user has specified
#     that saved variables should be retained in case the backwards is
#     run again later.  This allows an optimization where we can
#     destroy saved buffers if we know variables are not going to be retained,
#     e.g., it is used by _cudnn_rnn.
#
#   - `wrap_opt_if` is a 2-argument function that accepts a tensor
#     variable and a boolean condition that dictates whether to save that
#     variable in a graph. The result of this function is `c10::optional<Tensor>`,
#     and it is `c10::nullopt` when the condition evaluates to `false`,
#     otherwise it is the variable wrapped in `c10::optional<Tensor>`.
#     For example, wrap_opt_if(var_0, grad_input_mask[1] || grad_input_mask[2])
#     would mean that `var_0` is saved as long as the second (grad_input_mask[1])
#     or the third (grad_input_mask[2]) argument requires gradients.
#     Another way to read this expression is that `var_0` is needed
#     in the backward computation of the second or the third argument.
#     NOTE: the usage of `var_i.requires_grad()` in the conditional expression
#     is not supported; use `grad_input_mask[i]` instead.
#     NOTE: `wrap_opt_if` could be used to prevent saving redundant variables
#     with multi-output backward formulas.
#     See https://github.com/pytorch/pytorch/issues/97575 for more details
#     on the issue.
#
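#     For example, a sketch of how this appears in a formula (the op and its
#     backward helper are hypothetical; here `b` is saved only if the first
#     input requires grad):
#
#       - name: my_op(Tensor a, Tensor b, Tensor c) -> Tensor
#         a, b, c: my_op_backward(grad, a, wrap_opt_if(b, grad_input_mask[0]), c, grad_input_mask)
#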
# If you need a complex expression, e.g., with local variables,
# write a _backward function in torch/csrc/autograd/FunctionsManual.cpp
# and invoke it from here.  By the way, go read
# https://github.com/zdevito/ATen/issues/163; this describes an
# important hazard that occurs when porting backwards from Python to C++.
#
# Double backwards gradient expressions can be somewhat confusing;
# the most important thing to remember is: (1) you need to define a
# derivative formula for every input, including inputs named things
# like 'grad_output', and (2) the gradient to multiply with is always
# called 'grad' (even though it really is a grad-grad).
#
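# For example, a sketch of a double-backward entry for a hypothetical
# backward op computing out = grad_output * self (see native_dropout_backward
# below for a real one):
#
#   - name: my_op_backward(Tensor grad_output, Tensor self) -> Tensor
#     grad_output: grad * self
#     self: grad * grad_output
#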
# You can also add a forward derivative definition by defining a formula for
# a returned value (in general "result" if the name is not specified). This
# formula works the same way as the backward one, and advanced implementations
# should also be placed in the FunctionsManual file.
# This formula should compute a single Jacobian-vector product using the (primal)
# value of the argument "foo_p", its forward grad "foo_t", and the result of the
# function as "result".
# Note that the forward derivative can be automatically generated in two cases:
#     - if your function is linear (NOT affine or multi-linear), then you can
#       say so by just using the string "auto_linear" for the formula.
#     - if your function is applied element-wise (and has a single input), you
#       can say so by just using the string "auto_element_wise" for the formula.
#
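# For example, continuing the my_mul sketch from above, the forward
# derivative is the product rule written in terms of primals and tangents
# (see the add.Tensor entry below for a real forward formula):
#
#   - name: my_mul(Tensor a, Tensor b) -> Tensor
#     result: a_t * b_p + a_p * b_t
#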
# Note that to avoid unpacking overhead, functions taking TensorList as inputs
# will always have their forward grad formula called. This formula is responsible
# for checking whether any computation is needed, and should return an undefined
# Tensor when there is nothing to do. You can check "cat_forward" for a full example.
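#
# For example, a sketch of the shape such a formula takes (the op and the
# jvp helper are hypothetical; the helper would return an undefined Tensor
# when no input has a forward grad):
#
#   - name: my_cat(Tensor[] tensors, int dim=0) -> Tensor
#     result: my_cat_jvp(tensors, dim)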
#
# NB: There are a number of gradient definitions in here which are bogus
# (implemented using zeros_like).  These gradients are (hopefully) not
# used by our frontend.  You MUST check the frontend code; search for
# OpName.apply to see if it's still using a legacy Python style API.
#
# Note: Returning views.
# The following cases exist:
#     - If a function returns no view, it can have arbitrary outputs.
#     - If a function returns at least one Tensor that is a differentiable view
#       of one of its inputs:
#         - If there is only one differentiable output, this Tensor is marked as a
#           differentiable view (alias or transpose, for example).
#         - If there is more than one differentiable output, by default all the views are
#           marked as differentiable views and created with allow_rebase_history=false,
#           meaning that any in-place operation on them will raise an error
#           (unbind, for example).
#
#  Notes about undefined output gradients:
#     All backward functions must support all combinations of undefined output
#     gradient Tensors, where `grads[i].defined() == false`. Depending on the
#     number of input and output grads your derivative formula uses, code
#     generation may automatically add some level of undefined grad support,
#     according to these three cases:
#
#       * 1 input grad and 1 output grad:
#           Complete undefined grad support is automatically added, so you
#           shouldn't have to think about it, unless there is a bug in the code
#           generation.
#
#       * 1 input grad and multiple output grads:
#           Undefined grad support is automatically added ONLY in the case where
#           all output grads are undefined. You will have to add explicit support
#           for cases where a subset of output grads is undefined.
#
#       * multiple input grads:
#           No automatic support, so you will need to add it.
#
#     If your derivative formula uses more than one output grad, it is usually
#     preferable to add undefined grad support in the backward function itself
#     (if you're using one), rather than in the derivative formula in this file.
#
#     Undefined Tensors are created with the default constructor `at::Tensor()`.
#     It is an efficient way to represent a Tensor filled with zeros because
#     the Tensor holds no sizing information and no Storage data is allocated.
#     But consequently, Tensor operations cannot be performed on them.
#     Therefore, your backward function should treat an undefined output grad as
#     a zero, handling it as a special case.
#
#     If all output grads are undefined, then it should be correct for the
#     backward function to return undefined input grads. Since we use the chain
#     rule, output grads equal to zero should result in input grads equal to zero,
#     unless there is some rare special case.
#
#     If a subset of output grads is undefined, then it may be acceptable for
#     the backward function to return undefined input grads--it depends on the
#     specific function, so you'll have to determine that yourself. If returning
#     an undefined Tensor is correct for a given input grad, it is also logically
#     correct to return a defined grad full of zeros, but that would not be
#     preferable since it would be less efficient.
#
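#     For example, a formula can guard against an undefined grad explicitly
#     with a ternary (a sketch; conv_tbc below uses this pattern):
#
#       - name: my_op(Tensor self) -> Tensor
#         self: "grad.defined() ? grad * 2 : Tensor()"
#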
# NB: The parameter names here MUST be consistent with the parameter names
# in native_functions.yaml
- name: abs(Tensor self) -> Tensor
  self: grad * self.sgn()
  result: handle_r_to_c(result.scalar_type(), self_t.conj() * self_p.sgn())

- name: acos(Tensor self) -> Tensor
  self: grad * -((-self * self + 1).rsqrt()).conj()
  result: auto_element_wise

- name: add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
  self: handle_r_to_c(self.scalar_type(), grad)
  other: handle_r_to_c(other.scalar_type(), maybe_multiply(grad, alpha.conj()))
  result: self_t + maybe_multiply(other_t, alpha)

- name: add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> Tensor
  self: handle_r_to_c(self.scalar_type(), grad)
  result: self_t.clone()

- name: addbmm(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> Tensor
  self: maybe_multiply(grad, beta.conj())
  batch1: maybe_multiply(grad.unsqueeze(0).expand_symint({ batch1.sym_size(0), batch1.sym_size(1), batch2.sym_size(2) }).bmm(batch2.transpose(1, 2).conj()), alpha.conj())
  batch2: maybe_multiply(batch1.transpose(1, 2).conj().bmm(grad.unsqueeze(0).expand_symint({ batch1.sym_size(0), batch1.sym_size(1), batch2.sym_size(2) })), alpha.conj())
  result: maybe_multiply(self_t, beta) + maybe_multiply(batch1_t.bmm(batch2_p).sum(0), alpha) + maybe_multiply(batch1_p.bmm(batch2_t).sum(0), alpha)

- name: addcdiv(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> Tensor
  self: handle_r_to_c(self.scalar_type(), grad)
  tensor1: handle_r_to_c(tensor1.scalar_type(), grad * (value / tensor2).conj())
  tensor2: handle_r_to_c(tensor2.scalar_type(), -grad * (value * tensor1 / (tensor2 * tensor2)).conj())
  result: self_t + maybe_multiply(tensor1_t / tensor2_p, value) - maybe_multiply(tensor2_t * (tensor1_p / tensor2_p) / tensor2_p, value)

- name: addcmul(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> Tensor
  self: handle_r_to_c(self.scalar_type(), grad)
  tensor1: handle_r_to_c(tensor1.scalar_type(), grad * (tensor2 * value).conj())
  tensor2: handle_r_to_c(tensor2.scalar_type(), grad * (tensor1 * value).conj())
  result: self_t + maybe_multiply(tensor1_t * tensor2_p, value) + maybe_multiply(tensor2_t * tensor1_p, value)

- name: addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> Tensor
  self: maybe_multiply(grad, beta.conj())
  mat1: mm_mat1_backward(grad, mat2, mat1.sym_sizes(), mat1.sym_strides(), mat1.layout(), alpha)
  mat2: mm_mat2_backward(grad, mat1, mat2.sym_sizes(), mat2.sym_strides(), mat2.layout(), alpha)
  result: maybe_multiply(self_t, beta) + maybe_multiply(mat1_t.mm(mat2_p), alpha) + maybe_multiply(mat1_p.mm(mat2_t), alpha)

- name: _sparse_addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> Tensor
  self: maybe_multiply(grad, beta)
  mat1: mm_mat1_sparse_backward(grad, mat1, mat2, alpha)
  mat2: mm_mat2_backward(grad, mat1, mat2.sym_sizes(), mat2.sym_strides(), mat2.layout(), alpha)

- name: addmv(Tensor self, Tensor mat, Tensor vec, *, Scalar beta=1, Scalar alpha=1) -> Tensor
  self: maybe_multiply(grad, beta.conj())
  mat: maybe_multiply(grad.ger(vec.conj()), alpha.conj())
  vec: maybe_multiply(mat.t().conj().mv(grad), alpha.conj())
  result: maybe_multiply(self_t, beta) + maybe_multiply(mat_t.mv(vec_p), alpha) + maybe_multiply(mat_p.mv(vec_t), alpha)

- name: addr(Tensor self, Tensor vec1, Tensor vec2, *, Scalar beta=1, Scalar alpha=1) -> Tensor
  self: maybe_multiply(grad, beta.conj())
  vec1: maybe_multiply(grad.mv(vec2.conj()), alpha.conj())
  vec2: maybe_multiply(grad.t().mv(vec1.conj()), alpha.conj())
  result: maybe_multiply(self_t, beta) + maybe_multiply(vec1_t.outer(vec2_p), alpha) + maybe_multiply(vec1_p.outer(vec2_t), alpha)

- name: affine_grid_generator(Tensor theta, SymInt[] size, bool align_corners) -> Tensor
  theta: affine_grid_generator_backward_symint(grad, size, align_corners)

- name: alias(Tensor(a) self) -> Tensor(a)
  self: grad
  result: self_t

- name: angle(Tensor self) -> Tensor
  self: angle_backward(grad, self)
  result: handle_r_to_c(result.scalar_type(), angle_backward(self_t.conj(), self_p).conj())

# The items below are necessary because TensorIterator doesn't work on
# Variables (codegen does not unwrap the input Tensor for all() and any()).
- name: any(Tensor self) -> Tensor
  output_differentiability: [False]

- name: any.dim(Tensor self, int dim, bool keepdim=False) -> Tensor
  output_differentiability: [False]

- name: any.dims(Tensor self, int[]? dim=None, bool keepdim=False) -> Tensor
  output_differentiability: [False]

- name: _is_all_true(Tensor self) -> Tensor
  self: non_differentiable

- name: _is_any_true(Tensor self) -> Tensor
  self: non_differentiable

- name: all(Tensor self) -> Tensor
  output_differentiability: [False]

- name: all.dim(Tensor self, int dim, bool keepdim=False) -> Tensor
  output_differentiability: [False]

- name: all.dims(Tensor self, int[]? dim=None, bool keepdim=False) -> Tensor
  output_differentiability: [False]

- name: acosh(Tensor self) -> Tensor
# Save one rsqrt in the real case by using the fact that, for real positive x
# and y, sqrt(x*y) = sqrt(x)*sqrt(y) (not true in the complex case).
  self: "self.is_complex() ? grad * ((self + 1).rsqrt() * (self - 1).rsqrt()).conj() : grad * (self * self - 1).rsqrt()"
  result: auto_element_wise

- name: acosh_(Tensor(a!) self) -> Tensor(a!)
  self: not_implemented("inplace version of acosh")

- name: asinh(Tensor self) -> Tensor
  self: grad * (self.pow(2) + 1).rsqrt().conj()
  result: auto_element_wise

- name: asinh_(Tensor(a!) self) -> Tensor(a!)
  self: not_implemented("inplace version of asinh")

- name: atanh(Tensor self) -> Tensor
  self: grad * 1 / (1 - self.pow(2)).conj()
  result: auto_element_wise

- name: atanh_(Tensor(a!) self) -> Tensor(a!)
  self: not_implemented("inplace version of atanh")

- name: as_strided(Tensor(a) self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor(a)
  self: as_strided_backward(grad, TensorGeometry(self), size, stride, storage_offset)
  result: auto_linear

- name: as_strided_(Tensor(a!) self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor(a!)
  self: as_strided_backward(grad, TensorGeometry(self), size, stride, storage_offset)
  result: auto_linear

- name: asin(Tensor self) -> Tensor
  self: grad * (-self * self + 1).rsqrt().conj()
  result: auto_element_wise

- name: atan(Tensor self) -> Tensor
  self: grad / (self * self + 1).conj()
  result: auto_element_wise

- name: atan2(Tensor self, Tensor other) -> Tensor
  self, other: atan2_backward(grad, self, other, grad_input_mask)
  result: (-self_p * other_t + other_p * self_t) / (self_p.pow(2) + other_p.pow(2))

- name: baddbmm(Tensor self, Tensor batch1, Tensor batch2, *, Scalar beta=1, Scalar alpha=1) -> Tensor
  self: maybe_multiply(grad, beta.conj())
  batch1: maybe_multiply(grad.bmm(batch2.transpose(1, 2).conj()), alpha.conj())
  batch2: maybe_multiply(batch1.transpose(1, 2).conj().bmm(grad), alpha.conj())
  result: maybe_multiply(self_t, beta) + maybe_multiply(batch1_t.bmm(batch2_p), alpha) + maybe_multiply(batch1_p.bmm(batch2_t), alpha)

- name: bernoulli(Tensor self, *, Generator? generator=None) -> Tensor
  self: zeros_like(grad)
  result: auto_element_wise

- name: bernoulli_.Tensor(Tensor(a!) self, Tensor p, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  p: zeros_like(p)
  result: self_t.zero_()

- name: bernoulli_.float(Tensor(a!) self, float p=0.5, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: bmm(Tensor self, Tensor mat2) -> Tensor
  self: grad.bmm(mat2.transpose(1, 2).conj())
  mat2: self.transpose(1, 2).conj().bmm(grad)
  result: self_t.bmm(mat2_p) + self_p.bmm(mat2_t)

- name: matmul(Tensor self, Tensor other) -> Tensor
  self, other: matmul_backward(grad, self, other, grad_input_mask)

- name: cat(Tensor[] tensors, int dim=0) -> Tensor
  tensors: cat_tensors_backward(grad, to_args_sizes_symint(tensors), to_args_scalartypes(tensors), dim)
  result: cat_jvp(tensors, dim)

- name: cauchy_(Tensor(a!) self, float median=0, float sigma=1, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: ceil(Tensor self) -> Tensor
  self: zeros_like(grad)
  result: auto_element_wise

- name: cholesky(Tensor self, bool upper=False) -> Tensor
  self: cholesky_backward(grad, upper, result)

- name: linalg_cholesky_ex(Tensor self, *, bool upper=False, bool check_errors=False) -> (Tensor L, Tensor info)
  self: cholesky_backward(grad, upper, L)
  L: cholesky_jvp(self_t, L, upper)

- name: cholesky_solve(Tensor self, Tensor input2, bool upper=False) -> Tensor
  self, input2: cholesky_solve_backward(grad, self, input2, result, upper, grad_input_mask)
  result: cholesky_solve_jvp(result, input2_p, input2_t, self_t, upper)

- name: cholesky_inverse(Tensor self, bool upper=False) -> Tensor
  self: cholesky_inverse_backward(grad, self, upper, result)
  result: cholesky_inverse_jvp(self_p, self_t, result, upper)

# For clamp, gradient is not defined at the boundaries. But empirically it's helpful
# to be able to get gradient on min and max, so we return the subgradient 1 for these cases.
- name: clamp.Tensor(Tensor self, Tensor? min=None, Tensor? max=None) -> Tensor
  self: clamp_backward(grad, self, min, max)
  min, max: clamp_backward_min_max(grad, self, min, max, grad_input_mask)
  result: clamp_jvp(self_p, self_t, min_p, min_t, max_p, max_t)

- name: clamp(Tensor self, Scalar? min=None, Scalar? max=None) -> Tensor
  self: clamp_backward(grad, self, min, max)
  result: auto_element_wise

- name: clamp_min(Tensor self, Scalar min) -> Tensor
  self: where(self >= min, grad, at::scalar_tensor(0., grad.options()))
  result: auto_element_wise

- name: clamp_min.Tensor(Tensor self, Tensor min) -> Tensor
  self: where(self >= min, grad, at::scalar_tensor(0., grad.options()))
  min: where(self < min, grad, at::scalar_tensor(0., grad.options()))
  result: where(self_p >= min_p, self_t, min_t)

- name: clamp_max(Tensor self, Scalar max) -> Tensor
  self: where(self <= max, grad, at::scalar_tensor(0., grad.options()))
  result: auto_element_wise

- name: clamp_max.Tensor(Tensor self, Tensor max) -> Tensor
  self: where(self <= max, grad, at::scalar_tensor(0., grad.options()))
  max: where(self > max, grad, at::scalar_tensor(0., grad.options()))
  result: where(self_p <= max_p, self_t, max_t)

- name: clone(Tensor self, *, MemoryFormat? memory_format=None) -> Tensor
  self: grad
  result: auto_linear

- name: _lazy_clone(Tensor self) -> Tensor
  self: grad
  result: auto_linear

- name: _to_copy(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, bool non_blocking=False, MemoryFormat? memory_format=None) -> Tensor
  self: _to_copy_backward(grad, self.options())
  result: _to_copy(self_t, dtype, layout, device, pin_memory, non_blocking, memory_format)
  # The condition is: if dtype is not nullopt, then isDifferentiableType(*dtype)
  # (If dtype IS nullopt, we rely on the regular check that any input requires grad).
  output_differentiability: ["!dtype || isDifferentiableType(*dtype)"]

- name: _coalesce(Tensor self) -> Tensor
  self: grad

- name: complex(Tensor real, Tensor imag) -> Tensor
  real: at::real(grad)
  imag: at::imag(grad)
  result: at::complex(real_t, imag_t)

- name: polar(Tensor abs, Tensor angle) -> Tensor
  abs, angle: polar_backward(grad, result)
  result: at::complex(abs_t*angle_p.cos() - angle_t*abs_p*angle_p.sin(), abs_t*angle_p.sin() + angle_t*abs_p*angle_p.cos())

- name: _conj(Tensor(a) self) -> Tensor(a)
  self: grad.conj()
  result: self_t.conj()

- name: _neg_view(Tensor(a) self) -> Tensor(a)
  self: grad.neg()
  result: self_t._neg_view()

- name: _conj_physical(Tensor self) -> Tensor
  self: grad.conj_physical()
  result: self_t.conj_physical()

- name: conj_physical_(Tensor(a!) self) -> Tensor(a!)
  self: grad.conj_physical()
  result: self_t.conj_physical_()

- name: copysign.Tensor(Tensor self, Tensor other) -> Tensor
  self: copysign_tensor_self_backward(grad, self, result)
  other: zeros_like(other)
  result: copysign_tensor_self_backward(self_t, self_p, result)

- name: copysign.Scalar(Tensor self, Scalar other) -> Tensor
  self: copysign_tensor_self_backward(grad, self, result)
  result: auto_element_wise

- name: cos(Tensor self) -> Tensor
  self: grad * -self.sin().conj()
  result: auto_element_wise

- name: cosh(Tensor self) -> Tensor
  self: grad * self.sinh().conj()
  result: auto_element_wise

- name: count_nonzero.dim_IntList(Tensor self, int[] dim) -> Tensor
  output_differentiability: [False]

- name: count_nonzero(Tensor self, int? dim=None) -> Tensor
  output_differentiability: [False]

- name: linalg_cross(Tensor self, Tensor other, *, int dim=-1) -> Tensor
  self: at::linalg_cross(other.conj(), grad, dim)
  other: at::linalg_cross(grad, self.conj(), dim)
  result: "at::linalg_cross(self_t, other_p, dim) + at::linalg_cross(self_p, other_t, dim)"

- name: logcumsumexp(Tensor self, int dim) -> Tensor
  self: logcumsumexp_backward(grad, self, result, dim)
  result: logcumsumexp_jvp(self_p, self_t, dim)

- name: cumprod(Tensor self, int dim, *, ScalarType? dtype=None) -> Tensor
  self: cumprod_backward(grad.to(self.scalar_type()), self, dim, result)
  result: "cumprod_jvp(self_t, self_p, result, dim).to(dtype.has_value() ? *dtype : self_p.scalar_type())"

- name: cumsum(Tensor self, int dim, *, ScalarType? dtype=None) -> Tensor
  self: cumsum_backward(grad.to(self.scalar_type()), dim)
  result: auto_linear

- name: cummax(Tensor self, int dim) -> (Tensor values, Tensor indices)
  self: cummaxmin_backward(grad, self, indices, dim)
  values: self_t.gather(dim, indices)

- name: cummin(Tensor self, int dim) -> (Tensor values, Tensor indices)
  self: cummaxmin_backward(grad, self, indices, dim)
  values: self_t.gather(dim, indices)

- name: conv_tbc(Tensor self, Tensor weight, Tensor bias, int pad=0) -> Tensor
  self, weight, bias: "grad.defined() ? conv_tbc_backward(grad, self, weight, bias, pad) : std::tuple<Tensor, Tensor, Tensor>()"

535*da0073e9SAndroid Build Coastguard Worker- name: _ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank=0, bool zero_infinity=False) -> (Tensor, Tensor)
536*da0073e9SAndroid Build Coastguard Worker  log_probs: _ctc_loss_backward(grad, log_probs, targets, input_lengths, target_lengths, result0, result1, blank, zero_infinity)
537*da0073e9SAndroid Build Coastguard Worker
538*da0073e9SAndroid Build Coastguard Worker- name: _ctc_loss.Tensor(Tensor log_probs, Tensor targets, Tensor input_lengths, Tensor target_lengths, int blank=0, bool zero_infinity=False) -> (Tensor, Tensor)
539*da0073e9SAndroid Build Coastguard Worker  log_probs: _ctc_loss_backward(grad, log_probs, targets, input_lengths, target_lengths, result0, result1, blank, zero_infinity)
540*da0073e9SAndroid Build Coastguard Worker
541*da0073e9SAndroid Build Coastguard Worker- name: deg2rad(Tensor self) -> Tensor
542*da0073e9SAndroid Build Coastguard Worker  self: deg2rad_backward(grad)
543*da0073e9SAndroid Build Coastguard Worker  result: auto_element_wise
544*da0073e9SAndroid Build Coastguard Worker
545*da0073e9SAndroid Build Coastguard Worker- name: _linalg_det(Tensor A) -> (Tensor result, Tensor LU, Tensor pivots)
546*da0073e9SAndroid Build Coastguard Worker  A: linalg_det_backward(grad, result, A, LU, pivots)
547*da0073e9SAndroid Build Coastguard Worker  result: linalg_det_jvp(A_t, result, LU, pivots, A_p.is_contiguous() && !A_p.is_complex())
548*da0073e9SAndroid Build Coastguard Worker  output_differentiability: [True, False, False]
549*da0073e9SAndroid Build Coastguard Worker
550*da0073e9SAndroid Build Coastguard Worker- name: _linalg_slogdet(Tensor A) -> (Tensor sign, Tensor logabsdet, Tensor LU, Tensor pivots)
551*da0073e9SAndroid Build Coastguard Worker  A: slogdet_backward(grad_sign, grad_logabsdet, A, sign, LU, pivots)
552*da0073e9SAndroid Build Coastguard Worker  sign, logabsdet: slogdet_jvp(LU, pivots, A_t, sign, A_p.is_contiguous() && !A_p.is_complex())
553*da0073e9SAndroid Build Coastguard Worker  output_differentiability: [True, True, False, False]
554*da0073e9SAndroid Build Coastguard Worker
555*da0073e9SAndroid Build Coastguard Worker- name: block_diag(Tensor[] tensors) -> Tensor
556*da0073e9SAndroid Build Coastguard Worker  tensors: block_diag_backward(grad, to_args_sizes(tensors), to_args_scalartypes(tensors))
557*da0073e9SAndroid Build Coastguard Worker  result: block_diag_jvp(tensors)
558*da0073e9SAndroid Build Coastguard Worker
559*da0073e9SAndroid Build Coastguard Worker- name: diag_embed(Tensor self, int offset=0, int dim1=-2, int dim2=-1) -> Tensor
560*da0073e9SAndroid Build Coastguard Worker  self: grad.diagonal(offset, dim1, dim2)
561*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
562*da0073e9SAndroid Build Coastguard Worker
563*da0073e9SAndroid Build Coastguard Worker- name: diagonal(Tensor(a) self, int offset=0, int dim1=0, int dim2=1) -> Tensor(a)
564*da0073e9SAndroid Build Coastguard Worker  self: diagonal_backward_symint(grad, self.sym_sizes(), offset, dim1, dim2)
565*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
566*da0073e9SAndroid Build Coastguard Worker
567*da0073e9SAndroid Build Coastguard Worker- name: diagonal_backward(Tensor grad_output, SymInt[] input_sizes, int offset, int dim1, int dim2) -> Tensor
568*da0073e9SAndroid Build Coastguard Worker  grad_output: grad.diagonal(offset, dim1, dim2)
569*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
570*da0073e9SAndroid Build Coastguard Worker
571*da0073e9SAndroid Build Coastguard Worker- name: dist(Tensor self, Tensor other, Scalar p=2) -> Tensor
572*da0073e9SAndroid Build Coastguard Worker  self: norm_backward(grad, self - other, p, result)
573*da0073e9SAndroid Build Coastguard Worker  other: -norm_backward(grad, self - other, p, result)
574*da0073e9SAndroid Build Coastguard Worker  result: norm_jvp(self_p - other_p, self_t - other_t, p, result, {}, false)
575*da0073e9SAndroid Build Coastguard Worker
576*da0073e9SAndroid Build Coastguard Worker# The backward formula is done in this order to improve numerical stability
577*da0073e9SAndroid Build Coastguard Worker# of the higher order derivatives, see https://github.com/pytorch/pytorch/issues/43414
578*da0073e9SAndroid Build Coastguard Worker# Note that we don't use "result" because saving it would be BC-breaking when it is used in an inplace operation later
579*da0073e9SAndroid Build Coastguard Worker- name: div.Tensor(Tensor self, Tensor other) -> Tensor
580*da0073e9SAndroid Build Coastguard Worker  self: div_tensor_self_backward(grad, other, self.scalar_type())
581*da0073e9SAndroid Build Coastguard Worker  other: div_tensor_other_backward(grad, self, other)
582*da0073e9SAndroid Build Coastguard Worker  result: (self_t - other_t * result) / other_p
583*da0073e9SAndroid Build Coastguard Worker
584*da0073e9SAndroid Build Coastguard Worker- name: div.Scalar(Tensor self, Scalar other) -> Tensor
585*da0073e9SAndroid Build Coastguard Worker  self: div_tensor_self_backward(grad, other, self.scalar_type())
586*da0073e9SAndroid Build Coastguard Worker  result: self_t / other
587*da0073e9SAndroid Build Coastguard Worker
588*da0073e9SAndroid Build Coastguard Worker- name: div.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> Tensor
589*da0073e9SAndroid Build Coastguard Worker  self: div_tensor_self_backward(grad, other, self.scalar_type(), rounding_mode)
590*da0073e9SAndroid Build Coastguard Worker  other: div_tensor_other_backward(grad, self, other, rounding_mode)
591*da0073e9SAndroid Build Coastguard Worker  result: "rounding_mode.has_value() ? result.new_zeros_symint(result.sym_sizes()) : self_t / other_p - other_t * (self_p / other_p) / other_p"

- name: div.Scalar_mode(Tensor self, Scalar other, *, str? rounding_mode) -> Tensor
  self: div_tensor_self_backward(grad, other, self.scalar_type(), rounding_mode)
  result: "rounding_mode.has_value() ? result.new_zeros_symint(result.sym_sizes()) : self_t / other"

- name: dot(Tensor self, Tensor tensor) -> Tensor
  self: grad * tensor.conj()
  tensor: grad * self.conj()
  result: at::dot(self_t, tensor_p) + at::dot(self_p, tensor_t)

- name: vdot(Tensor self, Tensor other) -> Tensor
  self: grad.conj() * other
  other: grad * self
  result: at::vdot(self_t, other_p) + at::vdot(self_p, other_t)
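# dot is bilinear, while vdot conjugates its first argument
# (vdot(a, b) = sum(conj(a) * b)); that is why the vdot gradient w.r.t. self
# is grad.conj() * other rather than grad * other.conj().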

- name: _fused_dropout(Tensor self, float p, Generator? generator=None) -> (Tensor, Tensor)
  self: _fused_dropout_backward(grad, result1, p)

- name: native_dropout(Tensor input, float p, bool? train) -> (Tensor, Tensor)
  input: "GradMode::is_enabled() ? infinitely_differentiable_native_dropout_backward(grad, result1, (!train.has_value() || !train.value() ? 1 : (p == 1 ? 0.0 : 1.0 / (1.0 - p)))) : native_dropout_backward(grad, result1, (!train.has_value() || !train.value() ? 1 : (p == 1 ? 0.0 : 1.0 / (1.0 - p))))"
  result0: "(!train.has_value() || train.value()) ? (p == 1 ? 0.0 : 1.0 / (1.0 - p)) * input_t * result1 : input_t"
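# In training mode, dropout scales the kept activations by 1 / (1 - p)
# (by convention 0 when p == 1), so the tangent applies the same
# mask-and-scale to input_t; in eval mode the op is the identity.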

- name: native_dropout_backward(Tensor grad_output, Tensor mask, float scale) -> Tensor
  grad_output: "native_dropout_double_backward(grad, grad_output, mask, scale)"
  mask: 'not_implemented("native_dropout_backward: mask")'

- name: eq_.Scalar(Tensor(a!) self, Scalar other) -> Tensor(a!)
  self: zeros_like(self)
  result: self_t.zero_()

- name: eq_.Tensor(Tensor(a!) self, Tensor other) -> Tensor(a!)
  self: zeros_like(self)
  other: zeros_like(other)
  result: self_t.zero_()

- name: erf(Tensor self) -> Tensor
  self: 2.0 / sqrt(M_PI) * exp(-(self.pow(2))) * grad
  result: auto_element_wise

- name: erfc(Tensor self) -> Tensor
  self: -2.0 / sqrt(M_PI) * exp(-(self.pow(2))) * grad
  result: auto_element_wise

- name: special_erfcx(Tensor self) -> Tensor
  self: (2.0 * self * result - 2.0 / sqrt(M_PI)) * grad
  result: auto_element_wise

- name: erfinv(Tensor self) -> Tensor
  self: 0.5 * sqrt(M_PI) * exp(self.erfinv().pow(2)) * grad
  result: auto_element_wise
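# The erfinv formula follows from the inverse function rule: since
# d/dy erf(y) = 2 / sqrt(pi) * exp(-y^2), we get
# d/dx erfinv(x) = 1 / erf'(erfinv(x)) = sqrt(pi) / 2 * exp(erfinv(x)^2).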

- name: exp(Tensor self) -> Tensor
  self: grad * result.conj()
  result: auto_element_wise

- name: exp2(Tensor self) -> Tensor
  self: grad * result.conj() * M_LN2
  result: auto_element_wise

- name: expm1(Tensor self) -> Tensor
  self: grad * (result.conj() + 1)
  result: auto_element_wise
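# The exp family reuses the saved output: d/dx e^x = e^x = result,
# d/dx 2^x = 2^x * ln(2) = result * M_LN2, and d/dx (e^x - 1) = e^x = result + 1.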

# TODO: this derivative is not SymInt safe, need sum_to support
- name: expand(Tensor(a) self, SymInt[] size, *, bool implicit=False) -> Tensor(a)
  self: at::sum_to(grad, self.sym_sizes())
  result: auto_linear

- name: exponential_(Tensor(a!) self, float lambd=1, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: fake_quantize_per_tensor_affine_cachemask(Tensor self, float scale, int zero_point, int quant_min, int quant_max) -> (Tensor output, Tensor mask)
  self: fake_quantize_per_tensor_affine_cachemask_backward(grad, mask)

- name: _fake_quantize_per_tensor_affine_cachemask_tensor_qparams(Tensor self, Tensor scale, Tensor zero_point, Tensor fake_quant_enabled, int quant_min, int quant_max) -> (Tensor output, Tensor mask)
  self: fake_quantize_per_tensor_affine_cachemask_backward(grad, mask)

- name: _fake_quantize_learnable_per_tensor_affine(Tensor self, Tensor scale, Tensor zero_point, int quant_min, int quant_max, float grad_factor=1.0) -> Tensor
  self, scale, zero_point: "grad.defined() ? _fake_quantize_learnable_per_tensor_affine_backward(grad, self, scale, zero_point, quant_min, quant_max, grad_factor) : std::tuple<Tensor, Tensor, Tensor>()"

- name: fake_quantize_per_channel_affine_cachemask(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max) -> (Tensor output, Tensor mask)
  self: fake_quantize_per_channel_affine_cachemask_backward(grad, mask)

- name: _fake_quantize_learnable_per_channel_affine(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max, float grad_factor=1.0) -> Tensor
  self, scale, zero_point: "grad.defined() ? _fake_quantize_learnable_per_channel_affine_backward(grad, self, scale, zero_point, axis, quant_min, quant_max, grad_factor) : std::tuple<Tensor, Tensor, Tensor>()"

- name: _fused_moving_avg_obs_fq_helper(Tensor self, Tensor observer_on, Tensor fake_quant_on, Tensor(a!) running_min, Tensor(b!) running_max, Tensor(c!) scale, Tensor(d!) zero_point, float averaging_const, int quant_min, int quant_max, int ch_axis, bool per_row_fake_quant=False, bool symmetric_quant=False) -> (Tensor output, Tensor mask)
  self: fake_quantize_per_tensor_affine_cachemask_backward(grad, mask)

- name: fill.Scalar(Tensor self, Scalar value) -> Tensor
  self: zeros_like(grad)
  result: at::fill(self_t, 0)

- name: fill.Tensor(Tensor self, Tensor value) -> Tensor
  self: zeros_like(grad)
  value: grad.sum()
  result: at::fill(self_t, value_t)

- name: fill_.Scalar(Tensor(a!) self, Scalar value) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.fill_(0)

- name: fill_.Tensor(Tensor(a!) self, Tensor value) -> Tensor(a!)
  self: zeros_like(grad)
  value: grad.sum()
  result: self_t.fill_(value_t)

- name: floor(Tensor self) -> Tensor
  self: zeros_like(grad)
  result: auto_element_wise

- name: fmod.Scalar(Tensor self, Scalar other) -> Tensor
  self: grad
  result: auto_element_wise

- name: fmod.Tensor(Tensor self, Tensor other) -> Tensor
  self: grad
  other: -grad * self.div(other, /*rounding_mode=*/"trunc")
  result: self_t - other_t * self_p.div(other_p, /*rounding_mode=*/"trunc")
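# Away from the discontinuities, fmod(a, b) = a - trunc(a / b) * b with
# trunc(a / b) locally constant, giving d/da = 1 and d/db = -trunc(a / b).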

- name: frac(Tensor self) -> Tensor
  self: grad
  result: self_t

- name: frexp.Tensor(Tensor self) -> (Tensor mantissa, Tensor exponent)
  self: grad / exponent.exp2()
  mantissa: self_t / exponent.exp2()
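# frexp decomposes self = mantissa * 2^exponent with an integer (hence
# piecewise-constant) exponent, so locally d(mantissa)/d(self) = 2^(-exponent).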

- name: gather(Tensor self, int dim, Tensor index, *, bool sparse_grad=False) -> Tensor
  self: gather_backward(grad, self, dim, index, sparse_grad)
  index: non_differentiable
  result: auto_linear

- name: ge_.Scalar(Tensor(a!) self, Scalar other) -> Tensor(a!)
  self: zeros_like(self)
  result: self_t.zero_()

- name: ge_.Tensor(Tensor(a!) self, Tensor other) -> Tensor(a!)
  self: zeros_like(self)
  other: zeros_like(other)
  result: self_t.zero_()

- name: geometric_(Tensor(a!) self, float p, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: geqrf(Tensor self) -> (Tensor a, Tensor tau)
  self: not_implemented("geqrf")

- name: indices(Tensor(a) self) -> Tensor(a)
  output_differentiability: [False]

- name: _indices(Tensor(a) self) -> Tensor(a)
  output_differentiability: [False]

- name: crow_indices(Tensor(a) self) -> Tensor(a)
  output_differentiability: [False]

- name: col_indices(Tensor(a) self) -> Tensor(a)
  output_differentiability: [False]

- name: ccol_indices(Tensor(a) self) -> Tensor(a)
  output_differentiability: [False]

- name: row_indices(Tensor(a) self) -> Tensor(a)
  output_differentiability: [False]

- name: grid_sampler_2d(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> Tensor
  input, grid: "grad.defined() ? grid_sampler_2d_backward(grad, input, grid, interpolation_mode, padding_mode, align_corners, grad_input_mask) : std::tuple<Tensor, Tensor>()"

- name: grid_sampler_3d(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> Tensor
  input, grid: "grad.defined() ? grid_sampler_3d_backward(grad, input, grid, interpolation_mode, padding_mode, align_corners, grad_input_mask) : std::tuple<Tensor, Tensor>()"

# See NOTE [ grid_sample CPU fallback ]
- name: _grid_sampler_2d_cpu_fallback(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> Tensor
  input, grid: "grad.defined() ? _grid_sampler_2d_cpu_fallback_backward(grad, input, grid, interpolation_mode, padding_mode, align_corners) : std::tuple<Tensor, Tensor>()"

- name: gt_.Scalar(Tensor(a!) self, Scalar other) -> Tensor(a!)
  self: zeros_like(self)
  result: self_t.zero_()

- name: gt_.Tensor(Tensor(a!) self, Tensor other) -> Tensor(a!)
  self: zeros_like(self)
  other: zeros_like(other)
  result: self_t.zero_()

- name: hardsigmoid(Tensor self) -> Tensor
  self: hardsigmoid_backward(grad, self)
  result: auto_element_wise

- name: histc(Tensor self, int bins=100, Scalar min=0, Scalar max=0) -> Tensor
  output_differentiability: [False]

- name: hardswish(Tensor self) -> Tensor
  self: hardswish_backward(grad, self)
  result: auto_element_wise

- name: hardswish_backward(Tensor grad_output, Tensor self) -> Tensor
  grad_output: hardswish_backward(grad, self)
  self: at::where(at::logical_and(-3.0 < self, self < 3.0), grad * grad_output / 3.0, at::zeros({}, self.options()))
  result: "hardswish_backward(grad_output_t, self_p)
         + at::where(at::logical_and(-3.0 < self_p, self_p < 3.0), self_t * grad_output_p / 3.0, at::zeros({}, self_p.options()))"

- name: hypot(Tensor self, Tensor other) -> Tensor
  self: grad * self / result
  other: grad * other / result
  result: self_t * self_p / result + other_t * other_p / result
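# With r = sqrt(x^2 + y^2), the total differential is dr = (x dx + y dy) / r,
# which gives both the backward and forward formulas above directly.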

- name: i0(Tensor self) -> Tensor
  self: grad * at::special_i1(self)
  result: auto_element_wise

- name: special_i0e(Tensor self) -> Tensor
  self: grad * (at::special_i1e(self) - self.sgn() * result)
  result: auto_element_wise

- name: special_i1(Tensor self) -> Tensor
  self: i1_backward(grad, self, result)
  result: auto_element_wise

- name: special_i1e(Tensor self) -> Tensor
  self: i1e_backward(grad, self, result)
  result: auto_element_wise

- name: igamma(Tensor self, Tensor other) -> Tensor
  self: 'not_implemented("igamma: input")'
  other: grad * exp((self - 1) * log(other) - other - lgamma(self))

- name: igammac(Tensor self, Tensor other) -> Tensor
  self: 'not_implemented("igammac: input")'
  other: -grad * exp((self - 1) * log(other) - other - lgamma(self))
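# The `other` formulas are the fundamental theorem of calculus: the derivative
# of the regularized lower incomplete gamma P(s, y) in y is its integrand
# y^(s - 1) * e^(-y) / Gamma(s), computed in log-space as
# exp((s - 1) * log(y) - y - lgamma(s)); igammac = 1 - igamma flips the sign.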

- name: index.Tensor(Tensor self, Tensor?[] indices) -> Tensor
  self: index_backward(grad.new_zeros_symint(self.sym_sizes(), self.options()), indices, grad)
  result: auto_linear

- name: _unsafe_index.Tensor(Tensor self, Tensor?[] indices) -> Tensor
  self: at::_unsafe_index_put(grad.new_zeros_symint(self.sym_sizes(), self.options()), indices, grad, true)
  result: auto_linear

- name: index_add(Tensor self, int dim, Tensor index, Tensor source, *, Scalar alpha=1) -> Tensor
  self: grad
  # The case source.dim() == 0 is necessary to support scalar tensors, i.e.
  # source.dim() == 0 with index.dim() == 1 and index.size() == (1,); it needs
  # special handling because source is not broadcastable to index (source.dim() < index.dim()).
  source: "maybe_multiply(source.dim() > 0 ? grad.index_select(dim, index).expand_as(source) : grad.index_select(dim, index.squeeze(0)), alpha)"
  index: non_differentiable
  result: at::index_add(self_t, dim, index, maybe_multiply(source_t, alpha))

- name: index_reduce(Tensor self, int dim, Tensor index, Tensor source, str reduce, *, bool include_self=True) -> Tensor
  self, source: index_reduce_backward(grad, self, dim, index, source, reduce, include_self, result)
  index: non_differentiable

- name: index_copy(Tensor self, int dim, Tensor index, Tensor source) -> Tensor
  self: grad.index_fill(dim, index, 0)
  # The case source.dim() == 0 is necessary to support scalar tensors, i.e.
  # source.dim() == 0 with index.dim() == 1 and index.size() == (1,); it needs
  # special handling because source is not broadcastable to index (source.dim() < index.dim()).
  source: "source.dim() > 0 ? grad.index_select(dim, index).expand_as(source) : grad.index_select(dim, index.squeeze(0))"
  index: non_differentiable
  result: self_t.index_copy(dim, index, source_t)

- name: index_fill.int_Scalar(Tensor self, int dim, Tensor index, Scalar value) -> Tensor
  self: grad.index_fill(dim, index, 0)
  index: non_differentiable
  result: self_t.index_fill(dim, index, 0)

- name: index_fill.int_Tensor(Tensor self, int dim, Tensor index, Tensor value) -> Tensor
  self: grad.index_fill(dim, index, 0)
  value: grad.index_select(dim, std::get<0>(at::_unique(index, /*sorted=*/false))).sum()
  index: non_differentiable
  result: self_t.index_fill(dim, index, value_t)

- name: index_put(Tensor self, Tensor?[] indices, Tensor values, bool accumulate=False) -> Tensor
  self: "accumulate ? grad : grad.index_put(indices, zeros_like(values), false)"
  values: grad.index(indices)
  result: self_t.index_put(indices, values_t, accumulate)
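# When accumulate is false, values overwrite the selected positions, so the
# gradient w.r.t. self is zeroed there; when accumulating, self still
# contributes additively everywhere and grad passes through unchanged.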

- name: _unsafe_index_put(Tensor self, Tensor?[] indices, Tensor values, bool accumulate=False) -> Tensor
  self: "accumulate ? grad : at::_unsafe_index_put(grad, indices, zeros_like(values), false)"
  values: at::_unsafe_index(grad, indices)
  result: at::_unsafe_index_put(self_t, indices, values_t, accumulate)

- name: _index_put_impl_(Tensor(a!) self, Tensor?[] indices, Tensor values, bool accumulate=False, bool unsafe=False) -> Tensor(a!)
  self: "accumulate ? grad : grad.index_put(indices, zeros_like(values), false)"
  values: grad.index(indices)
  result: at::_index_put_impl_(self_t, indices, values_t, accumulate, unsafe)

- name: index_select(Tensor self, int dim, Tensor index) -> Tensor
  self: index_select_backward_symint(grad, self.sym_sizes(), dim, index)
  index: non_differentiable
  result: auto_linear

- name: linalg_inv_ex(Tensor A, *, bool check_errors=False) -> (Tensor inverse, Tensor info)
  A: -at::matmul(inverse.mH(), at::matmul(grad, inverse.mH()))
  inverse: -at::matmul(at::matmul(inverse, A_t), inverse)
  output_differentiability: [True, False]

- name: linalg_pinv.atol_rtol_tensor(Tensor self, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False) -> Tensor
  self: pinv_backward(grad, result, self)
  result: pinv_jvp(self_p, result, self_t)

- name: isnan(Tensor self) -> Tensor
  self: non_differentiable

- name: kthvalue(Tensor self, int k, int dim=-1, bool keepdim=False) -> (Tensor values, Tensor indices)
  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), keepdim)
  values: gather_with_keepdimed_indices(self_t, dim, indices, keepdim)

- name: le_.Scalar(Tensor(a!) self, Scalar other) -> Tensor(a!)
  self: zeros_like(self)
  result: self_t.zero_()

- name: le_.Tensor(Tensor(a!) self, Tensor other) -> Tensor(a!)
  self: zeros_like(self)
  other: zeros_like(other)
  result: self_t.zero_()

- name: lerp.Scalar(Tensor self, Tensor end, Scalar weight) -> Tensor
  self: "weight.isComplex() ? grad * (1 - weight.conj().toComplexDouble()) : grad * (1 - weight.toDouble())"
  end: grad * weight.conj()
  result: at::lerp(self_t, end_t, weight)

- name: lerp.Tensor(Tensor self, Tensor end, Tensor weight) -> Tensor
  self: grad * (1 - weight).conj()
  end: grad * weight.conj()
  weight: grad * (end - self).conj()
  result: at::lerp(self_t, end_t, weight_p) + weight_t * (end_p - self_p)
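# lerp(a, b, w) = a + w * (b - a), so d/da = 1 - w, d/db = w and d/dw = b - a
# (conjugated for complex inputs); the JVP applies the same identity to the
# tangents.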

- name: lgamma(Tensor self) -> Tensor
  self: grad * digamma(self)
  result: auto_element_wise

- name: digamma(Tensor self) -> Tensor
  self: grad * polygamma(1, self)
  result: auto_element_wise

- name: polygamma(int n, Tensor self) -> Tensor
  self: grad * polygamma(n + 1, self)
  result: auto_element_wise
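# lgamma, digamma and polygamma form a derivative ladder:
# d/dx lgamma(x) = digamma(x) = psi^(0)(x) and d/dx psi^(n)(x) = psi^(n+1)(x),
# so each backward just bumps the polygamma order by one.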

- name: polygamma_(Tensor(a!) self, int n) -> Tensor(a!)
  self: grad * polygamma(n + 1, self)
  result: self_t.mul_(polygamma(n + 1, original_self_p))

- name: log(Tensor self) -> Tensor
  self: grad.div(self.conj())
  result: auto_element_wise

- name: log10(Tensor self) -> Tensor
  self: grad / (self.conj() * 2.3025850929940456)
  result: auto_element_wise

- name: log1p(Tensor self) -> Tensor
  self: log1p_backward(grad, self)
  result: auto_element_wise

- name: log2(Tensor self) -> Tensor
  self: grad / (self.conj() * 0.6931471805599453)
  result: auto_element_wise
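# The constants above are ln(10) = 2.3025850929940456 and
# ln(2) = 0.6931471805599453, from d/dx log_b(x) = 1 / (x * ln(b)).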

- name: logaddexp(Tensor self, Tensor other) -> Tensor
  self: grad / (1 + exp(other - self)).conj()
  other: grad / (1 + exp(self - other)).conj()
  result: self_t / (1 + exp(other_p - self_p)) + other_t / (1 + exp(self_p - other_p))
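# For f(x, y) = log(e^x + e^y): df/dx = e^x / (e^x + e^y) = 1 / (1 + e^(y - x)).
# The latter form is used above since it avoids evaluating e^x and e^y directly
# (symmetrically for y, and with base 2 for logaddexp2 below).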

- name: logaddexp2(Tensor self, Tensor other) -> Tensor
  self: grad / (1 + pow(2, other - self))
  other: grad / (1 + pow(2, self - other))
  result: self_t / (1 + pow(2, other_p - self_p)) + other_t / (1 + pow(2, self_p - other_p))

# Note [Gradient formula for xlogy at x = 0, y <= 0]
# x * log(y) is not defined at y <= 0, so we cannot even talk about differentiability.
# Now, xlogy(0, y) = 0 by definition.
# This does not make it differentiable, as it's not defined in a neighbourhood of a point
# (0, y) when y <= 0.
# Now, when a function is non-differentiable, we sometimes return "a relatively sensible value".
# In this case, as per the discussion in https://github.com/pytorch/pytorch/issues/80770, we choose
# this value to be zero, which is the directional derivative along the line {x = 0}.
- name: xlogy.Tensor(Tensor self, Tensor other) -> Tensor
  self: at::xlogy(grad, other).masked_fill((self == 0.) & (other <= 0.), 0.)
  other: grad * self / other
  result: at::xlogy(self_t, other_p).masked_fill((self_p == 0.) & (other_p <= 0.), 0.) + other_t * self_p / other_p

- name: xlogy.Scalar_Self(Scalar self, Tensor other) -> Tensor
  other: grad * self / other
  result: auto_element_wise

- name: xlogy.Scalar_Other(Tensor self, Scalar other) -> Tensor
  self: "other.toDouble() > 0.
          ? at::xlogy(grad, other)
          : at::xlogy(grad, other).masked_fill(self == 0., 0.)"
  result: auto_element_wise

# See Note [Gradient formula for xlogy at x = 0, y <= 0]
# Same here but with y <= -1
- name: special_xlog1py(Tensor self, Tensor other) -> Tensor
  self: at::special_xlog1py(grad, other).masked_fill((self == 0.) & (other <= -1.), 0.)
  other: grad * self / (other + 1)
  result: at::special_xlog1py(self_t, other_p).masked_fill((self_p == 0.) & (other_p <= -1.), 0.) + other_t * self_p / (other_p + 1)

- name: special_xlog1py.self_scalar(Scalar self, Tensor other) -> Tensor
  other: grad * self / (other + 1)
  result: auto_element_wise

- name: special_xlog1py.other_scalar(Tensor self, Scalar other) -> Tensor
  self: "other.toDouble() > -1.
          ? at::special_xlog1py(grad, other)
          : at::special_xlog1py(grad, other).masked_fill(self == 0., 0.)"
  result: auto_element_wise

- name: special_zeta(Tensor self, Tensor other) -> Tensor
  self: not_implemented("zeta")
  other: grad * -self * special_zeta(self + 1., other)

- name: special_zeta.self_scalar(Scalar self, Tensor other) -> Tensor
  other: grad * -self * special_zeta(self.toDouble() + 1., other)

- name: special_zeta.other_scalar(Tensor self, Scalar other) -> Tensor
  self: not_implemented("zeta")

- name: log_normal_(Tensor(a!) self, float mean=1, float std=2, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: logsumexp(Tensor self, int[1] dim, bool keepdim=False) -> Tensor
  self: logsumexp_backward(grad, self, result, dim, keepdim)
  result: logsumexp_jvp(self_p, self_t, dim, keepdim)

- name: linalg_lstsq(Tensor self, Tensor b, float? rcond=None, *, str? driver=None) -> (Tensor solution, Tensor residuals, Tensor rank, Tensor singular_values)
  self, b: linalg_lstsq_backward(grad, self, b, grad_input_mask)
  solution: linalg_lstsq_jvp(self_p, b_p, self_t, b_t)
  output_differentiability: [True, False, False, False]

- name: lt_.Scalar(Tensor(a!) self, Scalar other) -> Tensor(a!)
  self: zeros_like(self)
  result: self_t.zero_()

- name: lt_.Tensor(Tensor(a!) self, Tensor other) -> Tensor(a!)
  self: zeros_like(self)
  other: zeros_like(other)
  result: self_t.zero_()

- name: linalg_lu_factor_ex(Tensor A, *, bool pivot=True, bool check_errors=False) -> (Tensor LU, Tensor pivots, Tensor info)
  A: lu_factor_ex_backward(grad, LU, pivots, pivot)
  LU: lu_factor_ex_jvp(A_t, LU, pivots, pivot)
  output_differentiability: [True, False, False]

- name: linalg_lu(Tensor A, *, bool pivot=True) -> (Tensor P, Tensor L, Tensor U)
  A: linalg_lu_backward(grad_L, grad_U, P, L, U, pivot)
  L: std::get<0>(linalg_lu_jvp(A_t, P, L, U, pivot))
  U: std::get<1>(linalg_lu_jvp(A_t, P, L, U, pivot))
  output_differentiability: [False, True, True]

- name: linalg_lu_solve(Tensor LU, Tensor pivots, Tensor B, *, bool left=True, bool adjoint=False) -> Tensor
  LU: linalg_lu_solve_LU(grad, LU, pivots, result, left, adjoint)
  B: "at::linalg_lu_solve(LU, pivots, grad, left, !adjoint)"
  result: linalg_lu_solve_jvp(result, LU_p, pivots, LU_t, B_t, left, adjoint)

- name: lu_unpack(Tensor LU_data, Tensor LU_pivots, bool unpack_data=True, bool unpack_pivots=True) -> (Tensor P, Tensor L, Tensor U)
  LU_data: lu_unpack_backward(grad_L, grad_U, LU_data.sym_size(-2), LU_data.sym_size(-1))
  LU_pivots: non_differentiable
  L: "LU_data_t.sym_size(-2) >= LU_data_t.sym_size(-1) ? LU_data_t.tril(-1) : LU_data_t.narrow_symint(-1, 0, LU_data_t.sym_size(-2)).tril(-1)"
  U: "LU_data_t.sym_size(-1) >= LU_data_t.sym_size(-2) ? LU_data_t.triu() : LU_data_t.narrow_symint(-2, 0, LU_data_t.sym_size(-1)).triu()"
  output_differentiability: [False, True, True]

- name: masked_fill.Scalar(Tensor self, Tensor mask, Scalar value) -> Tensor
  self: grad.masked_fill(mask, 0)
  mask: non_differentiable
  result: self_t.masked_fill(mask, 0)

- name: masked_fill.Tensor(Tensor self, Tensor mask, Tensor value) -> Tensor
  self: grad.masked_fill(mask, 0)
  value: masked_fill_backward(grad, mask)
  mask: non_differentiable
  result: self_t.masked_fill(mask, value_t)

- name: masked_scatter(Tensor self, Tensor mask, Tensor source) -> Tensor
  self: grad.masked_fill(mask, 0)
  source: masked_scatter_backward_symint(grad, mask, source.sym_sizes())
  mask: non_differentiable
  result: self_t.masked_scatter(mask, source_t)

- name: masked_scatter_backward(Tensor grad_output, Tensor mask, SymInt[] sizes) -> Tensor
  grad_output: zeros_like(grad_output).masked_scatter(mask, grad)
  mask: non_differentiable
  result: masked_scatter_backward(grad_output_t, mask, grad_output_t.sizes())

- name: masked_select(Tensor self, Tensor mask) -> Tensor
  self: masked_select_backward(grad, self, mask)
  mask: non_differentiable
  result: auto_linear

- name: linalg_matrix_exp(Tensor self) -> Tensor
  self: linalg_matrix_exp_differential(self, grad, /*adjoint*/ true)
  result: linalg_matrix_exp_differential(self_p, self_t, /*adjoint*/ false)

- name: max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)
  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), keepdim)
  values: gather_with_keepdimed_indices(self_t, dim, indices, keepdim)

- name: max(Tensor self) -> Tensor
  self: evenly_distribute_backward(grad, self, result)
  result: evenly_read_jvp(self_t, self_p, result)

- name: maximum(Tensor self, Tensor other) -> Tensor
  self: at::where(self == other, grad / 2, grad).masked_fill_(self < other, 0)
  other: at::where(self == other, grad / 2, grad).masked_fill_(self > other, 0)
  result: other_t + at::where(self_p == other_p, at::scalar_tensor(0.5, result.options()), (self_p > other_p).to(result.scalar_type())) * (self_t - other_t)
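# At ties (self == other) the subgradient is split evenly, grad / 2 to each
# input; the JVP matches by averaging the tangents, 0.5 * (self_t + other_t).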

- name: fmax(Tensor self, Tensor other) -> Tensor
  self: grad.masked_fill((self >= other).logical_or_(other.isnan()).logical_not_(), 0)
  other: grad.masked_fill((self >= other).logical_or_(other.isnan()), 0)
  result: other_t + (self_p > other_p).logical_or_(other_p.isnan()) * (self_t - other_t)

- name: mean(Tensor self, *, ScalarType? dtype=None) -> Tensor
  self: grad.expand_symint(self.sym_sizes()) / self.sym_numel()
  result: auto_linear
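# mean distributes grad uniformly: each input element receives
# grad / self.sym_numel(), hence the expand followed by the division.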

- name: mean.dim(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
  self: mean_backward(grad, self.sym_sizes(), dim, self.sym_numel(), keepdim)
  result: auto_linear

- name: median(Tensor self) -> Tensor
  self: evenly_distribute_backward(grad, self, result)
  result: evenly_read_jvp(self_t, self_p, result)

- name: nanmedian(Tensor self) -> Tensor
  self: evenly_distribute_backward(grad, self, result)
  result: evenly_read_jvp(self_t, self_p, result)

# This is in theory incorrect in the following case:
#   sorted list: [..., a, b, b, ..., b, b, c, ...] with median = b and the value
#                            |                     at the middle position of the
#                            |                     list between two `b`s. E.g.,
#                            |
#                            ^the middle position
# The gradient exists and is essentially 0 in this case.
#
# In the case where the middle position is at the boundary of the `b` range, e.g.,
#   sorted list: [..., a, b, b, ..., b, b, c, ...]
#                                       |
#                                       ^the middle position
# the backward implementation is correct in the sense that it returns the
# subgradient on one side.
- name: median.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)
  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), keepdim)
  values: gather_with_keepdimed_indices(self_t, dim, indices, keepdim)

- name: nanmedian.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)
  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), keepdim)
  values: gather_with_keepdimed_indices(self_t, dim, indices, keepdim)

- name: min.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)
  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), keepdim)
  values: gather_with_keepdimed_indices(self_t, dim, indices, keepdim)

- name: min(Tensor self) -> Tensor
  self: evenly_distribute_backward(grad, self, result)
  result: evenly_read_jvp(self_t, self_p, result)

- name: minimum(Tensor self, Tensor other) -> Tensor
  self: at::where(self == other, grad / 2, grad).masked_fill_(self > other, 0)
  other: at::where(self == other, grad / 2, grad).masked_fill_(self < other, 0)
  result: other_t + at::where(self_p == other_p, at::scalar_tensor(0.5, result.options()), (self_p < other_p).to(result.scalar_type())) * (self_t - other_t)

- name: fmin(Tensor self, Tensor other) -> Tensor
  self: grad.masked_fill((self <= other).logical_or_(other.isnan()).logical_not_(), 0)
  other: grad.masked_fill((self <= other).logical_or_(other.isnan()), 0)
  result: other_t + (self_p <= other_p).logical_or_(other_p.isnan()) * (self_t - other_t)

- name: amax(Tensor self, int[1] dim=[], bool keepdim=False) -> Tensor
  self: scale_grad_by_count(restore_reduced_dims(grad, dim, keepdim), restore_reduced_dims(result, dim, keepdim) == self, dim)
  result: amaxamin_jvp(self_p, self_t, result, dim, keepdim)

- name: amin(Tensor self, int[1] dim=[], bool keepdim=False) -> Tensor
  self: scale_grad_by_count(restore_reduced_dims(grad, dim, keepdim), restore_reduced_dims(result, dim, keepdim) == self, dim)
  result: amaxamin_jvp(self_p, self_t, result, dim, keepdim)

- name: mm(Tensor self, Tensor mat2) -> Tensor
  self: mm_mat1_backward(grad, mat2, self.sym_sizes(), self.sym_strides(), self.layout(), 1)
  mat2: mm_mat2_backward(grad, self, mat2.sym_sizes(), mat2.sym_strides(), mat2.layout(), 1)
  result: at::mm(self_t, mat2_p) + at::mm(self_p, mat2_t)
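# Product rule for matrix multiplication: d(A B) = dA * B + A * dB gives the
# forward-mode formula; the backward helpers contract grad against the other
# (saved) operand, taking its sizes, strides and layout into account.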

- name: mode(Tensor self, int dim=-1, bool keepdim=False) -> (Tensor values, Tensor indices)
  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), keepdim)
  values: gather_with_keepdimed_indices(self_t, dim, indices, keepdim)

- name: mul.Tensor(Tensor self, Tensor other) -> Tensor
  self: mul_tensor_backward(grad, other, self.scalar_type())
  other: mul_tensor_backward(grad, self, other.scalar_type())
  result: other_t * self_p + self_t * other_p
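# Elementwise product rule: d(a * b) = da * b + a * db; the backward
# multiplies grad by the other factor (conjugated for complex inputs),
# cast back to the input's scalar type.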

- name: mul.Scalar(Tensor self, Scalar other) -> Tensor
  self: mul_tensor_backward(grad, other, self.scalar_type())
  result: self_t * other

- name: mv(Tensor self, Tensor vec) -> Tensor
  self: grad.ger(vec.conj())
  vec: self.conj().t().mv(grad)
  result: mv(self_t, vec_p) + mv(self_p, vec_t)

- name: mvlgamma(Tensor self, int p) -> Tensor
  self: mvlgamma_backward(grad, self, p)
  result: auto_element_wise

- name: nan_to_num(Tensor self, float? nan=None, float? posinf=None, float? neginf=None) -> Tensor
  self: grad * at::isfinite(self)
  result: auto_element_wise

- name: native_batch_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps) -> (Tensor, Tensor, Tensor)
  input, weight, bias: "grad.defined() ? native_batch_norm_backward(grad, input, weight, running_mean, running_var, result1, result2, training, eps, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"
  result0: batch_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, running_mean, running_var, result1, result2, training, eps)

- name: _native_batch_norm_legit(Tensor input, Tensor? weight, Tensor? bias, Tensor(a!) running_mean, Tensor(b!) running_var, bool training, float momentum, float eps) -> (Tensor, Tensor, Tensor)
  input, weight, bias: "grad.defined() ? native_batch_norm_backward(grad, input, weight, running_mean, running_var, result1, result2, training, eps, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"
  result0: batch_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, running_mean, running_var, result1, result2, training, eps)

- name: _native_batch_norm_legit_no_training(Tensor input, Tensor? weight, Tensor? bias, Tensor running_mean, Tensor running_var, float momentum, float eps) -> (Tensor, Tensor, Tensor)
  input, weight, bias: "grad.defined() ? native_batch_norm_backward(grad, input, weight, running_mean, running_var, result1, result2, /*training=*/false, eps, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"
  result0: batch_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, running_mean, running_var, result1, result2, /*training=*/false, eps)

- name: _native_batch_norm_legit.no_stats(Tensor input, Tensor? weight, Tensor? bias, bool training, float momentum, float eps) -> (Tensor, Tensor, Tensor)
  input, weight, bias: "grad.defined() ? native_batch_norm_backward(grad, input, weight, Tensor(), Tensor(), result1, result2, training, eps, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"
  result0: batch_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, Tensor(), Tensor(), result1, result2, training, eps)

- name: native_batch_norm_backward(Tensor grad_out, Tensor input, Tensor? weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_invstd, bool train, float eps, bool[3] output_mask) -> (Tensor, Tensor, Tensor)
  input, weight, grad_out: batchnorm_double_backward(input, weight, grads[0], grads[1], grads[2], grad_out, running_mean, running_var, train, eps, save_mean, save_invstd, grad_input_mask)
  save_mean: not_implemented("native_batch_norm_backward save_mean")
  save_invstd: not_implemented("native_batch_norm_backward save_invstd")

- name: native_layer_norm(Tensor input, SymInt[] normalized_shape, Tensor? weight, Tensor? bias, float eps) -> (Tensor, Tensor, Tensor)
  input, weight, bias: "grad.defined() ? native_layer_norm_backward_symint(grad, input, normalized_shape, result1, result2, weight, bias, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"
  result0: layer_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, result1, result2, normalized_shape)

- name: native_layer_norm_backward(Tensor grad_out, Tensor input, SymInt[] normalized_shape, Tensor mean, Tensor rstd, Tensor? weight, Tensor? bias, bool[3] output_mask) -> (Tensor, Tensor, Tensor)
  input, weight, grad_out: layer_norm_double_backward(input, weight, grads[0], grads[1], grads[2], grad_out, mean, rstd, normalized_shape, grad_input_mask)
  bias: Tensor()
  mean: not_implemented("native_layer_norm_backward mean")
  rstd: not_implemented("native_layer_norm_backward rstd")

- name: native_group_norm(Tensor input, Tensor? weight, Tensor? bias, SymInt N, SymInt C, SymInt HxW, int group, float eps) -> (Tensor, Tensor, Tensor)
  input, weight, bias: "GradMode::is_enabled() || grads[1].defined() || grads[2].defined() ? infinitely_differentiable_native_group_norm_backward(grads[0], grads[1], grads[2], input, result1, result2, weight, N, C, HxW, group, eps, grad_input_mask) : (grads[0].defined() ? native_group_norm_backward_symint(grads[0].device().is_xpu() ? grads[0] : grads[0].contiguous(grads[0].device().is_cpu() ? input.suggest_memory_format() : c10::MemoryFormat::Contiguous), input.device().is_xpu() ? input : input.contiguous(input.device().is_cpu() ? input.suggest_memory_format() : c10::MemoryFormat::Contiguous), result1, result2, weight, N, C, HxW, group, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>())"
1236*da0073e9SAndroid Build Coastguard Worker  result0: group_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, result1, result2, group)
1237*da0073e9SAndroid Build Coastguard Worker  result1: group_norm_mean_jvp(input_t, result1, group)
1238*da0073e9SAndroid Build Coastguard Worker  result2: group_norm_invstd_jvp(input_p, input_t, result1, result2, group)
1239*da0073e9SAndroid Build Coastguard Worker
1240*da0073e9SAndroid Build Coastguard Worker- name: ne_.Scalar(Tensor(a!) self, Scalar other) -> Tensor(a!)
1241*da0073e9SAndroid Build Coastguard Worker  self: zeros_like(self)
1242*da0073e9SAndroid Build Coastguard Worker  result: self_t.zero_()
1243*da0073e9SAndroid Build Coastguard Worker
1244*da0073e9SAndroid Build Coastguard Worker- name: ne_.Tensor(Tensor(a!) self, Tensor other) -> Tensor(a!)
1245*da0073e9SAndroid Build Coastguard Worker  self: zeros_like(self)
1246*da0073e9SAndroid Build Coastguard Worker  other: zeros_like(other)
1247*da0073e9SAndroid Build Coastguard Worker  result: self_t.zero_()
1248*da0073e9SAndroid Build Coastguard Worker
1249*da0073e9SAndroid Build Coastguard Worker- name: neg(Tensor self) -> Tensor
1250*da0073e9SAndroid Build Coastguard Worker  self: grad.neg()
1251*da0073e9SAndroid Build Coastguard Worker  result: auto_element_wise
1252*da0073e9SAndroid Build Coastguard Worker
1253*da0073e9SAndroid Build Coastguard Worker- name: _batch_norm_with_update(Tensor input, Tensor? weight, Tensor? bias, Tensor(a!) running_mean, Tensor(b!) running_var, float momentum, float eps) -> (Tensor, Tensor, Tensor, Tensor)
1254*da0073e9SAndroid Build Coastguard Worker  input, weight, bias: "grad.defined() ? batch_norm_backward(grad, input, weight, running_mean, running_var, result1, result2, /*update*/true, eps, grad_input_mask, retain_variables ? result3.clone() : result3) : std::tuple<Tensor, Tensor, Tensor>()"
1255*da0073e9SAndroid Build Coastguard Worker  result0: batch_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, running_mean, running_var, result1, result2, true, eps)
1256*da0073e9SAndroid Build Coastguard Worker
1257*da0073e9SAndroid Build Coastguard Worker- name: _batch_norm_no_update(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, float momentum, float eps) -> (Tensor, Tensor, Tensor, Tensor)
1258*da0073e9SAndroid Build Coastguard Worker  input, weight, bias: "grad.defined() ? batch_norm_backward(grad, input, weight, running_mean, running_var, result1, result2, /*update*/false, eps, grad_input_mask, retain_variables ? result3.clone() : result3) : std::tuple<Tensor, Tensor, Tensor>()"
1259*da0073e9SAndroid Build Coastguard Worker  result0: batch_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, running_mean, running_var, result1, result2, false, eps)
1260*da0073e9SAndroid Build Coastguard Worker
1261*da0073e9SAndroid Build Coastguard Worker- name: batch_norm_backward(Tensor grad_out, Tensor input, Tensor weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_var, bool update, float eps, bool[3] output_mask, Tensor reserve) -> (Tensor, Tensor, Tensor)
1262*da0073e9SAndroid Build Coastguard Worker  input, weight, grad_out: batchnorm_double_backward(input, weight, grads[0], grads[1], grads[2], grad_out, running_mean, running_var, update, eps, save_mean, save_var, grad_input_mask)
1263*da0073e9SAndroid Build Coastguard Worker  save_mean: not_implemented("batch_norm_backward save_mean")
1264*da0073e9SAndroid Build Coastguard Worker  save_var: not_implemented("batch_norm_backward save_var")
1265*da0073e9SAndroid Build Coastguard Worker  reserve: not_implemented("batch_norm_backward reserve")
1266*da0073e9SAndroid Build Coastguard Worker
1267*da0073e9SAndroid Build Coastguard Worker- name: nextafter(Tensor self, Tensor other) -> Tensor
1268*da0073e9SAndroid Build Coastguard Worker  self: not_implemented("nextafter")
1269*da0073e9SAndroid Build Coastguard Worker  other: not_implemented("nextafter")
1270*da0073e9SAndroid Build Coastguard Worker
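# A hedged derivation sketch for the norm entries below (standard real-input
# p-norm calculus, not a description of norm_backward's exact internals): for
# r = ||x||_p with 0 < p < inf,
#   dr/dx_i = sign(x_i) * |x_i|^(p - 1) * r^(1 - p)
# so the VJP is grad * sign(x) * |x|^(p-1) / r^(p-1); the corner cases
# (p = 1, 2, inf and r == 0) are handled inside norm_backward / norm_jvp.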
- name: norm.Scalar(Tensor self, Scalar p=2) -> Tensor
  self: norm_backward(grad, self, p, result)
  result: norm_jvp(self_p, self_t, p, result)

- name: norm.ScalarOpt_dim(Tensor self, Scalar? p, int[1] dim, bool keepdim=False) -> Tensor
  self: norm_backward(grad, self, p, result, dim, keepdim)
  result: norm_jvp(self_p, self_t, p, result, dim, keepdim)

- name: norm.ScalarOpt_dtype(Tensor self, Scalar? p, *, ScalarType dtype) -> Tensor
  self: norm_backward(grad, self.to(grad.scalar_type()), p, result)
  result: norm_jvp(self_p, self_t, p, result)

- name: norm.ScalarOpt_dim_dtype(Tensor self, Scalar? p, int[1] dim, bool keepdim, *, ScalarType dtype) -> Tensor
  self: norm_backward(grad, self.to(grad.scalar_type()), p, result, dim, keepdim)
  result: norm_jvp(self_p, self_t, p, result, dim, keepdim)

- name: linalg_vector_norm(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
  self: linalg_vector_norm_backward(grad, self, ord, result, dim, keepdim)
  result: linalg_vector_norm_jvp(self_p, self_t, ord, result, dim, keepdim)

- name: _pdist_forward(Tensor self, float p=2) -> Tensor
  self: _pdist_backward(grad, self, p, result)

- name: _pdist_backward(Tensor grad, Tensor self, float p, Tensor pdist) -> Tensor
  grad: not_implemented("_pdist_backward")
  self: not_implemented("_pdist_backward")
  pdist: not_implemented("_pdist_backward")

- name: _euclidean_dist(Tensor x1, Tensor x2) -> Tensor
  x1, x2: _euclidean_dist_backward(grad, x1, x2, result)

- name: _cdist_forward(Tensor x1, Tensor x2, float p, int? compute_mode) -> Tensor
  x1: _cdist_backward(grad.contiguous(), x1, x2, p, result)
  x2: _cdist_backward(grad.mT().contiguous(), x2, x1, p, result.mT().contiguous())

- name: _cdist_backward(Tensor grad, Tensor x1, Tensor x2, float p, Tensor cdist) -> Tensor
  grad: not_implemented("_cdist_backward")
  x1: not_implemented("_cdist_backward")
  x2: not_implemented("_cdist_backward")
  cdist: not_implemented("_cdist_backward")

- name: normal_(Tensor(a!) self, float mean=0, float std=1, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: normal.Tensor_float(Tensor mean, float std=1, *, Generator? generator=None) -> Tensor
  mean: at::zeros_symint(mean.sym_sizes(), grad.options())
  result: auto_element_wise

- name: normal.float_Tensor(float mean, Tensor std, *, Generator? generator=None) -> Tensor
  std: at::zeros_symint(std.sym_sizes(), grad.options())
  result: auto_element_wise

- name: normal.Tensor_Tensor(Tensor mean, Tensor std, *, Generator? generator=None) -> Tensor
  mean: at::zeros_symint(mean.sym_sizes(), grad.options())
  std: at::zeros_symint(std.sym_sizes(), grad.options())
  result: zeros_like(mean_t)

- name: linalg_householder_product(Tensor input, Tensor tau) -> Tensor
  input, tau: householder_product_backward(grad, result, input, tau)
  result: householder_product_jvp(input_t, tau_t, result, input_p, tau_p)

- name: ormqr(Tensor self, Tensor input2, Tensor input3, bool left=True, bool transpose=False) -> Tensor
  self, input2, input3: ormqr_backward(grad, result, self, input2, input3, left, transpose, grad_input_mask)

- name: permute(Tensor(a) self, int[] dims) -> Tensor(a)
  self: permute_backwards(grad, dims)
  result: auto_linear

- name: poisson(Tensor self, Generator? generator=None) -> Tensor
  self: zeros_like(self)
  result: auto_element_wise

- name: pow.Tensor_Scalar(Tensor self, Scalar exponent) -> Tensor
  self: pow_backward(grad, self, exponent)
  result: auto_element_wise

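# A hedged note on the pow entries below: for z = a^b (real case),
#   dz/da = b * a^(b - 1)        (pow_backward_self)
#   dz/db = a^b * log(a)         (pow_backward_exponent)
# The conj() wrapping in the forward-mode formula reuses these backward
# helpers (which follow the VJP conjugation convention) to build the JVP.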
- name: pow.Tensor_Tensor(Tensor self, Tensor exponent) -> Tensor
  self: pow_backward_self(grad, self, exponent)
  exponent: pow_backward_exponent(grad, self, exponent, result)
  result: (pow_backward_self(self_t.conj(), self_p, exponent_p) + pow_backward_exponent(exponent_t.conj(), self_p, exponent_p, result)).conj()

- name: pow.Scalar(Scalar self, Tensor exponent) -> Tensor
  exponent: pow_backward_exponent(grad, self, exponent, result)
  result: auto_element_wise

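# A hedged reading of the prod formulas below: the JVP of y = prod_i x_i is
#   dy = sum_i (prod_{j != i} x_j) * t_i = sum_i (y / x_i) * t_i,
# and prod_backward applied to an all-ones "grad" produces exactly those
# cofactor terms prod_{j != i} x_j (including the handling of zero entries),
# which is why the entries reuse it with at::ones({}, ...).expand_as(result).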
- name: prod(Tensor self, *, ScalarType? dtype=None) -> Tensor
  self: prod_backward(grad, self.to(grad.scalar_type()), result)
  result: (prod_backward(at::ones({}, result.options()).expand_as(result), self_p.to(result.scalar_type()), result) * self_t.conj()).sum().conj()

- name: prod.dim_int(Tensor self, int dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
  self: prod_backward(grad, self.to(grad.scalar_type()), result, dim, keepdim)
  result: (prod_backward(at::ones({}, result.options()).expand_as(result), self_p.to(result.scalar_type()), result, dim, keepdim) * self_t.conj()).sum(dim, keepdim).conj()

- name: put(Tensor self, Tensor index, Tensor source, bool accumulate=False) -> Tensor
  self: "accumulate ? grad : grad.put(index, zeros_like(source), false)"
  index: non_differentiable
  source: grad.take(index).reshape_as(source)
  result: self_t.put(index, source_t, accumulate)

- name: linalg_qr(Tensor A, str mode='reduced') -> (Tensor Q, Tensor R)
  A: linalg_qr_backward(grad_Q, grad_R, Q, R, mode)
  Q, R: linalg_qr_jvp(A_t, Q, R, mode)

- name: rad2deg(Tensor self) -> Tensor
  self: rad2deg_backward(grad)
  result: auto_element_wise

- name: random_.from(Tensor(a!) self, int from, int? to, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: random_.to(Tensor(a!) self, int to, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: random_(Tensor(a!) self, *, Generator? generator=None) -> Tensor(a!)
  self: zeros_like(grad)
  result: self_t.zero_()

- name: reciprocal(Tensor self) -> Tensor
  self: -grad * (result * result).conj()
  result: auto_element_wise

- name: remainder.Scalar(Tensor self, Scalar other) -> Tensor
  self: grad
  result: auto_element_wise

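# A hedged derivation for remainder below: remainder(a, b) = a - b * floor(a / b),
# and treating floor(a / b) as locally constant gives
#   d/da = 1  and  d/db = -floor(a / b),
# which matches "grad" for self and -grad * self.div(other, "floor") for other.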
- name: remainder.Tensor(Tensor self, Tensor other) -> Tensor
  self: grad
  other: -grad * self.div(other, /*rounding_mode=*/"floor")
  result: self_t - other_t * self_p.div(other_p, /*rounding_mode=*/"floor")

- name: renorm(Tensor self, Scalar p, int dim, Scalar maxnorm) -> Tensor
  self: renorm_backward(grad, self, p, dim, maxnorm)
  result: renorm_jvp(self_p, self_t, p, dim, maxnorm)

- name: repeat(Tensor self, SymInt[] repeats) -> Tensor
  self: repeat_backward(grad, repeats, self.sym_sizes())
  result: auto_linear

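# Hedged: entr(x) = -x * log(x), so d/dx entr(x) = -(1 + log(x)), which is the
# formula below.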
- name: special_entr(Tensor self) -> Tensor
  self: grad * (-(1 + self.log()))
  result: auto_element_wise

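# Hedged: ndtri is Phi^{-1} for the standard normal CDF Phi, so by the
# inverse-function rule
#   d/dx ndtri(x) = 1 / phi(ndtri(x)) = sqrt(2 * pi) * exp(result^2 / 2).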
- name: special_ndtri(Tensor self) -> Tensor
  self: grad * std::sqrt(2 * M_PI) * (result.square() / 2).exp()
  result: auto_element_wise

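# Hedged: log_ndtr(x) = log(Phi(x)), so
#   d/dx log_ndtr(x) = phi(x) / Phi(x)
#                    = exp(-x^2 / 2) / (sqrt(2 * pi) * exp(log_ndtr(x)))
#                    = exp(-(result + x^2 / 2)) / sqrt(2 * pi).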
- name: special_log_ndtr(Tensor self) -> Tensor
  self: grad / std::sqrt(2 * M_PI) * (result + self.pow(2) / 2).neg().exp()
  result: auto_element_wise

# [Note: Sometimes view derivatives]
# The following situation applies to other operations as well.
# TODO: This note is only referenced by to_dense and to_sparse*. Make
# it more generic if it ends up being referenced elsewhere.
#
# DO NOT define a backward for reshape!
# reshape is special in that it sometimes returns a view and sometimes a copy.
# Defining a backward would make codegen emit the forward call as
#     as_variable(baseType->reshape(self)),
# making it impossible (or at least very hard) to detect when the result is
# actually a view.
# - name: reshape(Tensor self, IntArrayRef shape)
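#
# A minimal illustration of the view-vs-copy behavior (a hedged sketch; it
# peeks at the private Tensor._base attribute purely for demonstration):
#
#   import torch
#   x = torch.arange(4)
#   assert x.reshape(2, 2)._base is x      # contiguous input: reshape is a view
#   t = torch.arange(4).reshape(2, 2).t()  # transposed, non-contiguous
#   assert t.reshape(4)._base is None      # incompatible strides: reshape copies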

- name: _reshape_alias(Tensor(a) self, SymInt[] size, SymInt[] stride) -> Tensor(a)
  self: grad.reshape_symint(self.sym_sizes())
  result: auto_linear

- name: round(Tensor self) -> Tensor
  self: zeros_like(grad)
  result: auto_element_wise

- name: round.decimals(Tensor self, *, int decimals) -> Tensor
  self: zeros_like(grad)
  result: auto_element_wise

- name: rsqrt(Tensor self) -> Tensor
  self: -0.5 * grad * result.pow(3).conj()
  result: auto_element_wise

- name: scatter.src(Tensor self, int dim, Tensor index, Tensor src) -> Tensor
  self: grad.scatter(dim, index, 0)
  index: non_differentiable
  src: grad.gather(dim, index)
  result: self_t.scatter(dim, index, src_t)

- name: scatter.value(Tensor self, int dim, Tensor index, Scalar value) -> Tensor
  self: grad.scatter(dim, index, 0)
  index: non_differentiable
  result: self_t.scatter(dim, index, 0)

- name: scatter_add(Tensor self, int dim, Tensor index, Tensor src) -> Tensor
  self: grad
  index: non_differentiable
  src: grad.gather(dim, index)
  result: scatter_add(self_t, dim, index, src_t)

- name: select.int(Tensor(a) self, int dim, SymInt index) -> Tensor(a)
  dispatch:
    Default:
      self: select_backward_symint(grad, self.sym_sizes(), dim, index)
      result: auto_linear
    AutogradNestedTensor:
      self: _nested_select_backward_symint(grad, self, dim, index)

- name: select_backward(Tensor grad_output, SymInt[] input_sizes, int dim, SymInt index) -> Tensor
  grad_output: grad.select_symint(dim, index)
  result: auto_linear

- name: sigmoid(Tensor self) -> Tensor
  self: sigmoid_backward(grad, result)
  result: auto_element_wise

- name: logit(Tensor self, float? eps=None) -> Tensor
  self: "GradMode::is_enabled() ? infinitely_differentiable_logit_backward(grad, self, eps) : logit_backward(grad, self, eps)"
  result: auto_element_wise

- name: sign(Tensor self) -> Tensor
  self: zeros_like(grad)
  result: auto_element_wise

- name: sgn(Tensor self) -> Tensor
  self: sgn_backward(self, grad, result)
  # Cannot use auto_element_wise here because the Jacobian is *not* Hermitian (in fact, it is symmetric).
  # The function is not holomorphic, so there is no reason for its Jacobian to be Hermitian;
  # auto_element_wise has a name that is a bit misleading in the complex case.
  result: sgn_backward(self_p, self_t, result)
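  # A hedged Wirtinger-calculus sketch of the claim above: for z != 0,
  # sgn(z) = z / |z| gives
  #   d sgn = dz / (2|z|) - z^2 * conj(dz) / (2|z|^3),
  # i.e. d sgn/dz = 1/(2|z|) and d sgn/d conj(z) = -sgn(z)^2 / (2|z|); the
  # conj(dz) term is what breaks the Hermitian structure auto_element_wise
  # assumes, so the JVP is spelled out explicitly here.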

- name: sin(Tensor self) -> Tensor
  self: grad * self.cos().conj()
  result: auto_element_wise

- name: sinc(Tensor self) -> Tensor
  self: sinc_backward(grad, self)
  result: auto_element_wise

- name: sinh(Tensor self) -> Tensor
  self: grad * self.cosh().conj()
  result: auto_element_wise

- name: slice.Tensor(Tensor(a) self, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor(a)
  self: slice_backward_wrapper(grad, self.sym_sizes(), dim, start, end, step)
  result: auto_linear

- name: slice_backward(Tensor grad_output, SymInt[] input_sizes, int dim, SymInt start, SymInt end, SymInt step) -> Tensor
  grad_output: grad.slice_symint(dim, start, end, step)
  result: auto_linear

- name: slice_inverse(Tensor(a) self, Tensor src, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor(a)
  self: grad.slice_symint(dim, start, end, step)
  src: slice_scatter_symint(grad, zeros_like(self), dim, start, end, step)
  result: auto_linear

- name: slice_scatter(Tensor self, Tensor src, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor
  self: slice_scatter_symint(grad, zeros_like(src), dim, start, end, step)
  src: grad.slice_symint(dim, start, end, step)
  result: auto_linear

- name: select_scatter(Tensor self, Tensor src, int dim, SymInt index) -> Tensor
  self: select_scatter_symint(grad, zeros_like(src), dim, index)
  src: grad.select_symint(dim, index)
  result: auto_linear

- name: diagonal_scatter(Tensor self, Tensor src, int offset=0, int dim1=0, int dim2=1) -> Tensor
  self: diagonal_scatter(grad, zeros_like(src), offset, dim1, dim2)
  src: grad.diagonal(offset, dim1, dim2)
  result: auto_linear

- name: as_strided_scatter(Tensor self, Tensor src, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor
  self: as_strided_scatter_backward(grad, TensorGeometry(self), TensorGeometry(src), size, stride, storage_offset)
  # See Note [as_strided_scatter backward support]
  src: grad.contiguous().as_strided_symint(size, stride, storage_offset)
  result: auto_linear

- name: _linalg_solve_ex(Tensor A, Tensor B, *, bool left=True, bool check_errors=False) -> (Tensor result, Tensor LU, Tensor pivots, Tensor info)
  A, B: linalg_solve_backward(grad, result, A, LU, pivots, left, grad_input_mask[1])
  result: "linalg_solve_jvp(A_t, B_t, result, LU, pivots, left, A_p.is_contiguous() && !A_p.is_complex())"
  output_differentiability: [True, False, False, False]  # LU is an auxiliary tensor not exposed to the user

- name: sort(Tensor self, int dim=-1, bool descending=False) -> (Tensor values, Tensor indices)
  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), true)
  output_differentiability: [True, False]
  values: gather_with_keepdimed_indices(self_t, dim, indices, true)

- name: sort.stable(Tensor self, *, bool? stable, int dim=-1, bool descending=False) -> (Tensor values, Tensor indices)
  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), true)
  output_differentiability: [True, False]
  values: gather_with_keepdimed_indices(self_t, dim, indices, true)

- name: split.Tensor(Tensor(a -> *) self, SymInt split_size, int dim=0) -> Tensor(a)[]
  self: split_backward(grads, split_size, dim, self.sym_sizes(), self.options())
  result: auto_linear

- name: unsafe_split.Tensor(Tensor self, SymInt split_size, int dim=0) -> Tensor[]
  self: split_backward(grads, split_size, dim, self.sym_sizes(), self.options())
  result: auto_linear

- name: split_with_sizes(Tensor(a -> *) self, SymInt[] split_sizes, int dim=0) -> Tensor(a)[]
  dispatch:
    Default:
      self: split_with_sizes_backward(grads, split_sizes, dim, self.sym_sizes(), self.options())
      result: auto_linear
    AutogradNestedTensor:
      self: _nested_split_with_sizes_backward(grads, split_sizes, dim, at::native::get_nested_tensor_impl(self)->get_nested_sizes(), self.options())

- name: unsafe_split_with_sizes(Tensor self, SymInt[] split_sizes, int dim=0) -> Tensor[]
  self: split_with_sizes_backward(grads, split_sizes, dim, self.sym_sizes(), self.options())
  result: auto_linear

- name: sqrt(Tensor self) -> Tensor
  self: grad / (2 * result.conj())
  result: auto_element_wise

- name: squeeze(Tensor(a) self) -> Tensor(a)
  self: unsqueeze_to(grad, self.sym_sizes())
  result: auto_linear

- name: squeeze.dim(Tensor(a) self, int dim) -> Tensor(a)
  dispatch:
    Default:
      self: unsqueeze_to(grad, dim, self.sym_sizes())
      result: auto_linear
    AutogradNestedTensor:
      self: grad.unsqueeze(dim)

- name: squeeze.dims(Tensor(a) self, int[] dim) -> Tensor(a)
  dispatch:
    Default:
      self: unsqueeze_to(grad, dim, self.sym_sizes())
      result: auto_linear
    AutogradNestedTensor:
      self: unsqueeze_multiple(grad, dim, self.dim())

- name: squeeze_(Tensor(a!) self) -> Tensor(a!)
  self: unsqueeze_to(grad, self.sym_sizes())
  result: auto_linear

- name: squeeze_.dim(Tensor(a!) self, int dim) -> Tensor(a!)
  self: unsqueeze_to(grad, dim, self.sym_sizes())
  result: auto_linear

- name: squeeze_.dims(Tensor(a!) self, int[] dim) -> Tensor(a!)
  self: unsqueeze_to(grad, dim, self.sym_sizes())
  result: auto_linear

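# A hedged derivation for the std entries below (std_mean as well):
# std = sqrt(var), so by the chain rule d(std) = d(var) / (2 * std), which is
# why the forward-mode formulas reuse var_backward and divide by (2. * result);
# masked_fill_ guards the 0/0 case at result == 0.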
1617*da0073e9SAndroid Build Coastguard Worker- name: std.correction(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False) -> Tensor
1618*da0073e9SAndroid Build Coastguard Worker  self: std_backward(result, grad, self, dim, correction, keepdim)
1619*da0073e9SAndroid Build Coastguard Worker  # pointwise (variance) + sum + sqrt
1620*da0073e9SAndroid Build Coastguard Worker  result: (at::real(var_backward(self_t.conj(), self_p, dim, correction, true).sum(dim.value_or(IntArrayRef({})), keepdim)) / (2. * result)).masked_fill_(result == 0, 0)
1621*da0073e9SAndroid Build Coastguard Worker
1622*da0073e9SAndroid Build Coastguard Worker- name: std_mean.correction(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False) -> (Tensor, Tensor)
1623*da0073e9SAndroid Build Coastguard Worker  self: std_mean_backward(grads[0], grads[1], self, result0, dim, correction, keepdim)
1624*da0073e9SAndroid Build Coastguard Worker  result0: (at::real(var_backward(self_t.conj(), self_p, dim, correction, true).sum(dim.value_or(IntArrayRef({})), keepdim)) / (2. * result0)).masked_fill_(result0 == 0, 0)
1625*da0073e9SAndroid Build Coastguard Worker  # linear
1626*da0073e9SAndroid Build Coastguard Worker  result1: mean(self_t, dim.value_or(IntArrayRef({})), keepdim)
1627*da0073e9SAndroid Build Coastguard Worker
1628*da0073e9SAndroid Build Coastguard Worker- name: sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
1629*da0073e9SAndroid Build Coastguard Worker  self: handle_r_to_c(self.scalar_type(), grad)
1630*da0073e9SAndroid Build Coastguard Worker  other: handle_r_to_c(other.scalar_type(), maybe_multiply(-grad, alpha.conj()))
1631*da0073e9SAndroid Build Coastguard Worker  result: self_t - maybe_multiply(other_t, alpha)
1632*da0073e9SAndroid Build Coastguard Worker
1633*da0073e9SAndroid Build Coastguard Worker- name: sub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> Tensor
1634*da0073e9SAndroid Build Coastguard Worker  self: handle_r_to_c(self.scalar_type(), grad)
1635*da0073e9SAndroid Build Coastguard Worker  result: auto_element_wise
1636*da0073e9SAndroid Build Coastguard Worker
1637*da0073e9SAndroid Build Coastguard Worker- name: rsub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
1638*da0073e9SAndroid Build Coastguard Worker  self: handle_r_to_c(self.scalar_type(), maybe_multiply(-grad, alpha.conj()))
1639*da0073e9SAndroid Build Coastguard Worker  other: handle_r_to_c(other.scalar_type(), grad)
1640*da0073e9SAndroid Build Coastguard Worker  result: -maybe_multiply(self_t, alpha) + other_t
1641*da0073e9SAndroid Build Coastguard Worker
1642*da0073e9SAndroid Build Coastguard Worker- name: rsub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> Tensor
1643*da0073e9SAndroid Build Coastguard Worker  self: handle_r_to_c(self.scalar_type(), maybe_multiply(-grad, alpha.conj()))
1644*da0073e9SAndroid Build Coastguard Worker  result: auto_element_wise
1645*da0073e9SAndroid Build Coastguard Worker
1646*da0073e9SAndroid Build Coastguard Worker- name: sum(Tensor self, *, ScalarType? dtype=None) -> Tensor
1647*da0073e9SAndroid Build Coastguard Worker  self: grad.expand_symint(self.sym_sizes())
1648*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1649*da0073e9SAndroid Build Coastguard Worker
1650*da0073e9SAndroid Build Coastguard Worker- name: sum.dim_IntList(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
1651*da0073e9SAndroid Build Coastguard Worker  dispatch:
1652*da0073e9SAndroid Build Coastguard Worker    Default:
1653*da0073e9SAndroid Build Coastguard Worker      self: sum_backward(grad, self.sym_sizes(), dim, keepdim)
1654*da0073e9SAndroid Build Coastguard Worker      result: auto_linear
1655*da0073e9SAndroid Build Coastguard Worker    AutogradNestedTensor:
1656*da0073e9SAndroid Build Coastguard Worker      # TODO: replace this function once semantics for nested tensor expand have been settled on
1657*da0073e9SAndroid Build Coastguard Worker      self: _nested_sum_backward(grad, self, dim, keepdim)
1658*da0073e9SAndroid Build Coastguard Worker
1659*da0073e9SAndroid Build Coastguard Worker- name: nansum(Tensor self, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor
1660*da0073e9SAndroid Build Coastguard Worker  self: nansum_backward(grad.to(self.scalar_type()), self, dim, keepdim)
1661*da0073e9SAndroid Build Coastguard Worker  result: at::where(self_p.isnan(), 0, self_t).sum(dim, keepdim, dtype)
1662*da0073e9SAndroid Build Coastguard Worker
1663*da0073e9SAndroid Build Coastguard Worker# We never call _linalg_svd with compute_uv=False in an autograd context, so we don't even consider it here
1664*da0073e9SAndroid Build Coastguard Worker- name: _linalg_svd(Tensor A, bool full_matrices=False, bool compute_uv=True, *, str? driver=None) -> (Tensor U, Tensor S, Tensor Vh)
1665*da0073e9SAndroid Build Coastguard Worker  A: "svd_backward(full_matrices && grad_U.defined() ? grad_U.narrow_symint(-1, 0, S.sym_size(-1)) : grad_U,
1666*da0073e9SAndroid Build Coastguard Worker                   grad_S,
1667*da0073e9SAndroid Build Coastguard Worker                   full_matrices && grad_Vh.defined() ? grad_Vh.narrow_symint(-2, 0, S.sym_size(-1)) : grad_Vh,
1668*da0073e9SAndroid Build Coastguard Worker                   full_matrices ? U.narrow_symint(-1, 0, S.sym_size(-1)) : U,
1669*da0073e9SAndroid Build Coastguard Worker                   S,
1670*da0073e9SAndroid Build Coastguard Worker                   full_matrices ? Vh.narrow_symint(-2, 0, S.sym_size(-1)) : Vh)"
1671*da0073e9SAndroid Build Coastguard Worker  U, S, Vh: linalg_svd_jvp(A_t, U, S, Vh, full_matrices)
1672*da0073e9SAndroid Build Coastguard Worker
1673*da0073e9SAndroid Build Coastguard Worker- name: _linalg_eigh(Tensor A, str UPLO="L", bool compute_v=True) -> (Tensor eigenvalues, Tensor eigenvectors)
1674*da0073e9SAndroid Build Coastguard Worker  A: linalg_eig_backward(grads[0], grads[1], eigenvalues, eigenvectors, /*is_hermitian=*/true)
1675*da0073e9SAndroid Build Coastguard Worker  eigenvalues, eigenvectors: linalg_eig_jvp(A_t, eigenvalues, eigenvectors, /*is_hermitian=*/true)
1676*da0073e9SAndroid Build Coastguard Worker
1677*da0073e9SAndroid Build Coastguard Worker- name: linalg_eig(Tensor self) -> (Tensor eigenvalues, Tensor eigenvectors)
1678*da0073e9SAndroid Build Coastguard Worker  self: handle_r_to_c(self.scalar_type(), linalg_eig_backward(grads[0], grads[1], eigenvalues, eigenvectors, /*is_hermitian=*/false))
1679*da0073e9SAndroid Build Coastguard Worker  eigenvalues, eigenvectors: linalg_eig_jvp(self_t, eigenvalues, eigenvectors, /*is_hermitian=*/false)
1680*da0073e9SAndroid Build Coastguard Worker
1681*da0073e9SAndroid Build Coastguard Worker- name: t(Tensor(a) self) -> Tensor(a)
1682*da0073e9SAndroid Build Coastguard Worker  self: grad.t()
1683*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1684*da0073e9SAndroid Build Coastguard Worker
1685*da0073e9SAndroid Build Coastguard Worker- name: t_(Tensor(a!) self) -> Tensor(a!)
1686*da0073e9SAndroid Build Coastguard Worker  self: grad.t()
1687*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1688*da0073e9SAndroid Build Coastguard Worker
1689*da0073e9SAndroid Build Coastguard Worker- name: one_hot(Tensor self, int num_classes=-1) -> Tensor
1690*da0073e9SAndroid Build Coastguard Worker  self: non_differentiable
1691*da0073e9SAndroid Build Coastguard Worker
1692*da0073e9SAndroid Build Coastguard Worker- name: flip(Tensor self, int[] dims) -> Tensor
1693*da0073e9SAndroid Build Coastguard Worker  self: grad.flip(dims)
1694*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1695*da0073e9SAndroid Build Coastguard Worker
1696*da0073e9SAndroid Build Coastguard Worker- name: roll(Tensor self, SymInt[1] shifts, int[1] dims=[]) -> Tensor
1697*da0073e9SAndroid Build Coastguard Worker  self: grad.roll_symint(fmap(reverse_list_symint(shifts), [](c10::SymInt i){return -i;}), reverse_list(dims))
1698*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1699*da0073e9SAndroid Build Coastguard Worker
1700*da0073e9SAndroid Build Coastguard Worker- name: rot90(Tensor self, int k=1, int[] dims=[0,1]) -> Tensor
1701*da0073e9SAndroid Build Coastguard Worker  self: grad.rot90(-k, dims)
1702*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1703*da0073e9SAndroid Build Coastguard Worker
1704*da0073e9SAndroid Build Coastguard Worker- name: take(Tensor self, Tensor index) -> Tensor
1705*da0073e9SAndroid Build Coastguard Worker  self: take_backward(grad, self, index)
1706*da0073e9SAndroid Build Coastguard Worker  index: non_differentiable
1707*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1708*da0073e9SAndroid Build Coastguard Worker
1709*da0073e9SAndroid Build Coastguard Worker- name: tan(Tensor self) -> Tensor
1710*da0073e9SAndroid Build Coastguard Worker  self: grad * (1 + result.pow(2)).conj()
1711*da0073e9SAndroid Build Coastguard Worker  result: auto_element_wise
1712*da0073e9SAndroid Build Coastguard Worker
1713*da0073e9SAndroid Build Coastguard Worker- name: tanh(Tensor self) -> Tensor
1714*da0073e9SAndroid Build Coastguard Worker  self: tanh_backward(grad, result)
1715*da0073e9SAndroid Build Coastguard Worker  result: auto_element_wise
1716*da0073e9SAndroid Build Coastguard Worker
1717*da0073e9SAndroid Build Coastguard Worker- name: topk(Tensor self, SymInt k, int dim=-1, bool largest=True, bool sorted=True) -> (Tensor values, Tensor indices)
1718*da0073e9SAndroid Build Coastguard Worker  self: value_selecting_reduction_backward_symint(grad, dim, indices, self.sym_sizes(), true)
1719*da0073e9SAndroid Build Coastguard Worker  output_differentiability: [True, False]
1720*da0073e9SAndroid Build Coastguard Worker  values: gather(self_t, dim, indices)
1721*da0073e9SAndroid Build Coastguard Worker
1722*da0073e9SAndroid Build Coastguard Worker- name: trace(Tensor self) -> Tensor
1723*da0073e9SAndroid Build Coastguard Worker  self: trace_backward_symint(grad, self.sym_sizes())
1724*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1725*da0073e9SAndroid Build Coastguard Worker
1726*da0073e9SAndroid Build Coastguard Worker- name: transpose.int(Tensor(a) self, int dim0, int dim1) -> Tensor(a)
1727*da0073e9SAndroid Build Coastguard Worker  self: grad.transpose(dim0, dim1)
1728*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1729*da0073e9SAndroid Build Coastguard Worker
1730*da0073e9SAndroid Build Coastguard Worker- name: transpose_(Tensor(a!) self, int dim0, int dim1) -> Tensor(a!)
1731*da0073e9SAndroid Build Coastguard Worker  self: grad.transpose(dim0, dim1)
1732*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1733*da0073e9SAndroid Build Coastguard Worker
1734*da0073e9SAndroid Build Coastguard Worker- name: triangular_solve(Tensor self, Tensor A, bool upper=True, bool transpose=False, bool unitriangular=False) -> (Tensor solution, Tensor cloned_coefficient)
1735*da0073e9SAndroid Build Coastguard Worker  self, A: triangular_solve_backward(grad_solution, grad_cloned_coefficient, self, A, solution, upper, transpose, unitriangular, grad_input_mask)
1736*da0073e9SAndroid Build Coastguard Worker  solution: triangular_solve_jvp(solution, A_p, A_t, self_t, upper, transpose, unitriangular)
1737*da0073e9SAndroid Build Coastguard Worker  cloned_coefficient: A_t
1738*da0073e9SAndroid Build Coastguard Worker
1739*da0073e9SAndroid Build Coastguard Worker- name: linalg_solve_triangular(Tensor self, Tensor B, *, bool upper, bool left=True, bool unitriangular=False) -> Tensor
1740*da0073e9SAndroid Build Coastguard Worker  self, B: linalg_solve_triangular_backward(grad, self, result, upper, left, unitriangular, grad_input_mask)
1741*da0073e9SAndroid Build Coastguard Worker  result: linalg_solve_triangular_forward_AD(self_t, B_t, self_p, result, upper, left, unitriangular)
1742*da0073e9SAndroid Build Coastguard Worker
1743*da0073e9SAndroid Build Coastguard Worker- name: tril(Tensor self, int diagonal=0) -> Tensor
1744*da0073e9SAndroid Build Coastguard Worker  self: grad.tril(diagonal)
1745*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1746*da0073e9SAndroid Build Coastguard Worker
1747*da0073e9SAndroid Build Coastguard Worker- name: triu(Tensor self, int diagonal=0) -> Tensor
1748*da0073e9SAndroid Build Coastguard Worker  self: grad.triu(diagonal)
1749*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1750*da0073e9SAndroid Build Coastguard Worker
1751*da0073e9SAndroid Build Coastguard Worker- name: trunc(Tensor self) -> Tensor
1752*da0073e9SAndroid Build Coastguard Worker  self: zeros_like(grad)
1753*da0073e9SAndroid Build Coastguard Worker  result: auto_element_wise
1754*da0073e9SAndroid Build Coastguard Worker
1755*da0073e9SAndroid Build Coastguard Worker# DO NOT define a backward for to_dense
1756*da0073e9SAndroid Build Coastguard Worker# See [Note: Sometimes view derivatives]
1757*da0073e9SAndroid Build Coastguard Worker# - name: to_dense(Tensor self, ScalarType? dtype=None, *, bool? masked_grad=None) -> Tensor
1758*da0073e9SAndroid Build Coastguard Worker#
1759*da0073e9SAndroid Build Coastguard Worker- name: _to_dense(Tensor self, ScalarType? dtype=None, bool? masked_grad=None) -> Tensor
1760*da0073e9SAndroid Build Coastguard Worker  self: to_dense_backward(grad, self, masked_grad)
1761*da0073e9SAndroid Build Coastguard Worker
1762*da0073e9SAndroid Build Coastguard Worker# DO NOT define a backward for to_sparse.sparse_dim
1763*da0073e9SAndroid Build Coastguard Worker# See [Note: Sometimes view derivatives]
1764*da0073e9SAndroid Build Coastguard Worker# - name: to_sparse.sparse_dim(Tensor self, int sparse_dim) -> Tensor
1765*da0073e9SAndroid Build Coastguard Worker#
1766*da0073e9SAndroid Build Coastguard Worker- name: _to_sparse.sparse_dim(Tensor self, int sparse_dim) -> Tensor
1767*da0073e9SAndroid Build Coastguard Worker  self: to_sparse_backward(grad, self.layout(), self.sym_blocksize())
1768*da0073e9SAndroid Build Coastguard Worker
1769*da0073e9SAndroid Build Coastguard Worker# DO NOT define a backward for to_sparse
1770*da0073e9SAndroid Build Coastguard Worker# See [Note: Sometimes view derivatives]
1771*da0073e9SAndroid Build Coastguard Worker# - name: to_sparse(Tensor self, *, Layout? layout=None, int[2]? blocksize=None, int? dense_dim=None) -> Tensor
1772*da0073e9SAndroid Build Coastguard Worker#
1773*da0073e9SAndroid Build Coastguard Worker- name: _to_sparse(Tensor self, *, Layout? layout=None, int[2]? blocksize=None, int? dense_dim=None) -> Tensor
1774*da0073e9SAndroid Build Coastguard Worker  self: to_sparse_backward(grad, self.layout(), self.sym_blocksize())
1775*da0073e9SAndroid Build Coastguard Worker
1776*da0073e9SAndroid Build Coastguard Worker# DO NOT define a backward for to_sparse_csr
1777*da0073e9SAndroid Build Coastguard Worker# See [Note: Sometimes view derivatives]
1778*da0073e9SAndroid Build Coastguard Worker# - name: to_sparse_csr(Tensor self, int? dense_dim=None) -> Tensor
1779*da0073e9SAndroid Build Coastguard Worker#
1780*da0073e9SAndroid Build Coastguard Worker- name: _to_sparse_csr(Tensor self, int? dense_dim=None) -> Tensor
1781*da0073e9SAndroid Build Coastguard Worker  self: to_sparse_backward(grad, self.layout(), self.sym_blocksize())
1782*da0073e9SAndroid Build Coastguard Worker
1783*da0073e9SAndroid Build Coastguard Worker# DO NOT define a backward for to_sparse_csc
1784*da0073e9SAndroid Build Coastguard Worker# See [Note: Sometimes view derivatives]
1785*da0073e9SAndroid Build Coastguard Worker# - name: to_sparse_csc(Tensor self, int? dense_dim=None) -> Tensor
1786*da0073e9SAndroid Build Coastguard Worker#
1787*da0073e9SAndroid Build Coastguard Worker- name: _to_sparse_csc(Tensor self, int? dense_dim=None) -> Tensor
1788*da0073e9SAndroid Build Coastguard Worker  self: to_sparse_backward(grad, self.layout(), self.sym_blocksize())
1789*da0073e9SAndroid Build Coastguard Worker
1790*da0073e9SAndroid Build Coastguard Worker# DO NOT define a backward for to_sparse_bsr
1791*da0073e9SAndroid Build Coastguard Worker# See [Note: Sometimes view derivatives]
1792*da0073e9SAndroid Build Coastguard Worker# - name: to_sparse_bsr(Tensor self, int[2] blocksize, int? dense_dim=None) -> Tensor
1793*da0073e9SAndroid Build Coastguard Worker#
1794*da0073e9SAndroid Build Coastguard Worker- name: _to_sparse_bsr(Tensor self, int[2] blocksize, int? dense_dim=None) -> Tensor
1795*da0073e9SAndroid Build Coastguard Worker  self: to_sparse_backward(grad, self.layout(), self.sym_blocksize())
1796*da0073e9SAndroid Build Coastguard Worker
1797*da0073e9SAndroid Build Coastguard Worker# DO NOT define a backward for to_sparse_bsc
1798*da0073e9SAndroid Build Coastguard Worker# See [Note: Sometimes view derivatives]
1799*da0073e9SAndroid Build Coastguard Worker# - name: to_sparse_bsc(Tensor self, int[2] blocksize, int? dense_dim=None) -> Tensor
1800*da0073e9SAndroid Build Coastguard Worker#
1801*da0073e9SAndroid Build Coastguard Worker- name: _to_sparse_bsc(Tensor self, int[2] blocksize, int? dense_dim=None) -> Tensor
1802*da0073e9SAndroid Build Coastguard Worker  self: to_sparse_backward(grad, self.layout(), self.sym_blocksize())
1803*da0073e9SAndroid Build Coastguard Worker
1804*da0073e9SAndroid Build Coastguard Worker- name: to_mkldnn(Tensor self, ScalarType? dtype=None) -> Tensor
1805*da0073e9SAndroid Build Coastguard Worker  self: to_mkldnn_backward(grad, self)
1806*da0073e9SAndroid Build Coastguard Worker
1807*da0073e9SAndroid Build Coastguard Worker- name: unfold(Tensor(a) self, int dimension, int size, int step) -> Tensor(a)
1808*da0073e9SAndroid Build Coastguard Worker  self: unfold_backward_symint(grad, self.sym_sizes(), dimension, size, step)
1809*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1810*da0073e9SAndroid Build Coastguard Worker
1811*da0073e9SAndroid Build Coastguard Worker- name: unfold_backward(Tensor grad_in, SymInt[] input_sizes, int dim, int size, int step) -> Tensor
1812*da0073e9SAndroid Build Coastguard Worker  grad_in: grad.unfold(dim, size, step)
1813*da0073e9SAndroid Build Coastguard Worker  result: auto_linear
1814*da0073e9SAndroid Build Coastguard Worker
1815*da0073e9SAndroid Build Coastguard Worker- name: uniform_(Tensor(a!) self, float from=0, float to=1, *, Generator? generator=None) -> Tensor(a!)
1816*da0073e9SAndroid Build Coastguard Worker  self: zeros_like(grad)
1817*da0073e9SAndroid Build Coastguard Worker  result: self_t.zero_()

- name: _unique(Tensor self, bool sorted=True, bool return_inverse=False) -> (Tensor, Tensor)
  output_differentiability: [True, False]
  self: not_implemented("_unique")

- name: unique_dim(Tensor self, int dim, bool sorted=True, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor)
  output_differentiability: [True, False, False]
  self: not_implemented("unique_dim")

- name: unique_consecutive(Tensor self, bool return_inverse=False, bool return_counts=False, int? dim=None) -> (Tensor, Tensor, Tensor)
  output_differentiability: [True, False, False]
  self: not_implemented("unique_consecutive")

- name: unique_dim_consecutive(Tensor self, int dim, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor)
  output_differentiability: [True, False, False]
  self: not_implemented("unique_dim_consecutive")

- name: _unique2(Tensor self, bool sorted=True, bool return_inverse=False, bool return_counts=False) -> (Tensor, Tensor, Tensor)
  output_differentiability: [True, False, False]
  self: not_implemented("_unique2")

- name: _unsafe_view(Tensor self, SymInt[] size) -> Tensor
  self: grad.reshape_symint(self.sym_sizes())
  result: auto_linear

- name: lift(Tensor self) -> Tensor
  self: grad
  result: auto_linear

- name: lift_fresh(Tensor(a) self) -> Tensor(a)
  self: grad
  result: auto_linear

- name: unsqueeze(Tensor(a) self, int dim) -> Tensor(a)
  self: grad.squeeze(dim)
  result: auto_linear

- name: unsqueeze_(Tensor(a!) self, int dim) -> Tensor(a!)
  self: grad.squeeze(dim)
  result: auto_linear

- name: var.correction(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False) -> Tensor
  self: var_backward(grad, self, dim, correction, keepdim)
  # pointwise + sum
  result: at::real(var_backward(self_t.conj(), self_p, dim, correction, true).sum(dim.value_or(IntArrayRef({})), keepdim))

- name: var_mean.correction(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False) -> (Tensor, Tensor)
  self: var_mean_backward(grads[0], grads[1], self, dim, correction, keepdim)
  result0: at::real(var_backward(self_t.conj(), self_p, dim, correction, true).sum(dim.value_or(IntArrayRef({})), keepdim))
  # linear
  result1: mean(self_t, dim.value_or(IntArrayRef({})), keepdim)
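
# Worked derivation for the two var entries above: reducing N elements over `dim`
# with denominator (N - correction),
#
#   var(x)        = sum_i |x_i - mean(x)|^2 / (N - correction)
#   d var / d x_i = 2 * (x_i - mean(x)) / (N - correction)
#
# so the JVP is sum_i Re(2 * (x_i - mean(x)) * conj(t_i)) / (N - correction), i.e.
# var_backward applied to the conjugated tangent, summed over `dim`. The mean
# output is linear, hence its JVP is simply mean(self_t, ...).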

- name: view(Tensor(a) self, SymInt[] size) -> Tensor(a)
  dispatch:
    Default:
      self: grad.reshape_symint(self.sym_sizes())
      result: auto_linear
    AutogradNestedTensor:
      self: grad.reshape_as(self)
      result: auto_linear

- name: view.dtype(Tensor(a) self, ScalarType dtype) -> Tensor(a)
  output_differentiability: [False]

- name: view_as_real(Tensor(a) self) -> Tensor(a)
  self: at::view_as_complex(grad.contiguous()) # gx0 + 1j * gx1
  result: at::view_as_real(self_t)

- name: view_as_complex(Tensor(a) self) -> Tensor(a)
  self: at::view_as_real(grad.contiguous().resolve_conj()) # [gx, gy]
  result: at::view_as_complex(self_t)
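
# view_as_real/view_as_complex only reinterpret C as R^2: for z = x + iy, an output
# gradient [gx, gy] packs back into gx + 1j * gy, and vice versa. A hypothetical
# check through the Python API:
#
#   z = torch.tensor([1.0 + 2.0j], requires_grad=True)
#   r = torch.view_as_real(z)              # tensor([[1., 2.]])
#   r.backward(torch.tensor([[3., 4.]]))
#   # the backward formula above packs [3., 4.] into z.grad == tensor([3.+4.j])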

- name: where.self(Tensor condition, Tensor self, Tensor other) -> Tensor
  condition: non_differentiable
  self: where(condition, grad, 0)
  other: where(condition, 0, grad)
  result: where(condition, self_t, other_t)

# _weight_norm_interface_backward does not have an explicitly defined derivative, so if we happen
# to be running backward with create_graph=True, fall back to a backward function that uses
# differentiable ops.
- name: _weight_norm_interface(Tensor v, Tensor g, int dim=0) -> (Tensor, Tensor)
  v, g: "grad.defined() ? (GradMode::is_enabled() ? _weight_norm_differentiable_backward(grad.contiguous(), v, g, result1, dim) : _weight_norm_interface_backward(grad.contiguous(), v, g, result1, dim)) : std::tuple<Tensor, Tensor>()"
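
# The GradMode::is_enabled() test above is the standard pattern for ops whose fused
# backward kernel is itself non-differentiable: when grad mode is on during the
# backward pass (create_graph=True, i.e. double backward), route to a composite
# implementation built from differentiable ops. Schematically:
#
#   auto grads = GradMode::is_enabled()
#       ? differentiable_backward(grad, ...)  // graph-recording, supports higher order
#       : fused_backward(grad, ...);          // fast kernel, no registered derivative
#
# silu and mish below use the same trick.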

- name: zero_(Tensor(a!) self) -> Tensor(a!)
  self: zeros_like(grad)
  result: auto_linear

- name: sparse_mask(Tensor self, Tensor mask) -> Tensor
  self: sparse_mask_backward(grad, mask, self.layout())
  mask: non_differentiable

- name: _sparse_coo_tensor_with_dims_and_tensors(int sparse_dim, int dense_dim, SymInt[] size, Tensor indices, Tensor values, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False, bool? is_coalesced=None) -> Tensor
  indices: non_differentiable
  values: grad.sparse_mask(result)._values()

- name: sparse_compressed_tensor.comp_plain_value_size(Tensor compressed_indices, Tensor plain_indices, Tensor values, SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=False) -> Tensor
  compressed_indices: non_differentiable
  plain_indices: non_differentiable
  # TODO: remove to_dense after gh-107381 is fixed
  values: grad.to_dense().sparse_mask(result).values()

- name: _sparse_sum.dim(Tensor self, int[1] dim) -> Tensor
  self: at::_sparse_sum_backward(grad, self, dim)

- name: _standard_gamma(Tensor self, Generator? generator=None) -> Tensor
  self: grad * _standard_gamma_grad(self, result)

- name: _standard_gamma_grad(Tensor self, Tensor output) -> Tensor
  self: not_implemented("_standard_gamma_grad")

- name: values(Tensor(a) self) -> Tensor(a)
  dispatch:
    Default:
      self: values_backward(grad, self)
    AutogradNestedTensor:
      self: at::_nested_view_from_buffer(grad.contiguous(), self._nested_tensor_size(), self._nested_tensor_strides(), self._nested_tensor_storage_offsets())

# Why is _values() not differentiable?
# See NOTE [ Sparse: autograd and API ]
- name: _values(Tensor(a) self) -> Tensor(a)
  output_differentiability: [False]

# NN
- name: _trilinear(Tensor i1, Tensor i2, Tensor i3, int[] expand1, int[] expand2, int[] expand3, int[] sumdim, int unroll_dim=1) -> Tensor
  i1, i2, i3: "_trilinear_backward(grad,
               wrap_opt_if(i1, grad_input_mask[1] || grad_input_mask[2]),
               wrap_opt_if(i2, grad_input_mask[0] || grad_input_mask[2]),
               wrap_opt_if(i3, grad_input_mask[0] || grad_input_mask[1]),
               expand1, expand2, expand3, sumdim, grad_input_mask)"
  result: "_trilinear(i1_t, i2_p, i3_p, expand1, expand2, expand3, sumdim, unroll_dim) +
           _trilinear(i1_p, i2_t, i3_p, expand1, expand2, expand3, sumdim, unroll_dim) +
           _trilinear(i1_p, i2_p, i3_t, expand1, expand2, expand3, sumdim, unroll_dim)"
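
# Two details in the _trilinear entry: wrap_opt_if(x, cond) saves `x` for backward
# only when `cond` holds; the gradient w.r.t. one input needs the other two, so each
# input is kept exactly when some *other* input's gradient is requested. And since
# _trilinear is linear in each argument separately, the product rule gives the
# forward derivative as a sum of three terms,
#
#   d f(i1, i2, i3) = f(di1, i2, i3) + f(i1, di2, i3) + f(i1, i2, di3)
#
# which is exactly the three-term sum in `result`.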

- name: constant_pad_nd(Tensor self, SymInt[] pad, Scalar value=0) -> Tensor
  self: constant_pad_nd_backward(grad, pad)
  result: constant_pad_nd_symint(self_t, pad, 0)

- name: binary_cross_entropy(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean) -> Tensor
  self: binary_cross_entropy_backward(grad, self, target, weight, reduction)
  target: binary_cross_entropy_target_backward(grad, self, target, weight, reduction)
  result: "apply_loss_reduction(
               binary_cross_entropy_backward(self_t, self_p, target_p, weight, at::Reduction::None)
             + binary_cross_entropy_target_backward(target_t, self_p, target_p, weight, at::Reduction::None),
           reduction)"
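
# This is the recurring JVP pattern for reduced losses in this file: for
# L = reduce(l(x, y)) the forward derivative is
#
#   dL = reduce( dl/dx * x_t + dl/dy * y_t )
#
# where each partial is obtained by reusing the backward helper with the tangent in
# place of the output gradient and Reduction::None, and apply_loss_reduction
# performs the final mean/sum. mse_loss, smooth_l1_loss, huber_loss and
# soft_margin_loss below follow the same scheme.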

- name: binary_cross_entropy_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean) -> Tensor
  self: binary_cross_entropy_double_backward(grad_output, grad, self, target, weight, reduction)
  target: binary_cross_entropy_double_backward_target(grad, grad_output, self, target, weight, reduction)
  grad_output: binary_cross_entropy_double_backward_grad_output(grad, self, target, weight, reduction)
  result: " binary_cross_entropy_double_backward(grad_output_p, self_t, self_p, target_p, weight, reduction)
          + binary_cross_entropy_double_backward_target(target_t, grad_output_p, self_p, target_p, weight, reduction)
          + binary_cross_entropy_double_backward_grad_output(grad_output_t, self_p, target_p, weight, reduction)"

- name: binary_cross_entropy_with_logits(Tensor self, Tensor target, Tensor? weight=None, Tensor? pos_weight=None, int reduction=Mean) -> Tensor
  self: binary_cross_entropy_with_logits_backward(grad, self, target, weight, pos_weight, reduction)
  target: binary_cross_entropy_with_logits_target_backward(grad, self, target, weight, pos_weight, reduction)
  result: "apply_loss_reduction(
               binary_cross_entropy_with_logits_backward(self_t, self_p, target_p, weight, pos_weight, at::Reduction::None)
             + binary_cross_entropy_with_logits_target_backward(target_t, self_p, target_p, weight, pos_weight, at::Reduction::None),
           reduction)"

- name: embedding(Tensor weight, Tensor indices, SymInt padding_idx=-1, bool scale_grad_by_freq=False, bool sparse=False) -> Tensor
  indices: non_differentiable
  weight: embedding_backward_symint(grad, indices, weight.sym_size(0), padding_idx, scale_grad_by_freq, sparse)
  result: auto_linear

- name: embedding_dense_backward(Tensor grad_output, Tensor indices, SymInt num_weights, SymInt padding_idx, bool scale_grad_by_freq) -> Tensor
  grad_output: embedding_dense_double_backward_symint(grad, indices, padding_idx)
  indices: non_differentiable
  result: auto_linear

- name: _embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? per_sample_weights=None, bool include_last_offset=False, int padding_idx=-1) -> (Tensor, Tensor, Tensor, Tensor)
  indices: non_differentiable
  offsets: non_differentiable
  weight: _embedding_bag_backward_symint(grad, indices, offsets, result1, result2, result3, weight.sym_size(0), scale_grad_by_freq, mode, sparse, per_sample_weights, padding_idx)
  per_sample_weights: _embedding_bag_per_sample_weights_backward(grad, weight, indices, offsets, result1, mode, padding_idx)

- name: _embedding_bag_dense_backward(Tensor grad, Tensor indices, Tensor offset2bag, Tensor bag_size, Tensor maximum_indices, SymInt num_weights, bool scale_grad_by_freq, int mode, Tensor? per_sample_weights, int padding_idx=-1) -> Tensor
  indices: non_differentiable
  offset2bag: non_differentiable
  bag_size: non_differentiable
  maximum_indices: non_differentiable

- name: embedding_renorm_(Tensor(a!) self, Tensor indices, float max_norm, float norm_type) -> Tensor(a!)
  indices: non_differentiable
  self: not_implemented("embedding_renorm")

- name: mse_loss(Tensor self, Tensor target, int reduction=Mean) -> Tensor
  self: mse_loss_backward(grad, self, target, reduction)
  target: mse_loss_backward(grad, target, self, reduction)
  result: apply_loss_reduction(mse_loss_backward(self_t.conj(), self_p, target_p, at::Reduction::None).conj() + mse_loss_backward(target_t.conj(), target_p, self_p, at::Reduction::None).conj(), reduction)

- name: multi_margin_loss(Tensor self, Tensor target, Scalar p=1, Scalar margin=1, Tensor? weight=None, int reduction=Mean) -> Tensor
  self: multi_margin_loss_backward(grad, self, target, p, margin, weight, reduction)
  target: non_differentiable

- name: multilabel_margin_loss_forward(Tensor self, Tensor target, int reduction) -> (Tensor output, Tensor is_target)
  self: multilabel_margin_loss_backward(grad, self, target, reduction, is_target)
  target: non_differentiable

- name: nll_loss_forward(Tensor self, Tensor target, Tensor? weight, int reduction, SymInt ignore_index) -> (Tensor output, Tensor total_weight)
  self: nll_loss_backward_symint(grad, self, target, weight, reduction, ignore_index, total_weight)
  target: non_differentiable
  output: std::get<0>(nll_loss_forward_symint(self_t, target, weight, reduction, ignore_index))

- name: nll_loss2d_forward(Tensor self, Tensor target, Tensor? weight, int reduction, SymInt ignore_index) -> (Tensor output, Tensor total_weight)
  self: nll_loss2d_backward_symint(grad, self, target, weight, reduction, ignore_index, total_weight)
  target: non_differentiable
  output: std::get<0>(nll_loss2d_forward_symint(self_t, target, weight, reduction, ignore_index))

- name: smooth_l1_loss(Tensor self, Tensor target, int reduction=Mean, float beta=1.0) -> Tensor
  self: smooth_l1_loss_backward(grad, self, target, reduction, beta)
  target: smooth_l1_loss_backward(grad, target, self, reduction, beta)
  result: apply_loss_reduction(smooth_l1_loss_backward(self_t.conj(), self_p, target_p, at::Reduction::None, beta).conj() + smooth_l1_loss_backward(target_t.conj(), target_p, self_p, at::Reduction::None, beta).conj(), reduction)

- name: huber_loss(Tensor self, Tensor target, int reduction=Mean, float delta=1.0) -> Tensor
  self: huber_loss_backward(grad, self, target, reduction, delta)
  target: huber_loss_backward(grad, target, self, reduction, delta)
  result: apply_loss_reduction(huber_loss_backward(self_t.conj(), self_p, target_p, at::Reduction::None, delta).conj() + huber_loss_backward(target_t.conj(), target_p, self_p, at::Reduction::None, delta).conj(), reduction)

- name: soft_margin_loss(Tensor self, Tensor target, int reduction=Mean) -> Tensor
  self: soft_margin_loss_backward(grad, self, target, reduction)
  result: apply_loss_reduction(soft_margin_loss_backward(self_t.conj(), self_p, target, at::Reduction::None).conj(), reduction)

- name: relu(Tensor self) -> Tensor
  self: threshold_backward(grad, result, 0)
  result: auto_element_wise

- name: silu(Tensor self) -> Tensor
  self: "GradMode::is_enabled() ? infinitely_differentiable_silu_backward(grad, self) : silu_backward(grad, self)"
  result: auto_element_wise

- name: mish(Tensor self) -> Tensor
  self: "GradMode::is_enabled() ? infinitely_differentiable_mish_backward(grad, self) : mish_backward(grad, self)"
  result: auto_element_wise

- name: elu(Tensor self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> Tensor
  self: elu_backward(grad, alpha, scale, input_scale, /* is_result */ false, self)
  result: auto_element_wise

- name: elu_(Tensor(a!) self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> Tensor(a!)
  self: elu_backward(grad, alpha, scale, input_scale, /* is_result */ true, result)
  result: self_t.copy_(elu_backward(original_self_t, alpha, scale, input_scale, /* is_result */ true, result))

- name: celu(Tensor self, Scalar alpha=1.0) -> Tensor
  self: elu_backward(grad, alpha, 1, 1.0/alpha.toFloat(), /* is_result */ false, self)
  result: auto_element_wise

- name: celu_(Tensor(a!) self, Scalar alpha=1.0) -> Tensor(a!)
  self: elu_backward(grad, alpha, 1, 1.0/alpha.toFloat(), /* is_result */ true, result)
  result: self_t.copy_(elu_backward(original_self_t, alpha, 1, 1.0/alpha.toFloat(), /* is_result */ true, result))
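
# celu can reuse elu_backward because of the identity
#
#   celu(x, alpha) = elu(x) with alpha=alpha, scale=1, input_scale=1/alpha
#                  = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
#
# so only input_scale = 1.0/alpha.toFloat() changes. The /* is_result */ flag tells
# elu_backward whether it receives the op's output (required for the in-place
# variants, where the original input has been overwritten) or its input.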

- name: gelu(Tensor self, *, str approximate='none') -> Tensor
  self: gelu_backward(grad, self, approximate)
  result: auto_element_wise

- name: gelu_backward(Tensor grad_output, Tensor self, *, str approximate='none') -> Tensor
  grad_output: gelu_backward(grad, self, approximate)
  self: gelu_double_backward(grad, grad_output, self, approximate)
  result: gelu_backward(grad_output_t, self_p, approximate) + gelu_double_backward(self_t, grad_output_p, self_p, approximate)

- name: glu(Tensor self, int dim=-1) -> Tensor
  # TODO: glu_backward could reuse the forward result; the same applies to
  # forward AD and forward-over-reverse AD.
  self: glu_backward(grad, self, dim)
  result: glu_jvp(result, self_p, self_t, dim)

- name: hardshrink(Tensor self, Scalar lambd=0.5) -> Tensor
  self: hardshrink_backward(grad, self, lambd)
  result: auto_element_wise

- name: hardshrink_backward(Tensor grad_out, Tensor self, Scalar lambd) -> Tensor
  grad_out: hardshrink_backward(grad, self, lambd)
  self: zeros_like(grad)
  result: at::where((self_p > lambd).logical_or(self_p < -lambd), grad_out_t, at::zeros({}, result.options()).expand_as(result))

- name: hardtanh(Tensor self, Scalar min_val=-1, Scalar max_val=1) -> Tensor
  self: hardtanh_backward(grad, self, min_val, max_val)
  result: auto_element_wise

- name: leaky_relu(Tensor self, Scalar negative_slope=0.01) -> Tensor
  self: leaky_relu_backward(grad, self, negative_slope, false)
  result: auto_element_wise

- name: leaky_relu_(Tensor(a!) self, Scalar negative_slope=0.01) -> Tensor(a!)
  self: leaky_relu_backward(grad, result, negative_slope, true)
  result: self_t.copy_(leaky_relu_backward(original_self_t.conj(), result, negative_slope, true).conj())

- name: log_sigmoid_forward(Tensor self) -> (Tensor output, Tensor buffer)
  self: log_sigmoid_backward(grad, self, buffer)
  output: log_sigmoid_backward(self_t.conj(), self_p, buffer).conj()
  output_differentiability: [True, False]

- name: _log_softmax(Tensor self, int dim, bool half_to_float) -> Tensor
  self: _log_softmax_backward_data(grad, result, dim, self.scalar_type())
  result: self_t - logsumexp_jvp(self_p, self_t, {dim}, true)
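
# JVP derivation for _log_softmax: log_softmax(x) = x - logsumexp(x), and
#
#   d logsumexp(x) = sum_j softmax(x)_j * dx_j
#
# so d log_softmax(x) = self_t - logsumexp_jvp(...), with logsumexp_jvp computing
# the softmax-weighted sum of the tangent. The _softmax entry below is the same
# derivation pushed through exp: d softmax = softmax * (self_t - logsumexp_jvp(...)).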

- name: _sparse_log_softmax(Tensor self, int dim, bool half_to_float) -> Tensor
  self: _sparse_log_softmax_backward_data(grad, result, dim, self)

- name: _masked_softmax(Tensor self, Tensor mask, int? dim=None, int? mask_type=None) -> Tensor
  self: _masked_softmax_backward(grad, result, mask, dim)
  mask: non_differentiable

- name: _prelu_kernel(Tensor self, Tensor weight) -> Tensor
  self, weight: "grad.defined() ? _prelu_kernel_backward(grad, self, weight) : std::tuple<Tensor, Tensor>()"
  result: at::where(self_p >= 0, self_t, weight_p * self_t + weight_t * self_p)

- name: _prelu_kernel_backward(Tensor grad_output, Tensor self, Tensor weight) -> (Tensor, Tensor)
  grad_output: "grads[0].defined() ?
                (grads[1].defined() ? at::where(self >= 0, grads[0], grads[0] * weight + grads[1] * self)
                                    : at::where(self >= 0, grads[0], grads[0] * weight))
                                    : at::where(self >= 0, at::zeros({}, grad_output.options()), grads[1] * self)"
  self: "grads[1].defined() ? at::where(self >= 0, at::zeros({}, self.options()), grad_output * grads[1]) : zeros_like(self)"
  weight: "grads[0].defined() ? at::where(self >= 0, at::zeros({}, weight.options()), grad_output * grads[0]) : zeros_like(self)"
  result0: at::where(self_p >= 0, grad_output_t, grad_output_t * weight_p + grad_output_p * weight_t)
  result1: at::where(self_p >= 0, at::zeros({}, self_p.options()), grad_output_p * self_t + grad_output_t * self_p)

- name: rrelu_with_noise(Tensor self, Tensor noise, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None) -> Tensor
  self: rrelu_with_noise_backward(grad, self, noise, lower, upper, training, false)
  result: auto_element_wise

- name: rrelu_with_noise_(Tensor(a!) self, Tensor noise, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None) -> Tensor(a!)
  self: rrelu_with_noise_backward(grad, result, noise, lower, upper, training, true)

- name: _softmax(Tensor self, int dim, bool half_to_float) -> Tensor
  self: _softmax_backward_data(grad, result, dim, self.scalar_type())
  result: result * (self_t - logsumexp_jvp(self_p, self_t, {dim}, true))

- name: _sparse_softmax(Tensor self, int dim, bool half_to_float) -> Tensor
  self: _sparse_softmax_backward_data(grad, result, dim, self)

- name: _sparse_sparse_matmul(Tensor self, Tensor other) -> Tensor
  self: sparse_sparse_matmul_backward(grad, self, other, 0)
  other: sparse_sparse_matmul_backward(grad, self, other, 1)

- name: softplus(Tensor self, Scalar beta=1, Scalar threshold=20) -> Tensor
  self: softplus_backward(grad, self, beta, threshold)
  result: auto_element_wise

- name: softshrink(Tensor self, Scalar lambd=0.5) -> Tensor
  self: softshrink_backward(grad, self, lambd)
  result: auto_element_wise

- name: threshold(Tensor self, Scalar threshold, Scalar value) -> Tensor
  self: threshold_backward(grad, self, threshold)
  result: auto_element_wise

- name: threshold_(Tensor(a!) self, Scalar threshold, Scalar value) -> Tensor(a!)
  self: threshold_backward(grad, self, threshold)
  result: self_t.copy_(threshold_backward(self_t.conj(), original_self_p, threshold).conj())

- name: reflection_pad1d(Tensor self, SymInt[2] padding) -> Tensor
  self: reflection_pad1d_backward_symint(grad, self, padding)
  result: auto_linear

- name: reflection_pad2d(Tensor self, SymInt[4] padding) -> Tensor
  self: reflection_pad2d_backward_symint(grad, self, padding)
  result: auto_linear

- name: reflection_pad3d(Tensor self, SymInt[6] padding) -> Tensor
  self: reflection_pad3d_backward_symint(grad, self, padding)
  result: auto_linear

- name: replication_pad1d(Tensor self, SymInt[2] padding) -> Tensor
  self: replication_pad1d_backward_symint(grad, self, padding)
  result: auto_linear

- name: replication_pad2d(Tensor self, SymInt[4] padding) -> Tensor
  self: replication_pad2d_backward_symint(grad, self, padding)
  result: auto_linear

- name: replication_pad3d(Tensor self, SymInt[6] padding) -> Tensor
  self: replication_pad3d_backward_symint(grad, self, padding)
  result: auto_linear

- name: upsample_linear1d(Tensor self, SymInt[1] output_size, bool align_corners, float? scales=None) -> Tensor
  self: upsample_linear1d_backward_symint(grad, output_size, self.sym_sizes(), align_corners, scales)
  result: auto_linear

- name: upsample_bilinear2d(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor
  self: upsample_bilinear2d_backward_symint(grad, output_size, self.sym_sizes(), align_corners, scales_h, scales_w)
  result: auto_linear

- name: _upsample_bilinear2d_aa(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor
  self: _upsample_bilinear2d_aa_backward_symint(grad, output_size, self.sym_sizes(), align_corners, scales_h, scales_w)
  result: auto_linear

- name: upsample_bicubic2d(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor
  self: upsample_bicubic2d_backward_symint(grad, output_size, self.sym_sizes(), align_corners, scales_h, scales_w)
  result: auto_linear

- name: _upsample_bicubic2d_aa(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor
  self: _upsample_bicubic2d_aa_backward_symint(grad, output_size, self.sym_sizes(), align_corners, scales_h, scales_w)
  result: auto_linear

- name: upsample_trilinear3d(Tensor self, SymInt[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor
  self: upsample_trilinear3d_backward_symint(grad, output_size, self.sym_sizes(), align_corners, scales_d, scales_h, scales_w)
  result: auto_linear

- name: upsample_nearest1d(Tensor self, SymInt[1] output_size, float? scales=None) -> Tensor
  self: upsample_nearest1d_backward_symint(grad, output_size, self.sym_sizes(), scales)
  result: auto_linear

- name: _upsample_nearest_exact1d(Tensor self, SymInt[1] output_size, float? scales=None) -> Tensor
  self: _upsample_nearest_exact1d_backward_symint(grad, output_size, self.sym_sizes(), scales)
  result: auto_linear

- name: upsample_nearest2d(Tensor self, SymInt[2] output_size, float? scales_h=None, float? scales_w=None) -> Tensor
  self: upsample_nearest2d_backward_symint(grad, output_size, self.sym_sizes(), scales_h, scales_w)
  result: auto_linear

- name: _upsample_nearest_exact2d(Tensor self, SymInt[2] output_size, float? scales_h=None, float? scales_w=None) -> Tensor
  self: _upsample_nearest_exact2d_backward_symint(grad, output_size, self.sym_sizes(), scales_h, scales_w)
  result: auto_linear

- name: upsample_nearest3d(Tensor self, SymInt[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor
  self: upsample_nearest3d_backward_symint(grad, output_size, self.sym_sizes(), scales_d, scales_h, scales_w)
  result: auto_linear

- name: _upsample_nearest_exact3d(Tensor self, SymInt[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor
  self: _upsample_nearest_exact3d_backward_symint(grad, output_size, self.sym_sizes(), scales_d, scales_h, scales_w)
  result: auto_linear

- name: pixel_shuffle(Tensor self, int upscale_factor) -> Tensor
  self: pixel_unshuffle(grad, upscale_factor)
  result: auto_linear

- name: pixel_unshuffle(Tensor self, int downscale_factor) -> Tensor
  self: pixel_shuffle(grad, downscale_factor)
  result: auto_linear

- name: _adaptive_avg_pool2d(Tensor self, SymInt[2] output_size) -> Tensor
  self: _adaptive_avg_pool2d_backward(grad, self)
  result: auto_linear

- name: _adaptive_avg_pool3d(Tensor self, SymInt[3] output_size) -> Tensor
  self: _adaptive_avg_pool3d_backward(grad, self)
  result: auto_linear

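# The max-pooling entries below share one forward-AD trick: result1 holds the flat
# argmax indices over the pooled spatial dims, so the JVP simply gathers the
# tangent of each window's winner, e.g. in the 2-d case
#
#   gather(self_t.flatten(-2), -1, result1.flatten(-2)).view_as(result1)
#
# (flatten the last two spatial dims, pick the tangent at the argmax, reshape back
# to the pooled output shape).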
- name: adaptive_max_pool2d(Tensor self, int[2] output_size) -> (Tensor, Tensor)
  self: adaptive_max_pool2d_backward(grad, self, result1)
  result0: gather(self_t.flatten(-2), -1, result1.flatten(-2)).view_as(result1)
  output_differentiability: [True, False]

- name: adaptive_max_pool3d(Tensor self, int[3] output_size) -> (Tensor, Tensor)
  self: adaptive_max_pool3d_backward(grad, self, result1)
  result0: gather(self_t.flatten(-3), -1, result1.flatten(-3)).view_as(result1)
  output_differentiability: [True, False]

- name: avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> Tensor
  self: avg_pool2d_backward(grad, self, kernel_size, stride, padding, ceil_mode, count_include_pad, divisor_override)
  result: auto_linear

- name: avg_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> Tensor
  self: avg_pool3d_backward(grad, self, kernel_size, stride, padding, ceil_mode, count_include_pad, divisor_override)
  result: auto_linear

- name: fractional_max_pool2d(Tensor self, int[2] kernel_size, int[2] output_size, Tensor random_samples) -> (Tensor, Tensor)
  self: fractional_max_pool2d_backward(grad, self, kernel_size, output_size, result1)
  result0: gather(self_t.flatten(-2), -1, result1.flatten(-2)).view_as(result1)
  output_differentiability: [True, False]

- name: fractional_max_pool3d(Tensor self, int[3] kernel_size, int[3] output_size, Tensor random_samples) -> (Tensor, Tensor)
  self: fractional_max_pool3d_backward(grad, self, kernel_size, output_size, result1)
  result0: gather(self_t.flatten(-3), -1, result1.flatten(-3)).view_as(result1)
  output_differentiability: [True, False]

- name: linear(Tensor input, Tensor weight, Tensor? bias=None) -> Tensor
  input, weight, bias: "grad.defined() ? linear_backward(input, grad, weight, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: linear_backward(Tensor self, Tensor grad_output, Tensor weight, bool[3] output_mask) -> (Tensor, Tensor, Tensor)
  self, grad_output, weight: linear_double_backward(grads, self, grad_output, weight)

# MPS
- name: max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, int[2] dilation=1, bool ceil_mode=False) -> Tensor
  self: max_pool2d_backward(grad, self, kernel_size, stride, padding, dilation, ceil_mode)

- name: _mps_convolution(Tensor self, Tensor weight, Tensor? bias, SymInt[] padding, SymInt[] stride, SymInt[] dilation, SymInt groups) -> Tensor
  self, weight, bias: "grad.defined() ? mps_convolution_backward_symint(self, grad, weight, padding, stride, dilation, groups, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: mps_convolution_backward(Tensor self, Tensor grad_output, Tensor weight, SymInt[] padding, SymInt[] stride, SymInt[] dilation, SymInt groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor)
  grad_output, self, weight: _convolution_double_backward_symint(grads[0], grads[1], grads[2], grad_output, weight, self, stride, padding, dilation, false, std::vector<c10::SymInt>(padding.size(), 0), groups, grad_input_mask)

- name: max_pool2d_with_indices(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, int[2] dilation=1, bool ceil_mode=False) -> (Tensor, Tensor)
  self: max_pool2d_with_indices_backward(grad, self, kernel_size, stride, padding, dilation, ceil_mode, result1)
  result0: gather(self_t.flatten(-2), -1, result1.flatten(-2)).view_as(result1)
  output_differentiability: [True, False]

- name: max_pool3d_with_indices(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=0, int[3] dilation=1, bool ceil_mode=False) -> (Tensor, Tensor)
  self: max_pool3d_with_indices_backward(grad, self, kernel_size, stride, padding, dilation, ceil_mode, result1)
  result0: gather(self_t.flatten(-3), -1, result1.flatten(-3)).view_as(result1)
  output_differentiability: [True, False]

- name: max_unpool2d(Tensor self, Tensor indices, SymInt[2] output_size) -> Tensor
  self: max_pool_double_backward(grad, indices, 2)
  indices: non_differentiable
  result: auto_linear

- name: max_unpool3d(Tensor self, Tensor indices, SymInt[3] output_size, int[3] stride, int[3] padding) -> Tensor
  self: max_pool_double_backward(grad, indices, 3)
  indices: non_differentiable
  result: auto_linear

- name: convolution(Tensor input, Tensor weight, Tensor? bias, SymInt[] stride, SymInt[] padding, SymInt[] dilation, bool transposed, SymInt[] output_padding, SymInt groups) -> Tensor
  input, weight, bias: "grad.defined() ? convolution_backward_symint(grad, input, weight, bias->sym_sizes(), stride, padding, dilation, transposed, output_padding, groups, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"
  result: convolution_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, stride, padding, dilation, transposed, output_padding, groups)

# TorchScript serializes calls to _convolution, so this entry is present until that is changed to use convolution.
# Note that the benchmark, deterministic, cudnn_enabled, and allow_tf32 flags are queried from the global context
# by convolution_backward instead of being passed along from the forward pass.
- name: _convolution(Tensor input, Tensor weight, Tensor? bias, SymInt[] stride, SymInt[] padding, SymInt[] dilation, bool transposed, SymInt[] output_padding, SymInt groups, bool benchmark, bool deterministic, bool cudnn_enabled, bool allow_tf32) -> Tensor
  input, weight, bias: "grad.defined() ? convolution_backward_symint(grad, input, weight, bias->sym_sizes(), stride, padding, dilation, transposed, output_padding, groups, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"
  result: _convolution_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, stride, padding, dilation, transposed, output_padding, groups, benchmark, deterministic, cudnn_enabled, allow_tf32)

- name: convolution_backward(Tensor grad_output, Tensor input, Tensor weight, SymInt[]? bias_sizes, SymInt[] stride, SymInt[] padding, SymInt[] dilation, bool transposed, SymInt[] output_padding, SymInt groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor)
  grad_output, input, weight: _convolution_double_backward_symint(grads[0], grads[1], grads[2], grad_output, weight, input, stride, padding, dilation, transposed, output_padding, groups, grad_input_mask)
  result0: std::get<0>(convolution_backward_symint(grad_output_p, input_p, weight_t, bias_sizes, stride, padding, dilation, transposed, output_padding, groups, {true, false, false})) + std::get<0>(convolution_backward_symint(grad_output_t, input_p, weight_p, bias_sizes, stride, padding, dilation, transposed, output_padding, groups, {true, false, false}))
  result1: std::get<1>(convolution_backward_symint(grad_output_p, input_t, weight_p, bias_sizes, stride, padding, dilation, transposed, output_padding, groups, {false, true, false})) + std::get<1>(convolution_backward_symint(grad_output_t, input_p, weight_p, bias_sizes, stride, padding, dilation, transposed, output_padding, groups, {false, true, false}))
  result2: convolution_backward_jvp_grad_bias(grad_output_t, result2)
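
# convolution_backward's own forward derivative follows from bilinearity:
# convolution is linear in the input and in the weight, so each backward output is
# differentiated by the product rule, e.g. for grad_input (result0)
#
#   d conv_backward_input(go, w) = conv_backward_input(go_p, w_t)
#                                + conv_backward_input(go_t, w_p)
#
# which is why result0/result1 above are each a sum of two convolution_backward
# calls. The bias gradient is a plain reduction of grad_output, handled by
# convolution_backward_jvp_grad_bias.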

- name: convolution_overrideable(Tensor input, Tensor weight, Tensor? bias, SymInt[] stride, SymInt[] padding, SymInt[] dilation, bool transposed, SymInt[] output_padding, SymInt groups) -> Tensor
  input, weight, bias: "grad.defined() ? convolution_backward_overrideable_symint(grad, input, weight, stride, padding, dilation, transposed, output_padding, groups, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: convolution_backward_overrideable(Tensor grad_output, Tensor input, Tensor weight, SymInt[] stride, SymInt[] padding, SymInt[] dilation, bool transposed, SymInt[] output_padding, SymInt groups, bool[3] output_mask) -> (Tensor grad_input, Tensor grad_weight, Tensor grad_bias)
  grad_output, input, weight: _convolution_double_backward_symint(grads[0], grads[1], grads[2], grad_output, weight, input, stride, padding, dilation, transposed, output_padding, groups, grad_input_mask)

- name: slow_conv_transpose2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] output_padding=0, SymInt[2] dilation=1) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, dilation, true, output_padding, 1, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: slow_conv_transpose3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] output_padding=0, SymInt[3] dilation=1) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, dilation, true, output_padding, 1, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: _slow_conv2d_forward(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias, SymInt[2] stride, SymInt[2] padding) -> Tensor
  self, weight, bias: "grad.defined() ? _slow_conv2d_backward_symint(grad, self, weight, kernel_size, stride, padding, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: _slow_conv2d_backward.output_mask(Tensor grad_output, Tensor self, Tensor weight, SymInt[2] kernel_size, SymInt[2] stride, SymInt[2] padding, bool[3] output_mask) -> (Tensor grad_input, Tensor grad_weight, Tensor grad_bias)
  grad_output, self, weight: _convolution_double_backward_symint(grads[0], grads[1], grads[2], grad_output, weight, self, stride, padding, {{1, 1}}, false, {{0, 0}}, 1, grad_input_mask)

- name: _conv_depthwise2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias, SymInt[2] stride, SymInt[2] padding, SymInt[2] dilation) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad.contiguous(), self, weight, bias->sym_sizes(), stride, padding, dilation, /*transposed=*/ false, /*output_padding=*/ {{0, 0}}, /*groups=*/ 1, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: conv_depthwise3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias, SymInt[3] stride, SymInt[3] padding, SymInt[3] dilation) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad.contiguous(), self, weight, bias->sym_sizes(), stride, padding, dilation, /*transposed=*/ false, /*output_padding=*/ {{0, 0, 0}}, /*groups=*/ 1, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: slow_conv3d_forward(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias, SymInt[3] stride, SymInt[3] padding) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, /*dilation=*/ {{1, 1, 1}}, false, /*output_padding=*/ {{0, 0, 0}}, 1, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: slow_conv_dilated2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] dilation=1) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, dilation, false, std::vector<c10::SymInt>(padding.size(), 0), 1, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: slow_conv_dilated3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] dilation=1) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, dilation, false, std::vector<c10::SymInt>(padding.size(), 0), 1, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: col2im(Tensor self, SymInt[2] output_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> Tensor
  self: im2col(grad, kernel_size, dilation, padding, stride)
  result: auto_linear

- name: im2col(Tensor self, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> Tensor
  self: col2im_symint(grad, {self.sym_size(-2), self.sym_size(-1)}, kernel_size, dilation, padding, stride)
  result: auto_linear
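# Note (informal): im2col and col2im are linear maps that are, up to output
# sizing, adjoint to each other, which is why each one's backward is a call to
# the other and why both can use `auto_linear` in forward mode (the tangent is
# just the op applied to self_t).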

- name: _adaptive_avg_pool2d_backward(Tensor grad_output, Tensor self) -> Tensor
  grad_output: _adaptive_avg_pool2d_symint(grad, {grad_output.sym_size(-2), grad_output.sym_size(-1)})
  self: zeros_like(self)
  result: _adaptive_avg_pool2d_backward(grad_output_t, self_p)

- name: _adaptive_avg_pool3d_backward(Tensor grad_output, Tensor self) -> Tensor
  grad_output: _adaptive_avg_pool3d_symint(grad, { grad_output.sym_size(-3), grad_output.sym_size(-2), grad_output.sym_size(-1) })
  self: zeros_like(self)
  result: _adaptive_avg_pool3d_backward(grad_output_t, self_p)

- name: adaptive_max_pool2d_backward(Tensor grad_output, Tensor self, Tensor indices) -> Tensor
  grad_output: max_pool_double_backward(grad, indices, 2)
  self: zeros_like(self)
  result: auto_linear
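# Note (informal sketch): the max pooling backward scatters grad_output into
# the argmax positions recorded in `indices`; differentiating that scatter
# w.r.t. grad_output is the corresponding gather, which is what
# max_pool_double_backward(grad, indices, dims) computes over the last `dims`
# spatial dimensions.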

- name: adaptive_max_pool3d_backward(Tensor grad_output, Tensor self, Tensor indices) -> Tensor
  grad_output: max_pool_double_backward(grad, indices, 3)
  self: zeros_like(self)
  result: auto_linear

- name: avg_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, bool ceil_mode, bool count_include_pad, int? divisor_override) -> Tensor
  grad_output: avg_pool2d(grad, kernel_size, stride, padding, ceil_mode, count_include_pad, divisor_override)
  self: zeros_like(self)
  result: avg_pool2d_backward(grad_output_t, self_p, kernel_size, stride, padding, ceil_mode, count_include_pad, divisor_override)

- name: avg_pool3d_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, bool ceil_mode, bool count_include_pad, int? divisor_override) -> Tensor
  grad_output: avg_pool3d(grad, kernel_size, stride, padding, ceil_mode, count_include_pad, divisor_override)
  self: zeros_like(self)
  result: avg_pool3d_backward(grad_output_t, self_p, kernel_size, stride, padding, ceil_mode, count_include_pad, divisor_override)

- name: elu_backward(Tensor grad_output, Scalar alpha, Scalar scale, Scalar input_scale, bool is_result, Tensor self_or_result) -> Tensor
  grad_output: elu_backward(grad, alpha, scale, input_scale, is_result, self_or_result)
  self_or_result: elu_double_backward(grad, grad_output, alpha, scale, input_scale, is_result, self_or_result)
  result: elu_backward(grad_output_t, alpha, scale, input_scale, is_result, self_or_result_p) + elu_double_backward(self_or_result_t, grad_output_p, alpha, scale, input_scale, is_result, self_or_result_p)
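# Note (informal): the jvp (`result`) entries for these double backwards
# follow a common product-rule shape: one term pushes grad_output_t through
# the backward itself (which is linear in grad_output), and one term pushes
# the tangent of the other differentiable input through the corresponding
# double backward.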

- name: fractional_max_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] output_size, Tensor indices) -> Tensor
  grad_output: max_pool_double_backward(grad, indices, 2)
  self: zeros_like(self)
  result: auto_linear

- name: fractional_max_pool3d_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] output_size, Tensor indices) -> Tensor
  grad_output: max_pool_double_backward(grad, indices, 3)
  self: zeros_like(self)
  result: auto_linear

- name: glu_backward(Tensor grad_output, Tensor self, int dim) -> Tensor
  grad_output: glu_double_backward_grad_output(grad, self, dim)
  self: glu_double_backward(grad, grad_output, self, dim)
  result: glu_backward_jvp(result, grad_output_p, self_p, grad_output_t, self_t, dim)

- name: hardtanh_backward(Tensor grad_output, Tensor self, Scalar min_val, Scalar max_val) -> Tensor
  grad_output: hardtanh_backward(grad, self, min_val, max_val)
  self: zeros_like(grad)
  result: at::where((self_p > min_val).logical_and(self_p < max_val), grad_output_t, at::zeros({}, result.options()).expand_as(result))
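# Note (informal): hardtanh_backward is grad_output masked to the open
# interval (min_val, max_val), hence linear in grad_output; the jvp applies
# the same mask to grad_output_t, with an expanded zero scalar in the
# masked-out positions to keep dtype and device consistent with `result`.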

- name: log_sigmoid_backward(Tensor grad_output, Tensor self, Tensor buffer) -> Tensor
  grad_output: log_sigmoid_backward(grad, self, buffer)
  self: log_sigmoid_double_backward(grad * grad_output, self)
  result: log_sigmoid_backward(grad_output_t, self_p, buffer) + log_sigmoid_double_backward(self_t * grad_output_p, self_p)

- name: _log_softmax_backward_data(Tensor grad_output, Tensor output, int dim, ScalarType input_dtype) -> Tensor
  grad_output: grad.to(output.dtype()) - (grad.to(output.dtype()) * output.exp()).sum(dim, true)
  output: (-grad_output.sum(dim, true) * output.exp() * grad.to(output.dtype())).to(output.dtype())
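# Sketch of the entries above: the log_softmax backward computes
#   gx = g - exp(output) * g.sum(dim, /*keepdim=*/true).
# It is linear in g, so its vjp w.r.t. grad_output is
#   grad - (grad * exp(output)).sum(dim, /*keepdim=*/true),
# and differentiating gx w.r.t. output gives the elementwise factor
#   -exp(output) * g.sum(dim, /*keepdim=*/true), contracted with grad
# (the .to() casts handle the mixed-dtype case).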

- name: leaky_relu_backward(Tensor grad_output, Tensor self, Scalar negative_slope, bool self_is_result) -> Tensor
  # self_is_result is always false here, since the double backward call is an out-of-place call and self is the input itself
  grad_output: leaky_relu_backward(grad, self, negative_slope, false)
  self: zeros_like(grad)
  # leaky_relu_backward(grad_output, self, negative_slope, false)
  # computes grad_output * at::where(self_p > 0, 1, negative_slope)
  # so the jvp formula is the following:
  # grad_output_t * at::where(self_p > 0, self_p.new_ones([]), negative_slope);
  #
  # leaky_relu_backward(grad_output, result, negative_slope, true)
  # computes grad_output * at::where(result > 0, 1, negative_slope)
  # under the assumption that `negative_slope` is positive (otherwise,
  # it is not possible to compute the gradient).
  #
  # so the jvp formula is the following:
  # grad_output_t * at::where(result_p > 0, result_p.new_ones([]), negative_slope);
  # with the assumption that negative_slope is positive.
  #
  # Combined, these give the following optimized kernel, which also checks
  # the assumption that negative_slope is positive when self_is_result
  # is true:
  result: leaky_relu_backward(grad_output_t, self_p, negative_slope, self_is_result)

# This derivative is MPS-only; `error_for_max_pool2d_double_backward` just raises an error.
- name: max_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, int[2] dilation=1, bool ceil_mode=False) -> Tensor
  grad_output: error_for_max_pool2d_double_backward()
  self: zeros_like(self)
  result: auto_linear

- name: max_pool2d_with_indices_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, int[2] dilation, bool ceil_mode, Tensor indices) -> Tensor
  grad_output: max_pool_double_backward(grad, indices, 2)
  self: zeros_like(self)
  indices: non_differentiable
  result: auto_linear

- name: max_pool3d_with_indices_backward(Tensor grad_output, Tensor self, int[3] kernel_size, int[3] stride, int[3] padding, int[3] dilation, bool ceil_mode, Tensor indices) -> Tensor
  grad_output: max_pool_double_backward(grad, indices, 3)
  self: zeros_like(self)
  indices: non_differentiable
  result: auto_linear

- name: mse_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction) -> Tensor
  grad_output: mse_loss_backward(grad, self, target, reduction)
  self: mse_loss_double_backward(grad * grad_output, self, reduction)
  target: -mse_loss_double_backward(grad * grad_output, target, reduction)
  result: "  mse_loss_double_backward(self_t * grad_output_p, self_p, reduction)
           - mse_loss_double_backward(target_t * grad_output_p, target_p, reduction)
           + mse_loss_backward(grad_output_t, self_p, target_p, reduction)
          "
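# Sketch (mean reduction, up to the 1/N factor): the mse backward computes
#   g * 2 * (self - target),
# so by the product rule its jvp is
#   2 * g * (self_t - target_t) + 2 * g_t * (self - target),
# which is exactly the three terms above: the self_t and target_t pieces go
# through mse_loss_double_backward, and the grad_output_t piece reuses the
# backward itself.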

- name: nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, SymInt ignore_index, Tensor total_weight) -> Tensor
  grad_output: nll_loss_symint(grad, target, weight, reduction, ignore_index)
  self: zeros_like(grad)
  target: non_differentiable

- name: nll_loss2d_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, SymInt ignore_index, Tensor total_weight) -> Tensor
  grad_output: nll_loss2d_symint(grad, target, weight, reduction, ignore_index)
  self: zeros_like(grad)
  target: non_differentiable

- name: rrelu_with_noise_backward(Tensor grad_output, Tensor self, Tensor noise, Scalar lower, Scalar upper, bool training, bool self_is_result) -> Tensor
  # self_is_result is always false here, since the double backward call is an out-of-place call and self is the input itself
  grad_output: rrelu_with_noise_backward(grad, self, noise, lower, upper, training, false)
  self: zeros_like(grad)
  result: rrelu_with_noise_backward(grad_output_t, self_p, noise, lower, upper, training, false)

- name: reflection_pad1d_backward(Tensor grad_output, Tensor self, SymInt[2] padding) -> Tensor
  grad_output: reflection_pad1d_symint(grad, padding)
  self: zeros_like(self)
  result: reflection_pad1d_backward_symint(grad_output_t, self_p, padding)

- name: reflection_pad2d_backward(Tensor grad_output, Tensor self, SymInt[4] padding) -> Tensor
  grad_output: reflection_pad2d_symint(grad, padding)
  self: zeros_like(self)
  result: reflection_pad2d_backward_symint(grad_output_t, self_p, padding)

- name: reflection_pad3d_backward(Tensor grad_output, Tensor self, SymInt[6] padding) -> Tensor
  grad_output: reflection_pad3d_symint(grad, padding)
  self: zeros_like(self)
  result: reflection_pad3d_backward_symint(grad_output_t, self_p, padding)

- name: replication_pad1d_backward(Tensor grad_output, Tensor self, SymInt[2] padding) -> Tensor
  grad_output: replication_pad1d_symint(grad, padding)
  self: zeros_like(self)
  result: replication_pad1d_backward_symint(grad_output_t, self_p, padding)

- name: replication_pad2d_backward(Tensor grad_output, Tensor self, SymInt[4] padding) -> Tensor
  grad_output: replication_pad2d_symint(grad, padding)
  self: zeros_like(self)
  result: replication_pad2d_backward_symint(grad_output_t, self_p, padding)

- name: replication_pad3d_backward(Tensor grad_output, Tensor self, SymInt[6] padding) -> Tensor
  grad_output: replication_pad3d_symint(grad, padding)
  self: zeros_like(self)
  result: replication_pad3d_backward_symint(grad_output_t, self_p, padding)

- name: sparse_sampled_addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> Tensor
  self, mat1, mat2: "sparse_sampled_addmm_backward(grad,
                                                   self,
                                                   wrap_opt_if(mat1, grad_input_mask[2]),
                                                   wrap_opt_if(mat2, grad_input_mask[1]),
                                                   alpha, beta, grad_input_mask)"
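# Note (informal): wrap_opt_if(t, cond) forwards `t` only when `cond` holds
# (an empty optional otherwise). Here mat1 is only needed to compute mat2's
# gradient and vice versa, so each operand is saved only when the *other*
# operand's gradient is requested in grad_input_mask.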

- name: _sparse_mm_reduce_impl(Tensor self, Tensor other, str reduce) -> (Tensor, Tensor)
  output_differentiability: [True, False]
  self, other: "grad.defined() ? _sparse_mm_reduce_impl_backward(self, grad, other, reduce, result1, grad_input_mask) : std::tuple<Tensor, Tensor>()"

- name: smooth_l1_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction, float beta) -> Tensor
  grad_output: smooth_l1_loss_backward(grad, self, target, reduction, beta)
  self: smooth_l1_loss_double_backward(grad * grad_output, self, target, reduction, beta)
  target: -smooth_l1_loss_double_backward(grad * grad_output, self, target, reduction, beta)
  result: "  smooth_l1_loss_double_backward(self_t * grad_output_p, self_p, target_p, reduction, beta)
           - smooth_l1_loss_double_backward(target_t * grad_output_p, self_p, target_p, reduction, beta)
           + smooth_l1_loss_backward(grad_output_t, self_p, target_p, reduction, beta)
          "

- name: huber_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction, float delta) -> Tensor
  grad_output: huber_loss_double_backward_grad_output(grad, grad_output, self, target, reduction, delta)
  self: huber_loss_double_backward(grad * grad_output, self, target, reduction, delta)
  target: -huber_loss_double_backward(grad * grad_output, self, target, reduction, delta)

- name: softplus_backward(Tensor grad_output, Tensor self, Scalar beta, Scalar threshold) -> Tensor
  grad_output: softplus_backward(grad, self, beta, threshold)
  self: softplus_double_backward(grad * grad_output, self, beta, threshold)
  result: "softplus_backward(grad_output_t, self_p, beta, threshold)
         + softplus_double_backward(self_t * grad_output_p, self_p, beta, threshold)"

- name: _softmax_backward_data(Tensor grad_output, Tensor output, int dim, ScalarType input_dtype) -> Tensor
  grad_output: _softmax_backward_data(grad.to(output.dtype()), output, dim, input_dtype)
  output: softmax_double_backward(grad.to(output.dtype()), grad_output, dim, output).to(output.dtype())

- name: soft_margin_loss_backward(Tensor grad_output, Tensor self, Tensor target, int reduction) -> Tensor
  grad_output: soft_margin_loss_double_backward_grad_output(grad, grad_output, self, target, reduction)
  self: soft_margin_loss_double_backward(grad * grad_output, self, target, reduction)

- name: softshrink_backward(Tensor grad_output, Tensor self, Scalar lambd) -> Tensor
  grad_output: softshrink_backward(grad, self, lambd)
  self: zeros_like(grad)
  result: at::where((self_p > lambd).logical_or(self_p < -lambd), grad_output_t, at::zeros({}, result.options()).expand_as(result))

- name: threshold_backward(Tensor grad_output, Tensor self, Scalar threshold) -> Tensor
  grad_output: threshold_backward(grad, self, threshold)
  self: zeros_like(grad)
  result: zeros_like(self_t) + threshold_backward(grad_output_t, self_p, threshold)

- name: upsample_linear1d_backward(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, bool align_corners, float? scales=None) -> Tensor
  grad_output: upsample_linear1d_symint(grad, output_size, align_corners, scales)
  result: auto_linear

- name: upsample_bilinear2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: upsample_bilinear2d_symint(grad, output_size, align_corners, scales_h, scales_w)
  result: auto_linear

- name: _upsample_bilinear2d_aa_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: _upsample_bilinear2d_aa_symint(grad, output_size, align_corners, scales_h, scales_w)
  result: auto_linear

- name: upsample_bicubic2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: upsample_bicubic2d_symint(grad, output_size, align_corners, scales_h, scales_w)
  result: auto_linear

- name: _upsample_bicubic2d_aa_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: _upsample_bicubic2d_aa_symint(grad, output_size, align_corners, scales_h, scales_w)
  result: auto_linear

- name: upsample_trilinear3d_backward(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: upsample_trilinear3d_symint(grad, output_size, align_corners, scales_d, scales_h, scales_w)
  result: auto_linear

- name: upsample_nearest1d_backward(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, float? scales=None) -> Tensor
  grad_output: upsample_nearest1d_symint(grad, output_size, scales)
  result: auto_linear

- name: _upsample_nearest_exact1d_backward(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, float? scales=None) -> Tensor
  grad_output: _upsample_nearest_exact1d_symint(grad, output_size, scales)
  result: auto_linear

- name: upsample_nearest2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: upsample_nearest2d_symint(grad, output_size, scales_h, scales_w)
  result: auto_linear

- name: _upsample_nearest_exact2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: _upsample_nearest_exact2d_symint(grad, output_size, scales_h, scales_w)
  result: auto_linear

- name: upsample_nearest3d_backward(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: upsample_nearest3d_symint(grad, output_size, scales_d, scales_h, scales_w)
  result: auto_linear

- name: _upsample_nearest_exact3d_backward(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor
  grad_output: _upsample_nearest_exact3d_symint(grad, output_size, scales_d, scales_h, scales_w)
  result: auto_linear
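# Note: all the upsample*_backward entries above are linear in grad_output,
# so the double backward is the matching forward upsample and forward-mode AD
# can use `auto_linear` (apply the op itself to the tangent).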

- name: sigmoid_backward(Tensor grad_output, Tensor output) -> Tensor
  grad_output: sigmoid_backward(grad, output.conj())
  output: grad.conj() * grad_output * (-2 * output.conj() + 1)
  result: sigmoid_backward(grad_output_t, output_p) + output_t.conj() * grad_output_p * (-2 * output_p.conj() + 1)
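# Sketch: with s = sigmoid(x), sigmoid_backward(g, s) = g * s * (1 - s);
# differentiating w.r.t. s gives g * (1 - 2 * s), which is the `output`
# entry above (with conj() so the formulas also hold for complex tensors).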

- name: tanh_backward(Tensor grad_output, Tensor output) -> Tensor
  grad_output: tanh_backward(grad, output.conj())
  output: grad.conj() * (-2 * output.conj() * grad_output)
  result: tanh_backward(grad_output_t, output_p) + output_t.conj() * (-2 * output_p.conj() * grad_output_p)
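# Sketch: with t = tanh(x), tanh_backward(g, t) = g * (1 - t * t), so the
# derivative w.r.t. t is -2 * t * g, matching the `output` entry above
# (again conjugated for complex support).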

# cudnn
- name: _cudnn_ctc_loss(Tensor log_probs, Tensor targets, int[] input_lengths, int[] target_lengths, int blank, bool deterministic, bool zero_infinity) -> (Tensor, Tensor)
  log_probs: _cudnn_ctc_loss_backward(grad, result0, result1, zero_infinity)

- name: _cudnn_ctc_loss.Tensor(Tensor log_probs, Tensor targets, Tensor input_lengths, Tensor target_lengths, int blank, bool deterministic, bool zero_infinity) -> (Tensor, Tensor)
  log_probs: _cudnn_ctc_loss_backward(grad, result0, result1, zero_infinity)

- name: cudnn_convolution_transpose(Tensor self, Tensor weight, SymInt[] padding, SymInt[] output_padding, SymInt[] stride, SymInt[] dilation, SymInt groups, bool benchmark, bool deterministic, bool allow_tf32) -> Tensor
  self, weight: "_cudnn_convolution_backward(self, grad, weight, padding, output_padding, stride, dilation, true, groups, {grad_input_mask[0], grad_input_mask[1]})"

- name: _mps_convolution_transpose(Tensor self, Tensor weight, SymInt[] padding, SymInt[] output_padding, SymInt[] stride, SymInt[] dilation, SymInt groups) -> Tensor
  self, weight: "grad.defined() ? mps_convolution_transpose_backward_symint(self, grad, weight, padding, output_padding, stride, dilation, groups, grad_input_mask) : std::tuple<Tensor, Tensor>()"

- name: cudnn_convolution(Tensor self, Tensor weight, SymInt[] padding, SymInt[] stride, SymInt[] dilation, SymInt groups, bool benchmark, bool deterministic, bool allow_tf32) -> Tensor
  self, weight: "_cudnn_convolution_backward(self, grad, weight, padding, std::vector<c10::SymInt>(padding.size(), 0), stride, dilation, false, groups, {grad_input_mask[0], grad_input_mask[1]})"

- name: cudnn_grid_sampler(Tensor self, Tensor grid) -> Tensor output
  self, grid: "grad.defined() ? cudnn_grid_sampler_backward(self, grid, grad) : std::tuple<Tensor, Tensor>()"

- name: cudnn_affine_grid_generator(Tensor theta, int N, int C, int H, int W) -> Tensor grid
  theta: cudnn_affine_grid_generator_backward(grad, N, C, H, W)

# NB: Why is the backward here so complicated?  CuDNN cannot be used to compute
# the backward in evaluation mode, because the math for the backward in
# evaluation mode is different (since the forward math is different), and CuDNN
# does not support it.  In any case, you shouldn't be using this batch norm in
# evaluation mode, because it should be merged into the preceding convolution
# (left for future work).
# NB2: The quotes around the gradient are needed to appease YAML parsing rules.
- name: cudnn_batch_norm(Tensor input, Tensor weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float exponential_average_factor, float epsilon) -> (Tensor, Tensor, Tensor, Tensor)
  input, weight, bias: "grad.defined() ? (training ? cudnn_batch_norm_backward(input, grad.contiguous(input.suggest_memory_format()), weight, running_mean, running_var, result1, result2, epsilon, retain_variables ? result3.clone() : result3) : native_batch_norm_backward(grad, input, weight, running_mean, running_var, result1, result2, training, epsilon, grad_input_mask)) : std::tuple<Tensor, Tensor, Tensor>()"
  result0: batch_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, running_mean, running_var, result1, result2, training, epsilon)

# HACK: save_mean and save_var are going to be passed in as
# requires_grad variables (even though we'll never backprop through
# them) so we need to prevent the unpacking from triggering an error.
- name: cudnn_batch_norm_backward(Tensor input, Tensor grad_output, Tensor weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_var, float epsilon, Tensor reserveSpace) -> (Tensor, Tensor, Tensor)
  save_mean: not_implemented("cudnn_batch_norm_backward save_mean")
  save_var: not_implemented("cudnn_batch_norm_backward save_var")
  reserveSpace: not_implemented("cudnn_batch_norm_backward reserveSpace")
  input, weight, grad_output: batchnorm_double_backward(input, weight, grads[0], grads[1], grads[2], grad_output, running_mean, running_var, true, epsilon, save_mean, save_var, grad_input_mask)

# nnpack

- name: _nnpack_spatial_convolution(Tensor input, Tensor weight, Tensor? bias, SymInt[2] padding, SymInt[2] stride=1) -> Tensor
  # NNPACK does not support strided convolutions in the backward path, which is why we use the closest available function that does support them here.
  input, weight, bias: "grad.defined() ? convolution_backward_symint(grad, input, weight, bias->sym_sizes(), stride, padding, std::vector<c10::SymInt>(padding.size(), 1), false, std::vector<c10::SymInt>(padding.size(), 0), 1, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

# LSTM MPS
- name: _lstm_mps(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor)
  output_differentiability: [True, True, True, False, False, False]
  input, hx, params: "lstm_mps_backward(grads[0], grads[1], grads[2], result3, result4, input, result5, hx, params, has_biases, num_layers, dropout, train, bidirectional, batch_first)"

- name: lstm_mps_backward(Tensor? grad_y, Tensor? grad_hy, Tensor? grad_cy, Tensor z_state, Tensor cell_state_fwd, Tensor input, Tensor layersOutputs, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor[], Tensor[])

# Only the first three of _cudnn_rnn's outputs can have gradients.
# _cudnn_rnn outputs: (output, hy, cy, reserve, weight_buf)
- name: _cudnn_rnn(Tensor input, Tensor[] weight, int weight_stride0, Tensor? weight_buf, Tensor hx, Tensor? cx, int mode, SymInt hidden_size, SymInt proj_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, SymInt[] batch_sizes, Tensor? dropout_state) -> (Tensor, Tensor, Tensor, Tensor, Tensor)
  dropout_state: non_differentiable
  output_differentiability: [True, True, True, False, False]
  input, hx, cx, weight: "_cudnn_rnn_backward_symint(input, weight, weight_stride0, result4, hx, cx, result0, grads[0], grads[1], grads[2], mode, hidden_size, proj_size, num_layers, batch_first, dropout, train, bidirectional, batch_sizes, dropout_state, retain_variables ? result3.clone() : result3, grad_input_mask)"

- name: _cudnn_rnn_backward(Tensor input, Tensor[] weight, int weight_stride0, Tensor weight_buf, Tensor hx, Tensor? cx, Tensor output, Tensor? grad_output, Tensor? grad_hy, Tensor? grad_cy, int mode, SymInt hidden_size, SymInt proj_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, SymInt[] batch_sizes, Tensor? dropout_state, Tensor reserve, bool[4] output_mask) -> (Tensor, Tensor, Tensor, Tensor[])
  dropout_state: non_differentiable
  input: not_implemented("_cudnn_rnn_backward", kCudnnDoubleBackwardMsg)
  weight: not_implemented_list("_cudnn_rnn_backward", kCudnnDoubleBackwardMsg)
  hx: not_implemented("_cudnn_rnn_backward", kCudnnDoubleBackwardMsg)
  cx: not_implemented("_cudnn_rnn_backward", kCudnnDoubleBackwardMsg)
  output: not_implemented("_cudnn_rnn_backward", kCudnnDoubleBackwardMsg)
  grad_output: not_implemented("_cudnn_rnn_backward", kCudnnDoubleBackwardMsg)
  grad_hy: not_implemented("_cudnn_rnn_backward", kCudnnDoubleBackwardMsg)
  grad_cy: not_implemented("_cudnn_rnn_backward", kCudnnDoubleBackwardMsg)

# miopen

- name: miopen_convolution_transpose(Tensor self, Tensor weight, Tensor? bias, SymInt[] padding, SymInt[] output_padding, SymInt[] stride, SymInt[] dilation, SymInt groups, bool benchmark, bool deterministic) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, dilation, true, output_padding, groups, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: miopen_convolution(Tensor self, Tensor weight, Tensor? bias, SymInt[] padding, SymInt[] stride, SymInt[] dilation, SymInt groups, bool benchmark, bool deterministic) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, dilation, false, std::vector<c10::SymInt>(padding.size(), 0), groups, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: miopen_depthwise_convolution(Tensor self, Tensor weight, Tensor? bias, SymInt[] padding, SymInt[] stride, SymInt[] dilation, SymInt groups, bool benchmark, bool deterministic) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, dilation, false, std::vector<c10::SymInt>(padding.size(), 0), groups, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: miopen_batch_norm(Tensor input, Tensor weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float exponential_average_factor, float epsilon) -> (Tensor, Tensor, Tensor)
  input, weight, bias: "grad.defined() ? (training ? miopen_batch_norm_backward(input, grad.contiguous(), weight, running_mean, running_var, result1, result2, epsilon) : native_batch_norm_backward(grad, input, weight, running_mean, running_var, result1, result2, training, epsilon, grad_input_mask)) : std::tuple<Tensor, Tensor, Tensor>()"
  result0: batch_norm_jvp(input_p, input_t, weight_p, weight_t, bias_p, bias_t, running_mean, running_var, result1, result2, training, epsilon)

- name: miopen_batch_norm_backward(Tensor input, Tensor grad_output, Tensor weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_var, float epsilon) -> (Tensor, Tensor, Tensor)
  save_mean: not_implemented("miopen_batch_norm_backward save_mean")
  save_var: not_implemented("miopen_batch_norm_backward save_var")
  input, weight, grad_output: batchnorm_double_backward(input, weight, grads[0], grads[1], grads[2], grad_output, running_mean, running_var, true, epsilon, save_mean, save_var, grad_input_mask)

- name: miopen_rnn(Tensor input, Tensor[] weight, int weight_stride0, Tensor hx, Tensor? cx, int mode, int hidden_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? dropout_state) -> (Tensor, Tensor, Tensor, Tensor, Tensor)
  dropout_state: non_differentiable
  output_differentiability: [True, True, True, False, False]
  input, hx, cx, weight: "miopen_rnn_backward(input, weight, weight_stride0, result4, hx, cx, result0, grads[0], grads[1], grads[2], mode, hidden_size, num_layers, batch_first, dropout, train, bidirectional, batch_sizes, dropout_state, retain_variables ? result3.clone() : result3, grad_input_mask)"

- name: miopen_rnn_backward(Tensor input, Tensor[] weight, int weight_stride0, Tensor weight_buf, Tensor hx, Tensor? cx, Tensor output, Tensor? grad_output, Tensor? grad_hy, Tensor? grad_cy, int mode, int hidden_size, int num_layers, bool batch_first, float dropout, bool train, bool bidirectional, int[] batch_sizes, Tensor? dropout_state, Tensor reserve, bool[4] output_mask) -> (Tensor, Tensor, Tensor, Tensor[])
  dropout_state: non_differentiable

- name: mkldnn_rnn_layer(Tensor input, Tensor weight0, Tensor weight1, Tensor weight2, Tensor weight3, Tensor hx_, Tensor cx_, bool reverse, int[] batch_sizes, int mode, int hidden_size, int num_layers, bool has_biases, bool bidirectional, bool batch_first, bool train) -> (Tensor, Tensor, Tensor, Tensor)
  output_differentiability: [True, True, True, False]
  input, weight0, weight1, weight2, weight3, hx_, cx_: "GradMode::is_enabled() ? mkldnn_rnn_layer_differentiable_backward(input, weight0, weight1, weight2, weight3, hx_, cx_, result0, result1, result2, grads[0], grads[1], grads[2], reverse, mode, hidden_size, num_layers, has_biases, train, bidirectional, batch_sizes, batch_first, result3) : mkldnn_rnn_layer_backward(input, weight0, weight1, weight2, weight3, hx_, cx_, result0, result1, result2, grads[0], grads[1], grads[2], reverse, mode, hidden_size, num_layers, has_biases, train, bidirectional, batch_sizes, batch_first, result3)"

- name: mkldnn_rnn_layer_backward(Tensor input, Tensor weight1, Tensor weight2, Tensor weight3, Tensor weight4, Tensor hx_, Tensor cx_tmp, Tensor output, Tensor hy_, Tensor cy_, Tensor? grad_output, Tensor? grad_hy, Tensor? grad_cy, bool reverse, int mode, int hidden_size, int num_layers, bool has_biases, bool train, bool bidirectional, int[] batch_sizes, bool batch_first, Tensor workspace) -> (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor)

# mkldnn
- name: mkldnn_convolution(Tensor self, Tensor weight, Tensor? bias, SymInt[] padding, SymInt[] stride, SymInt[] dilation, SymInt groups) -> Tensor
  self, weight, bias: "grad.defined() ? convolution_backward_symint(grad, self, weight, bias->sym_sizes(), stride, padding, dilation, /*transposed=*/ false, /*output_padding=*/ std::vector<c10::SymInt>(padding.size(), 0), groups, grad_input_mask) : std::tuple<Tensor, Tensor, Tensor>()"

- name: mkldnn_linear(Tensor self, Tensor weight, Tensor? bias=None) -> Tensor
  self, weight, bias: mkldnn_linear_backward(self, grad, weight, grad_input_mask)

- name: mkldnn_max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, int[2] dilation=1, bool ceil_mode=False) -> Tensor
  self: mkldnn_max_pool2d_backward(grad, result, self, kernel_size, stride, padding, dilation, ceil_mode)

- name: mkldnn_max_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=0, int[3] dilation=1, bool ceil_mode=False) -> Tensor
  self: mkldnn_max_pool3d_backward(grad, result, self, kernel_size, stride, padding, dilation, ceil_mode)

- name: mkldnn_adaptive_avg_pool2d(Tensor self, int[2] output_size) -> Tensor
  self: mkldnn_adaptive_avg_pool2d_backward(grad, self)

- name: _mkldnn_reshape(Tensor self, int[] shape) -> Tensor
  self: grad.reshape_symint(self.sym_sizes())

# NestedTensor
- name: _nested_tensor_from_tensor_list(Tensor[] list, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  list: "grad.defined() ? at::unbind(grad) : std::vector<Tensor>(list.size())"

- name: _nested_tensor_from_mask(Tensor t, Tensor mask, bool mask_check=True) -> Tensor
  t: grad.to_padded_tensor_symint(0, t.sym_sizes())
  mask: non_differentiable

- name: _nested_from_padded(Tensor padded, Tensor cpu_nested_shape_example, bool fuse_transform_0213=False) -> Tensor
  padded: _nested_from_padded_backward(grad, padded, fuse_transform_0213)
  cpu_nested_shape_example: non_differentiable

- name: to_padded_tensor(Tensor self, float padding, SymInt[]? output_size=None) -> Tensor
  self: at::_nested_from_padded(grad, self._nested_tensor_size())
  padding: non_differentiable

- name: _nested_view_from_buffer(Tensor(a) self, Tensor nested_size, Tensor nested_strides, Tensor offsets) -> Tensor(a)
  self: grad.values()
  nested_size: non_differentiable
  nested_strides: non_differentiable

- name: _nested_view_from_jagged(Tensor(a) self, Tensor offsets, Tensor dummy, Tensor? lengths=None, int ragged_idx=1) -> Tensor(a)
  self: grad.values()
  offsets: non_differentiable
  lengths: non_differentiable
  dummy: non_differentiable

- name: _nested_get_values(Tensor(a) self) -> Tensor(a)
  self: _nested_view_from_jagged(grad, at::_nested_get_offsets(self), at::_nested_get_jagged_dummy(self), at::_nested_get_lengths(self), at::_nested_get_ragged_idx(self))
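# Note (informal): _nested_view_from_jagged and _nested_get_values are inverse
# views between a jagged-layout nested tensor and its underlying values
# buffer, so each one's backward rebuilds the other representation from
# `grad`, re-attaching the offsets/lengths/ragged_idx metadata queried from
# `self`.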
2805*da0073e9SAndroid Build Coastguard Worker
2806*da0073e9SAndroid Build Coastguard Worker# Transformers
2807*da0073e9SAndroid Build Coastguard Worker- name: _scaled_dot_product_efficient_attention(Tensor query, Tensor key, Tensor value, Tensor? attn_bias, bool compute_log_sumexp, float dropout_p=0.0, bool is_causal=False, *, float? scale=None) -> (Tensor output, Tensor log_sumexp, Tensor philox_seed, Tensor philox_offset)
2808*da0073e9SAndroid Build Coastguard Worker  output_differentiability: [True, False, False, False]
2809*da0073e9SAndroid Build Coastguard Worker  query, key, value, attn_bias: _scaled_dot_product_efficient_attention_backward(grad, query, key, value, attn_bias, output, log_sumexp, philox_seed, philox_offset, dropout_p, grad_input_mask, is_causal, scale)
2810*da0073e9SAndroid Build Coastguard Worker
- name: _scaled_dot_product_flash_attention(Tensor query, Tensor key, Tensor value, float dropout_p=0.0, bool is_causal=False, bool return_debug_mask=False, *, float? scale=None) -> (Tensor output, Tensor logsumexp, Tensor cum_seq_q, Tensor cum_seq_k, SymInt max_q, SymInt max_k, Tensor philox_seed, Tensor philox_offset, Tensor debug_attn_mask)
  output_differentiability: [True, False, False, False, False, False, False, False, False]
  query, key, value: _scaled_dot_product_flash_attention_backward_symint(grad, query, key, value, output, logsumexp, cum_seq_q, cum_seq_k, max_q, max_k, dropout_p, is_causal, philox_seed, philox_offset, scale)

- name: _scaled_dot_product_flash_attention_for_cpu(Tensor query, Tensor key, Tensor value, float dropout_p=0.0, bool is_causal=False, *, Tensor? attn_mask=None, float? scale=None) -> (Tensor output, Tensor logsumexp)
  output_differentiability: [True, False]
  query, key, value: _scaled_dot_product_flash_attention_for_cpu_backward(grad, query, key, value, output, logsumexp, dropout_p, is_causal, attn_mask, scale)

- name: _flash_attention_forward(Tensor query, Tensor key, Tensor value, Tensor? cum_seq_q, Tensor? cum_seq_k, SymInt max_q, SymInt max_k, float dropout_p, bool is_causal, bool return_debug_mask, *, float? scale=None, SymInt? window_size_left=None, SymInt? window_size_right=None, Tensor? seqused_k=None, Tensor? alibi_slopes=None) -> (Tensor output, Tensor softmax_logsumexp, Tensor philox_seed, Tensor philox_offset, Tensor debug_attn_mask)
  output_differentiability: [True, False, False, False, False]
  query, key, value: _flash_attention_backward_symint(grad, query, key, value, output, softmax_logsumexp, cum_seq_q, cum_seq_k, max_q, max_k, dropout_p, is_causal, philox_seed, philox_offset, scale, window_size_left, window_size_right)

- name: _efficient_attention_forward(Tensor query, Tensor key, Tensor value, Tensor? bias, Tensor? cu_seqlens_q, Tensor? cu_seqlens_k, SymInt? max_seqlen_q, SymInt? max_seqlen_k, float dropout_p, int custom_mask_type, bool compute_log_sumexp=False, *, float? scale=None, Tensor? seqlen_k=None, int? window_size=None) -> (Tensor output, Tensor logsumexp, Tensor philox_seed, Tensor philox_offset, SymInt max_seqlen_batch_q, SymInt max_seqlen_batch_k)
  output_differentiability: [True, False, False, False, False, False]
  query, key, value, bias: _efficient_attention_backward_symint(grad, query, key, value, bias, output, cu_seqlens_q, cu_seqlens_k, max_seqlen_batch_q, max_seqlen_batch_k, logsumexp, dropout_p, philox_seed, philox_offset, custom_mask_type, bias.requires_grad(), scale)

- name: _scaled_dot_product_cudnn_attention(Tensor query, Tensor key, Tensor value, float dropout_p=0.0, bool is_causal=False, bool return_debug_mask=False, *, float? scale=None) -> (Tensor output, Tensor logsumexp, Tensor cum_seq_q, Tensor cum_seq_k, SymInt max_q, SymInt max_k, Tensor philox_seed, Tensor philox_offset, Tensor debug_attn_mask)
  output_differentiability: [True, False, False, False, False, False, False, False, False]
  query, key, value: _scaled_dot_product_cudnn_attention_backward_symint(grad, query, key, value, output, logsumexp, cum_seq_q, cum_seq_k, max_q, max_k, dropout_p, is_causal, philox_seed, philox_offset, scale)

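# Several of the backwards above also receive flags like `grad_input_mask`
# or `bias.requires_grad()` so the kernel can skip input gradients nobody
# asked for. A sketch of the observable effect (hypothetical shapes, assumes
# a CUDA build):
#
#   import torch
#   q = torch.randn(2, 4, 8, 16, device="cuda", requires_grad=True)
#   k = torch.randn_like(q)               # requires_grad=False
#   v = torch.randn_like(q, requires_grad=True)
#   out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
#   out.sum().backward()                  # k.grad stays None; its slot in
#                                         # the grad input mask is False
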
# fft
- name: _fft_r2c(Tensor self, int[] dim, int normalization, bool onesided) -> Tensor
  self: fft_r2c_backward(grad, dim, normalization, onesided, self.sym_size(dim.back()))
  result: auto_linear

- name: _fft_c2r(Tensor self, int[] dim, int normalization, SymInt last_dim_size) -> Tensor
  self: fft_c2r_backward(grad, dim, normalization)
  result: auto_linear

- name: _fft_c2c(Tensor self, SymInt[] dim, int normalization, bool forward) -> Tensor
  self: _fft_c2c_symint(grad, dim, normalization, !forward)
  result: auto_linear

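# The C2C FFT is linear, so its backward is just the transform run in the
# opposite direction (`!forward`) with the same normalization, and
# `auto_linear` makes the JVP the op applied to the tangent. A minimal
# numerical check of the backward (the default norm applies no scaling on the
# forward transform, so the reversed direction equals n * ifft):
#
#   import torch
#   x = torch.randn(8, dtype=torch.complex128, requires_grad=True)
#   y = torch.fft.fft(x)
#   g = torch.randn(8, dtype=torch.complex128)
#   y.backward(g)
#   assert torch.allclose(x.grad, 8 * torch.fft.ifft(g))
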
- name: unbind.int(Tensor(a -> *) self, int dim=0) -> Tensor(a)[]
  dispatch:
    Default:
      self: unbind_backward(grads, dim)
      result: auto_linear
    AutogradNestedTensor:
      self: unbind_backward_nested(grads, at::native::get_nested_tensor_impl(self)->get_nested_sizes(), dim, self.options())
      result: auto_linear

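# `unbind_backward` stacks the incoming slice gradients back along `dim`,
# substituting zeros for slices whose gradient is undefined. Sketch:
#
#   import torch
#   x = torch.randn(3, 4, requires_grad=True)
#   a, b, c = x.unbind(0)
#   (a.sum() + c.sum()).backward()   # b's gradient is undefined
#   # x.grad rows 0 and 2 are ones; row 1 is zeros
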
- name: stack(Tensor[] tensors, int dim=0) -> Tensor
  tensors: stack_tensors_backward(grad, dim, to_args_scalartypes(tensors))
  result: stack_jvp(tensors, dim)

# fused RNN kernels

# Only the first two of _thnn_fused_lstm_cell's outputs can have gradients.
# _thnn_fused_lstm_cell outputs: (hy, cy, workspace)
- name: _thnn_fused_lstm_cell(Tensor input_gates, Tensor hidden_gates, Tensor cx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor, Tensor)
  output_differentiability: [True, True, False]
  input_gates, hidden_gates, cx, input_bias, hidden_bias: "GradMode::is_enabled() ? _thnn_differentiable_lstm_cell_backward(grads[0], grads[1], input_gates, hidden_gates, input_bias, hidden_bias, cx, result1) : _thnn_fused_lstm_cell_backward(grads[0], grads[1], cx, result1, result2, input_bias.defined())"

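# The `GradMode::is_enabled()` ternary above selects the slower but fully
# differentiable backward whenever grad mode is on during the backward pass
# itself, i.e. when someone is building a graph through the backward for
# double backward. Sketch of the situation that exercises that branch
# (illustrative only, via the public LSTMCell wrapper):
#
#   import torch
#   cell = torch.nn.LSTMCell(4, 4)
#   x, (h, c) = torch.randn(1, 4), (torch.randn(1, 4), torch.randn(1, 4))
#   hy, cy = cell(x, (h, c))
#   (g,) = torch.autograd.grad(hy.sum(), cell.weight_ih, create_graph=True)
#   g.sum().backward()        # second order: needs the differentiable path
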
- name: _thnn_fused_gru_cell(Tensor input_gates, Tensor hidden_gates, Tensor hx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor)
  input_gates, hidden_gates, hx, input_bias, hidden_bias: "grad.defined() ? (GradMode::is_enabled() ? _thnn_differentiable_gru_cell_backward(grad, input_gates, hidden_gates, hx, input_bias, hidden_bias) : _thnn_fused_gru_cell_backward(grad, result1, input_bias.defined())) : std::tuple<Tensor, Tensor, Tensor, Tensor, Tensor>()"

# PackedSequence helpers
- name: _pack_padded_sequence(Tensor input, Tensor lengths, bool batch_first) -> (Tensor, Tensor)
  input: _pack_padded_sequence_backward_symint(grad, input.sym_sizes(), result1, batch_first)

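# `_pack_padded_sequence_backward` scatters the packed gradient back into the
# padded layout; positions beyond each sequence's length get zero gradient.
# Sketch via the public wrapper:
#
#   import torch
#   x = torch.randn(5, 3, 2, requires_grad=True)   # (T, B, *) padded batch
#   lengths = torch.tensor([5, 3, 1])
#   packed = torch.nn.utils.rnn.pack_padded_sequence(x, lengths)
#   packed.data.sum().backward()
#   # x.grad is one at real timesteps and zero at padding positions
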
# TH wrappers
- name: eq.Scalar(Tensor self, Scalar other) -> Tensor
  output_differentiability: [False]

- name: eq.Tensor(Tensor self, Tensor other) -> Tensor
  output_differentiability: [False]

- name: ge.Scalar(Tensor self, Scalar other) -> Tensor
  output_differentiability: [False]

- name: ge.Tensor(Tensor self, Tensor other) -> Tensor
  output_differentiability: [False]

- name: gt.Scalar(Tensor self, Scalar other) -> Tensor
  output_differentiability: [False]

- name: gt.Tensor(Tensor self, Tensor other) -> Tensor
  output_differentiability: [False]

- name: le.Scalar(Tensor self, Scalar other) -> Tensor
  output_differentiability: [False]

- name: le.Tensor(Tensor self, Tensor other) -> Tensor
  output_differentiability: [False]

- name: lt.Scalar(Tensor self, Scalar other) -> Tensor
  output_differentiability: [False]

- name: lt.Tensor(Tensor self, Tensor other) -> Tensor
  output_differentiability: [False]

- name: ne.Scalar(Tensor self, Scalar other) -> Tensor
  output_differentiability: [False]

- name: ne.Tensor(Tensor self, Tensor other) -> Tensor
  output_differentiability: [False]

- name: multinomial(Tensor self, int num_samples, bool replacement=False, *, Generator? generator=None) -> Tensor
  output_differentiability: [False]

- name: nonzero(Tensor self) -> Tensor
  output_differentiability: [False]

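# Entries like the above with `output_differentiability: [False]` and no
# gradient formulas mark ops whose outputs should never require grad:
# comparisons produce boolean tensors, and multinomial/nonzero produce
# integer samples/indices. Sketch:
#
#   import torch
#   x = torch.randn(4, requires_grad=True)
#   assert not (x > 0).requires_grad           # bool output, detached
#   assert not torch.nonzero(x).requires_grad  # index output, detached
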
- name: segment_reduce(Tensor data, str reduce, *, Tensor? lengths=None, Tensor? indices=None, Tensor? offsets=None, int axis=0, bool unsafe=False, Scalar? initial=None) -> Tensor
  data: _segment_reduce_backward(grad, result, data, reduce, lengths, offsets, axis, initial)

- name: _pin_memory(Tensor self, Device? device=None) -> Tensor
  self: grad

- name: _new_zeros_with_same_feature_meta(Tensor self, Tensor other, *, int self_num_batch_dims=0) -> Tensor
  self: non_differentiable
  other: non_differentiable
  output_differentiability: [False]

- name: _test_warn_in_autograd(Tensor self) -> Tensor
  self: warn_backwards(grad)

- name: _test_autograd_multiple_dispatch.fullcoverage(Tensor self) -> Tensor
  dispatch:
    Default:
      self: grad.expand_symint(self.sym_sizes()) + 1
      result: auto_linear
    AutogradNestedTensor:
      self: grad.mul(grad)
    AutogradCUDA:
      self: grad.expand_symint(self.sym_sizes()) * 2

- name: _test_autograd_multiple_dispatch.ntonly(Tensor self, bool b) -> Tensor
  dispatch:
    AutogradNestedTensor:
      self: grad.mul(grad).add(grad)

- name: _test_autograd_multiple_dispatch_view(Tensor(a) self) -> Tensor(a)
  dispatch:
    Default:
      self: grad.reshape_as(self)
    AutogradCUDA:
      self: grad.reshape_as(self) + 1

- name: _efficientzerotensor(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  output_differentiability: [False]

- name: scatter_reduce.two(Tensor self, int dim, Tensor index, Tensor src, str reduce, *, bool include_self=True) -> Tensor
  self, src: scatter_reduce_backward(grad, self, dim, index, src, reduce, include_self, result)
  index: non_differentiable
  result: scatter_reduce_jvp(self_p, self_t, dim, index, src_p, src_t, reduce, include_self, result)

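# `scatter_reduce_backward` routes `grad` according to `reduce`: e.g. for
# "sum", the source gradient is simply `grad` gathered at `index`. Sketch of
# the "sum" case:
#
#   import torch
#   src = torch.randn(5, requires_grad=True)
#   index = torch.tensor([0, 1, 2, 0, 1])
#   base = torch.zeros(3, requires_grad=True)
#   out = base.scatter_reduce(0, index, src, reduce="sum")
#   out.sum().backward()
#   # src.grad and base.grad are all ones (include_self=True keeps base)
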
- name: special_airy_ai(Tensor x) -> Tensor
  x: non_differentiable

- name: special_bessel_j0(Tensor self) -> Tensor
  self: non_differentiable

- name: special_bessel_j1(Tensor self) -> Tensor
  self: non_differentiable

- name: special_bessel_y0(Tensor self) -> Tensor
  self: non_differentiable

- name: special_bessel_y1(Tensor self) -> Tensor
  self: non_differentiable

- name: special_chebyshev_polynomial_t(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_chebyshev_polynomial_t.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_chebyshev_polynomial_t.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_chebyshev_polynomial_u(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_chebyshev_polynomial_u.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_chebyshev_polynomial_u.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_chebyshev_polynomial_v(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_chebyshev_polynomial_v.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_chebyshev_polynomial_v.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_chebyshev_polynomial_w(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_chebyshev_polynomial_w.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_chebyshev_polynomial_w.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_hermite_polynomial_h(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_hermite_polynomial_h.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_hermite_polynomial_h.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_hermite_polynomial_he(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_hermite_polynomial_he.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_hermite_polynomial_he.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_laguerre_polynomial_l(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_laguerre_polynomial_l.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_laguerre_polynomial_l.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_legendre_polynomial_p(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_legendre_polynomial_p.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_legendre_polynomial_p.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_modified_bessel_i0(Tensor self) -> Tensor
  self: non_differentiable

- name: special_modified_bessel_i1(Tensor self) -> Tensor
  self: non_differentiable

- name: special_modified_bessel_k0(Tensor self) -> Tensor
  self: non_differentiable

- name: special_modified_bessel_k1(Tensor self) -> Tensor
  self: non_differentiable

- name: special_scaled_modified_bessel_k0(Tensor x) -> Tensor
  x: non_differentiable

- name: special_scaled_modified_bessel_k1(Tensor x) -> Tensor
  x: non_differentiable

- name: special_shifted_chebyshev_polynomial_t(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_shifted_chebyshev_polynomial_t.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_shifted_chebyshev_polynomial_t.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_shifted_chebyshev_polynomial_u(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_shifted_chebyshev_polynomial_u.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_shifted_chebyshev_polynomial_u.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_shifted_chebyshev_polynomial_v(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_shifted_chebyshev_polynomial_v.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_shifted_chebyshev_polynomial_v.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_shifted_chebyshev_polynomial_w(Tensor x, Tensor n) -> Tensor
  x: non_differentiable
  n: non_differentiable

- name: special_shifted_chebyshev_polynomial_w.x_scalar(Scalar x, Tensor n) -> Tensor
  n: non_differentiable

- name: special_shifted_chebyshev_polynomial_w.n_scalar(Tensor x, Scalar n) -> Tensor
  x: non_differentiable

- name: special_spherical_bessel_j0(Tensor x) -> Tensor
  x: non_differentiable

- name: _reshape_copy(Tensor self, SymInt[] size) -> Tensor
  self: grad.reshape_symint(self.sym_sizes())
  result: auto_linear

# note(crcrpar): `torchgen/api/autograd` logic would unwantedly replace the substrings `self` and `other` inside function names.
- name: _foreach_div.List(Tensor[] self, Tensor[] other) -> Tensor[]
  self: div_tensor_self_backward(grads[i], other[i], self[i].scalar_type())
  other: div_tensor_other_backward(grads[i], self[i], other[i])
  result: (self_t - other_t * result[i]) / other_p

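# The `result` line above is the quotient-rule JVP rearranged to reuse the
# already-computed quotient: d(a/b) = da/b - (a/b) * db/b
#                                   = (da - (a/b) * db) / b
#                                   = (self_t - result * other_t) / other_p.
# A minimal check on the plain (non-foreach) op:
#
#   import torch
#   from torch.func import jvp
#   a, b = torch.randn(3), torch.randn(3).abs() + 1
#   ta, tb = torch.randn(3), torch.randn(3)
#   _, tangent = jvp(torch.div, (a, b), (ta, tb))
#   assert torch.allclose(tangent, (ta - tb * (a / b)) / b)
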
- name: _foreach_pow.List(Tensor[] self, Tensor[] exponent) -> Tensor[]
  self: pow_backward_self(grads[i], self[i], exponent[i])
  exponent: pow_backward_exponent(grads[i], self[i], exponent[i], result[i])
  result: (pow_backward_self(self_t.conj(), self_p, exponent_p) + pow_backward_exponent(exponent_t.conj(), self_p, exponent_p, result[i])).conj()

- name: _foreach_pow.ScalarList(Tensor[] self, Scalar[] exponent) -> Tensor[]
  self: pow_backward(grads[i], self[i], exponent[i])
  result: pow_backward(self_t.conj(), self_p, exponent[i]).conj()

- name: _foreach_pow.ScalarAndTensor(Scalar self, Tensor[] exponent) -> Tensor[]
  exponent: pow_backward_exponent(grads[i], self, exponent[i], result[i])

# note(crcrpar): the following definitions are needed because the reference native functions
# for `maximum` and `minimum` don't have an overload that takes a Scalar as the second argument.
- name: _foreach_minimum.Scalar(Tensor[] self, Scalar scalar) -> Tensor[]
  self: at::where(self[i] == scalar, grads[i] / 2, grads[i]).masked_fill_(self[i] > scalar, 0)
  result: scalar + at::where(self_p == scalar, at::scalar_tensor(0.5, result[i].options()), (self_p < scalar).to(result[i].scalar_type())) * (self_t - scalar)

- name: _foreach_minimum.ScalarList(Tensor[] self, Scalar[] scalars) -> Tensor[]
  self: at::where(self[i] == scalars[i], grads[i] / 2, grads[i]).masked_fill_(self[i] > scalars[i], 0)
  result: scalars[i] + at::where(self_p == scalars[i], at::scalar_tensor(0.5, result[i].options()), (self_p < scalars[i]).to(result[i].scalar_type())) * (self_t - scalars[i])

- name: _foreach_maximum.Scalar(Tensor[] self, Scalar scalar) -> Tensor[]
  self: at::where(self[i] == scalar, grads[i] / 2, grads[i]).masked_fill_(self[i] < scalar, 0)
  result: scalar + at::where(self_p == scalar, at::scalar_tensor(0.5, result[i].options()), (self_p > scalar).to(result[i].scalar_type())) * (self_t - scalar)

- name: _foreach_maximum.ScalarList(Tensor[] self, Scalar[] scalars) -> Tensor[]
  self: at::where(self[i] == scalars[i], grads[i] / 2, grads[i]).masked_fill_(self[i] < scalars[i], 0)
  result: scalars[i] + at::where(self_p == scalars[i], at::scalar_tensor(0.5, result[i].options()), (self_p > scalars[i]).to(result[i].scalar_type())) * (self_t - scalars[i])

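# The formulas above use the usual tie-breaking subgradient convention:
# where self == scalar the gradient is split evenly (grads[i] / 2, tangent
# weight 0.5), and the losing side gets zero. Sketch on the plain op:
#
#   import torch
#   x = torch.tensor([0.0, 1.0, 2.0], requires_grad=True)
#   torch.minimum(x, torch.tensor(1.0)).sum().backward()
#   # x.grad == [1.0, 0.5, 0.0]: pass-through where x wins, an even split
#   # at the tie, zero where the scalar wins
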
# note(crcrpar): forward-mode AD formulas are tricky for a simple string replace to handle:
#   formula.replace("p", "ord") would produce `norm_jvord(self_ord, self_t, ord, result)`
- name: _foreach_norm.Scalar(Tensor[] self, Scalar ord=2, ScalarType? dtype=None) -> Tensor[]
  self: norm_backward(grads[i], self[i], ord, result[i])
  result: norm_jvp(self_p, self_t, ord, result[i])
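
# For the common ord=2 case, norm_jvp reduces to d||x|| = <x, dx> / ||x||.
# A minimal check on the plain op:
#
#   import torch
#   from torch.func import jvp
#   x, t = torch.randn(4), torch.randn(4)
#   f = lambda v: torch.linalg.vector_norm(v, 2)
#   _, tangent = jvp(f, (x,), (t,))
#   assert torch.allclose(tangent, (x * t).sum() / f(x))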