
Searched full:autograd (Results 1 – 25 of 1005) sorted by relevance


/aosp_15_r20/external/pytorch/
pt_template_srcs.bzl
115 …"autograd/generated/ADInplaceOrViewTypeEverything.cpp": ["autograd/generated/ADInplaceOrViewTypeEv…
116 … "autograd/generated/ADInplaceOrViewType_0.cpp": ["autograd/generated/ADInplaceOrViewType_0.cpp"],
117 … "autograd/generated/ADInplaceOrViewType_1.cpp": ["autograd/generated/ADInplaceOrViewType_1.cpp"],
118 "autograd/generated/Functions.cpp": ["autograd/generated/Functions.cpp"],
119 "autograd/generated/Functions.h": ["autograd/generated/Functions.h"],
120 … "autograd/generated/TraceTypeEverything.cpp": ["autograd/generated/TraceTypeEverything.cpp"],
121 "autograd/generated/TraceType_0.cpp": ["autograd/generated/TraceType_0.cpp"],
122 "autograd/generated/TraceType_1.cpp": ["autograd/generated/TraceType_1.cpp"],
123 "autograd/generated/TraceType_2.cpp": ["autograd/generated/TraceType_2.cpp"],
124 "autograd/generated/TraceType_3.cpp": ["autograd/generated/TraceType_3.cpp"],
[all …]
build.bzl
136 name = "generated-autograd-headers",
258 "torch/csrc/autograd/generated/python_functions.h",
259 "torch/csrc/autograd/generated/python_return_types.h",
263 "torch/csrc/autograd/generated/Functions.h",
264 "torch/csrc/autograd/generated/VariableType.h",
265 "torch/csrc/autograd/generated/ViewFuncs.h",
266 "torch/csrc/autograd/generated/variable_factories.h",
280 "torch/csrc/autograd/generated/python_functions_0.cpp",
281 "torch/csrc/autograd/generated/python_functions_1.cpp",
282 "torch/csrc/autograd/generated/python_functions_2.cpp",
[all …]
build_variables.bzl
21 "torch/csrc/autograd/generated/Functions.cpp",
22 "torch/csrc/autograd/generated/VariableType_0.cpp",
23 "torch/csrc/autograd/generated/VariableType_1.cpp",
24 "torch/csrc/autograd/generated/VariableType_2.cpp",
25 "torch/csrc/autograd/generated/VariableType_3.cpp",
26 "torch/csrc/autograd/generated/VariableType_4.cpp",
27 "torch/csrc/autograd/generated/ViewFuncs.cpp",
28 "torch/csrc/autograd/generated/TraceType_0.cpp",
29 "torch/csrc/autograd/generated/TraceType_1.cpp",
30 "torch/csrc/autograd/generated/TraceType_2.cpp",
[all …]
/aosp_15_r20/external/pytorch/torch/csrc/distributed/autograd/engine/
dist_engine.h
6 #include <torch/csrc/autograd/engine.h>
7 #include <torch/csrc/autograd/function.h>
8 #include <torch/csrc/autograd/functions/basic_ops.h>
9 #include <torch/csrc/distributed/autograd/context/context.h>
13 namespace autograd {
19 // passes. This engine relies heavily on the vanilla autograd engine and tries
21 // distributed aspects of autograd and tries to hook into the autograd engine
24 // Unlike the vanilla autograd engine, the distributed autograd engine
33 // these variables and accumulate all the gradients in the current autograd
34 // context on each node. This method is used to kick off distributed autograd
[all …]
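
The header comments above sketch the distributed engine's role; below is a minimal Python-side usage sketch. The two-worker setup and the worker names are assumptions for illustration; dist_autograd.context, dist_autograd.backward, and dist_autograd.get_gradients are the public API this engine backs.

    import torch
    import torch.distributed.autograd as dist_autograd
    import torch.distributed.rpc as rpc

    def run_worker0():
        # Assumed two-worker setup; "worker1" runs rpc.init_rpc("worker1", rank=1, world_size=2).
        rpc.init_rpc("worker0", rank=0, world_size=2)
        t1 = torch.rand(3, 3, requires_grad=True)
        t2 = torch.rand(3, 3, requires_grad=True)
        with dist_autograd.context() as context_id:
            # The forward-pass RPC records send/recv autograd functions.
            loss = rpc.rpc_sync("worker1", torch.add, args=(t1, t2)).sum()
            # Kicks off the distributed engine with `loss` as the root.
            dist_autograd.backward(context_id, [loss])
            # Gradients accumulate in the context, not in t1.grad / t2.grad.
            grads = dist_autograd.get_gradients(context_id)
            print(grads[t1], grads[t2])
        rpc.shutdown()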
dist_engine.cpp
8 #include <torch/csrc/autograd/functions/accumulate_grad.h>
9 #include <torch/csrc/autograd/input_buffer.h>
10 #include <torch/csrc/distributed/autograd/context/container.h>
11 #include <torch/csrc/distributed/autograd/engine/dist_engine.h>
15 namespace autograd { namespace
17 using torch::autograd::AccumulateGrad;
18 using torch::autograd::edge_list;
19 using torch::autograd::Engine;
20 using torch::autograd::GraphRoot;
21 using torch::autograd::GraphTask;
[all …]
/aosp_15_r20/external/pytorch/docs/source/rpc/
distributed_autograd.rst
3 .. _distributed-autograd-design:
5 Distributed Autograd Design
8 This note will present the detailed design for distributed autograd and walk
10 :ref:`autograd-mechanics` and the :ref:`distributed-rpc-framework` before
41 The main motivation behind distributed autograd is to enable running a backward
47 Autograd recording during the forward pass
50 PyTorch builds the autograd graph during the forward pass and this graph is
52 :ref:`how-autograd-encodes-history`.
54 For distributed autograd, we need to keep track of all RPCs during the forward
56 we attach ``send`` and ``recv`` functions to the autograd graph when we perform
[all …]
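
The design note above says a send/recv function pair is attached for each forward-pass RPC so that the distributed backward behaves like local autograd. A hedged sanity-check sketch, reusing the assumed two-worker setup from the previous example:

    import torch
    import torch.distributed.autograd as dist_autograd
    import torch.distributed.rpc as rpc

    def check_against_local_autograd():
        t1 = torch.rand(3, 3, requires_grad=True)
        t2 = torch.rand(3, 3, requires_grad=True)

        # Local baseline computed with ordinary autograd.
        g1, g2 = torch.autograd.grad(torch.add(t1, t2).sum(), (t1, t2))

        # Same computation through an RPC boundary: the forward call records
        # a send/recv pair in the current distributed autograd context.
        with dist_autograd.context() as context_id:
            loss = rpc.rpc_sync("worker1", torch.add, args=(t1, t2)).sum()
            dist_autograd.backward(context_id, [loss])
            grads = dist_autograd.get_gradients(context_id)

        assert torch.allclose(grads[t1], g1) and torch.allclose(grads[t2], g2)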
/aosp_15_r20/external/pytorch/docs/source/
autograd.rst
4 Automatic differentiation package - torch.autograd
7 .. automodule:: torch.autograd
8 .. currentmodule:: torch.autograd
49 This section contains the higher level API for the autograd that builds on the basic API above
87 :func:`torch.autograd.backward` or :func:`torch.Tensor.backward`
136 Supporting in-place operations in autograd is a hard matter, and we discourage
137 their use in most cases. Autograd's aggressive buffer freeing and reuse makes
157 use autograd with tensors. Autograd automatically supports Tensors with
173 Tensor autograd functions
244 .. automodule:: torch.autograd.gradcheck
[all …]
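
The doc excerpt above covers the torch.autograd entry points and the caution about in-place operations; a small sketch of both points using only the standard public API:

    import torch

    x = torch.randn(3, requires_grad=True)

    # torch.autograd.grad returns gradients directly, without touching .grad:
    y = (x * x).sum()
    (gx,) = torch.autograd.grad(y, (x,))

    # torch.autograd.backward (or y.backward()) accumulates into .grad instead:
    y = (x * x).sum()
    torch.autograd.backward(y)
    assert torch.allclose(x.grad, gx)

    # Why in-place ops are discouraged: mutating a tensor that an op saved
    # for its backward pass invalidates the graph and raises at backward time.
    a = torch.randn(3, requires_grad=True)
    b = a.exp()   # exp() saves its output for the backward computation
    b.add_(1)     # in-place mutation of the saved tensor
    try:
        b.sum().backward()
    except RuntimeError as err:
        print(err)  # "... modified by an inplace operation ..."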
/aosp_15_r20/external/pytorch/test/inductor/
test_compiled_autograd.py
90 with torch.autograd.set_multithreading_enabled(False):
485 # Freeze compiled autograd graph
632 gy, gz = torch.autograd.grad(result, inputs=[y, z])
642 class UnreachableBwd(torch.autograd.Function):
661 gz = torch.autograd.grad(result, inputs=[z])
692 class UnreachableBwd(torch.autograd.Function):
734 torch.compile(lambda: torch.autograd.backward(loss, inputs=[x]))()
739 torch.compile(lambda: torch.autograd.backward(loss, inputs=[y]))()
921 class MySin(torch.autograd.Function):
943 class MyFn(torch.autograd.Function):
[all …]
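
The test excerpt references custom torch.autograd.Function subclasses such as MySin; a minimal sketch of that pattern (the body here is an assumption, not the test's actual implementation):

    import torch

    class MySin(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return torch.sin(x)

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            return grad_out * torch.cos(x)

    x = torch.randn(4, requires_grad=True)
    y = MySin.apply(x).sum()
    (gx,) = torch.autograd.grad(y, inputs=[x])
    assert torch.allclose(gx, torch.cos(x))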
/aosp_15_r20/external/pytorch/docs/source/notes/
extending.func.rst
1 .. _func-autograd-function:
3 Extending torch.func with autograd.Function
6 .. currentmodule:: torch.autograd
8 So you'd like to use :class:`torch.autograd.Function` with the :mod:`torch.func`
14 have it work with function transforms. That is, the :class:`torch.autograd.Function`'s
19 PyTorch combines both of these concepts into :class:`torch.autograd.Function`.
24 This guide assumes you are familiar with :ref:`extending-autograd`,
25 which explains how to use :class:`torch.autograd.Function`.
27 :class:`torch.autograd.Function` can either have a :meth:`~Function.forward` that accepts a ctx obj…
51 the :class:`torch.autograd.Function` needs a :meth:`~Function.backward` staticmethod.
[all …]
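
The note above describes the setup_context style of torch.autograd.Function that function transforms require: forward() no longer takes a ctx, and a separate setup_context() receives (ctx, inputs, output). A short sketch of that shape, checked with torch.func.grad:

    import torch

    class MyExp(torch.autograd.Function):
        @staticmethod
        def forward(x):
            return torch.exp(x)

        @staticmethod
        def setup_context(ctx, inputs, output):
            ctx.save_for_backward(output)

        @staticmethod
        def backward(ctx, grad_out):
            (result,) = ctx.saved_tensors
            return grad_out * result

    x = torch.randn(3, requires_grad=True)
    g = torch.func.grad(lambda t: MyExp.apply(t).sum())(x)
    assert torch.allclose(g, torch.exp(x))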
autograd.rst
3 Autograd mechanics
6 This note will present an overview of how autograd works and records the
11 .. _how-autograd-encodes-history:
13 How autograd encodes the history
16 Autograd is a reverse automatic differentiation system. Conceptually,
17 autograd records a graph recording all of the operations that created
23 Internally, autograd represents this graph as a graph of
25 :meth:`~torch.autograd.Function.apply` ed to compute the result of
26 evaluating the graph. When computing the forward pass, autograd
48 When defining a custom Python :class:`~torch.autograd.Function`, you can use
[all …]
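
The mechanics note above describes the graph of Function nodes recorded during the forward pass; that recorded history can be inspected directly through .grad_fn:

    import torch

    a = torch.randn(2, requires_grad=True)
    b = a * 2        # records a MulBackward0 node as b.grad_fn
    c = b.sum()      # records a SumBackward0 node as c.grad_fn

    print(c.grad_fn)                 # <SumBackward0 ...>
    print(c.grad_fn.next_functions)  # ((<MulBackward0 ...>, 0),)
    print(b.grad_fn.next_functions)  # includes the AccumulateGrad node for leaf `a`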
/aosp_15_r20/external/pytorch/tools/
BUCK.bzl
111 name = "autograd",
112 srcs = glob(["autograd/*.py"]),
115 "autograd/deprecated.yaml",
116 "autograd/derivatives.yaml",
117 "autograd/templates/ADInplaceOrViewType.cpp",
118 "autograd/templates/Functions.cpp",
119 "autograd/templates/Functions.h",
120 "autograd/templates/TraceType.cpp",
121 "autograd/templates/VariableType.cpp",
122 "autograd/templates/VariableType.h",
[all …]
/aosp_15_r20/external/pytorch/torch/testing/_internal/optests/
autograd_registration.py
20 """Check if autograd was registered correctly (for the operator).
22 Operators should have "autograd support" registered directly to an
23 autograd dispatch key.
32 Here are some best practices if you do find your autograd is
35 and you wish the operator to decompose and get autograd support
38 - If you're adding an autograd formula for the operator, the correct
39 thing to do is to register an autograd.Function to
40 DispatchKey::Autograd (preferred) or one of the
41 DispatchKey::Autograd<BACKEND> keys. It is NOT OK to register
42 an autograd.Function to a backend (e.g. CPU/CUDA) key.
[all …]
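
The docstring above recommends registering autograd support at an Autograd dispatch key rather than a backend key. A hedged sketch of that pattern with the torch.library API (assumes PyTorch >= 2.4; the mylib::my_sin op name is made up for illustration):

    import torch

    @torch.library.custom_op("mylib::my_sin", mutates_args=())
    def my_sin(x: torch.Tensor) -> torch.Tensor:
        return torch.sin(x)  # the backend kernel

    def setup_context(ctx, inputs, output):
        (x,) = inputs
        ctx.save_for_backward(x)

    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * torch.cos(x)

    # Registers the formula at the Autograd key rather than a backend key,
    # which is what the registration check above looks for.
    torch.library.register_autograd("mylib::my_sin", backward, setup_context=setup_context)

    x = torch.randn(3, requires_grad=True)
    torch.ops.mylib.my_sin(x).sum().backward()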
/aosp_15_r20/external/pytorch/test/profiler/
test_profiler_tree.py
287 autograd::engine::evaluate_function: PowBackward0
300 autograd::engine::evaluate_function: SubBackward0
303 autograd::engine::evaluate_function: AddBackward0
305 autograd::engine::evaluate_function: torch::autograd::AccumulateGrad
306 torch::autograd::AccumulateGrad
310 autograd::engine::evaluate_function: torch::autograd::AccumulateGrad
311 torch::autograd::AccumulateGrad
321 with torch.autograd.profiler.record_function("Top level Annotation"):
322 with torch.autograd.profiler.record_function("First Annotation"):
327 _ = torch.autograd.profiler.record_function(
[all …]
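
The tree above comes from nested record_function annotations plus the backward-pass engine events; a sketch that produces a similar profile (the annotation names and tensor shapes are illustrative):

    import torch

    x = torch.ones(3, requires_grad=True)
    with torch.autograd.profiler.profile() as prof:
        with torch.autograd.profiler.record_function("Top level Annotation"):
            with torch.autograd.profiler.record_function("First Annotation"):
                y = (x ** 2).sum()   # pow -> PowBackward0 in the backward tree
        y.backward()                 # emits autograd::engine::evaluate_function rows
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))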
/aosp_15_r20/external/pytorch/torch/csrc/distributed/autograd/context/
container.h
6 #include <torch/csrc/distributed/autograd/context/context.h>
10 namespace autograd {
13 // autograd context for each autograd pass and also cleans up data for an
14 // autograd pass once it's done.
16 // Each autograd pass is assigned a unique autograd_context_id and all data for
23 // id, which is used to associate send/recv autograd function pairs. The format
37 // Create a new context for a distributed autograd pass.
40 // Clean up resources for a given context_id once the autograd pass is done.
46 // Releases an autograd context if it is present on this node. Also sends RPC
54 // Retrieve the autograd context for a given context_id.
[all …]
context.h
7 #include <torch/csrc/autograd/engine.h>
8 #include <torch/csrc/distributed/autograd/functions/recvrpc_backward.h>
9 #include <torch/csrc/distributed/autograd/functions/sendrpc_backward.h>
14 namespace autograd {
19 // autograd pass on a worker.
26 // Retrieves the autograd context id for this context.
29 // Records a 'send' autograd function for this context with the provided
35 // Records a 'recv' autograd function for this context with the provided
63 const torch::autograd::Variable& variable,
72 // workerIDs are added here when we attach a send function to this autograd
[all …]
/aosp_15_r20/external/pytorch/torch/csrc/autograd/
python_engine.cpp
1 #include <torch/csrc/autograd/python_engine.h>
9 #include <torch/csrc/autograd/edge.h>
10 #include <torch/csrc/autograd/engine.h>
11 #include <torch/csrc/autograd/function.h>
12 #include <torch/csrc/autograd/functions/basic_ops.h>
13 #include <torch/csrc/autograd/python_anomaly_mode.h>
14 #include <torch/csrc/autograd/python_cpp_function.h>
15 #include <torch/csrc/autograd/python_function.h>
16 #include <torch/csrc/autograd/python_saved_variable_hooks.h>
27 using namespace torch::autograd;
[all …]
VariableTypeManual.cpp
6 #include <torch/csrc/autograd/FunctionsManual.h>
7 #include <torch/csrc/autograd/VariableTypeUtils.h>
8 #include <torch/csrc/autograd/autograd.h>
9 #include <torch/csrc/autograd/functions/utils.h>
10 #include <torch/csrc/autograd/generated/VariableType.h>
11 #include <torch/csrc/autograd/generated/ViewFuncs.h>
18 using namespace torch::autograd::generated;
19 using torch::autograd::as_view;
20 using torch::autograd::CreationMeta;
24 namespace autograd::VariableType { namespace
[all …]
init.cpp
14 #include <torch/csrc/autograd/VariableTypeUtils.h>
15 #include <torch/csrc/autograd/autograd.h>
16 #include <torch/csrc/autograd/autograd_not_implemented_fallback.h>
17 #include <torch/csrc/autograd/function.h>
18 #include <torch/csrc/autograd/grad_mode.h>
19 #include <torch/csrc/autograd/input_metadata.h>
20 #include <torch/csrc/autograd/profiler.h>
21 #include <torch/csrc/autograd/profiler_python.h>
22 #include <torch/csrc/autograd/python_function.h>
23 #include <torch/csrc/autograd/python_saved_variable_hooks.h>
[all …]
variable.cpp
1 #include <torch/csrc/autograd/variable.h>
3 #include <torch/csrc/autograd/InferenceMode.h>
4 #include <torch/csrc/autograd/autograd.h>
5 #include <torch/csrc/autograd/edge.h>
6 #include <torch/csrc/autograd/engine.h>
7 #include <torch/csrc/autograd/function.h>
8 #include <torch/csrc/autograd/functions/accumulate_grad.h>
9 #include <torch/csrc/autograd/functions/tensor.h>
10 #include <torch/csrc/autograd/functions/utils.h>
11 #include <torch/csrc/autograd/generated/Functions.h>
[all …]
/aosp_15_r20/external/pytorch/torch/csrc/distributed/autograd/
utils.cpp
3 #include <torch/csrc/autograd/functions/utils.h>
4 #include <torch/csrc/autograd/profiler.h>
5 #include <torch/csrc/distributed/autograd/context/container.h>
6 #include <torch/csrc/distributed/autograd/functions/recvrpc_backward.h>
7 #include <torch/csrc/distributed/autograd/functions/sendrpc_backward.h>
8 #include <torch/csrc/distributed/autograd/utils.h>
15 namespace autograd { namespace
17 using torch::distributed::autograd::AutogradMetadata;
18 using torch::distributed::autograd::RpcWithAutograd;
29 // Attach autograd information only for tensors requiring grad. in addSendRpcBackward()
[all …]
init.cpp
1 #include <torch/csrc/autograd/python_cpp_function.h>
2 #include <torch/csrc/distributed/autograd/autograd.h>
11 namespace autograd { namespace
20 THPObjectPtr(PyImport_ImportModule("torch.distributed.autograd")); in dist_autograd_init()
32 "_distributed_autograd", "distributed autograd bindings"); in dist_autograd_init()
54 torch::autograd::functionToPyObject( in dist_autograd_init()
72 torch::autograd::functionToPyObject( in dist_autograd_init()
147 assumes all RPC messages sent in the same distributed autograd context in dist_autograd_init()
148 across workers would be part of the autograd graph during the backward pass. in dist_autograd_init()
150 We use the provided roots to discover the autograd graph and compute in dist_autograd_init()
[all …]
utils.h
3 #include <torch/csrc/distributed/autograd/context/context.h>
4 #include <torch/csrc/distributed/autograd/rpc_messages/rpc_with_autograd.h>
5 #include <torch/csrc/distributed/autograd/rpc_messages/rpc_with_profiling_req.h>
6 #include <torch/csrc/distributed/autograd/rpc_messages/rpc_with_profiling_resp.h>
10 namespace autograd {
12 // This method is used to attach the 'send' autograd function to the autograd
13 // graph when we use RPC. This method creates a new 'send' autograd function
16 // autograd context. Finally, the RPC message is updated with appropriate
17 // autograd information for the recipient.
23 // This method is used to attach the 'recv' autograd function to the autograd
[all …]
/aosp_15_r20/external/pytorch/torch/_functorch/_aot_autograd/
H A Dcollect_metadata_analysis.py112 # can have autograd data stored directly on it.
116 # Autograd - Functionalization ~~~~> Proxy Mode - Fake Tensor
282 …# which you can read more about under Note [AOT Autograd: outputs aliasing inputs or intermediates…
294 # its correctness with the autograd engine in all cases.
305 …# (2) regenerate each aliased output off of "intermediate", **outside** of the autograd.Function.
306 … # The reason AOTAutograd ordinarily does this is for safety: the autograd engine needs to know
307 …o1 through o10 are all aliased, and if we blindly return o1 through o10 from the autograd.Function,
309 …# In particular, mutating one alias might require autograd to update autograd metadata on the othe…
310 … # (like their grad_fn, for example, when the autograd engine needs to do view-replay).
314 …ossible to find a set of conditions where it is **safe** to hide the output aliasing from autograd?
[all …]
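
The comments above discuss outputs that alias inputs or intermediates; an illustrative (assumed) example of the pattern, showing the view metadata the autograd engine relies on:

    import torch

    def f(x):
        intermediate = x * 2
        out1 = intermediate
        out2 = intermediate[0]  # a view that aliases out1
        return out1, out2

    x = torch.randn(3, requires_grad=True)
    o1, o2 = f(x)
    # Autograd records out2 as a view of its base; blindly returning both from
    # a single autograd.Function would hide this relationship, which is why
    # the strategy sketched above regenerates aliased outputs outside of it.
    assert o2._base is o1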
/aosp_15_r20/external/pytorch/torch/csrc/distributed/autograd/functions/
sendrpc_backward.h
3 #include <torch/csrc/autograd/function.h>
7 namespace autograd {
9 // As part of our distributed autograd implementation, whenever we send an RPC
10 // from one node to another, we add a 'SendRpcBackward' autograd function to the
11 // autograd graph. This is more or less a placeholder function that is used to
12 // kick off the autograd engine on the current worker on the backward pass. The
13 // edges for this autograd function are the inputs to the RPC method.
16 // autograd engine which eventually runs the rest of the autograd graph.
17 struct TORCH_API SendRpcBackward : public torch::autograd::Node {
19 torch::autograd::variable_list apply(
[all …]
/aosp_15_r20/external/pytorch/torch/_functorch/
autograd_function.py
22 from torch.autograd.forward_ad import _set_fwd_grad_enabled
25 # autograd.Function technically runs before the regular PyTorch dispatcher.
28 # we need to give the illusion that autograd.Function runs before those things.
38 # it should just invoke the autograd.Function. This is consistent
39 # with the autograd.Function behavior of being invoked before the
45 # Tensor. However, make_fx sees autograd.Function as a composite
46 # (because autograd.Function happens before the Python dispatch key)
54 # This is the mechanism for an autograd.Function that works with functorch transforms.
55 # It wraps an autograd.Function; interactions with functorch transforms are defined
62 # (autograd.Function that only works with a single layer (level) of functorch) that:
[all …]
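
The comments above describe making an autograd.Function work with functorch transforms at a single level; a sketch of that pattern using setup_context plus generate_vmap_rule, exercised under torch.func.vmap and torch.func.grad:

    import torch

    class Square(torch.autograd.Function):
        generate_vmap_rule = True  # let functorch derive the batching rule

        @staticmethod
        def forward(x):
            return x * x

        @staticmethod
        def setup_context(ctx, inputs, output):
            (x,) = inputs
            ctx.save_for_backward(x)

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            return 2 * x * grad_out

    xs = torch.randn(5, 3)
    ys = torch.func.vmap(Square.apply)(xs)                 # batched forward
    g = torch.func.grad(lambda t: Square.apply(t).sum())(xs[0])
    assert torch.allclose(g, 2 * xs[0])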
