# torch.onnx

Torch->ONNX converter / exporter.

- User-facing docs: https://pytorch.org/docs/main/onnx.html
- Developer docs: https://github.com/pytorch/pytorch/wiki/PyTorch-ONNX-exporter

> Read the following if you are contributing to `torch.onnx`

## Symbolic Functions and Opsets

Opset 9 is the base version. It was selected as the base version because:

1. It is the first opset version supported by PyTorch export.
2. Opset 9 is more robust than previous opset versions. Opsets like 7 and 8
   have limitations in that certain basic operators cannot be expressed in
   ONNX. Rather than building on top of these limitations, we chose to handle
   them as special cases separately.

Support for opset versions older than opset 7 is not on our roadmap.

Opset versions other than 9 inherit, by default, the symbolic functions
defined in `symbolic_opset9.py`.

To extend support for updated operators in different opset versions on top of
opset 9, simply add the updated symbolic functions in the respective
`symbolic_opset{version}.py` file. Check out `topk` in `symbolic_opset10.py`
and `upsample_nearest2d` in `symbolic_opset8.py` for examples, or the sketch
below.

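A version-specific override looks roughly like the following sketch, modeled
on `topk` in `symbolic_opset10.py`; the decorator and helper names follow
current `torch.onnx` internals and may differ between PyTorch releases:

```python
import functools

from torch.onnx import symbolic_helper
from torch.onnx._internal import jit_utils, registration

# Bind the opset so registrations in this file override the opset-9
# definitions for exports targeting opset >= 10.
_onnx_symbolic = functools.partial(registration.onnx_symbolic, opset=10)


@_onnx_symbolic("aten::topk")
@symbolic_helper.parse_args("v", "v", "i", "i", "i", "none")
def topk(g: jit_utils.GraphContext, self, k, dim, largest, sorted, out=None):
    # ONNX TopK gained a dynamic `k` input in opset 10, so this
    # definition supersedes the opset-9 one.
    return symbolic_helper._topk_helper(
        g, self, k, dim, largest=largest, sorted=sorted, out=out
    )
```
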
## Editing Symbolic Files

- Use the internal `registration.onnx_symbolic` decorator to register a new
  symbolic function. Search for `def reshape(g, self, shape):` to see an
  example, or see the sketch after this list.
- Parameter names must *exactly* match the names in
  `aten/src/ATen/native/native_functions.yaml`, because
  dispatch is done with keyword arguments.
- Looking for inplace ops? They're detected by
  `_jit_pass_onnx_remove_inplace_ops_for_onnx` and
  transparently dispatched to their non-inplace versions in
  `run_symbolic_function`. See Note [Export inplace](#export-inplace).

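For instance, the opset-9 `reshape` registration looks roughly like this
sketch; the `parse_args` codes and helper names come from `torch.onnx`
internals and may change between releases:

```python
import functools

from torch.onnx import symbolic_helper
from torch.onnx._internal import jit_utils, registration

_onnx_symbolic = functools.partial(registration.onnx_symbolic, opset=9)


@_onnx_symbolic("aten::reshape")
@symbolic_helper.parse_args("v", "v")  # both arguments arrive as graph Values
def reshape(g: jit_utils.GraphContext, self, shape):
    # `self` and `shape` must match the names in native_functions.yaml,
    # since dispatch passes them as keyword arguments.
    return symbolic_helper._reshape_helper(g, self, shape)
```
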
### A note on Tensor types

In general, we should avoid depending on the types of tensor `Value`s
contained within the trace graph. However, this is sometimes unavoidable (due
to ONNX spec requirements, etc.). The `TensorType` object has accessors for
these properties; each returns the property if it is statically known, and
`nullopt` otherwise.

We should prefer to rely on the least specific information possible. For
example, not relying on tensor properties at all is better than relying on the
number of dimensions, which in turn is better than relying on concrete shapes.
Doing so makes the export symbolics more robust to different graphs, as in the
sketch below.

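A minimal sketch of that preference, using `symbolic_helper._get_tensor_rank`
(an internal helper that returns the rank when it is statically known and
`None` otherwise); the helper name `_normalize_dim` is hypothetical:

```python
from torch.onnx import symbolic_helper


def _normalize_dim(self, dim: int) -> int:
    # Hypothetical helper: ask only for the rank, never the concrete
    # shape, so the symbolic stays robust to unknown static shapes.
    rank = symbolic_helper._get_tensor_rank(self)  # int if known, else None
    if rank is not None and dim < 0:
        dim += rank  # resolve a negative dim only when the rank is known
    return dim
```
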
### Extra context for symbolic functions

The first argument of a symbolic function is always a `GraphContext` object.

`GraphContext` exposes all methods defined on a `torch.Graph` object, plus
context for the symbolic function.

In general, symbolic functions only require the inputs and attributes of the
original node. An example of a symbolic function needing context is
`prim::Loop`: it needs access to the sub-block of the original node. A rough
sketch follows.

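For illustration (the `original_node` attribute follows
`torch.onnx._internal.jit_utils.GraphContext` and may differ across releases):

```python
from torch.onnx._internal import jit_utils


def relu(g: jit_utils.GraphContext, self):
    # The common case: g.op() forwards to torch.Graph, appending an
    # ONNX node and returning its output Value.
    return g.op("Relu", self)


def loop(g: jit_utils.GraphContext, *inputs):
    # The context case: the original prim::Loop node carries the
    # sub-blocks that the symbolic must translate.
    sub_blocks = list(g.original_node.blocks())
    ...
```
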
### Export inplace

It would be better for us to export inplace annotations than to drop them,
since they are useful information that can help the consumer of an ONNX
export run more efficiently. However, ONNX doesn't currently formalize
inplace operations. Fortunately, it is sound to drop inplace annotations,
though we lose information this way.

### Pointwise by scalar

What happens if you add a tensor and a constant (e.g., `x + 2`)? There are
some moving parts to implementing the ONNX translation in this case:

- By the time we get the scalar in a symbolic function here, it is no longer
  a Python long/float, but a PyTorch tensor with `numel == 1` (eventually, we
  want it to be a zero-dim tensor, but this change has not happened yet).
  However, the type of this scalar is *exactly* what the user wrote in
  Python, which may not match the tensor it is being added to. PyTorch will
  do implicit conversions on scalars; ONNX will not, so we must do the
  conversion ourselves. This is what `symbolic_helper._if_scalar_type_as()`
  and `_jit_pass_onnx_scalar_type_analysis` do.

- Dispatch to these functions takes advantage of an outrageous coincidence
  between the tensor and scalar argument names. When we add two tensors
  together, you get the dispatch:

      add(*[self, other], **{"alpha": alpha})

  When you add a tensor and a scalar, you get the dispatch:

      add(*[self], **{"other": other, "alpha": alpha})

  By having the argument name line up with the name of the scalar attribute,
  if it exists, we can write a single function for both overloads; see the
  sketch after this list.

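A hedged sketch of such a single function, modeled on the opset-9 `add`
symbolic (the helpers come from `torch.onnx.symbolic_helper`, and their
signatures vary across PyTorch releases):

```python
from torch.onnx import symbolic_helper


def add(g, self, other, alpha=None):
    # One symbolic serves both overloads: `other` is a tensor Value for
    # tensor + tensor, and a numel == 1 constant for tensor + scalar,
    # because the keyword name "other" lines up in both dispatches.
    if alpha and symbolic_helper._scalar(
        symbolic_helper._maybe_get_scalar(alpha)
    ) != 1:
        # ONNX does no implicit scalar conversion, so promote `alpha`
        # to self's scalar type before multiplying.
        alpha = symbolic_helper._if_scalar_type_as(alpha, self)
        other = g.op("Mul", other, alpha)
    return g.op("Add", self, other)
```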