torch
=====
.. automodule:: torch
.. currentmodule:: torch

Tensors
-------
.. autosummary::
    :toctree: generated
    :nosignatures:

    is_tensor
    is_storage
    is_complex
    is_conj
    is_floating_point
    is_nonzero
    set_default_dtype
    get_default_dtype
    set_default_device
    get_default_device
    set_default_tensor_type
    numel
    set_printoptions
    set_flush_denormal

.. _tensor-creation-ops:

Creation Ops
~~~~~~~~~~~~

.. note::
    Random sampling creation ops are listed under :ref:`random-sampling` and
    include:
    :func:`torch.rand`
    :func:`torch.rand_like`
    :func:`torch.randn`
    :func:`torch.randn_like`
    :func:`torch.randint`
    :func:`torch.randint_like`
    :func:`torch.randperm`
    You may also use :func:`torch.empty` with the :ref:`inplace-random-sampling`
    methods to create :class:`torch.Tensor`\ s with values sampled from a broader
    range of distributions.

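For example, pairing :func:`torch.empty` with an in-place sampling method (a
minimal sketch; the shape and bounds are arbitrary)::

  >>> x = torch.empty(3, 4).uniform_(0, 1)  # allocate, then sample in-place
  >>> x.shape
  torch.Size([3, 4])
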
.. autosummary::
    :toctree: generated
    :nosignatures:

    tensor
    sparse_coo_tensor
    sparse_csr_tensor
    sparse_csc_tensor
    sparse_bsr_tensor
    sparse_bsc_tensor
    asarray
    as_tensor
    as_strided
    from_file
    from_numpy
    from_dlpack
    frombuffer
    zeros
    zeros_like
    ones
    ones_like
    arange
    range
    linspace
    logspace
    eye
    empty
    empty_like
    empty_strided
    full
    full_like
    quantize_per_tensor
    quantize_per_channel
    dequantize
    complex
    polar
    heaviside

.. _indexing-slicing-joining:

Indexing, Slicing, Joining, Mutating Ops
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
    :toctree: generated
    :nosignatures:

    adjoint
    argwhere
    cat
    concat
    concatenate
    conj
    chunk
    dsplit
    column_stack
    dstack
    gather
    hsplit
    hstack
    index_add
    index_copy
    index_reduce
    index_select
    masked_select
    movedim
    moveaxis
    narrow
    narrow_copy
    nonzero
    permute
    reshape
    row_stack
    select
    scatter
    diagonal_scatter
    select_scatter
    slice_scatter
    scatter_add
    scatter_reduce
    split
    squeeze
    stack
    swapaxes
    swapdims
    t
    take
    take_along_dim
    tensor_split
    tile
    transpose
    unbind
    unravel_index
    unsqueeze
    vsplit
    vstack
    where

.. _accelerators:

Accelerators
----------------------------------
Within the PyTorch repo, we define an "Accelerator" as a :class:`torch.device` that is being used
alongside a CPU to speed up computation. These devices use an asynchronous execution scheme,
with :class:`torch.Stream` and :class:`torch.Event` as their main way to perform synchronization.
We also assume that only one such accelerator can be available at a time on a given host. This allows
us to use the current accelerator as the default device for relevant concepts such as pinned memory,
Stream device_type, FSDP, etc.

As of today, the accelerator devices are (in no particular order) :doc:`"CUDA" <cuda>`, :doc:`"MTIA" <mtia>`,
:doc:`"XPU" <xpu>`, and PrivateUse1 (many devices that are not in the PyTorch repo itself).

.. autosummary::
    :toctree: generated
    :nosignatures:

    Stream
    Event

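For example, work can be enqueued on a side stream and ordered against the
default stream with an event (a minimal sketch shown with the CUDA backend's
stream API; it assumes a CUDA-capable build)::

  >>> s = torch.cuda.Stream()                    # a side stream on the current device
  >>> x = torch.randn(1024, device="cuda")
  >>> with torch.cuda.stream(s):
  ...     y = x * 2                              # kernel enqueued on the side stream
  >>> e = s.record_event()                       # mark a point on the side stream
  >>> torch.cuda.current_stream().wait_event(e)  # default stream waits for that work
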
.. _generators:

Generators
----------------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    Generator

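For example, a dedicated generator makes sampling reproducible without touching
the global RNG state (a minimal sketch)::

  >>> g = torch.Generator().manual_seed(42)  # manual_seed returns the generator
  >>> x = torch.rand(2, generator=g)         # draws advance only this generator
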
.. _random-sampling:

Random sampling
----------------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    seed
    manual_seed
    initial_seed
    get_rng_state
    set_rng_state

.. autoattribute:: torch.default_generator
   :annotation:  Returns the default CPU torch.Generator

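For example, saving and restoring the RNG state makes a draw repeatable (a
minimal sketch)::

  >>> state = torch.get_rng_state()  # snapshot the default generator's state
  >>> a = torch.rand(2)
  >>> torch.set_rng_state(state)     # rewind, so the next draw repeats
  >>> b = torch.rand(2)
  >>> torch.equal(a, b)
  True
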
.. The following doesn't actually seem to exist.
   https://github.com/pytorch/pytorch/issues/27780
   .. autoattribute:: torch.cuda.default_generators
      :annotation:  If cuda is available, returns a tuple of default CUDA torch.Generator-s.
                    The number of CUDA torch.Generator-s returned is equal to the number of
                    GPUs available in the system.
.. autosummary::
    :toctree: generated
    :nosignatures:

    bernoulli
    multinomial
    normal
    poisson
    rand
    rand_like
    randint
    randint_like
    randn
    randn_like
    randperm

.. _inplace-random-sampling:

In-place random sampling
~~~~~~~~~~~~~~~~~~~~~~~~

There are a few more in-place random sampling functions defined on Tensors as well. Click through to refer to their documentation:

- :func:`torch.Tensor.bernoulli_` - in-place version of :func:`torch.bernoulli`
- :func:`torch.Tensor.cauchy_` - numbers drawn from the Cauchy distribution
- :func:`torch.Tensor.exponential_` - numbers drawn from the exponential distribution
- :func:`torch.Tensor.geometric_` - elements drawn from the geometric distribution
- :func:`torch.Tensor.log_normal_` - samples from the log-normal distribution
- :func:`torch.Tensor.normal_` - in-place version of :func:`torch.normal`
- :func:`torch.Tensor.random_` - numbers sampled from the discrete uniform distribution
- :func:`torch.Tensor.uniform_` - numbers sampled from the continuous uniform distribution

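For example, combining :func:`torch.empty` with two of the methods above (a
minimal sketch; shapes and parameters are arbitrary)::

  >>> x = torch.empty(5).exponential_(lambd=1.0)         # rate-1 exponential samples
  >>> y = torch.empty(5).log_normal_(mean=0.0, std=1.0)  # log-normal samples
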
Quasi-random sampling
~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
    :toctree: generated
    :nosignatures:
    :template: sobolengine.rst

    quasirandom.SobolEngine

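For example, drawing a few points of a scrambled two-dimensional Sobol sequence
(a minimal sketch)::

  >>> engine = torch.quasirandom.SobolEngine(dimension=2, scramble=True, seed=0)
  >>> points = engine.draw(4)  # a (4, 2) tensor of quasi-random points in [0, 1)
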
Serialization
----------------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    save
    load

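For example (a minimal sketch; the file name is arbitrary)::

  >>> t = torch.arange(4)
  >>> torch.save(t, "tensor.pt")                  # serialize to disk
  >>> torch.load("tensor.pt", weights_only=True)  # safer load for plain tensors
  tensor([0, 1, 2, 3])
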
Parallelism
----------------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    get_num_threads
    set_num_threads
    get_num_interop_threads
    set_num_interop_threads

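For example (a minimal sketch; the thread count is arbitrary)::

  >>> torch.set_num_threads(4)  # threads used for intra-op parallelism on CPU
  >>> torch.get_num_threads()
  4
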
.. _torch-rst-local-disable-grad:

Locally disabling gradient computation
--------------------------------------
The context managers :func:`torch.no_grad`, :func:`torch.enable_grad`, and
:func:`torch.set_grad_enabled` are helpful for locally disabling and enabling
gradient computation. See :ref:`locally-disable-grad` for more details on
their usage. These context managers are thread local, so they won't
work if you send work to another thread using the ``threading`` module, etc.

Examples::

  >>> x = torch.zeros(1, requires_grad=True)
  >>> with torch.no_grad():
  ...     y = x * 2
  >>> y.requires_grad
  False

  >>> is_train = False
  >>> with torch.set_grad_enabled(is_train):
  ...     y = x * 2
  >>> y.requires_grad
  False

  >>> torch.set_grad_enabled(True)  # this can also be used as a function
  >>> y = x * 2
  >>> y.requires_grad
  True

  >>> torch.set_grad_enabled(False)
  >>> y = x * 2
  >>> y.requires_grad
  False

.. autosummary::
    :toctree: generated
    :nosignatures:

    no_grad
    enable_grad
    autograd.grad_mode.set_grad_enabled
    is_grad_enabled
    autograd.grad_mode.inference_mode
    is_inference_mode_enabled

Math operations
---------------

Pointwise Ops
~~~~~~~~~~~~~~~~~~~~~~

.. autosummary::
    :toctree: generated
    :nosignatures:

    abs
    absolute
    acos
    arccos
    acosh
    arccosh
    add
    addcdiv
    addcmul
    angle
    asin
    arcsin
    asinh
    arcsinh
    atan
    arctan
    atanh
    arctanh
    atan2
    arctan2
    bitwise_not
    bitwise_and
    bitwise_or
    bitwise_xor
    bitwise_left_shift
    bitwise_right_shift
    ceil
    clamp
    clip
    conj_physical
    copysign
    cos
    cosh
    deg2rad
    div
    divide
    digamma
    erf
    erfc
    erfinv
    exp
    exp2
    expm1
    fake_quantize_per_channel_affine
    fake_quantize_per_tensor_affine
    fix
    float_power
    floor
    floor_divide
    fmod
    frac
    frexp
    gradient
    imag
    ldexp
    lerp
    lgamma
    log
    log10
    log1p
    log2
    logaddexp
    logaddexp2
    logical_and
    logical_not
    logical_or
    logical_xor
    logit
    hypot
    i0
    igamma
    igammac
    mul
    multiply
    mvlgamma
    nan_to_num
    neg
    negative
    nextafter
    polygamma
    positive
    pow
    quantized_batch_norm
    quantized_max_pool1d
    quantized_max_pool2d
    rad2deg
    real
    reciprocal
    remainder
    round
    rsqrt
    sigmoid
    sign
    sgn
    signbit
    sin
    sinc
    sinh
    softmax
    sqrt
    square
    sub
    subtract
    tan
    tanh
    true_divide
    trunc
    xlogy

Reduction Ops
~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
    :toctree: generated
    :nosignatures:

    argmax
    argmin
    amax
    amin
    aminmax
    all
    any
    max
    min
    dist
    logsumexp
    mean
    nanmean
    median
    nanmedian
    mode
    norm
    nansum
    prod
    quantile
    nanquantile
    std
    std_mean
    sum
    unique
    unique_consecutive
    var
    var_mean
    count_nonzero

Comparison Ops
~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
    :toctree: generated
    :nosignatures:

    allclose
    argsort
    eq
    equal
    ge
    greater_equal
    gt
    greater
    isclose
    isfinite
    isin
    isinf
    isposinf
    isneginf
    isnan
    isreal
    kthvalue
    le
    less_equal
    lt
    less
    maximum
    minimum
    fmax
    fmin
    ne
    not_equal
    sort
    topk
    msort


Spectral Ops
~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
    :toctree: generated
    :nosignatures:

    stft
    istft
    bartlett_window
    blackman_window
    hamming_window
    hann_window
    kaiser_window


Other Operations
~~~~~~~~~~~~~~~~~~~~~~

.. autosummary::
    :toctree: generated
    :nosignatures:

    atleast_1d
    atleast_2d
    atleast_3d
    bincount
    block_diag
    broadcast_tensors
    broadcast_to
    broadcast_shapes
    bucketize
    cartesian_prod
    cdist
    clone
    combinations
    corrcoef
    cov
    cross
    cummax
    cummin
    cumprod
    cumsum
    diag
    diag_embed
    diagflat
    diagonal
    diff
    einsum
    flatten
    flip
    fliplr
    flipud
    kron
    rot90
    gcd
    histc
    histogram
    histogramdd
    meshgrid
    lcm
    logcumsumexp
    ravel
    renorm
    repeat_interleave
    roll
    searchsorted
    tensordot
    trace
    tril
    tril_indices
    triu
    triu_indices
    unflatten
    vander
    view_as_real
    view_as_complex
    resolve_conj
    resolve_neg


BLAS and LAPACK Operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
    :toctree: generated
    :nosignatures:

    addbmm
    addmm
    addmv
    addr
    baddbmm
    bmm
    chain_matmul
    cholesky
    cholesky_inverse
    cholesky_solve
    dot
    geqrf
    ger
    inner
    inverse
    det
    logdet
    slogdet
    lu
    lu_solve
    lu_unpack
    matmul
    matrix_power
    matrix_exp
    mm
    mv
    orgqr
    ormqr
    outer
    pinverse
    qr
    svd
    svd_lowrank
    pca_lowrank
    lobpcg
    trapz
    trapezoid
    cumulative_trapezoid
    triangular_solve
    vdot

Foreach Operations
~~~~~~~~~~~~~~~~~~

.. warning::
    This API is in beta and subject to future changes.
    Forward-mode AD is not supported.

.. autosummary::
    :toctree: generated
    :nosignatures:

    _foreach_abs
    _foreach_abs_
    _foreach_acos
    _foreach_acos_
    _foreach_asin
    _foreach_asin_
    _foreach_atan
    _foreach_atan_
    _foreach_ceil
    _foreach_ceil_
    _foreach_cos
    _foreach_cos_
    _foreach_cosh
    _foreach_cosh_
    _foreach_erf
    _foreach_erf_
    _foreach_erfc
    _foreach_erfc_
    _foreach_exp
    _foreach_exp_
    _foreach_expm1
    _foreach_expm1_
    _foreach_floor
    _foreach_floor_
    _foreach_log
    _foreach_log_
    _foreach_log10
    _foreach_log10_
    _foreach_log1p
    _foreach_log1p_
    _foreach_log2
    _foreach_log2_
    _foreach_neg
    _foreach_neg_
    _foreach_tan
    _foreach_tan_
    _foreach_sin
    _foreach_sin_
    _foreach_sinh
    _foreach_sinh_
    _foreach_round
    _foreach_round_
    _foreach_sqrt
    _foreach_sqrt_
    _foreach_lgamma
    _foreach_lgamma_
    _foreach_frac
    _foreach_frac_
    _foreach_reciprocal
    _foreach_reciprocal_
    _foreach_sigmoid
    _foreach_sigmoid_
    _foreach_trunc
    _foreach_trunc_
    _foreach_zero_

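For example, the out-of-place variants take a list of tensors and return new
tensors, while the ``_``-suffixed variants mutate their inputs (a minimal
sketch of this beta API)::

  >>> xs = [torch.tensor([-1.0, 2.0]), torch.tensor([-3.0])]
  >>> ys = torch._foreach_abs(xs)  # out-of-place: one call over the whole list
  >>> torch._foreach_zero_(xs)     # in-place: zeroes every tensor in the list
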
Utilities
----------------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    compiled_with_cxx11_abi
    result_type
    can_cast
    promote_types
    use_deterministic_algorithms
    are_deterministic_algorithms_enabled
    is_deterministic_algorithms_warn_only_enabled
    set_deterministic_debug_mode
    get_deterministic_debug_mode
    set_float32_matmul_precision
    get_float32_matmul_precision
    set_warn_always
    get_device_module
    is_warn_always_enabled
    vmap
    _assert

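For example, the type-promotion helpers answer "what dtype would an op
produce?" without allocating anything (a minimal sketch)::

  >>> torch.promote_types(torch.int32, torch.float32)
  torch.float32
  >>> torch.result_type(torch.tensor([1, 2]), 1.0)
  torch.float32
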
Symbolic Numbers
----------------
.. autoclass:: SymInt
    :members:

.. autoclass:: SymFloat
    :members:

.. autoclass:: SymBool
    :members:

.. autosummary::
    :toctree: generated
    :nosignatures:

    sym_float
    sym_int
    sym_max
    sym_min
    sym_not
    sym_ite

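On plain Python numbers these helpers behave like their builtin counterparts,
which makes them safe in code that may also receive symbolic values during
tracing (a minimal sketch)::

  >>> torch.sym_int(3.7)   # truncates like int() on a concrete float
  3
  >>> torch.sym_max(2, 5)  # behaves like max() on concrete ints
  5
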
Export Path
-------------

.. warning::
    This feature is a prototype and may have compatibility breaking changes in the future.

.. autosummary::
    :toctree: generated
    :nosignatures:

    export
    generated/exportdb/index

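For example (a minimal sketch of the prototype API; the module is an arbitrary
stand-in)::

  >>> class M(torch.nn.Module):
  ...     def forward(self, x):
  ...         return x + 1
  >>> ep = torch.export.export(M(), (torch.randn(2),))  # trace into an ExportedProgram
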
Control Flow
------------

.. warning::
    This feature is a prototype and may have compatibility breaking changes in the future.

.. autosummary::
    :toctree: generated
    :nosignatures:

    cond

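For example, :func:`~torch.cond` selects between two traceable branches based
on a predicate (a minimal sketch of this prototype API, primarily intended for
use under :func:`torch.compile` or export)::

  >>> def true_fn(x):
  ...     return x.sin()
  >>> def false_fn(x):
  ...     return x.cos()
  >>> x = torch.randn(4)
  >>> out = torch.cond(x.sum() > 0, true_fn, false_fn, (x,))
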
Optimizations
-------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    compile

`torch.compile documentation <https://pytorch.org/docs/main/torch.compiler.html>`__

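For example (a minimal sketch; the function being compiled is an arbitrary
stand-in)::

  >>> @torch.compile
  ... def fn(x):
  ...     return torch.sin(x) + torch.cos(x)
  >>> y = fn(torch.randn(8))  # the first call compiles; later calls reuse the result
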
Operator Tags
------------------------------------
.. autoclass:: Tag
    :members:

.. Empty submodules added only for tracking.
.. py:module:: torch.contrib
.. py:module:: torch.utils.backcompat

.. This module is only used internally for ROCm builds.
.. py:module:: torch.utils.hipify

.. This module needs to be documented. Adding here in the meantime
.. for tracking purposes
.. py:module:: torch.utils.model_dump
.. py:module:: torch.utils.viz
.. py:module:: torch.functional
.. py:module:: torch.quasirandom
.. py:module:: torch.return_types
.. py:module:: torch.serialization
.. py:module:: torch.signal.windows.windows
.. py:module:: torch.sparse.semi_structured
.. py:module:: torch.storage
.. py:module:: torch.torch_version
.. py:module:: torch.types
.. py:module:: torch.version
795