# TunableOp

This directory implements a TunableOp interface.

Some operations, such as GEMMs, could be implemented using more than one library or more than one technique. For
example, a GEMM could be implemented for CUDA or ROCm using either the blas or blasLt libraries. Further, ROCm's
rocblas and hipblaslt libraries allow the user to query for all possible algorithms and then choose one. How does one
know which implementation is the fastest and should be chosen? That's what TunableOp provides.

## Enabling TunableOp and Tuning Separately
The TunableOp feature is enabled separately from enabling the tuning phase itself. Enabling TunableOp means that PyTorch
will replace any standard operators with their Tunable implementations. Any call to a TunableOp first checks whether it
has already been tuned for the given operator inputs. If so, it will immediately call the tuned operation; no further
tuning will take place even when the tuning setting is enabled. If instead no tuning result is found and tuning is
enabled, the TunableOp will benchmark every registered implementation of that operator for the given set of inputs and
select the fastest.

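For example, a minimal sketch using the Python interface documented later in this README; both settings shown here are
the defaults once TunableOp is enabled and are written out only for illustration:

```python
import torch.cuda.tunable as tunable

# Replace standard operators with their Tunable implementations.
tunable.enable(True)

# Tuning is a separate switch (on by default). With tuning disabled, TunableOp
# still uses previously recorded tunings but will not benchmark new ones.
tunable.tuning_enable(True)
```
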
## File Input and Output
The first time any TunableOp is invoked, the internal database of tuned operations will be prepared by attempting to
read the results from the given file. The default filename is 'tunableop_results.csv'. To support tuning when multiple
GPUs are used across multiple processes, the GPU device ordinal is automatically inserted into the filename to avoid
multiple processes overwriting the same file.

If tuning is enabled and new tunings are discovered during the course of your workload, TunableOp will also write all
tunings to this same file, both the ones it read in at startup and the new ones found at runtime. This can be used, for
example, to build up a tunings file across many workloads by reusing the same file. The output file is written
automatically when the application terminates. This behavior can be controlled by the C++ and Python APIs but not the
environment variables.

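As a sketch, the same file behavior can be controlled from Python using the `torch.cuda.tunable` APIs listed later in
this README (the filename below is only an example):

```python
import torch.cuda.tunable as tunable

# Use a custom results file; insert_device_ordinal=True adds the GPU device
# ordinal to the name so multiple processes do not overwrite the same file.
tunable.set_filename("my_tunings.csv", insert_device_ordinal=True)

# Writing the file at program exit is on by default; it can be turned off and
# the file written explicitly instead.
tunable.write_file_on_exit(False)
# ... run and tune the workload ...
tunable.write_file()  # falls back to get_filename() when no name is given
```
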
Assuming you specified a filename, you'll end up with a CSV file with contents like so:

```
Validator,PT_VERSION,2.2.0
Validator,ROCM_VERSION,6.0.0.0-12969-1544e39
Validator,HIPBLASLT_VERSION,0.6.0-a9c5cc7
Validator,ROCBLAS_VERSION,4.0.0-72e57364-dirty
GemmTunableOp_float_NT,nt_25088_4096_64,1219,1.262
GemmTunableOp_float_NT,nt_4096_4096_64,1216,0.033
```

Note the "Validator" lines. If you change a library version, ROCm version, or PyTorch version, TunableOp will detect
this and reject the tunings file because the prior tunings are likely affected by other software changes.

The remaining lines are the tuned solutions for each TunableOp encountered during your execution. Each line consists of
4 comma-separated fields: operator name, operator parameters, solution name, and average execution time. The execution
time is an optional field. The CSV file can be edited, but with caution. For example, the solution name (field 3) can be
changed to "Default" and it will fall back to the original PyTorch untuned implementation. Or, in the case of ROCm's
hipBLAS or hipBLASLt libraries, if you know the specific solution index you can override the solution that TunableOp
selected by replacing the value. The operator name and parameters (fields 1 and 2) are internally named and should not
be modified. In the case of GemmTunableOp, field 1 indicates the datatype and whether the inputs are transposed (T) or
not (N), and field 2 indicates the M, N, K input shapes.

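For example, taking one line from the results file shown above and replacing the solution name with "Default" forces
that particular GEMM back to the untuned implementation (the execution time field may be omitted since it is optional):

```
GemmTunableOp_float_NT,nt_25088_4096_64,Default
```
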
There is an option to enable verbose output but it is only recommended for debugging purposes. This will produce a lot
of diagnostic messages but may be useful to see if TunableOp is being used at all. Otherwise, TunableOp is completely
silent, besides file output, unless there is a warning or error during its use.

## A Note on Tuning Behavior, Warmup, and Cache Effects
Tuning an operator consists of iterating through the list of registered implementations and profiling each one. The
profile is established by running a single implementation in a loop multiple times and taking the average execution
time. There is also an optional warmup phase prior to tuning that can help the hardware reach stable power states.
During tuning of a workload the various hardware caches will more likely produce hits than when not tuning. There are
options for flushing the instruction cache and rotating the input tensors, which might help produce a more faithful
profile of the tuned operator, as if the operator were run within a larger workload instead of in a tight, repetitive
loop.

By default, each possible solution for a given operator will be run for either 100 iterations or as many iterations as
can be run within 30ms, whichever is smaller, and its average execution time will be calculated. The fastest solution
among all that were successfully profiled will be chosen. A profile might fail if the given solution doesn't achieve
the same accuracy as the default implementation or if the solution returns an error code.

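These limits can be adjusted through the interfaces described below; as a sketch using the Python API, with the values
shown being simply the documented defaults:

```python
import torch.cuda.tunable as tunable

# Per-solution tuning budget; the effective iteration count is whichever of
# the two limits is smaller.
tunable.set_max_tuning_iterations(100)  # default: 100 iterations
tunable.set_max_tuning_duration(30)     # default: 30 ms (see PYTORCH_TUNABLEOP_MAX_TUNING_DURATION_MS)

# The warmup phase, instruction cache flushing, and the rotating input buffer
# are controlled through the PYTORCH_TUNABLEOP_* environment variables listed below.
```
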
## Current Tunable Operators

### TunableGemm for ROCm
Currently only a TunableGemm for ROCm is implemented. Note that CUDA builds of PyTorch will function correctly when
using TunableOp, but the only solution available to CUDA builds is the 'Default' implementation, i.e. the original
cuBLAS default, now called through TunableOp. Any call to at::cuda::blas::gemm() or ::bgemm() will be routed through
TunableOp when enabled. Calling gemm() for a given set of input arguments (transa, transb, m, n, k) will attempt to use
the fastest available implementation across both rocblas and hipblaslt.

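As an illustration, with TunableOp enabled a plain float matmul on the GPU reaches gemm() through TunableOp and is
tuned on first use for its particular input configuration (a minimal sketch; the sizes here are arbitrary):

```python
import torch
import torch.cuda.tunable as tunable

tunable.enable(True)  # tuning itself is on by default

a = torch.randn(4096, 64, device="cuda", dtype=torch.float32)
b = torch.randn(64, 4096, device="cuda", dtype=torch.float32)

# The first matmul with this configuration triggers tuning of the underlying
# gemm(); later calls with the same configuration reuse the chosen solution.
c = a @ b

print(tunable.get_results())
```
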
## Tuning Context
The behavior of TunableOp is currently manipulated through environment variables, the C++ interface of
at::cuda::tunable::getTuningContext(), or the `torch.cuda.tunable` Python interface. The environment variables take
precedence over any setting you manipulate using the C++ or Python APIs.

### Environment Variable Interface
Environment variables are cached the first time they are read. You cannot use the environment variable interface
programmatically since the settings become fixed. Use the C++ or Python APIs instead.

| Environment Variable | Description |
| -------------------- | ----------- |
| PYTORCH_TUNABLEOP_ENABLED | Default is 0. Set to 1 to enable. |
| PYTORCH_TUNABLEOP_TUNING | Default is 1. Set to 0 to disable. |
| PYTORCH_TUNABLEOP_VERBOSE | Default is 0. Set to 1 to enable basic logging. 2 for basic tuning status. 3 for full trace. |
| PYTORCH_TUNABLEOP_VERBOSE_FILENAME | Default is "err" for stderr. Set to "out" for stdout or a filename for capturing verbose logging. |
| PYTORCH_TUNABLEOP_FILENAME | Default is 'tunableop_results.csv'. |
| PYTORCH_TUNABLEOP_NUMERICAL_CHECK | Default is 0. Set to 1 to enable. |
| PYTORCH_TUNABLEOP_ROCBLAS_ENABLED | Default is 1. Set to 0 to disable rocblas being considered during tuning. |
| PYTORCH_TUNABLEOP_HIPBLASLT_ENABLED | Default is 1. Set to 0 to disable hipblaslt being considered during tuning. |
| PYTORCH_TUNABLEOP_MAX_TUNING_DURATION_MS | Default is 30. Unit is milliseconds. |
| PYTORCH_TUNABLEOP_MAX_TUNING_ITERATIONS | Default is 100. |
| PYTORCH_TUNABLEOP_MAX_WARMUP_DURATION_MS | Default is 0, meaning it is not used. Unit is milliseconds. |
| PYTORCH_TUNABLEOP_MAX_WARMUP_ITERATIONS | Default is 0, meaning it is not used. |
| PYTORCH_TUNABLEOP_ICACHE_FLUSH_ENABLED | Default is 1. Set to 0 to disable. |
| PYTORCH_TUNABLEOP_ROTATING_BUFFER_SIZE | Default is to query L2 cache size. Set to 0 to disable. Otherwise, set to the number of MiB to use for the pool of operator parameters. For example, setting this to the size of your device's memory cache will guarantee that every tuning iteration will use a cold cache. |

### Python Interface
All Python APIs exist in the `torch.cuda.tunable` module.

| Python API | Description |
| ---------- | ----------- |
| enable(val: bool = True) -> None | |
| is_enabled() -> bool | |
| tuning_enable(val: bool = True) -> None | Default is True. |
| tuning_is_enabled() -> bool | |
| set_max_tuning_duration(duration: int) -> None | |
| get_max_tuning_duration() -> int | |
| set_max_tuning_iterations(iterations: int) -> None | |
| get_max_tuning_iterations() -> int | |
| set_filename(filename: str, insert_device_ordinal: bool = False) -> None | |
| get_filename() -> str | |
| get_results() -> Tuple[str, str, str, float] | |
| get_validators() -> Tuple[str, str] | |
| write_file_on_exit(val: bool) -> None | Default is True. |
| write_file(filename: Optional[str] = None) -> None | If filename not given, it will call get_filename(). |
| read_file(filename: Optional[str] = None) -> None | If filename not given, it will call get_filename(). |

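A brief sketch combining several of these calls; with no argument, read_file() and write_file() fall back to the
current get_filename():

```python
import torch.cuda.tunable as tunable

tunable.enable(True)

# Load prior tunings, if the results file exists and its validators match.
tunable.read_file()

# ... run the workload ...

# Inspect the validators and tuned results, then write them back out.
print(tunable.get_validators())
print(tunable.get_results())
tunable.write_file()
```
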
### C++ Interface
Example:
```C++
#include <ATen/cuda/tunable/Tunable.h>

at::cuda::tunable::getTuningContext()->EnableTunableOp(true);
```

| C++ API | Description |
| ------- | ----------- |
| void EnableTunableOp(bool value); | |
| bool IsTunableOpEnabled() const; | |
| void EnableTuning(bool value); | |
| bool IsTuningEnabled() const; | |
| void SetMaxTuningDurationMs(int max_duration_ms); | |
| int GetMaxTuningDurationMs() const; | |
| void SetMaxTuningIterations(int max_iter); | |
| int GetMaxTuningIterations() const; | |
| TuningResults GetTuningResults(); | |
| void SetFilename(const std::string& filename, bool insert_device_ordinal=false); | |
| std::string GetFilename() const; | |
| void WriteFileOnExit(bool value); | |
| bool ReadFile(const std::string& filename={}); | |
| bool WriteFile(const std::string& filename={}); | |
