
README.md

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the
[GoogleTest](https://github.com/google/googletest) test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix your test name
with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix
it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

Logic in `main.cpp` detects the availability and number of CUDA devices and
supplies the appropriate negative filters to GoogleTest.

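The filtering idea can be sketched as follows. `build_gtest_filter` is a hypothetical helper for illustration only, not the actual code in `main.cpp`; it relies on GoogleTest's convention that patterns after a `-` in the filter string name tests to *exclude*:

```cpp
#include <string>

// Hypothetical sketch of the device-based filtering: given the number of
// visible CUDA devices, build a GoogleTest filter string that excludes the
// tests whose requirements are not met.
std::string build_gtest_filter(int cuda_device_count) {
  std::string filter = "*";  // run everything by default
  if (cuda_device_count < 1) {
    // No CUDA at all: exclude both single- and multi-GPU tests.
    filter += "-*_CUDA:*_MultiCUDA";
  } else if (cuda_device_count < 2) {
    // One device: exclude only the multi-GPU tests.
    filter += "-*_MultiCUDA";
  }
  return filter;
}
```

The resulting pattern could then be handed to GoogleTest before `RUN_ALL_TESTS()`, e.g. via `::testing::GTEST_FLAG(filter)`.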
## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the
following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The dataset paths are referenced as `test/cpp/api/mnist/...` in the test
code, so you *must* run the integration tests from the PyTorch root folder.
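A quick way to see why the working directory matters: the relative path above only resolves when the test process starts in the repository root. The helper below is a hypothetical sketch, not part of the test suite; the four file names are the standard MNIST archive names that the download script is assumed to produce (verify against your local download):

```cpp
#include <filesystem>
#include <string>
#include <vector>

// Hypothetical helper: returns true only if the MNIST files are reachable
// via the relative path the tests use, i.e. when `root` is the PyTorch
// repository root (or the current working directory, if `root` is ".").
bool mnist_is_visible(const std::filesystem::path& root) {
  const std::vector<std::string> files = {
      "train-images-idx3-ubyte",
      "train-labels-idx1-ubyte",
      "t10k-images-idx3-ubyte",
      "t10k-labels-idx1-ubyte",
  };
  for (const auto& name : files) {
    if (!std::filesystem::exists(root / "test/cpp/api/mnist" / name)) {
      return false;  // dataset not visible from this directory
    }
  }
  return true;
}
```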