pytorch/test/cpp/api
Latest commit: d9d34922a0 "Extend jit::load to work on flatbuffer file" (#75022) by Pavithran Ramachandran, 2022-04-02 01:33:34 +00:00. Extends torch::jit::load to read flatbuffer files. Pull request: https://github.com/pytorch/pytorch/pull/75022. Reviewed by iseeyuan (Differential Revision: D35060736).
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| any.cpp | | |
| autograd.cpp | [easy][PyTorch] Use at::native::is_nonzero (#67195) | 2021-10-26 12:40:32 -07:00 |
| CMakeLists.txt | Compile without -Wno-unused-variable (take 2) (#66041) | 2021-10-04 20:39:39 -07:00 |
| dataloader.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| dispatch.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| enum.cpp | | |
| expanding-array.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| fft.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| functional.cpp | [caffe2] fix build failures in optimized builds under clang | 2022-02-22 22:31:47 +00:00 |
| grad_mode.cpp | | |
| imethod.cpp | [deploy][1/n] Make deploy code conform to PyTorch style. (#65861) | 2021-09-30 22:59:47 -07:00 |
| inference_mode.cpp | | |
| init_baseline.h | | |
| init_baseline.py | | |
| init.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| integration.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| jit.cpp | | |
| memory.cpp | | |
| meta_tensor.cpp | | |
| misc.cpp | Resolve int[]? arguments to new OptionalIntArrayRef class | 2022-03-26 01:45:50 +00:00 |
| module.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| moduledict.cpp | | |
| modulelist.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| modules.cpp | Implement Tanh Gelu Approximation (#61439) | 2022-02-14 03:40:32 +00:00 |
| namespace.cpp | | |
| nn_utils.cpp | use irange for loops 10 (#69394) | 2021-12-09 09:49:34 -08:00 |
| operations.cpp | use irange for loops 5 (#66744) | 2021-10-18 21:59:50 -07:00 |
| optim_baseline.h | | |
| optim_baseline.py | | |
| optim.cpp | use irange for loops 5 (#66744) | 2021-10-18 21:59:50 -07:00 |
| ordered_dict.cpp | | |
| parallel_benchmark.cpp | | |
| parallel.cpp | use irange for loops 5 (#66744) | 2021-10-18 21:59:50 -07:00 |
| parameterdict.cpp | | |
| parameterlist.cpp | use irange for loops 5 (#66744) | 2021-10-18 21:59:50 -07:00 |
| README.md | | |
| rnn.cpp | | |
| sequential.cpp | use irange for loops 5 (#66744) | 2021-10-18 21:59:50 -07:00 |
| serialize.cpp | Extend jit::load to work on flatbuffer file (#75022) | 2022-04-02 01:33:34 +00:00 |
| special.cpp | | |
| static.cpp | use irange for loops 5 (#66744) | 2021-10-18 21:59:50 -07:00 |
| support.cpp | | |
| support.h | | |
| tensor_cuda.cpp | | |
| tensor_flatten.cpp | | |
| tensor_indexing.cpp | | |
| tensor_options_cuda.cpp | | |
| tensor_options.cpp | | |
| tensor.cpp | use irange for loops 5 (#66744) | 2021-10-18 21:59:50 -07:00 |
| torch_include.cpp | | |
| transformer.cpp | | |

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.
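
A test in this folder is an ordinary GoogleTest case that exercises the C++ frontend. The sketch below only illustrates that style; the suite and test names are made up and do not correspond to an existing test:

#include <gtest/gtest.h>
#include <torch/torch.h>

// Illustrative only: a GoogleTest case exercising the C++ frontend.
TEST(ExampleSuite, LinearProducesExpectedShape) {
  torch::nn::Linear linear(/*in_features=*/4, /*out_features=*/2);
  auto output = linear->forward(torch::ones({3, 4}));
  ASSERT_EQ(output.size(0), 3);
  ASSERT_EQ(output.size(1), 2);
}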

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
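
The sketch below shows roughly how such filtering can be implemented. It is a simplified illustration rather than the actual contents of main.cpp, and the add_negative_filter helper is a name invented here:

#include <gtest/gtest.h>
#include <torch/cuda.h>

#include <string>

// Hypothetical helper: append a negative pattern to GoogleTest's filter.
// GoogleTest filters have the form "positive_patterns[-negative_patterns]",
// with patterns separated by ':'.
void add_negative_filter(const std::string& pattern) {
  std::string filter = ::testing::GTEST_FLAG(filter);
  filter += (filter.find('-') == std::string::npos) ? "-" : ":";
  filter += pattern;
  ::testing::GTEST_FLAG(filter) = filter;
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  if (!torch::cuda::is_available()) {
    // No CUDA device: skip both single- and multi-GPU tests.
    add_negative_filter("*_CUDA");
    add_negative_filter("*_MultiCUDA");
  } else if (torch::cuda::device_count() < 2) {
    // Only one device: skip tests that need at least two GPUs.
    add_negative_filter("*_MultiCUDA");
  }
  return RUN_ALL_TESTS();
}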

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
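
For illustration, an integration test can open the dataset through the C++ data API using that relative path, as in the sketch below (a simplified, hypothetical example rather than the code in integration.cpp):

#include <torch/torch.h>

int main() {
  // Resolved relative to the current working directory, which is why the
  // integration tests must be run from the PyTorch root folder.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader = torch::data::make_data_loader(
      std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    // For a full batch: batch.data is [64, 1, 28, 28], batch.target is [64].
    break;
  }
  return 0;
}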