pytorch/test/cpp/api
Sam Estep 5bcbbf5373 Lint trailing newlines (#54737)
Summary:
*Context:* https://github.com/pytorch/pytorch/issues/53406 added a lint for trailing whitespace at the ends of lines. However, in order to pass FB-internal lints, that PR also had to normalize the trailing newlines in four of the files it touched. This PR adds an OSS lint to normalize trailing newlines.

The changes to the following files (made in 54847d0adb9be71be4979cead3d9d4c02160e4cd) are the only manually-written parts of this PR:

- `.github/workflows/lint.yml`
- `mypy-strict.ini`
- `tools/README.md`
- `tools/test/test_trailing_newlines.py`
- `tools/trailing_newlines.py`

I would have liked to make this just a shell one-liner like the other three similar lints, but nothing I could find quite fit the bill. Specifically, all the answers I tried from the following Stack Overflow questions were far too slow (at least a minute and a half to run on this entire repository):

- [How to detect file ends in newline?](https://stackoverflow.com/q/38746)
- [How do I find files that do not end with a newline/linefeed?](https://stackoverflow.com/q/4631068)
- [How to list all files in the Git index without newline at end of file](https://stackoverflow.com/q/27624800)
- [Linux - check if there is an empty line at the end of a file [duplicate]](https://stackoverflow.com/q/34943632)
- [git ensure newline at end of each file](https://stackoverflow.com/q/57770972)

To avoid giving false positives during the few days after this PR is merged, we should probably only merge it after https://github.com/pytorch/pytorch/issues/54967.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54737

Test Plan:
Running the shell script from the "Ensure correct trailing newlines" step in the `quick-checks` job of `.github/workflows/lint.yml` should print no output and exit in a fraction of a second with a status of 0. That was not the case prior to this PR, as shown by this failing GHA workflow run on an earlier draft of this PR:

- https://github.com/pytorch/pytorch/runs/2197446987?check_suite_focus=true

In contrast, this run (after correcting the trailing newlines in this PR) succeeded:

- https://github.com/pytorch/pytorch/pull/54737/checks?check_run_id=2197553241

To unit-test `tools/trailing_newlines.py` itself (this is run as part of our "Test tools" GitHub Actions workflow):
```
python tools/test/test_trailing_newlines.py
```

Reviewed By: malfet

Differential Revision: D27409736

Pulled By: samestep

fbshipit-source-id: 46f565227046b39f68349bbd5633105b2d2e9b19
2021-03-30 13:09:52 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| any.cpp | | |
| autograd.cpp | Fix autograd when inputs contains tensors without materialized grad_fn (#51940) | 2021-02-11 09:22:15 -08:00 |
| CMakeLists.txt | Revert D26973911: Implement public API InferenceMode and its error handling | 2021-03-29 11:17:49 -07:00 |
| dataloader.cpp | Lint trailing newlines (#54737) | 2021-03-30 13:09:52 -07:00 |
| dispatch.cpp | [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023) | 2020-05-27 14:07:26 -07:00 |
| enum.cpp | | |
| expanding-array.cpp | | |
| fft.cpp | Remove deprecated spectral ops from torch namespace (#48594) | 2020-12-05 04:12:32 -08:00 |
| functional.cpp | Forbid trailing whitespace (#53406) | 2021-03-05 17:22:55 -08:00 |
| init_baseline.h | Lint trailing newlines (#54737) | 2021-03-30 13:09:52 -07:00 |
| init_baseline.py | | |
| init.cpp | [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023) | 2020-05-27 14:07:26 -07:00 |
| integration.cpp | | |
| jit.cpp | | |
| memory.cpp | | |
| misc.cpp | codegen: Resolve overload ambiguities created by defaulted arguments (#49348) | 2021-01-04 11:59:16 -08:00 |
| module.cpp | | |
| moduledict.cpp | Implement C++ ModuleDict (#47707) | 2020-11-19 08:07:51 -08:00 |
| modulelist.cpp | | |
| modules.cpp | Add padding='same' mode to conv{1,2,3}d (#45667) | 2021-03-18 16:22:03 -07:00 |
| namespace.cpp | | |
| nn_utils.cpp | Raise error in clip_grad_norm_ if norm is non-finite (#53843) | 2021-03-29 08:41:21 -07:00 |
| operations.cpp | [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#43953) | 2020-09-01 21:48:28 -07:00 |
| optim_baseline.h | Add AdamW to C++ frontend (#40009) | 2020-06-18 15:28:12 -07:00 |
| optim_baseline.py | Add AdamW to C++ frontend (#40009) | 2020-06-18 15:28:12 -07:00 |
| optim.cpp | Adding learning rate schedulers to C++ API (#52268) | 2021-03-10 23:09:51 -08:00 |
| ordered_dict.cpp | | |
| parallel_benchmark.cpp | [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681) | 2020-05-05 08:41:38 -07:00 |
| parallel.cpp | [PyTorch] Modify data_parallel to work with small tensors (#37704) | 2020-05-04 11:06:42 -07:00 |
| parameterdict.cpp | Python/C++ API Parity: Add impl and tests for ParameterDict (#40654) | 2020-06-29 08:50:44 -07:00 |
| parameterlist.cpp | Impl for ParameterList (#41259) | 2020-07-12 20:50:31 -07:00 |
| README.md | Rewrite C++ API tests in gtest (#11953) | 2018-09-21 21:28:16 -07:00 |
| rnn.cpp | Adding support for CuDNN-based LSTM with projections (#47725) | 2020-12-16 11:27:02 -08:00 |
| sequential.cpp | | |
| serialize.cpp | Modernize for-loops (#50912) | 2021-01-22 10:53:24 -08:00 |
| special.cpp | [special] add torch.special namespace (#52296) | 2021-03-04 00:04:36 -08:00 |
| static.cpp | | |
| support.cpp | | |
| support.h | Revert D26973911: Implement public API InferenceMode and its error handling | 2021-03-29 11:17:49 -07:00 |
| tensor_cuda.cpp | | |
| tensor_flatten.cpp | fix unflatten_dense_tensor when there is empty tensor inside (#50321) | 2021-01-23 12:14:34 -08:00 |
| tensor_indexing.cpp | Making ops c10-full: list of optional tensors (#49138) | 2021-01-04 05:04:02 -08:00 |
| tensor_options_cuda.cpp | | |
| tensor_options.cpp | [PyTorch] Narrow Device to 2 bytes by narrowing DeviceType and DeviceIndex (#47023) | 2020-11-18 19:39:40 -08:00 |
| tensor.cpp | Change to.dtype_layout to c10-full (#41169) | 2020-07-10 16:04:34 -07:00 |
| torch_include.cpp | | |
| transformer.cpp | C++ APIs Transformer NN Module Top Layer (#44333) | 2020-09-11 08:25:27 -07:00 |

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix your test with `_CUDA`, e.g.

```
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
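
For illustration only (these cases are not taken from the test suite), a suffixed test would typically exercise an actual device, for example by constructing a tensor on it:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: runs only when at least one CUDA device is present.
TEST(MyTestSuite, TensorLivesOnGPU_CUDA) {
  auto t = torch::ones({2, 2}, torch::device(torch::kCUDA));
  ASSERT_TRUE(t.is_cuda());
}

// Hypothetical example: runs only when at least two CUDA devices are present.
TEST(MyTestSuite, TensorOnSecondGPU_MultiCUDA) {
  auto t = torch::zeros({2, 2}, torch::device(torch::Device(torch::kCUDA, 1)));
  ASSERT_EQ(t.device().index(), 1);
}
```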

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
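
As a rough sketch only (not the actual `main.cpp`), that filtering could look like this:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Sketch: translate CUDA availability into a negative GoogleTest filter.
// The real main.cpp may differ in details.
int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);

  std::string negative_filter;
  if (!torch::cuda::is_available()) {
    // No CUDA device at all: exclude single- and multi-GPU tests.
    negative_filter = "-*_CUDA:*_MultiCUDA";
  } else if (torch::cuda::device_count() < 2) {
    // Exactly one device: exclude only the multi-GPU tests.
    negative_filter = "-*_MultiCUDA";
  }
  if (!negative_filter.empty()) {
    ::testing::GTEST_FLAG(filter) = negative_filter;
  }
  return RUN_ALL_TESTS();
}
```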

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
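
For reference, here is a minimal sketch (assuming the dataset was downloaded to the path above) of loading MNIST through the C++ data API; the actual integration tests may read the files differently:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Assumes the MNIST files live under test/cpp/api/mnist (see the command above).
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader = torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    std::cout << "batch data: " << batch.data.sizes()
              << " targets: " << batch.target.sizes() << std::endl;
    break;  // one batch is enough for this illustration
  }
}
```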