Commit Graph

267 Commits

Mike Ruberry
36c87f1243 Refactors test_torch.py to be fewer than 10k lines (#47356)
Summary:
Creates multiple new test suites to have fewer tests in test_torch.py, consistent with previous test suite creation like test_unary_ufuncs.py and test_linalg.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47356

Reviewed By: ngimel

Differential Revision: D25202268

Pulled By: mruberry

fbshipit-source-id: 75fde3ca76545d1b32b86d432a5cb7a5ba8f5bb6
2020-11-28 20:11:40 -08:00
Jithun Nair
f1c985695c Enabled gloo backend in test_distributed unit tests for ROCm (#40395)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40395

Reviewed By: ngimel

Differential Revision: D25181692

Pulled By: mrshenli

fbshipit-source-id: 29f478c974791efc0acea210c8c9e574944746a5
2020-11-25 19:51:40 -08:00
Sam Estep
c4a6df989c Pass any verbosity from test/run_test.py to pytest (#48204)
Summary:
Previously it was only possible to pass up to one [verbosity level](https://adamj.eu/tech/2019/10/03/my-most-used-pytest-commandline-flags/) to `pytest` when running a test via `test/run_test.py`. Presumably that behavior was never added because `unittest` [doesn't do anything extra](https://stackoverflow.com/a/1322648/5044950) when given more than one `--verbose` flag. This PR removes that limitation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48204

Test Plan:
Make a dummy `pytest`-style file `test/test_foo.py`:
```py
def test_bar():
    assert 'hello\n' * 10 == 'hello\n' * 20
```
Then add `'test_foo'` to both `TESTS` and `USE_PYTEST_LIST` in `test/run_test.py`, and run this command:
```sh
test/run_test.py -vvi test_foo
```

Reviewed By: walterddr

Differential Revision: D25069147

Pulled By: samestep

fbshipit-source-id: 2765ee78d18cc84ea0e262520838993f9e9ee04f
2020-11-19 08:06:26 -08:00
Wanchao Liang
bc484cfed1 [c10d][jit] initial torchbind bindings for ProcessGroupNCCL (#42944)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42944

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D23228682

Pulled By: wanchaol

fbshipit-source-id: 30f4258ec2a90202264745511b897f4e1f5550f7
2020-11-17 21:01:55 -08:00
Xiang Gao
6e42b77be1 Add '--allow-run-as-root' to mpiexec to allow running distributed test inside a container (#43794)
Summary:
Inside a container, the user is often root. We should allow this use case so that people can easily run `run_test.py` inside a container.
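
As an illustrative sketch (not the exact run_test.py change), a test driver can append the flag only when running as root:

```py
# Hypothetical helper: build an mpiexec command line, appending Open MPI's
# --allow-run-as-root flag when the effective user is root (e.g., in a container).
import os

def mpiexec_command(nprocs, script):
    cmd = ["mpiexec", "-n", str(nprocs)]
    if os.geteuid() == 0:
        cmd.append("--allow-run-as-root")
    return cmd + ["python", script]
```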

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43794

Reviewed By: ezyang

Differential Revision: D24904469

Pulled By: malfet

fbshipit-source-id: f96cb9dda3e7bd18b29801cde4c5b0616c750016
2020-11-13 15:31:06 -08:00
Jane Xu
579cfc6641 Moving test order to rebalance test1 and test2 times (#47290)
Summary:
The ASan test timing split is absurd right now; this moves some heftier tests (test_nn and test_quantization) into shard 2.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47290

Reviewed By: malfet

Differential Revision: D24706877

Pulled By: janeyx99

fbshipit-source-id: 35069d1e425857f85775f9be76501d6a158e0376
2020-11-03 09:39:29 -08:00
Pritam Damania
78de12f588 Replace -f with -x for pytest tests. (#46967)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46967

Tests under `test/distributed/_pipeline/sync` use pytest, and specifying the `-f`
option for such tests, as in `python test/run_test.py
-i distributed/_pipeline/sync/skip/test_api -- -f`, doesn't work.

The equivalent option for pytest is `-x`. To resolve this issue, I've updated
`run_test.py` to replace `-f` with `-x` for pytest tests.
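
A minimal sketch of the substitution (names are illustrative, not the exact run_test.py code):

```py
# unittest spells "fail fast" as -f, while pytest spells it -x; translate when
# the chosen test file is run under pytest.
def adjust_extra_args(extra_args, use_pytest):
    if not use_pytest:
        return extra_args
    return ["-x" if arg == "-f" else arg for arg in extra_args]

assert adjust_extra_args(["-f", "-v"], use_pytest=True) == ["-x", "-v"]
```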

More details in https://github.com/pytorch/pytorch/issues/46782

Closes: https://github.com/pytorch/pytorch/issues/46782
ghstack-source-id: 115440558

Test Plan:
1) waitforbuildbot
2) `python test/run_test.py -i distributed/_pipeline/sync/skip/test_api -- -f`

Reviewed By: malfet

Differential Revision: D24584556

fbshipit-source-id: bd87f5b4953504e5659fe72fc8615e126e5490ff
2020-10-29 15:28:06 -07:00
Jane Xu
85954164a4 fix minor bug, message variable does not exist (#46777)
Summary:
When run with `--continue-through-error`, the script ends with the following error:

```
Traceback (most recent call last):
  File "run_test.py", line 745, in <module>
    main()
  File "run_test.py", line 741, in main
    print_to_stderr(message)
NameError: name 'message' is not defined
make: *** [macos-compat] Error 1
```

This PR just changes `message` to `err`, which is the intended variable.
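
A self-contained sketch of the failure mode (helper names are illustrative):

```py
import sys

def print_to_stderr(msg):
    print(msg, file=sys.stderr)

failure_messages = []
for test in ["test_a", "test_b"]:
    try:
        raise RuntimeError(f"{test} failed")  # stand-in for a failing test
    except RuntimeError as e:
        failure_messages.append(str(e))

for err in failure_messages:
    print_to_stderr(err)  # the buggy version passed `message`, a name that was
                          # never bound in this scope, raising NameError
```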

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46777

Reviewed By: seemethere

Differential Revision: D24510460

Pulled By: janeyx99

fbshipit-source-id: be1124b6fc72b178d62acc168d0cbc74962de52b
2020-10-23 14:20:23 -07:00
Pritam Damania
06d50b5eb0 Pull in fairscale.nn.Pipe into PyTorch. (#44090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44090

This is an initial commit pulling in the torchgpipe fork at
https://github.com/facebookresearch/fairscale.

The purpose of this commit is to just pull in the code and ensure all tests and
builds work fine. We will slowly modify this to match our intended API
mentioned in https://fb.quip.com/txurAV3zIFox#RPZACAfAKMq. Follow-up PRs will
address further changes needed on top of the initial commit.

We're pulling the code into the `torch.distributed._pipeline.sync` package. The
package is private on purpose since there is a lot of work (ex: docs, API
changes etc.) that needs to go in before we can actually officially support
this.
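
A usage sketch based on the upstream torchgpipe/fairscale API this code is forked from; the exact constructor signature of the private `torch.distributed._pipeline.sync` package may differ, so the pipeline-specific lines are shown commented:

```py
import torch
import torch.nn as nn
# from torch.distributed._pipeline.sync import Pipe  # private package pulled in here

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
# `balance` splits the Sequential's layers across stages and `chunks` controls
# micro-batching, following the torchgpipe-style API:
# pipe = Pipe(model, balance=[2, 1], chunks=4)
# out = pipe(torch.randn(32, 16))
```
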
ghstack-source-id: 114864254

Test Plan:
1) waitforbuildbot
2) Ran all tests on my devgpu

Reviewed By: mrshenli

Differential Revision: D23493316

fbshipit-source-id: fe3c8b7dadeeb86abdc00e8a8652491b0b16743a
2020-10-22 10:59:02 -07:00
Richard Zou
0285618a11 Add utilities to support handling of nested python data structures (#46287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46287

This adds a lightweight `pytree` implementation that is similar to and
inspired by JAX pytrees, tensorflow.nest, deepmind/tree,
TorchBeast's TensorNest, etc.

A *pytree* is a nested Python data structure. It is a tree in the sense
that nodes are Python collections (e.g., list, tuple, dict) and the leaves
are Python values. Furthermore, a pytree should not contain reference
cycles.

This PR:
- adds support for flattening and unflattening nested Python list/dict/tuples

Context: nested Tensor inputs for vmap
--------------------------------------
Right now, vmap is restricted to taking in flat lists of tensors. This
is because vmap needs to be able to convert every tensor in the input
that is being vmapped over into a BatchedTensor.

With a pytree library, we can simply flatten the input data structure
(returning the leaves), map all of the Tensors in the flat input to
BatchedTensors, and unflatten the flat list of BatchedTensors into a new
input. Or equivalently, with a `tree_map` function, we can map a nested
python data structure containing Tensors into one containing
BatchedTensors.
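
A minimal sketch of that flatten/map/unflatten flow using the private helpers this PR introduces (doubling each leaf stands in for the Tensor-to-BatchedTensor conversion):

```py
import torch
from torch.utils._pytree import tree_flatten, tree_unflatten

inputs = {"x": torch.ones(2), "y": [torch.zeros(3), torch.ones(3)]}
leaves, spec = tree_flatten(inputs)        # flat list of leaves + structure spec
mapped = [t * 2 for t in leaves]           # map every leaf (e.g., to BatchedTensor)
outputs = tree_unflatten(mapped, spec)     # same nested structure, new leaves
assert outputs["y"][1].sum() == 6
```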

Future work
-----------
In some future PRs, we'll add nested input support for vmap. The
prerequisites for that are:
- a `broadcast_to(small, big)` that broadcasts `small` up to `big`.
  This is for handling the in_dims to vmap: the in_dims structure must
  be compatible with the structure of the inputs.

Test Plan
---------
- New tests in test/test_pytree.py

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D24392890

Pulled By: zou3519

fbshipit-source-id: 7daf7430c5a38354e7d203a72882bd7a9b24cfb1
2020-10-20 07:45:45 -07:00
jiej
ac146c4820 [nvFuser] Switching to CudaFusionGuard from BailOut for nvfuser - update 2 (#46452)
Summary:
1. Added CudaFusionGuard as the custom TypeCheck for nvfuser; enabled dynamic shape support with profiling executor;
2. dropped support for legacy fuser;
3. re-enabled nvfuser tests;
4. added registration for profiling record to allow profiling on user specified nodes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46452

Reviewed By: zou3519, anjali411

Differential Revision: D24364642

Pulled By: ngimel

fbshipit-source-id: daf53a9a6b6636e1ede420a3a6d0397d4a8b450b
2020-10-19 15:44:31 -07:00
Taylor Robie
dda95e6914 More Timer refinement (#46023)
Summary:
This PR just adds more polish to the benchmark utils:

1) `common.py`, `timer.py`, and `valgrind_wrapper/timer_interface.py` are now MyPy strict compliant. (except for three violations due to external deps.) Compare and Fuzzer will be covered in a future PR.
2) `CallgrindStats` now uses `TaskSpec` rather than accepting the individual fields which brings it closer to `Measurement`.
3) Some `__repr__` logic has been moved into `TaskSpec` (which `Measurement` and `CallgrindStats` use in their own `__repr__`s) for a more unified feel and less horrible f-string hacking, and the repr's have been given a cleanup pass.
4) `Tuple[FunctionCount, ...]` has been formalized as the `FunctionCounts` class, which has a much nicer `__repr__` than just the raw tuple, as well as some convenience methods (`__add__`, `__sub__`, `filter`, `transform`) for easier DIY stat exploration. (I find myself using the latter two a lot now.) My personal experience is that manipulating `FunctionCounts` is massively more pleasant than the raw tuples of `FunctionCount`. (Though it's still possible to get at the raw data if you want.)
5) Better support for multi-line `stmt` and `setup`.
6) Compare now also supports rowwise coloring, which is often the more natural layout for A/B testing.
7) Limited support for `globals` in `collect_callgrind`. This should make it easier to benchmark JIT models. (CC ZolotukhinM)
8) More unit tests, including extensive tests for the Callgrind stats manipulation APIs.
9) Mitigate issue with `MKL_THREADING_LAYER` when run in Jupyter. (https://github.com/pytorch/pytorch/issues/37377)
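
A small sketch of the APIs touched here: a multi-line-friendly `Timer` with a `globals` dict, plus the (commented) callgrind path, which additionally requires valgrind:

```py
import torch
from torch.utils.benchmark import Timer

timer = Timer(
    stmt="y = x @ x",
    setup="x = torch.ones((64, 64))",
    globals={"torch": torch},
)
print(timer.blocked_autorange())      # a Measurement, using the unified repr

# stats = timer.collect_callgrind()   # a CallgrindStats (needs valgrind installed)
# stats.stats() returns a FunctionCounts, which supports __add__/__sub__,
# filter, and transform for DIY instruction-count exploration.
```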

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46023

Test Plan: changes should be covered by existing and new unit tests.

Reviewed By: navahgar, malfet

Differential Revision: D24313911

Pulled By: robieta

fbshipit-source-id: 835d4b5cde336fb7ff0adef3c0fd614d64df0f77
2020-10-15 16:32:53 -07:00
Wang Xu
62d37b9f26 add size_based_partition final (#46282)
Summary:
This reopens the PR: https://github.com/pytorch/pytorch/pull/45837
This PR adds a new feature to the Partitioner() class called size_based_partition. Given a list of devices with the same memory size, this function distributes graph nodes across the devices. To implement this feature, several helper functions are created in Partitioner.py and GraphManipulation.py.
A unit test is also added in test/test_fx_experimental.py
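
Purely as an illustration of the idea (this is not the actual Partitioner API), a greedy size-based split might look like:

```py
# Assign nodes to same-capacity devices without exceeding per-device memory.
def size_based_partition(node_sizes, num_devices, mem_per_device):
    partitions = [[] for _ in range(num_devices)]
    used = [0] * num_devices
    for node, size in node_sizes:
        device = min(range(num_devices), key=lambda d: used[d])  # least-loaded device
        if used[device] + size > mem_per_device:
            raise RuntimeError("graph does not fit on the given devices")
        partitions[device].append(node)
        used[device] += size
    return partitions

print(size_based_partition([("a", 4), ("b", 3), ("c", 2)], num_devices=2, mem_per_device=8))
# -> [['a'], ['b', 'c']]
```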

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46282

Reviewed By: gcatron

Differential Revision: D24288470

Pulled By: scottxu0730

fbshipit-source-id: e81b1e0c56e34f61e497d868882126216eba7538
2020-10-14 03:44:05 -07:00
Neeraj Pradhan
faa9c22a51 Support pytest for distribution testing (#45648)
Summary:
In response to https://github.com/pytorch/pytorch/issues/11578. This is a test run to see if CI (and other internal systems) work fine with pytest style tests.
 - Creates a separate `distributions` directory within `test`.
 - For testing, this rewrites the `constraint` tests as parameterized tests in pytest. I don't plan to convert any other tests to pytest style, but only expose this option for adding new tests, if required.

If this is a success, we can move `EXAMPLES` in `test_distributions` into a separate file that can be imported by both pytest and unittest style tests. cc. fritzo
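
For reference, a sketch of the parameterized pytest style this enables (the cases shown are illustrative):

```py
import pytest
import torch
from torch.distributions import constraints

@pytest.mark.parametrize("constraint,value,expected", [
    (constraints.positive, torch.tensor(1.0), True),
    (constraints.positive, torch.tensor(-1.0), False),
    (constraints.unit_interval, torch.tensor(0.5), True),
])
def test_constraint_check(constraint, value, expected):
    assert bool(constraint.check(value)) == expected
```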

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45648

Reviewed By: ezyang, colesbury

Differential Revision: D24080248

Pulled By: neerajprad

fbshipit-source-id: 1f2e7d169c3c291a3051d0cece17851560fe9ea9
2020-10-13 10:56:50 -07:00
Jane Xu
ba78eb80ff including tensorexpr tests in CI for all configs (#46188)
Summary:
Removed test_tensorexpr from the JIT-EXECUTOR exclude list.

CI will now run those tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46188

Reviewed By: glaringlee

Differential Revision: D24255433

Pulled By: janeyx99

fbshipit-source-id: f18e5b41d49b439407c1c24ef6190ef68bc809bf
2020-10-12 12:03:06 -07:00
Jane (Yuan) Xu
be137e45cd reorganizing tests so that test1 and test2 are balanced in timing (#45778)
Summary:
Used the --shard option to split up the Python tests run from `test/run_test.py` in the CI testing script.

Also revised the --shard help message to be more accurate.

Test results:
BEFORE:
| EVENT | TIMING |
|---|---|
| **TEST1** | |
| test_python_nn | 35m19s |
| test_cpp_extensions | 30s |
| **total** | **35m49s** |
| **TEST2** | |
| install_torchvision | 35s |
| test_python_all_except_nn_and_cpp_extensions | 255m37s |
| test_aten | SKIPPED |
| test_libtorch | 9m8s |
| test_custom_script_ops | SKIPPED |
| test_custom_backend | SKIPPED |
| test_torch_function_benchmark | 10s |
| **total** | **4hr24m** |

AFTER THIS SHARD:
| EVENT | TIMING |
|---|---|
| **TEST1** | |
| test_autograd | 26m30s |
| test_foreach | 69m |
| test_nn | 35m38s |
| **total** | **3h1m** |
| **TEST2** | |
| test_quantization | 41m28s |
| test_spectral_ops | 17m37s |
| test_torch | 8m56s |
| test_jit_legacy | 16m21s |
| **total** | **2h18m** |

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45778

Reviewed By: albanD

Differential Revision: D24137156

Pulled By: janeyx99

fbshipit-source-id: 5873fec47aedb9f699ebbda653a4d32a9950fc13
2020-10-06 07:57:08 -07:00
Jane Xu
8bc0c755be adding option to move excluding to run_test.py instead of test.sh (#45868)
Summary:
Cleaning up test.sh a tiny bit

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45868

Reviewed By: albanD

Differential Revision: D24122726

Pulled By: janeyx99

fbshipit-source-id: e8254accad15ad887a000ec1401c401389393c92
2020-10-06 07:13:27 -07:00
Jane (Yuan) Xu
6acd7b686c adding sharding option to run_test.py (#45583)
Summary:
Added a sharding option to run_test.py to enable users to run a subset of the many tests. The new `--shard` argument takes two integer values, `x` and `y`, where the larger value denotes the number of shards and the smaller value denotes which shard to run.
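
Illustratively (this is not run_test.py's exact logic), picking shard `i` of `n` from an ordered test list could look like:

```py
# Round-robin assignment keeps the shards roughly balanced in test count.
def select_shard(tests, which_shard, num_shards):
    assert 1 <= which_shard <= num_shards
    return [t for i, t in enumerate(tests) if i % num_shards == which_shard - 1]

assert select_shard(["test_nn", "test_torch", "test_jit"], 1, 2) == ["test_nn", "test_jit"]
```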

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45583

Reviewed By: malfet

Differential Revision: D24083469

Pulled By: janeyx99

fbshipit-source-id: 1777bd7822c95b3bf37079deff9381c6f8eaf4cc
2020-10-02 11:21:51 -07:00
Thomas Viehmann
22a34bcf4e ROCm ❤ TensorExpr (#45506)
Summary:
This might be an alternative to reverting https://github.com/pytorch/pytorch/issues/45396.
The obvious rough edge is that I'm not really seeing the work group limits that TensorExpr produces.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45506

Reviewed By: zhangguanheng66

Differential Revision: D23991410

Pulled By: Krovatkin

fbshipit-source-id: 11d3fc4600e4bffb1d1192c6b8dd2fe22c1e064e
2020-09-29 16:52:16 -07:00
gunandrose4u
f07ac6a004 Fix Windows build failure after DDP PR merged (#45335)
Summary:
This is a resubmit of PR https://github.com/pytorch/pytorch/issues/42897, together with a fix for the Windows build issue introduced by PR https://github.com/pytorch/pytorch/issues/44344.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45335

Reviewed By: zou3519

Differential Revision: D23931471

Pulled By: mrshenli

fbshipit-source-id: f49b5a114944c1450b32934b3292170be064f494
2020-09-25 12:37:50 -07:00
Mike Ruberry
95df8657c9 Enables test linalg (#45278)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45271.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45278

Reviewed By: ngimel

Differential Revision: D23926124

Pulled By: mruberry

fbshipit-source-id: 26692597f9a1988e5fa846f97b8430c3689cac27
2020-09-24 23:09:38 -07:00
Mike Ruberry
103fa3894a Revert D23841786: [pytorch][PR] Enable distributed package on windows, Gloo backend supported only
Test Plan: revert-hammer

Differential Revision:
D23841786 (0122299f9b)

Original commit changeset: 334ba1ed73ef

fbshipit-source-id: ec95432f9957df56a5a04e52661f5db920b7f57f
2020-09-24 22:44:33 -07:00
gunandrose4u
0122299f9b Enable distributed package on windows, Gloo backend supported only (#42897)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42095

The test cases will be committed to this PR later.

mrshenli, please help to review

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42897

Reviewed By: osalpekar

Differential Revision: D23841786

Pulled By: mrshenli

fbshipit-source-id: 334ba1ed73eff2f668857390fc32d1bc7f08e5f3
2020-09-24 21:13:55 -07:00
Zachary DeVito
cb75addee4 torch.package - a way to package models and code (#45015)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45015

torch.package allows you to write packages of code, pickled python data, and
arbitrary binary and text resources into a self-contained package.

torch.package.PackageExporter writes the packages and
torch.package.PackageImporter reads them.

The importers can load this code in a hermetic way, such that code is loaded
from the package rather than the normal python import system. This allows
for the packaging of PyTorch model code and data so that it can be run
on a server or used in the future for transfer learning.

The code contained in packages is copied file-by-file from the original
source when it is created, and the file format is a specially organized
zip file. Future users of the package can unzip the package, and edit the code
in order to perform custom modifications to it.

The importer for packages ensures that code in the module can only be loaded from
within the package, except for modules explicitly listed as external using :method:`extern_module`.
The file `extern_modules` in the zip archive lists all the modules that a package externally depends on.
This prevents "implicit" dependencies where the package runs locally because it is importing
a locally-installed package, but then fails when the package is copied to another machine.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D23824337

Pulled By: zdevito

fbshipit-source-id: 1247c34ba9b656f9db68a83e31f2a0fbe3bea6bd
2020-09-22 21:21:21 -07:00
Richard Zou
07cba8b1fc Run vmap tests in CI (#44656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44656

All this time, test_vmap wasn't running in the CI. Fortunately all the
tests pass locally for me. h/t to anjali411 for pointing this out.

Test Plan: - Wait for CI

Reviewed By: anjali411

Differential Revision: D23689355

Pulled By: zou3519

fbshipit-source-id: 543c3e6aed0af77bfd6ea7a7549337f8230e3d32
2020-09-15 10:59:00 -07:00
Nikita Shulga
fc51047af5 Small fixes in Dependency.cmake and run_test.py (#44414)
Summary:
Do not add gencode flags to NVCC_FLAGS twice: First time they are added in `cmake/public/cuda.cmake` no need to do it again in `cmake/Dependencies.cmake`
Copy `additional_unittest_args` before appending local options to it in `run_test()` method

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44414

Reviewed By: seemethere

Differential Revision: D23605733

Pulled By: malfet

fbshipit-source-id: 782a0da61650356a978a892fb03c66cb1a1ea26b
2020-09-09 15:09:33 -07:00
Rohan Varma
106459acac Rename test_distributed to test_distributed_fork (#42932)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42932

Follow up from https://github.com/pytorch/pytorch/pull/41769, rename `test_distributed` to `test_distributed_fork` to make it explicit that it forks.

New command to run test:
`python test/run_test.py -i distributed/test_distributed_fork -v`
ghstack-source-id: 111632568

Test Plan: `python test/run_test.py -i distributed/test_distributed_fork -v`

Reviewed By: izdeby

Differential Revision: D23072201

fbshipit-source-id: 48581688b6c5193a309e803c3de38e70be980872
2020-09-08 23:13:37 -07:00
Rohan Varma
b22abbe381 Enable test_distributed to work with spawn mode (#41769)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41769

Currently the tests in `test_distributed` only work with the `fork` mode multiprocessing, this PR introduces support for `spawn` mode multiprocessing as well (while keeping the `fork` mode intact).

Motivations for the change:
1) Spawn multiprocessing is the default on MacOS, so it better emulates how MacOS users would use distributed
2) With python 3.8+, spawn is the default on linux, so we should have test coverage for this
3) PT multiprocessing suggests using spawn/forkserver over fork, for sharing cuda tensors: https://pytorch.org/docs/stable/multiprocessing.html
4) Spawn is better supported with respect to certain sanitizers such as TSAN, so adding this sanitizer coverage may help us uncover issues.

How it is done:
1) Move `test_distributed` tests in `_DistTestBase` class to a shared file `distributed_test` (similar to how the RPC tests are structured)
2) For `Barrier`, refactor the setup of temp directories, since the current version did not work with spawn: each process would get a different randomly generated directory and thus would write to a different barrier.
3) Add all the relevant builds to run internally and in OSS.
Running test_distributed with spawn mode in OSS can be done with:
`python test/run_test.py -i distributed/test_distributed_spawn -v`
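
For context, a minimal spawn-mode launch of a process group looks like this (address, port, and backend are illustrative):

```py
import os
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    dist.barrier()           # all ranks rendezvous here
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)  # spawn passes the rank as the first arg
```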

Reviewed By: izdeby

Differential Revision: D22408023

fbshipit-source-id: e206be16961fd80438f995e221f18139d7e6d2a9
2020-09-08 23:11:12 -07:00
Mike Ruberry
665feda15b Adds opinfo-based autograd tests and (un)supported dtype tests (#43451)
Summary:
This PR adds a new test suite, test_ops.py, designed for generic tests across all operators with OpInfos. It currently has two kinds of tests:

- it validates that the OpInfo has the correct supported dtypes by verifying that unsupported dtypes throw an error and supported dtypes do not
- it runs grad and gradgrad checks on each op and its variants (method and inplace) that has an OpInfo

This is a significant expansion and simplification of the current autogenerated autograd tests, which spend considerable time processing their inputs. As an alternative, this PR extends OpInfos with "SampleInputs" that are much easier to use. These sample inputs are analogous to the existing tuples in `method_tests()`.

Future PRs will extend OpInfo-based testing to other uses of `method_tests()`, like test_jit.py, to ensure that new operator tests can be implemented entirely using an OpInfo.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43451

Reviewed By: albanD

Differential Revision: D23481723

Pulled By: mruberry

fbshipit-source-id: 0c2cdeacc1fdaaf8c69bcd060d623fa3db3d6459
2020-09-03 02:50:48 -07:00
Sinan Nasir
1a79d7bb28 DDP communication hook examples (#43310)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43310

In this diff, we prepared some example DDP communication hooks [#40848](https://github.com/pytorch/pytorch/pull/40848):

1\. `allreduce_hook`: This DDP communication hook just calls ``allreduce`` using ``GradBucket`` tensors. Once gradient tensors are aggregated across all workers, its ``then`` callback takes the mean and returns the result. If a user registers this hook, DDP results are expected to be the same as when no hook was registered; hence, it won't change the behavior of DDP. Users can use it as a reference, or modify it to log useful information or for other purposes, without affecting DDP behavior.

2\. `allgather_then_aggregate_hook` Similar to ``allreduce_hook``, this hook first gathers ``GradBucket`` tensors and its ``then`` callback aggregates the gathered gradient tensors and takes mean. Instead of ``allreduce`` this hook uses ``allgather``. Note that with W workers, both the computation and communication time scale as O(W) for allgather compared to O(logW) for allreduce. Therefore, this hook is expected to be much slower than ``allreduce_hook`` although both essentially do the same thing with the gradients.

3\. `fp16_compress_hook` This DDP communication hook implements a simple gradient compression approach that converts ``GradBucket`` tensors whose type is assumed to be ``torch.float32`` to half-precision floating point format (``torch.float16``). It allreduces those ``float16`` gradient tensors. Once the compressed gradient tensors are allreduced, its ``then`` callback, called ``decompress``, converts the aggregated result back to ``float32`` and takes the mean.

4\. `quantization_pertensor_hook` does quantization per tensor and uses the idea in https://pytorch.org/docs/master/generated/torch.quantize_per_tensor.html.  Note that we separately send scale and zero_point (two floats per rank) before quantized tensors.

5\. `quantization_perchannel_hook` does quantization per channel similar to https://pytorch.org/docs/master/generated/torch.quantize_per_channel.html. The main motivation is that, after the initial QSGD study diff, we realized that for considerably large gradient tensors, such as a tensor containing 6 million floats, dividing it into smaller channels (512-float chunks) and quantizing each channel independently may significantly increase the resolution and result in lower error.
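
The common shape of these hooks, as a sketch (the `GradBucket` accessor names have changed across versions, and registration was initially via a private `_register_comm_hook`):

```py
import torch
import torch.distributed as dist

def example_allreduce_hook(process_group, bucket):
    group = process_group if process_group is not None else dist.group.WORLD
    tensor = bucket.buffer()  # flattened bucket gradients (older APIs: get_tensors())
    fut = dist.all_reduce(tensor, group=group, async_op=True).get_future()
    # The returned Future's value replaces the bucket's gradients; take the mean.
    return fut.then(lambda f: f.value()[0] / dist.get_world_size(group))
```
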
ghstack-source-id: 110923269

Test Plan:
python torch/distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py
Couldn't download test skip set, leaving all tests enabled...
.....
----------------------------------------------------------------------
Ran 4 tests in 26.724s

OK

Internal testing:
```
buck run mode/dev-nosan //caffe2/test/distributed/algorithms/ddp_comm_hooks:test_ddp_hooks
```

Reviewed By: malfet

Differential Revision: D22937999

fbshipit-source-id: 274452e7932414570999cb978ae77a97eb3fb0ec
2020-08-28 18:59:14 -07:00
Nikita Shulga
1bda5e480c Add Python code coverage (#43600)
Summary:
Replace  `test` with  `coverage_test` stage for `pytorch-linux-bionic-py3.8-gcc9` configuration
Add `coverage.xml` to the list of ignored files
Add `codecov.yml` that maps installed pytorch folders back to original locations
Cleanup coverage option utilization in `run_test.py` and adapt it towards combining coverage reports across the runs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43600

Reviewed By: seemethere

Differential Revision: D23351877

Pulled By: malfet

fbshipit-source-id: acf78ae4c8f3e23920a76cce1d50f2821b83eb06
2020-08-26 16:16:03 -07:00
albanD
e08e93f946 Reland of benchmark code (#43428)
Summary:
Reland of the benchmark code that broke the slow tests because the GPUs were running out of memory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43428

Reviewed By: ngimel

Differential Revision: D23296136

Pulled By: albanD

fbshipit-source-id: 0002ae23dc82f401604e33d0905d6b9eedebc851
2020-08-24 13:27:26 -07:00
Mike Ruberry
4dc8f3be8c Creates test_tensor_creation_ops.py test suite (#43104)
Summary:
As part of our continued refactoring of test_torch.py, this takes tests for tensor creation ops like torch.eye, torch.randint, and torch.ones_like and puts them in test_tensor_creation_ops.py. There are three test classes in the new test suite: TestTensorCreation, TestRandomTensorCreation, TestLikeTensorCreation. TestViewOps and tests for construction of tensors from NumPy arrays have been left in test_torch.py. These might be refactored separately into test_view_ops.py and test_numpy_interop.py in the future.

Most of the tests ported from test_torch.py were left as is or received a signature change to make them nominally "device generic." Future work will need to review test coverage and update the tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43104

Reviewed By: ngimel

Differential Revision: D23280358

Pulled By: mruberry

fbshipit-source-id: 469325dd1a734509dd478cc7fe0413e276ffb192
2020-08-22 23:18:54 -07:00
Alban Desmaison
74781ab5b8 Revert D23242101: [pytorch][PR] Implement first draft of autograd benchmark.
Test Plan: revert-hammer

Differential Revision:
D23242101 (c2511bdfa4)

Original commit changeset: a2b92d5a4341

fbshipit-source-id: bda562d15565f074b448022d180ec8f959c6ecc9
2020-08-21 12:22:57 -07:00
albanD
c2511bdfa4 Implement first draft of autograd benchmark. (#40586)
Summary:
It is quite a lot of code because I pulled some code from torchaudio and torchvision to work around issues I had getting the latest versions to run with PyTorch built from source, since I can't build those libs from source (a dependency is missing for torchaudio).

The compare script generates a table as follows:
| model | task | speedup | mean (before) | var (before) | mean (after) | var (after) |
| -- | -- | -- | -- | -- | -- | -- |
| resnet18 | vjp | 1.021151844124464 | 1.5627719163894653 | 0.005164200905710459 | 1.5304011106491089 | 0.003979875706136227 |
| resnet18 | vhp | 0.9919114430761606 | 6.8089728355407715 | 0.019538333639502525 | 6.86449670791626 | 0.014775685034692287 |
| resnet18 | jvp | 0.9715963084255123 | 5.720699310302734 | 0.08197150379419327 | 5.887938499450684 | 0.018408503383398056 |
| ppl_simple_reg | vjp | 0.9529183269165618 | 0.000362396240234375 | 7.526952949810095e-10 | 0.00038030146970413625 | 7.726220357939795e-11 |
| ppl_simple_reg | vhp | 0.9317708619586977 | 0.00048058031825348735 | 5.035701855504726e-10 | 0.0005157709238119423 | 3.250243477137538e-11 |
| ppl_simple_reg | jvp | 0.8609755877018406 | 0.00045447348384186625 | 9.646707044286273e-11 | 0.0005278587341308594 | 1.4493808930815533e-10 |
| ppl_simple_reg | hvp | 0.9764100147808232 | 0.0005881547695025802 | 7.618464747949361e-10 | 0.0006023645401000977 | 6.370915461850757e-10 |
| ppl_simple_reg | jacobian | 1.0019173715134297 | 0.0003612995205912739 | 2.2979899233499523e-11 | 0.0003606081008911133 | 1.2609764794835332e-11 |
| ppl_simple_reg | hessian | 1.0358429970264393 | 0.00206911563873291 | 2.590938796842579e-09 | 0.0019975185859948397 | 2.8916853356264482e-09 |
| ppl_robust_reg | vjp | 1.0669910916521521 | 0.0017304659122601151 | 3.1047047155396967e-09 | 0.0016218185191974044 | 4.926861585374809e-09 |
| ppl_robust_reg | vhp | 1.0181130455462972 | 0.0029563189018517733 | 2.6359153082466946e-08 | 0.0029037236236035824 | 1.020585038702393e-08 |
| ppl_robust_reg | jvp | 0.9818360373406179 | 0.0026934861671179533 | 6.981357714153091e-09 | 0.00274331565015018 | 3.589908459389335e-08 |
| ppl_robust_reg | hvp | 1.0270848910527002 | 0.005576515104621649 | 3.2798087801211295e-08 | 0.005429458804428577 | 6.438724398094564e-08 |
| ppl_robust_reg | jacobian | 1.0543611284155785 | 0.00167675013653934 | 2.3236829349571053e-08 | 0.001590299652889371 | 1.2011492245278532e-08 |
| ppl_robust_reg | hessian | 1.0535378727082656 | 0.01643357239663601 | 1.8450685956850066e-06 | 0.015598463825881481 | 2.1876705602608126e-07 |
| wav2letter | vjp | 1.0060408105086573 | 0.3516994118690491 | 1.4463969819189515e-05 | 0.349587619304657 | 9.897866402752697e-05 |
| wav2letter | vhp | 0.9873655295086051 | 1.1196287870407104 | 0.00474404776468873 | 1.133955717086792 | 0.009759620763361454 |
| wav2letter | jvp | 0.9741820317882822 | 0.7888165712356567 | 0.0017476462526246905 | 0.8097219467163086 | 0.0018235758179798722 |
| transfo | vjp | 0.9883954031921641 | 2.8865864276885986 | 0.008410997688770294 | 2.9204773902893066 | 0.006901870481669903 |
| transfo | vhp | 1.0111290842971339 | 8.374398231506348 | 0.014904373325407505 | 8.282224655151367 | 0.04449500888586044 |
| transfo | jvp | 1.0080534543381963 | 6.293097972869873 | 0.03796082362532616 | 6.24282169342041 | 0.010179692879319191 |

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40586

Reviewed By: pbelevich

Differential Revision: D23242101

Pulled By: albanD

fbshipit-source-id: a2b92d5a4341fe1472711a685ca425ec257d6384
2020-08-21 07:36:26 -07:00
Mike Ruberry
e2eb0cb1a9 Adds arccosh alias for acosh and adds an alias consistency test (#43107)
Summary:
This adds the torch.arccosh alias and updates alias testing to validate the consistency of the aliased and original operations. The alias testing is also updated to run on CPU and CUDA, which revealed a memory leak when tracing (see https://github.com/pytorch/pytorch/issues/43119).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43107

Reviewed By: ngimel

Differential Revision: D23156472

Pulled By: mruberry

fbshipit-source-id: 6155fac7954fcc49b95e7c72ed917c85e0eabfcd
2020-08-16 22:12:25 -07:00
Mike Ruberry
bee174dc3f Adds linalg.det alias, fixes outer alias, updates alias testing (#42802)
Summary:
This PR:

- updates test_op_normalization.py, which verifies that aliases are correctly translated in the JIT
- adds torch.linalg.det as an alias for torch.det
- moves the torch.linalg.outer alias to torch.outer (to be consistent with NumPy)

The torch.linalg.outer alias was erroneously put in the linalg namespace as a placeholder, since it's a "linear algebra op" according to NumPy, but it is actually still in the main NumPy namespace.

The updates to test_op_normalization are necessary. Previously it was using method_tests to generate tests, and method_tests assumes test suites using it also use the device generic framework, which test_op_normalization did not. For example, some ops require decorators like `skipCPUIfNoLapack`, which only works in device generic test classes. Moving test_op_normalization to the device generic framework also lets these tests run on CPU and CUDA.

Continued reliance on method_tests() is excessive since the test suite is only interested in testing aliasing, and a simpler and more readable `AliasInfo` class is used for the required information. One impedance mismatch between method_tests and the new tests, for example, was how to handle ops in namespaces like torch.linalg.det. In the future this information will likely be folded into a common 'OpInfo' registry in the test suite.

The actual tests performed are similar to what they were previously: a scripted and traced version of the op is run and the test verifies that both graphs do not contain the alias name and do contain the aliased name.

The guidance for adding an alias has been updated accordingly.

cc mattip

Note:

ngimel suggests:
- deprecating and then removing the `torch.ger` name
- reviewing the implementation of `torch.outer`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42802

Reviewed By: zou3519

Differential Revision: D23059883

Pulled By: mruberry

fbshipit-source-id: 11321c2a7fb283a6e7c0d8899849ad7476be42d1
2020-08-11 21:48:31 -07:00
Mike Ruberry
4bafca1a69 Adds list of operator-related information for testing (#41662)
Summary:
This PR adds:

- an "OpInfo" class in common_method_invocations that can contain useful information about an operator, like what dtypes it supports
- a more specialized "UnaryUfuncInfo" class designed to help test the unary ufuncs
- the `ops` decorator, which can generate test variants from lists of OpInfos
- test_unary_ufuncs.py, a new test suite stub that shows how the `ops` decorator and operator information can be used to improve the thoroughness of our testing

The single test in test_unary_ufuncs.py simply ensures that the dtypes associated with a unary ufunc operator in its OpInfo entry are correct. Writing a test like this previously, however, would have required manually constructing test-specific operator information and writing a custom test generator. The `ops` decorator and a common place to put operator information make writing tests like this easier and allows what would have been test-specific information to be reused.

The `ops` decorator extends and composes with the existing device generic test framework, allowing its decorators to be reused. For example, the `onlyOnCPUAndCUDA` decorator works with the new `ops` decorator. This should keep the tests readable and consistent.

Future PRs will likely:

- continue refactoring the too large test_torch.py into more verticals (unary ufuncs, binary ufuncs, reductions...)
- add more operator information to common_method_invocations.py
- refactor tests for unary ufuncs into test_unary_ufunc

Examples of possible future extensions are [here](616747e50d), where an example unary ufunc test is added, and [here](d0b624f110), where example autograd tests are added. Both tests leverage the operator info in common_method_invocations to simplify testing.
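
A sketch of the resulting pattern (the test body and accessor are illustrative; OpInfo's exact attributes have evolved since):

```py
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, ops)
from torch.testing._internal.common_methods_invocations import op_db
from torch.testing._internal.common_utils import TestCase, run_tests

class TestOpInfoSample(TestCase):
    # `ops` generates one test variant per (op, device, dtype) combination.
    @ops(op_db)
    def test_claimed_dtype_is_supported(self, device, dtype, op):
        self.assertIn(dtype, op.supported_dtypes(device))

instantiate_device_type_tests(TestOpInfoSample, globals())

if __name__ == "__main__":
    run_tests()
```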

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41662

Reviewed By: ngimel

Differential Revision: D23048416

Pulled By: mruberry

fbshipit-source-id: ecce279ac8767f742150d45854404921a6855f2c
2020-08-11 11:34:53 -07:00
James Reed
575e7497f6 Introduce experimental FX library (#42741)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42741

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D23006383

Pulled By: jamesr66a

fbshipit-source-id: 6cb6d921981fcae47a07df581ffcf900fb8a7fe8
2020-08-11 10:01:47 -07:00
Luca Wehrstedt
935fcc9580 [RPC tests] Merge process group tests into single entry point (#40818)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40818

Summary of the entire stack:
--

This diff is part of an attempt to refactor the RPC tests. They currently suffer from several problems:
- Several ways to specify the agent to use: there exists one "generic" fixture that uses the global variable TEST_CONFIG to look up the agent name, and is used for process group and Thrift, and then there are separate fixtures for the flaky agent and the TensorPipe one.
- These two ways lead to having two separate decorators (`requires_process_group_agent` and `@_skip_if_tensorpipe_agent`) which must both be specified, making it unclear what the effect of each of them is and what happens if only one is given.
- Thrift must override the TEST_CONFIG global variable before any other import (in order for the `requires_process_group_agent` decorator to work correctly) and for that it must use a "trap" file, which makes it even harder to track which agent is being used, and which is specific to Buck, and thus cannot be used in OSS by other agents.
- Even if the TensorPipe fixture doesn't use TEST_CONFIG, it still needs to set it to the right value for other parts of the code to work. (This is done in `dist_init`).
- There are a few functions in dist_utils.py that return some properties of the agent (e.g., a regexp to match against the error it returns in case of shutdown). These functions are effectively chained if/elses on the various agents, which has the effect of "leaking" some part of the Thrift agent into OSS.
- Each test suite (RPC, dist autograd/dist optimizer, their JIT versions, remote module, ...) must be run on each agent (or almost; the faulty one is an exception) in both fork and spawn mode. Each of these combinations is a separate file, which leads to a proliferation of scripts.
- There is no "master list" of what combinations make sense and should be run. Therefore it has happened that when adding new tests or new agents we forgot to enroll them into the right tests. (TensorPipe is still missing a few tests, it turns out).
- All of these tiny "entry point" files contain almost the same duplicated boilerplate. This makes it very easy to get the wrong content into one of them due to a bad copy-paste.

This refactoring aims to address these problems by:
- Avoiding global state, defaults/override, traps, if/elses, ... and have a single way to specify the agent, based on an abstract base class and several concrete subclasses which can be "mixed in" to any test suite.
- Instead of enabling/disabling tests using decorators, the tests that are specific to a certain agent are now in a separate class (which is a subclass of the "generic" test suite) so that they are only picked up by the agent they apply to.
- Instead of having one separate entry point script for each combination, it uses one entry point for each agent, and in that script it provides a list of all the test suites it wants to run on that agent. And it does that by trying to deduplicate the boilerplate as much as possible. (In fact, the various agent-suite combinations could be grouped in any way, not necessarily by agent as I did here).

It provides further advantages:
- It puts all the agents on equal standing, by not having any of them be the default, making it thus easier to migrate from process group to TensorPipe.
- It will make it easier to add more versions of the TensorPipe tests (e.g., one that disables the same-machine backends in order to test the TCP-based ones) without a further duplication of entry points, of boilerplate, ...

Summary of this commit
--
This diff does the changes described above for the process group agent. It defines a fixture for it (instead of using the generic fixture in its default behavior) and then merges all the entry points into a single script. Note that after this change there won't be anymore a "vanilla" RPC test: all test scripts now specify what agent they are using. This puts all agents on equal standing.
ghstack-source-id: 109229474

Test Plan: Sandcastle and CircleCI

Reviewed By: pritamdamania87

Differential Revision: D22283182

fbshipit-source-id: 7e3626bbbf37d88b892077a03725f0598576b370
2020-08-05 15:10:07 -07:00
Luca Wehrstedt
b93c7c54eb [RPC tests] Merge tests for faulty agent into single script (#40817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40817

Summary of the entire stack:
--

This diff is part of an attempt to refactor the RPC tests. They currently suffer from several problems:
- Several ways to specify the agent to use: there exists one "generic" fixture that uses the global variable TEST_CONFIG to look up the agent name, and is used for process group and Thrift, and then there are separate fixtures for the flaky agent and the TensorPipe one.
- These two ways lead to having two separate decorators (`requires_process_group_agent` and `@_skip_if_tensorpipe_agent`) which must both be specified, making it unclear what the effect of each of them is and what happens if only one is given.
- Thrift must override the TEST_CONFIG global variable before any other import (in order for the `requires_process_group_agent` decorator to work correctly) and for that it must use a "trap" file, which makes it even harder to track which agent is being used, and which is specific to Buck, and thus cannot be used in OSS by other agents.
- Even if the TensorPipe fixture doesn't use TEST_CONFIG, it still needs to set it to the right value for other parts of the code to work. (This is done in `dist_init`).
- There are a few functions in dist_utils.py that return some properties of the agent (e.g., a regexp to match against the error it returns in case of shutdown). These functions are effectively chained if/elses on the various agents, which has the effect of "leaking" some part of the Thrift agent into OSS.
- Each test suite (RPC, dist autograd/dist optimizer, their JIT versions, remote module, ...) must be run on each agent (or almost; the faulty one is an exception) in both fork and spawn mode. Each of these combinations is a separate file, which leads to a proliferation of scripts.
- There is no "master list" of what combinations make sense and should be run. Therefore it has happened that when adding new tests or new agents we forgot to enroll them into the right tests. (TensorPipe is still missing a few tests, it turns out).
- All of these tiny "entry point" files contain almost the same duplicated boilerplate. This makes it very easy to get the wrong content into one of them due to a bad copy-paste.

This refactoring aims to address these problems by:
- Avoiding global state, defaults/override, traps, if/elses, ... and have a single way to specify the agent, based on an abstract base class and several concrete subclasses which can be "mixed in" to any test suite.
- Instead of enabling/disabling tests using decorators, the tests that are specific to a certain agent are now in a separate class (which is a subclass of the "generic" test suite) so that they are only picked up by the agent they apply to.
- Instead of having one separate entry point script for each combination, it uses one entry point for each agent, and in that script it provides a list of all the test suites it wants to run on that agent. And it does that by trying to deduplicate the boilerplate as much as possible. (In fact, the various agent-suite combinations could be grouped in any way, not necessarily by agent as I did here).

It provides further advantages:
- It puts all the agents on equal standing, by not having any of them be the default, making it thus easier to migrate from process group to TensorPipe.
- It will make it easier to add more versions of the TensorPipe tests (e.g., one that disables the same-machine backends in order to test the TCP-based ones) without a further duplication of entry points, of boilerplate, ...

Summary of this commit
--
This diff does the changes described above for the faulty agent, which is its own strange beast. It merges all the test entry points (i.e., the combinations of agent, suite and fork/spawn) into a single file. It also modifies the test suites that are intended to be run only on the faulty agent, which used to inherit from its fixture, to inherit from the generic fixture, as they will be mixed in with the faulty fixture at the very end, inside the entry point script.
ghstack-source-id: 109229477

Test Plan: Sandcastle and CircleCI

Reviewed By: pritamdamania87

Differential Revision: D22283178

fbshipit-source-id: 72659efe6652dac8450473642a578933030f2c74
2020-08-05 15:10:04 -07:00
Luca Wehrstedt
edf6c4bc4d [RPC tests] Merge TensorPipe tests into single entry point (#40816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40816

Summary of the entire stack:
--

This diff is part of an attempt to refactor the RPC tests. They currently suffer from several problems:
- Several ways to specify the agent to use: there exists one "generic" fixture that uses the global variable TEST_CONFIG to look up the agent name, and is used for process group and Thrift, and then there are separate fixtures for the flaky agent and the TensorPipe one.
- These two ways lead to having two separate decorators (`requires_process_group_agent` and `@_skip_if_tensorpipe_agent`) which must both be specified, making it unclear what the effect of each of them is and what happens if only one is given.
- Thrift must override the TEST_CONFIG global variable before any other import (in order for the `requires_process_group_agent` decorator to work correctly) and for that it must use a "trap" file, which makes it even harder to track which agent is being used, and which is specific to Buck, and thus cannot be used in OSS by other agents.
- Even if the TensorPipe fixture doesn't use TEST_CONFIG, it still needs to set it to the right value for other parts of the code to work. (This is done in `dist_init`).
- There are a few functions in dist_utils.py that return some properties of the agent (e.g., a regexp to match against the error it returns in case of shutdown). These functions are effectively chained if/elses on the various agents, which has the effect of "leaking" some part of the Thrift agent into OSS.
- Each test suite (RPC, dist autograd/dist optimizer, their JIT versions, remote module, ...) must be run on each agent (or almost; the faulty one is an exception) in both fork and spawn mode. Each of these combinations is a separate file, which leads to a proliferation of scripts.
- There is no "master list" of what combinations make sense and should be run. Therefore it has happened that when adding new tests or new agents we forgot to enroll them into the right tests. (TensorPipe is still missing a few tests, it turns out).
- All of these tiny "entry point" files contain almost the same duplicated boilerplate. This makes it very easy to get the wrong content into one of them due to a bad copy-paste.

This refactoring aims to address these problems by:
- Avoiding global state, defaults/override, traps, if/elses, ... and have a single way to specify the agent, based on an abstract base class and several concrete subclasses which can be "mixed in" to any test suite.
- Instead of enabling/disabling tests using decorators, the tests that are specific to a certain agent are now in a separate class (which is a subclass of the "generic" test suite) so that they are only picked up by the agent they apply to.
- Instead of having one separate entry point script for each combination, it uses one entry point for each agent, and in that script it provides a list of all the test suites it wants to run on that agent. And it does that by trying to deduplicate the boilerplate as much as possible. (In fact, the various agent-suite combinations could be grouped in any way, not necessarily by agent as I did here).

It provides further advantages:
- It puts all the agents on equal standing, by not having any of them be the default, making it thus easier to migrate from process group to TensorPipe.
- It will make it easier to add more versions of the TensorPipe tests (e.g., one that disables the same-machine backends in order to test the TCP-based ones) without a further duplication of entry points, of boilerplate, ...

Summary of this commit
--
This diff does the changes described above for the TensorPipe agent. It fixes its fixture (making it inherit from the generic fixture) and merges all the entry point scripts into a single one, so that it's easier to have a clear overview of all the test suites which we run on TensorPipe (you'll notice that many are missing: the JIT ones, the remote module one, ...).
ghstack-source-id: 109229476

Test Plan: Sandcastle and CircleCI

Reviewed By: pritamdamania87

Differential Revision: D22283180

fbshipit-source-id: d5e9f9f4e6d4bfd6fbcae7ae56eed63d2567a02f
2020-08-05 15:08:32 -07:00
iurii zdebskyi
e995c3d21e Add private API to support tensor lists: _foreach_add(TensorList tensors, Scalar scalar) (#41554)
Summary:
Initial PR for the Tensor List functionality.

**Motivation**
[GitHub issue](https://github.com/pytorch/pytorch/issues/38655)
Current PyTorch optimizer implementations are not efficient in cases when we work with a lot of small feature tensors. Starting a lot of kernels slows down the whole process. We need to reduce the number of kernels that we start.
As an example, we should be looking at [NVIDIA's Apex](https://github.com/NVIDIA/apex).
In order to track progress, we will pick PyTorch's DCGAN model with the Adam optimizer and, once the optimizer is reimplemented with tensor lists, benchmark the model's performance against the original model version, Apex's version with the original Adam optimizer, and its FusedAdam optimizer.

**In this PR**
- Adding `multi_tensor_apply` mechanism which will help to efficiently apply passed functor on a given list of tensors on CUDA.
- Adding a first private API - `std::vector<Tensor> _foreach_add(TensorList tensors, Scalar scalar)`
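
A usage sketch of the new private API (one fused kernel launch on CUDA, rather than a Python loop of per-tensor adds):

```py
import torch

tensors = [torch.ones(3) for _ in range(4)]
out = torch._foreach_add(tensors, 1.0)      # returns a new list of tensors
assert all(torch.equal(t, torch.full((3,), 2.0)) for t in out)
# the plan below lists in-place variants (e.g., add_) as follow-up work
```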

**Tests**
Tested via unit tests

**Plan for the next PRs**

1. Cover these ops with `multi_tensor_apply` support
- exponent
- division
- mul_
- add_
- addcmul_
- addcdiv_
- sqrt

2. Rewrite PyTorch optimizers to use for-each operators in order to get performance gains.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41554

Reviewed By: cpuhrsch

Differential Revision: D22829724

Pulled By: izdeby

fbshipit-source-id: 47febdbf7845cf931958a638567b7428a24782b1
2020-08-04 15:01:09 -07:00
Mike Ruberry
4b6e5f42a4 Creates spectral ops test suite (#42157)
Summary:
In preparation for creating the new torch.fft namespace and NumPy-like fft functions, as well as supporting our goal of refactoring and reducing the size of test_torch.py, this PR creates a test suite for our spectral ops.

The existing spectral op tests from test_torch.py and test_cuda.py are moved to test_spectral_ops.py and updated to run under the device generic test framework.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42157

Reviewed By: albanD

Differential Revision: D22811096

Pulled By: mruberry

fbshipit-source-id: e5c50f0016ea6bb8b093cd6df2dbcef6db9bb6b6
2020-07-29 11:36:18 -07:00
Alexander Grund
86492410bc Don't run tests with custom arguments with pytest (#41397)
Summary:
This patch basically removes the `-m pytest` parameters when `extra_unittest_args` is used (e.g. `--subprocess`)

Fixes https://github.com/pytorch/pytorch/issues/41393

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41397

Reviewed By: pbelevich

Differential Revision: D22792133

Pulled By: ezyang

fbshipit-source-id: 29930d703666f4ecc0d727356bbab4a5f7ed4860
2020-07-28 08:17:36 -07:00
Noman Arshad
1a8269a566 Replace blacklist with blocklist in test/run_test.py file. (#42011)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41716
The test/run_test.py file is updated with appropriate replacements for blacklist and whitelist.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42011

Reviewed By: pbelevich

Differential Revision: D22791836

Pulled By: malfet

fbshipit-source-id: 8139649c5b70c876b711e25c33f3051ea8461063
2020-07-28 07:56:01 -07:00
Eli Uriegas
f71cccc457 test: Add option to continue testing through error (#41136)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41136

Running this within CI seems impossible since this script exits out
after one failed test, so let's just add an option that CI can use to
power through these errors.

Should not affect current functionality.

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Test Plan: Imported from OSS

Differential Revision: D22441694

Pulled By: seemethere

fbshipit-source-id: 7f152fea15af9d47a964062ad43830818de5a109
2020-07-08 17:26:13 -07:00
David Reiss
5e03a1e926 Add support for int[]? arguments in native_functions.yaml (#37174)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37174

ghstack-source-id: 106938112

Test Plan: Upcoming diffs use this for upsampling.

Differential Revision: D21210002

fbshipit-source-id: d6a55ab6420c05a92873a569221b613149aa0daa
2020-07-07 13:52:20 -07:00
Christian Sarofeen
b9b4f05abf [nvFuser] Working towards reductions, codegen improvements (#40864)
Summary:
We have basic reduction fusion working and have improved the code generator to approach the performance of eager-mode reductions. Coming soon are pointwise-reduction fusions, done in a way that should prevent the possibility of hitting regressions. We are also working on performant softmax kernels in the code generator, which may be our next fusion target.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40864

Reviewed By: ngimel

Differential Revision: D22392877

Pulled By: soumith

fbshipit-source-id: 457448a807d628b1035f6d90bc0abe8a87bf8447
2020-07-06 14:52:49 -07:00
Jeff Daily
ac8c8b028d [ROCm] restore jit tests (#40447)
Summary:
Remove `skipIfRocm` from most jit tests and enable `RUN_CUDA_HALF` tests for ROCm.

These changes passed more than three rounds of CI testing against the ROCm CI.

CC ezyang xw285cornell sunway513
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40447

Differential Revision: D22190711

Pulled By: xw285cornell

fbshipit-source-id: bac44825a2675d247b3abe2ec2f80420a95348a3
2020-06-27 01:03:59 -07:00
Ilia Cherniavskii
d8c384544e Destroy CUDA events after profiling (#39962)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39962

Adds a simple ref-counted wrapper for CUDA events and
destroys the CUDA event after the last copy is destroyed

Test Plan: CI cuda profiler tests

Differential Revision: D22027092

Pulled By: ilia-cher

fbshipit-source-id: e0810388aa60b2291eb010896e13af1fad92e472
2020-06-23 10:44:39 -07:00
Pritam Damania
e632bf8d57 Add thrift and tensorpipe backend tests for test_ddp_under_dist_autograd. (#40210)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40210

ghstack-source-id: 106300839

Test Plan: waitforbuildbot

Differential Revision: D22110065

fbshipit-source-id: d9ebd009b8d451c75708eadc7eb3f2b788e875aa
2020-06-20 22:59:59 -07:00
Ivan Kobzarev
3852215170 [vulkan] jit passes for vulkan conv2 prepack and fuse with clamp (#39282)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39282

Test Plan: Imported from OSS

Differential Revision: D21962424

Pulled By: IvanKobzarev

fbshipit-source-id: 2d20e827d2c3836b7e6b443293377c68dc1ffa5a
2020-06-20 14:12:21 -07:00
Jeff Daily
89ef8f8141 add test_openmp to ROCM_BLACKLIST (#40204)
Summary:
This test is flaky on the ROCm platform. Add it to the blacklist until it can be further reviewed.

CC ezyang xw285cornell sunway513
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40204

Differential Revision: D22108295

Pulled By: xw285cornell

fbshipit-source-id: 802444a7b41260edcb6ce393237784f3e6c52a74
2020-06-18 15:15:35 -07:00
Shihao Xu
00651b8c93 [distributed.nn] Implement TorchScript-compatible RemoteModule API (#37139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37139

See design doc in https://github.com/pytorch/pytorch/issues/37136

ghstack-source-id: 105926270

Test Plan:
TODO:

- Make the generated Interface usable. https://github.com/pytorch/pytorch/pull/37139#discussion_r434190978
- Avoid generating the same template instances for Modules that are not scriptable.
- Remove "infer_module_interface_cls".
- Use Python format instead of a CodeTemplate.
- Use Python tempfile to track and delete the file. Does it work if there is a crash?

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_scripted_remote_module_template

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_non_scripted_remote_module_template
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_spawn
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_async_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_sync_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_with_kwargs

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork
```

```
buck test mode/opt-asan //caffe2/test:jit -- 'test_script_forward_method_replacement'

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_script_forward_method_replacement'

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_imported_classes'
```

Differential Revision: D20499658

fbshipit-source-id: dd9383ae4eb2343366c11127664f845b91ca3b0a
2020-06-15 19:07:35 -07:00
Ilia Cherniavskii
cc3fc786b7 [resubmit] [pytorch][PR] Fix for num_threads==1 in OpenMP "parallel for" (#39533)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39533

Test Plan: CI

Reviewed By: ngimel

Differential Revision: D21889269

fbshipit-source-id: 5ba13a0a3ec11edd0b6a7c3fdb35396b847a3d9e
2020-06-15 13:14:59 -07:00
HC Zhu
acc13ac828 [PyTorch] Make DDP reducer work under distributed autograd (#37998)
Summary:
## Why doesn’t DDP work under dist_autograd?
DDP follows the steps below
1. [DDP Python constructor](8d6a8d2b3f/torch/nn/parallel/distributed.py (L389-L393)) (on a module) creates a [C++ Reducer](https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/reducer.cpp), which holds references to all parameters (or variables in C++ code).
2. The reducer installs a post hook on each model parameter.
3. The backward run starts and triggers the post hooks installed above.
4. The post hook of a parameter simply marks the parameter ready for all-reduce.
5. Once all parameters in a bucket are ready, an all-reduce starts; it reads each variable's `.grad` and writes the result back to `.grad`.

But under dist_autograd, a variable's `.grad` is not populated at all. Instead, grads live in a global map, held in the distributed context, from variables to their grads.

## Solution of this PR
The distributed engine sets a thread_local variable in a backward run indicating we're running in distributed mode. The DDP reducer can then appropriately use `.grad` or the distributed context based on that thread local. More precisely, the thread local is set before calling the post hooks installed by the DDP reducer, so that the DDP post hooks can retrieve it.
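For illustration, here is a minimal Python sketch of that dispatch idea; all names are made up, and the real mechanism lives in the C++ reducer and distributed engine:

```python
import threading

_tls = threading.local()  # stand-in for the C++ thread_local flag

class Param:
    def __init__(self):
        self.grad = None  # populated by local autograd only

def post_hook(param, context_grads):
    # Per-parameter post hook: pick the grad source based on whether
    # the distributed engine set the thread-local flag before hooks ran.
    if getattr(_tls, "in_dist_autograd", False):
        return context_grads[param]  # dist_autograd: grads in context map
    return param.grad                # local autograd: grads in .grad

p = Param()
p.grad = "local-grad"
context_grads = {p: "context-grad"}

print(post_hook(p, context_grads))  # -> local-grad
_tls.in_dist_autograd = True        # set by the engine before post hooks
print(post_hook(p, context_grads))  # -> context-grad
```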
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37998

Test Plan:
```
python test/distributed/test_ddp_under_dist_autograd.py
```

FB repo
```
buck test caffe2/test/distributed/...
```

DDP accuracy benchmark workflow run

```
flow-cli canary pytorch.benchmark.accuracy_comparison.workflow --parameters-json '{"node_world_size": 4, "dist_backend": "nccl"}' --run-as-secure-group fblearner_flow --entitlement gpu_prod
```

f196173157

Reviewed By: pritamdamania87

Differential Revision: D21513795

Pulled By: hczhu

fbshipit-source-id: fe21e68ecdc9274182db4d4bb5a1e2d68ef927a2
2020-06-10 08:38:14 -07:00
Jithun Nair
545a3e1eca Remove test_nccl from ROCM_BLACKLIST and enable only a couple of test_nccl tests (#39354)
Summary:
All individual test_nccl unit tests have been disabled for ROCm in bf9395438f
test_nccl was also added to the ROCM_BLACKLIST in 87b198d309
However, the issue only arises when running the test_nccl suite as a whole (as opposed to any one test individually). More details in comments here: https://github.com/pytorch/pytorch/pull/38689

This PR enables the test_nccl suite with only two tests, to work around the as-yet unresolved issue above while allowing at least one test_nccl collective test to run on ROCm. This is also needed as a precursor for: https://github.com/pytorch/pytorch/pull/38515
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39354

Differential Revision: D21843194

Pulled By: mrshenli

fbshipit-source-id: b28d1e073d8d0fdc1b59928fc3b00187cfd02a35
2020-06-05 13:52:23 -07:00
mattip
ada2652ca6 Restore docs coverage test via sphinx (#39331)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39331

Fixes gh-37590

Adds an extra `make coverage` to documentation building, which uses the built-in facility in sphinx to check docstring coverage. Also fixes a failure to import `torch/jit/supported_ops.py`, which broke the [Torchscript Builtins](https://pytorch.org/docs/stable/jit_builtin_functions.html) page.

This also adds the required `SPHINXOPTS` to turn warnings into errors, but this is commented out. Note that since documentation of `torchvision` is merged in here, failures there would cause failures here if this is made active. Some thought might be needed about pinning the torchvision version merged into the documentation.

The first commit should fail, since the "ScriptModule" class is commented out. I did that in order to check that a CI failure is properly reported.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38244

Differential Revision: D21640589

Pulled By: ezyang

fbshipit-source-id: 1e240d81669b5f21404d596de4a27d192dc9fd8a
2020-06-04 10:49:38 -07:00
Oguz Ulgen
4a0a38c17a Revert D21652452: [pytorch][PR] Fix for num_threads==1 in OpenMP "parallel for"
Test Plan: revert-hammer

Differential Revision:
D21652452

Original commit changeset: 2cda7777c0ea

fbshipit-source-id: fdd9a0346ce32a962766f57e13357dd2bc60d8b8
2020-06-03 22:51:51 -07:00
Luca Wehrstedt
5beb3b0c53 [TensorPipe] Re-enable dist optimizer tests (#39441)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39441

This is the last test suite to be enabled for TensorPipe.
ghstack-source-id: 105166757

Test Plan: Ran the tests, hundreds of times each, in different build modes.

Differential Revision: D21858975

fbshipit-source-id: ee0a7e64b77b4b1974f031207031cc14afb3a8c2
2020-06-03 09:00:52 -07:00
Luca Wehrstedt
b1dab266f7 [TensorPipe] Re-enable dist autograd tests (#39440)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39440

After the RPC tests, re-enable the second test suite: dist autograd.
ghstack-source-id: 105165393

Test Plan: Ran the tests, several times each, in different build configs.

Differential Revision: D21858974

fbshipit-source-id: 409377d564c36fecae51b9e4c776d94187b434a2
2020-06-03 08:59:19 -07:00
Luca Wehrstedt
3f099879f7 [TensorPipe] Re-enable RPC tests (#39406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39406

For now, just the RPC test (no dist autograd or dist optimizer).

I removed the skipping decorator from all the tests except those that explicitly use the ProcessGroup options.

Includes #39027.
ghstack-source-id: 105159974

Test Plan: Ran the tests several hundred times, in various build modes. Saw some flakes, but at a rate of about 0.1%.

Differential Revision: D21716069

fbshipit-source-id: 9d2a99e112049a63745772c18e7a58266ed8e74e
2020-06-03 07:14:30 -07:00
mattip
a952f9bb06 Fix for num_threads==1 in OpenMP "parallel for" (#36479)
Summary:
fixes gh-32284

Move the non-parallel stanza out of the parallel context, and use `num_threads` to limit nested `parallel for`s. The nesting caused a memory leak in the test script from the issue.

This should probably have a test somewhere: are there tests for ParallelOpenMP?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36479

Differential Revision: D21652452

Pulled By: ilia-cher

fbshipit-source-id: 2cda7777c0eafbe268550a82fed306e52fb6eb25
2020-06-02 18:56:13 -07:00
Shen Li
bb0377bb24 Expose torch.futures.Future (#39008)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39008

This commit adds a `torch.futures.Future` type and exposes its ctor,
`wait`, `then`, and `set_result` APIs. This type is currently a
wrapper of `c10::ivalue::Future` and is mainly used by RPC for now. Later,
we could revamp c10d APIs to return this `Future` type as well. More
utils will be added to the `torch.futures` package in followup PRs.
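A small usage sketch of the exposed APIs:

```python
import torch

fut = torch.futures.Future()

# `then` chains a callback that receives the completed future and
# returns a new Future holding the callback's result.
chained = fut.then(lambda f: f.wait() + 1)

fut.set_result(41)
print(fut.wait())      # 41
print(chained.wait())  # 42
```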

Test Plan: Imported from OSS

Differential Revision: D21723022

Pulled By: mrshenli

fbshipit-source-id: 92e56160544e9bf00d11db3e8347a1b9707882c9
2020-06-02 10:12:56 -07:00
Nikita Shulga
39d037253c Test PyTorch using python-3.8 + GCC-9 on Bionic (Reland) (#39121)
Summary:
Enable the new test config in .circleci/config.yml.
Skip scanning several 3rd-party packages to work around https://bugs.python.org/issue40350.
Remove pre-python-3.5 checks from `test.sh` and update `scikit-learn` to a python-3.8-compatible version.

This is a reland of https://github.com/pytorch/pytorch/pull/39030
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39121

Differential Revision: D21820375

Pulled By: malfet

fbshipit-source-id: d0be79b7d204cf692e055d42b9be42402dc4c1c0
2020-06-01 11:11:12 -07:00
Rohan Varma
988e31c788 Revert D21752017: [pytorch][PR] Test PyTorch using python-3.8 + GCC-9 on Bionic
Test Plan: revert-hammer

Differential Revision:
D21752017

Original commit changeset: 56c841636349

fbshipit-source-id: adf08e03ba9610050fc5440ef453789f805fdc6b
2020-05-27 17:42:22 -07:00
Nikita Shulga
30dd4acbf6 Test PyTorch using python-3.8 + GCC-9 on Bionic (#39030)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39030

Differential Revision: D21752017

Pulled By: malfet

fbshipit-source-id: 56c841636349e24c9ebef8dac18c283de3664fa5
2020-05-27 15:56:37 -07:00
Nikolay Korovaiko
4fcd1c3123 run te only for profiling executor (#38591)
Summary:
* Disable the mode where PE can still run the old fuser.
* Clean up
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38591

Differential Revision: D21643664

Pulled By: Krovatkin

fbshipit-source-id: 6753ed6bdc544698a1340e59a624608ff3abf7f9
2020-05-26 18:35:25 -07:00
Shen Li
40ce90bfc1 Revert D21560096: [Tensorpipe Agent] Enabling tests with OSS CI
Test Plan: revert-hammer

Differential Revision:
D21560096

Original commit changeset: 7d61cc1c354e

fbshipit-source-id: 6adfd87e354545031203d65d04f0bad4687a93cd
2020-05-19 19:39:33 -07:00
Jeff Daily
87b198d309 add distributed/test_nccl to ROCM_BLACKLIST (#38730)
Summary:
CC ezyang xw285cornell sunway513

Work-around for recent ROCm CI failures due to 9cfc10d52e (https://github.com/pytorch/pytorch/issues/37294). Replaces the full revert suggested by PR https://github.com/pytorch/pytorch/issues/38689.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38730

Differential Revision: D21648707

Pulled By: xw285cornell

fbshipit-source-id: 627b11b229c7eadca1f6e0c6192c6b5b6416e6a1
2020-05-19 14:45:50 -07:00
Omkar Salpekar
87aa2d25ae [Tensorpipe Agent] Enabling tests with OSS CI (#38447)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38447

This PR modifies `run_test.py` to enable running Tensorpipe Agent tests with the OSS CI.
ghstack-source-id: 104321881

Test Plan: CI

Differential Revision: D21560096

fbshipit-source-id: 7d61cc1c354e9353c4a586dd2b56690c28d51d10
2020-05-19 13:34:06 -07:00
Nikita Shulga
72e5b7ae5b Add option to run python unittests in parallel (#37180)
Summary:
So far the results look quite promising: test_nn is purely sequential and can be accelerated 3x.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37180

Differential Revision: D21437871

Pulled By: malfet

fbshipit-source-id: 8679a8af355f839f2c9dae3bf36d2e102af05425
2020-05-06 22:14:11 -07:00
Kimish Patel
b1b6bc36a5 Enable xnnpack_integration test in CI. (#37838)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37838

Test Plan: oss: python test/test_xnnpack_integration.py

Reviewed By: xcheng16

Differential Revision: D21405850

fbshipit-source-id: ba4ba06692b49315f110653d9492b2e14b618574
2020-05-06 13:53:03 -07:00
ashishfarmer
402f635bbe Enable ahead of time compilation for HIPExtensions using ninja (#37800)
Summary:
This pull request enables ahead-of-time compilation of HIPExtensions with ninja by setting the appropriate compilation flags for the ROCm environment. It also enables the unit test for cuda_extensions on ROCm and removes the test for ahead-of-time compilation of extensions with ninja from ROCM_BLACKLIST.

ezyang jeffdaily
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37800

Differential Revision: D21408148

Pulled By: soumith

fbshipit-source-id: 146f4ffb3418f3534e6ce86805d3fe9c3eae84e1
2020-05-05 20:53:35 -07:00
ashishfarmer
bbd2350c99 Disable tests failing on test2 in ROCm CI (#37427)
Summary:
This pull request disables the unit tests that were observed to be failing once `test2` was enabled. These tests will be looked at and fixed one by one, but until then they are disabled to unblock `test2`.
The pull request also disables fftPlanDestroy for rocFFT to avoid double-freeing FFT handles.

cc: ezyang jeffdaily
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37427

Differential Revision: D21302909

Pulled By: ezyang

fbshipit-source-id: ecadda3778e65b7f4f97e24b932b96b9ce928616
2020-04-29 09:56:28 -07:00
Nikolay Korovaiko
edc5ef1afb run the simple executor for jit tests by default, add profiling jobs … (#37017)
Summary:
…for fusion tests

fix flake8 warnings

fix ci failures

fix test_determination.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37017

Differential Revision: D21238446

Pulled By: Krovatkin

fbshipit-source-id: 393e6135883dc5ac57bdff580de96c66829d454c
2020-04-28 19:16:52 -07:00
Nikita Shulga
47c4dca1ab Remove python-2 or python<3.5 checks from unit tests (#37252)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37252

Test Plan: CI

Differential Revision: D21241083

Pulled By: malfet

fbshipit-source-id: 44164b822f7905288abb2beda0175d2162d86143
2020-04-24 17:42:04 -07:00
Jerry Zhang
230b68168b [quant] Refactor test files (#36964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36964

Rename and restructure quantization related tests
https://github.com/pytorch/pytorch/issues/31625

Test Plan:
.

Imported from OSS

Differential Revision: D21192509

fbshipit-source-id: 148c93e86e0ea68ab18a067fe74a8035a29a1e4e
2020-04-23 10:28:56 -07:00
David Reiss
e75fb4356b Remove (most) Python 2 support from Python code (#35615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).

Test Plan: CI

Differential Revision: D20842886

Pulled By: dreiss

fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
2020-04-22 09:23:14 -07:00
Jerry Zhang
57c50db441 [reland][quant] Add backward compatibility test (#36842)
Summary:
Re-created the same PR (https://github.com/pytorch/pytorch/pull/36639)
because ghimport does not support importing binary files right now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36842

Test Plan: python test/quantization/test_backward_compatibility.py

Differential Revision: D21100689

Pulled By: jerryzh168

fbshipit-source-id: 625a0f9da98138c9c2891b9d99fc45d85fa27cca
2020-04-17 21:24:31 -07:00
Xingying Cheng
86f354c530 Python binding api to optimize for mobile model on script module. (#36357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36357
ghstack-source-id: 101907180

Creates a Python API entry point to optimize mobile models; it takes a scripted module as an argument and returns an optimized scripted module. The initial optimization features include inserting and folding prepack ops.
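A minimal usage sketch, assuming the entry point lands as `torch.utils.mobile_optimizer.optimize_for_mobile`:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile  # assumed path

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

scripted = torch.jit.script(M().eval())
optimized = optimize_for_mobile(scripted)  # returns an optimized ScriptModule
print(optimized(torch.ones(2)))
```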

Test Plan: python test/test_optimizer.py

Differential Revision: D20946076

fbshipit-source-id: 93cb4a5bb2371128f802d738eb26d0a4f3b2fe10
2020-04-17 16:21:27 -07:00
Mike Ruberry
f00014b790 Revert D21080503: [pytorch][PR] [quant] Add backward compatibility test
Test Plan: revert-hammer

Differential Revision:
D21080503

Original commit changeset: 1dca08208bcc

fbshipit-source-id: 5cd8c22130ff28b9231f657f80961e94b65b5792
2020-04-16 22:03:12 -07:00
Jerry Zhang
484a00b2d3 [quant] Add backward compatibility test (#36771)
Summary:
Re-created the same PR (https://github.com/pytorch/pytorch/pull/36639)
because ghimport does not support importing binary files right now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36771

Test Plan: python test/quantization/test_backward_compatibility.py

Differential Revision: D21080503

Pulled By: jerryzh168

fbshipit-source-id: 1dca08208bccead60bba03e5fb5d39e1a1d7c20d
2020-04-16 19:00:30 -07:00
Haixin Liu
455d4aab64 [PyTorch Numeric Suite] Add weight compare API (#36186)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36186

Starts PyTorch Numeric Suite under PyTorch quantization and adds a weight compare API to it.
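A rough sketch of the intended call pattern, assuming the `compare_weights` API under `torch.quantization._numeric_suite` (exact handling of packed/quantized weights may differ):

```python
import torch
import torch.quantization
import torch.quantization._numeric_suite as ns  # assumed module path

float_model = torch.nn.Sequential(torch.nn.Linear(4, 4)).eval()
qmodel = torch.quantization.quantize_dynamic(float_model, {torch.nn.Linear})

# Maps each weight name to its float and quantized counterparts.
wt_compare = ns.compare_weights(float_model.state_dict(), qmodel.state_dict())
for key in wt_compare:
    print(key)
```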
ghstack-source-id: 102062165

Test Plan: buck test mode/dev caffe2/test:quantization -- 'test_compare_weights'

Differential Revision: D20903395

fbshipit-source-id: 125d84569837142626a0e2119b3b7657a32dbf4e
2020-04-13 19:02:00 -07:00
Thomas Viehmann
d070c0bcf0 ROCm: enable cpp_extensions.load/load_inline (#35897)
Summary:
This enables cpp_extensions.load/load_inline, which works by hipify-ing the CUDA sources. Also enables the tests.
CuDNN/MIOpen extensions aren't yet supported; I propose not to do that in this PR.
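For reference, a minimal `load_inline` example; the extension and function names here are arbitrary, and on ROCm the sources would be hipified transparently:

```python
import torch
from torch.utils.cpp_extension import load_inline

cpp_source = """
torch::Tensor add_one(torch::Tensor x) { return x + 1; }
"""

# Compiles the extension at runtime and auto-generates a binding
# for each name listed in `functions`.
ext = load_inline(name="add_one_ext", cpp_sources=cpp_source,
                  functions=["add_one"])
print(ext.add_one(torch.zeros(3)))
```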
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35897

Differential Revision: D20983279

Pulled By: ezyang

fbshipit-source-id: a5d0f5ac592d04488a6a46522c58e2ee0a6fd57c
2020-04-13 11:44:08 -07:00
David Reiss
fab06bfb75 Add utility for bundling sample inputs with models (#35631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35631

Bundling sample inputs with our models with a standardized interface
will make it possible to write benchmarking and code-coverage tools that
call all models in a uniform way.  The intent is to make this a standard
for mobile models within Facebook.  Putting it in torch/utils so tests
can run on GitHub and because it might be useful for others as well.

`augment_model_with_bundled_inputs` is the primary entry point.  See
its docstring for usage information and the test for some example uses.
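A minimal usage sketch (each bundled input is a tuple of `forward` arguments):

```python
import torch
from torch.utils.bundled_inputs import augment_model_with_bundled_inputs

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

m = torch.jit.script(M())
augment_model_with_bundled_inputs(m, [(torch.zeros(2, 2),)])

print(m.get_all_bundled_inputs()[0])  # the stored inputs
print(m.run_on_bundled_input(0))      # runs forward() on bundled input 0
```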

One design question I had was how much power should be available for
automatic deflating and inflating of inputs.  The current scheme gives
some automatic handling and a reasonable escape hatch
("_bundled_input_inflate_format") for top-level tensor arguments, but no
automatic support for (e.g.) tensors in tuples or long strings.  For
more complex cases, we have the ultimate escape hatch of just defining
_generate_bundled_inputs in the model.

Another design question was whether to add the inputs to the model or
wrap the model in a wrapper module that had these methods and delegated
calls to `forward`.  Because models can have other exposed methods and
attributes, the wrapped seemed too onerous.

Test Plan: Unit test.

Differential Revision: D20925013

Pulled By: dreiss

fbshipit-source-id: 4dbbb4cce41e5752133b4ecdb05e1c92bac6b2d5
2020-04-08 13:10:36 -07:00
Johannes M Dieterich
45fc881f05 [ROCm] Hotfix: Black list tensorexpr test set that has failures on ROCm (#36049)
Summary:
The test set was enabled with ROCm failures in https://github.com/pytorch/pytorch/pull/35914 - blacklist it for now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36049

Differential Revision: D20869814

Pulled By: zou3519

fbshipit-source-id: fcdb2abc9f3407344b56cf8d48b7740008317020
2020-04-06 13:26:05 -07:00
David Reiss
a054d05707 Add torch.utils.show_pickle for showing pickle contents in saved models (#35168)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35168

Sometimes when a saved model isn't working, it's nice to be able to look
at the contents of the pickle files.  Unfortunately, pickletools output
isn't particularly readable, and unpickling is often either not possible
or runs so much post-processing code that it's not possible to tell
exactly what is present in the pickled data.

This script uses a custom Unpickler to unpickle (almost) any data into
stub objects that have no dependency on torch or any other runtime types
and suppress (almost) any postprocessing code.
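The core idea, as a standalone Python sketch (this is not the actual show_pickle code): resolve every global to a stub so nothing real is imported or constructed:

```python
import datetime
import io
import pickle

class Stub:
    def __init__(self, module, name):
        self.module, self.name = module, name

    def __call__(self, *args):
        # Record the constructor call instead of executing it.
        return (f"{self.module}.{self.name}", args)

class StubUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        return Stub(module, name)  # never import the real class

data = pickle.dumps(datetime.date(2020, 4, 3))
print(StubUnpickler(io.BytesIO(data)).load())
# -> ('datetime.date', (b'\x07\xe4\x04\x03',))
```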

As a convenience, the wrapper can search through zip files, supporting
command lines like

`python -m torch.utils.show_pickle /path/to/model.pt1@*/data.pkl`

When the module is invoked as main, we also install a hack in pprint to
allow semi-reasonable formatting of our stub objects.

Test Plan: Ran it on a data.pkl, constants.pkl, and a debug pkl

Differential Revision: D20842550

Pulled By: dreiss

fbshipit-source-id: ef662d8915fc5795039054d1f8fef2e1c51cf40a
2020-04-03 15:11:20 -07:00
Mikhail Zolotukhin
ba3cec867f Reenable test/test_tensorexpr.py (#35914)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35914

Test Plan: Imported from OSS

Differential Revision: D20827188

Pulled By: ZolotukhinM

fbshipit-source-id: ffcc1bb0396a0a19afb577a7ab4ca95c7e4ced37
2020-04-03 12:20:31 -07:00
Will Feng (FAIAR)
2fa3c1570d Refactor C++ API parity test mechanism and turn it on in CI again (#35190)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35190

The following are the main changes:
- The main logic of C++ API parity test mechanism is moved from `test/test_cpp_api_parity.py` to `test/cpp_api_parity/module_impl_check.py` and `test/cpp_api_parity/functional_impl_check.py`, so that there is a clear separation between module tests and functional tests, although they still share a lot of common utility functions which are all in `test/cpp_api_parity/utils.py`.
- Module init tests (i.e. testing whether C++ module accepts the same constructor options as the corresponding Python module) is removed and will be added again in the future.
- `cpp_constructor_args` / `cpp_options_args` / `cpp_function_call` are added as appropriate to all test params dicts in `torch/testing/_internal/common_nn.py`, to indicate how to run the C++ API parity test for each test params dict; an illustrative entry is sketched below.
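An illustrative entry in that style (the values here are invented for this sketch; the real entries live in common_nn.py):

```python
# Hypothetical test-params dict in the style of common_nn.py: the
# cpp_constructor_args string tells the parity test how to construct
# the C++ module mirroring the Python constructor_args.
linear_test = dict(
    module_name='Linear',
    constructor_args=(10, 8),
    cpp_constructor_args='torch::nn::LinearOptions(10, 8)',
    input_size=(4, 10),
)
```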

Test Plan: Imported from OSS

Differential Revision: D20588198

Pulled By: yf225

fbshipit-source-id: 11238c560c8247129584b9b49df73fff40c4d81d
2020-04-03 11:20:36 -07:00
Feng Tian
762270c51f add c10d dynamic loading mechanism and unit test (#28068)
Summary:
The original pytorch c10d only supports built-in backends such as nccl/gloo/mpi. This patch extends c10d to support dynamically loading 3rd-party communication libraries that are derived from the ProcessGroup base class.

related RFC is in: https://github.com/pytorch/pytorch/issues/27955

This way, the user just needs to specify a 3rd-party c10d backend name when invoking torch.distributed.init_process_group(), and the proposed logic will try to load the corresponding c10d backend cpp extension automatically. For how to develop a new 3rd-party c10d backend through a cpp extension, please refer to test/cpp_extensions/cpp_c10d_extension.cpp.
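The resulting call pattern looks roughly like this; the backend name is hypothetical, and the matching cpp extension must be installed for the automatic load to succeed:

```python
import torch.distributed as dist

# "fancycl" stands in for a 3rd-party backend name; c10d will try to
# load the correspondingly named cpp extension automatically.
dist.init_process_group(
    backend="fancycl",
    init_method="tcp://127.0.0.1:23456",
    rank=0,
    world_size=1,
)
```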
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28068

Differential Revision: D19174838

Pulled By: agolynski

fbshipit-source-id: 3409a504a43ce7260e6f9d1207c00e87471fac62
2020-04-02 15:46:51 -07:00
Nick Korovaiko
ddcad5b9ca temp disable test_tensorexpr.py
Test Plan: test on CI

Reviewed By: soumith

Differential Revision: D20823336

fbshipit-source-id: 65c04bc57c6a120003cb561613645d2d7e60189c
2020-04-02 14:28:22 -07:00
Christian Sarofeen
6d24f8fe21 Infrastructure for a new CUDA Fuser (#34785)
Summary:
**Summary:** This PR contains the infrastructure of a new CUDA fuser. This CUDA fuser is based on many of the same principles as TensorExpressions and Halide; however, the implementation is built from the ground up. The fusion pass itself is similar to the default CUDA fuser, but it has undergone some refactoring and uses the new code generation infrastructure.

For those interested in how the code generation in this PR works, I would recommend reviewing _test/cpp/jit/test_gpu_fusion.cpp_ as well as the long comment section at the beginning of _torch/csrc/jit/codegen/cuda/transform_replay.h_. One of the largest differences between our approach and that of TVM/Halide is the concept of "TensorView". A TensorView should, from a high level, be thought of similarly to how we think of working with Tensors in PyTorch: it is an N-D object which can undergo transformations that change its dimensionality. Dimensionality changes are done through the operations split/merge/reorder/computeAt. These transformations are similar to split/fuse/reorder/compute_at of TVM; they modify how a tensor is iterated over to generate GPU code. Interestingly, in our scheme these transformations are applied to tensors and only impact how that tensor is generated.

**Warning:** This PR is purposefully not feature complete with the current fuser. We wanted to separate out the infrastructure from the fusion capabilities. Once in, smaller incremental PRs will be submitted to expand capabilities of the fuser.

**Short term goals:**

Parity with current CUDA fuser (including performance):
- Dynamic shapes (no recompilation)
- Implicit handling of broadcast (broadcasted tensors are treated as tensors of the broadcasted size in the generated code)
- Dropout

**Mid-term goals:**

- Transposes fused with pointwise operations where transpose involves only 2 axes (across the fused operation).
- 1-D reductions fused with pointwise operations
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34785

Reviewed By: ZolotukhinM

Differential Revision: D20650977

Pulled By: soumith

fbshipit-source-id: ee39c95a880e1b9822e874ed4cc180971572bf63
2020-04-02 09:22:42 -07:00
Nick Korovaiko
2f50c11954 add test_tensorexpr.py (#35776)
Summary:
Adding `test_tensorexpr.py` to our CI. There are a few complications: the first is that we now always run `SimpleIREval` as part of the simplifier, so the counts will always be greater than one. We could invest some effort to differentiate between a real codegen call to `SimpleIREval` and calls from the simplifier, but it's probably not that important. The second change turns a failure to retrieve a counter into a default value of 0: the tests are structured to check for either the LLVM or the SimpleIREval backend, so it seems appropriate not to fail the test too early.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35776

Differential Revision: D20799333

Pulled By: Krovatkin

fbshipit-source-id: 2a94ff98e647180c6e6aea141a411c3376c509f9
2020-04-01 22:05:37 -07:00
Jerry Zhang
ab26dfb44e [quant] Move quantization tests into test/quantization (#35812)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35812

Test Plan:
.

Imported from OSS

Differential Revision: D20795329

fbshipit-source-id: 42cc905c44ce7b86720aeef512d747ff6788d7a2
2020-04-01 12:44:19 -07:00
Michael Suo
319aee1afb Revert D20771828: [quant] Move quantization tests into test/quantization
Test Plan: revert-hammer

Differential Revision:
D20771828

Original commit changeset: 5f1df5e86c29

fbshipit-source-id: d14f915f291ae8a90026c5b65624459211495f47
2020-03-31 23:01:00 -07:00
Jerry Zhang
fef6c617d4 [quant] Move quantization tests into test/quantization (#35688)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35688

Test Plan:
.

Imported from OSS

Differential Revision: D20771828

fbshipit-source-id: 5f1df5e86c29f7bdfbdc6563450e909b3bfdc07a
2020-03-31 20:30:57 -07:00
Johannes M Dieterich
0eb26fb01e [ROCm] Properly blacklist (#35230)
Summary:
```
test_python_all_except_nn
+ /usr/bin/python3.6 test/run_test.py --exclude test_nn test_jit_simple
test_jit_legacy test_jit_fuser_legacy --verbose --bring-to-front
test_quantization test_quantized test_quantized_tensor
test_quantized_nn_mods --determine-from=
```

test_nn continues to be run as part of the test1 target.

This allows us to run run_test.py and correctly disable these test sets for ROCm.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35230

Differential Revision: D20735851

Pulled By: ezyang

fbshipit-source-id: 255d21374c9605c8f8b6ffa1b08f58fb10d8e543
2020-03-30 08:57:03 -07:00
Omkar Salpekar
4025729e88 [1.5 Release][RPC Reliability] RRef Idempotency and RPC Retry enablement (#33636)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33636

Fixes https://github.com/pytorch/pytorch/issues/32119, https://github.com/pytorch/pytorch/issues/26116,
https://github.com/pytorch/pytorch/issues/33072

Makes RRef control messages idempotent and enables sending with retries for distributed autograd cleanup and RRef internal messages.

In order to effectively test whether RRef and distributed autograd cleanup work with network failures/retries, I implemented an RPC Agent with a faulty send function, and enabled running tests using this as a third backend (in addition to Thrift and PGA). The tests using this backend are in a separate class (the test cases are similar, but with minor changes to ensure short-running tests wait for retried RPCs to finish).

This faulty RPC Agent is pretty configurable. The tests can configure which message types to fail and how many messages to fail, but going forward, other RPC functionality can be overridden with faulty methods to test with failures injected.
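A toy sketch of the fault-injection idea (all names are invented; the real agent overrides the RPC agent's send path):

```python
class FaultySender:
    """Fail the first `num_fail` sends of the configured message types,
    forcing callers down the retry path."""

    def __init__(self, send_fn, fail_types, num_fail):
        self.send_fn = send_fn
        self.fail_types = set(fail_types)
        self.remaining = num_fail

    def send(self, msg_type, payload):
        if msg_type in self.fail_types and self.remaining > 0:
            self.remaining -= 1
            raise ConnectionError(f"injected failure for {msg_type}")
        return self.send_fn(msg_type, payload)

sent = []
sender = FaultySender(lambda t, p: sent.append((t, p)),
                      fail_types={"RREF_USER_DELETE"}, num_fail=1)
for attempt in range(2):  # caller retries on failure
    try:
        sender.send("RREF_USER_DELETE", b"payload")
        break
    except ConnectionError:
        pass
print(sent)  # [('RREF_USER_DELETE', b'payload')] -- succeeded on retry
```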

Differential Revision: D20019236

fbshipit-source-id: 540a977e96b2e29aa0393ff12621fa293fe92b48
2020-03-20 20:07:47 -07:00