Commit Graph

801 Commits

Xiang Gao
b822aba8ec Enable BFloat support for gemms on arch other than ampere (#50442)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50442

Reviewed By: bdhirsh

Differential Revision: D26044981

Pulled By: mruberry

fbshipit-source-id: 65c42f2c1de8d24e4852a1b5bd8f4b1735b2230e
2021-01-26 11:07:07 -08:00
Sam Estep
6dda0363bb [reland] Refactor mypy configs list into editor-friendly wrapper (#50826)
Summary:
Closes https://github.com/pytorch/pytorch/issues/50513 by resolving all four checkboxes. If this PR is merged, I will also modify one or both of the following wiki pages to add instructions on how to use this `mypy` wrapper for VS Code editor integration:

- [Guide for adding type annotations to PyTorch](https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch)
- [Lint as you type](https://github.com/pytorch/pytorch/wiki/Lint-as-you-type)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50826

Test Plan:
Unit tests for globbing function:
```
python test/test_testing.py TestMypyWrapper -v
```

Manual checks:

- Uninstall `mypy` and run `python test/test_type_hints.py` to verify that it still works when `mypy` is absent.
- Reinstall `mypy` and run `python test/test_type_hints.py` to verify that this didn't break the `TestTypeHints` suite.
- Run `python test/test_type_hints.py` again (should finish quickly) to verify that this didn't break `mypy` caching.
- Run `torch/testing/_internal/mypy_wrapper.py` on a few Python files in this repo to verify that it doesn't give any additional warnings when the `TestTypeHints` suite passes. Some examples (compare with the behavior of just running `mypy` on these files):
  ```sh
  torch/testing/_internal/mypy_wrapper.py $PWD/README.md
  torch/testing/_internal/mypy_wrapper.py $PWD/tools/fast_nvcc/fast_nvcc.py
  torch/testing/_internal/mypy_wrapper.py $PWD/test/test_type_hints.py
  torch/testing/_internal/mypy_wrapper.py $PWD/torch/random.py
  torch/testing/_internal/mypy_wrapper.py $PWD/torch/testing/_internal/mypy_wrapper.py
  ```
- Remove type hints from `torch.testing._internal.mypy_wrapper` and verify that running `mypy_wrapper.py` on that file gives type errors.
- Remove the path to `mypy_wrapper.py` from the `files` setting in `mypy-strict.ini` and verify that running it again on itself no longer gives type errors.
- Add `test/test_type_hints.py` to the `files` setting in `mypy-strict.ini` and verify that running the `mypy` wrapper on it again now gives type errors.
- Change a return type in `torch/random.py` and verify that running the `mypy` wrapper on it again now gives type errors.
- Add the suggested JSON from the docstring of `torch.testing._internal.mypy_wrapper.main` to your `.vscode/settings.json` and verify that VS Code gives the same results (inline, while editing any Python file in the repo) as running the `mypy` wrapper on the command line, in all the above cases.

Reviewed By: walterddr

Differential Revision: D26049052

Pulled By: samestep

fbshipit-source-id: 0b35162fc78976452b5ea20d4ab63937b3c7695d
2021-01-26 09:04:14 -08:00
Richard Zou
83315965ab Turn on batched grad testing for CriterionTest (#50744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50744

This PR adds a `check_batched_grad=True` option to CriterionTest and
turns it on by default for all CriterionTest-generated tests

Test Plan: - run tests

Reviewed By: ejguan

Differential Revision: D25997676

Pulled By: zou3519

fbshipit-source-id: cc730731e6fae2bddc01bc93800fd0e3de28b32d
2021-01-26 07:37:15 -08:00
Yanli Zhao
250c71121b Create a DDPLoggingData and expose it to python interface (#50622)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50622

1. Define a DDPLoggingData struct that is the placeholder for all the ddp related logging fields
2. Put the DDPLoggingData struct in the C10 directory so that it can be easily imported by c10 and torch files
3. Expose get_ddp_logging_data() method in python so that users can get the logging data and dump in their applications
4. Unit tests verify that the logging data can be set and retrieved as expected
5. Follow-ups will add more logging fields such as perf stats, internal states, and env variables
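
As a hedged sketch of how the exposed method might be called (the method name comes from this summary; the surrounding process-group and DDP setup is assumed and not shown):
```python
import torch

# Hypothetical usage; assumes an initialized process group and a model
# already wrapped in DistributedDataParallel.
def dump_ddp_logging_data(model: torch.nn.parallel.DistributedDataParallel):
    data = model.get_ddp_logging_data()  # DDPLoggingData fields from this PR
    print(data)
```
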
ghstack-source-id: 120275870

Test Plan: unit tests

Reviewed By: SciPioneer

Differential Revision: D25930527

fbshipit-source-id: 290c200161019c58e28eed9a5a2a7a8153113f99
2021-01-25 15:23:07 -08:00
Ivan Yashchuk
ddf26816d3 Make torch.svd return V, not V.conj() for complex inputs (#51012)
Summary:
**BC-breaking note:**

torch.svd() added support for complex inputs in PyTorch 1.7, but was not documented as doing so. The complex "V" tensor returned was actually the complex conjugate of what's expected. This PR fixes the discrepancy.

This will silently break all users of torch.svd() with complex inputs.

**Original PR Summary:**

This PR resolves https://github.com/pytorch/pytorch/issues/45821.

The problem was that when support for complex inputs was introduced for `torch.svd`, it was overlooked that LAPACK/MAGMA returns the conjugate transpose of the V matrix, not just the transpose of V. So `torch.svd` was silently returning U, S, V.conj() instead of U, S, V.

Behavior of `torch.linalg.pinv`, `torch.pinverse` and `torch.linalg.svd` (they depend on `torch.svd`) is not changed in this PR.
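
A minimal sketch of the post-fix semantics (not from the PR itself): with V rather than V.conj() returned, a complex input reconstructs as U @ diag(S) @ V^H.
```python
import torch

# After this fix, torch.svd returns V itself for complex inputs, so the
# standard reconstruction A = U S V^H holds.
a = torch.randn(4, 4, dtype=torch.complex128)
u, s, v = torch.svd(a)
reconstructed = u @ torch.diag(s).to(a.dtype) @ v.conj().t()
assert torch.allclose(reconstructed, a)
```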

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51012

Reviewed By: bdhirsh

Differential Revision: D26047593

Pulled By: albanD

fbshipit-source-id: d1e08dbc3aab9ce1150a95806ef3b5da98b5d3ca
2021-01-25 14:06:41 -08:00
Peter Bell
8690819618 OpInfo: Add DecorateInfo class similar to SkipInfo for decorators (#50501)
Summary:
Follow up to https://github.com/pytorch/pytorch/issues/50435

I have confirmed this works by running
```
pytest test_ops.py -k test_fn_gradgrad_fft
```
both normally and with `PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1`. In the first case all tests are skipped; in the second they all run as they should.
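
A hedged sketch of the new class (the exact signature is an assumption based on this description and its similarity to SkipInfo):
```python
import unittest
from torch.testing._internal.common_methods_invocations import DecorateInfo

# Like SkipInfo, DecorateInfo targets one generated test by class and name,
# but applies an arbitrary decorator instead of only skipping.
slow_only = DecorateInfo(
    unittest.skip("Skipped!"),            # decorator to apply
    'TestGradients', 'test_fn_gradgrad',  # generated test class / test name
)
```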

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50501

Reviewed By: ezyang

Differential Revision: D25956416

Pulled By: mruberry

fbshipit-source-id: c896a8cec5f19b8ffb9b168835f3743b6986dad7
2021-01-25 04:51:04 -08:00
anjali411
e544d74c55 [CPU] Add torch.trace for complex tensors (#50380)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50380

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D25949361

Pulled By: anjali411

fbshipit-source-id: 9910bc5b532c9bf3add530221d643b2c41c62d01
2021-01-23 09:04:31 -08:00
Wanchao Liang
2c3c2a4b7a [dist_optim] add distributed functional AdamW optimizer (#50620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50620

Add TorchScript compatible AdamW functional optimizer to distributed optimizer
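
A sketch, under assumptions, of how this surfaces to users (an active RPC framework and a list of remote parameter RRefs are hypothetical setup not shown here):
```python
import torch
from torch.distributed.optim import DistributedOptimizer

def make_dist_optim(remote_param_rrefs):
    # DistributedOptimizer can now be handed AdamW (and, per the sibling
    # commits, RMSprop, Adadelta, SGD, and Adam) and use the TorchScript
    # compatible functional variant under the hood.
    return DistributedOptimizer(torch.optim.AdamW, remote_param_rrefs, lr=1e-3)
```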

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932774

Pulled By: wanchaol

fbshipit-source-id: 64eb4aeaa3cab208d0ebbec7c4d91a9d43951947
2021-01-23 01:04:45 -08:00
Wanchao Liang
3f982e56b1 [dist_optim] add distributed functional RMSprop optimizer (#50619)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50619

Add TorchScript compatible RMSprop functional optimizer to distributed optimizer

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932775

Pulled By: wanchaol

fbshipit-source-id: bd4854f9f95a740e02a1bebe24f780488460ba4d
2021-01-23 01:04:41 -08:00
Wanchao Liang
6c81b4d917 [dist_optim] add distributed functional Adadelta optimizer (#50623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50623

Add TorchScript compatible Adadelta functional optimizer to distributed optimizer

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932772

Pulled By: wanchaol

fbshipit-source-id: d59b04e5f0b6bab7e0d1c5f68e66249a65958e0b
2021-01-23 01:04:36 -08:00
Wanchao Liang
cd2067539e [dist_optim] add distributed functional sgd optimizer (#50618)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50618

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932778

Pulled By: wanchaol

fbshipit-source-id: 8df3567b477bc5ba3556b8c5294cd3da5db963ad
2021-01-23 01:04:32 -08:00
Wanchao Liang
5cbe1e4933 [dist_optim] add distributed functional Adam optimizer (#50624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50624

Add TorchScript compatible Adam functional optimizer to distributed optimizer

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932770

Pulled By: wanchaol

fbshipit-source-id: cab3f1164c76186969c284a2c52481b79bbb7190
2021-01-23 01:01:37 -08:00
Rohan Varma
5a661e0171 [WIP][Grad Compression] Unittest to verify allreduce_hook parity (#50851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50851

Improves upon the previous unittest to ensure allreduce_hook results in the same gradients as vanilla allreduce in DDP.

ghstack-source-id: 120229103

Test Plan:
buck build mode/dev-nosan //caffe2/test/distributed:distributed_nccl_fork --keep-going
BACKEND=nccl WORLD_SIZE=2 ~/fbcode/buck-out/dev/gen/caffe2/test/distributed/distributed_nccl_fork#binary.par -r test_ddp_hook_parity

Reviewed By: SciPioneer

Differential Revision: D25963654

fbshipit-source-id: d55eee0aee9cf1da52aa0c4ba1066718aa8fd9a4
2021-01-23 00:47:08 -08:00
Sam Estep
5c1c858ca8 Revert D25977352: [pytorch][PR] Refactor mypy configs list into editor-friendly wrapper
Test Plan: revert-hammer

Differential Revision: D25977352 (73dffc8452)

Original commit changeset: 4b3a5e8a9071

fbshipit-source-id: a0383ea4158f54be6f128b9ddb2cd12fc3a3ea53
2021-01-22 15:53:44 -08:00
Richard Zou
63838b9330 Turn on batched_grad testing for NewModuleTest (#50740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50740

This PR adds a `check_batched_grad=True` option to
NewModuleTest-generated NN tests.

Test Plan: - run tests (`pytest test/test_nn.py -v -rf`)

Reviewed By: ejguan

Differential Revision: D25997679

Pulled By: zou3519

fbshipit-source-id: b75e73d7e86fd3af9bad6efed7127b36551587b3
2021-01-22 15:33:09 -08:00
Sam Estep
73dffc8452 Refactor mypy configs list into editor-friendly wrapper (#50826)
Summary:
Closes https://github.com/pytorch/pytorch/issues/50513 by resolving the first three checkboxes. If this PR is merged, I will also modify one or both of the following wiki pages to add instructions on how to use this `mypy` wrapper for VS Code editor integration:

- [Guide for adding type annotations to PyTorch](https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch)
- [Lint as you type](https://github.com/pytorch/pytorch/wiki/Lint-as-you-type)

The test plan below is fairly manual, so let me know if I should add more automated tests to this PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50826

Test Plan:
Unit tests for globbing function:
```
python test/test_testing.py TestMypyWrapper -v
```

Manual checks:

- Uninstall `mypy` and run `python test/test_type_hints.py` to verify that it still works when `mypy` is absent.
- Reinstall `mypy` and run `python test/test_type_hints.py` to verify that this didn't break the `TestTypeHints` suite.
- Run `python test/test_type_hints.py` again (should finish quickly) to verify that this didn't break `mypy` caching.
- Run `torch/testing/_internal/mypy_wrapper.py` on a few Python files in this repo to verify that it doesn't give any additional warnings when the `TestTypeHints` suite passes. Some examples (compare with the behavior of just running `mypy` on these files):
  ```sh
  torch/testing/_internal/mypy_wrapper.py README.md
  torch/testing/_internal/mypy_wrapper.py tools/fast_nvcc/fast_nvcc.py
  torch/testing/_internal/mypy_wrapper.py test/test_type_hints.py
  torch/testing/_internal/mypy_wrapper.py torch/random.py
  torch/testing/_internal/mypy_wrapper.py torch/testing/_internal/mypy_wrapper.py
  ```
- Remove type hints from `torch.testing._internal.mypy_wrapper` and verify that running `mypy_wrapper.py` on that file gives type errors.
- Remove the path to `mypy_wrapper.py` from the `files` setting in `mypy-strict.ini` and verify that running it again on itself no longer gives type errors.
- Add `test/test_type_hints.py` to the `files` setting in `mypy-strict.ini` and verify that running the `mypy` wrapper on it again now gives type errors.
- Remove type hints from `torch/random.py` and verify that running the `mypy` wrapper on it again now gives type errors.
- Add the suggested JSON from the docstring of `torch.testing._internal.mypy_wrapper.main` to your `.vscode/settings.json` and verify that VS Code gives the same results (inline, while editing any Python file in the repo) as running the `mypy` wrapper on the command line, in all the above cases.

Reviewed By: glaringlee, walterddr

Differential Revision: D25977352

Pulled By: samestep

fbshipit-source-id: 4b3a5e8a9071fcad65a19f193bf3dc7dc3ba1b96
2021-01-22 13:35:44 -08:00
Peter Bell
db079a9877 Padding: support complex dtypes (#50594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50594

Fixes #50234

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25987316

Pulled By: anjali411

fbshipit-source-id: c298b771fe52b267a86938e886ea402badecfe3e
2021-01-22 11:57:42 -08:00
Kurt Mohler
8ab1a1495d Rename set_deterministic to use_deterministic_algorithms (#49904)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49100
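
A minimal sketch of the renamed API (the paired query function shown is an assumption about the current name, not part of this diff):
```python
import torch

torch.use_deterministic_algorithms(True)   # formerly torch.set_deterministic
print(torch.are_deterministic_algorithms_enabled())  # True
```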

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49904

Reviewed By: ezyang, mrshenli

Differential Revision: D25956761

Pulled By: mruberry

fbshipit-source-id: 86a59289d50825a0ebbd7c358b483c8d8039ffa6
2021-01-22 11:27:07 -08:00
Peter Bell
47f0bda3ef Improve complex support in common_nn test machinery (#50593)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50593

There is no equivalent of torch.FloatTensor or torch.cuda.FloatTensor for complex
types, so `get_gpu_type` and `get_cpu_type` are broken for complex tensors.

Also found a few places that explicitly cast inputs to floating point types,
which would drop the imaginary component before running the test.
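
A small illustration (not from the PR) of the casting pitfall described above:
```python
import torch

# Casting a complex tensor to a floating-point dtype keeps only the real
# part (PyTorch warns about the discarded imaginary component), so a test
# input cast this way no longer exercises complex code paths.
z = torch.tensor([1 + 2j])
print(z.float())  # tensor([1.]); imaginary part dropped
```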

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25954050

Pulled By: mruberry

fbshipit-source-id: 1fa8e5af233aa095c839d5e2f860564baaf92aef
2021-01-22 09:44:45 -08:00
Peter Bell
0436ea125b OpInfo: Remove promotes_integers_to_float and infer it instead (#50279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50279

This allows different sample inputs to have different behavior for the same operator. For example, `div(..., rounding_mode='true')` promotes integers to float, but other rounding modes don't. The current boolean flag is too restrictive to allow this.
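
A sketch of the promotion difference motivating this change, using the current rounding_mode spelling (which may differ from the 'true' string above):
```python
import torch

a = torch.tensor([1, 2, 3])                           # int64 input
print(torch.div(a, 2).dtype)                          # torch.float32: promotes
print(torch.div(a, 2, rounding_mode='floor').dtype)   # torch.int64: does not
```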

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25950011

Pulled By: mruberry

fbshipit-source-id: 7e82b82bedc626b2b6970d92d5b25676183ec384
2021-01-22 09:32:37 -08:00
Richard Zou
c7d348fea6 Turn on batched grad testing for non-autogenerated tests in test_nn.py (#50739)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50739

This does not turn on batched grad testing for autogenerated NewModuleTest
tests and CriterionTest tests. Those are coming later.

Test Plan: - run tests

Reviewed By: ejguan

Differential Revision: D25997677

Pulled By: zou3519

fbshipit-source-id: b4b2d68e0f99c3d573faf237e1e531d0b3fced40
2021-01-22 07:40:20 -08:00
Jagadish Krishnamoorthy
eb0fe70680 [distributed_test]Enable disabled ROCm tests. (#50421)
Summary:
Signed-off-by: Jagadish Krishnamoorthy <jagdish.krishna@gmail.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50421

Reviewed By: ejguan

Differential Revision: D26006844

Pulled By: zhaojuanmao

fbshipit-source-id: aa6ac5ee2d37f354d52328c72eb2cd23f5665f53
2021-01-21 17:22:40 -08:00
Kurt Mohler
c082e2184d Add autograd tests for complex matrix norm nuclear and +/-2 (#50746)
Summary:
Also upgrades `linalg.norm`'s autograd and jit tests to `OpInfo`

Fixes https://github.com/pytorch/pytorch/issues/48842
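
A minimal sketch (not from the PR) of the newly tested cases:
```python
import torch

# Nuclear and +/-2 matrix norms of a complex input, differentiated through
# torch.linalg.norm; these are the orders this PR adds autograd tests for.
a = torch.randn(3, 3, dtype=torch.complex128, requires_grad=True)
for order in ('nuc', 2, -2):
    a.grad = None
    torch.linalg.norm(a, ord=order).backward()
```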

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50746

Reviewed By: mruberry

Differential Revision: D25968246

Pulled By: anjali411

fbshipit-source-id: d457069ddb4caf2a5caed1aa64c791ef0790952c
2021-01-21 15:33:08 -08:00
Richard Zou
16691516a5 Add batched grad testing to OpInfo (#50818)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50818

This PR does two things:
1. Add batched grad testing to OpInfo
2. Improve the error message from `gradcheck` if batched gradient
computation fails to include suggestions for workarounds.

To add batched grad testing to OpInfo, this PR:
- adds new `check_batched_grad=True` and `check_batched_gradgrad=True`
attributes to OpInfo. These are True by default because we expect most
operators to support batched gradient computation.
- If `check_batched_grad=True`, then `test_fn_grad` invokes gradcheck
with `check_batched_grad=True`.
- If `check_batched_gradgrad=True`, then `test_fn_gradgrad` invokes
gradgradcheck with `check_batched_grad=True`.

The improved gradcheck error message looks like the following when an
exception is thrown while computing batched gradients:
https://gist.github.com/zou3519/5a0f46f908ba036259ca5e3752fd642f

Future
- Sometime in the not-near future, we will separate out "batched grad
testing" from "gradcheck" for the purposes of OpInfo to make the
testing more granular and also so that we can test that the vmap
fallback doesn't get invoked (currently batched gradient testing only
tests that the output values are correct).
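
A minimal sketch of what the generated tests invoke, assuming gradcheck's flag matches the name used here:
```python
import torch
from torch.autograd import gradcheck

# With check_batched_grad=True, gradcheck also computes gradients through
# vmap and compares them against per-sample gradients.
x = torch.randn(3, dtype=torch.double, requires_grad=True)
assert gradcheck(torch.sin, (x,), check_batched_grad=True)
```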

Test Plan: - run tests `pytest test/test_ops.py -v -k "Gradients"`

Reviewed By: ejguan

Differential Revision: D25997703

Pulled By: zou3519

fbshipit-source-id: 6d2d444d6348ae6cdc24c32c6c0622bd67b9eb7b
2021-01-21 15:13:06 -08:00
Natalia Gimelshein
4d169258ef Revert D25976245: [pytorch][PR] Enable Skipped ROCM Tests in common_nn.py
Test Plan: revert-hammer

Differential Revision: D25976245 (24a0272132)

Original commit changeset: 801032534f91

fbshipit-source-id: 561e6d761cb694451d5f87557b4f96f37d19dd90
2021-01-21 13:28:37 -08:00
Arindam Roy
24a0272132 Enable Skipped ROCM Tests in common_nn.py (#50753)
Summary:
Removed `test_cuda=(not TEST_WITH_ROCM)` in common_nn.py to enable the skipped tests for ROCm.

Signed-off-by: Arindam Roy <rarindam@gmail.com>

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50753

Reviewed By: mrshenli

Differential Revision: D25976245

Pulled By: ngimel

fbshipit-source-id: 801032534f911d24d231bc9f0d3235a4506412c0
2021-01-21 09:48:47 -08:00
Yi Wang
439afda090 [Gradient Compression] Fix warm-start for PowerSGD layerwise compression (#50283)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50283

Realized that for layerwise compression, the previous warm-start implementation only skips memory allocations, but does not skip filling random values into the Qs.

Also fix the unit test in distributed_test.py. Previously the process group was not created correctly, so no communication occurred in test_DistributedDataParallel_powerSGD_ddp_comm_hook.
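
For context, a sketch (with an assumed, hypothetical DDP model and an initialized process group) of registering the layerwise hook whose warm-start path this fixes:
```python
import torch
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

def register_powersgd(model: torch.nn.parallel.DistributedDataParallel):
    state = powerSGD.PowerSGDState(process_group=None,  # default group
                                   matrix_approximation_rank=1,
                                   warm_start=True)
    model.register_comm_hook(state, powerSGD.powerSGD_hook)
```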

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120101220

Test Plan:
Verified the fix by adding some logging locally.

Also verified no NE diff on Ads 1x.

Reviewed By: rohan-varma

Differential Revision: D25846222

fbshipit-source-id: 1ebeeb55ceba64d4d904ea6ac1bb42b1b2241520
2021-01-20 22:31:44 -08:00
anjali411
7fdc6a27b8 Skip test_variant_consistency_eager_addr_cpu_bfloat16 (#50836)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50836

Fixes the broken master

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D25981125

Pulled By: anjali411

fbshipit-source-id: 4043b6a7287700c7c9f0ce703eef53bb666ff655
2021-01-20 16:03:00 -08:00
Alex Suhan
1bde5a216f [TensorExpr] Use wider type for scalars (#50774)
Summary:
Scalars have to be double / 64-bit integers to match eager semantics.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50774

Test Plan: python test/test_jit_fuser_te.py -k TestTEFuser.test_clamp

Reviewed By: ngimel

Differential Revision: D25978214

Pulled By: asuhan

fbshipit-source-id: ba765b7d215239f2bf0f3d467e4dce876f7ccb91
2021-01-20 15:12:27 -08:00
Xiang Gao
44922f26f5 Add support for NCCL alltoall (#44374)
Summary:
NCCL `alltoall_single` was already added in https://github.com/pytorch/pytorch/issues/42514. This PR adds NCCL `alltoall`.

The difference between `alltoall_single` and `alltoall` is: `alltoall_single` works on a single tensor and sends/receives slices of that tensor, while `alltoall` works on a list of tensors and sends/receives the tensors in that list.
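
A sketch under assumptions (an already-initialized NCCL process group and one GPU per rank), using the public Python names `all_to_all` and `all_to_all_single`:
```python
import torch
import torch.distributed as dist

def demo_alltoall():
    world = dist.get_world_size()
    rank = dist.get_rank()

    # all_to_all: a list of tensors, one sent to / received from each rank.
    inputs = [torch.full((2,), float(rank), device='cuda') for _ in range(world)]
    outputs = [torch.empty(2, device='cuda') for _ in range(world)]
    dist.all_to_all(outputs, inputs)

    # all_to_all_single: one contiguous tensor whose equal slices are exchanged.
    inp = torch.full((2 * world,), float(rank), device='cuda')
    out = torch.empty(2 * world, device='cuda')
    dist.all_to_all_single(out, inp)
```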

cc: ptrblck ngimel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44374

Reviewed By: zhangguanheng66, mrshenli

Differential Revision: D24455427

Pulled By: srinivas212

fbshipit-source-id: 42fdebdd14f8340098e2c34ef645bd40603552b1
2021-01-20 14:57:12 -08:00
Vasiliy Kuznetsov
ac8e90fa6d quantization: Linear + BatchNorm1d fusion (#50748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50748

Adds support for Linear + BatchNorm1d fusion to quantization.

This is a redo of dreiss's https://github.com/pytorch/pytorch/pull/37467, faster
to copy-paste it than rebase and deal with conflicts.
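
A minimal sketch of the fusion this enables (eager-mode quantization; Linear + BatchNorm1d folding requires eval mode):
```python
import torch

m = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.BatchNorm1d(4)).eval()
fused = torch.quantization.fuse_modules(m, [['0', '1']])
print(type(fused[0]).__name__)  # Linear, with the BN statistics folded in
print(type(fused[1]).__name__)  # Identity
```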

Test Plan:
```
python test/test_quantization.py TestFusion.test_fusion_linear_bn_eval
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D25957432

fbshipit-source-id: 24e5b760f70186aa953ef65ab0182770e89495e4
2021-01-20 12:59:02 -08:00
Kyle Chen
16faabe7f0 [ROCm] re-enable tests (#50691)
Summary:
Signed-off-by: Kyle Chen <kylechen@amd.com>

cc: jeffdaily

Re-enables the previously disabled tests in test_torch.py and test_unary_ufuncs.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50691

Reviewed By: mruberry

Differential Revision: D25967842

Pulled By: ngimel

fbshipit-source-id: dc0f6cb68fe4d151c2719bdf67ead96e1396acf2
2021-01-20 11:23:39 -08:00
Xiang Gao
4f3cdd971c Fix test_dispatch.py when running with TORCH_SHOW_CPP_STACKTRACES=1 (#50509)
Summary:
`test_dispatch.py` has many asserts about the error message. When running with `TORCH_SHOW_CPP_STACKTRACES=1`, the error message is different from when `TORCH_SHOW_CPP_STACKTRACES=0`, which makes many tests in `test_dispatch.py` fail. This PR fixes these failures when running with `TORCH_SHOW_CPP_STACKTRACES=1`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50509

Reviewed By: ngimel

Differential Revision: D25956853

Pulled By: ezyang

fbshipit-source-id: 3b3696742a7dfb8f52f23a364838ec96945c5662
2021-01-20 10:15:01 -08:00
Lillian Johnson
f1c578594b JIT Testing: Improve assertAutodiffNode error message (#50626)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50626

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25932184

Pulled By: Lilyjjo

fbshipit-source-id: 6fa5a652eb1a0c10bb9d9040b9a708fdf93aaf46
2021-01-20 10:05:52 -08:00
anjali411
1cc8f8a750 Add complex autograd support and OpInfo based test for torch.addr (#50667)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50667

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25957584

Pulled By: anjali411

fbshipit-source-id: a6b2880971027389721f4e051009b7d9694f979b
2021-01-20 09:43:13 -08:00
chengjun
4a8ef4525e Add new backend type for Intel heterogeneous computation platform. (#49786)
Summary:
Add a new device type 'XPU' ('xpu' for lower case) to PyTorch. Changes are needed for code related to the device model and kernel dispatch, e.g. DeviceType, Backend, and DispatchKey.

https://github.com/pytorch/pytorch/issues/48246
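
A small sketch of what the new device type looks like from Python (actually running kernels on it requires an XPU-enabled build, which is outside this change):
```python
import torch

d = torch.device('xpu', 0)   # 'xpu' now parses as a device type
print(d)       # device(type='xpu', index=0)
print(d.type)  # 'xpu'
```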

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49786

Reviewed By: mrshenli

Differential Revision: D25893962

Pulled By: ezyang

fbshipit-source-id: 7ff0a316ee34cf0ed6fc7ead08ecdeb7df4b0052
2021-01-20 08:15:18 -08:00
Shen Li
a3b8cbcdfc Let TensorPipe detect peer access (#50676)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50676

Test Plan: Imported from OSS

Reviewed By: beauby

Differential Revision: D25941962

Pulled By: mrshenli

fbshipit-source-id: 7d4fd3b4fbd5ae5a0c50ad65605ced9db10ede4a
2021-01-20 08:04:51 -08:00
kiyosora
4803eaf502 Implement NumPy-like function torch.fmax() & torch.fmin() (#49312)
Summary:
- Implements the NumPy-like functions `torch.fmax()` and `torch.fmin()` proposed in https://github.com/pytorch/pytorch/issues/48440
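
A minimal sketch of the NumPy-like NaN handling:
```python
import torch

# fmax/fmin prefer the non-NaN operand, unlike torch.maximum/minimum,
# which propagate NaN.
a = torch.tensor([1.0, float('nan'), 3.0])
b = torch.tensor([2.0, 2.0, float('nan')])
print(torch.fmax(a, b))  # tensor([2., 2., 3.])
print(torch.fmin(a, b))  # tensor([1., 2., 3.])
```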

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49312

Reviewed By: izdeby

Differential Revision: D25887246

Pulled By: heitorschueroff

fbshipit-source-id: d762eeff8b328bfcbe7d48b7ee9d2da72c249691
2021-01-20 06:45:25 -08:00
Himangshu
4ff1823fac Add Sparse support for torch.sqrt (#50088)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50088

Reviewed By: mrshenli

Differential Revision: D25894003

Pulled By: ezyang

fbshipit-source-id: 93688c33b2f9a355c331d6edb3e402935223f75b
2021-01-19 20:19:07 -08:00
Xinyu Li
7526e38cd3 Revert "Stable sort for CPU (#50052)" (#50752)
Summary:
This reverts commit c99f356051.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50752

Reviewed By: zou3519

Differential Revision: D25958146

Pulled By: glaringlee

fbshipit-source-id: f4068d038f9bd337bac8b673eaeb46a4646f6c77
2021-01-19 18:21:25 -08:00
anjali411
5d64658ce8 Add complex support for torch.{acosh, asinh, atanh} (#50387)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50387

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D25947496

Pulled By: anjali411

fbshipit-source-id: c70886a73378501421ff94cdc0dc737f1738bf6f
2021-01-19 08:18:22 -08:00
Shen Li
1000403f66 Adding missing decorator for test_device_map_gpu_mixed_self_4 (#50732)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50732

Test Plan: Imported from OSS

Reviewed By: beauby

Differential Revision: D25954041

Pulled By: mrshenli

fbshipit-source-id: b2eeb1a77753cb8696613bfdc7bbc5001ae4c972
2021-01-19 07:53:11 -08:00
Ivan Yashchuk
f9a5ba7398 Added linalg.slogdet (#49194)
Summary:
This PR adds `torch.linalg.slogdet`.

Changes compared to the original torch.slogdet:

- Complex input now works as in NumPy
- Added out= variant (allocates temporary and makes a copy for now)
- Updated `slogdet_backward` to work with complex input

Ref. https://github.com/pytorch/pytorch/issues/42666
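
A minimal sketch (not from the PR) of the NumPy-style complex behavior:
```python
import torch

# For complex input the sign has unit modulus, and sign * exp(logabsdet)
# recovers the determinant.
a = torch.randn(3, 3, dtype=torch.complex128)
sign, logabsdet = torch.linalg.slogdet(a)
assert torch.allclose(sign * torch.exp(logabsdet), torch.linalg.det(a))
```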

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49194

Reviewed By: VitalyFedyunin

Differential Revision: D25916959

Pulled By: mruberry

fbshipit-source-id: cf9be8c5c044870200dcce38be48cd0d10e61a48
2021-01-19 07:28:12 -08:00
kshitij12345
316f0b89c3 [testing] Port torch.{repeat, tile} tests to use OpInfo machinery (#50199)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50013

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50199

Reviewed By: ngimel

Differential Revision: D25949791

Pulled By: mruberry

fbshipit-source-id: 10eaf2d749fac8c08847f50461e72ad1c75c61e3
2021-01-19 06:02:27 -08:00
Shen Li
94d9a7e8ac Enable TensorPipe CUDA sending to self (#50674)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50674

Test Plan: Imported from OSS

Reviewed By: beauby

Differential Revision: D25941964

Pulled By: mrshenli

fbshipit-source-id: b53454efdce01f7c06f67dfb890d3c3bdc2c648f
2021-01-18 19:35:40 -08:00
anjali411
227acc2e51 Complex autograd support for torch.{baddbmm, addbmm, addmm, addmv} (#50632)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50632

I'll port the following method tests in follow-up PRs:
`'baddbmm', 'addbmm', 'addmv', 'addr'`
After the tests are ported to OpInfo-based tests, it will also be much easier to add tests with complex alpha and beta values.
Edit: it seems hard to port the broadcasting variant tests, because one ends up skipping `test_inplace_grad` and `test_variant_consistency_eager` even when the inputs do not require broadcasting.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D25947471

Pulled By: anjali411

fbshipit-source-id: 9faa7f1fd55a1269bad282adac2b39d19bfa4591
2021-01-18 14:05:02 -08:00
vfdev-5
eae1b40400 Introduced operator variant to OpInfo (#50370)
Summary:
Introduced operator variant to OpInfo

Context: split out of https://github.com/pytorch/pytorch/issues/49158

cc mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50370

Reviewed By: mrshenli

Differential Revision: D25897821

Pulled By: mruberry

fbshipit-source-id: 4387ea10607dbd7209842b685f1794bcb31f434e
2021-01-18 00:05:01 -08:00
nikitaved
c99f356051 Stable sort for CPU (#50052)
Summary:
Fixes [https://github.com/pytorch/pytorch/issues/38681](https://github.com/pytorch/pytorch/issues/38681) for the CPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50052

Reviewed By: mrshenli

Differential Revision: D25900823

Pulled By: glaringlee

fbshipit-source-id: 1a3fa336037d0aa2344d79f46dcacfd478a353d1
2021-01-15 19:34:27 -08:00
Nikolay Korovaiko
8e60bf9034 add RequiresGradCheck (#50392)
Summary:
This change improves perf by 3-4% on fastrnns.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50392

Reviewed By: izdeby

Differential Revision: D25891392

Pulled By: Krovatkin

fbshipit-source-id: 44d9b6907d3975742c9d77102fe6a85aab2c08c0
2021-01-15 16:50:42 -08:00
Jeffrey Wan
6e3e57095c Add complex support for torch.nn.L1Loss (#49912)
Summary:
Building on top of the work of anjali411 (https://github.com/pytorch/pytorch/issues/46640)

Things added in this PR:
1. Modify backward and double-backward formulas
2. Add complex support for `new module tests` and criterion tests (and add complex tests for L1)
3. Modify some existing tests to support complex
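
A minimal sketch (not from the PR) of the supported usage: for complex inputs the loss is the mean modulus of the difference, so the result is real and backpropagates.
```python
import torch

loss = torch.nn.L1Loss()
x = torch.randn(4, dtype=torch.cfloat, requires_grad=True)
t = torch.randn(4, dtype=torch.cfloat)
out = loss(x, t)  # real-valued scalar: mean(|x - t|)
out.backward()
```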

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49912

Reviewed By: zhangguanheng66

Differential Revision: D25853036

Pulled By: soulitzer

fbshipit-source-id: df619f1b71c450ab2818eb17804e0c55990aa8ad
2021-01-15 15:53:15 -08:00