Commit Graph

114 Commits

Nikolay Korovaiko
d304bb070a Gelu Backward, Contribution from Kevin Stephano (#58249)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58249

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28425381

Pulled By: Krovatkin

fbshipit-source-id: 21b7ac972220b6c35b285e3b66f05eb392002408
2021-05-13 16:36:44 -07:00
Mike Ruberry
f88297c66b Revert D28387767: Add forward AD test for op info
Test Plan: revert-hammer

Differential Revision:
D28387767 (26b6d044cd)

Original commit changeset: 369d76921c84

fbshipit-source-id: 91ac961339bdd5e1e2530d2655364f9fe46cdafb
2021-05-12 20:41:25 -07:00
albanD
26b6d044cd Add forward AD test for op info (#57701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57701

The new OpInfo flag has the following semantics:
- If it says that it supports forward AD, we run gradcheck with forward AD to ensure it is correct
- If it says that it does not support it, we check that the corresponding error is raised

All the added tests take 3s to run for CPU builds and 1min for GPU builds, which should be negligible compared to the test_ops runtime for each of these architectures.
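
A rough sketch of these semantics (the flag name `supports_forward_ad` is assumed from this description, not taken from the diff; `check_forward_ad` is a real `gradcheck` keyword in recent PyTorch):

```python
import torch
from torch.autograd import gradcheck

def check_forward_ad_semantics(op_info, sample):
    if op_info.supports_forward_ad:
        # The op claims support: forward-mode gradcheck must pass.
        assert gradcheck(op_info.op, (sample,), check_forward_ad=True)
    else:
        # The op claims no support: the documented error must be raised.
        try:
            gradcheck(op_info.op, (sample,), check_forward_ad=True)
        except RuntimeError:
            pass  # expected: forward AD not implemented
        else:
            raise AssertionError("op unexpectedly supports forward AD")
```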

Test Plan: Imported from OSS

Reviewed By: agolynski

Differential Revision: D28387767

Pulled By: albanD

fbshipit-source-id: 369d76921c8460aa4548f9b5159b7297994672f5
2021-05-12 18:49:18 -07:00
Elias Ellison
7627dd568a hardswish reland (#57652)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57652

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D28226724

Pulled By: eellison

fbshipit-source-id: 585a91ffab7a855b5600e79130a37be25ef9b354
2021-05-05 17:21:43 -07:00
Shen Li
887d0e5657 Revert D28197820: [JIT][NNC] add hardswish symbolic gradient and NNC lowering
Test Plan: revert-hammer

Differential Revision:
D28197820 (0142fd0b57)

Original commit changeset: 05305d85c5bb

fbshipit-source-id: 2e1d9699515982ba2a9be06e83a2ce043ec857ee
2021-05-05 07:53:30 -07:00
eellison
0142fd0b57 [JIT][NNC] add hardswish symbolic gradient and NNC lowering (#57383)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57383

Notes: I picked up an activation from https://github.com/pytorch/pytorch/issues/56969. You can look at the [activations.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/Activation.cpp#L429) file which has both forward and backward kernel code to help you write the NNC lowering and the symbolic gradient.

I added a test in test_jit_fuser_te for the fusion, and I added an OpInfo and asserted that we expect to see autodiffable nodes to test the symbolic gradient.
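
For reference, hardswish and the derivative the symbolic gradient has to match (standard definitions, sketched here in Python rather than NNC IR; `hardswish_ref` and `hardswish_backward_ref` are illustrative names):

```python
import torch

def hardswish_ref(x):
    # hardswish(x) = x * relu6(x + 3) / 6
    return x * torch.clamp(x + 3, 0, 6) / 6

def hardswish_backward_ref(grad_out, x):
    # derivative: 0 for x < -3, 1 for x > 3, x/3 + 1/2 in between
    slope = torch.where(x < -3, torch.zeros_like(x),
                        torch.where(x > 3, torch.ones_like(x), x / 3 + 0.5))
    return grad_out * slope
```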

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D28197820

Pulled By: eellison

fbshipit-source-id: 05305d85c5bb0847c8f911b95ba47b137dca7e90
2021-05-04 23:39:59 -07:00
Rong Rong (AI Infra)
095c328d9f Add supported backward_dtype to OpInfo (#56156)
Summary:
Related to https://github.com/pytorch/pytorch/issues/55601.

- [x] removed complex autograd checker in `test_supported_backward`
- [x] created `backward_dtype[If<Device>]` that inherits from normal `dtype[If<Device>]` by default
- [x] removed all skips for the backward test; instead added backward dtypes
- [x] changed complex autograd to a function call: `support_complex_autograd(device_type)` that depends on `backward_dtype*`, since they essentially mean the same thing for complex types (see the sketch below)

TODO for next PR
- add `test_unsupported_backward` to verify they are actually unsupported.
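
A minimal sketch of the backward-dtype defaulting described in the checklist above (class and attribute names are assumed from this summary, not the actual diff):

```python
import torch

class OpInfoSketch:
    def __init__(self, name, dtypes, dtypesIfCUDA=None,
                 backward_dtypes=None, backward_dtypesIfCUDA=None):
        self.name = name
        self.dtypes = set(dtypes)
        self.dtypesIfCUDA = set(dtypesIfCUDA if dtypesIfCUDA is not None
                                else dtypes)
        # backward dtypes inherit from the forward dtypes by default
        self.backward_dtypes = set(backward_dtypes
                                   if backward_dtypes is not None
                                   else self.dtypes)
        self.backward_dtypesIfCUDA = set(backward_dtypesIfCUDA
                                         if backward_dtypesIfCUDA is not None
                                         else self.dtypesIfCUDA)

    def supports_complex_autograd(self, device_type):
        # derived from the backward dtypes instead of a separate flag
        dtypes = (self.backward_dtypesIfCUDA if device_type == 'cuda'
                  else self.backward_dtypes)
        return any(d.is_complex for d in dtypes)
```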

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56156

Reviewed By: mruberry

Differential Revision: D27926717

Pulled By: walterddr

fbshipit-source-id: 9a4af8612278ca44a97b6f1510b6b175852c893b
2021-04-30 08:24:58 -07:00
Philip Meier
db32b69591 quote str kwarg values in test_ops.py::TestCommon::test_jit_alias_remapping (#57120)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/57119.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57120

Reviewed By: gchanan

Differential Revision: D28086601

Pulled By: mruberry

fbshipit-source-id: 566a53c2365f2d128da49ac58463e37b36455831
2021-04-30 04:29:12 -07:00
albanD
10fd7d8be6 Add option to OpInfo to skip gradgrad check and empty cdist OpInfo (#56603)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56603

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D27939204

Pulled By: albanD

fbshipit-source-id: c7c80551ef3c34c822832891a99104440893ea4c
2021-04-23 14:06:33 -07:00
Jeffrey Wan
d01302431c Enable fast gradcheck for real inputs and outputs (#55237)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55237

In this PR, we re-enable fast gradcheck and resolve miscellaneous issues that arise.
Before landing this PR, land #55182 so that slow tests are still being run periodically.

Bold indicates the issue is handled in this PR; otherwise it was handled in a previous PR.

**Non-determinism issues**:
- ops that do not have deterministic implementation (as documented https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html#torch.use_deterministic_algorithms)
  - test_pad_cuda (replication_pad2d) (test_nn)
  - interpolate (test_nn)
  - cummin, cummax (scatter_add_cuda_kernel) (test_ops)
  - test_fn_gradgrad_prod_cpu_float64 (test_ops)

Randomness:
  - RRelu (new module tests) - we fix this by using our own generator so as to avoid messing with the user's RNG state (handled in #54480)

Numerical precision issues:
- jacobian mismatch: test_gelu (test_nn, float32, not able to replicate locally) - we fixed this by disabling it for float32 (handled in a previous PR)
- cholesky_solve (test_linalg): #56235 handled in previous PR
- **cumprod** (test_ops) - #56275 disabled fast gradcheck

Not yet replicated:
 - test_relaxed_one_hot_categorical_2d (test_distributions)

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27920906

fbshipit-source-id: 894dd7bf20b74f1a91a5bc24fe56794b4ee24656
2021-04-22 19:46:37 -07:00
kshitij12345
6350fcef83 [testing] add broadcasts_input and verifies the behaviour for inplace_variant. (#55771)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55595

* Add `broadcasts_input` attribute to SampleInput
* Update test_variant_consistency_eager to verify that a sample with `broadcasts_input==True` raises an error when used with the inplace variant.
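
Why such samples must raise with inplace variants (stock PyTorch behavior, shown for illustration):

```python
import torch

t = torch.zeros(1, 3)      # `self` is broadcast...
other = torch.ones(2, 3)   # ...against a larger operand
torch.add(t, other)        # fine: out-of-place result has shape (2, 3)
try:
    t.add_(other)          # inplace cannot grow `self` to (2, 3)
except RuntimeError as e:
    print("expected failure:", e)
```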

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55771

Reviewed By: jbschlosser, ngimel

Differential Revision: D27760530

Pulled By: mruberry

fbshipit-source-id: feb0658730d4cff483848a5ade9512837a65c24c
2021-04-15 07:39:50 -07:00
Mike Ruberry
399b66c813 Ports logdet from method_tests() to op_db (#55743)
Summary:
Per title. Also updates some tensor construction helpers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55743

Reviewed By: ngimel

Differential Revision: D27702060

Pulled By: mruberry

fbshipit-source-id: f64b7bee855733ad1f4fd182819ceec5831d9878
2021-04-11 20:39:16 -07:00
Natalia Gimelshein
0910363e8f adds data_ptr checks to in-place OpInfo variant tests and out OpInfo tests (#55527)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55088.
Unfortunately, this test wouldn't have caught the index_add_ breakage (it would appear only in a particular type-promotion situation).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55527

Reviewed By: mruberry

Differential Revision: D27671138

Pulled By: ngimel

fbshipit-source-id: b52411f5a6d81098b706dfda4d0c9a16716414d7
2021-04-08 23:09:28 -07:00
kshitij12345
17e5ba44f1 [testing] Support input samples where self is broadcasted. (#53014)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/50747

Reference https://github.com/pytorch/pytorch/issues/50006

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53014

Reviewed By: SplitInfinity, ngimel

Differential Revision: D27615320

Pulled By: mruberry

fbshipit-source-id: a48bccf06aef1ee8f66a89e6b2bbe79736700b2b
2021-04-07 08:20:27 -07:00
lezcano
158cdece65 Correct many OpInfos "test_out" skips. (#55141)
Summary:
Partially solves https://github.com/pytorch/pytorch/issues/54061

This PR solves many of the "easy to solve" problems with `out=` not notifying when it resizes a tensor. It also reports the cause of some failures of the `out=` operation in the tests. Hopefully this way we will be able to catch some errors that do not come simply from not using `resize_output`.
cc mruberry anjali411

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55141

Reviewed By: anjali411

Differential Revision: D27568755

Pulled By: mruberry

fbshipit-source-id: a32546555fef8d241de2ef635a99e5615461ed09
2021-04-06 08:41:25 -07:00
Ivan Yashchuk
2798f38c86 Added checks for dtype and device of OpInfo's sample_inputs (#54949)
Summary:
Currently, it's not tested whether `op.sample_inputs` actually used the provided dtype and device arguments. This PR fixes that by introducing asserts in `test_supported_dtypes`.
This will help detect incorrectly generated inputs in the future.
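
The added assertions presumably look something like this (a sketch, not the exact diff):

```python
import torch

def check_sample_inputs(op, device, dtype):
    for sample in op.sample_inputs(device, dtype):
        # inputs must actually honor the requested device and dtype
        assert sample.input.device.type == torch.device(device).type
        assert sample.input.dtype == dtype
```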

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54949

Reviewed By: H-Huang

Differential Revision: D27435952

Pulled By: mruberry

fbshipit-source-id: 8465c459b9b0c007411a9a74340bc2755519624a
2021-04-01 09:34:51 -07:00
Heitor Schueroff
8b02d1207b Fixed OpInfo jit tests failing for TensorList inputs (#54954)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53906

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54954

Reviewed By: glaringlee

Differential Revision: D27474863

Pulled By: heitorschueroff

fbshipit-source-id: cf8c1cac6fd1cceacd6be73a2eb49d28a5cfc20a
2021-04-01 07:09:07 -07:00
Heitor Schueroff
8b8c4096ee Added OpInfo gradcheck_wrapper to replace output_func (#54914)
Summary:
Added a field to `OpInfo` to provide a wrapper function for gradcheck. This is useful for functions that need to perform some extra input/output processing to work with gradcheck.

fixes https://github.com/pytorch/pytorch/issues/50837
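
A sketch of what such a wrapper enables (assumed form; `torch.sort` is used only as an example of an op whose tuple output benefits from post-processing before gradcheck):

```python
import torch
from torch.autograd import gradcheck

def gradcheck_wrapper_first_output(op, *args, **kwargs):
    # keep only the differentiable part of a tuple return
    return op(*args, **kwargs)[0]

x = torch.randn(5, dtype=torch.double, requires_grad=True)
assert gradcheck(lambda t: gradcheck_wrapper_first_output(torch.sort, t), (x,))
```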

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54914

Reviewed By: H-Huang

Differential Revision: D27435234

Pulled By: heitorschueroff

fbshipit-source-id: fa3e9b61f3d3df221243fd142ddb8b7861dbf669
2021-03-31 20:23:49 -07:00
Heitor Schueroff
6d87b3667f Added support for TensorList inputs in OpInfo (#54922)
Summary:
Stack:
* https://github.com/pytorch/pytorch/issues/54954 Fixed OpInfo jit tests failing for TensorList inputs
* __#54922 Added support for TensorList inputs in OpInfo__

Updated OpInfo to accept either a `Tensor` or `TensorList` as `sample.input` and added workarounds to make this work with gradcheck.

Note: JIT testing support for TensorList inputs will be added in a follow up PR.

Fixes https://github.com/pytorch/pytorch/issues/51996
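
One workaround of the kind described (an assumption about the approach, illustrated with `torch.cat`): gradcheck wants a flat tuple of tensors, so a TensorList sample can be splatted into positional arguments and re-packed inside the function under test.

```python
import torch
from torch.autograd import gradcheck

tensors = [torch.randn(2, 3, dtype=torch.double, requires_grad=True)
           for _ in range(3)]

def repacked_cat(*flat):
    return torch.cat(list(flat), dim=0)  # rebuild the TensorList

assert gradcheck(repacked_cat, tuple(tensors))
```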

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54922

Reviewed By: H-Huang

Differential Revision: D27448952

Pulled By: heitorschueroff

fbshipit-source-id: 3f24a56f6180eb2d044dcfc89ba59fce8acfe278
2021-03-31 04:42:10 -07:00
Mike Ruberry
94efb48e16 Adds the cfloat dtype to the eager and jit variant consistency tests (#54854)
Summary:
Per title. One skip for addmm was needed. Either it or the jit test doesn't seem to handle a complex literal properly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54854

Reviewed By: anjali411

Differential Revision: D27395651

Pulled By: mruberry

fbshipit-source-id: 0bfadf0a8500f26d3a89f56f104fb44561f594d9
2021-03-29 08:15:27 -07:00
kshitij12345
b7c5d57563 [testing] support op with args/kwargs in test_unary_ufunc (#52194)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51242

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52194

Reviewed By: ngimel

Differential Revision: D27385139

Pulled By: mruberry

fbshipit-source-id: 63118dee33a138ef13810465d2d2d9fa194dfb28
2021-03-28 18:10:20 -07:00
76181208+imaginary-person@users.noreply.github.com
5cd8a77e01 Skip inplace autograd test if inplace variant doesn't exist (#54460)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54413

1. Skip inplace autograd test for an op if its inplace variant does not exist.
2. For ops that don't have an inplace variant, remove redundant `supports_inplace_autograd=False` assignments in their `OpInfo`s.
3. Ops having inplace variants that do not support autograd should not have `supports_inplace_autograd=False` entries removed from their `OpInfo`s.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54460

Reviewed By: ngimel

Differential Revision: D27255938

Pulled By: mruberry

fbshipit-source-id: f15334b09e68995e9f26adc2ff3e59c292689ee8
2021-03-23 21:10:37 -07:00
Heitor Schueroff
591084abb8 Deprecate torch.matrix_power in favor of torch.linalg.matrix_power (#53538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53538

* #52608 Added torch.linalg.matrix_power

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D27261531

Pulled By: heitorschueroff

fbshipit-source-id: 5a944b390f3cc6896c2aa92ba467319ddc9309e4
2021-03-23 15:11:24 -07:00
Mike Ruberry
7b939d934e Simplifes OpInfo test matrix to reduce test time (#53255)
Summary:
This PR:

- Updates the structure of the SampleInput class to require the "input" attribute be a tensor
- Limits unary ufuncs to test only the uint8, long, float16, bfloat16, float and cfloat dtypes by default
- Limits variant testing to the float dtype
- Removes test_variant_consistency from test_unary_ufuncs.py since it's now redundant with variant testing in test_ops.py
- Adds backwards supported testing to clarify failures that were coming from variant testing

This should decrease end-to-end test time.
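
A sketch of the resulting SampleInput shape (field names from the summary above; the class body is assumed):

```python
import torch

class SampleInput:
    def __init__(self, input, args=(), kwargs=None):
        # "input" is now required to be a tensor
        assert isinstance(input, torch.Tensor)
        self.input = input
        self.args = args
        self.kwargs = kwargs if kwargs is not None else {}

s = SampleInput(torch.randn(3, 3), args=(torch.randn(3, 3),))
```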

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53255

Reviewed By: ngimel

Differential Revision: D27043643

Pulled By: mruberry

fbshipit-source-id: 91d6b483ad6e2cd1b9ade939d42082980ae14217
2021-03-22 03:48:27 -07:00
Ivan Yashchuk
564456ac44 Added autograd support for torch.orgqr (#52637)
Summary:
This PR adds autograd support for `torch.orgqr`.

Since `torch.orgqr` is one of the few functions that expose LAPACK's naming and all other linear algebra routines were renamed a long time ago, I also added a new function with a new name, and `torch.orgqr` is now an alias for it.

The new proposed name is `householder_product`. For a matrix `input` and a vector `tau`, LAPACK's orgqr operation takes the columns of `input` (called Householder vectors or elementary reflectors) and the scalars of `tau`, which together represent Householder matrices, and computes the product of these matrices. See https://www.netlib.org/lapack/lug/node128.html.
Other linear algebra libraries that I'm aware of do not expose this LAPACK function, so there is some freedom in naming it. It is usually used internally only for QR decomposition, but it can be useful for deep learning tasks now that it supports differentiation.
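
In current PyTorch the renamed op lives at `torch.linalg.householder_product`; quick usage, reconstructing Q from `geqrf`'s packed output:

```python
import torch

a = torch.randn(5, 3, dtype=torch.double)
packed, tau = torch.geqrf(a)  # packed Householder vectors + scalars
q = torch.linalg.householder_product(packed, tau)
# q has orthonormal columns: q.T @ q is (numerically) the identity
assert torch.allclose(q.T @ q, torch.eye(3, dtype=torch.double))
```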

Resolves https://github.com/pytorch/pytorch/issues/50104

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52637

Reviewed By: agolynski

Differential Revision: D27114246

Pulled By: mruberry

fbshipit-source-id: 9ab51efe52aec7c137aa018c7bd486297e4111ce
2021-03-18 05:42:18 -07:00
kshitij12345
df7c0a06d6 [testing] assert no duplicate in method_tests for an OpInfo entry (#53492)
Summary:
Assert no duplicate in method_tests for an OpInfo entry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53492

Reviewed By: izdeby

Differential Revision: D26882441

Pulled By: mruberry

fbshipit-source-id: f0631ea2b46b74285c76365c679bd45abc917d63
2021-03-14 21:58:39 -07:00
Mike Ruberry
2d36b30a8c Expands OpInfo out= testing (#53259)
Summary:
Addresses several of the challenges described in https://github.com/pytorch/pytorch/issues/49468.

This PR builds on https://github.com/pytorch/pytorch/pull/50741 and https://github.com/pytorch/pytorch/issues/53105 to extend OpInfo out= testing. It covers the following cases for ops that produce a single tensor:

- out= values don't affect computation
- out= noncontiguous produces the correct output and preserves strides
- out= with the wrong shape throws a warning
- out= with an empty tensor throws no warning
- out= with the wrong device throws an error
- out= with a dtype the computation's result can't be "safely" cast to throws an error

It works with operations that produce a single tensor and operations that produce an iterable of tensors (the latter is tested with operations like torch.svd).

In addition to the new out= test, the OpInfos have been updated. "supports_tensor_out" is replaced with the more general and straightforward "supports_out" metadata, and many operations which previously had to skip out= testing with an explicit SkipInfo no longer need to. A couple redundant tests in test_unary_ufuncs.py have been removed, too.

One other perk of these tests is that once all operations have OpInfos this will allow us to validate that we've universally deprecated incorrectly sized tensors passed to out=, and give us the option to actually disable the behavior.
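
Two of the behaviors in the list above, illustrated with stock PyTorch semantics (not code from the PR):

```python
import warnings
import torch

a = torch.randn(2, 2)

# wrong shape: the out tensor is resized and a warning is emitted
wrong_shape = torch.empty(5)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    torch.add(a, a, out=wrong_shape)
    assert any("resize" in str(w.message) for w in caught)

# unsafe dtype: float results cannot be cast down to an integer out
wrong_dtype = torch.empty(2, 2, dtype=torch.long)
try:
    torch.add(a, a, out=wrong_dtype)
except RuntimeError as e:
    print("expected:", e)
```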

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53259

Reviewed By: mrshenli

Differential Revision: D26894723

Pulled By: mruberry

fbshipit-source-id: 2b536e9baf126f36386a35f2f806dd88c58690b3
2021-03-09 08:19:26 -08:00
Mike Ruberry
1e992810b5 Revert D26811466: [pytorch][PR] [reland] Add OpInfo for bitwise_not and make ROCM and CUDA OpInfo tests consistent
Test Plan: revert-hammer

Differential Revision:
D26811466 (a5ada2127d)

Original commit changeset: 8434a7515d83

fbshipit-source-id: 9c2c760e18154a88cf7531e45843a802e3f3d19c
2021-03-08 15:47:47 -08:00
kshitij12345
a5ada2127d [reland] Add OpInfo for bitwise_not and make ROCM and CUDA OpInfo tests consistent (#53181)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

This PR also enables the OpInfo tests on ROCM to check the same dtypes as CUDA.

Note: Reland https://github.com/pytorch/pytorch/issues/51944

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53181

Reviewed By: zhangguanheng66

Differential Revision: D26811466

Pulled By: mruberry

fbshipit-source-id: 8434a7515d83ed859db1b2f916fad81a9deaeb9b
2021-03-08 03:39:01 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Alban Desmaison
59b2b8b091 Revert D26727660: [pytorch][PR] Add OpInfo for bitwise_not and make ROCM and CUDA OpInfo tests consistent
Test Plan: revert-hammer

Differential Revision:
D26727660 (816646bd6f)

Original commit changeset: 3aea236cf000

fbshipit-source-id: 91c6ec0c55c0295bb209f450ae3c96bee0a37356
2021-03-03 06:08:48 -08:00
kshitij12345
816646bd6f Add OpInfo for bitwise_not and make ROCM and CUDA OpInfo tests consistent (#51944)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

This PR also enables the OpInfo tests on ROCM to check the same dtypes as CUDA.

A few tests had to be skipped (due to failures).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51944

Reviewed By: H-Huang

Differential Revision: D26727660

Pulled By: mruberry

fbshipit-source-id: 3aea236cf0002f46c2737afbda2ed3efccfe14f5
2021-03-02 22:56:40 -08:00
kshitij12345
aa603cb2ce add OpInfo entry for signbit (#52198)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52198

Reviewed By: H-Huang

Differential Revision: D26727598

Pulled By: mruberry

fbshipit-source-id: 282350febbd0b1af73320f0e912bf553d386d4b0
2021-03-02 10:38:34 -08:00
kshitij12345
49b59e3472 Add OpInfo entries for i0 and logical_not (#51956)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51956

Reviewed By: albanD

Differential Revision: D26404440

Pulled By: mruberry

fbshipit-source-id: dd73e63155dd4a200afb38a5e566eb2132e69fde
2021-02-23 10:12:05 -08:00
vfdev
8b0cb5ede3 OpInfo: Added clamp and trunc tests with aliases (#51167)
Summary:
Description:
- Added clamp, trunc tests with aliases
- Added tests for aliases for asin(h), acos(h), etc
- fixed 'fix' alias implementation
- fixed annotations in test_jit_alias_remapping
- updated native_functions.yaml aliases guidelines

Blocked by https://github.com/pytorch/pytorch/issues/50368

cc mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51167

Reviewed By: gchanan

Differential Revision: D26245753

Pulled By: mruberry

fbshipit-source-id: e17b657f0515139735a8a677b1ae284904f98aef
2021-02-10 05:36:18 -08:00
Mike Ruberry
594a66d778 Warn about floor_divide performing incorrect rounding (#50281) (#50281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50281

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51745

Test Plan: Imported from OSS

Reviewed By: ngimel

Pulled By: mruberry

Differential Revision: D26257855

fbshipit-source-id: e5d497cf07b0c746838ed081c5d0e82fb4cb701b
2021-02-10 03:13:34 -08:00
vfdev
b106250047 Introduced AliasInfo for OpInfo (#50368)
Summary:
Introduced AliasInfo for OpInfo.

Context: Split of https://github.com/pytorch/pytorch/issues/49158

cc mruberry, please let me know if you'd like to see more code here to cover

> [ ] fold test_op_aliases.py into OpInfo-based testing in test_ops.py

from https://github.com/pytorch/pytorch/issues/50006

and/or add `UnaryUfuncInfo('abs')` as discussed https://github.com/pytorch/pytorch/pull/49158/files#r548774221

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50368

Reviewed By: ngimel

Differential Revision: D26177261

Pulled By: mruberry

fbshipit-source-id: 2e3884a387e8d5365fe05945375f0a9d1b5f5d82
2021-02-02 00:10:09 -08:00
Lillian Johnson
3b6f30824c OpInfo JIT op.output_func handling support (#50775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50775

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25964541

Pulled By: Lilyjjo

fbshipit-source-id: 8cf1ee9191d526cc46ae283f38c2d64bd60afdb2
2021-01-27 15:04:23 -08:00
Peter Bell
9b6d463704 Move std and var tests to OpInfos (#50901)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50901

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D26083289

Pulled By: mruberry

fbshipit-source-id: 7e14ff37bba46dd456e0bc0aa9c4e0a632d0734c
2021-01-27 10:50:51 -08:00
Arindam Roy
95ae9a20e4 Enable ROCM Skipped tests in test_ops.py (#50500)
Summary:
Removed skipCUDAIfRocm to re-enable tests for the
ROCm platform.

Initially, only 4799 cases were being run.
Out of those, 882 cases were being skipped.
After removing skipCUDAIfRocm from two places
in test_ops.py, more than 8000 cases are now
being executed, out of which only 282 cases
are being skipped; these are FFT-related tests.

Signed-off-by: Arindam Roy <rarindam@gmail.com>

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50500

Reviewed By: albanD

Differential Revision: D25920303

Pulled By: mrshenli

fbshipit-source-id: b2d17b7e2d1de4f9fdd6f1660fb4cad5841edaa0
2021-01-26 08:09:18 -08:00
Peter Bell
0436ea125b OpInfo: Remove promotes_integers_to_float and infer it instead (#50279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50279

This allows different sample inputs to have different behavior for the same
operator. For example, `div(..., rounding_mode='true')` promotes but other
rounding modes don't. The current boolean flag is too restrictive to allow this.
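
A sketch of inferring promotion per sample instead of flagging it (assumed approach):

```python
import torch

def promotes_int_to_float(op, int_sample):
    # run the op on an integer sample and inspect the result's dtype
    return op(int_sample).is_floating_point()

print(promotes_int_to_float(torch.reciprocal, torch.tensor([1, 2])))  # True
print(promotes_int_to_float(torch.neg, torch.tensor([1, 2])))         # False
```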

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25950011

Pulled By: mruberry

fbshipit-source-id: 7e82b82bedc626b2b6970d92d5b25676183ec384
2021-01-22 09:32:37 -08:00
Richard Zou
16691516a5 Add batched grad testing to OpInfo (#50818)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50818

This PR does two things:
1. Add batched grad testing to OpInfo
2. Improve the error message from `gradcheck` if batched gradient
computation fails to include suggestions for workarounds.

To add batched grad testing to OpInfo, this PR:
- adds new `check_batched_grad=True` and `check_batched_gradgrad=True`
attributes to OpInfo. These are True by default because we expect most
operators to support batched gradient computation.
- If `check_batched_grad=True`, then `test_fn_grad` invokes gradcheck
with `check_batched_grad=True`.
- If `check_batched_gradgrad=True`, then `test_fn_gradgradgrad` invokes
gradgradcheck with `check_batched_grad=True`.

The improved gradcheck error message looks like the following when an
exception is thrown while computing batched gradients:
https://gist.github.com/zou3519/5a0f46f908ba036259ca5e3752fd642f
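
`check_batched_grad` is a real `gradcheck` keyword; basic usage:

```python
import torch
from torch.autograd import gradcheck

x = torch.randn(3, dtype=torch.double, requires_grad=True)
# gradients computed under vmap must match per-sample gradients
assert gradcheck(torch.sin, (x,), check_batched_grad=True)
```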

Future
- Sometime in the not-near future, we will separate out "batched grad
testing" from "gradcheck" for the purposes of OpInfo to make the
testing more granular and also so that we can test that the vmap
fallback doesn't get invoked (currently batched gradient testing only
tests that the output values are correct).

Test Plan: - run tests `pytest test/test_ops.py -v -k "Gradients"`

Reviewed By: ejguan

Differential Revision: D25997703

Pulled By: zou3519

fbshipit-source-id: 6d2d444d6348ae6cdc24c32c6c0622bd67b9eb7b
2021-01-21 15:13:06 -08:00
Ivan Yashchuk
f9a5ba7398 Added linalg.slogdet (#49194)
Summary:
This PR adds `torch.linalg.slogdet`.

Changes compared to the original torch.slogdet:

- Complex input now works as in NumPy
- Added out= variant (allocates temporary and makes a copy for now)
- Updated `slogdet_backward` to work with complex input

Ref. https://github.com/pytorch/pytorch/issues/42666
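
Quick usage of the new function (`torch.linalg.slogdet` in current PyTorch), including the complex case:

```python
import torch

a = torch.randn(3, 3, dtype=torch.cdouble)
sign, logabsdet = torch.linalg.slogdet(a)
# for complex input, sign is complex with |sign| == 1; logabsdet is real
det = sign * torch.exp(logabsdet)
assert torch.allclose(det, torch.linalg.det(a))
```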

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49194

Reviewed By: VitalyFedyunin

Differential Revision: D25916959

Pulled By: mruberry

fbshipit-source-id: cf9be8c5c044870200dcce38be48cd0d10e61a48
2021-01-19 07:28:12 -08:00
Peter Bell
53473985b8 test_ops: Only run complex gradcheck when complex is supported (#49018)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49018

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25868683

Pulled By: mruberry

fbshipit-source-id: d8c4d89c11939fc7d81db8190ac6b9b551e4cbf5
2021-01-12 04:48:30 -08:00
Xiong Wei
3779bdec56 Implementing NumPy-like function torch.broadcast_to (#48997)
Summary:
Related https://github.com/pytorch/pytorch/issues/38349

Implement NumPy-like function `torch.broadcast_to` to broadcast the input tensor to a new shape.
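
Basic usage (mirrors `numpy.broadcast_to`):

```python
import torch

t = torch.tensor([1, 2, 3])
b = torch.broadcast_to(t, (2, 3))   # no copy; a view, like expand()
print(b)
# tensor([[1, 2, 3],
#         [1, 2, 3]])
```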

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48997

Reviewed By: anjali411, ngimel

Differential Revision: D25663937

Pulled By: mruberry

fbshipit-source-id: 0415c03f92f02684983f412666d0a44515b99373
2020-12-21 11:24:50 -08:00
Ivan Yashchuk
f5ee619d2a Updated derivative rules for complex svd and pinverse (#47761)
Summary:
Updated `svd_backward` to work correctly for complex-valued inputs.
Updated `common_methods_invocations.py` to take dtype, device arguments for input construction.
Removed `test_pinverse` from `test_autograd.py`, it is replaced by entries to `common_methods_invocations.py`.
Added `svd` and `pinverse` to list of complex tests.

References for complex-valued SVD differentiation:

- https://giggleliu.github.io/2019/04/02/einsumbp.html
- https://arxiv.org/abs/1909.02659

The derived rules assume gauge invariance of loss functions, so the result would not be correct for loss functions that are not gauge invariant.
https://re-ra.xyz/Gauge-Problem-in-Automatic-Differentiation/

The same rule is implemented in Tensorflow and [BackwardsLinalg.jl](https://github.com/GiggleLiu/BackwardsLinalg.jl).

Ref. https://github.com/pytorch/pytorch/issues/33152
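
Since gauge-invariant consumers are well-defined, the complex rules can be exercised directly with gradcheck (standard usage, shown on `pinverse`; this should pass in builds containing this change):

```python
import torch
from torch.autograd import gradcheck

a = torch.randn(3, 3, dtype=torch.cdouble, requires_grad=True)
assert gradcheck(torch.pinverse, (a,))
```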

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47761

Reviewed By: ngimel

Differential Revision: D25658897

Pulled By: mruberry

fbshipit-source-id: ba33ecbbea3f592238c01e62c7f193daf22a9d01
2020-12-20 14:39:31 -08:00
Mike Ruberry
5fcfebd84a Disables method variant grad and grad grad checks (#49576)
Summary:
These are redundant with the functional variant checks and can be very costly, as some grad and gradgrad testing takes minutes to run per variant. Maybe in the future we'll add them back for operations with divergent method implementations.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49576

Reviewed By: albanD, ngimel

Differential Revision: D25631691

Pulled By: mruberry

fbshipit-source-id: 247f750979d9dafab2454cdbfa992a2aa6da724a
2020-12-17 23:46:40 -08:00
Mike Ruberry
f5b68e74d7 Revert D25574962: [pytorch][PR] Updated derivative rules for complex svd and pinverse
Test Plan: revert-hammer

Differential Revision:
D25574962 (9955355853)

Original commit changeset: 832b61303e88

fbshipit-source-id: d73f77f3e51b0f535dad6d21c5bebf8d41a6bfbd
2020-12-17 00:59:43 -08:00
Ivan Yashchuk
9955355853 Updated derivative rules for complex svd and pinverse (#47761)
Summary:
Updated `svd_backward` to work correctly for complex-valued inputs.
Updated `common_methods_invocations.py` to take dtype, device arguments for input construction.
Removed `test_pinverse` from `test_autograd.py`, it is replaced by entries to `common_methods_invocations.py`.
Added `svd` and `pinverse` to list of complex tests.

References for complex-valued SVD differentiation:

- https://giggleliu.github.io/2019/04/02/einsumbp.html
- https://arxiv.org/abs/1909.02659

The derived rules assume gauge invariance of loss functions, so the result would not be correct for loss functions that are not gauge invariant.
https://re-ra.xyz/Gauge-Problem-in-Automatic-Differentiation/

The same rule is implemented in Tensorflow and [BackwardsLinalg.jl](https://github.com/GiggleLiu/BackwardsLinalg.jl).

Ref. https://github.com/pytorch/pytorch/issues/33152

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47761

Reviewed By: izdeby

Differential Revision: D25574962

Pulled By: mruberry

fbshipit-source-id: 832b61303e883ad3a451b84850ccf0f36763a6f6
2020-12-16 12:32:22 -08:00
Peter Bell
533c837833 Register OpInfos for torch.fft transforms (#48427)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48427

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25266218

Pulled By: mruberry

fbshipit-source-id: 406e7ed5956bc7445daf8c027c9b4d2c8ff88fa1
2020-12-07 17:19:29 -08:00
Lillian Johnson
0e4f9a7872 Refactored OpInfo testing to support custom SampleInputs, added addmm to op_db to test (#48627)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48627

Several changes to the OpInfo testing suite:
- Changed test_ops.py to support sample.inputs that are longer than a single element
- Changed OpInfo class to use custom sample_input generator functions, changed UnaryUfuncInfo to use new format
- Added an MVP addmm op to the operator database to test sample.inputs with a length greater than a single element

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25234178

Pulled By: Lilyjjo

fbshipit-source-id: cca2c60af7e6deb849a1cc3770c04ed88865016c
2020-12-02 19:59:40 -08:00
Lillian Johnson
90faf43151 Support for OpInfo-based testing for operators in JIT (#47696)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47696

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25212436

Pulled By: Lilyjjo

fbshipit-source-id: 1fd2884d86b2afd6321ae1599d755b4beae4670a
2020-12-02 19:59:37 -08:00
kshitij12345
bcc85a363e [numpy] torch.sigmoid : promote integer inputs to float (#47551)
Summary:
Reference https://github.com/pytorch/pytorch/issues/42515
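
The promoted behavior, for reference:

```python
import torch

# integer inputs are now promoted to the default float dtype
print(torch.sigmoid(torch.tensor([0, 1, 2])).dtype)  # torch.float32
```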

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47551

Reviewed By: ngimel

Differential Revision: D25211953

Pulled By: mruberry

fbshipit-source-id: 9174cda401aeba0fd585a4c9bda166dbcf64f42f
2020-12-01 23:28:57 -08:00
Peter Bell
9500e8a081 Testing: Improve interaction between dtypes and ops decorators (#48426)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48426

Tests are run on the intersection of the dtypes requested and the types that are
supported by the operator (or are _not_ if `unsupported_dtypes_only` is used).
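
A sketch of the intersection logic (assumed form):

```python
def dtypes_to_run(requested, supported, unsupported_dtypes_only=False):
    # tests run on requested ∩ supported, or requested − supported when
    # only the unsupported dtypes should be exercised
    if unsupported_dtypes_only:
        return set(requested) - set(supported)
    return set(requested) & set(supported)
```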

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25205835

Pulled By: mruberry

fbshipit-source-id: 2c6318a1a3dc9836af7361f32caf9df28d8a792b
2020-11-30 20:46:22 -08:00
anjali411
d94bd998ec Update backward formulas (Re #44444) (#46275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46275

Re #44444

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D24285785

Pulled By: anjali411

fbshipit-source-id: c60ecd4fe4f144132085f2c91d3b950e92b2a491
2020-10-25 19:40:59 -07:00
Edward Yang
546aab66c1 Revert D24027761: Update backward definition for more operators and reenable tests in test_ops.py
Test Plan: revert-hammer

Differential Revision:
D24027761 (7d809f5d8e)

Original commit changeset: c1f707c2a039

fbshipit-source-id: 30750d2f08886036fb8b2cd0ae51c7732d3b7b19
2020-10-02 18:52:57 -07:00
anjali411
7d809f5d8e Update backward definition for more operators and reenable tests in test_ops.py (#44444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44444

This PR:
1. Fixes https://github.com/pytorch/pytorch/issues/41510. Updates backward formula for the following functions: `asin`, `acos`, `asinh`, `acosh`, `atan`, `atanh`, `div`, `log`, `log10`, `log2`, `log1p`, `pow`, `reciprocal`, `angle`.
2. Re-enables the tests in `test_ops.py`.
3. Adds dispatch for complex dtypes for `tanh_backward`.
4. Re-enables commented tests in `common_methods_invocation.py`.

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24027761

Pulled By: anjali411

fbshipit-source-id: c1f707c2a039149a6e04bbde53ee120d9119d99a
2020-10-02 13:37:10 -07:00
Antonio Cuni
37f9af7f29 Missing tests about torch.xxx(out=...) (#44465)
Summary:
PR opened just to run the CI tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44465

Reviewed By: ngimel

Differential Revision: D23907565

Pulled By: mruberry

fbshipit-source-id: 620661667877f1e9a2bab17d19988e2dc986fc0f
2020-09-29 04:54:46 -07:00
anjali411
9f67176b82 Complex gradcheck logic (#43208)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43208

This PR adds gradcheck for complex. The logic used for complex gradcheck is described in Section 3.5.3 here: https://arxiv.org/pdf/1701.00392.pdf

More concretely, this PR introduces the following changes:
1. Updates get_numerical_jacobian to take as input a scalar value for the vector (v). Adds gradcheck logic for C -> C, C -> R, R -> C. For R -> C functions, only the real value of the gradient is propagated.
2. Adds backward definition for `torch.complex` and also adds a test to verify the definition added.
3. Updates backward for `mul`, `sin`, `cos`, `sinh`, `cosh`.
4. Adds tests for all `torch.real`, `torch.imag`, `torch.view_as_real`, `torch.view_as_complex`, `torch.conj`.
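
A small example of the new coverage (standard gradcheck usage on a C -> R function):

```python
import torch
from torch.autograd import gradcheck

z = torch.randn(3, dtype=torch.cdouble, requires_grad=True)
assert gradcheck(lambda t: t.abs().pow(2).sum(), (z,))
```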

Follow up tasks:
1. Add more thorough tests for R -> C cases. Specifically, add R -> C test variants for functions, e.g., `torch.mul(complex_tensor, real_tensor)`
2. Add back commented test in `common_methods_invocation.py`.
3. Add more special case checking for complex gradcheck to make debugging easier.
4. Update complex autograd note.
5. disable complex autograd for operators not tested for complex.

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23655088

Pulled By: anjali411

fbshipit-source-id: caa75e09864b5f6ead0f988f6368dce64cf15deb
2020-09-20 22:05:04 -07:00
Mike Ruberry
c48f511c7e Moves some of TestTorchMathOps to OpInfos (#44277)
Summary:
This PR fixes three OpInfo-related bugs and moves some functions from TestTorchMathOps to be tested using the OpInfo pattern. The bugs are:

- A skip test path in test_ops.py incorrectly formatted its string argument
- Decorating the tests in common_device_type.py was incorrectly always applying decorators to the original test, not the op-specific variant of the test. This could cause the same decorator to be applied multiple times, overriding past applications.
- make_tensor was incorrectly constructing tensors in some cases

The functions moved are:

- asin
- asinh
- sinh
- acosh
- tan
- atan
- atanh
- tanh
- log
- log10
- log1p
- log2

In a follow-up PR more or all of the remaining functions in TestTorchMathOps will be refactored as OpInfo-based tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44277

Reviewed By: mrshenli, ngimel

Differential Revision: D23617361

Pulled By: mruberry

fbshipit-source-id: edb292947769967de9383f6a84eb327f027509e0
2020-09-10 17:31:50 -07:00
Elias Ellison
e0c65abd38 Revert D23568330: [pytorch][PR] Moves some of TestTorchMathOps to OpInfos
Test Plan: revert-hammer

Differential Revision:
D23568330 (a953a825cc)

Original commit changeset: 03e69fccdbfd

fbshipit-source-id: 04ec6843c5eb3c84ddf226dad0088172d9bed84d
2020-09-09 15:48:56 -07:00
Mike Ruberry
a953a825cc Moves some of TestTorchMathOps to OpInfos (#44277)
Summary:
This PR fixes three OpInfo-related bugs and moves some functions from TestTorchMathOps to be tested using the OpInfo pattern. The bugs are:

- A skip test path in test_ops.py incorrectly formatted its string argument
- Decorating the tests in common_device_type.py was incorrectly always applying decorators to the original test, not the op-specific variant of the test. This could cause the same decorator to be applied multiple times, overriding past applications.
- make_tensor was incorrectly constructing tensors in some cases

The functions moved are:

- asin
- asinh
- sinh
- acosh
- tan
- atan
- atanh
- tanh
- log
- log10
- log1p
- log2

In a follow-up PR more or all of the remaining functions in TestTorchMathOps will be refactored as OpInfo-based tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44277

Reviewed By: ngimel

Differential Revision: D23568330

Pulled By: mruberry

fbshipit-source-id: 03e69fccdbfd560217c34ce4e9a5f20e10d05a5e
2020-09-09 09:41:03 -07:00
Thomas Viehmann
7c61f57bec test_ops: skipTest only takes a single argument (#44181)
Summary:
Fixes a broken skipTest from https://github.com/pytorch/pytorch/issues/43451, e.g. in the ROCm CI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44181

Reviewed By: ngimel

Differential Revision: D23568608

Pulled By: malfet

fbshipit-source-id: 557048bd5f0086ffac38d1c48255badb63869899
2020-09-07 18:32:59 -07:00
Mike Ruberry
665feda15b Adds opinfo-based autograd tests and (un)supported dtype tests (#43451)
Summary:
This PR adds a new test suite, test_ops.py, designed for generic tests across all operators with OpInfos. It currently has two kinds of tests:

- it validates that the OpInfo has the correct supported dtypes by verifying that unsupported dtypes throw an error and supported dtypes do not
- it runs grad and gradgrad checks on each op and its variants (method and inplace) that has an OpInfo

This is a significant expansion and simplification of the current autogenerated autograd tests, which spend considerable time processing their inputs. As an alternative, this PR extends OpInfos with "SampleInputs" that are much easier to use. These sample inputs are analogous to the existing tuples in `method_tests()`.
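
A sketch of the grad checks the suite runs per sample (the loop shape is assumed; it mirrors how OpInfo-based tests consume SampleInputs):

```python
import torch
from torch.autograd import gradcheck, gradgradcheck

def run_grad_checks(op_info, device='cpu', dtype=torch.double):
    for sample in op_info.sample_inputs(device, dtype, requires_grad=True):
        # bind the sample's extra args so gradcheck sees a unary function
        fn = lambda t, s=sample: op_info.op(t, *s.args, **s.kwargs)
        assert gradcheck(fn, (sample.input,))
        assert gradgradcheck(fn, (sample.input,))
```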

Future PRs will extend OpInfo-based testing to other uses of `method_tests()`, like test_jit.py, to ensure that new operator tests can be implemented entirely using an OpInfo.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43451

Reviewed By: albanD

Differential Revision: D23481723

Pulled By: mruberry

fbshipit-source-id: 0c2cdeacc1fdaaf8c69bcd060d623fa3db3d6459
2020-09-03 02:50:48 -07:00