Commit Graph

78 Commits

Author SHA1 Message Date
Kushashwa Ravi Shrimali
08020220f3 [Testing] Adding reference tests to OpInfo class (#59369)
Summary:
This PR adds a `ref` argument to the `OpInfo` base class. The idea is to add reference checks for all _eligible_ ops. For more discussion, please check https://github.com/pytorch/pytorch/issues/58294

* [x] Migrate (but do not yet remove) and modify helper functions from the `UnaryUfuncOpInfo` class to the `OpInfo` base class.
* [x] Test the reference checks for multiple ops (and decide on a list of different, eligible ops for this).
* [x] Handle possible edge cases (for example: `uint64` isn't implemented in PyTorch but exists in NumPy, and this needs to be handled -- more on this later) -- _Update_: We decided that these reference tests should only test for values and not types.
* [x] Create a sample PR for a single op (one from each category?) adding reference functions to the eligible ops. -- _Update_: This is being done in this PR only.
* [x] ~Remove reference tests from `test_unary_ufuncs.py` and test to make sure that nothing breaks.~ (*Update*: We won't be touching Unary Ufunc reference tests in this PR)
* [x] Add comments, remove unnecessary prints/comments (added for debugging).

Note: To keep the PR description short, examples of edge cases encountered have been mentioned in the comments below.
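To make the intent concrete, here is a minimal sketch of a value-only reference check (my illustration of the idea, not this PR's exact code; `check_against_reference` is a hypothetical helper):

```python
import numpy as np
import torch

# Hypothetical helper: compare a torch op against its NumPy reference on
# values only, ignoring dtype differences (e.g., NumPy promoting to float64).
def check_against_reference(torch_fn, ref_fn, t):
    actual = torch_fn(t)
    expected = torch.from_numpy(np.asarray(ref_fn(t.cpu().numpy())))
    assert torch.allclose(actual, expected.to(actual.dtype), rtol=1e-5, atol=1e-5)

check_against_reference(torch.sin, np.sin, torch.randn(10))
```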

cc: mruberry pmeier kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59369

Reviewed By: ngimel

Differential Revision: D29347252

Pulled By: mruberry

fbshipit-source-id: 69719deddb1d23c53db45287a7e66c1bfe7e65bb
2021-06-23 19:26:08 -07:00
Rong Rong (AI Infra)
510334f34b [BE] clean up IS_PYTORCH_CI and IN_CI (#60279)
Summary:
`IS_PYTORCH_CI` and `IN_CI` are used interchangeably; however, in some cases `IN_CI` is not set because it only exists in .circleci/scripts/setup_ci_environment.sh. This cleans up the two flags and uses only `IN_CI`.
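A sketch of the consolidated check (my guess at its shape; the exact constant lives in the test utilities):

```python
import os

# Gate CI-only behavior on the single IN_CI flag, which is exported by
# .circleci/scripts/setup_ci_environment.sh.
IN_CI = os.getenv("IN_CI") == "1"

if IN_CI:
    print("running CI-only code path")
```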

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60279

Test Plan: CI

Reviewed By: seemethere

Differential Revision: D29239545

Pulled By: walterddr

fbshipit-source-id: a069424a2bb8790a3adfdaf0dc460301026bf8c7
2021-06-20 19:45:07 -07:00
Mike Ruberry
59b10036d5 Unifies OpInfo dtype tests (#60157)
Summary:
Simplifies the OpInfo dtype tests and produces nicer error messages, like:

```
AssertionError: Items in the first set but not the second:
torch.bfloat16
Items in the second set but not the first:
torch.int64 : Attempted to compare [set] types: Expected: {torch.float64, torch.float32, torch.float16, torch.bfloat16}; Actual: {torch.float64, torch.float32, torch.float16, torch.int64}.
The supported dtypes for logcumsumexp on cuda according to its OpInfo are
        {torch.float64, torch.float32, torch.float16, torch.int64}, but the detected supported dtypes are {torch.float64, torch.float32, torch.float16, torch.bfloat16}.
        The following dtypes should be added to the OpInfo: {torch.bfloat16}. The following dtypes should be removed from the OpInfo: {torch.int64}.
```
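The gist of the comparison (my sketch) is a plain set difference between the claimed and detected dtypes:

```python
import torch

claimed = {torch.float64, torch.float32, torch.float16, torch.int64}     # from the OpInfo
detected = {torch.float64, torch.float32, torch.float16, torch.bfloat16} # from running the op

print("add to the OpInfo:", detected - claimed)      # {torch.bfloat16}
print("remove from the OpInfo:", claimed - detected) # {torch.int64}
```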

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60157

Reviewed By: ngimel

Differential Revision: D29188665

Pulled By: mruberry

fbshipit-source-id: e84c9892c6040ea47adb027cfef3a6c0fd2f9f3c
2021-06-17 06:34:54 -07:00
kshitij12345
64aec8d2ca [testing] OpInfoHelper tool (#58698)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/57577

Usage:
Add an OpInfo entry to `common_methods_invocations` with `dtypes=_DYNAMIC_DTYPES`.
E.g.:
```
OpInfo('atan2',
        dtypes=_DYNAMIC_DTYPES,
        sample_inputs_func=sample_inputs_atan2,)
```

Run the helper with `python -m torch.testing._internal.opinfo_helper`

Output
```
OpInfo(atan2,
       # hint: all_types + (torch.bool,),
       dtypes=[torch.float32, torch.float64, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.bool],
       # hint: all_types + (torch.bool, torch.bfloat16, torch.float16),
       dtypesIfCUDA=[torch.float32, torch.float64, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.bool, torch.bfloat16, torch.float16],
       sample_inputs_func=sample_inputs_atan2)
```

Output without CUDA (run with `$ CUDA_VISIBLE_DEVICES=-1 python -m torch.testing._internal.opinfo_helper`)
```
UserWarning: WARNING: CUDA is not available, information pertaining to CUDA could be wrong
  warnings.warn("WARNING: CUDA is not available, information pertaining to CUDA could be wrong")
OpInfo(atan2,
       # hint: all_types + (torch.bool,),
       dtypes=[torch.float32, torch.float64, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.bool],
       sample_inputs_func=sample_inputs_atan2)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58698

Reviewed By: H-Huang

Differential Revision: D29160668

Pulled By: mruberry

fbshipit-source-id: 707370a83b451b02ad2fe539775c8c50ecf90be8
2021-06-16 17:17:03 -07:00
Rong Rong (AI Infra)
5b9fced70a add output_process_fn_grad before sum().backward() (#59971)
Summary:
This should fix the `to_sparse` test issue.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59971

Test Plan:
CI

Also: directly examine the RuntimeError thrown from test_unsupported_backward
- Before:
```
NotImplementedError: Could not run 'aten::sum' with arguments from the 'SparseCPU' backend.
```
- After:
```
to_dense() not supported for float16 on CPU
```
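The fixed pattern, as a minimal sketch (my illustration; `to_dense()` stands in for the sample's `output_process_fn_grad`):

```python
import torch

x = torch.randn(3, 3, requires_grad=True)
out = x.to_sparse()   # the op under test produces a sparse tensor
out = out.to_dense()  # stand-in for sample.output_process_fn_grad(out)
out.sum().backward()  # the usual reduce-and-backward step of the test
assert x.grad is not None
```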

Reviewed By: soulitzer

Differential Revision: D29112558

Pulled By: walterddr

fbshipit-source-id: c2acd22cd18d5b34d25209b8415feb3ba28fa104
2021-06-14 16:20:03 -07:00
Rong Rong (AI Infra)
26beda8ed5 [BE] unsupported backward failing on single sample (#59455)
Summary:
Echoing https://github.com/pytorch/pytorch/pull/58260#discussion_r637467625

Similar to `test_unsupported_dtype`, which only checks that an exception is raised on the first sample, we should do the same for unsupported backward (see the sketch below). The goal of both tests is to remind developers to:
1. add a new dtype to the support list if it is fully runnable without failure (over all samples)
2. replace the skip mechanism, which indefinitely ignores tests without warning
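A minimal sketch of the pattern (my illustration, using a stand-in failure; the real test calls the op on its first sample):

```python
import torch
import unittest

class UnsupportedBackwardSketch(unittest.TestCase):
    def test_first_sample_raises(self):
        sample = torch.randn(3, requires_grad=True)  # first sample only
        with self.assertRaises(RuntimeError):
            sample.add_(1)  # stand-in: in-place on a leaf that requires grad raises
```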

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59455

Test Plan: CI.

Reviewed By: mruberry

Differential Revision: D28927169

Pulled By: walterddr

fbshipit-source-id: 2993649fc17a925fa331e27c8ccdd9b24dd22c20
2021-06-09 08:17:03 -07:00
Victor Quach
5fc105b323 Raise NotImplementedError on forward passes (#59483)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59483

... for functions that are not implemented

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D28933806

fbshipit-source-id: dadae1af6609f15419cf0f47a98361dc87dff849
2021-06-08 14:03:19 -07:00
Mike Ruberry
de40c8e495 Adds remaining OpInfos and removes redundant test generators (#55558)
Summary:
Per title.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55558

Reviewed By: ngimel

Differential Revision: D28922522

Pulled By: mruberry

fbshipit-source-id: 89cefd93788bc8aa0683f4583cf5caa81aa2dc93
2021-06-06 14:52:26 -07:00
kshitij12345
da972afdcd OpInfo: to_sparse (#59445)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59445

Reviewed By: ngimel

Differential Revision: D28920866

Pulled By: mruberry

fbshipit-source-id: ba8d3071d9937096288b69511000eeb007f53434
2021-06-05 19:13:58 -07:00
anjali411
3607478ecd Conjugate View (#54987)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54987

Based on ezyang's (https://github.com/pytorch/pytorch/pull/44799) and bdhirsh's (https://github.com/pytorch/pytorch/pull/43702) prototypes:

Here's a summary of the changes in this PR:
This PR adds a new dispatch key called Conjugate. This enables us to make the conjugate operation a view and leverage the specialized library functions that fast-path the Hermitian operation (conj + transpose).

1. The conjugate operation now returns a view with the conj bit set (1) for complex tensors, and returns self for non-complex tensors as before. This also means `torch.view_as_real` will no longer be a view on conjugated complex tensors and is hence disabled. To fill the gap, we have added `torch.view_as_real_physical`, which returns the real tensor agnostic of the conjugate bit on the input complex tensor. The information about conjugation on the old tensor can be obtained by calling `.is_conj()` on the new tensor.
2. NEW API:
    a) `.conj()` -- now returning a view.
    b) `.conj_physical()` -- does the physical conjugate operation. If the conj bit for the input was set, you'd get `self.clone()`; else you get a new tensor with conjugated values in its memory.
    c) `.conj_physical_()`, and `out=` variant
    d) `.resolve_conj()` -- materializes the conjugation. Returns self if the conj bit is unset, else returns a new tensor with conjugated values and the conj bit set to 0.
    e) `.resolve_conj_()` in-place version of (d)
    f) `view_as_real_physical` -- as described in (1), it's functionally same as `view_as_real`, just that it doesn't error out on conjugated tensors.
    g) `view_as_real` -- existing function, but now errors out on conjugated tensors.
3. Conjugate Fallback
    a) The vast majority of PyTorch functions currently use this fallback when they are called on a conjugated tensor.
    b) This fallback is well equipped to handle the following cases:
        - functional operation e.g., `torch.sin(input)`
        - Mutable inputs and in-place operations e.g., `tensor.add_(2)`
        - out-of-place operation e.g., `torch.sin(input, out=out)`
        - Tensorlist input args
        - NOTE: Meta tensors don't work with conjugate fallback.
4. Autograd
    a) `resolve_conj()` is an identity function w.r.t. autograd
    b) Everything else works as expected.
5. Testing:
    a) All method_tests run with conjugate view tensors.
    b) OpInfo tests that run with conjugate views
        - test_variant_consistency_eager/jit
        - gradcheck, gradgradcheck
        - test_conj_views (that only run for `torch.cfloat` dtype)

NOTE: functions like `empty_like`, `zeros_like`, `randn_like`, and `clone` don't propagate the conjugate bit.
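A quick illustration (my example) of the view semantics in (1) and (2):

```python
import torch

x = torch.tensor([1 + 2j, 3 - 4j])
y = x.conj()            # now a view of x; no copy is made
print(y.is_conj())      # True: the conj bit is set on the view
z = y.resolve_conj()    # materializes the values; z.is_conj() is False
print(torch.equal(z, torch.tensor([1 - 2j, 3 + 4j])))  # True
```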

Follow up work:
1. conjugate view RFC
2. Add neg bit to re-enable view operation on conjugated tensors
3. Update linalg functions to call into specialized functions that fast path with the hermitian operation.

Test Plan: Imported from OSS

Reviewed By: VitalyFedyunin

Differential Revision: D28227315

Pulled By: anjali411

fbshipit-source-id: acab9402b9d6a970c6d512809b627a290c8def5f
2021-06-04 14:12:41 -07:00
kshitij12345
49c2da0ee0 [testing] improve broadcasts_input error message (#58295)
Summary:
Context:
The Error message when `broadcasts_input` is marked incorrectly is uninformative [See Error Currently]
https://github.com/pytorch/pytorch/pull/57941#discussion_r631749435

Error Currently
```
Traceback (most recent call last):
  File "/home/kshiteej/Pytorch/pytorch_i0_promotion/test/test_ops.py", line 326, in test_variant_consistency_eager
    _test_consistency_helper(samples, variants)
  File "/home/kshiteej/Pytorch/pytorch_i0_promotion/test/test_ops.py", line 310, in _test_consistency_helper
    variant_forward = variant(cloned,
  File "/home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/unittest/case.py", line 227, in __exit__
    self._raiseFailure("{} not raised".format(exc_name))
  File "/home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/unittest/case.py", line 164, in _raiseFailure
    raise self.test_case.failureException(msg)
AssertionError: RuntimeError not raised
```

Error After PR
```
Traceback (most recent call last):
  File "/home/kshiteej/Pytorch/pytorch_i0_promotion/test/test_ops.py", line 329, in test_variant_consistency_eager
    _test_consistency_helper(samples, variants)
  File "/home/kshiteej/Pytorch/pytorch_i0_promotion/test/test_ops.py", line 313, in _test_consistency_helper
    variant_forward = variant(cloned,
  File "/home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/unittest/case.py", line 227, in __exit__
    self._raiseFailure("{} not raised".format(exc_name))
  File "/home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/unittest/case.py", line 164, in _raiseFailure
    raise self.test_case.failureException(msg)
AssertionError: RuntimeError not raised : inplace variant either allowed resizing or you have marked the sample SampleInput(input=Tensor, args=(tensor([[[ 2.1750, -8.5027, -3.1403, -6.9942,  3.2609],
         [-2.5057, -5.9123, -5.4633,  6.1203, -8.2124],
         [-3.5802, -8.4869, -6.0700,  2.3431, -8.1955],
         [-7.3316,  1.3248, -6.8661,  7.1483, -8.0719],
         [ 4.5977, -4.0448, -6.2044, -2.1314, -8.4956]],

        [[ 3.2769, -8.4360,  1.2826,  7.1749,  4.7653],
         [-0.2816, -2.5997, -4.7659, -3.7814,  3.9704],
         [-2.1778, -3.8117, -6.0276, -0.8423, -5.9646],
         [ 8.6544, -3.0922,  0.2558, -4.9318, -4.7596],
         [ 4.5583,  4.3830,  5.8793,  0.9713, -2.1481]],

        [[-1.0447,  0.9334,  7.6405, -4.8933, -7.4010],
         [ 7.7168, -8.4266, -5.5980, -6.9368,  7.1309],
         [-8.7720, -5.0890, -0.4975,  1.9518,  1.7074],
         [-8.5783,  8.5510, -8.5459, -3.5451,  8.4319],
         [ 8.5052, -8.9149, -6.6298, -1.2750, -5.7367]],

        [[-6.5625,  8.2795, -4.9311,  1.9501, -7.1777],
         [-8.4035,  1.1136, -7.6418, -7.0726, -2.8281],
         [ 4.2668, -0.2883, -6.2246,  2.3396,  1.2911],
         [ 4.6550, -1.9525,  4.4873, -3.8061, -0.8653],
         [-3.4256,  4.4423,  8.2937, -5.3456, -4.2624]],

        [[ 7.6128, -6.3932,  4.7131, -5.4938,  6.4792],
         [-6.5385,  2.4385,  4.5570,  3.7803, -8.3281],
         [-2.9785, -4.4745, -1.1778, -8.9324,  1.3663],
         [ 3.7437,  3.5171, -6.3135, -8.4519, -2.7033],
         [-5.0568, -8.4630, -4.2870, -3.7284, -1.5238]]], device='cuda:0',
       dtype=torch.float32, requires_grad=True),), broadcasts_input=True) incorrectly with `broadcasts_self=True
```

**NOTE**:
Printing the sample looks very verbose and it may be hard to figure out which sample is incorrectly configured if there are multiple samples with similar input shapes.

Two Options to make this error less verbose
* Don't print the sample and just print `inplace variant either allowed resizing or you have marked one of the sample incorrectly with broadcasts_self=True`
* Have some mechanism to name samples which will be printed in the `repr` (which will need extra machinery)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58295

Reviewed By: ngimel

Differential Revision: D28627308

Pulled By: mruberry

fbshipit-source-id: b3bdeacac3cf9c0d984f0b85410ecce474291d20
2021-05-25 21:14:17 -07:00
albanD
9afe9fba29 Reland OpInfo support for forward AD (#58304)
Summary:
Third attempt to land this.
Using the ci-all label to ensure we test everything.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58304

Reviewed By: heitorschueroff

Differential Revision: D28474343

Pulled By: albanD

fbshipit-source-id: 8230fa3c0a8d3633f09999e7c2f47dbdc5fe57e9
2021-05-17 12:33:27 -07:00
Alban Desmaison
4bcaa5ae20 Revert D28412496: Revert "Revert D28387767: Add forward AD test for op info"
Test Plan: revert-hammer

Differential Revision:
D28412496 (4f28c0b590)

Original commit changeset: 5b8e30b5e807

fbshipit-source-id: 5a47aad4d5428e97e2d2b4acb4192909360870cd
2021-05-14 08:26:03 -07:00
albanD
4f28c0b590 Revert "Revert D28387767: Add forward AD test for op info" (#58230)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58230

This reverts commit f88297c66b.

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28412496

Pulled By: albanD

fbshipit-source-id: 5b8e30b5e80771dedf999c3aaa9791fc9026f8c1
2021-05-14 06:55:44 -07:00
Nikolay Korovaiko
d304bb070a Gelu Backward, Contribution from Kevin Stephano (#58249)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58249

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28425381

Pulled By: Krovatkin

fbshipit-source-id: 21b7ac972220b6c35b285e3b66f05eb392002408
2021-05-13 16:36:44 -07:00
Mike Ruberry
f88297c66b Revert D28387767: Add forward AD test for op info
Test Plan: revert-hammer

Differential Revision:
D28387767 (26b6d044cd)

Original commit changeset: 369d76921c84

fbshipit-source-id: 91ac961339bdd5e1e2530d2655364f9fe46cdafb
2021-05-12 20:41:25 -07:00
albanD
26b6d044cd Add forward AD test for op info (#57701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57701

The new OpInfo flag has the following semantics:
- If it says that it supports forward AD, we run gradcheck with forward AD to ensure it is correct
- If it says that it does not support it, we check that the corresponding error is raised

All the added tests take 3s to run for CPU builds and 1 min for GPU builds, which should be pretty negligible compared to the test_ops runtime for each of these architectures.
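Concretely (my sketch; `torch.sin` is just an example of an op that supports forward AD):

```python
import torch
from torch.autograd import gradcheck

x = torch.randn(3, dtype=torch.float64, requires_grad=True)
# For ops whose OpInfo declares forward-AD support, the test runs gradcheck
# with forward-mode AD enabled; for the rest it asserts the expected error.
gradcheck(torch.sin, (x,), check_forward_ad=True)
```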

Test Plan: Imported from OSS

Reviewed By: agolynski

Differential Revision: D28387767

Pulled By: albanD

fbshipit-source-id: 369d76921c8460aa4548f9b5159b7297994672f5
2021-05-12 18:49:18 -07:00
Elias Ellison
7627dd568a hardswish reland (#57652)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57652

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D28226724

Pulled By: eellison

fbshipit-source-id: 585a91ffab7a855b5600e79130a37be25ef9b354
2021-05-05 17:21:43 -07:00
Shen Li
887d0e5657 Revert D28197820: [JIT][NNC] add hardswish symbolic gradient and NNC lowering
Test Plan: revert-hammer

Differential Revision:
D28197820 (0142fd0b57)

Original commit changeset: 05305d85c5bb

fbshipit-source-id: 2e1d9699515982ba2a9be06e83a2ce043ec857ee
2021-05-05 07:53:30 -07:00
eellison
0142fd0b57 [JIT][NNC] add hardswish symbolic gradient and NNC lowering (#57383)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57383

Notes: I picked up an activation from https://github.com/pytorch/pytorch/issues/56969. You can look at the [activations.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/Activation.cpp#L429) file which has both forward and backward kernel code to help you write the NNC lowering and the symbolic gradient.

I added a test in test_jit_fuser_te for the fusion, and I added an OpInfo and asserted that we expect to see autodiffable nodes to test the symbolic gradient.
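For reference, the piecewise gradient being lowered, as a sketch (my formulation of d/dx hardswish; boundary values chosen to match what I believe the ATen kernel does):

```python
import torch
import torch.nn.functional as F

def hardswish_grad(x):
    # 0 for x < -3, 1 for x > 3, and (2x + 3) / 6 in between
    return torch.where(x < -3, torch.zeros_like(x),
                       torch.where(x > 3, torch.ones_like(x), (2 * x + 3) / 6))

x = torch.randn(8, dtype=torch.float64, requires_grad=True)
F.hardswish(x).sum().backward()
assert torch.allclose(x.grad, hardswish_grad(x.detach()))
```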

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D28197820

Pulled By: eellison

fbshipit-source-id: 05305d85c5bb0847c8f911b95ba47b137dca7e90
2021-05-04 23:39:59 -07:00
Rong Rong (AI Infra)
095c328d9f Add supported backward_dtype to OpInfo (#56156)
Summary:
Related to https://github.com/pytorch/pytorch/issues/55601.

- [x] removed complex autograd checker in `test_supported_backward`
- [x] created `backward_dtype[If<Device>]` that inherits from normal `dtype[If<Device>]` by default
- [x] removed all skips for the backward test and added backward dtypes instead
- [x] changed complex autograd to a function call, `support_complex_autograd(device_type)`, that depends on `backward_dtype*`, since they essentially mean the same thing for complex types

TODO for next PR
- add `test_unsupported_backward` to verify they are actually unsupported.
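The idea in miniature (my sketch; the real fields live on `OpInfo`):

```python
import torch

# Forward support and backward support are separate dtype sets; the follow-up
# test asserts a RuntimeError for dtypes outside the backward set.
dtypes = {torch.float32, torch.float64, torch.int64}
backward_dtypes = {torch.float32, torch.float64}

x = torch.ones(3, dtype=torch.int64)
assert x.dtype in dtypes and x.dtype not in backward_dtypes
try:
    x.requires_grad_()  # integer tensors cannot require grad
except RuntimeError as e:
    print("backward unsupported for int64:", e)
```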

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56156

Reviewed By: mruberry

Differential Revision: D27926717

Pulled By: walterddr

fbshipit-source-id: 9a4af8612278ca44a97b6f1510b6b175852c893b
2021-04-30 08:24:58 -07:00
Philip Meier
db32b69591 quote str kwarg values in test_ops.py::TestCommon::test_jit_alias_remapping (#57120)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/57119.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57120

Reviewed By: gchanan

Differential Revision: D28086601

Pulled By: mruberry

fbshipit-source-id: 566a53c2365f2d128da49ac58463e37b36455831
2021-04-30 04:29:12 -07:00
albanD
10fd7d8be6 Add option to OpInfo to skip gradgrad check and empty cdist OpInfo (#56603)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56603

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D27939204

Pulled By: albanD

fbshipit-source-id: c7c80551ef3c34c822832891a99104440893ea4c
2021-04-23 14:06:33 -07:00
Jeffrey Wan
d01302431c Enable fast gradcheck for real inputs and outputs (#55237)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55237

In this PR, we reenable fast-gradcheck and resolve misc issues that arise:
Before landing this PR, land #55182 so that slow tests are still being run periodically.

Bold indicates the issue is handled in this PR; otherwise it was handled in a previous PR.

**Non-determinism issues**:
- ops that do not have a deterministic implementation (as documented at https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html#torch.use_deterministic_algorithms)
  - test_pad_cuda (replication_pad2d) (test_nn)
  - interpolate (test_nn)
  - cummin, cummax (scatter_add_cuda_kernel) (test_ops)
  - test_fn_gradgrad_prod_cpu_float64 (test_ops)

Randomness:
  - RRelu (new module tests) - we fix by using our own generator as to avoid messing with user RNG state (handled in #54480)

Numerical precision issues:
- jacobian mismatch: test_gelu (test_nn, float32, not able to replicate locally) - we fixed this by disabling for float32 (handled in a previous PR)
- cholesky_solve (test_linalg): #56235 handled in previous PR
- **cumprod** (test_ops) - #56275 disabled fast gradcheck

Not yet replicated:
 - test_relaxed_one_hot_categorical_2d (test_distributions)

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27920906

fbshipit-source-id: 894dd7bf20b74f1a91a5bc24fe56794b4ee24656
2021-04-22 19:46:37 -07:00
kshitij12345
6350fcef83 [testing] add broadcasts_input and verifies the behaviour for inplace_variant. (#55771)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55595

* Add `broadcasts_input` attribute to SampleInput
* Update test_variant_consistency_eager to verify that calling the inplace variant on a sample with `broadcasts_input == True` raises an error (see the sketch below).
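A minimal sketch of what the new attribute encodes (my example):

```python
import torch
from torch.testing._internal.common_methods_invocations import SampleInput

small, big = torch.randn(3), torch.randn(2, 3)
sample = SampleInput(small, args=(big,), broadcasts_input=True)

torch.add(small, big)   # out-of-place is fine: the result has shape (2, 3)
try:
    small.add_(big)     # in-place must raise: `small` cannot be resized
except RuntimeError as e:
    print("raised as expected:", e)
```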

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55771

Reviewed By: jbschlosser, ngimel

Differential Revision: D27760530

Pulled By: mruberry

fbshipit-source-id: feb0658730d4cff483848a5ade9512837a65c24c
2021-04-15 07:39:50 -07:00
Mike Ruberry
399b66c813 Ports logdet from method_tests() to op_db (#55743)
Summary:
Per title. Also updates some tensor construction helpers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55743

Reviewed By: ngimel

Differential Revision: D27702060

Pulled By: mruberry

fbshipit-source-id: f64b7bee855733ad1f4fd182819ceec5831d9878
2021-04-11 20:39:16 -07:00
Natalia Gimelshein
0910363e8f adds data_ptr checks to in-place OpInfo variant tests and out OpInfo tests (#55527)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55088.
Unfortunately, this test wouldn't have caught the index_add_ breakage (because that breakage would appear only in a particular type promotion situation).
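The check itself is simple (my sketch):

```python
import torch

a, b = torch.randn(3), torch.randn(3)
out = torch.empty(3)
ptr = out.data_ptr()
torch.add(a, b, out=out)
assert out.data_ptr() == ptr  # a correctly sized out= tensor must not be reallocated
```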

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55527

Reviewed By: mruberry

Differential Revision: D27671138

Pulled By: ngimel

fbshipit-source-id: b52411f5a6d81098b706dfda4d0c9a16716414d7
2021-04-08 23:09:28 -07:00
kshitij12345
17e5ba44f1 [testing] Support input samples where self is broadcasted. (#53014)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/50747

Reference https://github.com/pytorch/pytorch/issues/50006

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53014

Reviewed By: SplitInfinity, ngimel

Differential Revision: D27615320

Pulled By: mruberry

fbshipit-source-id: a48bccf06aef1ee8f66a89e6b2bbe79736700b2b
2021-04-07 08:20:27 -07:00
lezcano
158cdece65 Correct many OpInfos "test_out" skips. (#55141)
Summary:
Partially solves https://github.com/pytorch/pytorch/issues/54061

This PR solves many of the "easy to solve" problems with `out=` not notifying when it resizes a tensor. It also reports the cause of some failures of the `out=` operation in the tests. Hopefully this way we will be able to catch some errors that do not come simply from not using `resize_output`.
cc mruberry anjali411

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55141

Reviewed By: anjali411

Differential Revision: D27568755

Pulled By: mruberry

fbshipit-source-id: a32546555fef8d241de2ef635a99e5615461ed09
2021-04-06 08:41:25 -07:00
Ivan Yashchuk
2798f38c86 Added checks for dtype and device of OpInfo's sample_inputs (#54949)
Summary:
Currently, it's not tested whether `op.sample_inputs` actually uses the provided dtype and device arguments. This PR fixes that by introducing asserts in `test_supported_dtypes`.
This will help to detect incorrectly generated inputs in the future.
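Roughly the asserts being added (my sketch over plain tensors; the real test walks `SampleInput`s):

```python
import torch

def check_samples(inputs, device, dtype):
    for t in inputs:
        assert t.dtype == dtype, f"expected {dtype}, got {t.dtype}"
        assert t.device.type == device, f"expected {device}, got {t.device.type}"

check_samples([torch.randn(3, device="cpu", dtype=torch.float64)], "cpu", torch.float64)
```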

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54949

Reviewed By: H-Huang

Differential Revision: D27435952

Pulled By: mruberry

fbshipit-source-id: 8465c459b9b0c007411a9a74340bc2755519624a
2021-04-01 09:34:51 -07:00
Heitor Schueroff
8b02d1207b Fixed OpInfo jit tests failing for TensorList inputs (#54954)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53906

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54954

Reviewed By: glaringlee

Differential Revision: D27474863

Pulled By: heitorschueroff

fbshipit-source-id: cf8c1cac6fd1cceacd6be73a2eb49d28a5cfc20a
2021-04-01 07:09:07 -07:00
Heitor Schueroff
8b8c4096ee Added OpInfo gradcheck_wrapper to replace output_func (#54914)
Summary:
Added a field to `OpInfo` to provide a wrapper function for gradcheck. This is useful for functions that need to perform some extra input/output processing to work with gradcheck.

fixes https://github.com/pytorch/pytorch/issues/50837
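As an example of the pattern (my hypothetical wrapper, not the PR's helper): precondition the input so gradcheck can perturb an unconstrained tensor while the op only ever sees valid inputs.

```python
import torch
from torch.autograd import gradcheck

def gradcheck_wrapper_spd(op, x):
    # map an arbitrary square matrix to a symmetric positive-definite one
    spd = x @ x.transpose(-2, -1) + 3 * torch.eye(x.shape[-1], dtype=x.dtype)
    return op(spd)

x = torch.randn(3, 3, dtype=torch.float64, requires_grad=True)
assert gradcheck(lambda t: gradcheck_wrapper_spd(torch.linalg.cholesky, t), (x,))
```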

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54914

Reviewed By: H-Huang

Differential Revision: D27435234

Pulled By: heitorschueroff

fbshipit-source-id: fa3e9b61f3d3df221243fd142ddb8b7861dbf669
2021-03-31 20:23:49 -07:00
Heitor Schueroff
6d87b3667f Added support for TensorList inputs in OpInfo (#54922)
Summary:
Stack:
* https://github.com/pytorch/pytorch/issues/54954 Fixed OpInfo jit tests failing for TensorList inputs
* __#54922 Added support for TensorList inputs in OpInfo__

Updated OpInfo to accept either a `Tensor` or `TensorList` as `sample.input` and added workarounds to make this work with gradcheck.

Note: JIT testing support for TensorList inputs will be added in a follow up PR.

Fixes https://github.com/pytorch/pytorch/issues/51996
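For instance (my example), an op like `torch.cat` can now be described with a list as the sample's input:

```python
import torch
from torch.testing._internal.common_methods_invocations import SampleInput

tensors = [torch.randn(2, 3, requires_grad=True) for _ in range(3)]
sample = SampleInput(tensors, args=(0,))  # TensorList input, dim=0
out = torch.cat(sample.input, *sample.args)
out.sum().backward()                      # gradcheck-style backward works too
```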

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54922

Reviewed By: H-Huang

Differential Revision: D27448952

Pulled By: heitorschueroff

fbshipit-source-id: 3f24a56f6180eb2d044dcfc89ba59fce8acfe278
2021-03-31 04:42:10 -07:00
Mike Ruberry
94efb48e16 Adds the cfloat dtype to the eager and jit variant consistency tests (#54854)
Summary:
Per title. One skip for addmm was needed. Either it or the jit test doesn't seem to handle a complex literal properly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54854

Reviewed By: anjali411

Differential Revision: D27395651

Pulled By: mruberry

fbshipit-source-id: 0bfadf0a8500f26d3a89f56f104fb44561f594d9
2021-03-29 08:15:27 -07:00
kshitij12345
b7c5d57563 [testing] support op with args/kwargs in test_unary_ufunc (#52194)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51242

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52194

Reviewed By: ngimel

Differential Revision: D27385139

Pulled By: mruberry

fbshipit-source-id: 63118dee33a138ef13810465d2d2d9fa194dfb28
2021-03-28 18:10:20 -07:00
76181208+imaginary-person@users.noreply.github.com
5cd8a77e01 Skip inplace autograd test if inplace variant doesn't exist (#54460)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54413

1. Skip the inplace autograd test for an op if its inplace variant does not exist (see the sketch below).
2. For ops that don't have an inplace variant, remove redundant `supports_inplace_autograd=False` assignments in their `OpInfo`s.
3. Ops having inplace variants that do not support autograd should not have `supports_inplace_autograd=False` entries removed from their `OpInfo`s.
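A sketch of the lookup behind (1) (my illustration of the trailing-underscore convention):

```python
import torch

def inplace_variant(name):
    return getattr(torch.Tensor, name + "_", None)

assert inplace_variant("add") is not None  # Tensor.add_ exists: test runs
assert inplace_variant("matmul") is None   # no Tensor.matmul_: test is skipped
```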

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54460

Reviewed By: ngimel

Differential Revision: D27255938

Pulled By: mruberry

fbshipit-source-id: f15334b09e68995e9f26adc2ff3e59c292689ee8
2021-03-23 21:10:37 -07:00
Heitor Schueroff
591084abb8 Deprecate torch.matrix_power in favor of torch.linalg.matrix_power (#53538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53538

* #52608 Added torch.linalg.matrix_power

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D27261531

Pulled By: heitorschueroff

fbshipit-source-id: 5a944b390f3cc6896c2aa92ba467319ddc9309e4
2021-03-23 15:11:24 -07:00
Mike Ruberry
7b939d934e Simplifes OpInfo test matrix to reduce test time (#53255)
Summary:
This PR:

- Updates the structure of the SampleInput class to require the "input" attribute be a tensor
- Limits unary ufuncs to test only the uint8, long, float16, bfloat16, float and cfloat dtypes by default
- Limits variant testing to the float dtype
- Removes test_variant_consistency from test_unary_ufuncs.py since it's now redundant with variant testing in test_ops.py
- Adds backwards supported testing to clarify failures that were coming from variant testing

This should decrease end-to-end test time.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53255

Reviewed By: ngimel

Differential Revision: D27043643

Pulled By: mruberry

fbshipit-source-id: 91d6b483ad6e2cd1b9ade939d42082980ae14217
2021-03-22 03:48:27 -07:00
Ivan Yashchuk
564456ac44 Added autograd support for torch.orgqr (#52637)
Summary:
This PR adds autograd support for `torch.orgqr`.

Since `torch.orgqr` is one of the few functions that expose LAPACK's naming, while all other linear algebra routines were renamed a long time ago, I also added a new function with a new name, and `torch.orgqr` is now an alias for it.

The new proposed name is `householder_product`. For a matrix `input` and a vector `tau`, LAPACK's orgqr operation takes the columns of `input` (called Householder vectors, or elementary reflectors) and the scalars of `tau`, which together represent Householder matrices, and computes the product of these matrices. See https://www.netlib.org/lapack/lug/node128.html.
Other linear algebra libraries that I'm aware of do not expose this LAPACK function, so there is some freedom in naming it. It is usually used internally only for QR decomposition, but it can be useful for deep learning tasks now that it supports differentiation.

Resolves https://github.com/pytorch/pytorch/issues/50104
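A quick illustration (my example): `geqrf` produces the reflectors and `tau`, and `householder_product` assembles the orthogonal factor from them.

```python
import torch

A = torch.randn(4, 3, dtype=torch.float64)
reflectors, tau = torch.geqrf(A)
Q = torch.linalg.householder_product(reflectors, tau)
# Q has orthonormal columns by construction
assert torch.allclose(Q.transpose(-2, -1) @ Q, torch.eye(3, dtype=torch.float64))
```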

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52637

Reviewed By: agolynski

Differential Revision: D27114246

Pulled By: mruberry

fbshipit-source-id: 9ab51efe52aec7c137aa018c7bd486297e4111ce
2021-03-18 05:42:18 -07:00
kshitij12345
df7c0a06d6 [testing] assert no duplicate in method_tests for an OpInfo entry (#53492)
Summary:
Assert no duplicate in method_tests for an OpInfo entry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53492

Reviewed By: izdeby

Differential Revision: D26882441

Pulled By: mruberry

fbshipit-source-id: f0631ea2b46b74285c76365c679bd45abc917d63
2021-03-14 21:58:39 -07:00
Mike Ruberry
2d36b30a8c Expands OpInfo out= testing (#53259)
Summary:
Addresses several of the challenges described in https://github.com/pytorch/pytorch/issues/49468.

This PR builds on https://github.com/pytorch/pytorch/pull/50741 and https://github.com/pytorch/pytorch/issues/53105 to extend OpInfo out= testing. It covers the following cases for ops that produce a single tensor:

- out= values don't affect computation
- out= noncontiguous produces the correct output and preserves strides
- out= with the wrong shape throws a warning
- out= with an empty tensor throws no warning
- out= with the wrong device throws an error
- out= with a dtype the computation's result can't be "safely" cast to throws an error

It works with operations that produce a single tensor and operations that produce an iterable of tensors (the latter is tested with operations like torch.svd).

In addition to the new out= test, the OpInfos have been updated. "supports_tensor_out" is replaced with the more general and straightforward "supports_out" metadata, and many operations that previously had to skip out= testing with an explicit SkipInfo no longer need to. A couple of redundant tests in test_unary_ufuncs.py have been removed, too.

One other perk of these tests is that, once all operations have OpInfos, this will allow us to validate that we've universally deprecated incorrectly sized tensors passed to out=, and give us the option to actually disable the behavior.
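The listed cases translate directly into checks like these (my illustrations):

```python
import torch

a = torch.randn(3)

out = torch.empty(5)       # wrong (non-empty) shape: resized with a UserWarning
torch.sin(a, out=out)

out = torch.empty(0)       # empty tensor: resized silently, no warning
torch.sin(a, out=out)

out = torch.empty(3, dtype=torch.long)
try:
    torch.sin(a, out=out)  # float result can't be safely cast to long
except RuntimeError as e:
    print("dtype error:", e)
```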

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53259

Reviewed By: mrshenli

Differential Revision: D26894723

Pulled By: mruberry

fbshipit-source-id: 2b536e9baf126f36386a35f2f806dd88c58690b3
2021-03-09 08:19:26 -08:00
Mike Ruberry
1e992810b5 Revert D26811466: [pytorch][PR] [reland] Add OpInfo for bitwise_not and make ROCM and CUDA OpInfo tests consistent
Test Plan: revert-hammer

Differential Revision:
D26811466 (a5ada2127d)

Original commit changeset: 8434a7515d83

fbshipit-source-id: 9c2c760e18154a88cf7531e45843a802e3f3d19c
2021-03-08 15:47:47 -08:00
kshitij12345
a5ada2127d [reland] Add OpInfo for bitwise_not and make ROCM and CUDA OpInfo tests consistent (#53181)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

This PR also enables the OpInfo tests on ROCM to check the same dtypes as CUDA.

Note: Reland https://github.com/pytorch/pytorch/issues/51944

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53181

Reviewed By: zhangguanheng66

Differential Revision: D26811466

Pulled By: mruberry

fbshipit-source-id: 8434a7515d83ed859db1b2f916fad81a9deaeb9b
2021-03-08 03:39:01 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Alban Desmaison
59b2b8b091 Revert D26727660: [pytorch][PR] Add OpInfo for bitwise_not and make ROCM and CUDA OpInfo tests consistent
Test Plan: revert-hammer

Differential Revision:
D26727660 (816646bd6f)

Original commit changeset: 3aea236cf000

fbshipit-source-id: 91c6ec0c55c0295bb209f450ae3c96bee0a37356
2021-03-03 06:08:48 -08:00
kshitij12345
816646bd6f Add OpInfo for bitwise_not and make ROCM and CUDA OpInfo tests consistent (#51944)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

This PR also enables the OpInfo tests on ROCM to check the same dtypes as CUDA.

A few tests have to be skipped (due to failures).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51944

Reviewed By: H-Huang

Differential Revision: D26727660

Pulled By: mruberry

fbshipit-source-id: 3aea236cf0002f46c2737afbda2ed3efccfe14f5
2021-03-02 22:56:40 -08:00
kshitij12345
aa603cb2ce add OpInfo entry for signbit (#52198)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52198

Reviewed By: H-Huang

Differential Revision: D26727598

Pulled By: mruberry

fbshipit-source-id: 282350febbd0b1af73320f0e912bf553d386d4b0
2021-03-02 10:38:34 -08:00
kshitij12345
49b59e3472 Add OpInfo entries for i0 and logical_not (#51956)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51956

Reviewed By: albanD

Differential Revision: D26404440

Pulled By: mruberry

fbshipit-source-id: dd73e63155dd4a200afb38a5e566eb2132e69fde
2021-02-23 10:12:05 -08:00
vfdev
8b0cb5ede3 OpInfo: Added clamp and trunc tests with aliases (#51167)
Summary:
Description:
- Added clamp and trunc tests with aliases
- Added tests for aliases of asin(h), acos(h), etc.
- Fixed the `fix` alias implementation
- Fixed annotations in test_jit_alias_remapping
- Updated the native_functions.yaml alias guidelines

Blocked by https://github.com/pytorch/pytorch/issues/50368
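The alias relationships being exercised, in brief (my examples):

```python
import torch

x = torch.tensor([0.9, -0.5, 0.3])
assert torch.equal(torch.fix(x * 3), torch.trunc(x * 3))       # fix aliases trunc
assert torch.equal(torch.clip(x, 0, 1), torch.clamp(x, 0, 1))  # clip aliases clamp
assert torch.equal(torch.arcsin(x), torch.asin(x))             # arcsin aliases asin
```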

cc mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51167

Reviewed By: gchanan

Differential Revision: D26245753

Pulled By: mruberry

fbshipit-source-id: e17b657f0515139735a8a677b1ae284904f98aef
2021-02-10 05:36:18 -08:00
Mike Ruberry
594a66d778 Warn about floor_divide performing incorrect rounding (#50281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50281

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51745
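For context, the discrepancy being warned about (my example; at the time of this PR, `floor_divide` truncated toward zero rather than flooring):

```python
import torch

print(-7 // 2)                              # -4: Python floors
print(torch.floor_divide(torch.tensor(-7),
                         torch.tensor(2)))  # tensor(-3) at the time (truncation);
                                            # later releases changed this to floor
```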

Test Plan: Imported from OSS

Reviewed By: ngimel

Pulled By: mruberry

Differential Revision: D26257855

fbshipit-source-id: e5d497cf07b0c746838ed081c5d0e82fb4cb701b
2021-02-10 03:13:34 -08:00