Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54987
Based on ezyang's (https://github.com/pytorch/pytorch/pull/44799) and bdhirsh's (https://github.com/pytorch/pytorch/pull/43702) prototypes:
Here's a summary of the changes in this PR:
This PR adds a new dispatch key called Conjugate. This enables us to make the conjugate operation a view and to leverage the specialized library functions that fast-path the Hermitian operation (conj + transpose).
1. The conjugate operation now returns a view with the conj bit set (1) for complex tensors, and returns self for non-complex tensors as before. This also means `torch.view_as_real` can no longer be a view on conjugated complex tensors and is hence disabled for them. To fill the gap, we have added `torch.view_as_real_physical`, which returns the real tensor regardless of the conjugate bit on the input complex tensor. Whether a tensor carries a pending conjugation can be queried by calling `.is_conj()` on it.
2. NEW API (a short usage sketch follows the list):
a) `.conj()` -- now returns a view.
b) `.conj_physical()` -- performs the physical conjugate operation. If the conj bit on the input is set, you get `self.clone()`; otherwise you get a new tensor with conjugated values in its memory.
c) `.conj_physical_()`, and an `out=` variant
d) `.resolve_conj()` -- materializes the conjugation. Returns self if the conj bit is unset; otherwise returns a new tensor with conjugated values and the conj bit set to 0.
e) `.resolve_conj_()` -- in-place version of (d)
f) `view_as_real_physical` -- as described in (1), functionally the same as `view_as_real`, except that it doesn't error out on conjugated tensors.
g) `view_as_real` -- existing function, but now errors out on conjugated tensors.
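Here is a rough usage sketch of the new API (illustrative, based on the descriptions above, not a test from this PR):
```
import torch

z = torch.tensor([1 + 2j, 3 - 4j])

c = z.conj()              # a view; no data is copied
print(c.is_conj())        # True -- the conj bit is set on the view
print(z.is_conj())        # False -- the original tensor is untouched

m = c.resolve_conj()      # materializes the conjugation into memory
print(m.is_conj())        # False -- the conj bit is cleared
print(torch.equal(m, torch.tensor([1 - 2j, 3 + 4j])))  # True

p = z.conj_physical()     # eagerly writes conjugated values; never a view
print(p.is_conj())        # False
```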
3. Conjugate Fallback
a) The vast majority of PyTorch functions currently use this fallback when they are called on a conjugated tensor.
b) This fallback is well equipped to handle the following cases:
- Functional operations, e.g., `torch.sin(input)`
- Mutable inputs and in-place operations, e.g., `tensor.add_(2)`
- `out=` variants, e.g., `torch.sin(input, out=out)`
- TensorList input args
- NOTE: Meta tensors don't work with conjugate fallback.
4. Autograd
a) `resolve_conj()` is an identity function w.r.t. autograd
b) Everything else works as expected.
5. Testing:
a) All method_tests run with conjugate view tensors.
b) OpInfo tests that run with conjugate views
- test_variant_consistency_eager/jit
- gradcheck, gradgradcheck
- test_conj_views (which only runs for the `torch.cfloat` dtype)
NOTE: functions like `empty_like`, `zeros_like`, `randn_like`, and `clone` don't propagate the conjugate bit.
Follow-up work:
1. conjugate view RFC
2. Add neg bit to re-enable view operation on conjugated tensors
3. Update linalg functions to call into specialized functions that fast path with the hermitian operation.
Test Plan: Imported from OSS
Reviewed By: VitalyFedyunin
Differential Revision: D28227315
Pulled By: anjali411
fbshipit-source-id: acab9402b9d6a970c6d512809b627a290c8def5f
Summary:
Context:
The error message produced when `broadcasts_input` is marked incorrectly is uninformative (see "Error Currently" below).
https://github.com/pytorch/pytorch/pull/57941#discussion_r631749435
Error Currently
```
Traceback (most recent call last):
File "/home/kshiteej/Pytorch/pytorch_i0_promotion/test/test_ops.py", line 326, in test_variant_consistency_eager
_test_consistency_helper(samples, variants)
File "/home/kshiteej/Pytorch/pytorch_i0_promotion/test/test_ops.py", line 310, in _test_consistency_helper
variant_forward = variant(cloned,
File "/home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/unittest/case.py", line 227, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/unittest/case.py", line 164, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: RuntimeError not raised
```
Error After PR
```
Traceback (most recent call last):
File "/home/kshiteej/Pytorch/pytorch_i0_promotion/test/test_ops.py", line 329, in test_variant_consistency_eager
_test_consistency_helper(samples, variants)
File "/home/kshiteej/Pytorch/pytorch_i0_promotion/test/test_ops.py", line 313, in _test_consistency_helper
variant_forward = variant(cloned,
File "/home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/unittest/case.py", line 227, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.8/unittest/case.py", line 164, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: RuntimeError not raised : inplace variant either allowed resizing or you have marked the sample SampleInput(input=Tensor, args=(tensor([[[ 2.1750, -8.5027, -3.1403, -6.9942, 3.2609],
[-2.5057, -5.9123, -5.4633, 6.1203, -8.2124],
[-3.5802, -8.4869, -6.0700, 2.3431, -8.1955],
[-7.3316, 1.3248, -6.8661, 7.1483, -8.0719],
[ 4.5977, -4.0448, -6.2044, -2.1314, -8.4956]],
[[ 3.2769, -8.4360, 1.2826, 7.1749, 4.7653],
[-0.2816, -2.5997, -4.7659, -3.7814, 3.9704],
[-2.1778, -3.8117, -6.0276, -0.8423, -5.9646],
[ 8.6544, -3.0922, 0.2558, -4.9318, -4.7596],
[ 4.5583, 4.3830, 5.8793, 0.9713, -2.1481]],
[[-1.0447, 0.9334, 7.6405, -4.8933, -7.4010],
[ 7.7168, -8.4266, -5.5980, -6.9368, 7.1309],
[-8.7720, -5.0890, -0.4975, 1.9518, 1.7074],
[-8.5783, 8.5510, -8.5459, -3.5451, 8.4319],
[ 8.5052, -8.9149, -6.6298, -1.2750, -5.7367]],
[[-6.5625, 8.2795, -4.9311, 1.9501, -7.1777],
[-8.4035, 1.1136, -7.6418, -7.0726, -2.8281],
[ 4.2668, -0.2883, -6.2246, 2.3396, 1.2911],
[ 4.6550, -1.9525, 4.4873, -3.8061, -0.8653],
[-3.4256, 4.4423, 8.2937, -5.3456, -4.2624]],
[[ 7.6128, -6.3932, 4.7131, -5.4938, 6.4792],
[-6.5385, 2.4385, 4.5570, 3.7803, -8.3281],
[-2.9785, -4.4745, -1.1778, -8.9324, 1.3663],
[ 3.7437, 3.5171, -6.3135, -8.4519, -2.7033],
[-5.0568, -8.4630, -4.2870, -3.7284, -1.5238]]], device='cuda:0',
dtype=torch.float32, requires_grad=True),), broadcasts_input=True) incorrectly with `broadcasts_self=True
```
**NOTE**:
Printing the sample is very verbose, and it may be hard to figure out which sample is incorrectly configured if there are multiple samples with similar input shapes.
Two options to make this error less verbose (a sketch of the invariant being checked follows the list):
* Don't print the sample and just print `inplace variant either allowed resizing or you have marked one of the samples incorrectly with broadcasts_self=True`
* Have some mechanism to name samples, with the name printed in the `repr` (this needs extra machinery)
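For context, a minimal sketch of the invariant being checked (the sample here is illustrative, not the PR's code):
```
import torch
from torch.testing._internal.common_methods_invocations import SampleInput

# A sample whose input broadcasts against its argument: the in-place
# variant writes into `input`, which cannot hold the broadcasted result.
sample = SampleInput(torch.randn(3), args=(torch.randn(5, 3),),
                     broadcasts_input=True)

# The test therefore asserts that the in-place variant raises.
try:
    sample.input.add_(sample.args[0])
except RuntimeError as e:
    print("raised as expected:", e)
```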
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58295
Reviewed By: ngimel
Differential Revision: D28627308
Pulled By: mruberry
fbshipit-source-id: b3bdeacac3cf9c0d984f0b85410ecce474291d20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57701
The new OpInfo flag has the following semantic:
- If it says that it supports forward AD, we run gradcheck with forward AD to ensure it is correct
- If it says that it does not support it, we check that the corresponding error is raised
All the added tests take 3 s to run for CPU builds and 1 min for GPU builds, which should be pretty negligible compared to the test_ops runtime for each of these architectures.
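A rough sketch of what the flag drives (assuming the OpInfo flag is named `supports_forward_ad` and using gradcheck's `check_forward_ad` option; illustrative, not the PR's exact test body):
```
import torch
from torch.autograd import gradcheck

def run_forward_ad_check(op, inp, supports_forward_ad):
    inp = inp.detach().requires_grad_(True)
    if supports_forward_ad:
        # The op claims forward AD support: gradcheck must pass with it on.
        assert gradcheck(op, (inp,), check_forward_ad=True)
    else:
        # Otherwise the "not implemented" error is expected to surface.
        try:
            gradcheck(op, (inp,), check_forward_ad=True)
        except RuntimeError:
            pass  # expected: forward AD not implemented for this op

run_forward_ad_check(torch.sin, torch.randn(3, dtype=torch.double), True)
```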
Test Plan: Imported from OSS
Reviewed By: agolynski
Differential Revision: D28387767
Pulled By: albanD
fbshipit-source-id: 369d76921c8460aa4548f9b5159b7297994672f5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57383
Notes: I picked up an activation from https://github.com/pytorch/pytorch/issues/56969. You can look at the [activations.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/Activation.cpp#L429) file, which has both the forward and backward kernel code, to help you write the NNC lowering and the symbolic gradient.
I added a test in test_jit_fuser_te for the fusion, and I added an OpInfo asserting that we expect to see autodiffable nodes, to test the symbolic gradient.
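A rough sketch of how an OpInfo can assert autodiffability (`assert_autodiffed` and `autodiff_nonfusible_nodes` are existing OpInfo fields; the op name and node list here are illustrative, not the PR's exact entry):
```
import torch
from torch.testing._internal.common_methods_invocations import OpInfo

op = OpInfo('nn.functional.hardswish',  # illustrative activation
            dtypes=(torch.float32, torch.float64),
            # The JIT variant-consistency test checks that autodiff picks
            # this op up and produces the expected differentiable node.
            assert_autodiffed=True,
            autodiff_nonfusible_nodes=['aten::hardswish'])
```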
Test Plan: Imported from OSS
Reviewed By: mrshenli
Differential Revision: D28197820
Pulled By: eellison
fbshipit-source-id: 05305d85c5bb0847c8f911b95ba47b137dca7e90
Summary:
Related to https://github.com/pytorch/pytorch/issues/55601.
- [x] removed the complex autograd checker in `test_supported_backward`
- [x] created `backward_dtype[If<Device>]`, which inherits from the normal `dtype[If<Device>]` by default
- [x] removed all skips for the backward tests; added backward dtypes instead
- [x] changed complex autograd to a function call, `support_complex_autograd(device_type)`, that depends on `backward_dtype*`, since they essentially mean the same thing for complex types (see the sketch after the TODO below)
TODO for next PR
- add `test_unsupported_backward` to verify they are actually unsupported.
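A rough sketch of the intended metadata (assuming the plural field name `backward_dtypes`, following the `backward_dtype[If<Device>]` pattern above; the op is hypothetical):
```
import torch
from torch.testing._internal.common_methods_invocations import OpInfo

# Forward supports complex dtypes, but backward does not (yet):
op = OpInfo('my_op',  # hypothetical op name
            dtypes=(torch.float32, torch.float64, torch.cfloat),
            backward_dtypes=(torch.float32, torch.float64))

# With this metadata, support_complex_autograd('cpu') reports False, so the
# backward tests skip only the complex dtypes instead of being skipped
# wholesale.
```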
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56156
Reviewed By: mruberry
Differential Revision: D27926717
Pulled By: walterddr
fbshipit-source-id: 9a4af8612278ca44a97b6f1510b6b175852c893b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55237
In this PR, we re-enable fast gradcheck and resolve miscellaneous issues that arose:
Before landing this PR, land #55182 so that slow tests are still being run periodically.
Bolded indicates the issue is handled in this PR, otherwise it is handled in a previous PR.
**Non-determinism issues**:
- ops that do not have a deterministic implementation (as documented at https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html#torch.use_deterministic_algorithms)
- test_pad_cuda (replication_pad2d) (test_nn)
- interpolate (test_nn)
- cummin, cummax (scatter_add_cuda_kernel) (test_ops)
- test_fn_gradgrad_prod_cpu_float64 (test_ops)
Randomness:
- RRelu (new module tests) - we fix this by using our own generator, so as to avoid messing with the user's RNG state (handled in #54480)
Numerical precision issues:
- jacobian mismatch: test_gelu (test_nn, float32, not able to replicate locally) - we fixed this by disabling it for float32 (handled in a previous PR)
- cholesky_solve (test_linalg): #56235 handled in previous PR
- **cumprod** (test_ops) - #56275 disabled fast gradcheck
Not yet replicated:
- test_relaxed_one_hot_categorical_2d (test_distributions)
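For reference, fast gradcheck is the `fast_mode` path of `torch.autograd.gradcheck` (a minimal sketch of the difference):
```
import torch
from torch.autograd import gradcheck

x = torch.randn(4, 3, dtype=torch.double, requires_grad=True)

# Slow mode reconstructs the full Jacobian entry by entry.
assert gradcheck(torch.tanh, (x,), fast_mode=False)

# Fast mode compares random vector projections of the analytical and
# numerical Jacobians -- much cheaper, but more sensitive to the
# numerical-precision issues listed above.
assert gradcheck(torch.tanh, (x,), fast_mode=True)
```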
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D27920906
fbshipit-source-id: 894dd7bf20b74f1a91a5bc24fe56794b4ee24656
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55088.
Unfortunately, this test wouldn't have caught the index_add_ breakage, because that breakage appears only in a particular type-promotion situation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55527
Reviewed By: mruberry
Differential Revision: D27671138
Pulled By: ngimel
fbshipit-source-id: b52411f5a6d81098b706dfda4d0c9a16716414d7
Summary:
Partially solves https://github.com/pytorch/pytorch/issues/54061
This PR solves many of the "easy to solve" problems with `out=` not warning when it resizes a tensor. It also reports the cause of some failures of the `out=` operation in the tests. Hopefully this way we will be able to catch errors that do not come simply from not using `resize_output`.
cc mruberry anjali411
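The behavior under test, in brief (a minimal sketch; the quoted warning text paraphrases `resize_output`'s actual message):
```
import torch
import warnings

a = torch.randn(5)
b = torch.randn(5)
out = torch.empty(2)  # wrong shape: will be resized to (5,)

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    torch.add(a, b, out=out)

# Ops routed through resize_output emit a UserWarning like
# "An output with one or more elements was resized ...".
assert any("resized" in str(warning.message) for warning in w)
print(out.shape)  # torch.Size([5])
```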
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55141
Reviewed By: anjali411
Differential Revision: D27568755
Pulled By: mruberry
fbshipit-source-id: a32546555fef8d241de2ef635a99e5615461ed09
Summary:
Currently, it's not tested whether `op.sample_inputs` actually uses the provided dtype and device arguments. This PR fixes that by introducing asserts in `test_supported_dtypes`.
This will help to detect incorrectly generated inputs in the future.
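A minimal sketch of the kind of check added (illustrative, not the PR's exact test body):
```
import torch

def check_sample_inputs(op, device, dtype):
    for sample in op.sample_inputs(device, dtype):
        # The generated input must honor the requested dtype and device.
        assert sample.input.dtype == dtype, \
            f"sample_inputs ignored dtype: got {sample.input.dtype}"
        assert sample.input.device.type == torch.device(device).type, \
            f"sample_inputs ignored device: got {sample.input.device}"
```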
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54949
Reviewed By: H-Huang
Differential Revision: D27435952
Pulled By: mruberry
fbshipit-source-id: 8465c459b9b0c007411a9a74340bc2755519624a
Summary:
Added a field to `OpInfo` to provide a wrapper function for gradcheck. This is useful for functions that need to perform some extra input/output processing to work with gradcheck.
Fixes https://github.com/pytorch/pytorch/issues/50837
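A rough sketch of the kind of wrapper this field enables (the symmetrization trick mirrors the wrapper used for ops that require Hermitian inputs; illustrative, not the PR's exact code):
```
import torch

# gradcheck perturbs inputs freely; the wrapper re-symmetrizes them so the
# finite-difference points stay inside the op's valid (Hermitian) domain.
def gradcheck_wrapper_hermitian_input(op, input, *args, **kwargs):
    return op(input + input.conj().transpose(-2, -1), *args, **kwargs)

x = torch.randn(3, 3, dtype=torch.cdouble, requires_grad=True)
torch.autograd.gradcheck(
    lambda t: gradcheck_wrapper_hermitian_input(torch.linalg.eigvalsh, t),
    (x,))
```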
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54914
Reviewed By: H-Huang
Differential Revision: D27435234
Pulled By: heitorschueroff
fbshipit-source-id: fa3e9b61f3d3df221243fd142ddb8b7861dbf669
Summary:
Stack:
* https://github.com/pytorch/pytorch/issues/54954 Fixed OpInfo jit tests failing for TensorList inputs
* __#54922 Added support for TensorList inputs in OpInfo__
Updated OpInfo to accept either a `Tensor` or `TensorList` as `sample.input` and added workarounds to make this work with gradcheck.
Note: JIT testing support for TensorList inputs will be added in a follow up PR.
Fixes https://github.com/pytorch/pytorch/issues/51996
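A minimal sketch of a TensorList sample (illustrative):
```
import torch
from torch.testing._internal.common_methods_invocations import SampleInput

# sample.input may now be a list of tensors instead of a single tensor,
# e.g., for ops such as torch.cat:
sample = SampleInput([torch.randn(2, 3), torch.randn(4, 3)], args=(0,))

result = torch.cat(sample.input, *sample.args)
print(result.shape)  # torch.Size([6, 3])
```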
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54922
Reviewed By: H-Huang
Differential Revision: D27448952
Pulled By: heitorschueroff
fbshipit-source-id: 3f24a56f6180eb2d044dcfc89ba59fce8acfe278
Summary:
Per title. One skip for addmm was needed: either it or the JIT test doesn't seem to handle a complex literal properly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54854
Reviewed By: anjali411
Differential Revision: D27395651
Pulled By: mruberry
fbshipit-source-id: 0bfadf0a8500f26d3a89f56f104fb44561f594d9
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54413
1. Skip the inplace autograd test for an op if its inplace variant does not exist (see the sketch below).
2. For ops that don't have an inplace variant, remove the redundant `supports_inplace_autograd=False` assignments from their `OpInfo`s.
3. Ops whose inplace variants do not support autograd keep their `supports_inplace_autograd=False` entries.
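A minimal sketch of the existence check (illustrative; the test suite resolves the inplace variant from the method name plus a trailing underscore):
```
import torch

def get_inplace_variant(op_name):
    # torch.Tensor.add -> torch.Tensor.add_; returns None when no inplace
    # variant exists.
    return getattr(torch.Tensor, op_name + "_", None)

assert get_inplace_variant("add") is not None
assert get_inplace_variant("argmax") is None  # no argmax_ -> test skipped
```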
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54460
Reviewed By: ngimel
Differential Revision: D27255938
Pulled By: mruberry
fbshipit-source-id: f15334b09e68995e9f26adc2ff3e59c292689ee8
Summary:
This PR:
- Updates the structure of the SampleInput class to require that the "input" attribute be a tensor
- Limits unary ufuncs to test only the uint8, long, float16, bfloat16, float and cfloat dtypes by default
- Limits variant testing to the float dtype
- Removes test_variant_consistency from test_unary_ufuncs.py since it's now redundant with variant testing in test_ops.py
- Adds backward-support testing to clarify failures that were coming from variant testing
This should decrease end-to-end test time.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53255
Reviewed By: ngimel
Differential Revision: D27043643
Pulled By: mruberry
fbshipit-source-id: 91d6b483ad6e2cd1b9ade939d42082980ae14217
Summary:
This PR adds autograd support for `torch.orgqr`.
Since `torch.orgqr` is one of the few functions that expose LAPACK's naming, and all other linear algebra routines were renamed a long time ago, I also added a new function with a new name, and `torch.orgqr` is now an alias for it.
The new proposed name is `householder_product`. For a matrix `input` and a vector `tau`, LAPACK's orgqr operation takes the columns of `input` (called Householder vectors, or elementary reflectors) and the scalars of `tau`, which together represent Householder matrices, and computes the product of these matrices. See https://www.netlib.org/lapack/lug/node128.html.
Other linear algebra libraries that I'm aware of do not expose this LAPACK function, so there is some freedom in naming it. It is usually used internally only for QR decomposition, but it can be useful for deep learning tasks now that it supports differentiation.
Resolves https://github.com/pytorch/pytorch/issues/50104
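A short usage sketch (assuming the new function lands as `torch.linalg.householder_product`; `torch.geqrf` produces the (input, tau) pair it consumes):
```
import torch

a = torch.randn(5, 3, dtype=torch.double)

# geqrf returns the Householder representation of the QR decomposition:
# the reflectors packed below the diagonal of `h`, plus the scalars `tau`.
h, tau = torch.geqrf(a)

# householder_product multiplies the reflectors back into an explicit Q.
q = torch.linalg.householder_product(h, tau)

q_ref, _ = torch.linalg.qr(a)
print(torch.allclose(q, q_ref))  # True -- linalg.qr uses the same routines
```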
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52637
Reviewed By: agolynski
Differential Revision: D27114246
Pulled By: mruberry
fbshipit-source-id: 9ab51efe52aec7c137aa018c7bd486297e4111ce
Summary:
Addresses several of the challenges described in https://github.com/pytorch/pytorch/issues/49468.
This PR builds on https://github.com/pytorch/pytorch/pull/50741 and https://github.com/pytorch/pytorch/issues/53105 to extend OpInfo out= testing. It covers the following cases for ops that produce a single tensor:
- out= values don't affect computation
- out= noncontiguous produces the correct output and preserves strides
- out= with the wrong shape throws a warning
- out= with an empty tensor throws no warning
- out= with the wrong device throws an error
- out= with a dtype the computation's result can't be "safely" cast to throws an error
It works with operations that produce a single tensor and operations that produce an iterable of tensors (the latter is tested with operations like torch.svd).
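A minimal sketch of three of the cases above (illustrative assertions in the style of the test suite, not the PR's exact code):
```
import torch
from torch.testing._internal.common_utils import TestCase

class OutArgChecks(TestCase):
    def test_out_cases(self):
        a, b = torch.randn(5), torch.randn(5)
        # Wrong shape: resized, with a warning.
        with self.assertWarnsRegex(UserWarning, "resized"):
            torch.add(a, b, out=torch.empty(3))
        # Wrong device: hard error.
        if torch.cuda.is_available():
            with self.assertRaisesRegex(RuntimeError, "device"):
                torch.add(a, b, out=torch.empty(5, device="cuda"))
        # Result dtype can't be safely cast down: hard error.
        with self.assertRaisesRegex(RuntimeError, "can't be cast"):
            torch.add(a, b, out=torch.empty(5, dtype=torch.long))
```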
In addition to the new out= test, the OpInfos have been updated. "supports_tensor_out" is replaced with the more general and straightforward "supports_out" metadata, and many operations that previously had to skip out= testing with an explicit SkipInfo no longer need to. A couple of redundant tests in test_unary_ufuncs.py have been removed, too.
One other perk of these tests is that, once all operations have OpInfos, they will let us validate that we've universally deprecated incorrectly sized tensors passed to out=, and give us the option to actually disable that behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53259
Reviewed By: mrshenli
Differential Revision: D26894723
Pulled By: mruberry
fbshipit-source-id: 2b536e9baf126f36386a35f2f806dd88c58690b3
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857
These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
- `GLOSSARY.md`
- `aten/src/ATen/core/op_registration/README.md`
- `scripts/README.md`
- `torch/csrc/jit/codegen/fuser/README.md`
The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```
I looked over the auto-generated changes and didn't see anything that looked problematic.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406
Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377
This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348
Reviewed By: walterddr, seemethere
Differential Revision: D26856620
Pulled By: samestep
fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515
This PR also enables the OpInfo tests on ROCm to check the same dtypes as CUDA.
A few tests had to be skipped (due to failures).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51944
Reviewed By: H-Huang
Differential Revision: D26727660
Pulled By: mruberry
fbshipit-source-id: 3aea236cf0002f46c2737afbda2ed3efccfe14f5
Summary:
Removed skipCUDAIfRocm to re-enable tests on the ROCm platform.
Initially, only 4799 cases were being run, of which 882 were skipped.
After removing skipCUDAIfRocm from two places in test_ops.py, more than
8000 cases are now executed, of which only 282 are skipped; those are
FFT-related tests.
Signed-off-by: Arindam Roy <rarindam@gmail.com>
Fixes #{issue number}
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50500
Reviewed By: albanD
Differential Revision: D25920303
Pulled By: mrshenli
fbshipit-source-id: b2d17b7e2d1de4f9fdd6f1660fb4cad5841edaa0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50279
This allows different sample inputs to have different behavior for the same
operator. For example, `div(..., rounding_mode='true')` promotes, but other
rounding modes don't. The current boolean flag is too restrictive to allow this.
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D25950011
Pulled By: mruberry
fbshipit-source-id: 7e82b82bedc626b2b6970d92d5b25676183ec384
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50818
This PR does two things:
1. Add batched grad testing to OpInfo
2. Improve the error message from `gradcheck` when batched gradient
computation fails, so that it includes suggestions for workarounds.
To add batched grad testing to OpInfo, this PR:
- adds new `check_batched_grad=True` and `check_batched_gradgrad=True`
attributes to OpInfo. These are True by default because we expect most
operators to support batched gradient computation.
- If `check_batched_grad=True`, then `test_fn_grad` invokes gradcheck
with `check_batched_grad=True`.
- If `check_batched_gradgrad=True`, then `test_fn_gradgrad` invokes gradgradcheck with `check_batched_grad=True`.
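A minimal sketch of what the new attributes drive (using gradcheck's existing `check_batched_grad` option; illustrative):
```
import torch
from torch.autograd import gradcheck, gradgradcheck

x = torch.randn(3, dtype=torch.double, requires_grad=True)

# check_batched_grad=True additionally computes gradients under vmap and
# compares them against a loop of per-sample grad computations.
assert gradcheck(torch.sin, (x,), check_batched_grad=True)
assert gradgradcheck(torch.sin, (x,), check_batched_grad=True)
```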
The improved gradcheck error message looks like the following when an
exception is thrown while computing batched gradients:
https://gist.github.com/zou3519/5a0f46f908ba036259ca5e3752fd642f
Future
- Sometime in the not-near future, we will separate out "batched grad
testing" from "gradcheck" for the purposes of OpInfo to make the
testing more granular and also so that we can test that the vmap
fallback doesn't get invoked (currently batched gradient testing only
tests that the output values are correct).
Test Plan: - run tests `pytest test/test_ops.py -v -k "Gradients"`
Reviewed By: ejguan
Differential Revision: D25997703
Pulled By: zou3519
fbshipit-source-id: 6d2d444d6348ae6cdc24c32c6c0622bd67b9eb7b
Summary:
This PR adds `torch.linalg.slogdet`.
Changes compared to the original torch.slogdet:
- Complex input now works as in NumPy
- Added out= variant (allocates temporary and makes a copy for now)
- Updated `slogdet_backward` to work with complex input
Ref. https://github.com/pytorch/pytorch/issues/42666
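A short usage sketch of the new function (illustrative):
```
import torch

a = torch.randn(3, 3, dtype=torch.cdouble)
sign, logabsdet = torch.linalg.slogdet(a)

# For complex input, `sign` is a complex number with |sign| == 1, as in NumPy:
print(sign, logabsdet)
print(torch.allclose(sign * logabsdet.exp(), torch.linalg.det(a)))  # True
```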
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49194
Reviewed By: VitalyFedyunin
Differential Revision: D25916959
Pulled By: mruberry
fbshipit-source-id: cf9be8c5c044870200dcce38be48cd0d10e61a48
Summary:
Related https://github.com/pytorch/pytorch/issues/38349
Implement the NumPy-like function `torch.broadcast_to`, which broadcasts the input tensor to a new shape.
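A short usage sketch (illustrative):
```
import torch

x = torch.tensor([1, 2, 3])
y = torch.broadcast_to(x, (2, 3))
print(y)
# tensor([[1, 2, 3],
#         [1, 2, 3]])

# As with expand, the result is a view: broadcast dims have stride 0.
print(y.stride())  # (0, 1)
```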
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48997
Reviewed By: anjali411, ngimel
Differential Revision: D25663937
Pulled By: mruberry
fbshipit-source-id: 0415c03f92f02684983f412666d0a44515b99373