Commit Graph

288 Commits

Author SHA1 Message Date
Brian Hirsh
4a2d2e5e40 Change API type Tensor[] for structured kernels. (#73350)
Partially fixes: #66328

This PR:
- adds support for `ITensorList` to the dispatcher for:
  - computing the dispatch key
  - boxing and unboxing `ITensorList`
- modifies the codegen for structured kernels:
  - codegen APIs use `ITensorList` instead of `ArrayRef<Tensor>`

**Changes summary:**

- Signature changes due to the different APIs:
  - dispatcher API (e.g. `BatchingRegistrations.cpp`)
  - C++ API (e.g. `TensorShape.cpp`)
- Miscellaneous functions used by codegen'd functions (e.g. `FunctionalTensorWrapper.*`)
- Dispatcher changes for handling `ITensorList` correctly (e.g. `DispatchKeyExtractor.h`)
- Signature changes of `at::cat` due to the need for `const` inside `TensorBody.h`
- Forward declarations of `ITensorList` (e.g. `MethodOperators.h`)
- Codegen changes, special casing structured kernels (e.g. `gen.py`)

**Short description of structured kernels special casing:**

I introduced five main types of changes to the codegen, so that it generates
different code depending on whether the kernel is structured:

1. Added a `structured_type_override` flag to the `argument_type` function definition of
the affected APIs (mainly the dispatcher and C++ APIs).
  - `api/cpp.py`, `api/dispatcher.py`, `api/native.py`
2. Added a `structured_type_override` member to the signature
classes (e.g. `CppSignature`), since `FunctionSchema` doesn't really know whether the
function is structured or not
  - `api/types.py`
3. Added a `part_of_structured_group` helper to the `NativeFunction` class, which is just a
convenience function that forwards to `structured_type_override` wherever needed
  - `model.py`
4. Appropriately changed the rest of the codegen, whenever it used either the signature
classes or the `arguments` function directly
5. Added a check for `const ITensorList&` type wherever there was a check for `TensorList`
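To make change (1) concrete, here is a minimal, self-contained sketch; it is not the actual `torchgen` code, and everything except the `structured_type_override` flag and the `ITensorList` type is simplified:

```python
from dataclasses import dataclass

@dataclass
class Argument:
    name: str
    type: str  # schema type, e.g. "Tensor[]"

def argument_type(a: Argument, *, structured_type_override: bool) -> str:
    # Structured kernels get the new list type; everything else keeps
    # ArrayRef<Tensor> for backwards compatibility.
    if a.type == "Tensor[]":
        return "const ITensorList&" if structured_type_override else "at::ArrayRef<at::Tensor>"
    return "const at::Tensor&"

print(argument_type(Argument("tensors", "Tensor[]"), structured_type_override=True))
# const ITensorList&
```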
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73350
Approved by: https://github.com/bdhirsh
2022-09-26 21:46:38 +00:00
Edward Z. Yang
4c01c51266 Symintifying slice ops (#85196)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85196
Approved by: https://github.com/ezyang
2022-09-23 22:01:32 +00:00
Mikayla Gawarecki
77f1f98479 Re-introduce torch.Tensor.to_padded_tensor (#85293)
Differential Revision: [D39629004](https://our.internmc.facebook.com/intern/diff/D39629004)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85293
Approved by: https://github.com/cpuhrsch
2022-09-21 18:45:56 +00:00
Edward Z. Yang
3eb27229dd as_strided symbolic support (#85264)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D39662820](https://our.internmc.facebook.com/intern/diff/D39662820)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85264
Approved by: https://github.com/wconstab
2022-09-21 13:34:55 +00:00
Benoit Steiner
86d8c61c7c Revert D39583438: Multisect successfully blamed D39583438 for test or build failures (#85277)
Summary:
This diff is reverting D39583438
D39583438 has been identified to be causing the following test or build failures:
Tests affected:
- https://www.internalfb.com/intern/test/281475048999851/

Here's the Multisect link:
https://www.internalfb.com/intern/testinfra/multisect/1260522
Here are the tasks that are relevant to this breakage:
T124797105: 18 tests started failing for employee benoitsteiner in the last 2 weeks
We're generating a revert to back out the changes in this diff; please note the backout may land if someone accepts it.

Test Plan: NA

Reviewed By: benoitsteiner

Differential Revision: D39599694

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85277
Approved by: https://github.com/dagitses
2022-09-20 15:38:58 +00:00
kshitij12345
a4dca9822d [composite compliance] prod (#81969)
Ref: #69991

Also fixes #82644 (fix similar to #81617)

For CompositeCompliance, we can't use `item` to choose a special fast path when the Tensor is a subclass. Instead, we always dispatch to the slower but safer implementation.
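For intuition, an illustrative snippet (not the kernel code itself): the gradient of `prod` is well defined even with zeros present, so the safe path gives correct results without peeking at the data via `item`:

```python
import torch

x = torch.tensor([2., 0., 3.], requires_grad=True)
# A data-dependent fast path like `if not (x == 0).any().item(): ...` forces
# a sync and reads data, which a wrapper subclass (e.g. a BatchedTensor)
# cannot honestly answer. The general path handles zeros correctly:
x.prod().backward()
print(x.grad)  # tensor([0., 6., 0.]): correct even with a zero present
```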
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81969
Approved by: https://github.com/zou3519
2022-09-20 08:03:36 +00:00
Thomas Viehmann
e41d758e26 Handle implicit real->complex casting for backward of stack (#84993)
Fixes: #75852

P.S.: Yay for the PyTorch foundation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84993
Approved by: https://github.com/soulitzer
2022-09-19 21:20:34 +00:00
lezcano
d710c95cc0 Implement forward AD for scatter_reduce (#85000)
I left the case `reduction="prod"` for future work as it's a bit of a pain.
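A usage sketch of the supported case, assuming the public `torch.autograd.forward_ad` API and `reduce="sum"`:

```python
import torch
import torch.autograd.forward_ad as fwAD

x = torch.zeros(3)
tx = torch.randn(3)                 # tangent (perturbation direction) for x
index = torch.tensor([0, 1, 0, 2])
src = torch.randn(4)

with fwAD.dual_level():
    dual = fwAD.make_dual(x, tx)
    out = dual.scatter_reduce(0, index, src, reduce="sum")
    _, tangent = fwAD.unpack_dual(out)

# src carries no tangent, so for reduce="sum" the output tangent equals tx
print(torch.allclose(tangent, tx))  # True
```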
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85000
Approved by: https://github.com/soulitzer
2022-09-16 17:45:07 +00:00
Elias Ellison
54c9c4e73d Flip fake tensors on in aot autograd (#84968)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84968
Approved by: https://github.com/Chillee
2022-09-16 15:27:48 +00:00
Pearu Peterson
a225f3cfce torch.zero_ on a sparse compressed tensor resets nnz to 0 (#85030)
Fixes #84997 and #82683
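A minimal illustration of the fixed behavior (assuming `_nnz()` on sparse compressed tensors, as the title suggests):

```python
import torch

x = torch.tensor([[0., 1.], [2., 0.]]).to_sparse_csr()
print(x._nnz())  # 2
x.zero_()
print(x._nnz())  # 0 with this fix; previously the values were zeroed
                 # in place but nnz stayed at 2
```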

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85030
Approved by: https://github.com/cpuhrsch
2022-09-15 18:42:38 +00:00
Richard Zou
3a107bc9be [functorch] fix vmapvjpvjp test for prelu (#84939)
Turns out this is just a composite compliance issue. Branching on
whether something requires grad can lead to incorrect gradients if we
have a BatchedTensor wrapping a tensor that requires grad.

Test Plan:
- tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84939
Approved by: https://github.com/soulitzer
2022-09-15 00:36:30 +00:00
Mikayla Gawarecki
e217b30b0f Add torch.nested namespace (#84102)
First step towards #83775
- only `to_padded_tensor` is moved to the nested namespace for now
- following the schema used for `special`, `fft`, `linalg` and other namespaces, nested functions are registered in native_functions.yaml as `nested_{function_name}` and are bound to the desired Python name in
`torch/nested/__init__.py`, and the desired C++ name in `torch/csrc/api/include/torch/nested.h`.
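A usage sketch of the new namespace; hedged, since it is written against today's API (`torch.nested.nested_tensor` as the constructor, whereas this PR only moves `to_padded_tensor`):

```python
import torch

nt = torch.nested.nested_tensor([torch.randn(2, 3), torch.randn(4, 3)])
padded = torch.nested.to_padded_tensor(nt, padding=0.0)
print(padded.shape)  # torch.Size([2, 4, 3]): ragged dim padded to 4 with 0.0
```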

~~**Question**: should we keep the documentation for `Tensor.to_padded_tensor`, or can it be deleted since it is shared by `torch.nested.to_padded_tensor`?~~

[generated nested docs](https://docs-preview.pytorch.org/84102/nested.html?highlight=nested#module-torch.nested)

Differential Revision: [D39361148](https://our.internmc.facebook.com/intern/diff/D39361148)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84102
Approved by: https://github.com/drisspg
2022-09-12 16:31:05 +00:00
Ivan Yashchuk
01c54ad6de Remove deprecated torch.eig (#70982)
The time has come to remove deprecated linear-algebra-related functions. This PR removes `torch.eig`.

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
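For anyone migrating, the replacement is `torch.linalg.eig`:

```python
import torch

A = torch.randn(3, 3)
# torch.eig(A, eigenvectors=True) is gone; torch.linalg.eig returns proper
# complex eigenpairs instead of torch.eig's packed real/imag columns.
eigvalues, eigvectors = torch.linalg.eig(A)
print(eigvalues.dtype)  # torch.complex64
```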
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70982
Approved by: https://github.com/Lezcano, https://github.com/malfet
2022-09-09 21:31:57 +00:00
nikitaved
3eb16509c7 optimize householder product backward to be more memory-efficient (#84627)
A follow-up on discussions in https://github.com/pytorch/pytorch/pull/84180.
Makes the backward more memory-efficient and reduces the number of kernel calls.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84627
Approved by: https://github.com/kshitij12345, https://github.com/zou3519
2022-09-07 15:29:47 +00:00
kshitij12345
07d398fb26 [composite compliance] linalg_householder_product (#84180)
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84180
Approved by: https://github.com/zou3519
2022-09-07 09:33:37 +00:00
kshitij12345
65ea3d0621 [composite compliance] cov, corrcoef (#82954)
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82954
Approved by: https://github.com/zou3519
2022-08-26 15:14:37 +00:00
Mario Lezcano
3e6e0a1d10 Support a stable double backward on linalg.det for real inputs (#80217)
The complex case still fails. I do not know why.

Fixes https://github.com/pytorch/pytorch/issues/62327
Fixes https://github.com/pytorch/pytorch/issues/53364
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80217
Approved by: https://github.com/nikitaved, https://github.com/albanD, https://github.com/malfet
2022-08-24 15:18:56 +00:00
Mario Lezcano
aad89bb771 Make the derivative of masked_fill more efficient (#83515)
There's no need to add all the zeros if we extract all the non-zero
elements.
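Illustrative of the idea (a sketch, not the actual derivative code): the gradient with respect to the fill value only needs the selected elements.

```python
import torch

grad = torch.randn(5)
mask = torch.tensor([True, False, True, False, False])

# Summing a tensor that is zero everywhere outside the mask...
slow = torch.where(mask, grad, torch.zeros_like(grad)).sum()
# ...versus extracting only the selected elements and summing those:
fast = grad.masked_select(mask).sum()
print(torch.allclose(slow, fast))  # True
```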
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83515
Approved by: https://github.com/albanD, https://github.com/soulitzer
2022-08-18 13:00:12 +00:00
Kurt Mohler
be5b3df6cc Update std_mean/var_mean/nanmean/nansum signatures with int[1]? dim (#82912)
### Description
Change the type of the `dim` arg for `std_mean/var_mean/nanmean/nansum` to `int[1]?` in `native_functions.yaml`
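Illustratively (assuming current behavior), `int[1]?` means `dim` may be omitted, given as a single int, or given as a list:

```python
import torch

x = torch.randn(3, 4)
torch.nansum(x)           # dim omitted / None: reduce over all elements
torch.nansum(x, dim=1)    # a single int is accepted
torch.nansum(x, dim=[1])  # ...and so is a one-element list
```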

### Issue
Part of #29137

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82912
Approved by: https://github.com/albanD
2022-08-10 16:58:26 +00:00
kshitij12345
10e7a25488 [composite compliance] eig_backward (#82957)
Ref #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82957
Approved by: https://github.com/zou3519
2022-08-08 15:18:48 +00:00
Kurt Mohler
2bfae07a79 Enable dim=None for torch.mean (#81286)
Part of #79525

This will require coordination with XLA before merging, just like #79881
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81286
Approved by: https://github.com/albanD
2022-07-28 22:34:56 +00:00
Nikolay Korovaiko
d2c47d559c Revert "Revert "Enabling SymInt in autograd; take 3 (#81145)"" ; make sure is_intlist checks for symintnodes (#82189)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82189
Approved by: https://github.com/ezyang
2022-07-26 20:47:11 +00:00
lezcano
11fe277b62 [PrimTorch] Add reference for torch.norm (#81765)
This ref does more than `torch.norm`, and it fixes a few bugs
that `torch.norm` has. This implementation and the `torch.norm`
implementation are reconciled in the next PR of this stack.

We put this PR first, as otherwise `test_decomp.py` was failing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81765
Approved by: https://github.com/ngimel
2022-07-25 19:57:21 +00:00
Kshiteej K
db0e121b46 [composite compliance] put, take (#81094)
Reference: #69991

This PR makes `put` CompositeExplicit as it is implemented in terms of `put_` (for which we can't handle Composite Compliance at the implementation level).

Ref (put implementation)
478081c698/aten/src/ATen/native/TensorAdvancedIndexing.cpp (L619-L621)

Also, we update the `take` gradient formula to handle Tensor subclasses.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81094
Approved by: https://github.com/zou3519
2022-07-25 15:05:16 +00:00
kshitij12345
5880a66758 [composite compliance] matrix_exp (#81225)
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81225
Approved by: https://github.com/zou3519
2022-07-25 11:11:29 +00:00
PyTorch MergeBot
c078476eb0 Revert "Enabling SymInt in autograd; take 3 (#81145)"
This reverts commit 032facd6e6.

Reverted https://github.com/pytorch/pytorch/pull/81145 on behalf of https://github.com/jeanschmidt due to breaking internal builds
2022-07-22 11:15:20 +00:00
Nikolay Korovaiko
032facd6e6 Enabling SymInt in autograd; take 3 (#81145)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81145
Approved by: https://github.com/ezyang
2022-07-22 00:14:50 +00:00
Edward Z. Yang
84c8a9f88e Use slow but safe formula for prod_backward (#81617)
prod performs a sync to test for zeros as the formula is substantially
simpler if there are no zeros, but this doesn't work for meta tensors.
The double backwards formula works great in all cases though!
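A self-contained sketch of what a zero-safe formula can look like (illustrative; not necessarily the exact formula this PR lands):

```python
import torch

def grad_prod_safe(x: torch.Tensor, dim: int) -> torch.Tensor:
    # d prod / d x_i is the product of all other entries along dim. Compute it
    # without dividing by x_i by multiplying the exclusive cumulative products
    # taken from the left and from the right: no data-dependent branch,
    # no sync, and it works with zeros present.
    n = x.size(dim)
    ones = torch.ones_like(x.narrow(dim, 0, 1))
    left = torch.cumprod(torch.cat([ones, x.narrow(dim, 0, n - 1)], dim), dim)
    xf = x.flip(dim)
    right = torch.cumprod(torch.cat([ones, xf.narrow(dim, 0, n - 1)], dim), dim).flip(dim)
    return left * right

print(grad_prod_safe(torch.tensor([2., 0., 3.]), 0))  # tensor([0., 6., 0.])
```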

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81617
Approved by: https://github.com/soulitzer
2022-07-18 18:45:32 +00:00
PyTorch MergeBot
4963adcc8d Revert "[composite compliance] matrix_exp (#81225)"
This reverts commit 367c695237.

Reverted https://github.com/pytorch/pytorch/pull/81225 on behalf of https://github.com/clee2000 due to broke functorch https://github.com/pytorch/pytorch/runs/7345901504?check_suite_focus=true
2022-07-14 19:53:51 +00:00
kshitij12345
367c695237 [composite compliance] matrix_exp (#81225)
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81225
Approved by: https://github.com/zou3519
2022-07-14 18:19:11 +00:00
lezcano
b5b9db9f84 Make kl_div a composite function. (#80334)
Benchmarks: https://github.com/pytorch/pytorch/pull/80334#issuecomment-1167229285

Fixes https://github.com/pytorch/pytorch/issues/80158
Fixes https://github.com/pytorch/pytorch/issues/78867
Fixes https://github.com/pytorch/pytorch/issues/69230

Supersedes https://github.com/pytorch/pytorch/pull/79007
Supersedes https://github.com/pytorch/pytorch/pull/69212
Supersedes https://github.com/pytorch/pytorch/pull/19659
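A sketch of the pointwise decomposition for `reduction="none"` (illustrative; the actual composite lives in ATen):

```python
import torch
import torch.nn.functional as F

def kl_div_pointwise(input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # `input` is expected in log-space; xlogy gives 0 * log(0) -> 0 for
    # zero targets, which keeps the decomposition well-behaved.
    return torch.xlogy(target, target) - target * input

x = torch.randn(3).log_softmax(0)
t = torch.randn(3).softmax(0)
print(torch.allclose(kl_div_pointwise(x, t), F.kl_div(x, t, reduction="none")))
```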
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80334
Approved by: https://github.com/ezyang
2022-07-13 20:07:36 +00:00
Kurt Mohler
23bdb570cf Reland: Enable dim=None for torch.sum (#79881)
Part of #29137

Reland of #75845
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79881
Approved by: https://github.com/albanD, https://github.com/kulinseth
2022-07-09 00:54:42 +00:00
PyTorch MergeBot
f2c8557521 Revert "Make kl_div a composite function. (#80334)"
This reverts commit 828c787ea9.

Reverted https://github.com/pytorch/pytorch/pull/80334 on behalf of https://github.com/ezyang due to doesn't work with xla
2022-07-06 17:51:06 +00:00
lezcano
828c787ea9 Make kl_div a composite function. (#80334)
Benchmarks: https://github.com/pytorch/pytorch/pull/80334#issuecomment-1167229285

Fixes https://github.com/pytorch/pytorch/issues/80158
Fixes https://github.com/pytorch/pytorch/issues/78867
Fixes https://github.com/pytorch/pytorch/issues/69230

Supersedes https://github.com/pytorch/pytorch/pull/79007
Supersedes https://github.com/pytorch/pytorch/pull/69212
Supersedes https://github.com/pytorch/pytorch/pull/19659
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80334
Approved by: https://github.com/ezyang
2022-07-04 19:33:43 +00:00
lezcano
37a5819665 Make slogdet, linalg.slogdet and logdet support meta tensors (#79742)
This PR also adds complex support for `logdet`, makes all these
functions support out=, and makes them composite, all depending on one
common function. We also improve the docs of all these functions.

We also use `linalg_lu_factor_ex` in these functions, so we remove the
synchronisation present before.
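A usage sketch of the meta-tensor support (hedged: written against today's `torch.linalg.slogdet`):

```python
import torch

# With meta-tensor support, shapes and dtypes propagate without real data:
A = torch.empty(4, 4, device="meta")
sign, logabsdet = torch.linalg.slogdet(A)
print(sign.shape, logabsdet.shape, sign.device)
# torch.Size([]) torch.Size([]) meta
```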
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79742
Approved by: https://github.com/IvanYashchuk, https://github.com/albanD
2022-07-01 16:09:21 +00:00
Hao Zhuang
0ca9888000 Correct the math of repeat_backward in the function comment (#80286)
Correct the math of repeat_backward in the function comment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80286
Approved by: https://github.com/albanD
2022-06-28 16:22:46 +00:00
lezcano
42a2359612 Add forward AD for linalg.det and simplify its backward (#79487)
This PR is in preparation for implementing `logdet` and `slogdet` as
structured kernels + implementing them with more efficient derivatives

We implement forward AD for det. We also simplify the implementation of
the backward, and leave a note on how to implement it properly for
singular matrices. We leave that for future work.

Note (by looking at the OpInfo) that the current implementation passes
the same tests as the one before. We skip the forward-over-backward in
the singular case, as that one was not working in the gradgrad case
either.
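A quick check of the new forward AD against Jacobi's formula, as a usage sketch via `torch.autograd.forward_ad`:

```python
import torch
import torch.autograd.forward_ad as fwAD

A = torch.randn(3, 3, dtype=torch.float64)
dA = torch.randn(3, 3, dtype=torch.float64)  # tangent

with fwAD.dual_level():
    dual = fwAD.make_dual(A, dA)
    _, tangent = fwAD.unpack_dual(torch.linalg.det(dual))

# Jacobi's formula: d det(A)[dA] = det(A) * tr(A^{-1} dA)
expected = torch.linalg.det(A) * torch.trace(torch.linalg.solve(A, dA))
print(torch.allclose(tangent, expected))  # True
```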
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79487
Approved by: https://github.com/nikitaved, https://github.com/albanD
2022-06-24 14:15:17 +00:00
lezcano
44ff6be35a Fix backward of binary_cross_entropy_with_logits
The previous PR in this stack uncovered an error in the forward over
backward for this function.

In this PR, we fix this error and we also fix the gradgrad
implementation (and make it more stable and faster using `logsigmoid`).
We also move the double backward for this function to `FunctionsManual`,
as there's no reason for it to be in `native_functions`.
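The `logsigmoid` stability trick, sketched (illustrative; the actual double-backward code lives in `FunctionsManual`):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-50.0, 0.0, 50.0])
# sigmoid'(x) = sigmoid(x) * sigmoid(-x); going through logsigmoid keeps the
# computation in log-space, avoiding catastrophic cancellation at the extremes.
naive = torch.sigmoid(x) * (1 - torch.sigmoid(x))
stable = torch.exp(F.logsigmoid(x) + F.logsigmoid(-x))
print(naive)   # the x = 50 entry collapses to 0, since 1 - sigmoid(50) cancels
print(stable)  # ~1.9e-22 at both extremes, preserved by the log-space route
```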

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80083

Approved by: https://github.com/zou3519
2022-06-23 01:31:08 +00:00
lezcano
f54e7b4ad6 More forward AD formulas
This PR:
- Corrects the forward AD formula of `torch.sgn`.
  - The reason why we can't use `auto_element_wise` for this operation is rather subtle. I left a comment.
  - This, in turn, fixes a problem we had in forward-over-backward for `linalg.svd` and other spectral decompositions (and `norm`, `linalg.norm`, `linalg.matrix_norm`) that were using `torch.abs` (whose derivative is given by `torch.sgn`).
- Implements the formulas for a number of missing operations: `nansum`, `amax`, `amin`, ...
- Simplifies a few formulas, most notably the forward AD for `div` and the derivatives of `norm`, `linalg.norm` and `vector_norm` for `ord=+-inf`.
- Corrects the formula for `mean`, `std_mean`, `var_mean` when `dim` is provided and equal to `()` (or `None`).
- Makes a few minor improvements to `sum_backward`, `unsqueeze_multiple` and the formulas depending on them.
- Fixes the derivatives of `std_mean` and `std_var` (complex support, ASAN, forward AD, ...).

Fixes: https://github.com/pytorch/pytorch/issues/67539
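A usage sketch for one of the newly covered ops, `nansum` (the masked tangent is the expected behavior, mirroring the backward; hedged rather than guaranteed):

```python
import torch
import torch.autograd.forward_ad as fwAD

x = torch.tensor([1.0, float("nan"), 3.0])
t = torch.ones(3)  # tangent

with fwAD.dual_level():
    dual = fwAD.make_dual(x, t)
    _, tangent = fwAD.unpack_dual(torch.nansum(dual))

# nan entries contribute nothing to nansum, so the expected tangent is 2.0
print(tangent)
```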

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80082

Approved by: https://github.com/zou3519
2022-06-23 01:31:08 +00:00
PyTorch MergeBot
e3d0a3ca88 Revert "More forward AD formulas"
This reverts commit 6b20ef6b91.

Reverted https://github.com/pytorch/pytorch/pull/77975 on behalf of https://github.com/janeyx99 due to I think this is the real culprit of the broken tests in 28a7ee8cec for the trunk-only slow test job
2022-06-22 19:30:02 +00:00
PyTorch MergeBot
942c371bbc Revert "Fix backward of binary_cross_entropy_with_logits"
This reverts commit 28a7ee8cec.

Reverted https://github.com/pytorch/pytorch/pull/79381 on behalf of https://github.com/janeyx99 due to Sorry, 28a7ee8cec this PR breaks trunk-only slow test job
2022-06-22 17:41:09 +00:00
lezcano
28a7ee8cec Fix backward of binary_cross_entropy_with_logits
The previous PR in this stack uncovered an error in the forward over
backward for this function.

In this PR, we fix this error and we also fix the gradgrad
implementation (and make it more stable and faster using `logsigmoid`).
We also move the double backward for this function to `FunctionsManual`,
as there's no reason for it to be in `native_functions`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79381

Approved by: https://github.com/soulitzer
2022-06-22 14:28:56 +00:00
lezcano
6b20ef6b91 More forward AD formulas
This PR:
- Corrects the forward AD formula of `torch.sgn`.
  - The reason why we can't use `auto_element_wise` for this operation is rather subtle. I left a comment.
  - This, in turn, fixes a problem we had in forward-over-backward for `linalg.svd` and other spectral decompositions (and `norm`, `linalg.norm`, `linalg.matrix_norm`) that were using `torch.abs` (whose derivative is given by `torch.sgn`).
- Implements the formulas for a number of missing operations: `nansum`, `amax`, `amin`, ...
- Simplifies a few formulas, most notably the forward AD for `div` and the derivatives of `norm`, `linalg.norm` and `vector_norm` for `ord=+-inf`.
- Corrects the formula for `mean`, `std_mean`, `var_mean` when `dim` is provided and equal to `()` (or `None`).
- Makes a few minor improvements to `sum_backward`, `unsqueeze_multiple` and the formulas depending on them.
- Fixes the derivatives of `std_mean` and `std_var` (complex support, ASAN, forward AD, ...).

Fixes: https://github.com/pytorch/pytorch/issues/67539

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77975

Approved by: https://github.com/soulitzer
2022-06-22 14:28:56 +00:00
Driss Guessous
a098937c20 Add factory function derivatives (#79872)
Adding derivatives for factory functions; tracking issue: #79044

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79872
Approved by: https://github.com/cpuhrsch, https://github.com/soulitzer
2022-06-21 00:53:11 +00:00
lezcano
16f30b494c Make l1_loss composite
Fixing the forward AD for `sgn` in the next PR of this stack uncovered a
number of issues with the derivatives of `l1_loss`. Upon inspection,
`l1_loss` was just implemented as a composite function, but it was not
differentiable. This PR makes it a fully differentiable function.

As a side note, `l1_loss_out` was incorrect in a number of ways. Even
more, it is not exposed to the public, as `F.l1_loss` does not accept an
`out=` parameter. As such, it is not even tested. I wonder how useful it
is to have `out=` variants for loss functions if we don't expose them at
all. Even more, I wonder how useful it is to have `_out` variants for loss
functions, given that their most common use case is to return just a
real number. cc jbschlosser
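The decomposition in question, sketched (illustrative of the composite approach, not the exact ATen code):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, requires_grad=True)
y = torch.randn(4)
# Built entirely from differentiable primitives, so forward AD, double
# backward, etc. all come for free:
loss = (x - y).abs().mean()
print(torch.allclose(loss, F.l1_loss(x, y)))  # True
```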

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79804

Approved by: https://github.com/zou3519, https://github.com/malfet
2022-06-20 19:10:54 +00:00
PyTorch MergeBot
d4a9438786 Revert "Make l1_loss composite"
This reverts commit 61a5c779bf.

Reverted https://github.com/pytorch/pytorch/pull/78257 on behalf of https://github.com/malfet due to This breaks executorch
2022-06-17 18:14:21 +00:00
Kshiteej K
04b98df87a [fix] composite compliance: eig, eigh, symeig (#79698)
Ref: https://github.com/pytorch/pytorch/issues/69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79698
Approved by: https://github.com/Lezcano, https://github.com/albanD
2022-06-17 14:13:04 +00:00
PyTorch MergeBot
ee6ebfc06b Revert "Enable dim=None for torch.sum (#75845)"
This reverts commit e79a51f7db.

Reverted https://github.com/pytorch/pytorch/pull/75845 on behalf of https://github.com/malfet due to Breaks MacOS builds, see e79a51f7db
2022-06-16 22:01:41 +00:00
Kurt Mohler
e79a51f7db Enable dim=None for torch.sum (#75845)
Part of #29137

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75845
Approved by: https://github.com/ezyang
2022-06-16 20:17:07 +00:00
lezcano
61a5c779bf Make l1_loss composite
Fixing the forward AD for `sgn` in the next PR of this stack uncovered a
number of issues with the derivatives of `l1_loss`. Upon inspection,
`l1_loss` was just implemented as a composite function, but it was not
differentiable. This PR makes it a fully differentiable function.

As a side note, `l1_loss_out` was incorrect in a number of ways. Even
more, it is not exposed to the public, as `F.l1_loss` does not accept an
`out=` parameter. As such, it is not even tested. I wonder how useful it
is to have `out=` variants for loss functions if we don't expose them at
all. Even more, I wonder how useful it is to have `_out` variants for loss
functions, given that their most common use case is to return just a
real number. cc jbschlosser

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78257

Approved by: https://github.com/jbschlosser
2022-06-16 00:03:22 +00:00