Commit Graph

335 Commits

Author SHA1 Message Date
Ivan Yashchuk
01c54ad6de Remove deprecated torch.eig (#70982)
The time has come to remove deprecated linear algebra functions. This PR removes `torch.eig`.
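
A hedged migration sketch (not part of the original commit message, and assuming a torch build where the removal has landed): the documented replacement is `torch.linalg.eig`, which returns complex tensors rather than the old stacked real/imaginary layout.

```python
import torch

A = torch.randn(3, 3)

# Old, removed API:
#   w, v = torch.eig(A, eigenvectors=True)  # w: (n, 2) real/imag columns
# Replacement:
w, v = torch.linalg.eig(A)  # w: complex eigenvalues, v: complex eigenvectors
```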

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70982
Approved by: https://github.com/Lezcano, https://github.com/malfet
2022-09-09 21:31:57 +00:00
Edward Z. Yang
591b75bf98 Redo how custom/python_custom methods on TensorImpl work (#84641)
A longstanding confusion in the implementation of fake tensor and proxy tensor is what to do about torch.ops.aten.sym_sizes and related calls. In particular, when you have a tensor that (1) has symbolic shapes and (2) has a `__torch_dispatch__` call, previously you would always get `__torch_dispatch__` calls for sizes/strides queries, *even if you didn't request it* via the dispatch kwargs in `make_wrapper_subclass`.

The reason for this is because we were previously mixing several concepts: "I want to dispatch to Python", "I want to call a virtual method" and "I have dynamic shapes". A single boolean variable controlled all of these things, and so it was not possible to understand inside TensorImpl what the user had actually originally requested.

In this PR, we track each of these concepts individually so that we can preserve user intent. Then, we combine these into a single "policy" variable that controls whether we can use the fastpath. For the policy to trigger, we only need one of the exceptional cases to be true.
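
To illustrate the user-facing ask, here is a sketch under assumptions: the keyword name and accepted value below are inferred from the `make_wrapper_subclass` dispatch kwargs this PR builds on and may not match the final spelling.

```python
import torch

class QueryLoggingTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, elem):
        return torch.Tensor._make_wrapper_subclass(
            cls, elem.size(), strides=elem.stride(),
            dtype=elem.dtype, device=elem.device,
            # Explicitly opt in to __torch_dispatch__ for size queries.
            # Leaving this unset should keep size/stride calls on the
            # C++ fastpath, per the policy described above.
            dispatch_sizes_strides_policy="sizes",
        )

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        print(f"dispatched: {func}")
        ...  # sketch only; a real subclass would redispatch to an inner tensor
```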

Billing of changes:
* Rename `set_sizes_strides_policy` to `set_custom_sizes_strides`; in general, you cannot DIRECTLY set policy; you have to set it indirectly via the public functions.
* Some helpers for sizes and strides, since their handling is more complicated (an enum, rather than just bools as for device and layout). `matches_python_custom` tests the user's Python-dispatch ask; `matches_policy` does the policy test (only used in the user-facing functions).
* I reorganized the accessor methods so that they are grouped more logically. This makes the diff hard to read, so I recommend reading the final code directly.
* The default custom implementations now more reliably call their default() implementations
* As bonus refactor, I devirtualized some functions that don't need to be virtual
* `set_sym_sizes_and_strides` is renamed to `set_sizes_and_strides` to make it easier to use in template contexts; it optionally takes a storage offset now so you can set all three values at the same time. If you use the SymInt overload but there are no symbolic integers, we give you a normal resize.
* This adds `sym_storage_offset` since we had that in the symbolic shapes branch and there's no reason not to put it in (and it reduces merge conflicts)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84641
Approved by: https://github.com/wconstab
2022-09-09 13:41:13 +00:00
kshitij12345
eddc2370ec [functorch] vmapvjpvjp (re-enable test with skips and xfails) (#83999)
Enable `vmapvjpvjp` test and add relevant skips and xfails.
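For orientation, the composition under test looks roughly like this sketch (`f` and the shapes are illustrative, not from the test itself):

```python
import torch
from functorch import vmap, vjp

def f(x):
    return x.sin()

def vjp_of_vjp(x, v1, v2):
    # Reverse-mode through reverse-mode: differentiate the vjp of f itself.
    _, outer_vjp = vjp(lambda t: vjp(f, t)[1](v1)[0], x)
    return outer_vjp(v2)[0]

x, v1, v2 = (torch.randn(5, 3) for _ in range(3))
out = vmap(vjp_of_vjp)(x, v1, v2)  # finally, vmap everything over dim 0
```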
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83999
Approved by: https://github.com/zou3519
2022-09-08 13:35:19 +00:00
PyTorch MergeBot
76fc690522 Revert "[functorch] vmapvjpvjp (re-enable test with skips and xfails) (#83999)"
This reverts commit 9addeccb6b.

Reverted https://github.com/pytorch/pytorch/pull/83999 on behalf of https://github.com/kshitij12345 due to Broke trunk
2022-09-08 10:44:37 +00:00
kshitij12345
9addeccb6b [functorch] vmapvjpvjp (re-enable test with skips and xfails) (#83999)
Enable `vmapvjpvjp` test and add relevant skips and xfails.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83999
Approved by: https://github.com/zou3519
2022-09-08 06:23:12 +00:00
samdow
29672b2136 [functorch] add pinv batch rule (#83761)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83761
Approved by: https://github.com/zou3519
2022-09-07 23:15:23 +00:00
PyTorch MergeBot
acb4a09628 Revert "Call jit decomposition in VariableType to increase forward AD coverage (#84151)"
This reverts commit 42d99e6f19.

Reverted https://github.com/pytorch/pytorch/pull/84151 on behalf of https://github.com/malfet due to Regressed test_jvpvjp_nn_functional_layer_norm_cuda_float32, see 42d99e6f19
2022-09-07 18:02:27 +00:00
soulitzer
42d99e6f19 Call jit decomposition in VariableType to increase forward AD coverage (#84151)
This PR:
- updates forward AD codegen in core to generate code that tries calling into decompositions registered to jit when
   - (1) the function is not in-place or out variant
   - AND (2) the function is differentiable (requires_derivative=True)
   - AND (3) there are no forward AD formulas registered
   - To simplify things, we always generate the if/else (as long as (1) is true), but generate 'false' when either (2) or (3) is false.
 - removes the mechanism from functorch
    - (follow up) some functorch tests should be updated here so they no longer have to compute the Jacobian with vjp
  - factors out some logic to generate the any_has_forward_grad condition
     - (bc-breaking) when TensorList inputs unexpectedly have forward grad, the error will no longer contain the name

See https://github.com/pytorch/pytorch/pull/84151#issuecomment-1238519247 for codegen output and more discussion.
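
As a rough, user-level illustration (a sketch, not the codegen output; `layer_norm` is picked only because it appears in the related test names):

```python
import torch
import torch.autograd.forward_ad as fwAD

x, tangent = torch.randn(2, 3), torch.randn(2, 3)
with fwAD.dual_level():
    dual = fwAD.make_dual(x, tangent)
    # An op lacking a handwritten forward-AD formula may now fall back to
    # a decomposition registered with jit instead of raising.
    out = torch.nn.functional.layer_norm(dual, (3,))
    primal, out_tangent = fwAD.unpack_dual(out)
```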
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84151
Approved by: https://github.com/samdow, https://github.com/albanD, https://github.com/zou3519
2022-09-07 15:31:46 +00:00
kshitij12345
07d398fb26 [composite compliance] linalg_householder_product (#84180)
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84180
Approved by: https://github.com/zou3519
2022-09-07 09:33:37 +00:00
Ivan Yashchuk
65e887c041 Remove unnecessary copy from torch._refs.to, add OpInfo for torch.Tensor.to (#84270)
This PR removes unnecessary copy from `torch._refs.to`, adds OpInfo for `torch.Tensor.to`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84270
Approved by: https://github.com/ngimel
2022-09-01 07:18:42 +00:00
lezcano
0bdcfcb840 Strengthen preconditions of linalg.cross (#83798)
This makes `linalg.cross` array API compliant (https://github.com/data-apis/array-api/issues/415) and fixes a few bugs.

Fixes https://github.com/pytorch/pytorch/issues/77629
Fixes https://github.com/pytorch/pytorch/issues/83756
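
A sketch of the stricter behavior, as implied by the array API reference (shapes illustrative):

```python
import torch

a, b = torch.randn(4, 3), torch.randn(4, 3)
torch.linalg.cross(a, b, dim=-1)  # OK: both inputs have size 3 along dim

# Inputs without size 3 along `dim` are now rejected up front:
# torch.linalg.cross(torch.randn(4, 2), torch.randn(4, 2))  # raises
```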
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83798
Approved by: https://github.com/mruberry
2022-08-24 15:17:12 +00:00
PyTorch MergeBot
bbe803cb35 Revert "Strengthen preconditions of linalg.cross (#83798)"
This reverts commit 7f0198e739.

Reverted https://github.com/pytorch/pytorch/pull/83798 on behalf of https://github.com/janeyx99 due to Sorry, land race caused functorch issues 7f0198e739
2022-08-23 19:36:43 +00:00
lezcano
7f0198e739 Strengthen preconditions of linalg.cross (#83798)
This makes `linalg.cross` array API compliant (https://github.com/data-apis/array-api/issues/415) and fixes a few bugs.

Fixes https://github.com/pytorch/pytorch/issues/77629
Fixes https://github.com/pytorch/pytorch/issues/83756
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83798
Approved by: https://github.com/mruberry
2022-08-23 18:06:51 +00:00
samdow
df048414e0 [functorch] add linalg cross batch rule (#83759)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83759
Approved by: https://github.com/zou3519
2022-08-23 16:57:38 +00:00
samdow
e2e71c1f4c [functorch] add linalg solve batch rule (#82814)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82814
Approved by: https://github.com/zou3519
2022-08-18 22:12:19 +00:00
Richard Zou
d84dc589c2 [functorch] relax as_strided batching rule (#83597)
Previously there was a constraint that the bdim was required to be at
the front. As I noted in the comment in the code that I wrote years ago,
this is not necessary for correctness; we were just guarding against
potentially incorrect behavior and assumed most people would not vmap
over dimensions other than 0.

Now, the above assumption did not age very well, because we have batch
rules that return a BatchedTensor where the bdim is something other than
0 (e.g. convolution batch rule).

This PR deletes the check for that assumption and adds additional manual
tests that the as_strided batching rule works when one vmaps over a dimension
other than 0.

Automatic tests don't exist because it's a bit hard to get the
test_vmap_exhaustive test runner to replicate the strides of the inputs
faithfully.
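
A hedged sketch of the newly allowed pattern (shapes, strides, and the helper are illustrative; in practice a non-front bdim more often comes from an intermediate batch rule such as convolution's than from `in_dims`):

```python
import torch
from functorch import vmap

x = torch.randn(3, 5)  # vmap over dim 1, so the bdim is not at the front

def column_pair(t):
    # Under vmap, as_strided sizes/strides are interpreted relative to the
    # underlying (unbatched) storage of x, so stride 5 steps down a column.
    return t.as_strided((2,), (5,))

out = vmap(column_pair, in_dims=1)(x)  # previously hit the front-bdim check
```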

Test Plan:
- wait for tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83597
Approved by: https://github.com/samdow
2022-08-18 19:17:47 +00:00
Richard Zou
69728d7dd9 [functorch] annotate test_jvpvjp (#83530)
Most of these are just "forward-mode AD formula not implemented"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83530
Approved by: https://github.com/samdow
2022-08-18 19:17:46 +00:00
Richard Zou
7e7afcabe7 [functorch] classify some more test failures (#83520)
Classifies test failures for test_vmapvjp and test_vmapjvpall

Test Plan:
- tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83520
Approved by: https://github.com/samdow
2022-08-16 18:11:12 +00:00
Richard Zou
52b8a58197 [functorch] audit skips and xfails for vjp tests (#83518)
Went through test_vjp, test_grad, test_vjpvjp
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83518
Approved by: https://github.com/samdow
2022-08-16 18:11:12 +00:00
Edward Z. Yang
cf4fb5a631 Make test_jvpvjp_as_strided_scatter skipped due to flaky (#83516)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83516
Approved by: https://github.com/zou3519
2022-08-16 15:36:47 +00:00
Horace He
f77adb71cb made some minor refactoring of minifier (#83439)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83439
Approved by: https://github.com/ezyang
2022-08-16 09:30:42 +00:00
Richard Zou
60295e3abd [functorch] Delete functorch_lagging_op_db (#83418)
No need to have a lagging op db because there are no more sync issues
between functorch and pytorch. If someone adds a new OpInfo, then we
should explicitly check if we support it or not.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83418
Approved by: https://github.com/samdow
2022-08-15 19:23:03 +00:00
Richard Zou
b99f972e07 [functorch] update functorch lagging db (#83346)
I'm planning on removing the functorch lagging op db because it doesn't
make sense now that functorch is part of PyTorch. Before that happens,
this PR updates it, and a future PR will delete it.

Test Plan:
- wait for tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83346
Approved by: https://github.com/samdow
2022-08-13 01:56:01 +00:00
Richard Zou
f8c408b79a [functorch] vjpvjp inplace testing (#83119)
Test Plan:
- run tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83119
Approved by: https://github.com/Chillee
2022-08-11 13:36:22 +00:00
Richard Zou
ffc4a50259 [functorch] in-place testing for test_vjp (#83114)
This is relatively simple; we just test that `input.clone().inplace_(...)`
gives us the correct gradients while ignoring incompatible sample
inputs.
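
Roughly, the check amounts to something like this sketch (op and shapes illustrative):

```python
import torch
from functorch import vjp

x = torch.randn(3)
# The clone-then-mutate variant should give the same gradient as the
# out-of-place op.
out, vjp_fn = vjp(lambda t: t.clone().sin_(), x)
(grad,) = vjp_fn(torch.ones_like(x))
assert torch.allclose(grad, x.cos())  # d/dx sin(x) = cos(x)
```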

Test Plan:
- wait for tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83114
Approved by: https://github.com/Chillee
2022-08-11 13:36:21 +00:00
Richard Zou
3dc402fd1e [functorch] in-place jvp testing (#83077)
Testing code is starting to trend toward entropy.
Not sure if the in-place tests are actually "necessary" but better safe
than sorry.

Test Plan:
- run tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83077
Approved by: https://github.com/Chillee
2022-08-11 13:36:18 +00:00
Richard Zou
3aeb5e4ff9 [functorch] remove some testing hacks (#83079)
I'm not sure why I called this a hack in the first place (perhaps I
wanted to use tree_map and pytrees didn't support namedtuples?). This PR
deletes some comments and the conversion from namedtuple -> tuple
(because that is unnecessary).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83079
Approved by: https://github.com/Chillee
2022-08-11 03:00:55 +00:00
Richard Zou
810884411d [functorch] transpose_, t_ and aliases thereof batching rules (#82903)
Test Plan:
- run tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82903
Approved by: https://github.com/Chillee
2022-08-08 19:52:28 +00:00
samdow
ae399d009f [functorch] Add a bunch of low hanging fruit linalg batch rules (#82177)
This gets ~9 batching rules using only decomps and existing macros, so I stuck them all together; happy to break it up if there's a more logical way.

Also covers slogdet and the solve ops for https://github.com/pytorch/functorch/issues/984. I haven't profiled locally, but I will.
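
One of the new rules in action, as a minimal sketch (note `slogdet` also accepts batched inputs natively; vmap here just exercises the batch rule):

```python
import torch
from functorch import vmap

A = torch.randn(4, 3, 3)  # a batch of square matrices
sign, logabsdet = vmap(torch.linalg.slogdet)(A)
```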
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82177
Approved by: https://github.com/zou3519
2022-08-04 22:32:07 +00:00
samdow
fbbd036871 [Reland] [functorch] Fix linalg batch rules to error on non-matrix inputs (#82176)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82176
Approved by: https://github.com/zou3519
2022-08-03 16:46:28 +00:00
PyTorch MergeBot
31292599eb Revert "[functorch] Fix linalg batch rules to error on non-matrix inputs (#82176)"
This reverts commit 1dfcad84aa.

Reverted https://github.com/pytorch/pytorch/pull/82176 on behalf of https://github.com/zengk95 due to This looks like it's breaking functorch tests on master
2022-08-02 19:34:00 +00:00
samdow
1dfcad84aa [functorch] Fix linalg batch rules to error on non-matrix inputs (#82176)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82176
Approved by: https://github.com/zou3519
2022-08-02 13:59:11 +00:00
Kshiteej K
db0e121b46 [composite compliance] put, take (#81094)
Reference: #69991

This PR makes `put` CompositeExplicit as it is implemented in terms of `put_` (for which we can't handle Composite Compliance at the implementation level).

Ref (put implementation)
478081c698/aten/src/ATen/native/TensorAdvancedIndexing.cpp (L619-L621)

Also, we update the `take` gradient formula to handle Tensor Subclasses.
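
For orientation, both ops index into the flattened tensor (a sketch; `put` is the out-of-place analogue of `put_`):

```python
import torch

x = torch.randn(2, 3)
idx = torch.tensor([0, 4])
vals = torch.take(x, idx)    # gathers x.flatten()[0] and x.flatten()[4]

y = x.clone()
y.put_(idx, torch.zeros(2))  # in-place scatter at the same flat indices
```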
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81094
Approved by: https://github.com/zou3519
2022-07-25 15:05:16 +00:00
Richard Zou
68bd687297 Add more functorch shards to PR CI (#82013)
This includes a configuration for linux CUDA, which will give us enough
test coverage for functorch to confidently begin accepting PRs to it again.

NB: It previously turned out that some tests were not being skipped, even
though we had added a skip decorator.

Test Plan:
- wait for CI
- check that the tests being skipped with a skip decorator are actually
skipped via reading test logs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82013
Approved by: https://github.com/janeyx99
2022-07-25 14:23:18 +00:00
Richard Zou
5f4e8c0a4d Add ability to run functorch tests via run_test.py (#82012)
This PR:
- adds the ability to run functorch tests via run_test.py
- changes the functorch shards in PyTorch CI to invoke functorch tests
via run_test.py

The main motivation for this is so that functorch tests hook into the
standard PyTorch test infrastructure.

Questions for reviewers:
- the functorch tests are located outside of the pytorch/test folder
(they're in the pytorch/functorch/test folder). Is this OK? (run_test.py
works locally for me).

Test Plan:

- checked that `python run_test.py --functorch` ran functorch tests
locally
- Local mock test: added `{"test_compilation_for_dynamic_shape
(__main__.TestCompileCache)":
["https://github.com/pytorch/pytorch/issues/82016", ["linux"]]}` to .pytorch-disabled-tests.json, ran functorch tests, verified that the test was skipped.
- Wait for CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82012
Approved by: https://github.com/janeyx99
2022-07-25 14:23:18 +00:00
kshitij12345
5880a66758 [composite compliance] matrix_exp (#81225)
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81225
Approved by: https://github.com/zou3519
2022-07-25 11:11:29 +00:00
PyTorch MergeBot
3bd08e3410 Revert "Add more functorch shards to PR CI (#81919)"
This reverts commit 68d18f217f.

Reverted https://github.com/pytorch/pytorch/pull/81919 on behalf of https://github.com/janeyx99 due to Reverting in the meantime as the test skips are not working
2022-07-22 18:08:20 +00:00
Richard Zou
68d18f217f Add more functorch shards to PR CI (#81919)
This PR adds functorch shards to some more linux configurations on Pull
Requests. What's missing so far (and coming in the near future) is:
- adding a shard for windows
- adding a shard for asan (functorch currently times out under asan)
- adding shards for things that run in trunk, like mac-os.

Test Plan:
- wait for tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81919
Approved by: https://github.com/kit1980
2022-07-22 02:44:41 +00:00
Samantha Andow
43523f4602 [functorch] remove kl_div skips (pytorch/functorch#975) 2022-07-21 13:41:38 -07:00
Samantha Andow
e2361bcc0f [functorch] Add exhaustive testing of vmap autograd composability (pytorch/functorch#851)
* refactor to make simpler based on comments

* cleanup

* more failing tests

* fix test failures

* more test failures

* update xfails
2022-07-21 13:41:38 -07:00
Samantha Andow
cb62788990 [functorch] Allow batch norm with all variations of batching when training=False (pytorch/functorch#958)
* allow batch norm with all variations of batching when training=False

* make running mean/var always call contiguous
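
A minimal sketch of the eval-mode case this enables (shapes are illustrative; each vmapped example carries its own running stats):

```python
import torch
import torch.nn.functional as F
from functorch import vmap

x = torch.randn(4, 2, 8)              # 4 examples, each (N=2, C=8)
running_mean = torch.randn(4, 8)      # per-example stats, shape (C,) each
running_var = torch.rand(4, 8) + 0.1  # variances must be positive

out = vmap(
    lambda xi, m, v: F.batch_norm(xi, m, v, training=False)
)(x, running_mean, running_var)
```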
2022-07-21 13:41:38 -07:00
Prem
f926b0a4dc [functorch] Replacing iterator with tuple for ops (pytorch/functorch#971) 2022-07-21 13:41:38 -07:00
Richard Zou
63820e9b7d [functorch] Align functorch lint with PyTorch, part II (pytorch/functorch#968) 2022-07-21 13:41:38 -07:00
Richard Zou
bc147a3d3d [functorch] masked_fill.Scalar batch rule (pytorch/functorch#964)
Related to https://github.com/pytorch/functorch/issues/957

It's difficult to write a batching rule for masked_fill.Tensor, so I
didn't write one for that.
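
The covered overload, as a quick sketch:

```python
import torch
from functorch import vmap

x = torch.randn(4, 3)
mask = torch.rand(4, 3) > 0.5
# masked_fill with a Scalar fill value -- the overload this rule handles.
out = vmap(lambda xi, mi: xi.masked_fill(mi, 0.0))(x, mask)
```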
2022-07-21 13:41:38 -07:00
Richard Zou
d02f085f70 [functorch] Align functorch's flake8 config with pytorch's (pytorch/functorch#963) 2022-07-21 13:41:37 -07:00
Richard Zou
67b104af02 [functorch] Add skips to coordinate land (pytorch/functorch#962)
For https://github.com/pytorch/pytorch/pull/80217.
https://github.com/pytorch/functorch/issues/961 is the tracking issue so
we don't forget to remove the skips.
2022-07-21 13:41:37 -07:00
Edward Z. Yang
d95d9af43f [functorch] Minor improvements for _autograd_grad (pytorch/functorch#750)
I was really annoyed that we preallocate result tensors for everything
and then throw most of them out. The new code variant doesn't do that.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2022-07-21 13:41:37 -07:00
Samantha Andow
a60ff6985d [functorch] Generate n^2 not n^3 inputs for batch and instance norm; small batch norm fix (pytorch/functorch#951)
* refactor batch norm exhaustive inputs

* fix typo in batch rule

* fix expand issue, add without cudnn xfail
2022-07-21 13:41:37 -07:00
samdow
d546e857c2 [functorch] Revert "remove kl_div skips"
This reverts commit eeb29ecb1e023bec61a7b13773faff6554a572bc.
2022-07-21 13:41:37 -07:00
samdow
4704d865a6 [functorch] remove kl_div skips 2022-07-21 13:41:37 -07:00