Commit Graph

31 Commits

Author SHA1 Message Date
Edward Z. Yang
42fefd4403 Sparse fake tensor support (#82172)
Add support for sparse fake tensors.

- The testing strategy is to run a fake tensor cross-ref test on `test_sparse.py`. This is necessary because OpInfo sparse coverage is completely nonexistent. We could have tried to turn on cross-ref testing globally for all files, but that would be very time consuming, and the tests I'm interested in are mostly in this file. There are some exclusions in testing for things that don't work.
- I make the fake tensor converter raise an UnsupportedFakeTensorException if the meta converter fails to do a conversion (which can happen in a relatively large number of situations).
- I relax fake tensor invariants so that you can make a fake tensor from a meta tensor. This is useful because in the cross-ref test we sometimes operate on meta tensors.
- Fake tensor wrapping is improved to handle the case when a function doesn't return any tensors.
- The meta converter is taught how to convert sparse tensors to meta.

There's still a little more cleanup that needs to be done, but this is good for review.
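As a rough illustration (a sketch only; `FakeTensorMode.from_tensor` and the `torch._subclasses` path are assumptions here, and exact entry points may differ), converting a sparse COO tensor into a fake tensor now works along these lines:

```
# Sketch: fake-ify a sparse COO tensor; shape and sparsity metadata are
# preserved and no real storage is allocated.
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

mode = FakeTensorMode()
sp = torch.sparse_coo_tensor(
    torch.tensor([[0, 1], [1, 0]]),  # indices
    torch.tensor([3.0, 4.0]),        # values
    (2, 2),
)
fake_sp = mode.from_tensor(sp)
print(fake_sp.is_sparse, fake_sp.shape)
```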

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82172
Approved by: https://github.com/eellison
2022-08-03 14:29:36 +00:00
Nikolay Korovaiko
fd68b0931f sym_numel (#82374)
### Description
This PR makes `numel` symint-aware, similar to `sym_sizes()` and `sym_strides()` (see https://github.com/pytorch/pytorch/pull/81300). This PR is part of a bigger project to support dynamic shapes.
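A minimal sketch of what this means at the tensor level (assuming `sym_numel` is exposed on `Tensor` in Python; the substantive change in this PR is on the C++/TensorImpl side):

```
# Sketch: for ordinary tensors the symint-aware element count agrees with
# the plain int numel; it only becomes a SymInt under symbolic tracing.
import torch

x = torch.randn(2, 3, 4)
print(x.numel())      # 24, a plain int
print(x.sym_numel())  # also 24 here (assumed Python exposure)
```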

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82374
Approved by: https://github.com/ezyang
2022-08-03 06:33:45 +00:00
Ivan Yashchuk
900e93d351 Add context manager for conditional rewrites of torch.* to torch._refs.* calls (#81764)
Adds a new context manager `TorchRefsNvfuserCapabilityMode` for conditionally rewriting `torch.*` calls to `torch._refs.*`, based on whether the resulting prim decomposition can be executed by nvFuser.

A new optional argument, `should_fallback_fn`, is added to `TorchRefsMode`: a callable that decides whether the original `torch.foo` or the replacement `torch._refs.foo` should be used.
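A minimal usage sketch (the import path `torch._prims.context` is an assumption; only the mode name comes from this commit message):

```
# Sketch: trace under the capability mode so that torch.* calls whose prim
# decompositions nvFuser can execute are rewritten to torch._refs.*.
import torch
from torch._prims.context import TorchRefsNvfuserCapabilityMode
from torch.fx.experimental.proxy_tensor import make_fx

def fn(a, b):
    return torch.add(a, b).relu()

with TorchRefsNvfuserCapabilityMode():
    gm = make_fx(fn)(torch.randn(3), torch.randn(3))
print(gm.graph)  # rewritten calls appear where nvFuser supports them
```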

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81764
Approved by: https://github.com/ezyang
2022-08-02 11:02:10 +00:00
Edward Z. Yang
bf387e894f Fix a NotImplemented mode bug and improve Parameter handling for fake tensor (#82574)
Partially addresses https://github.com/pytorch/pytorch/issues/82547

The repro script still doesn't work with fake tensor, but that is now
expected: fake tensor does not work unless all inputs are explicitly
wrapped as fake tensors.
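For context, a sketch of the explicit wrapping that fake tensor expects (the `torch._subclasses` API names are assumptions here, not taken from this commit):

```
# Sketch: every real input must be converted into a fake tensor before
# running under FakeTensorMode.
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

mode = FakeTensorMode()
x = torch.randn(4, 4)
fake_x = mode.from_tensor(x)   # explicit wrapping of the real input
with mode:
    y = fake_x @ fake_x        # shapes/dtypes propagate; no real compute
print(y.shape)
```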

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82574
Approved by: https://github.com/eellison
2022-08-01 20:40:01 +00:00
Edward Z. Yang
98215923ad Correctly unpack constants when used in multi-return output (#82568)
Partial fix for https://github.com/pytorch/pytorch/issues/82547

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82568
Approved by: https://github.com/IvanYashchuk, https://github.com/davidberard98
2022-08-01 20:40:01 +00:00
Edward Z. Yang
98b9dfa129 Add decompositions for zero_, fill_, new_full, new_zeros, new_ones (#82332)
This makes symbolic tracing tests for logsigmoid and xlogy start working again.

While I'm at it, add pin_memory and layout kwargs to empty; they don't
actually do anything, however, and raise an error if they are non-standard.
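As an illustrative sketch (not the exact in-tree decomposition), a `new_zeros`-style decomposition stays functional by inheriting dtype and device from `self`:

```
# Sketch of the shape such a decomposition takes; names are illustrative.
import torch

def new_zeros_decomp(self, size, *, dtype=None, device=None):
    dtype = dtype if dtype is not None else self.dtype
    device = device if device is not None else self.device
    return torch.zeros(size, dtype=dtype, device=device)

x = torch.randn(2, 3)
print(new_zeros_decomp(x, (4,)))  # zeros with x's dtype and device
```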

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82332
Approved by: https://github.com/eellison
2022-07-28 04:02:02 +00:00
Elias Ellison
1c0f7bd6d2 Enable complex for meta tensors (#79975)
There weren't really any fundamental blockers:
- add support for `aten::complex`
- update `angle` for complex
- remove the error in the fallback kernel
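A quick check sketch (assuming these ops have meta coverage after this change):

```
# Sketch: complex dtypes now work on meta tensors; only shape/dtype/device
# propagate, no real data exists.
import torch

a = torch.empty(3, 3, dtype=torch.complex64, device="meta")
b = torch.angle(a)                 # angle of a complex meta tensor
print(b.shape, b.dtype, b.device)  # torch.Size([3, 3]) torch.float32 meta
```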
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79975
Approved by: https://github.com/ezyang
2022-07-27 22:19:14 +00:00
Edward Z. Yang
617e90db22 Add meta support for eye (#82309)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82309
Approved by: https://github.com/bdhirsh
2022-07-27 18:42:47 +00:00
Edward Z. Yang
d38ffa6a4c Make all of new_/_like factory functions composite explicit autograd (#82238)
Once CompositeImplicitAutograd gets registered to the Python key, this will
ensure that tensor subclasses can interpose on these functions directly
rather than getting decomposed.  We prefer not decomposing, as these
functions are functional but their implementations use in-place
operations (and are thus more difficult to deal with unless you use
functionalization).
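As a hedged illustration (written against the current `TorchDispatchMode` API, which may differ from the mode API at the time of this commit), an observer at dispatch time should now see the factory op itself rather than its decomposition:

```
# Sketch: log which aten ops a factory call dispatches to; the expectation
# is a single aten.new_zeros call rather than separate empty/zero_ pieces.
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class LogOps(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print("saw:", func)
        return func(*args, **(kwargs or {}))

x = torch.randn(3)
with LogOps():
    x.new_zeros(2, 2)
```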

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82238
Approved by: https://github.com/zou3519, https://github.com/bdhirsh
2022-07-27 18:33:46 +00:00
Horace He
a42616e0bf Revert "Revert "Ported aten::cross to work with symints (#82052)"" (#82287)
This reverts commit e519dd37e1.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82287
Approved by: https://github.com/ezyang
2022-07-27 04:51:06 +00:00
PyTorch MergeBot
e519dd37e1 Revert "Ported aten::cross to work with symints (#82052)"
This reverts commit 30ed427d2e.

Reverted https://github.com/pytorch/pytorch/pull/82052 on behalf of https://github.com/Chillee due to broke build on master
2022-07-27 01:04:42 +00:00
Horace He
30ed427d2e Ported aten::cross to work with symints (#82052)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82052
Approved by: https://github.com/ezyang
2022-07-27 00:45:26 +00:00
Horace He
91b4648633 Did some cleanup of symbolic shapes (#82051)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82051
Approved by: https://github.com/eellison, https://github.com/ezyang
2022-07-27 00:45:26 +00:00
Horace He
fc389cc0a0 Added new_empty.symint overload and a new_empty ref (#82049)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82049
Approved by: https://github.com/ezyang
2022-07-27 00:31:57 +00:00
lezcano
11fe277b62 [PrimTorch] Add reference for torch.norm (#81765)
This ref does more than `torch.norm`, and it fixes a few bugs
that `torch.norm` has. This implementation and the `torch.norm`
implementation are reconciled in the next PR of this stack.

We put this PR first, as otherwise `test_decomp.py` was failing.
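A minimal sanity-check sketch (assuming the reference is exposed as `torch._refs.norm`; this log does not name the module path):

```
# Sketch: the reference should agree with torch.norm on overlapping behavior.
import torch
import torch._refs as refs

x = torch.randn(4, 5)
print(torch.norm(x))  # Frobenius norm via the existing implementation
print(refs.norm(x))   # reference implementation; expected to match
```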
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81765
Approved by: https://github.com/ngimel
2022-07-25 19:57:21 +00:00
Mostafa Elhoushi
0894c4967d Add test_make_fx_model_train example (#980) (#82011)
Summary: Pull Request resolved: https://github.com/pytorch/functorch/pull/980

Test Plan: CI should pass

Differential Revision: D38078694

Pulled By: mostafaelhoushi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82011
Approved by: https://github.com/Chillee
2022-07-25 12:43:17 +00:00
Horace He
1a18ff3247 Revert "Revert "Added dynamic shape POC (#81093)"" (#82063)
This reverts commit 0888a4844c.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82063
Approved by: https://github.com/ezyang
2022-07-23 22:35:50 +00:00
PyTorch MergeBot
0888a4844c Revert "Added dynamic shape POC (#81093)"
This reverts commit 8169a85dc6.

Reverted https://github.com/pytorch/pytorch/pull/81093 on behalf of https://github.com/janeyx99 due to Broke slow tests on trunk 8169a85dc6.
2022-07-23 11:30:37 +00:00
Horace He
8169a85dc6 Added dynamic shape POC (#81093)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81093
Approved by: https://github.com/ezyang, https://github.com/eellison
2022-07-23 04:46:32 +00:00
PyTorch MergeBot
521d5ae1ce Revert "Enable reentrant dispatch for decompositions (#81598)"
This reverts commit 08b9544e1c.

Reverted https://github.com/pytorch/pytorch/pull/81598 on behalf of https://github.com/ezyang due to out of tree failures
2022-07-22 00:21:18 +00:00
Edward Z. Yang
5b88a2078b Follow GitHub relabeling of oncall: fx for test owners (#81821)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81821
Approved by: https://github.com/janeyx99
2022-07-21 01:50:06 +00:00
David Berard
08b9544e1c Enable reentrant dispatch for decompositions (#81598)
This allows us to avoid tracing through CompositeImplicitAutograd ops
when decomposing via make_fx or other decomposition methods that use
tracing.
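For context, a sketch of the kind of decomposition-during-tracing call this affects (the choice of op and the `torch._decomp` helper are illustrative assumptions):

```
# Sketch: decompose an op while tracing with make_fx; reentrant dispatch
# lets the decomposition re-enter the dispatcher rather than the tracer
# recording every CompositeImplicitAutograd op it passes through.
import torch
from torch.fx.experimental.proxy_tensor import make_fx
from torch._decomp import get_decompositions

def f(x):
    return torch.nn.functional.silu(x)

decomps = get_decompositions([torch.ops.aten.silu])
gm = make_fx(f, decomposition_table=decomps)(torch.randn(3))
print(gm.graph)
```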
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81598
Approved by: https://github.com/ezyang
2022-07-20 21:26:16 +00:00
Edward Z. Yang
fca03eeec1 Make proxy tensor support item() calls on torch.tensor constants (#81192)
This PR is doing a few interrelated things, all of which are necessary to get correctness. Read the comment in torch/fx/experimental/proxy_tensor.py for the high level overview.

Let's break down the parts of this PR:

* Bug fix where `enable_torch_dispatch_mode` with `None` doesn't work. This makes `enable_torch_dispatch_mode(current_mode.inner)` work, which is the basis for how we temporarily disable fake tensor mode.
* Bug fix for when fake tensor mode is combined with a non-mode tensor subclass. This actually could be ablated from this PR, but it affects where the logic for allowing non-fake-tensor inputs with lift goes, so it's all in here in one go. There are some relevant tests for the fix in fake tensor, but it turns out I didn't need this because I'm always using proxy tensors as a mode (which ensures the ordering is right).
* New `lift_fresh` view operator.  Note that, like lift, we have to manually write the functionalize kernel for these functions.
* The actual change, which is to save constants when we see them in the proxy tensor mode, and then propagate them as we go (because otherwise you'll handle mutations on constants incorrectly; see the test).

This is mildly BC-breaking if anyone was previously interposing on
at::lift, but this operator was relatively new and I checked
functorch, which has no explicit reference to lift.  So I think it
should not be too disruptive.
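A sketch of the now-supported pattern (the specific function is illustrative):

```
# Sketch: a tensor constant created inside the traced function whose item()
# is read during tracing; mutations on such constants are also tracked.
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    val = torch.tensor(2.0)   # constant captured by the tracer
    return x * val.item()     # item() on the constant during tracing

gm = make_fx(f)(torch.randn(3))
print(gm.graph)
```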

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81192
Approved by: https://github.com/samdow, https://github.com/bdhirsh
2022-07-15 03:53:40 +00:00
Horace He
b7046e9b7f Stopped ProxyTensor from turning aten::lift tensors into proxy objects (#81024)
```
import torch

def f():
    val = torch.tensor(float('inf'))
    return torch.full((100, 100), val)
```
Today we turn `val` into a ProxyTensor, and then complain when we try to turn `val` into a scalar.

We call `aten::lift` when we call `torch.tensor(5)`, so this just prevents those from being turned into ProxyTensors unnecessarily.

cc: @ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81024
Approved by: https://github.com/ezyang
2022-07-07 04:54:31 +00:00
David Berard
00f651811a Interpreter for decomposing aten -> prims (#79989)
If an aten -> prim decomposition is needed *after* the initial trace
with make_fx, this interpreter can be used to perform the decomposition.
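A hedged usage sketch (the name `DecompositionInterpreter` and its signature are assumptions based on this description, not confirmed by this log):

```
# Sketch: re-run an already-traced aten-level graph through the interpreter
# to obtain a decomposed graph after the initial make_fx trace.
import torch
import torch.fx as fx
from torch.fx.experimental.proxy_tensor import make_fx, DecompositionInterpreter
from torch._decomp import get_decompositions

def f(x):
    return torch.nn.functional.gelu(x)

aten_gm = make_fx(f)(torch.randn(4))      # initial aten-level trace
prim_graph = fx.Graph()
DecompositionInterpreter(
    aten_gm, prim_graph,
    decomposition_table=get_decompositions([torch.ops.aten.gelu]),
).run(torch.randn(4))
prim_gm = fx.GraphModule(aten_gm, prim_graph)
print(prim_gm.graph)
```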
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79989
Approved by: https://github.com/SherlockNoMad
2022-06-29 21:16:28 +00:00
Horace He
615dd25088 Made Proxy Tensor Mode also trace overloads (#80403)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80403
Approved by: https://github.com/zou3519
2022-06-28 04:31:43 +00:00
PyTorch MergeBot
4e33c8c6bb switched over to using faketensor in proxytensor (#79634)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79634
Approved by: https://github.com/albanD
2022-06-27 19:55:47 +00:00
Horace He
159d459c50 Switched to tracing overloads by default (#80013)
There are many cases where it's more convenient to use overloads, but we've hesitated to do so since overload calls can't be TorchScripted directly.

Luckily, it's pretty easy to strip overloads. See https://github.com/pytorch/functorch/pull/899
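A sketch of what stripping overloads amounts to (mirroring the approach in the linked functorch PR; the helper name here is illustrative):

```
# Sketch: rewrite OpOverload call targets back to their OpOverloadPacket so
# the traced graph can be TorchScripted.
import torch
import torch.fx as fx

def strip_overloads(gm: fx.GraphModule) -> fx.GraphModule:
    for node in gm.graph.nodes:
        if isinstance(node.target, torch._ops.OpOverload):
            node.target = node.target.overloadpacket
    gm.recompile()
    return gm
```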
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80013
Approved by: https://github.com/zou3519
2022-06-22 18:55:06 +00:00
Peter Bell
9bf52f4be8 Add OpInfo for torch.equal and fix support for non-standard bools
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79389

Approved by: https://github.com/mruberry
2022-06-20 23:48:39 +00:00
Horace He
f5d7e5a192 started using mode-based tracing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79638

Approved by: https://github.com/samdow
2022-06-17 20:24:49 +00:00
Horace He
4d88affb5d Ported proxy tensor tests over to core (#78890)
Will fill out later
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78890
Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-06-07 00:28:53 +00:00