Commit Graph

185 Commits

Elias Ellison
8a6b076196 lift numpy tensor, add randperm support (#83191)
A couple of changes needed to trace HuggingFace with fake tensors.

Similar to https://github.com/pytorch/pytorch/pull/81927, we need to call lift_fresh for tensors created from numpy arrays. Also adds randperm support for meta.
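A minimal sketch of the new behavior (assuming `FakeTensorMode` from `torch._subclasses`; the tracing setup is illustrative):

```python
import numpy as np
import torch
from torch._subclasses import FakeTensorMode

# Tensors constructed from numpy arrays now route through lift_fresh,
# so the mode can wrap them instead of erroring during tracing.
with FakeTensorMode():
    t = torch.tensor(np.ones(3))  # numpy -> tensor, lifted under the mode
    p = torch.randperm(5)         # randperm now has a meta implementation
    print(t.shape, p.shape)
```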

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83191
Approved by: https://github.com/bdhirsh
2022-08-10 22:27:51 +00:00
Elias Ellison
4f61aa79dd Update nan_to_num prims (#82908)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82908
Approved by: https://github.com/voznesenskym, https://github.com/ngimel
2022-08-08 14:28:28 +00:00
Edward Z. Yang
42fefd4403 Sparse fake tensor support (#82172)
Add support for sparse fake tensors.

- The testing strategy is to run a fake tensor cross-ref test on `test_sparse.py`. This is necessary because OpInfo sparse coverage is completely nonexistent. We could have tried to turn on cross-ref testing globally for all files, but that would be very time consuming, and the tests I'm interested in are mostly in this file. There are some exclusions in testing for things that don't work.
- I make the fake tensor converter raise an UnsupportedFakeTensorException if the meta converter fails to do a conversion (which can happen in a relatively large number of situations).
- I relax fake tensor invariants so that you can make a fake tensor from a meta tensor. This is useful because in the cross-ref test we sometimes operate on meta tensors.
- Fake tensor wrapping is improved to handle the case when a function doesn't return any tensors.
- The meta converter is taught how to convert sparse tensors to meta.

There's still a little more cleanup that needs to be done, but this is good for review.
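A minimal sketch of the conversion (assuming `FakeTensorMode.from_tensor` as the conversion entry point; names are illustrative):

```python
import torch
from torch._subclasses import FakeTensorMode

# Build a small sparse COO tensor and convert it to a fake tensor;
# the meta converter now understands sparse layouts.
indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([3.0, 4.0])
sparse = torch.sparse_coo_tensor(indices, values, (2, 2))

mode = FakeTensorMode()
fake = mode.from_tensor(sparse)
print(fake.is_sparse, fake.shape)  # True torch.Size([2, 2])
```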

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82172
Approved by: https://github.com/eellison
2022-08-03 14:29:36 +00:00
Elias Ellison
09a6d0b5bb Fake: copy over grad attribute (#82593)
Copies over the grad attribute... let me know of any other tests/opinfos/crossrefs I should be adding to make this more comprehensive.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82593
Approved by: https://github.com/ezyang
2022-08-02 04:08:42 +00:00
Elias Ellison
642aed8b99 Add Autocast Support for FakeTensors / use fake device dispatch keys (#82449)
From PR:
```
Note: [Fake Tensor Dispatch Keys]
In order to model the behavior of device-specific autocast
and autograd logic, we update the dispatch keys of FakeTensors
to reflect their fake device. This includes the BackendComponent
(DispatchKey::Meta -> DispatchKey::CUDA), and also the BackendComponent
related Autocast and Autograd keys. __torch_dispatch__ sits below
Autocast and Autograd, and is only invoked when we are at the
kernel for the BackendComponent. Then, we add Meta to the
thread-local dispatch include set to hit the meta kernel
instead of the kernel of the BackendComponent for the fake device.
```

Also adds the `conv1/2/3d.padding` operators to the Autocast rule set. Without that fix, the FakeTensor dtype would diverge from the eager result.

See: https://github.com/pytorch/pytorch/issues/81608
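A minimal sketch of the intended behavior (assumes fake CUDA tensors can be constructed without a real CUDA device, which is the point of fake devices):

```python
import torch
import torch.nn.functional as F
from torch._subclasses import FakeTensorMode

# Under autocast, conv1d with the string-padding overload should now
# produce the same (autocast) dtype on fake tensors as it does in eager.
with FakeTensorMode():
    x = torch.randn(2, 3, 8, device="cuda")
    w = torch.randn(4, 3, 3, device="cuda")
    with torch.autocast("cuda"):
        out = F.conv1d(x, w, padding="same")  # the conv1d.padding overload
    print(out.dtype)  # expected torch.float16, matching eager autocast
```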

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82449
Approved by: https://github.com/ezyang
2022-08-01 21:40:36 +00:00
Elias Ellison
452af0bc44 Call lift fresh in valueToTensor (#81927)
`valueToTensor` is invoked in many Python APIs, such as `x[0] = 0.5`, which lifts the 0.5 to a tensor. The problem is described similarly in https://github.com/pytorch/pytorch/pull/81609 (s/scalar_to_tensor/valueToTensor):
> scalar_to_tensor is not dispatched and thus there is no interposition point for modes to ensure that the resulting tensor is appropriately wrapped. lift_fresh introduces this interposition point. This prevents FakeTensorMode from erroring
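
A minimal sketch of the case this fixes (illustrative; assumes `FakeTensorMode` from `torch._subclasses`):

```python
import torch
from torch._subclasses import FakeTensorMode

# Scalar assignment routes through valueToTensor; with lift_fresh as an
# interposition point, the mode can wrap the lifted 0.5 instead of erroring.
with FakeTensorMode():
    x = torch.zeros(4)
    x[0] = 0.5  # previously raised under fake tensor mode
```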

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81927
Approved by: https://github.com/ezyang
2022-07-26 22:07:59 +00:00
Robert
47b4ec8aa7 Conv2d shape calculation for meta tensors (#79834)
Fixes #79512

This PR adds support for convolutional meta modules and computes the output shape correctly for meta input tensors; see the sketch after the checklists below.
Currently in progress, no tests written so far.

**Feature implementations**:
- [x] `Conv1d`
- [x] `Conv2d`
- [x] `Conv3d`

**Tests**:
- [x] `Conv1d`
- [x] `Conv2d`
- [x] `Conv3d`
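
As referenced above, a minimal sketch of the shape computation on meta tensors (the printed shape follows the usual conv arithmetic: `(32 + 2*1 - 3) // 2 + 1 = 16`):

```python
import torch

# Conv2d on meta tensors computes the output shape without real data.
conv = torch.nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1, device="meta")
x = torch.empty(1, 3, 32, 32, device="meta")
out = conv(x)
print(out.shape)  # torch.Size([1, 8, 16, 16])
```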

cc @albanD @anjali411
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79834
Approved by: https://github.com/ezyang, https://github.com/albanD
2022-07-23 05:58:56 +00:00
samdow
2ac24675cc get rid of push_torch_{dispatch, function}_mode (#78215)
Currently we have two ways of doing the same thing for torch dispatch and function modes:
`with push_torch_dispatch_mode(X)` or `with X.push(...)`,
both of which are equivalent to doing
`with X()`

This removes the first API (which is older and private, so we don't need to go through a deprecation cycle).

There is some risk that this might land-race with a PR that uses the old API, but in general it seems like most callers use the `with X()` API or `enable_torch_dispatch_mode(X())`, which isn't being removed.

EDIT: kept the `with X.push(...)` API since there were ~3 land races with it over the past day or so, but made it emit a warning asking users to use the other API.
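A minimal sketch of the surviving style (assuming `TorchDispatchMode` from `torch.utils._python_dispatch`):

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class LoggingMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(func)  # log every op that reaches the dispatcher
        return func(*args, **(kwargs or {}))

# New style: instantiate and enter the mode directly.
with LoggingMode():
    torch.ones(2) + 1
```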
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78215
Approved by: https://github.com/ezyang
2022-07-22 18:56:37 +00:00
Elias Ellison
15fef3dc7e normalize cuda device (#81739)
Fix for https://github.com/pytorch/pytorch/issues/81068
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81739
Approved by: https://github.com/ezyang
2022-07-21 19:48:56 +00:00
Elias Ellison
745739d8f3 Reland of #80545 with ROCm skip (#81738)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81738
Approved by: https://github.com/ezyang
2022-07-21 19:46:57 +00:00
Horace He
a5fb41e3d3 Revert "Revert "Refactored prim utils into _prims_utils folder (#81088)"" (#81746)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81746
Approved by: https://github.com/anijain2305, https://github.com/Krovatkin
2022-07-20 23:43:57 +00:00
PyTorch MergeBot
e43a02c314 Revert "Refactored prim utils into _prims_utils folder (#81088)"
This reverts commit 80231d0a72.

Reverted https://github.com/pytorch/pytorch/pull/81088 on behalf of https://github.com/jeanschmidt due to breaking internal tests
2022-07-19 19:56:41 +00:00
Horace He
80231d0a72 Refactored prim utils into _prims_utils folder (#81088)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81088
Approved by: https://github.com/ngimel
2022-07-19 03:55:51 +00:00
PyTorch MergeBot
d8cd15b00c Revert "Allow returned tensor impls in fallback (#80545)"
This reverts commit d7e4520d1d.

Reverted https://github.com/pytorch/pytorch/pull/80545 on behalf of https://github.com/malfet due to New test broke rocm, see d7e4520d1d
2022-07-01 01:30:40 +00:00
Elias Ellison
d7e4520d1d Allow returned tensor impls in fallback (#80545)
Previously, we had disallowed any reused storages in fallback kernels because the correct aliasing relationships weren't being set up. However, we can support the case of a reused TensorImpl by just returning the input FakeTensor it corresponds to. Additionally, we have `inplace_view` as a tag which will prevent us from running fallback kernels for ops that mutate metadata (thanks @ezyang, who originally suggested I do this).
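
A minimal sketch of the tag check (assumes op tags are inspectable via `OpOverload.tags`, e.g. on an op like `transpose_` that mutates metadata):

```python
import torch

# Ops tagged inplace_view mutate their input's metadata, so the
# fallback must not run them on fake tensors.
op = torch.ops.aten.transpose_.default
print(torch.Tag.inplace_view in op.tags)  # expected True
```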

Fix for https://github.com/pytorch/torchdynamo/issues/467
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80545
Approved by: https://github.com/ezyang
2022-06-30 21:57:54 +00:00
Elias Ellison
1058b47562 Weak-ref-ify MetaConverter and FakeTensorConverter (#80544)
Make `MetaConverter` and `FakeTensorConverter` hold weak references to their memoized tensors, and also have `MetaConverter` hold a weak reference to tensor storage. Otherwise it can be tricky for users to make sure all existing FakeTensors or FakeTensorModes are deleted, and failing to do so will leak memory.

I ran into https://github.com/pytorch/pytorch/issues/7733 which I was able to get around with the following (see comment from code):

```
# torch.Tensors cannot be used as a key in a dictionary
# because they define a custom __eq__ function which when used
# to resolve hash collisions will throw when comparing tensors:
# "RuntimeError: bool value of Tensor with more than one value is ambiguous."
# To avoid that, we use an object which will hold a Tensor and use
# its id for both hashing and equality.
# In order to use this as a weak key reference, we cannot
# simply use weakref.WeakKeyDictionary because the newly constructed
# WeakTensorRefKey's only use would be as a dictionary key, so it
# would have no strong references.
# To get around this issue, we can use it as a normal key, and then set
# `weakref.finalize` to delete the key when its contained tensor dies.
```
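
A minimal sketch of the pattern the comment describes (names mirror the comment; details are illustrative):

```python
import weakref
import torch

class WeakTensorRefKey:
    # Hash/compare by id so we never call Tensor.__eq__, which is
    # elementwise and ambiguous when coerced to bool.
    def __init__(self, t):
        self.ref = weakref.ref(t)
        self._id = id(t)
    def __hash__(self):
        return self._id
    def __eq__(self, other):
        return self._id == other._id

memo = {}
t = torch.randn(2)
key = WeakTensorRefKey(t)
memo[key] = "meta version of t"
# Delete the memo entry once the contained tensor dies.
weakref.finalize(t, memo.pop, key, None)
del t  # finalize fires, and the memo entry is removed
```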

While for the tensor memo we can set a `weakref.finalize` callback that will remove the corresponding `WeakTensorRefKey` from the tensor memo, at the point that this callback is invoked the tensor's storage may not yet be deallocated. See comment from code:

```
# [expired-storages]
# NB: even though the tensor has died,
# the deallocation of its storage can take longer,
# even when the storage has no other uses/views.
# In this case, the StorageWeakRef object will be kept alive
# longer than it needs to be, however the storage itself
# will be deallocated. We retain the possibly dead storages
# and periodically check if any of them are expired and
# can be freed.
```
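
A minimal sketch of the retain-and-prune scheme (assuming `StorageWeakRef` from `torch.multiprocessing.reductions` and its `expired()` check):

```python
import torch
from torch.multiprocessing.reductions import StorageWeakRef

# Storages can briefly outlive their tensors, so keep weak refs around
# and prune lazily instead of assuming tensor death == storage death.
pending_storages = []

def track(tensor):
    pending_storages.append(StorageWeakRef(tensor.storage()))

def prune_expired():
    pending_storages[:] = [s for s in pending_storages if not s.expired()]
```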

Partial fix for https://github.com/pytorch/torchdynamo/issues/468
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80544
Approved by: https://github.com/ezyang
2022-06-29 23:36:35 +00:00
Elias Ellison
c1d11f40c9 [FakeTensor] Use the device of the meta tensor for fallback kernel (#80193)
Otherwise, I would run into issues such as fp16 not being supported for conv on CPU, or cudnn_rnn not being implemented for CPU.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80193
Approved by: https://github.com/Chillee
2022-06-24 20:00:07 +00:00
PyTorch MergeBot
35268bdc2a Revert "[FakeTensor] Use the device of the meta tensor for fallback kernel (#80193)"
This reverts commit 93e70c5973.

Reverted https://github.com/pytorch/pytorch/pull/80193 on behalf of https://github.com/b0noI due to broken test: https://github.com/pytorch/pytorch/runs/7035945243?check_suite_focus=true
2022-06-24 14:53:37 +00:00
Elias Ellison
93e70c5973 [FakeTensor] Use the device of the meta tensor for fallback kernel (#80193)
Otherwise, I would run into issues such as fp16 not being supported for conv on CPU, or cudnn_rnn not being implemented for CPU.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80193
Approved by: https://github.com/Chillee
2022-06-24 03:53:21 +00:00
Elias Ellison
7e54959a8a Add support for indexing cuda tensors with cpu (#80115)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80115
Approved by: https://github.com/Chillee
2022-06-23 22:46:40 +00:00
Elias Ellison
9705fb03b3 Add support for a couple of ops
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79581

Approved by: https://github.com/Chillee
2022-06-20 22:25:39 +00:00
Elias Ellison
268bbecf1c Add option for allowing non-fake inputs, add deepcopy impl
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79580

Approved by: https://github.com/samdow
2022-06-17 19:36:26 +00:00
Elias Ellison
13a8867c01 Add Dynamic Output Shape Tag for data-dependent ops, handle in FakeTensor
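A minimal sketch of how the tag surfaces (assuming `DynamicOutputShapeException` from `torch._subclasses.fake_tensor`):

```python
import torch
from torch._subclasses import FakeTensorMode
from torch._subclasses.fake_tensor import DynamicOutputShapeException

# nonzero's output shape depends on the data, which a fake tensor
# doesn't have, so the mode raises instead of guessing a shape.
try:
    with FakeTensorMode():
        torch.nonzero(torch.randn(4))
except DynamicOutputShapeException as e:
    print("data-dependent output shape:", e)
```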
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79170

Approved by: https://github.com/ezyang
2022-06-09 22:16:16 +00:00
Elias Ellison
3c5a3ca9e8 Make FakeTensors return meta within kernel invocation, add FakeTensor op tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78972

Approved by: https://github.com/ezyang
2022-06-09 01:39:27 +00:00
Elias Ellison
fe7a13496e Add CPU Fallback
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78522

Approved by: https://github.com/ezyang
2022-06-08 22:35:13 +00:00
Elias Ellison
290d0979f1 Migrate FakeTensors to always call into FakeTensorMode and have them hold a reference to it
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78677

Approved by: https://github.com/ezyang
2022-06-08 22:30:34 +00:00
Elias Ellison
a711ce4a2a add non-kwarg device and _like constructors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78536

Approved by: https://github.com/ezyang
2022-06-07 20:36:41 +00:00
PyTorch MergeBot
0df77f320f Revert "add non-kwarg device and _like constructors"
This reverts commit 44937da6db.

Reverted https://github.com/pytorch/pytorch/pull/78536 on behalf of https://github.com/janeyx99 due to Broke meta tests on trunk and on PR https://github.com/pytorch/pytorch/runs/6765692797?check_suite_focus=true
2022-06-07 04:11:05 +00:00
Elias Ellison
44937da6db add non-kwarg device and _like constructors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78536

Approved by: https://github.com/ezyang
2022-06-07 00:28:59 +00:00
Elias Ellison
26d273959c Add Caching of Conversion to Fake/Meta tensors in FakeTensorMode
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78090

Approved by: https://github.com/ezyang
2022-06-03 13:56:00 +00:00
Elias Ellison
6671b504f7 Modernize FakeTensorMode, throw on non-fake inputs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78516

Approved by: https://github.com/samdow
2022-06-01 21:43:59 +00:00
Elias Ellison
cea7dd1646 Add FakeTensorMode
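A minimal sketch of the mode in use (illustrative):

```python
import torch
from torch._subclasses import FakeTensorMode

# Under the mode, factory functions produce fake tensors that carry
# shape/dtype/device metadata but allocate no real storage.
with FakeTensorMode():
    x = torch.randn(8, 8)
    y = x @ x
    print(y.shape, y.dtype)  # torch.Size([8, 8]) torch.float32
```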
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77972

Approved by: https://github.com/ezyang
2022-05-31 16:27:06 +00:00
Elias Ellison
4c18f362a9 add support for type_as/_to_copy
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77971

Approved by: https://github.com/ezyang
2022-05-31 16:25:11 +00:00
Elias Ellison
98e0816986 Extend __new__ on subclasses to set custom_device and custom_strides
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77970

Approved by: https://github.com/Chillee
2022-05-31 16:23:18 +00:00
Elias Ellison
678213ead2 Fake Tensor Part 1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77969

Approved by: https://github.com/ezyang
2022-05-31 16:20:35 +00:00