Commit Graph

219 Commits

Yanbo Liang
4f257a507c [Dynamo] Support Python builtin sorted function (#94949)
Fixes #94750
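
For illustration, this lets dynamo trace code like the following without a graph break (a minimal sketch; at the time of this commit the entry point would be `torch._dynamo.optimize(...)` rather than `torch.compile`):

```python
import torch

@torch.compile
def f(x, scales):
    # Python builtin sorted on a plain list is traced rather than breaking the graph
    for s in sorted(scales):
        x = x * s
    return x

f(torch.ones(3), [0.5, 2.0, 1.0])
```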

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94949
Approved by: https://github.com/jansel, https://github.com/Skylion007
2023-02-16 21:27:11 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrite calls to the Python built-in `super()`. Only non-semantic changes are applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```
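
For reference, the primary rewrite replaces two-argument `super()` calls with the zero-argument form, e.g.:

```diff
 class MyModule(nn.Module):
     def __init__(self):
-        super(MyModule, self).__init__()
+        super().__init__()
```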

Cases where the rewrite would change semantics are left unchanged, e.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
David Berard
3e6978172e [dynamo] Handle general tensor attributes with a getattr proxy node (#91840)
**Background:** Before this PR, dynamo support for tensor attributes (e.g. `x.H`, `x.T`, ...) had to be implemented individually. This could lead to errors, e.g. if the implementation in [variables/tensor.py](21c7c7c72f/torch/_dynamo/variables/tensor.py (L160)) differed from the behavior of a direct call to the attribute, and for attributes that were not special-cased in tensor.py, dynamo tracing would fail. This PR adds generic support for tensor attributes that return tensors, without needing to specially handle each one (notably `x.real` and `x.imag`, which previously weren't supported).

**In this PR:** We directly create a proxy for a `"call_function"` node with `target=getattr` and feed it into wrap_fx_proxy, which produces a TensorVariable for the returned attribute.

This also removes the implementations for H, T, mH, and mT, which were broken (previously `torch.relu(x.T)` would fail). They now fall back to this default implementation (under which `torch.relu(x.T)` passes).
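
As a sketch of the resulting trace (illustrative; the exact printed graph may differ), plain fx tracing produces the same kind of node:

```python
import torch
from torch.fx import symbolic_trace

def f(x):
    return torch.relu(x.T)

gm = symbolic_trace(f)
print(gm.graph)
# The attribute access shows up as a call_function node targeting getattr, roughly:
#   %t : call_function[target=builtins.getattr](args = (%x, T))
#   %relu : call_function[target=torch.relu](args = (%t,))
```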

**Further context**:

* Ed's original suggestion in [90463](https://github.com/pytorch/pytorch/pull/90463#discussion_r1043398340) was to use `torch.Tensor.H.__get__(x)`. I wasn't able to get this to work; fx compilation fails with `getset_descriptor does not have attribute __module__`. The `__module__` attribute, which is available on most Python attributes, is not available on `getset_descriptor` objects (these attributes are implemented in C++ on torch.Tensor, so they don't obey some assumptions made by fx).
* Although tensor attributes and methods (like `x.relu()`) both go through this path, this PR only handles attributes (see the `"getset_descriptor"` check in variables/tensor.py); methods are already handled by GetAttrVariable.
* Prior to this PR, we already returned GetAttrVariables for unsupported attrs: the parent caller would catch the NotImplementedError and fall back to returning a GetAttrVariable. But if this GetAttrVariable was ever passed into a torch.\* function (as it quite possibly would be, since most of these attrs are tensors), it would fail because its proxy node would be missing an [example_value](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/utils.py#L1017). So before this PR, `x.real` alone worked fine, but `torch.relu(x.real)` failed.

**Testing**: added tests in test_misc.py for x.real, x.imag, x.T, x.real.T.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91840
Approved by: https://github.com/ezyang
2023-02-01 22:34:03 +00:00
David Berard
58acab4616 [dynamo] support [tensor].type(torch.FloatTensor) (#93043)
For some tensor `x`, `x.type(torch.FloatTensor)` does essentially the same thing as `x.to(torch.float)`. `x.type` can be called with at least three kinds of input:
* a string: `"torch.FloatTensor"`
* a dtype: `torch.float`
* a tensor type: `torch.FloatTensor`

The third option (`torch.FloatTensor`) fails in fx, because fx cannot trace `torch.FloatTensor` objects. So this PR replaces the `torch.FloatTensor` type with the string `"torch.FloatTensor"`.
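
For illustration, the three call forms (all equivalent for a float32 cast):

```python
import torch

x = torch.randn(3)
y1 = x.type("torch.FloatTensor")  # string
y2 = x.type(torch.float)          # dtype
y3 = x.type(torch.FloatTensor)    # tensor type -- the case fx cannot trace
```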

Why not fix this in fx? It's possible, but I'm not sure of a nice way to do it. We would want to update [torch.fx.node.BaseArgumentTypes](d88bc38b0c/torch/fx/node.py (L17)) to contain torch.FloatTensor etc. We could hard-code a list of tensor types there (the types vary depending on build type, e.g. whether or not cuda tensors are available), but that's not great in case our hardcoded list differs from the actual list registered by python_tensor.cpp. Another option is to dynamically populate the list of types with `Union[tuple(...)]` filled from `torch._tensor_classes` (which is populated directly by python_tensor.cpp), but apparently that breaks most typecheckers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93043
Approved by: https://github.com/jansel
2023-01-27 21:27:13 +00:00
Yanbo Liang
c9ce0e63e8 [Dynamo] Support context wrapping(e.g, torch.no_grad) on nested functions w/o closure (#92922)
Fixes 14k github models: https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_ELEKTRONN_elektronn3.py
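
A minimal sketch of the pattern this enables (illustrative; at the time of this commit the entry point was `torch._dynamo.optimize(...)`):

```python
import torch

def outer(x):
    @torch.no_grad()
    def inner(y):  # nested function with no closure over outer's locals
        return y * 2
    return inner(x)

print(torch.compile(outer)(torch.ones(3, requires_grad=True)))
```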

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92922
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-01-26 04:23:35 +00:00
Will Constable
e665f03ad8 Fix dynamo func defaults handling for torch.device, size, dtype (#92880)
Previously, these torch types were not handled in the wrap_bound_arg
handler.
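
For example (an illustrative sketch), defaults like these previously failed to wrap:

```python
import torch

def f(x, device=torch.device("cpu"), dtype=torch.float32, size=torch.Size([2])):
    # torch.device / torch.dtype / torch.Size defaults now get wrapped correctly
    return x.to(device=device, dtype=dtype).reshape(size)

torch.compile(f)(torch.ones(2))
```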

Adds a unit test to verify the fix.

Fixes #91084

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92880
Approved by: https://github.com/ezyang
2023-01-24 21:50:43 +00:00
Yanbo Liang
0ab4ab9f8d [Dynamo] Fix calling UserDefinedObject.func should pass self object (#92050)
Fixes #90834
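
An illustrative sketch of the pattern (hypothetical class; the fix ensures dynamo passes the object as `self` when calling the method):

```python
import torch

class Helper:
    def func(self, x):
        return x + 1

h = Helper()

def f(x):
    # Calling a method through a user-defined object: `h` must be passed as self
    return h.func(x)

torch.compile(f)(torch.ones(2))
```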

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92050
Approved by: https://github.com/jansel
2023-01-21 05:47:01 +00:00
Will Constable
6cfaa92239 Handle tensor default func args when inlining (#90575)
Previously, when inlining a function, its default arguments were only wrapped with VariableTrackers if non-tensor. Now, tensor default args are also handled by adding them to the parent InstructionTranslator as an attribute (see the sketch after this list).

- also patches up a missing source in nnmodule call_function, needed to properly guard on a default arg in its methods
- adds a new 'DefaultsSource' type which guards either a `__defaults__` or `__kwdefaults__` entry on a function
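
For example (illustrative), a tensor-valued default like the following is now handled when `add_bias` is inlined:

```python
import torch

_DEFAULT_BIAS = torch.zeros(3)

def add_bias(x, bias=_DEFAULT_BIAS):  # tensor default argument
    return x + bias

def f(x):
    return add_bias(x)  # inlined by dynamo; the default is guarded via DefaultsSource

torch.compile(f)(torch.ones(3))
```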

Fixes #90361  https://github.com/pytorch/torchdynamo/issues/1968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90575
Approved by: https://github.com/voznesenskym
2023-01-12 05:04:18 +00:00
Yanbo Liang
490c1cf650 [Dynamo] Support torch.get_default_dtype (#89790)
Fixes https://github.com/pytorch/torchdynamo/issues/1930
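
For illustration, code like the following can now be traced:

```python
import torch

def f(x):
    if torch.get_default_dtype() is torch.float32:
        return x.float()
    return x.double()

torch.compile(f)(torch.ones(2))
```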

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89790
Approved by: https://github.com/soumith
2022-12-19 04:14:11 +00:00
Yanbo Liang
5c133c5744 [Dynamo] Supports two torch.distributed.* functions (#90683)
Fixes Meta-internal use cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90683
Approved by: https://github.com/jansel
2022-12-13 19:06:38 +00:00
Michael Lazos
76f440f20a [dynamo] Rewrite inplace addcdiv and inplace add (#90330)
Rewrites in-place `addcdiv` into a div, a mul, and an in-place add to avoid a graph break.
Rewrites in-place `add` into a mul and an in-place add, likewise to avoid a graph break.

Needed to close optimizer graph breaks
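
Roughly, the rewrites decompose as follows (a sketch, using the documented semantics of the in-place ops):

```python
import torch

p, t1, t2, g = (torch.ones(3) for _ in range(4))
v, a = 0.5, 0.1

# p.addcdiv_(t1, t2, value=v) becomes:
p.add_(t1 / t2 * v)

# p.add_(g, alpha=a) becomes:
p.add_(g * a)
```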

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90330
Approved by: https://github.com/jansel
2022-12-08 21:19:23 +00:00
Michael Lazos
2d9267ba30 [dynamo] Rewrite addcdiv in dynamo to its constituent ops (#90227)
This avoids a graph break when `value` is used, fixing graph breaks in variants of the Adam and Adagrad optimizers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90227
Approved by: https://github.com/jansel
2022-12-06 05:08:44 +00:00
Yanbo Liang
3916d729c8 [Dynamo] tensor.type() should return tensor types with CPU and GPU variants (#90021)
Fix errors from [7k github models](https://github.com/pytorch/torchdynamo/issues/1884)
```
Traceback (most recent call last):
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1062, in get_fake_value
    return wrap_fake_exception(
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 739, in wrap_fake_exception
    return fn()
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1063, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1112, in run_node
    raise RuntimeError(
RuntimeError: Failed running call_function <function einsum at 0x7fd8f246a4c0>(*('i,j->ij', FakeTensor(FakeTensor(..., device='meta', size=(4,)), cpu), FakeTensor(FakeTensor(..., device='meta', size=(2,)), cuda:0)), **{}):
Unhandled FakeTensor Device Propagation for aten.mul.Tensor, found two different devices cpu, cuda:0
(scroll up for backtrace)
```

The root cause: for a tensor on GPU, ```tensor.type()``` should return ```torch.cuda.FloatTensor``` rather than ```torch.FloatTensor```.
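
For reference:

```python
import torch

cpu = torch.randn(2)
print(cpu.type())             # 'torch.FloatTensor'

if torch.cuda.is_available():
    print(cpu.cuda().type())  # 'torch.cuda.FloatTensor'
```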

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90021
Approved by: https://github.com/jansel
2022-12-02 18:57:43 +00:00
Yanbo Liang
37e46a5035 [Dynamo] Fix several bugs & code refactor in RangeVariable (#89322)
Fix bug in [7k github models](https://github.com/pytorch/torchdynamo/issues/1884): https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_clovaai_stargan_v2.py
```
E       TypeError: 'list' object cannot be interpreted as an integer
E
E       from user code:
E          File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_clovaai_stargan_v2.py", line 335, in forward
E           idx = torch.LongTensor(range(y.size(0)))
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89322
Approved by: https://github.com/jansel
2022-11-23 19:44:48 +00:00
Yanbo Liang
9eed6b7f9a [Dynamo] Several fixes on TensorVariable & TorchVariable (#89486)
This is a group of bug fixes for [7k github models](https://github.com/pytorch/torchdynamo/issues/1884), it would fix 30+ model tests.
* Support ```tensor.type()```.
* Support ```tensor.get_device()```.
* Support ```torch.nn.functional._Reduction.get_enum```.
* Support ```torch._utils._get_device_index()```.
* Fall back on ```tensor.data_ptr()```:
  * ```FakeTensor``` always returns 0.
  * Without fake tensor propagation, we ```clone``` the input tensor, so tracking the original ```data_ptr``` makes no sense. And this isn't a very popular API anyway.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89486
Approved by: https://github.com/jansel
2022-11-23 19:39:45 +00:00
Akshit Khurana
7006ac6ee5 [Dynamo] Fix Tensor.T trace (#88642)
Summary:

Tensor.T was treated as a GetAttr, and "example_value" wasn't propagated.

Via https://pytorch.org/docs/stable/tensors.html#torch.Tensor.T
> If n is the number of dimensions in x, x.T is equivalent to
> x.permute(n-1, n-2, ..., 0).
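
That is, for a 2-D tensor:

```python
import torch

x = torch.randn(2, 3)
assert torch.equal(x.T, x.permute(1, 0))
```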

Fixes pytorch/torchdynamo#1476

Test Plan:

pytest test/dynamo/test_functions.py::FunctionTests::test_T

Differential Revision: [D41130306](https://our.internmc.facebook.com/intern/diff/D41130306)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88642
Approved by: https://github.com/tugsbayasgalan, https://github.com/yanboliang, https://github.com/jansel
2022-11-09 23:44:30 +00:00
Michael Voznesensky
bc19494814 [Dynamo] Symbolic shape guards (#87570)
**Introduces symbolic shape guards into dynamo.**

In this PR, we take the existing fake tensor infra and plumbing in dynamo and start passing a shape_env around. This shape_env is not yet plumbed down to the middle layers / backend; for now it only collects expressions from frontend invocations. We then translate these expressions into guards at the point where we install the other guards throughout dynamo, and add them to check_fn.
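
A sketch of where such guards arise (illustrative):

```python
import torch

def f(x):
    # The branch condition becomes a symbolic expression over x's shape;
    # it is then installed in check_fn alongside dynamo's other guards.
    if x.shape[0] > 2:
        return x * 2
    return x + 1
```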

Part 1 of https://docs.google.com/document/d/1QJ-M4zfMkD-fjHIqW089RptjLl9EgozZGCceUbvmgfY/edit#

cc @jansel @lezcano @fdrocha @mlazos @soumith @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87570
Approved by: https://github.com/ezyang
2022-10-25 21:15:40 +00:00
Jason Ansel
8f71e8de7e Sync changes from pytorch/torchdynamo, enable tests (#86950)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86950
Approved by: https://github.com/Chillee
2022-10-14 23:08:58 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`
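
In user code, the rename looks like:

```python
# before the move:
#   import torchdynamo
#   import torchinductor
# after the move:
import torch._dynamo
import torch._inductor
```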

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00