Commit Graph

260 Commits

Author SHA1 Message Date
kshitij12345
e137ac6c59 [dynamo][torch_np] support linalg, random and fft module (#105320)
Support tracing through `np.linalg` with `torch_np` installed. Will update with other modules if this approach makes sense.
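
As a quick illustration, the kind of code this enables (a hypothetical usage sketch, assuming `torch_np` is installed so dynamo can redirect the call):

```python
import numpy as np
import torch

@torch.compile
def f(x: torch.Tensor):
    a = x.numpy()
    return np.linalg.norm(a)  # np.linalg call traced through torch_np instead of breaking the graph

f(torch.ones(3))
```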

TODO:
* [x] Add tests for `fft` and `random`.

Fixes https://github.com/pytorch/pytorch/issues/105269

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105320
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-07-19 11:06:37 +00:00
kshitij12345
671a21926f [torch_np] update test to use ones_like instead of empty_like (#105453)
This test fails locally (probably because deterministic mode is not on by default).

We replace the use of `empty_like` with `ones_like`, as this test doesn't actually need `empty_like`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105453
Approved by: https://github.com/lezcano
2023-07-18 13:13:11 +00:00
Animesh Jain
88aa51fe85 [dynamo] Support defaults for namedtuples (#105341)
Fixes https://github.com/pytorch/pytorch/issues/103008
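
For context, the now-supported pattern looks roughly like this (illustrative):

```python
from collections import namedtuple

import torch

Point = namedtuple("Point", ["x", "y"], defaults=[0])  # `y` defaults to 0

@torch.compile
def f(t):
    p = Point(t)       # relying on the default for `y` previously broke tracing
    return p.x + p.y

f(torch.ones(2))
```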

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105341
Approved by: https://github.com/jansel
2023-07-17 23:52:57 +00:00
Mengwei Liu
fb376f80a2 [retry][dynamo][numpy] Add support for np.dtype (#105034)
Original PR: #103546

Trying to support numpy function calls in dynamo, with a numpy dtype as an argument.

For example:

```
def fn(x: int):
    return np.empty_like(x, dtype=np.float64)
```

This currently doesn't work because `NumpyVariable` doesn't implement `as_proxy()`. The idea in `as_proxy()` for now is to convert `np.float64` and other `np.<dtype>` objects into `str` and then feed that into the corresponding `torch_np` method. The assumption here is that all `torch_np` methods that take a `dtype` kwarg can also take a `str` as `dtype`. This assumption holds for `numpy`.

For the previous example, we convert `np.float64` to `"float64"` in `as_proxy()` and then feed it into the `torch_np.empty_like()` method.
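
A minimal sketch of that conversion idea (illustrative, not the actual `as_proxy()` implementation):

```python
import numpy as np

def dtype_as_str(np_dtype) -> str:
    # e.g. np.float64 -> "float64"; torch_np-style APIs also accept the string form
    return np.dtype(np_dtype).name

assert dtype_as_str(np.float64) == "float64"
```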

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105034
Approved by: https://github.com/voznesenskym
2023-07-14 21:36:36 +00:00
Animesh Jain
9647a251cb [dynamo] Dataclass variables with default field (#104840)
The main complexity comes from the `__init__` function of dataclass variables, which looks something like this:

```
[2023-07-10 05:01:29,548] torch._dynamo.symbolic_convert: [DEBUG] INLINING <code object __init__ at 0x7f7015154450, file "<string>", line 2>
  3           0 LOAD_FAST                1 (b)
              2 LOAD_FAST                0 (self)
              4 STORE_ATTR               0 (b)

  4           6 LOAD_FAST                2 (named_tensors)
              8 LOAD_DEREF               0 (_HAS_DEFAULT_FACTORY)
             10 IS_OP                    0
             12 POP_JUMP_IF_FALSE       20
             14 LOAD_DEREF               1 (_dflt_named_tensors)
             16 CALL_FUNCTION            0
             18 JUMP_FORWARD             2 (to 22)
        >>   20 LOAD_FAST                2 (named_tensors)
        >>   22 LOAD_FAST                0 (self)
             24 STORE_ATTR               1 (named_tensors)
             26 LOAD_CONST               0 (None)
             28 RETURN_VALUE
```

There are multiple issues:
* The VariableBuilder call in functions.py was wrong. We were passing `*options` as positional args.
* We were not setting a source while tracking the new object. This left the dataclass variable without a source, even though it has new variables in its closure, as seen in the above bytecode.
* The above bytecode contains `IS_OP`, which introduces more cases to handle.
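
For reference, a dataclass along these lines (hypothetical field names chosen to match the bytecode above) generates such an `__init__`:

```python
from dataclasses import dataclass, field

@dataclass
class Output:
    b: int
    named_tensors: dict = field(default_factory=dict)  # the default factory is what emits the IS_OP check
```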

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104840
Approved by: https://github.com/jansel
2023-07-13 01:25:57 +00:00
PyTorch MergeBot
f01deb23d5 Revert "[dynamo][numpy] Add support for np.dtype (#103546)"
This reverts commit 0710791929.

Reverted https://github.com/pytorch/pytorch/pull/103546 on behalf of https://github.com/voznesenskym due to Failed on bench, unclear why bench test did not run on CI ([comment](https://github.com/pytorch/pytorch/pull/103546#issuecomment-1631203461))
2023-07-11 17:23:11 +00:00
Mengwei Liu
0710791929 [dynamo][numpy] Add support for np.dtype (#103546)
## Problem

Trying to support numpy function calls in dynamo, with a numpy dtype as an argument.

For example:

```
def fn(x: int):
    return np.empty_like(x, dtype=np.float64)
```

## Solution

This currently doesn't work because `NumpyVariable` doesn't implement `as_proxy()`. The idea in `as_proxy()` for now is to convert `np.float64` and other `np.<dtype>` objects into `torch.dtype` and then feed that into the corresponding `torch_np` method.

For the previous example, we convert `np.float64` to `torch.float64` in `as_proxy()` and then feed it into the `torch_np.empty_like()` method.
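
A minimal sketch of that mapping idea (illustrative names, not the actual implementation):

```python
import numpy as np
import torch

NP_TO_TORCH_DTYPE = {
    np.float32: torch.float32,
    np.float64: torch.float64,
    np.int64: torch.int64,
}

def as_torch_dtype(np_dtype) -> torch.dtype:
    return NP_TO_TORCH_DTYPE[np_dtype]

assert as_torch_dtype(np.float64) == torch.float64
```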

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103546
Approved by: https://github.com/ezyang
2023-07-11 06:29:15 +00:00
Yukio Siraichi
40b8d10d5e Re-land: Turn translation validation on for tests and accuracy runs by default. (#104467)
Re-landing: #103611

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104467
Approved by: https://github.com/malfet
2023-07-05 19:01:50 +00:00
PyTorch MergeBot
a2a8b4d415 Revert "Turn translation validation on for tests and accuracy runs by default. (#103611)"
This reverts commit e311bed2a8.

Reverted https://github.com/pytorch/pytorch/pull/103611 on behalf of https://github.com/malfet due to Broke inductor tests ([comment](https://github.com/pytorch/pytorch/pull/103611#issuecomment-1614850276))
2023-06-30 15:54:18 +00:00
Yukio Siraichi
e311bed2a8 Turn translation validation on for tests and accuracy runs by default. (#103611)
This PR turns translation validation on by default for tests and accuracy benchmark
runs. It also installs Z3 on CI.

The main changes are:

- Add `--no-translation-validation` as an option in _test/run_tests.py_
    - Set `PYTORCH_TEST_WITH_TV` environment variable
- Add `TEST_WITH_TV` variable in _torch/testing/_internal/common_utils.py_
- Turn translation validation on for accuracy benchmarks in _benchmarks/dynamo/common.py_
- Add Z3 installation on CI scripts
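
A sketch of how the environment-variable gating might look (assumed logic, inferred from the variable names above):

```python
import os

# PYTORCH_TEST_WITH_TV would be set by run_tests.py unless --no-translation-validation is passed
TEST_WITH_TV = os.getenv("PYTORCH_TEST_WITH_TV") == "1"
```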

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103611
Approved by: https://github.com/ezyang
2023-06-30 01:32:21 +00:00
Edward Z. Yang
bc6ec97e02 Switch dynamic_shapes to True by default (#103597)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103597
Approved by: https://github.com/voznesenskym
2023-06-15 15:16:20 +00:00
Mengwei Liu
96c23fe212 [dynamo][numpy] Add support for builtin functions (#103457)
In order to be able to run stuff like:
```
def f(x):
    a = x.numpy()
    return a + a
```
This PR adds a branch in `BuiltinVariable` to handle `NumpyNdarrayVariable` case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103457
Approved by: https://github.com/ezyang
2023-06-15 09:18:45 +00:00
Edward Z. Yang
3596a853b4 Always apply new_empty special case in Dynamo (#103378)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103378
Approved by: https://github.com/anijain2305
2023-06-13 19:49:35 +00:00
Mengwei Liu
2eac8bd2b8 [dynamo][numpy] Support ndarray methods (#97537)
This PR adds universal support for ndarray methods. After #100839, each `NumpyNdarrayVariable` should wrap a `torch.Tensor`. This PR adds a `numpy_method_wrapper` which converts the `torch.Tensor` to a `torch_np.ndarray` and then calls the numpy ndarray method. We then convert the result back to a `torch.Tensor` (returning it as-is if the value is not ndarray-like).
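
A minimal sketch of the wrapping idea (assuming a `torch_np` module in scope; names and constructor are illustrative):

```python
import torch
import torch_np  # assumed available; provides a torch-backed ndarray

def numpy_method_wrapper(method_name):
    def wrapper(tensor: torch.Tensor, *args, **kwargs):
        arr = torch_np.ndarray(tensor)                    # wrap the tensor
        out = getattr(arr, method_name)(*args, **kwargs)  # call the ndarray method
        # unwrap back to torch.Tensor when the result is ndarray-like
        return out.tensor if isinstance(out, torch_np.ndarray) else out
    return wrapper
```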

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97537
Approved by: https://github.com/ezyang
2023-06-12 17:21:31 +00:00
Chien-Chin Huang
fff5daf3ee [Dynamo] Support methods of NamedTuple (#103217)
This PR adds support for calling NamedTuple methods. https://github.com/pytorch/pytorch/issues/91662
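
For illustration, the kind of pattern this enables (a hypothetical example):

```python
from typing import NamedTuple

import torch

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

@torch.compile
def f(p: Point):
    return p._replace(x=p.x + 1)  # NamedTuple method call, traced without a graph break

f(Point(torch.zeros(2), torch.ones(2)))
```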

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103217
Approved by: https://github.com/williamwen42
2023-06-10 00:01:40 +00:00
shibo19
c24b61bc20 Enable torch._C._get_privateuse1_backend_name in Dynamo tracing (#103141)
Fixes https://github.com/pytorch/pytorch/issues/103125
torch._C._get_privateuse1_backend_name() causes a graph break, so I added it to the supported functions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103141
Approved by: https://github.com/yanboliang
2023-06-09 09:19:33 +00:00
Bin Bao
39bf86ae90 [dynamo] Support OrderedDict constructor with kwargs (#103192)
Summary: To solve an issue in https://github.com/pytorch/pytorch/issues/102878.
The solution follows the example in https://github.com/pytorch/pytorch/pull/98660.
It only solves the problem for a standard OrderedDict; a user-defined CustomDict still has a separate problem.
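
The now-supported pattern, roughly (illustrative):

```python
from collections import OrderedDict

import torch

@torch.compile
def f(x):
    return OrderedDict(a=x, b=x + 1)  # kwargs constructor no longer breaks the graph

f(torch.ones(2))
```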

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103192
Approved by: https://github.com/yanboliang
2023-06-08 12:14:21 +00:00
fduwjj
52e310f7a8 Enable torch.nn.init._calculate_correct_fan in dynamo tracing (#103182)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103182
Approved by: https://github.com/yanboliang
2023-06-08 06:13:49 +00:00
fduwjj
c454534d25 Enable torch.get_autocast_gpu_dtype in Dynamo tracing (#103166)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103166
Approved by: https://github.com/williamwen42, https://github.com/yanboliang
2023-06-07 21:31:45 +00:00
fduwjj
b5021ba981 Enable torch.is_complex in Dynamo tracing (#103154)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103154
Approved by: https://github.com/yanboliang
2023-06-07 20:56:46 +00:00
Mengwei Liu
c304fddf68 [dynamo][numpy] Support graph break for numpy ndarray (#100839)
Issue: #93684

In previous PRs #95849 #99560 we redirect `numpy.*`, `<tensor>.numpy()` calls to `torch_np.*` methods and attributes, by creating `NumpyNdarrayVariable` for those calls.

We need to handle `NumpyNdarrayVariable` when graph break happens.

This PR did 2 things:
1. In `codegen.py` we made sure we can reconstruct the value wrapped by `NumpyNdarrayVariable` as a `torch_np.ndarray` on the stack whenever we recompile the subgraph.
2. In `builder.py` we can wrap the value as a `NumpyNdarrayVariable` and save it as a graph input.

-----

Starting from commit 6:

## A new design for supporting numpy in dynamo

In short, the core concept doesn't change: we still convert `numpy` API calls to `torch_np` API calls. However, instead of wrapping a `torch_np.ndarray` in `NumpyNdarrayVariable`, the new design wraps a `torch.Tensor`.

The reason for this change is that we need to keep `torch.Tensor` everywhere in the captured graph, so that it works well with the backend of dynamo. See discussions in https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/142 for details.

### Flow
This example shows how we think about dynamo working on a simple function:
```python
def f(x: torch.Tensor, y: torch.Tensor):
    a, b = x.numpy(), y.numpy()
    c = np.add(a, b)
    return torch.from_numpy(c)
```
```

              +------------+             +------------+
 torch.Tensor |            |numpy.ndarray|            |
 -------------- .numpy()   --------------|            |
              |            |             |            |             +------------------+
              +------------+             | numpy.add  |numpy.ndarray|                  |torch.Tensor
              +------------+             |            --------------| torch.from_numpy --------------
 torch.Tensor |            |numpy.ndarray|            |             |                  |
 -------------- .numpy()   --------------|            |             +------------------+
              |            |             |            |
              +------------+             +------------+

              +------------+             +----------------+
 torch.Tensor |            |torch.Tensor |                |
 -------------- .detach()  --------------|                |
              |            |             |                |                +----------------+            +------------+
              +------------+             |                |torch_np.ndarray|                |torch.Tensor|            |torch.Tensor
                                         | torch_np.add   -----------------| util.to_tensor -------------| .detach()  --------------
              +------------+             |                |                |                |            |            |
 torch.Tensor |            |torch.Tensor |                |                +----------------+            +------------+
 -------------- .detach()  --------------|                |
              |            |             |                |
              +------------+         |   +----------------+                                   |
                                     |                       wrapper on torch_np.add          |
                                     +--------------------------------------------------------+
```

### Approach

`torch_np` APIs can take both `torch_np.ndarray` and `torch.Tensor`. What we need to do is to have a wrapper for these APIs to convert the return value back to `torch.Tensor`. This way only the wrapper shows up in the captured graph, with `torch.Tensor`s as input and `torch.Tensor` as output.

If we have a graph break or we've traced to the end of the program, we need to inspect all the `NumpyNdarrayVariable` in the stack and convert them back to `numpy.ndarray`, to make sure the compiled version is still behaving the same as the eager version.
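
A minimal sketch of that wrapper idea (illustrative, not the actual implementation):

```python
import torch

def wrap_numpy_fn(torch_np_fn):
    def wrapped(*args, **kwargs):
        out = torch_np_fn(*args, **kwargs)  # torch_np APIs accept torch.Tensor inputs
        # convert an ndarray-like result back to torch.Tensor for the captured graph
        return out.tensor if hasattr(out, "tensor") else out
    return wrapped
```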

### Examples
Here's an example of the graph generated:

```python
def fn(x: np.ndarray, y: np.ndarray):
    a = x.real
    b = y.real
    torch._dynamo.graph_break()
    return np.add(a, 1), np.add(b, 1)
```

Graph generated:

```
[2023-05-16 10:31:48,737] torch._dynamo.output_graph.__graph: [DEBUG] TRACED GRAPH
 __compiled_fn_0 <eval_with_key>.0 opcode         name            target                                                      args                    kwargs
-------------  --------------  ----------------------------------------------------------  ----------------------  --------
placeholder    l_x_            L_x_                                                        ()                      {}
placeholder    l_y_            L_y_                                                        ()                      {}
call_function  from_numpy      <built-in method from_numpy of type object at 0x12b1fdc80>  (l_x_,)                 {}
call_function  from_numpy_1    <built-in method from_numpy of type object at 0x12b1fdc80>  (l_y_,)                 {}
call_function  attr_wrapper    <function attr_wrapper at 0x12e8693a0>                      (from_numpy, 'real')    {}
call_function  attr_wrapper_1  <function attr_wrapper at 0x12e8693a0>                      (from_numpy_1, 'real')  {}
output         output          output                                                      ((),)                   {}

[2023-05-16 10:31:48,908] torch._dynamo.output_graph.__graph: [DEBUG] TRACED GRAPH
 __compiled_fn_2 <eval_with_key>.1 opcode         name           target                                                      args                             kwargs
-------------  -------------  ----------------------------------------------------------  -------------------------------  --------
placeholder    l_a_           L_a_                                                        ()                               {}
placeholder    l_b_           L_b_                                                        ()                               {}
call_function  from_numpy     <built-in method from_numpy of type object at 0x12b1fdc80>  (l_a_,)                          {}
call_function  from_numpy_1   <built-in method from_numpy of type object at 0x12b1fdc80>  (l_b_,)                          {}
call_function  wrapped_add    <Wrapped function <original add>>                           (from_numpy, 1)                  {}
call_function  wrapped_add_1  <Wrapped function <original add>>                           (from_numpy_1, 1)                {}
output         output         output                                                      ((wrapped_add, wrapped_add_1),)  {}

```
### Changes

* `codegen.py`: reconstruct `numpy.ndarray` from `NumpyNdarrayVariable` by adding bytecode to call `utils.to_numpy_helper()`.
* `output_graph.py`: remove legacy code that did what `codegen.py` now does, but which only handled the return case, not the graph-break case.
* `utils.py`: added helpers to convert `numpy.ndarray` to `torch.Tensor` and vice versa. Also added a wrapper class that takes a function; in `__call__` it calls the function and converts its output to `torch.Tensor` (or a list of them).
* `builder.py`: add a method to wrap `numpy.ndarray` graph inputs into `NumpyNdarrayVariable`, by calling `torch.from_numpy` in the proxy.
* `misc.py`: `numpy` API calls go into `NumpyVariable`; we find the function with the same name in the `torch_np` module, then wrap it with the wrapper defined in `utils.py`.
* `tensor.py`, `torch.py`: proxy `tensor.numpy()` as `torch.detach()` but wrap it with `NumpyNdarrayVariable`. Similarly, `torch.from_numpy()` -> `torch.detach()` but wrapped with `TensorVariable`. In `NumpyNdarrayVariable`, do the same `torch_np.ndarray` to `torch.Tensor` wrapping for attributes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100839
Approved by: https://github.com/ezyang
2023-06-03 00:54:25 +00:00
Will Constable
e7cc41772d Add dynamo collections.deque support (#102412)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102412
Approved by: https://github.com/jansel, https://github.com/voznesenskym
2023-05-31 03:54:20 +00:00
Will Constable
c06d33ce43 Add dynamo itertools.combinations support (#102379)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102379
Approved by: https://github.com/jansel
2023-05-26 22:48:24 +00:00
Wanchao Liang
d40f4f12f6 [dynamo] add itertools.chain support (#102247)
This PR adds `itertools.chain` support to dynamo.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102247
Approved by: https://github.com/jansel
2023-05-25 21:26:09 +00:00
Michael Lazos
2434a205de Support unary not on lists (#102210)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102210
Approved by: https://github.com/anijain2305
2023-05-25 02:45:36 +00:00
Michael Lazos
23dbdd900f Full default dict support in dynamo (#102202)
Allows arbitrary defaultdict factories and construction of a defaultdict in a compiled function, needed for [this function](2e2a74670d/torch/utils/_foreach_utils.py (LL21C5-L21C395)) used to group params in the foreach optimizer.
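
Roughly the pattern this unlocks (illustrative):

```python
from collections import defaultdict

import torch

@torch.compile
def group_params(params):
    groups = defaultdict(list)   # arbitrary factory, constructed inside the compiled function
    for p in params:
        groups[p.dtype].append(p)
    return groups

group_params([torch.ones(2), torch.ones(2, dtype=torch.float64)])
```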

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102202
Approved by: https://github.com/yanboliang
2023-05-25 01:41:38 +00:00
Yanbo Liang
369a256381 [Dynamo] Remove cross import in dynamo unit tests (#100851)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100851
Approved by: https://github.com/jansel
2023-05-11 17:07:25 +00:00
PyTorch MergeBot
d98d95fb9f Revert "[Dynamo] Remove cross import in dynamo unit tests (#100851)"
This reverts commit c4bbeb5b8a.

Reverted https://github.com/pytorch/pytorch/pull/100851 on behalf of https://github.com/jeanschmidt due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/100851#issuecomment-1540646623))
2023-05-09 18:30:01 +00:00
Yanbo Liang
c4bbeb5b8a [Dynamo] Remove cross import in dynamo unit tests (#100851)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100851
Approved by: https://github.com/jansel
2023-05-08 20:16:57 +00:00
Larry Liu
f5853342ea [dynamo][numpy] Handle return value being numpy ndarray (#99560)
On top of #95849, this PR handles the special case where the return value is a numpy ndarray.

Consider the following example:

```
def f(x: torch.Tensor) -> np.ndarray:
    a = x.numpy()
    return a.T
```
With the previous PR alone this would error out, because we translate `a.T` into an attribute access on `torch_np.ndarray` (`.T`), which returns another `torch_np.ndarray`.

This PR handles this case, by conditionally converting a `torch_np.ndarray` to `np.ndarray` before returning, to match the original behavior.

The compiled version will be:

```
def f(x):
    ___tmp_0 = __compiled_fn_0(x)
    if isinstance(___tmp_0, torch_np.ndarray):
        return ___tmp_0.tensor.numpy()
    else:
        return ___tmp_0
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99560
Approved by: https://github.com/jansel, https://github.com/yanboliang
2023-04-27 16:18:35 +00:00
Larry Liu
687afeb686 [dynamo][numpy] Add NumpyTensorVariable to translate ndarray attribute calls to tensor attributes (#95849)
Issue: #93684

# Problem

Reduce graph breaks when dynamo compiles python functions containing numpy functions and ndarray operations.

# Design (as I know it)

* Use `torch_np.ndarray` (a wrapper of tensor) to back a `VariableTracker`: `NumpyTensorVariable`.
* Translate all attribute and method calls on ndarray to their `torch_np.ndarray` equivalents.

This PR adds `NumpyTensorVariable` and supports:
1. tensor to ndarray, ndarray to tensor
2. numpy functions such as numpy.meshgrid()
3. ndarray attributes such as `itemsize`, `stride`

Next PR will handle returning `np.ndarray` and add support for ndarray methods.
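
An example of what this lets dynamo trace (hypothetical):

```python
import torch

@torch.compile
def f(x: torch.Tensor):
    a = x.numpy()      # tensor -> ndarray
    return a.itemsize  # ndarray attribute, handled via NumpyTensorVariable

f(torch.ones(3))
```
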
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95849
Approved by: https://github.com/ezyang
2023-04-27 16:18:35 +00:00
Animesh Jain
3dcc7b396c [easy] iterate dict with sorted keys for accuracy checking (#99793)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99793
Approved by: https://github.com/jansel
2023-04-24 21:26:35 +00:00
Jason Ansel
f4354b2a5e [dynamo] Support dict kwargs constructor (#98660)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98660
Approved by: https://github.com/yanboliang
2023-04-20 15:40:00 +00:00
Jason Ansel
baa06790f8 Unbreak torch.compile on macos (#99119)
It seems like #96980 made torch.compile() completely ignore the `backend=` arg on macOS, rendering the entire API useless even if the user wasn't using mps tensors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99119
Approved by: https://github.com/msaroufim
2023-04-14 15:30:27 +00:00
Jason Ansel
0c162adfa8 [dynamo] Support callable() on user defined functions (#98662)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98662
Approved by: https://github.com/yanboliang
2023-04-11 05:43:46 +00:00
Yanbo Liang
9c7b03d51e [Dynamo] Fix bug of torch.is_floating_point & is_complex (#98393)
Fixes #95192

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98393
Approved by: https://github.com/ezyang
2023-04-05 20:58:16 +00:00
Jason Ansel
55afaa46a4 Support functools.partial and itertools.product (#98120)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98120
Approved by: https://github.com/anijain2305
2023-04-03 18:23:25 +00:00
Mark Saroufim
1b08a01361 Default to aot_eager for torch.compile on MPS (#96980)
Fixes https://github.com/pytorch/pytorch/issues/96976

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96980
Approved by: https://github.com/kulinseth, https://github.com/albanD, https://github.com/ZainRizvi
2023-03-25 14:21:39 +00:00
Yanbo Liang
12ab4f08b7 [Dynamo] No graph break on namedtuple and potential other functions (#96122)
```collections.namedtuple``` caused 40+ ```dynamo.export``` test failures in the 14k github models.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96122
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-03-07 08:00:21 +00:00
Yanbo Liang
6ca286df69 [Dynamo] Support call dict with list/tuple as input (#95928)
Fixes a Meta-internal use case.
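
I.e., patterns like this (illustrative):

```python
import torch

@torch.compile
def f(x):
    return dict([("a", x), ("b", x + 1)])  # dict() built from a list of pairs

f(torch.ones(2))
```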

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95928
Approved by: https://github.com/jansel
2023-03-04 05:52:33 +00:00
Yanbo Liang
b5ff41a47a [Dynamo] No graph break on calling dict & collections.OrderedDict() (#95250)
It's common to call ```dict()``` or ```collections.OrderedDict()``` inside a ```forward``` function, so we should not graph break on it.

This pattern has been used in many places including:
* The use case in [torchvision](928b05cad3/torchvision/models/_utils.py (L66-L73)).
* It causes ~100 model failures (nopython=True) in the 14k github models.
* Also it hits several Meta internal use cases.
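
A representative `forward` pattern (illustrative sketch):

```python
from collections import OrderedDict

import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self, x):
        out = OrderedDict()   # no longer a graph break
        out["feat"] = x + 1
        return out

torch.compile(Model())(torch.ones(2))
```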

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95250
Approved by: https://github.com/jansel
2023-02-23 09:03:07 +00:00
Yanbo Liang
4f257a507c [Dynamo] Support Python builtin sorted function (#94949)
Fixes #94750

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94949
Approved by: https://github.com/jansel, https://github.com/Skylion007
2023-02-16 21:27:11 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Cases where the rewrite would change semantics are kept unchanged, e.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
David Berard
3e6978172e [dynamo] Handle general tensor attributes with a getattr proxy node (#91840)
**Background:** Before this PR, support in dynamo for tensor attributes (e.g. `x.H`, `x.T`, ...) needed to be implemented individually, one by one. This could potentially lead to errors, e.g. if the implementation in [variables/tensor.py](21c7c7c72f/torch/_dynamo/variables/tensor.py (L160)) differed from a direct call to the attribute. For attributes that were not special-cased in tensor.py, dynamo tracing would fail. This PR adds generic support for tensor attributes that return tensors, without needing to handle them specially. (Notably, for x.real and x.imag, which previously weren't supported.)

**In this PR:** This directly creates a proxy node for a `"call_function"` node with `target=getattr`, and feeds it into wrap_fx_proxy. This will produce a TensorVariable for the attribute returned.

This also removes the implementations for H, T, mH, mT which were broken (previously `torch.relu(x.T)` would fail). They now fall back to this default implementation (for which `torch.relu(x.T)` passes).

**Further context**:

* Ed's original suggestion in [90463](https://github.com/pytorch/pytorch/pull/90463#discussion_r1043398340) is to use `torch.Tensor.H.__get__(x)`. I wasn't able to get this to work; fx compilation fails with `getset_descriptor does not have attribute __module__`. Basically, the `__module__` attribute, which is available on most python attributes, is not available on `getset_descriptor` objects. (i.e., these are implemented in C++ as attributes on torch.Tensor, so they don't obey some assumptions made by fx)
* Although tensor attributes and methods (like `x.relu()`) both go through this path, this PR should only handle attributes (e.g. see the `"getset_descriptor"` check in variables/tensor.py). Methods are already handled by GetAttrVariable.
* Prior to this PR, we already returned GetAttrVariables for unsupported attrs: the parent caller would catch the NotImplementedError and fall back to returning a GetAttrVariable. But if this GetAttrVariable was ever passed into a torch.\* function (as it could quite possibly be, since most of these attrs are tensors), it would fail because its proxy node would be missing an [example_value](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/utils.py#L1017). So: before, for some tensor x, `x.real` would work fine; but `torch.relu(x.real)` would fail.

**Testing**: added tests in test_misc.py for x.real, x.imag, x.T, x.real.T.
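
E.g., the following now traces end-to-end (a small sketch):

```python
import torch

@torch.compile
def f(x):
    return torch.relu(x.T) + x.real  # both attributes go through the generic getattr proxy

f(torch.randn(3, 3))
```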

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91840
Approved by: https://github.com/ezyang
2023-02-01 22:34:03 +00:00
David Berard
58acab4616 [dynamo] support [tensor].type(torch.FloatTensor) (#93043)
For some tensor x, x.type(torch.FloatTensor) essentially does the same thing as x.to(torch.float). x.type can be called with at least 3 types of inputs:
* a string "torch.FloatTensor"
* a dtype torch.float
* a tensor type torch.FloatTensor

the third option (torch.FloatTensor) fails in fx, because fx cannot trace torch.FloatTensor objects. So this PR replaces the torch.FloatTensor type with the string "torch.FloatTensor".

Why not fix this in fx? Well, it's possible, but I'm not sure a nice way to do it. We would want to update [torch.fx.node.BaseArgumentTypes](d88bc38b0c/torch/fx/node.py (L17)) to contain torch.FloatTensor etc. We could hard-code a list of tensor types there (the types vary depending on build type, e.g. whether or not cuda tensors are available), but that's not great in case our hardcoded list differs from the actual list registered by python_tensor.cpp. Another option is to dynamically populate the list of types with `Union[tuple(...)])`, and fill the tuple with `torch._tensor_classes` (which is directly populated by python_tensor.cpp), but apparently this breaks most typecheckers.
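
For reference, the three call forms (eager behavior):

```python
import torch

x = torch.randn(3)
a = x.type("torch.FloatTensor")  # string form
b = x.type(torch.float)          # dtype form
c = x.type(torch.FloatTensor)    # tensor-type form, rewritten to the string form for fx
assert a.dtype == b.dtype == c.dtype == torch.float32
```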

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93043
Approved by: https://github.com/jansel
2023-01-27 21:27:13 +00:00
Yanbo Liang
c9ce0e63e8 [Dynamo] Support context wrapping(e.g, torch.no_grad) on nested functions w/o closure (#92922)
Fixes 14k github models: https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_ELEKTRONN_elektronn3.py
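
The failing pattern, roughly (illustrative):

```python
import torch

@torch.compile
def outer(x):
    @torch.no_grad()
    def inner(y):  # nested function without a closure, wrapped in a context manager
        return y + 1
    return inner(x)

outer(torch.ones(2))
```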

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92922
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-01-26 04:23:35 +00:00
Will Constable
e665f03ad8 Fix dynamo func defaults handling for torch.device, size, dtype (#92880)
Previously, these torch types were not handled in the wrap_bound_arg
handler.

Add a unit test and verify it is fixed.

Fixes #91084
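
E.g. (illustrative):

```python
import torch

@torch.compile
def f(x, device=torch.device("cpu"), dtype=torch.float):  # torch-typed defaults now wrapped correctly
    return x.to(device=device, dtype=dtype)

f(torch.ones(2, dtype=torch.float64))
```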

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92880
Approved by: https://github.com/ezyang
2023-01-24 21:50:43 +00:00
Yanbo Liang
0ab4ab9f8d [Dynamo] Fix calling UserDefinedObject.func should pass self object (#92050)
Fixes #90834

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92050
Approved by: https://github.com/jansel
2023-01-21 05:47:01 +00:00
Will Constable
6cfaa92239 Handle tensor default func args when inlining (#90575)
Handle tensor default func/method args when inlining.

Previously, when inlining a function, its default arguments were only wrapped with VariableTrackers if non-tensor. Now, tensor default args are also handled by adding them to the parent InstructionTranslator as an attribute.

- also patches up a missing source in nnmodule call_function, needed to properly guard on a default arg in its methods
- adds a new 'DefaultsSource' type which guards either a `__defaults__` or `__kwdefaults__` entry on a function

Fixes #90361  https://github.com/pytorch/torchdynamo/issues/1968
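
The kind of case this fixes (illustrative sketch):

```python
import torch

W = torch.ones(3)

def inner(x, w=W):  # tensor default arg, now guarded via DefaultsSource
    return x * w

@torch.compile
def f(x):
    return inner(x)  # inlining `inner` must wrap the tensor default

f(torch.ones(3))
```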

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90575
Approved by: https://github.com/voznesenskym
2023-01-12 05:04:18 +00:00
Yanbo Liang
490c1cf650 [Dynamo] Support torch.get_default_dtype (#89790)
Fixes https://github.com/pytorch/torchdynamo/issues/1930

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89790
Approved by: https://github.com/soumith
2022-12-19 04:14:11 +00:00
Yanbo Liang
5c133c5744 [Dynamo] Supports two torch.distributed.* functions (#90683)
Fixes Meta internal user cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90683
Approved by: https://github.com/jansel
2022-12-13 19:06:38 +00:00
Michael Lazos
76f440f20a [dynamo] Rewrite inplace addcdiv and inplace add (#90330)
Rewrite inplace addcdiv into a div, a mul, and an inplace add to avoid a graph break.
Rewrite inplace add into a mul and an inplace add to avoid a graph break.

Needed to close optimizer graph breaks
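
The rewrite, roughly (an equivalence sketch, not the exact generated code):

```python
import torch

p = torch.zeros(3)
t1, t2, v = torch.ones(3), torch.full((3,), 2.0), 0.5

q = p.clone()
q.addcdiv_(t1, t2, value=v)  # original in-place op
p.add_(t1.div(t2).mul(v))    # rewritten form: div, mul, then in-place add
assert torch.allclose(p, q)
```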

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90330
Approved by: https://github.com/jansel
2022-12-08 21:19:23 +00:00
Michael Lazos
2d9267ba30 [dynamo] Rewrite addcdiv in dynamo to its constituent ops (#90227)
This avoids a graph break when `value` is used. This fixes a graph break in the variants of Adam and Adagrad optimizers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90227
Approved by: https://github.com/jansel
2022-12-06 05:08:44 +00:00
Yanbo Liang
3916d729c8 [Dynamo] tensor.type() should return tensor types with CPU and GPU variants (#90021)
Fix errors from [7k github models](https://github.com/pytorch/torchdynamo/issues/1884)
```
Traceback (most recent call last):
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1062, in get_fake_value
    return wrap_fake_exception(
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 739, in wrap_fake_exception
    return fn()
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1063, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1112, in run_node
    raise RuntimeError(
RuntimeError: Failed running call_function <function einsum at 0x7fd8f246a4c0>(*('i,j->ij', FakeTensor(FakeTensor(..., device='meta', size=(4,)), cpu), FakeTensor(FakeTensor(..., device='meta', size=(2,)), cuda:0)), **{}):
Unhandled FakeTensor Device Propagation for aten.mul.Tensor, found two different devices cpu, cuda:0
(scroll up for backtrace)
```

The root cause is: ```tensor.type()``` should return ```torch.cuda.FloatTensor``` rather than ```torch.FloatTensor``` if it's on GPU.
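
I.e., in eager (assuming a CUDA build):

```python
import torch

torch.randn(2).type()                 # 'torch.FloatTensor'
torch.randn(2, device="cuda").type()  # 'torch.cuda.FloatTensor'
```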

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90021
Approved by: https://github.com/jansel
2022-12-02 18:57:43 +00:00
Yanbo Liang
37e46a5035 [Dynamo] Fix several bugs & code refactor in RangeVariable (#89322)
Fix bug in [7k github models](https://github.com/pytorch/torchdynamo/issues/1884): https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_clovaai_stargan_v2.py
```
E       TypeError: 'list' object cannot be interpreted as an integer
E
E       from user code:
E          File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_clovaai_stargan_v2.py", line 335, in forward
E           idx = torch.LongTensor(range(y.size(0)))
```
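
The failing pattern, reduced (illustrative):

```python
import torch

@torch.compile
def f(y):
    return torch.LongTensor(range(y.size(0)))  # range over a size previously hit this TypeError

f(torch.randn(4, 3))
```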

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89322
Approved by: https://github.com/jansel
2022-11-23 19:44:48 +00:00
Yanbo Liang
9eed6b7f9a [Dynamo] Several fixes on TensorVariable & TorchVariable (#89486)
This is a group of bug fixes for [7k github models](https://github.com/pytorch/torchdynamo/issues/1884); it fixes 30+ model tests.
* Support ```tensor.type()```.
* Support ```tensor.get_device()```.
* Support ```torch.nn.functional._Reduction.get_enum```.
* Support ```torch._utils._get_device_index()```.
* Fallback ```tensor.data_ptr()```.
  * ```FakeTensor``` always returns 0
  * Without fake tensor propagation, we ```clone``` the input tensor, so it makes no sense to track the original ```data_ptr```. And I don't think this is a very popular API.
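
Eager reference for the first two behaviors (what dynamo now matches):

```python
import torch

x = torch.randn(2)
assert x.type() == "torch.FloatTensor"
assert x.get_device() == -1  # CPU tensors report device index -1
```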

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89486
Approved by: https://github.com/jansel
2022-11-23 19:39:45 +00:00
Akshit Khurana
7006ac6ee5 [Dynamo] Fix Tensor.T trace (#88642)
Summary:

Tensor.T treated T as a GetAttr and didn't propagate "example_value"

Via https://pytorch.org/docs/stable/tensors.html#torch.Tensor.T
> If n is the number of dimensions in x, x.T is equivalent to
> x.permute(n-1, n-2, ..., 0).

Fixes pytorch/torchdynamo#1476
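
The documented equivalence, for reference:

```python
import torch

x = torch.randn(2, 3)
assert torch.equal(x.T, x.permute(1, 0))  # in general, x.T == x.permute(n-1, ..., 0)
```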

Test Plan:

pytest test/dynamo/test_functions.py::FunctionTests::test_T


Differential Revision: [D41130306](https://our.internmc.facebook.com/intern/diff/D41130306)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88642
Approved by: https://github.com/tugsbayasgalan, https://github.com/yanboliang, https://github.com/jansel
2022-11-09 23:44:30 +00:00
Michael Voznesensky
bc19494814 [Dynamo] Symbolic shape guards (#87570)
**Introduces symbolic shape guards into dynamo.**

In this PR, we take the existing fake tensor infra and plumbing in dynamo and we start passing a shape_env around. This shape_env does not get plumbed down to middle layers / backend yet - it only collects expressions from frontend invocations at the moment. We then translate these expressions into guards at the point where we take other guards installed throughout dynamo - and add them to check_fn.
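
A sketch of the kind of program whose shape expressions become guards (illustrative):

```python
import torch

@torch.compile(dynamic=True)
def f(x):
    if x.shape[0] > 2:  # a shape expression like s0 > 2 ends up as a guard in check_fn
        return x.sum()
    return x

f(torch.randn(4))
```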

Part 1 of https://docs.google.com/document/d/1QJ-M4zfMkD-fjHIqW089RptjLl9EgozZGCceUbvmgfY/edit#

cc @jansel @lezcano @fdrocha @mlazos @soumith @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87570
Approved by: https://github.com/ezyang
2022-10-25 21:15:40 +00:00
Jason Ansel
8f71e8de7e Sync changes from pytorch/torchdynamo, enable tests (#86950)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86950
Approved by: https://github.com/Chillee
2022-10-14 23:08:58 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00