Commit Graph

86 Commits

Jon Chuang
6e770c0dda [dynamo] Add itertools.repeat via polyfill (#110953)
Fixes https://github.com/pytorch/pytorch/issues/110286
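For illustration, a minimal sketch of the behavior this enables; the polyfill shown is illustrative only, not the PR's actual code:

```python
import itertools
import torch

# A generator of roughly this shape is what a polyfill for the C-implemented
# itertools.repeat can look like, so dynamo can inline it instead of breaking.
def repeat_polyfill(obj, times=None):
    if times is None:
        while True:
            yield obj
    else:
        for _ in range(times):
            yield obj

@torch.compile(backend="eager")
def fn(x):
    acc = x
    for v in itertools.repeat(2, 3):  # traced without a graph break after this PR
        acc = acc + v
    return acc

print(fn(torch.ones(3)))  # tensor([7., 7., 7.])
```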

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110953
Approved by: https://github.com/ezyang
2023-10-10 20:40:33 +00:00
Animesh Jain
e1f0f9c64e [dynamo][easy] Move code from GetAttrVariable to a suitable place (#110535)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110535
Approved by: https://github.com/jansel
2023-10-08 22:37:34 +00:00
Jon Chuang
844ea6408b feat(dynamo): handle accumulate kwargs ("func", "initial") (#110686)
Follow up to: https://github.com/pytorch/pytorch/pull/110683
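A small usage sketch of the newly handled keyword-argument form (illustrative only):

```python
import itertools
import torch

@torch.compile(backend="eager")
def fn(x):
    out = x
    # `func` and `initial` passed as keyword arguments, the case handled here
    for v in itertools.accumulate([1, 2, 3], func=lambda a, b: a + b, initial=10):
        out = out + v
    return out

print(fn(torch.zeros(3)))
```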

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110686
Approved by: https://github.com/ezyang
2023-10-08 07:06:52 +00:00
Animesh Jain
58637c4b43 [dynamo] Remove SuperSource (#110475)
The motivation for removing this is already present in the pre-PR comments. Copying it here:

~~~
# NB - SuperSource is a weird one.
# it is our only source with 2 bases, so we use the object
# as the base, rather than the type, since an invocation
# like super(Foo, foo) is represented here, the source object base is more spiritually
# aligned with the instance, rather than the type.
# This whole construction is questionable tho, and we should probably find a way to
# avoid this exception to our otherwise nice source parentage invariant.
~~~

Instead of using super(a, b), we can use `type(b).__mro__[index]`.
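A minimal illustration of the equivalence this relies on (plain Python, no dynamo involved):

```python
class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        return "child"

obj = Child()
# super(Child, obj).greet() resolves to the next class after Child in obj's MRO;
# the same method can be reached by indexing the MRO directly.
index = type(obj).__mro__.index(Child) + 1
assert super(Child, obj).greet() == type(obj).__mro__[index].greet(obj) == "base"
```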

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110475
Approved by: https://github.com/jansel
2023-10-08 04:45:06 +00:00
Jon Chuang
9b55194f81 fix(dynamo): Incorrect accumulate implementation, bad tests (#110683)
Root cause of: https://github.com/pytorch/pytorch/issues/110287

Fixed many tests that didn't actually test due to unreliability of `CompileCounter.frame_count` in detecting graph breaks: https://github.com/pytorch/pytorch/issues/110730
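A rough sketch of why `frame_count` alone can be misleading (illustrative, assuming `torch._dynamo.testing.CompileCounter`):

```python
import torch
from torch._dynamo.testing import CompileCounter

cnt = CompileCounter()

@torch.compile(backend=cnt)
def fn(x):
    x = x + 1
    torch._dynamo.graph_break()  # splits the function into two compiled frames
    return x * 2

fn(torch.ones(3))
# frame_count only counts frames dynamo actually compiled; a frame it skips
# entirely never shows up, so tests keyed on it can miss real graph breaks.
print(cnt.frame_count, cnt.op_count)
```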

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110683
Approved by: https://github.com/voznesenskym
2023-10-06 23:07:56 +00:00
Yanbo Liang
9bc5e10899 [New][1/N] Dynamo skipfiles refactor (#110330)
This is the replacement of #109567. It preserves all existing semantics and focuses only on API (for developers) and code structure changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110330
Approved by: https://github.com/ezyang
2023-10-03 16:50:33 +00:00
atalman
b253fc9c93 Revert "[1/N] Dynamo skipfiles refactor (#109567)" (#110296)
This reverts commit 84c5435b29.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110296
Approved by: https://github.com/yanboliang
2023-09-29 20:35:46 +00:00
Yanbo Liang
84c5435b29 [1/N] Dynamo skipfiles refactor (#109567)
This is 1/N of the dynamo skipfiles/allowed_functions refactor, the major change in this PR includes:
* Refactor & define the [skipfiles rules](https://github.com/pytorch/pytorch/pull/109567/files#diff-5aa3ce9db729bf0901ea97a5d3cc51924cc8575d9c516c1c8f572a35de92544aR56) and interface
* For every ```skipfiles.check```, we return both the check result and the skip/inline reason and log them for debugging (see the sketch after this list).
* We found several latent issues/bugs and incorrect implementations in the codebase, but I'm planning to fix them in follow-up PRs to keep the refactor decoupled from the bug fixes.
* More details in the inline comments.
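A hypothetical sketch of a check that returns a verdict together with a reason, in the spirit described above (names are illustrative, not the PR's actual API):

```python
from dataclasses import dataclass

@dataclass
class SkipResult:
    skipped: bool
    reason: str

def check(filename: str) -> SkipResult:
    # illustrative rules only; the real rule set lives in dynamo's skipfiles module
    if "torch/_dynamo" in filename:
        return SkipResult(True, "dynamo internals are always skipped")
    if filename.endswith("test_example.py"):
        return SkipResult(False, "test files are inlined")
    return SkipResult(False, "default: inline user code")

result = check("torch/_dynamo/utils.py")
print(result.skipped, result.reason)
```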

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109567
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/anijain2305
2023-09-28 18:36:46 +00:00
PyTorch MergeBot
75462fd870 Revert "[1/N] Dynamo skipfiles refactor (#109567)"
This reverts commit f8e0ebec8c.

Reverted https://github.com/pytorch/pytorch/pull/109567 on behalf of https://github.com/huydhn due to Many jobs are failing in trunk after this with FILENAME_ALLOWLIST is not defined error f8e0ebec8c. This looks like a landrace ([comment](https://github.com/pytorch/pytorch/pull/109567#issuecomment-1738344950))
2023-09-28 02:22:22 +00:00
Yanbo Liang
f8e0ebec8c [1/N] Dynamo skipfiles refactor (#109567)
This is 1/N of the dynamo skipfiles/allowed_functions refactor, the major change in this PR includes:
* Refactor & define the [skipfiles rules](https://github.com/pytorch/pytorch/pull/109567/files#diff-5aa3ce9db729bf0901ea97a5d3cc51924cc8575d9c516c1c8f572a35de92544aR56) and interface
* For every ```skipfiles.check```, we return both the check result and the skip/inline reason and log them for debugging.
* We found several latent issues/bugs and incorrect implementations in the codebase, but I'm planning to fix them in follow-up PRs to keep the refactor decoupled from the bug fixes.
* More details in the inline comments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109567
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/anijain2305
2023-09-28 01:21:59 +00:00
Michael Voznesensky
a902150a1e [Easy] ConstantVariable() -> .create (#109896)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109896
Approved by: https://github.com/ezyang
2023-09-22 22:30:15 +00:00
Michael Lazos
24ba4b7059 [dynamo][__torch_function__ 1/n] Add getset descriptor and __get__ vars (#109542)
Adds the MethodWrapperVariable and GetSetDescriptorVariable types. These are used in `__torch_function__` tracing to represent attribute reads (`__get__`) and to compare unbound methods (the `func` argument when `__torch_function__` is dispatched from a method call).
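For context, the two CPython object kinds these new variable types model (plain Python illustration):

```python
import torch

# A getset descriptor: attribute reads like `t.shape` go through its __get__.
print(type(torch.Tensor.shape))   # <class 'getset_descriptor'>

# A method-wrapper: the bound form of a C-level slot method; comparing the
# underlying unbound function identifies which method triggered dispatch.
print(type((1).__add__))          # <class 'method-wrapper'>
print((1).__add__.__name__, (1).__add__.__self__)
```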

towards tracing for https://github.com/pytorch/pytorch/issues/93723

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109542
Approved by: https://github.com/jansel
2023-09-22 10:39:15 +00:00
weifengpy
9021fb8dac [dynamo] implement custom dict variable as a general solution for HF's ModelOutput class (#105044)
Before the PR, for HF's ModelOutput class, we used dicts.py/DataClassVariable with our own implementations of `__getitem__`, `__setattr__`, and `__setitem__`. There is a risk that the ModelOutput logic may change, since it is user code.

After the PR, we inline `__getitem__`, `__setattr__`, and `__setitem__` using dicts.py/CustomizedDictVariable, so the traced logic always matches the user code.
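A hedged sketch of the kind of dict-like dataclass this covers (a stand-in, not HF's actual ModelOutput):

```python
from dataclasses import dataclass, fields
import torch

@dataclass
class TinyOutput:
    logits: torch.Tensor = None

    def __getitem__(self, key):
        # string keys behave like attribute access, integer keys index the fields
        if isinstance(key, str):
            return getattr(self, key)
        return getattr(self, fields(self)[key].name)

@torch.compile(backend="eager")
def fn(x):
    out = TinyOutput(logits=x * 2)
    return out["logits"] + out[0]

print(fn(torch.ones(2)))  # tensor([4., 4.])
```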

unit test
* python test/dynamo/test_model_output.py -k test_HF_bert_model_output

test on HF benchmark
* python benchmarks/dynamo/huggingface.py -d cuda --inference --accuracy --progress --inductor --print-dataframe-summary 2>&1
* all metrics are the same before/after the PR, including pass rate, unique_graphs, graph_breaks, unique_graph_breaks
  * before the PR: P790393916
  * after the PR: P790368991

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105044
Approved by: https://github.com/jansel
2023-09-14 17:15:50 +00:00
Michael Voznesensky
e4350d6d4e Functools partial support in dynamo (#108846)
The strategy for supporting functools partials is relatively straightforward.

There are 2 cases we need to support:

**1) Functools partials as input**
In this case, we are first seeing the functools partial and it is guaranteed to have a source. As such, the args, keywords, and func of the functools partial are passed through VariableBuilder. As this is the first time we are seeing these objects (as it is an input), we re-enter VariableBuilder with a source referencing the args, keywords, and func as attributes of the input to produce:

- func: A callable VariableTracker (UDF, TorchVariable, etc) depending on the value of `func`
- args: List[VariableTracker] - note, not ListVariableTracker!
- keywords: Dict[str, VariableTracker]

A major benefit of this structure is that it very elegantly matches the args to `call_function`.

We then compose a FunctoolsPartialVariable from the VariableTrackers made above.

**2) Functools partials created within compile**
In this case, we already have all the args as known VTs, and thus just compose a FunctoolsPartialVariable as we do for case (1).

For both (1) and (2) - we propagate all guards from the func, args, and keyword VTs to the FunctoolsPartialVariable
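A small example covering both cases (illustrative only):

```python
import functools
import torch

def scale_and_add(x, y, *, alpha=1.0):
    return x * alpha + y

# Case (1): a functools.partial passed in as an input to the compiled function.
partial_in = functools.partial(scale_and_add, alpha=2.0)

@torch.compile(backend="eager")
def fn(p, x, y):
    # Case (2): a partial created inside the compiled function.
    local_p = functools.partial(scale_and_add, y, alpha=0.5)
    return p(x, y) + local_p(x)

print(fn(partial_in, torch.ones(3), torch.zeros(3)))
```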

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108846
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-09-09 17:25:02 +00:00
voznesenskym
5d85d897e0 Torchrec Enablement Fixes - Re-PR 107910 (#108018)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108018
Approved by: https://github.com/wconstab
2023-08-28 19:47:53 +00:00
lezcano
db39a81e1e Add a flag that allows breaking on NumPy ops (#107687)
This was removed in 63d406a6a9
Restoring it, as it's rather useful for debugging.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107687
Approved by: https://github.com/larryliu0820
2023-08-23 01:21:22 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all of it into core.

In the next commits, I do a number of things in this order
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient), as this is a collaboration and ghstack doesn't allow for shared contributions.
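For context, an illustrative example of the kind of NumPy function this lets `torch.compile` capture:

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def numpy_fn(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # written entirely against the NumPy API, traced by dynamo
    return np.multiply(x, y).sum(axis=1)

x = np.random.randn(8, 8).astype(np.float32)
y = np.random.randn(8, 8).astype(np.float32)
print(numpy_fn(x, y).shape)  # (8,)
```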

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
Justin Chu
8a688277a2 [BE] Enable ruff's UP rules and autoformat dynamo / functorch and refs (#105432)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105432
Approved by: https://github.com/ezyang
2023-07-19 13:48:44 +00:00
kshitij12345
e137ac6c59 [dynamo][torch_np] support linalg, random and fft module (#105320)
Support tracing through `np.linalg` with `torch_np` installed. Will update with other modules if this approach makes sense.
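An illustrative example of the newly traceable submodules:

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(x):
    # np.linalg and np.fft calls no longer force a fallback to eager
    n = np.linalg.norm(x)
    f = np.fft.fft(x)
    return n, f.real

print(fn(np.arange(4, dtype=np.float32)))
```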

TODO:
* [x] Add test for `fft` and `random`.

Fixes https://github.com/pytorch/pytorch/issues/105269

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105320
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-07-19 11:06:37 +00:00
Mengwei Liu
fb376f80a2 [retry][dynamo][numpy] Add support for np.dtype (#105034)
Original PR: #103546

Trying to support numpy function call in dynamo, with numpy dtype as argument.

For example:

```
def fn(x: int):
    return np.empty_like(x, dtype=np.float64)
```

This currently doesn't work because `NumpyVariable` doesn't implement `as_proxy()`. The idea in `as_proxy()` for now is to convert `np.float64` and other np.<dtype> into `str` and then feed it into the corresponding `torch_np` method. The assumption here is that any `torch_np` method that takes a `dtype` kwarg can also take a `str` as `dtype`. This assumption holds for `numpy`.

For the previous example, we convert `np.float64` to `"float64"` in `as_proxy()` and then feed it into the `torch_np.empty_like()` method.
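Putting it together, a small usage sketch (illustrative):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(x):
    # np.float64 as a dtype kwarg is the case this PR teaches dynamo to proxy
    return np.empty_like(x, dtype=np.float64)

out = fn(np.ones(4, dtype=np.float32))
print(out.dtype)  # float64
```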

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105034
Approved by: https://github.com/voznesenskym
2023-07-14 21:36:36 +00:00
PyTorch MergeBot
f01deb23d5 Revert "[dynamo][numpy] Add support for np.dtype (#103546)"
This reverts commit 0710791929.

Reverted https://github.com/pytorch/pytorch/pull/103546 on behalf of https://github.com/voznesenskym due to Failed on bench, unclear why bench test did not run on CI ([comment](https://github.com/pytorch/pytorch/pull/103546#issuecomment-1631203461))
2023-07-11 17:23:11 +00:00
Mengwei Liu
0710791929 [dynamo][numpy] Add support for np.dtype (#103546)
## Problem

Trying to support numpy function call in dynamo, with numpy dtype as argument.

For example:

```
def fn(x: int):
    return np.empty_like(x, dtype=np.float64)
```

## Solution

This currently doesn't work because `NumpyVariable` doesn't implement `as_proxy()`. The idea in `as_proxy()` for now is to convert `np.float64` and other np.<dtype> into `torch.dtype` and then feed into the corresponding `torch_np` method.

For the previous example, we convert `np.float64` to `torch.float64` in `as_proxy()` and then feed it into the `torch_np.empty_like()` method.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103546
Approved by: https://github.com/ezyang
2023-07-11 06:29:15 +00:00
Animesh Jain
4005152b92 [dynamo] Organize higherorderops variable trackers (#104565)
The main change is moving the higher-order ops from torch.py to higher_order_ops.py, and creating smaller subclasses of HigherOrderOp for cond, map, etc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104565
Approved by: https://github.com/zou3519
2023-07-05 22:19:26 +00:00
William Wen
998c07799f [dynamo] fix deep nested closure cell KeyError (#104222)
Fix https://github.com/pytorch/pytorch/issues/99639 by handling the case in `InliningInstructionTranslator`'s `LOAD_CLOSURE` definition when the requested cell is not in `self.closure_cells`.

My intuition is that the behavior of `LOAD_DEREF` and `STORE_DEREF` on a cell/freevar should not depend on whether or not we called `LOAD_CLOSURE` (that is, we shouldn't create a new cell var in `LOAD_CLOSURE` like in https://github.com/pytorch/pytorch/pull/101357). But we need a way to push cells created by the inlined function that were not present in the caller - `InlinedClosureVariable` is used to differentiate these cells from other cells.
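A hedged sketch of the nested-closure shape this targets (not the exact repro from the issue):

```python
import torch

def outer(x):
    y = x + 1
    def mid():
        def inner():
            # `y` reaches inner through a chain of closure cells created by
            # functions that dynamo inlines rather than by the caller itself
            return y * 2
        return inner()
    return mid()

compiled = torch.compile(outer, backend="eager")
print(compiled(torch.ones(3)))
```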

Adding this test causes an error though (EDIT: this test is not relevant to this PR and instead just reveals that `cond` with Python side effects is still broken):
```python
    def test_closure_out_of_scope_cell_with_cond(self):
        from functorch.experimental.control_flow import cond
        cell1 = torch.rand(3, 3)
        cell2 = torch.rand(3, 3)
        orig3 = torch.rand(3, 3)
        def test(x):
            cell3 = orig3.clone()
            def then():
                nonlocal cell3
                cell3 += cell1
                return cell3
            def els():
                nonlocal cell3
                cell3 += cell2
                return cell3
            return cond(x > 0, then, els, [])
        opt_fn = torch._dynamo.optimize("eager")(test)
        result1 = opt_fn(1)
        self.assertTrue(torch.allclose(result1, orig3 + cell1))
        result2 = opt_fn(-1)
        self.assertTrue(torch.allclose(result1, orig3 + cell1 + cell2))
```
```
Traceback (most recent call last):
  File "/scratch/williamwen/work/pytorch2/test/dynamo/test_misc.py", line 1768, in test_closure_out_of_scope_cell_with_cond
    result1 = opt_fn(1)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/eval_frame.py", line 295, in _fn
    return fn(*args, **kwargs)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/eval_frame.py", line 448, in catch_errors
    return callback(frame, cache_size, hooks, frame_state)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/convert_frame.py", line 526, in _convert_frame
    result = inner_convert(frame, cache_size, hooks, frame_state)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/convert_frame.py", line 127, in _fn
    return fn(*args, **kwargs)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert
    return _compile(
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/utils.py", line 180, in time_wrapper
    r = func(*args, **kwargs)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/convert_frame.py", line 430, in _compile
    out_code = transform_code_object(code, transform)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
    transformations(instructions, code_options)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/convert_frame.py", line 415, in transform
    tracer.run()
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 2029, in run
    super().run()
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 708, in run
    and self.step()
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 668, in step
    getattr(self, inst.opname)(inst)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 391, in wrapper
    return inner_fn(self, inst)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 1100, in CALL_FUNCTION
    self.call_function(fn, args, {})
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 559, in call_function
    self.push(fn.call_function(self, args, kwargs))
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/variables/torch.py", line 1061, in call_function
    (false_r, false_graph, false_lifted_freevars) = speculate_branch(False)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/variables/torch.py", line 1044, in speculate_branch
    ret_val, ret_graph, ret_lifted_freevars = speculate_subgraph(
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/variables/torch.py", line 850, in speculate_subgraph
    output = f.call_function(tx, args, {})
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/variables/functions.py", line 121, in call_function
    return tx.inline_user_function_return(
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 595, in inline_user_function_return
    result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 2134, in inline_call
    return cls.inline_call_(parent, func, args, kwargs)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 2231, in inline_call_
    tracer.run()
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 708, in run
    and self.step()
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 668, in step
    getattr(self, inst.opname)(inst)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/symbolic_convert.py", line 162, in impl
    self.push(fn_var.call_function(self, self.popn(nargs), {}))
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/variables/builtin.py", line 497, in call_function
    proxy = tx.output.create_proxy(
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/output_graph.py", line 345, in create_proxy
    return self.current_tracer.create_proxy(*args, **kwargs)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/output_graph.py", line 1109, in create_proxy
    new_arg = self.lift_tracked_freevar_to_input(arg)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/output_graph.py", line 1226, in lift_tracked_freevar_to_input
    self.parent.lift_tracked_freevar_to_input(proxy)
  File "/scratch/williamwen/work/pytorch2/torch/_dynamo/output_graph.py", line 1219, in lift_tracked_freevar_to_input
    assert (
AssertionError: lift_tracked_freevar_to_input on root SubgraphTracer

from user code:
   File "/scratch/williamwen/work/pytorch2/test/dynamo/test_misc.py", line 1766, in test
    return cond(x > 0, then, els, [])
  File "/scratch/williamwen/work/pytorch2/test/dynamo/test_misc.py", line 1764, in els
    cell3 += cell2
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104222
Approved by: https://github.com/jansel
2023-06-28 17:54:13 +00:00
Yan Li
3ca8542dff Fix _saved_tensors argument issue in test (#103594)
Summary:
fix broken test in

https://github.com/pytorch/pytorch/issues/103460

Test Plan: pytest ./generated/test_pabloppp_pytorch_tools.py -k test_015

Differential Revision: D46723640

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103594
Approved by: https://github.com/yanboliang
2023-06-20 19:03:41 +00:00
Angela Yi
4a72708d2b [dynamo] Fix Autograd Function Classmethod bug (#103175)
Fixes https://github.com/pytorch/pytorch/issues/103139

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103175
Approved by: https://github.com/williamwen42, https://github.com/yanboliang
2023-06-08 18:15:27 +00:00
Bin Bao
39bf86ae90 [dynamo] Support OrderedDict constructor with kwargs (#103192)
Summary: To solve an issue in https://github.com/pytorch/pytorch/issues/102878.
The solution follows the example in https://github.com/pytorch/pytorch/pull/98660.
It only solves a problem for standard OrderedDict. There is another
problem if we use a user-defined CustomDict.
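A minimal example of the supported constructor form (illustrative):

```python
from collections import OrderedDict
import torch

@torch.compile(backend="eager")
def fn(x):
    # OrderedDict built from keyword arguments inside the compiled function
    d = OrderedDict(a=x + 1, b=x * 2)
    return d["a"] + d["b"]

print(fn(torch.ones(2)))  # tensor([4., 4.])
```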

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103192
Approved by: https://github.com/yanboliang
2023-06-08 12:14:21 +00:00
Mengwei Liu
c304fddf68 [dynamo][numpy] Support graph break for numpy ndarray (#100839)
Issue: #93684

In previous PRs #95849 and #99560 we redirect `numpy.*` and `<tensor>.numpy()` calls to `torch_np.*` methods and attributes, by creating `NumpyNdarrayVariable` for those calls.

We need to handle `NumpyNdarrayVariable` when graph break happens.

This PR did 2 things:
1. In `codegen.py` we made sure we can reconstruct the value wrapped by `NumpyNdarrayVariable` as a `torch_np.ndarray` on the stack whenever we recompile the subgraph.
2. In `builder.py` we can wrap the value to be `NumpyNdarrayVariable` and save it as graph input.

-----

Starting from commit 6:

## A new design for supporting numpy in dynamo

In short the core concept doesn't change: we still convert `numpy` API calls to `torch_np` API calls. However, instead of wrapping a `torch_np.ndarray` in `NumpyNdarrayVariable`, the new design wraps a `torch.Tensor`.

The reason for this change is that we need to keep `torch.Tensor` everywhere in the captured graph, so that it works well with dynamo's backends. See discussions in https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/142 for details.

### Flow
This is an example showing how we think about dynamo working on a simple function:
```python
def f(x: torch.Tensor, y: torch.Tensor):
    a, b = x.numpy(), y.numpy()
    c = np.add(a, b)
    return torch.from_numpy(c)
```
```

              +------------+             +------------+
 torch.Tensor |            |numpy.ndarray|            |
 -------------- .numpy()   --------------|            |
              |            |             |            |             +------------------+
              +------------+             | numpy.add  |numpy.ndarray|                  |torch.Tensor
              +------------+             |            --------------| torch.from_numpy --------------
 torch.Tensor |            |numpy.ndarray|            |             |                  |
 -------------- .numpy()   --------------|            |             +------------------+
              |            |             |            |
              +------------+             +------------+

              +------------+             +----------------+
 torch.Tensor |            |torch.Tensor |                |
 -------------- .detach()  --------------|                |
              |            |             |                |                +----------------+            +------------+
              +------------+             |                |torch_np.ndarray|                |torch.Tensor|            |torch.Tensor
                                         | torch_np.add   -----------------| util.to_tensor -------------| .detach()  --------------
              +------------+             |                |                |                |            |            |
 torch.Tensor |            |torch.Tensor |                |                +----------------+            +------------+
 -------------- .detach()  --------------|                |
              |            |             |                |
              +------------+         |   +----------------+                                   |
                                     |                       wrapper on torch_np.add          |
                                     +--------------------------------------------------------+
```

### Approach

`torch_np` APIs can take both `torch_np.ndarray` and `torch.Tensor`. What we need to do is have a wrapper for these APIs that converts the return value back to `torch.Tensor`. This way only the wrapper shows up in the captured graph, with `torch.Tensor`s as input and `torch.Tensor` as output.

If we have a graph break or we've traced to the end of the program, we need to inspect all the `NumpyNdarrayVariable` in the stack and convert them back to `numpy.ndarray`, to make sure the compiled version is still behaving the same as the eager version.

### Examples
Here's an example of the graph generated:

```python
def fn(x: np.ndarray, y: np.ndarray):
    a = x.real
    b = y.real
    torch._dynamo.graph_break()
    return np.add(a, 1), np.add(b, 1)
```

Graph generated:

```
[2023-05-16 10:31:48,737] torch._dynamo.output_graph.__graph: [DEBUG] TRACED GRAPH
 __compiled_fn_0 <eval_with_key>.0 opcode         name            target                                                      args                    kwargs
-------------  --------------  ----------------------------------------------------------  ----------------------  --------
placeholder    l_x_            L_x_                                                        ()                      {}
placeholder    l_y_            L_y_                                                        ()                      {}
call_function  from_numpy      <built-in method from_numpy of type object at 0x12b1fdc80>  (l_x_,)                 {}
call_function  from_numpy_1    <built-in method from_numpy of type object at 0x12b1fdc80>  (l_y_,)                 {}
call_function  attr_wrapper    <function attr_wrapper at 0x12e8693a0>                      (from_numpy, 'real')    {}
call_function  attr_wrapper_1  <function attr_wrapper at 0x12e8693a0>                      (from_numpy_1, 'real')  {}
output         output          output                                                      ((),)                   {}

[2023-05-16 10:31:48,908] torch._dynamo.output_graph.__graph: [DEBUG] TRACED GRAPH
 __compiled_fn_2 <eval_with_key>.1 opcode         name           target                                                      args                             kwargs
-------------  -------------  ----------------------------------------------------------  -------------------------------  --------
placeholder    l_a_           L_a_                                                        ()                               {}
placeholder    l_b_           L_b_                                                        ()                               {}
call_function  from_numpy     <built-in method from_numpy of type object at 0x12b1fdc80>  (l_a_,)                          {}
call_function  from_numpy_1   <built-in method from_numpy of type object at 0x12b1fdc80>  (l_b_,)                          {}
call_function  wrapped_add    <Wrapped function <original add>>                           (from_numpy, 1)                  {}
call_function  wrapped_add_1  <Wrapped function <original add>>                           (from_numpy_1, 1)                {}
output         output         output                                                      ((wrapped_add, wrapped_add_1),)  {}

```
### Changes

* `codegen.py`: reconstruct `numpy.ndarray` from `NumpyNdarrayVariable` by adding bytecode to call `utils.to_numpy_helper()`.
*  `output_graph.py`: getting rid of legacy code that does exactly what `codegen.py` does, which only handled the return case but not the graph break case.
*  `utils.py`: added helpers to convert `numpy.ndarray` to `torch.Tensor` and vice versa. Also added a wrapper class that takes in a function. In `__call__` it calls the function and converts its output to `torch.Tensor` (or a list of them).
* `builder.py`: add a method to wrap `numpy.ndarray` graph inputs into `NumpyNdarrayVariable`, by calling `torch.from_numpy` in the proxy.
* `misc.py`: `numpy` API calls go into `NumpyVariable` and we find the function with the same name in the `torch_np` module, then wrap it with the wrapper defined in `utils.py`.
* `tensor.py`, `torch.py`: proxy `tensor.numpy()` to be `torch.detach()` but wrap it with `NumpyNdarrayVariable`. Similarly, `torch.from_numpy()` -> `torch.detach()` but wrap it with `TensorVariable`. In `NumpyNdarrayVariable`, do the similar `torch_np.ndarray` to `torch.Tensor` wrapping for attributes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100839
Approved by: https://github.com/ezyang
2023-06-03 00:54:25 +00:00
Will Constable
e7cc41772d Add dynamo collections.deque support (#102412)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102412
Approved by: https://github.com/jansel, https://github.com/voznesenskym
2023-05-31 03:54:20 +00:00
Will Constable
c06d33ce43 Add dynamo itertools.combinations support (#102379)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102379
Approved by: https://github.com/jansel
2023-05-26 22:48:24 +00:00
Wanchao Liang
d40f4f12f6 [dynamo] add itertools.chain support (#102247)
This PR adds itertools chain support to dynamo
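A small illustrative example:

```python
import itertools
import torch

@torch.compile(backend="eager")
def fn(x, y):
    out = torch.zeros_like(x)
    # chaining two Python lists of tensors inside the compiled function
    for t in itertools.chain([x, x + 1], [y]):
        out = out + t
    return out

print(fn(torch.ones(2), torch.full((2,), 3.0)))  # tensor([6., 6.])
```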

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102247
Approved by: https://github.com/jansel
2023-05-25 21:26:09 +00:00
Michael Lazos
23dbdd900f Full default dict support in dynamo (#102202)
Allows arbitrary default dict factories and construction of a default dict in a compiled function - needed for [this function](2e2a74670d/torch/utils/_foreach_utils.py (LL21C5-L21C395)) used to group params in the foreach optimizer.
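A hedged sketch of the pattern this enables, loosely modeled on that grouping helper (not the actual foreach-utils code):

```python
from collections import defaultdict
import torch

@torch.compile(backend="eager")
def group_by_dtype(tensors):
    # a defaultdict with an arbitrary factory, constructed inside the compiled function
    groups = defaultdict(list)
    for t in tensors:
        groups[t.dtype].append(t)
    return groups

out = group_by_dtype([torch.ones(2), torch.ones(2, dtype=torch.int64)])
print({k: len(v) for k, v in out.items()})
```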

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102202
Approved by: https://github.com/yanboliang
2023-05-25 01:41:38 +00:00
Michael Voznesensky
4c1bc91f42 Support autograd.Function w/ grad (#99483)
This PR adds support for tracing autograd.Function with grad.

A few important bullet points outlining our approach:

1) Our goal is to verify soundness in order to add a call_function to the autograd.Function's `apply` to the graph.
2) We achieve (1) by either verifying soundness or rejecting soundness, by ensuring that both forward and backward of the autograd.Function are sound.
3) For the forward, if we verify soundness, we install its guards into the graph.
4) For the backward, if we verify soundness, we throw it out. However, backwards soundness verification is more onerous, and has a config driven set of banned attrs and methods for tensors.

1-4 above are achieved by turning the forward and backward into UserDefinedFunctionVariables, and inlining through them, relying on dynamo's soundness detection. If we graph break in these, we raise and treat them as unsound. As noted above, backwards is stricter yet.

For the tracing, the safety comes from dynamo's HigherOrderOperator system. That system ensures that not only do we trace soundly, but that no new variables are lifted into inputs during the tracing, and that the forward and backwards are entirely self contained.

Whenever we reject a function as unsound, we restore back, as usual.

Due to some limitations in the lifting logic, we implemented an escape hatch for tensors that are known in the forward but cross into the backward through `save_for_backward` (save) / `saved_tensors` (load). The escape hatch avoids having the known saved tensors coming from the forward accidentally treated as lifted variables (and rejected). This is sound, but feels a little hacky.

Additionally, due to some limitations in fx node removal, combined with how we produce subgraphs for the traces installed from HigherOrderOperators, we had to improve our node removal logic. In the event of a restore, we remove the old nodes from the graph, as usual in dynamo. However, because the references to these nodes may exist in subgraphs, we traverse any nodes users and remove them first if and only if they are in another graph. This is always sound, because removal should only be downstream of restoration at this point.
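An illustrative autograd.Function whose `apply` can now be traced (example only, not from the PR):

```python
import torch

class ScaledReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale):
        ctx.save_for_backward(x)
        ctx.scale = scale
        return torch.relu(x) * scale

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # gradient w.r.t. x only; `scale` is a plain float, so it gets None
        return grad_out * ctx.scale * (x > 0), None

@torch.compile(backend="eager")
def fn(x):
    # the .apply call is what dynamo verifies and installs into the graph
    return ScaledReLU.apply(x, 2.0)

x = torch.randn(4, requires_grad=True)
fn(x).sum().backward()
print(x.grad)
```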

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99483
Approved by: https://github.com/zou3519
2023-05-19 01:26:21 +00:00
Larry Liu
687afeb686 [dynamo][numpy] Add NumpyTensorVariable to translate ndarray attribute calls to tensor attributes (#95849)
Issue: #93684

# Problem

Reduce graph breaks when dynamo compiles python functions containing numpy functions and ndarray operations.

# Design (as I know it)

* Use torch_np.ndarray(a wrapper of tensor) to back a `VariableTracker`: `NumpyTensorVariable`.
* Translate all attributes and methods calls, on ndarray, to torch_np.ndarray equivalent.

This PR adds `NumpyTensorVariable` and supports:
1.  tensor to ndarray, ndarray to tensor
2. numpy functions such as numpy.meshgrid()
3. ndarray attributes such as `itemsize`, `stride`

Next PR will handle returning `np.ndarray` and add support for ndarray methods
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95849
Approved by: https://github.com/ezyang
2023-04-27 16:18:35 +00:00
Animesh Jain
006785cd46 [dynamo][hf_bigbird] Actually graph break on tensor.unsqueeze_/resize_ (#99986)
Currently, we return `unimplemented` without a graph break on seeing an `x.unsqueeze_` when `x` is an input. This essentially means we fall back to the original frame.

This PR actually graph breaks so that we can generate the continuation frame for the rest of the function. Instead of graph breaking at LOAD_ATTR, we delay the graph break to the actual CALL_FUNCTION, where it's cleaner to graph break.
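An illustrative case of the in-place mutation on an input that now graph-breaks instead of falling back entirely:

```python
import torch

@torch.compile(backend="eager")
def fn(x):
    x.unsqueeze_(0)  # in-place mutation of a graph input triggers the graph break
    return x + 1

print(fn(torch.ones(3)).shape)  # torch.Size([1, 3])
```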

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99986
Approved by: https://github.com/jansel
2023-04-26 18:50:06 +00:00
Jason Ansel
47c685def3 [dynamo] Support DELETE_ATTR (#98698)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98698
Approved by: https://github.com/yanboliang
2023-04-15 20:31:40 +00:00
Jason Ansel
e9be0b0fb9 [dynamo] Support functools.wraps (#98699)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98699
Approved by: https://github.com/yanboliang, https://github.com/voznesenskym
2023-04-15 03:24:06 +00:00
Jason Ansel
f4858fa8ef Improve dynamo support for autograd.Function (#98158)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98158
Approved by: https://github.com/yanboliang, https://github.com/anijain2305
2023-04-10 00:33:51 +00:00
PyTorch MergeBot
e394f6db5a Revert "Improve dynamo support for autograd.Function (#98158)"
This reverts commit 4716fa2411.

Reverted https://github.com/pytorch/pytorch/pull/98158 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it seems to breaks MacOS trunk job 4716fa2411.  The signal was missing from the PR because we disabled MacOS job yesterday due to https://github.com/pytorch/pytorch/issues/98362
2023-04-06 18:15:02 +00:00
Jason Ansel
4716fa2411 Improve dynamo support for autograd.Function (#98158)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98158
Approved by: https://github.com/yanboliang, https://github.com/anijain2305
2023-04-06 16:44:37 +00:00
Jason Ansel
55afaa46a4 Support functools.partial and itertools.product (#98120)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98120
Approved by: https://github.com/anijain2305
2023-04-03 18:23:25 +00:00
Jason Ansel
76074dc0a3 Improve support for dict subclasses (#98154)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98154
Approved by: https://github.com/anijain2305
2023-04-03 01:42:08 +00:00
Jason Ansel
35b3309539 Fix graph break from inline patched init (#98150)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98150
Approved by: https://github.com/anijain2305, https://github.com/yanboliang
2023-04-03 01:11:30 +00:00
Yanbo Liang
9be9592f28 [Dynamo] Code refactor: move context managers out of misc.py (#97958)
misc.py and test_misc.py is too big, moving context managers to context.py and test_context.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97958
Approved by: https://github.com/ezyang, https://github.com/anijain2305, https://github.com/mlazos, https://github.com/voznesenskym
2023-03-31 23:15:39 +00:00
Sam Gross
87f5e92916 [dynamo] Add guards for deterministic algos (#96695)
Inductor now falls back to eager mode for deterministic algos. Add guards in dynamo to check if the deterministic algos mode changes.
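A short illustration of the guarded mode flip (example only):

```python
import torch

@torch.compile(backend="eager")
def fn(x):
    return x * 2

torch.use_deterministic_algorithms(False)
fn(torch.ones(3))

# changing the deterministic-algorithms mode invalidates the guard,
# so the next call recompiles rather than reusing the stale graph
torch.use_deterministic_algorithms(True)
fn(torch.ones(3))
torch.use_deterministic_algorithms(False)
```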

See #93537

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96695
Approved by: https://github.com/ngimel, https://github.com/jansel
2023-03-31 16:28:45 +00:00
Aaron Gokaslan
9c3fbe7475 [BE] Enable flake8-simplify checks (#97984)
Enable some sensible flake8-simplify rules, mainly the SIM101 and `yield from` SIM103 checks. @kit1980 since you wanted to be tagged on this CI check.

Enabling this check also helped flag one logical bug so it's definitely beneficial (also fixed in this PR).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97984
Approved by: https://github.com/ezyang
2023-03-31 03:40:21 +00:00
William Wen
24a5d006f2 [dynamo 3.11] Refactor create_instruction (#96499)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96499
Approved by: https://github.com/jansel, https://github.com/albanD
2023-03-30 17:05:27 +00:00
Yanbo Liang
7fcf8b1829 [Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)
For Meta internal use cases.
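An illustrative usage, assuming the CPU autocast context manager (API names as of this era of PyTorch):

```python
import torch

@torch.compile(backend="eager")
def fn(x, y):
    # tracing through the autocast context manager inside the compiled region
    with torch.cpu.amp.autocast(dtype=torch.bfloat16):
        return torch.mm(x, y)

print(fn(torch.randn(4, 4), torch.randn(4, 4)).dtype)  # torch.bfloat16
```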

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95416
Approved by: https://github.com/jansel
2023-03-10 21:48:08 +00:00
PyTorch MergeBot
3ce1e15cf7 Revert "[Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)"
This reverts commit c88aa336aa.

Reverted https://github.com/pytorch/pytorch/pull/95416 on behalf of https://github.com/huydhn due to Sorry for reverting your PR. But it seems that the smoke test issue is related as it starts to fail consistently in trunk https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=inductor_torchbench_smoketest_perf
2023-03-08 06:51:57 +00:00
Yanbo Liang
c88aa336aa [Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)
For Meta internal use cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95416
Approved by: https://github.com/jansel
2023-03-08 01:40:27 +00:00