Commit Graph

84 Commits

Author SHA1 Message Date
zhxchen17
622688fab9 [export] Fix graph output mismatch issue with constant outputs. (#115280)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115280
Approved by: https://github.com/tugsbayasgalan
2023-12-07 06:11:08 +00:00
Jason Ansel
aa70e31610 [dynamo] Fix MutableSideEffects returning alias (#115095)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115095
Approved by: https://github.com/yanboliang
2023-12-05 19:01:03 +00:00
PyTorch MergeBot
5d170fce29 Revert "Support tensors as Dict keys (#111196)"
This reverts commit b0805fa5d0.

Reverted https://github.com/pytorch/pytorch/pull/111196 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing internally. I will provide the details there ([comment](https://github.com/pytorch/pytorch/pull/111196#issuecomment-1813410149))
2023-11-15 23:08:00 +00:00
lezcano
b0805fa5d0 Support tensors as Dict keys (#111196)
This prepares the PR where we implement sets in terms of dicts.
To do so, rather than internally storing a dictionary that maps literals
to VariableTrackers, it stores (pretty much) a dictionary from VTs to VTs,
with keys wrapped in an opaque internal class `_Hashable`.
The class is opaque on purpose so that it fails hard if it
inadvertently leaks back into user code.
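
A minimal sketch of the idea (illustrative only; the real class lives in dynamo and differs in its details):

```python
class _Hashable:
    """Opaque wrapper that lets a VariableTracker serve as a dict key and
    fails loudly if it ever leaks back into user code (sketch, not the
    actual dynamo implementation)."""

    def __init__(self, vt):
        self.vt = vt

    def __hash__(self):
        return hash(self.vt)  # assumption: the wrapped VT provides a stable hash

    def __eq__(self, other):
        return isinstance(other, _Hashable) and self.vt == other.vt
```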

We also found and fixed a number of latent bugs and inconsistencies
in the way dynamo checked what can be a dict key. More generally, we
make it much clearer what needs to be modified to add a new supported
key type to Dicts.

Fixes https://github.com/pytorch/pytorch/issues/107595
Fixes https://github.com/pytorch/pytorch/issues/111603
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111196
Approved by: https://github.com/jansel
2023-11-14 19:14:03 +00:00
Jason Ansel
3914566c73 [dynamo] Refactor OrderedDict to dict (#113234)
In Python 3 (3.7 and later), all dicts preserve insertion order, so `OrderedDict` is unnecessary.
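
For example, a plain dict already preserves insertion order:

```python
# Plain dicts preserve insertion order in Python 3.7+, so OrderedDict is no
# longer needed just for ordering.
d = {"b": 1, "a": 2}
assert list(d) == ["b", "a"]
```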

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113234
Approved by: https://github.com/oulgen, https://github.com/lezcano
2023-11-08 09:27:08 +00:00
Jason Ansel
843a8ecd24 [dynamo] Remove VariableTracker.add_options (#111725)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111725
Approved by: https://github.com/voznesenskym
ghstack dependencies: #111306, #111415
2023-11-07 19:55:19 +00:00
Jason Ansel
9664190952 [dynamo] Eagerly install guards (#111415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111415
Approved by: https://github.com/voznesenskym
ghstack dependencies: #111306
2023-11-07 19:55:19 +00:00
Jez Ng
b8ac5bbcbd [dynamo] Enable typechecking for bytecode_transformation.py (#112561)
As part of this diff, I have upgraded the `python_version` config setting to 3.11. `bytecode_transformation.py` (and a few other files) have functions using APIs only available in Python 3.11+. Those APIs are gated by a sys.version_info check in their typeshed .pyi files. So setting the min version to 3.11 allows those functions to typecheck properly.

An alternative is to make the relevant types Any:

```
if sys.version_info >= (3, 11):
    _Positions = dis.Positions
else:
    _Positions = Any
```

However, with `python_version = 3.8`, that means we're not getting any useful typechecking signal when encountering values of type `_Positions`.

Changing the python_version to 3.11 does mean that we will stop typechecking codepaths that run only on lower versions, but that seems a small price to pay. It also means that we won't catch code that uses newer APIs without the appropriate version check, but again, that probably doesn't have much of an impact.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112561
Approved by: https://github.com/ezyang
2023-11-04 19:36:27 +00:00
Jez Ng
413baa1b25 [dynamo] Enable typechecking for codegen.py (#111992)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111992
Approved by: https://github.com/Skylion007, https://github.com/eellison
ghstack dependencies: #111894
2023-10-26 04:54:16 +00:00
Michael Lazos
1d9a7f9e43 [Reland] TensorWithTFOverride inheritance from TensorVariable (#111766)
Accidentally merged https://github.com/pytorch/pytorch/pull/111730 with ghstack, so relanding

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111766
Approved by: https://github.com/jansel
2023-10-23 04:33:16 +00:00
Michael Lazos
62df159c3f move tf override tensor to torch_function.py (#111714)
Moves TensorWithTFOverride to torch_function.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111714
Approved by: https://github.com/eellison, https://github.com/voznesenskym
2023-10-21 02:29:01 +00:00
Michael Lazos
a55ecec195 [dynamo][__torch_function__ 2/n] Refactor TensorWithTFOverrideVariable (#109556)
This is purely a refactor that preserves the existing behavior and tests.

The main contributions of the PR are to refactor the dispatch of `__torch_function__` to enable calling it with TF override objects in any argument position and to match the eager dispatch behavior.
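
For reference, a minimal sketch of the eager `__torch_function__` dispatch behavior being matched (the subclass here is illustrative, not a type from this PR):

```python
import torch

class LoggingTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        # Eager mode dispatches here regardless of which argument position
        # the override object appears in.
        print(f"__torch_function__ for {func.__name__}")
        return super().__torch_function__(func, types, args, kwargs or {})

x = torch.ones(2)
y = torch.ones(2).as_subclass(LoggingTensor)
torch.add(x, y)  # override object in the second position still triggers dispatch
```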

This will allow for the following in upcoming PRs:

1) have TensorWithTFOverrideVariable inherit from TensorVariable
2) enable tracing through the base `__torch_function__` implementation.

Note: this depends on https://github.com/pytorch/pytorch/pull/109542

towards tracing for https://github.com/pytorch/pytorch/issues/93723

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109556
Approved by: https://github.com/jansel, https://github.com/ezyang
2023-10-20 18:53:38 +00:00
Animesh Jain
8ed08e5a7c [dynamo] Graph break on rng get/set state - remove GeneratorStateSource (#109410)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109410
Approved by: https://github.com/ezyang
ghstack dependencies: #109411
2023-09-22 22:31:55 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all of these into core.
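
A minimal usage sketch of what this enables (illustrative; exact coverage depends on the state of the tracer):

```python
import numpy as np
import torch

@torch.compile
def numpy_fn(x: np.ndarray) -> np.ndarray:
    # NumPy calls inside the compiled region are traced through the
    # upstreamed torch_np compatibility layer.
    return 2 * np.sin(x) * np.cos(x)

print(numpy_fn(np.linspace(0.0, 2.0 * np.pi, 8)))
```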

In the next commits, I do a number of things in this order
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was work by @ev-br, @rgommers, @honno and me. I did not create this PR via ghstack (which would have been convenient) as this is a collaboration, and ghstack doesn't allow for shared contributions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
William Wen
018ac76362 fix x.numpy() breaks in #106211 (#106327)
Fixes https://github.com/pytorch/pytorch/issues/106316. Need to promote [this](https://dev-discuss.pytorch.org/t/supporting-dynamo-in-python-3-11-null/1393) a little more, I guess. I'm going to make a PR soon that will add a `push_null` arg to `load_import_from` and other function-call codegen methods that are missing the field, so that we can push NULL as early in the function call sequence as possible.
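
For context, a rough illustration of the 3.11 calling convention that the generated bytecode has to match (disassembly is abridged and version-dependent):

```python
import dis
import sys

if sys.version_info >= (3, 11):
    # A plain call pushes NULL before loading the callable; codegen for
    # function calls has to emit the same PUSH_NULL early in the sequence.
    dis.dis("f(x)")
    # ... PUSH_NULL / LOAD_NAME f / LOAD_NAME x / PRECALL 1 / CALL 1 ...
```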

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106327
Approved by: https://github.com/lezcano
2023-07-31 21:19:27 +00:00
Michael Lazos
dfe7a1e089 [dynamo] Support wrapping + returning tensor subclasses (#104802)
as title - used for [tracing the FSDP collectives](d8cb80e382/torch/distributed/_functional_collectives.py (L425))
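
A hedged sketch of the kind of code this enables (the subclass name is made up; the real use case is the wrapper tensors in `_functional_collectives.py`):

```python
import torch

class WrapperTensor(torch.Tensor):
    """Stand-in for the tensor-subclass wrappers used by the functional collectives."""
    pass

@torch.compile
def f(x):
    # The compiled region can construct and return a tensor-subclass instance.
    return (x * 2).as_subclass(WrapperTensor)
```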

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104802
Approved by: https://github.com/jansel
2023-07-09 22:16:10 +00:00
Michael Voznesensky
aece6705d1 Move locals/globals to output graph, make it easier to access them anywhere (#103456)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103456
Approved by: https://github.com/jansel
2023-06-14 20:04:33 +00:00
Mengwei Liu
c304fddf68 [dynamo][numpy] Support graph break for numpy ndarray (#100839)
Issue: #93684

In previous PRs #95849 and #99560 we redirected `numpy.*` and `<tensor>.numpy()` calls to `torch_np.*` methods and attributes, by creating `NumpyNdarrayVariable` for those calls.

We need to handle `NumpyNdarrayVariable` when a graph break happens.

This PR did 2 things:
1. In `codegen.py` we made sure we can reconstruct the value wrapped by `NumpyNdarrayVariable`, as a `torch_np.ndarray` in the stack, whenever we recompile the subgraph.
2. In `builder.py` we can wrap the value as a `NumpyNdarrayVariable` and save it as a graph input.

-----

Starting from commit 6:

## A new design for supporting numpy in dynamo

In short the core concept doesn't change: we still convert `numpy` API calls to `torch_np` API calls. However, instead of wrapping a `torch_np.ndarray` in `NumpyNdarrayVariable`, the new design wraps a `torch.Tensor`.

The reason for this change is that we need to keep `torch.Tensor` everywhere in the captured graph, so that it works well with the backend of dynamo. See discussions in https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/142 for details.

### Flow
This is an example showing how we think about dynamo working on a simple function:
```python
import numpy as np
import torch

def f(x: torch.Tensor, y: torch.Tensor):
    a, b = x.numpy(), y.numpy()
    c = np.add(a, b)  # operate on the ndarrays obtained above
    return torch.from_numpy(c)
```
```

              +------------+             +------------+
 torch.Tensor |            |numpy.ndarray|            |
 -------------- .numpy()   --------------|            |
              |            |             |            |             +------------------+
              +------------+             | numpy.add  |numpy.ndarray|                  |torch.Tensor
              +------------+             |            --------------| torch.from_numpy --------------
 torch.Tensor |            |numpy.ndarray|            |             |                  |
 -------------- .numpy()   --------------|            |             +------------------+
              |            |             |            |
              +------------+             +------------+

              +------------+             +----------------+
 torch.Tensor |            |torch.Tensor |                |
 -------------- .detach()  --------------|                |
              |            |             |                |                +----------------+            +------------+
              +------------+             |                |torch_np.ndarray|                |torch.Tensor|            |torch.Tensor
                                         | torch_np.add   -----------------| util.to_tensor -------------| .detach()  --------------
              +------------+             |                |                |                |            |            |
 torch.Tensor |            |torch.Tensor |                |                +----------------+            +------------+
 -------------- .detach()  --------------|                |
              |            |             |                |
              +------------+         |   +----------------+                                   |
                                     |                       wrapper on torch_np.add          |
                                     +--------------------------------------------------------+
```

### Approach

`torch_np` APIs can take both `torch_np.ndarray` and `torch.Tensor`. What we need to do is have a wrapper for these APIs that converts the return value back to `torch.Tensor`. This way, only the wrapper shows up in the captured graph, with `torch.Tensor`s as inputs and a `torch.Tensor` as output.
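
A minimal sketch of such a wrapper, assuming the `torch_np` result can be converted back with `torch.as_tensor` (the real implementation lives in dynamo's `utils.py` and differs in details):

```python
import torch

def wrap_to_tensor(torch_np_fn):
    """Call a torch_np function and convert its result(s) back to torch.Tensor,
    so that only torch.Tensor values appear in the captured graph."""
    def wrapper(*args, **kwargs):
        out = torch_np_fn(*args, **kwargs)
        if isinstance(out, (list, tuple)):
            return type(out)(torch.as_tensor(o) for o in out)
        return torch.as_tensor(out)
    return wrapper
```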

If we have a graph break or we've traced to the end of the program, we need to inspect all the `NumpyNdarrayVariable`s in the stack and convert them back to `numpy.ndarray`, to make sure the compiled version still behaves the same as the eager version.

### Examples
Here's an example of the graph generated:

```python
def fn(x: np.ndarray, y: np.ndarray):
    a = x.real
    b = y.real
    torch._dynamo.graph_break()
    return np.add(a, 1), np.add(b, 1)
```

Graph generated:

```
[2023-05-16 10:31:48,737] torch._dynamo.output_graph.__graph: [DEBUG] TRACED GRAPH
 __compiled_fn_0 <eval_with_key>.0 opcode         name            target                                                      args                    kwargs
-------------  --------------  ----------------------------------------------------------  ----------------------  --------
placeholder    l_x_            L_x_                                                        ()                      {}
placeholder    l_y_            L_y_                                                        ()                      {}
call_function  from_numpy      <built-in method from_numpy of type object at 0x12b1fdc80>  (l_x_,)                 {}
call_function  from_numpy_1    <built-in method from_numpy of type object at 0x12b1fdc80>  (l_y_,)                 {}
call_function  attr_wrapper    <function attr_wrapper at 0x12e8693a0>                      (from_numpy, 'real')    {}
call_function  attr_wrapper_1  <function attr_wrapper at 0x12e8693a0>                      (from_numpy_1, 'real')  {}
output         output          output                                                      ((),)                   {}

[2023-05-16 10:31:48,908] torch._dynamo.output_graph.__graph: [DEBUG] TRACED GRAPH
 __compiled_fn_2 <eval_with_key>.1 opcode         name           target                                                      args                             kwargs
-------------  -------------  ----------------------------------------------------------  -------------------------------  --------
placeholder    l_a_           L_a_                                                        ()                               {}
placeholder    l_b_           L_b_                                                        ()                               {}
call_function  from_numpy     <built-in method from_numpy of type object at 0x12b1fdc80>  (l_a_,)                          {}
call_function  from_numpy_1   <built-in method from_numpy of type object at 0x12b1fdc80>  (l_b_,)                          {}
call_function  wrapped_add    <Wrapped function <original add>>                           (from_numpy, 1)                  {}
call_function  wrapped_add_1  <Wrapped function <original add>>                           (from_numpy_1, 1)                {}
output         output         output                                                      ((wrapped_add, wrapped_add_1),)  {}

```
### Changes

* `codegen.py`: reconstruct `numpy.ndarray` from `NumpyNdarrayVariable` by adding bytecode to call `utils.to_numpy_helper()`.
*  `output_graph.py`: get rid of legacy code that does exactly what `codegen.py` does, but which only handled the return case and not the graph break case.
*  `utils.py`: add helpers to convert `numpy.ndarray` to `torch.Tensor` and vice versa, plus a wrapper class that takes in a function; in `__call__` it calls the function and converts its output to `torch.Tensor` (or a list of them). A sketch of these helpers follows this list.
* `builder.py`: add a method to wrap `numpy.ndarray` graph inputs into `NumpyNdarrayVariable`, by calling `torch.from_numpy` in the proxy.
* `misc.py`: `numpy` API calls go into `NumpyVariable`, and we find the function with the same name in the `torch_np` module, then wrap it with the wrapper defined in `utils.py`.
* `tensor.py`, `torch.py`: proxy `tensor.numpy()` to `torch.detach()` but wrap it with `NumpyNdarrayVariable`. Similarly, `torch.from_numpy()` -> `torch.detach()` but wrapped with `TensorVariable`. In `NumpyNdarrayVariable`, do the similar `torch_np.ndarray` to `torch.Tensor` wrapping for attributes.
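
A hedged sketch of the conversion helpers mentioned above (names and exact behavior are assumptions, not the actual dynamo code):

```python
import numpy as np
import torch

def to_tensor(value):
    """Convert a numpy.ndarray graph input into a torch.Tensor (sketch)."""
    return torch.from_numpy(value) if isinstance(value, np.ndarray) else value

def to_numpy_helper(value):
    """Convert a torch.Tensor back to numpy.ndarray at a return or graph break (sketch)."""
    return value.detach().cpu().numpy() if isinstance(value, torch.Tensor) else value
```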

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100839
Approved by: https://github.com/ezyang
2023-06-03 00:54:25 +00:00
Larry Liu
687afeb686 [dynamo][numpy] Add NumpyTensorVariable to translate ndarray attribute calls to tensor attributes (#95849)
Issue: #93684

# Problem

Reduce graph breaks when dynamo compiles python functions containing numpy functions and ndarray operations.

# Design (as I know it)

* Use `torch_np.ndarray` (a wrapper around a tensor) to back a `VariableTracker`: `NumpyTensorVariable`.
* Translate all attribute and method calls on ndarray to their `torch_np.ndarray` equivalents.

This PR adds `NumpyTensorVariable` and supports:
1.  tensor to ndarray, ndarray to tensor
2. numpy functions such as numpy.meshgrid()
3. ndarray attributes such as `itemsize`, `stride`

The next PR will handle returning `np.ndarray` and add support for ndarray methods.
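
An illustrative sketch of the kind of code this covers (whether each line traces without a break depends on the state of the tracer):

```python
import numpy as np
import torch

@torch.compile
def f(x: torch.Tensor):
    a = x.numpy()                        # tensor to ndarray
    grid_x, grid_y = np.meshgrid(a, a)   # numpy function call
    return a.itemsize + grid_x.ndim      # ndarray attribute access
```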
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95849
Approved by: https://github.com/ezyang
2023-04-27 16:18:35 +00:00
William Wen
762a2079c7 [dynamo 3.11] make create_instruction kwarg mandatory (#98032)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98032
Approved by: https://github.com/albanD
2023-03-31 18:20:51 +00:00
William Wen
14ef91cea6 [dynamo 3.11] small bug fixes (#96508)
Bugs fixed:
- CALL_FUNCTION_EX expects null pop in symbolic_convert
- make_function_with_closure codegen requires a push_null
- copy over the closure in eval_frame.c
- add JUMP_FORWARD to terminal opcodes
- enum repr fix in utils.py
- fix symbolic_convert's break_graph_if_unsupported wrapper

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96508
Approved by: https://github.com/jansel
2023-03-31 18:18:12 +00:00
William Wen
24a5d006f2 [dynamo 3.11] Refactor create_instruction (#96499)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96499
Approved by: https://github.com/jansel, https://github.com/albanD
2023-03-30 17:05:27 +00:00
Yanbo Liang
f388bec985 [Dynamo] torch.Generator state should have a source and be reconstructed properly (#97403)
Fixes #97077 partially.

During FX graph propagation, we require that every tensor have a source:
a524123c91/torch/_dynamo/variables/builder.py (L929)
However, the output of ```torch.Generator.get_state()``` is a tensor but without a source, since it's generated inside the FX graph. My change follows what we did for [Python random functions](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/variables/user_defined.py#L260), adding a dedicated ```GeneratorStateSource```. We also have to update the reconstruction logic, since we will reuse the ```TensorVariable``` reconstruction.
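
A hedged illustration of the kind of user code exercising this path (whether it traces cleanly depends on the dynamo version; a later commit in this log changes rng get/set state to a graph break):

```python
import torch

@torch.compile
def f(x):
    g = torch.Generator()
    state = g.get_state()      # a tensor created inside the graph, with no pre-existing source
    y = torch.rand(x.shape, generator=g)
    g.set_state(state)         # reconstruction must rebuild this tensor correctly
    return y + torch.rand(x.shape, generator=g)
```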

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97403
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-03-29 04:31:23 +00:00
jon-chuang
7a192cc51c dynamo: wrap graph break inst in try except block - with context manager setup/teardown (#94758)
Replacement to https://github.com/pytorch/pytorch/pull/94672.

Follow up to https://github.com/pytorch/pytorch/pull/94137.

We simply replace the set-grad-mode try/except blocks with one for a more generic context manager (using `__enter__` and `__exit__`), storing the context manager in a `symbolic_local` for the duration of the try block.

(see https://github.com/pytorch/torchdynamo/issues/207 for the original motivation)

This allows us to handle calling inner functions with graph breaks for any arbitrarily deep nesting of live context managers subclassing `AbstractContextManager`. (see tests)
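
An illustrative sketch of the pattern this supports (the context manager is made up for the example):

```python
import torch
from contextlib import AbstractContextManager

class MyCtx(AbstractContextManager):
    def __exit__(self, *exc):
        return None

@torch.compile
def f(x):
    with MyCtx(), MyCtx():           # arbitrarily nested live context managers
        torch._dynamo.graph_break()  # graph break inside the managed region
        return x + 1
```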

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94758
Approved by: https://github.com/yanboliang
2023-03-06 14:04:17 +00:00
William Wen
04d931d979 [dynamo 3.11] changes to MAKE_FUNCTION and MATCH_KEYS (#94100)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94100
Approved by: https://github.com/albanD, https://github.com/jansel
2023-02-21 18:47:34 +00:00
William Wen
d5aaf54261 [dynamo 3.11] fix cell/freevar offsets (#94099)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94099
Approved by: https://github.com/albanD, https://github.com/jansel
2023-02-21 18:47:32 +00:00
William Wen
055a9e45aa [dynamo 3.11] changes to LOAD_GLOBAL and function calls (#94098)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94098
Approved by: https://github.com/albanD
2023-02-21 18:47:30 +00:00
William Wen
d567df9f36 [dynamo 3.11] remap dup/rotate to copy/swap (#93988)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93988
Approved by: https://github.com/jansel, https://github.com/albanD, https://github.com/mlazos
2023-02-14 04:25:14 +00:00
Michael Voznesensky
bbe33532ae Rename DynamicShapeVariable to SymNodeVariable cause thats what it is (#94152)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94152
Approved by: https://github.com/ezyang
2023-02-08 10:41:10 +00:00
Aaron Gokaslan
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: removes the need to inherit from object and removes unused future imports.
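
Illustratively, the kind of change this applies:

```python
# Before
from __future__ import unicode_literals  # unused on Python 3

class Module(object):
    pass

# After
class Module:
    pass
```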

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
Nikita Shulga
5976f0bdfe Set min supported Python version to 3.8 (#93155)
Also, grep for `if sys.version_info .cond. (3, 8)` checks and replace them with the appropriate action.
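
For example (a hypothetical guard, not a specific site in the codebase):

```python
import sys

# Before: guard needed while Python 3.7 was still supported
if sys.version_info >= (3, 8):
    from typing import Protocol
else:
    from typing_extensions import Protocol

# After: 3.8 is the minimum, so the guarded import is used unconditionally
from typing import Protocol
```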

This is the last in a series of PRs that moved CI/CD away from testing PyTorch behavior against Python 3.7.

Fixes https://github.com/pytorch/pytorch/issues/80513

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93155
Approved by: https://github.com/huydhn
2023-01-29 18:28:46 +00:00
Andrew M. James
7cd951c21e Properly guard all numpy usage within dynamo and remove UnspecializedNumpyVariable (#90795)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90795
Approved by: https://github.com/ngimel, https://github.com/cpuhrsch
2023-01-06 22:36:38 +00:00
Michael Voznesensky
06ce1338bc [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88768
Approved by: https://github.com/jansel, https://github.com/ezyang
2022-11-13 04:50:21 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538
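
Illustratively, downstream imports change along these lines:

```python
# Before (external packages)
import torchdynamo
import torchinductor

# After (in PyTorch core)
import torch._dynamo
import torch._inductor
```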

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00