Commit Graph

150 Commits

Michael Lazos
62df159c3f move tf override tensor to torch_function.py (#111714)
Moves TensorWithTFOverride to torch_function.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111714
Approved by: https://github.com/eellison, https://github.com/voznesenskym
2023-10-21 02:29:01 +00:00
Michael Lazos
a55ecec195 [dynamo][__torch_function__ 2/n] Refactor TensorWithTFOverrideVariable (#109556)
This is purely a refactor that preserves the existing behavior and tests.

The main contribution of the PR is refactoring the dispatch of `__torch_function__` so that it can be called with TF override objects in any argument position, matching the eager dispatch behavior.

This will allow for the following in upcoming PRs:

1) have TensorWithTFOverrideVariable inherit from TensorVariable
2) enable tracing through the base `__torch_function__` implementation.

Note: this depends on https://github.com/pytorch/pytorch/pull/109542

towards tracing for https://github.com/pytorch/pytorch/issues/93723
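
A minimal sketch (class and function names assumed, not from this PR) of the kind of code this refactor works toward supporting: the `__torch_function__` override should dispatch no matter which argument position the overriding object occupies.

```python
import torch

class MyTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Defer to the default implementation after the override is hit.
        return super().__torch_function__(func, types, args, kwargs)

@torch.compile
def fn(a, b):
    return torch.add(a, b)

plain = torch.randn(4)
sub = torch.randn(4).as_subclass(MyTensor)
fn(sub, plain)   # override object in the first argument position
fn(plain, sub)   # override object in the second argument position
```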

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109556
Approved by: https://github.com/jansel, https://github.com/ezyang
2023-10-20 18:53:38 +00:00
Kazuaki Ishizaki
2c1b009e39 Fix typo under torch/_dynamo directory (#110459)
This PR fixes typo of comments in files under `torch/_dynamo` directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110459
Approved by: https://github.com/colesbury
2023-10-04 16:05:05 +00:00
eellison
98c8550158 Fix Triplet Margin Loss Opinfo (#110302)
Triplet Margin Loss takes in a Callable `distance_function` parameter which is not supported as an argument on the fx graph. See previous error:

> File "/scratch/eellison/work/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/scratch/eellison/work/pytorch/torch/_dynamo/variables/torch.py", line 723, in call_function
*proxy_args_kwargs(args, kwargs),
File "/scratch/eellison/work/pytorch/torch/_dynamo/utils.py", line 504, in proxy_args_kwargs
f"call_function args: {typestr(*args)} {typestr(*list(kwargs.values()))}"
File "/scratch/eellison/work/pytorch/torch/_dynamo/exc.py", line 143, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_function args: TensorVariable() TensorVariable() TensorVariable() ConstantVariable(float) NNModuleVariable()

This is fixable by just inlining into `triplet_margin_loss` and continuing to compile it. This required support for `has_torch_function_variadic`.
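
A hedged repro sketch of the scenario above, using the with-distance variant that exposes the Callable parameter (shapes and values are made up):

```python
import torch
import torch.nn.functional as F

@torch.compile
def loss_fn(anchor, positive, negative):
    return F.triplet_margin_with_distance_loss(
        anchor, positive, negative,
        distance_function=torch.nn.PairwiseDistance(),  # Callable / NNModule argument
        margin=1.0,
    )

a, p, n = (torch.randn(8, 16) for _ in range(3))
print(loss_fn(a, p, n))
```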

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110302
Approved by: https://github.com/mlazos
2023-10-03 20:26:13 +00:00
cdzhan
175b626216 Enable torch.promote_types in Dynamo tracing (#110358)
Fixes #109508
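
A small sketch of what tracing `torch.promote_types` enables (assumed usage, not from the PR itself):

```python
import torch

@torch.compile
def fn(x, y):
    # The promoted dtype is computed at trace time instead of graph-breaking.
    dtype = torch.promote_types(x.dtype, y.dtype)
    return x.to(dtype) + y.to(dtype)

fn(torch.randn(3, dtype=torch.float32), torch.randn(3, dtype=torch.float64))
```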

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110358
Approved by: https://github.com/Skylion007
2023-10-02 15:20:36 +00:00
Yukio Siraichi
6f48d872d0 Re-land: Break graph on manual_seed. (#109109)
Re-landing: #108647 (old #107594)
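
A hedged sketch of the behavior being re-landed: `torch.manual_seed` inside a compiled region triggers a graph break rather than being traced into the graph.

```python
import torch
import torch._dynamo as dynamo

def fn(x):
    torch.manual_seed(0)            # assumed: graph breaks here
    return x + torch.rand_like(x)

explanation = dynamo.explain(fn)(torch.ones(4))
print(explanation.graph_break_count)
```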

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109109
Approved by: https://github.com/lezcano
2023-09-28 15:28:40 +00:00
Tugsbayasgalan Manlaibaatar
bf7307adf8 Support inference_mode decorator (#109274)
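
A minimal usage sketch (assumed, not from this PR): a function decorated with `torch.inference_mode` can now be compiled instead of forcing a fallback.

```python
import torch

@torch.inference_mode()
def fn(x):
    return x * 2

compiled = torch.compile(fn)
print(compiled(torch.randn(4)))
```
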
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109274
Approved by: https://github.com/williamwen42
2023-09-27 22:21:42 +00:00
Moritz Hennen
09c598745c Rename torch._C._TensorBase to TensorBase (#109940)
This renames the type `torch._C._TensorBase` to the non-private class name `TensorBase`.
The changes also keep `torch._C._TensorBase` as an alias to the new type, both in the C++ code (70458768fb/torch/csrc/autograd/python_variable.cpp (L2196-L2197)) and in the corresponding `__init__.pyi.in` file (70458768fb/torch/_C/__init__.pyi.in (L1522)).

Fixes #109438
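
A short illustration of the rename, grounded in the alias described above:

```python
import torch

# The new public name and the old private alias refer to the same base type.
assert torch._C.TensorBase is torch._C._TensorBase
print(issubclass(torch.Tensor, torch._C.TensorBase))
```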

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109940
Approved by: https://github.com/ezyang
2023-09-25 19:10:22 +00:00
Animesh Jain
8ed08e5a7c [dynamo] Graph break on rng get/set state - remove GeneratorStateSource (#109410)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109410
Approved by: https://github.com/ezyang
ghstack dependencies: #109411
2023-09-22 22:31:55 +00:00
Michael Voznesensky
a902150a1e [Easy] ConstantVariable() -> .create (#109896)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109896
Approved by: https://github.com/ezyang
2023-09-22 22:30:15 +00:00
Edward Z. Yang
518308a740 Trace through pytree API with dynamo. (#108533)
Fix: #107315

This PR enables dynamo to trace through the `pytree` API by inlining its functions. In
order to do so, a few details of `pytree` had to be changed.

In summary, this PR:

- Introduces `TreeSpecVariable` for representing `TreeSpec` instances
- Specializes `<type>.__bases__` call, returning a `TupleVariable`
- Enables the call to `id` builtin function for every variable that implements
  `as_python_constant` method
- Specializes `ConstantVariable.call_method` for its (un)flatten functions
- Implements `UserDefinedObjectVariable.as_python_constant`
- Modifies `pytree` by:
    - Making `SUPPORTED_NODES` a map of ids (instead of types) to `NodeDef`
    - Removing the `functools.wraps` wrapper, since it can't be inlined
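
A minimal sketch of the tracing this enables (structure and values assumed): dynamo inlines `tree_flatten`/`tree_unflatten` instead of graph-breaking on them.

```python
import torch
import torch.utils._pytree as pytree

@torch.compile
def fn(inputs):
    leaves, spec = pytree.tree_flatten(inputs)
    leaves = [leaf + 1 for leaf in leaves]
    return pytree.tree_unflatten(leaves, spec)

out = fn({"a": torch.ones(2), "b": (torch.zeros(2), torch.ones(2))})
```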

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108533
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
ghstack dependencies: #109201
2023-09-20 00:04:56 +00:00
Edward Z. Yang
677a1010e6 Implement traceable torch.tensor when you have SymInt/SymFloat inputs (#109515)
I just ported the C++ torch.tensor implementation to Python, swapping out the inner bits to successively stack tensors together, so that we can trace through `scalar_tensor`.
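
A hedged sketch of the kind of code this makes traceable (assumed usage): with dynamic shapes, `x.shape[0]` is a SymInt, and `torch.tensor` built from it can now be traced instead of breaking the graph.

```python
import torch

@torch.compile(dynamic=True)
def fn(x):
    return torch.tensor([x.shape[0]], dtype=torch.float32) * x.sum()

fn(torch.randn(5))
fn(torch.randn(7))
```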

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109515
Approved by: https://github.com/voznesenskym
ghstack dependencies: #109513
2023-09-19 13:19:57 +00:00
aashishthakur10
9e86a093e4 add torch.device to python type (#108116)
Fixes #107856

This PR adds a torch.device instance check to the python_type method for torch variables in dynamo.
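
A rough sketch of the kind of code this unblocks (assumed, not from the PR):

```python
import torch

@torch.compile
def fn(x):
    dev = torch.device("cpu")
    # dynamo can now report torch.device as the Python type of `dev`.
    if isinstance(dev, torch.device):
        return x + 1
    return x

fn(torch.randn(3))
```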

@ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108116
Approved by: https://github.com/msaroufim, https://github.com/ezyang
2023-09-18 02:20:30 +00:00
Edward Z. Yang
e027de2c86 Add torch.distributed get_rank and get_world_size to constant_fold_functions (#109029)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
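
A hedged sketch (assumes a process group has already been initialized, e.g. via `dist.init_process_group`): these calls are now folded to constants at trace time instead of appearing in, or breaking, the graph.

```python
import torch
import torch.distributed as dist

@torch.compile
def fn(x):
    # get_rank()/get_world_size() become constants during tracing.
    return x * dist.get_world_size() + dist.get_rank()
```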

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109029
Approved by: https://github.com/bdhirsh
2023-09-13 00:52:43 +00:00
PyTorch MergeBot
8caaa4f4cd Revert "Re-land: Break graph on manual_seed. (#108647)"
This reverts commit c887309437.

Reverted https://github.com/pytorch/pytorch/pull/108647 on behalf of https://github.com/huydhn due to Ouch, we are hit again by another internal import error from https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L205-L206 ([comment](https://github.com/pytorch/pytorch/pull/108647#issuecomment-1712230103))
2023-09-08 21:18:00 +00:00
Yukio Siraichi
c887309437 Re-land: Break graph on manual_seed. (#108647)
Trying to re-land #107594.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108647
Approved by: https://github.com/eellison
2023-09-07 12:52:38 +00:00
Yanbo Liang
027e3b7910 [Forward-fix] check if source is None when using tensor out variants (#108700)
Summary: As title

Test Plan: Sandcastle

Reviewed By: JacobSzwejbka

Differential Revision: D49029357

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108700
Approved by: https://github.com/angelayi
2023-09-07 01:51:02 +00:00
PyTorch MergeBot
48286d34a4 Revert "Break graph on manual_seed. (#107594)"
This reverts commit 6ad5568cbc.

Reverted https://github.com/pytorch/pytorch/pull/107594 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it has an import issue that breaks internal code ([comment](https://github.com/pytorch/pytorch/pull/107594#issuecomment-1705584405))
2023-09-04 18:00:37 +00:00
Yukio Siraichi
2e3fce5450 Add dynamo support for rdiv dunder method. (#108422)
Fix: #106646
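
A hedged guess at the shape of the fix (the exact repro lives in #106646 and is not reproduced here): the reverse-division dunder on a traced tensor is handled by dynamo instead of causing a graph break.

```python
import torch

@torch.compile
def fn(x):
    return x.__rdiv__(2.0)   # equivalent to 2.0 / x

fn(torch.randn(4))
```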

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108422
Approved by: https://github.com/eellison
2023-09-02 00:59:22 +00:00
Wanchao Liang
a29b9101fa [dynamo] fix dynamo + DTensor to work with 2d (#108329)
Pair-debugged with @wconstab; we found issues on both the dynamo side and the TP FSDP extension side. This PR fixes the dynamo + DTensor integration
so that the current graph-break FSDP can work with tensor parallel by moving
torch.compile after FSDP wrapping.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108329
Approved by: https://github.com/Skylion007, https://github.com/wconstab
2023-08-31 22:46:26 +00:00
Yanbo Liang
dabdb97087 [Dynamo] Graph break on functions using tensor out variants (#108182)
Fixes #108021

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108182
Approved by: https://github.com/eellison
2023-08-31 17:49:14 +00:00
Yukio Siraichi
6ad5568cbc Break graph on manual_seed. (#107594)
Fix: #107187

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107594
Approved by: https://github.com/eellison
2023-08-30 17:24:11 +00:00
PyTorch MergeBot
4e47ea5131 Revert "Break graph on manual_seed. (#107594)"
This reverts commit 6c28de2437.

Reverted https://github.com/pytorch/pytorch/pull/107594 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it seems to cause failures in trunk on inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCUDA::test_comprehensive_uniform_cuda_float, likely a landrace ([comment](https://github.com/pytorch/pytorch/pull/107594#issuecomment-1697783965))
2023-08-29 16:38:01 +00:00
Yukio Siraichi
6c28de2437 Break graph on manual_seed. (#107594)
Fix: #107187

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107594
Approved by: https://github.com/eellison
2023-08-29 12:59:57 +00:00
Jason Ansel
73235d08c3 [dynamo] Graph break on pack_padded_sequence (#108096)
This is to workaround #93501.

Fixes errors in:
```
./benchmarks/dynamo/torchbench.py --inference --performance --no-skip --inductor --freezing --only tacotron2
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108096
Approved by: https://github.com/davidberard98
2023-08-29 00:08:11 +00:00
lezcano
db39a81e1e Add a flag that allows breaking on NumPy ops (#107687)
This was removed in 63d406a6a9
Restoring it, as it's rather useful for debugging.
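
A hedged usage sketch; the flag is assumed to be the `trace_numpy` config toggle. With it disabled, dynamo breaks the graph on NumPy calls instead of tracing them.

```python
import torch
import torch._dynamo as dynamo

dynamo.config.trace_numpy = False   # assumed flag name

@torch.compile
def fn(x):
    return torch.from_numpy(x.numpy() * 2)   # graph-breaks, runs eagerly

fn(torch.randn(4))
```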

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107687
Approved by: https://github.com/larryliu0820
2023-08-23 01:21:22 +00:00
lezcano
612c8a8c84 Guard numpy imports in the dynamo folder (#107299)
Fixes https://github.com/pytorch/pytorch/issues/107228

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107299
Approved by: https://github.com/atalman
2023-08-21 19:07:20 +00:00
Will Constable
eee2f57257 Raise TypeError for calling moduletype in dynamo (#107393)
Fixes #107314
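
A hedged sketch of the assumed behavior: calling a module object is a TypeError in eager, and the compiled function now raises the same TypeError.

```python
import types
import torch

@torch.compile
def fn(x):
    types(x)   # a module is not callable
    return x

try:
    fn(torch.randn(2))
except TypeError as exc:
    print("raised as expected:", exc)
```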

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107393
Approved by: https://github.com/williamwen42
2023-08-19 20:04:33 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all of it into core.

In the next commits, I do a number of things in this order
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks instead. There used to be one, but it stopped working after the merge and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was joint work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient) as this is a collaboration, and ghstack doesn't allow for shared contributions.
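
A minimal sketch of the feature (assumed usage, matching the pattern described above): NumPy calls inside a compiled function are traced and executed through torch instead of breaking the graph.

```python
import numpy as np
import torch

@torch.compile
def fn(x):
    a = x.numpy()
    b = np.sin(a) + np.ones_like(a)
    return torch.from_numpy(b)

print(fn(torch.randn(4)))
```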

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
kshitij12345
cce2c52b0b [pt2] support vmap (#101707)
Teach dynamo about `vmap`
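
A minimal usage sketch (assumed): `torch.func.vmap` can now be traced by dynamo inside a compiled function.

```python
import torch
from torch.func import vmap

@torch.compile
def fn(x):
    return vmap(torch.sin)(x)

fn(torch.randn(8, 4))
```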

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101707
Approved by: https://github.com/zou3519
2023-08-09 03:39:33 +00:00
Tugsbayasgalan Manlaibaatar
df50f91571 Support fx_pytree in dynamo (#105574)
This PR does two things:
1. Make dynamo trace through fx_pytree (on top of torch.utils._pytree) so that generated graph modules can be retraced.
2. Fix a bug where unflatten was not returning a dynamo VariableTracker.

Differential Revision: [D47734623](https://our.internmc.facebook.com/intern/diff/D47734623)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105574
Approved by: https://github.com/yanboliang, https://github.com/ydwu4
2023-07-29 05:08:15 +00:00
Jason Ansel
099345f1e5 [Compiled Autograd] Handle aten.sym_size/aten.sym_stride (#105814)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105814
Approved by: https://github.com/voznesenskym
2023-07-28 21:42:51 +00:00
kshitij12345
920b446da9 dynamo: support disable_saved_tensors_hooks (#104869)
Functorch transforms use this context manager which will lead to graph-breaks.
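
A hedged sketch of the assumed behavior: entering this context manager inside a compiled function no longer forces a graph break.

```python
import torch

@torch.compile
def fn(x):
    with torch.autograd.graph.disable_saved_tensors_hooks("not allowed here"):
        return x.sin().cos()

fn(torch.randn(4, requires_grad=True))
```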

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104869
Approved by: https://github.com/zou3519
2023-07-26 07:27:37 +00:00
Wanchao Liang
c76c84bde4 [dynamo] make ProcessGroupVariable a DistributedVariable (#105593)
This PR moves the ProcessGroupVariable from UDO to DistributedVT
so that distributed VTs are consolidated together.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105593
Approved by: https://github.com/voznesenskym
2023-07-26 06:42:50 +00:00
Michael Voznesensky
bf693f2000 Strengthen ConstantVariable invariants (#105796)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105796
Approved by: https://github.com/ezyang
2023-07-24 20:41:12 +00:00
Wanchao Liang
f139aab2f4 [dynamo] add initial dynamo support for DTensor (#103146)
This PR adds initial dynamo support for DTensor, in particular, it:
- allows DTensor to be passed into a compiled function, and allows fakifying
DTensor during dynamo tracing by turning the inner local tensor into a meta
tensor.
- We use `allow_in_graph` to include `DTensor` and `DTensor.from_local` to be represented as `TorchVariable`
- The DTensor created becomes a normal `TensorVariable`, and any tensor operations on it are inserted into the output graph just like for torch.Tensor
- note that DTensor has an extra instance method `redistribute` compared to a plain tensor, and we currently special-case it in `TensorVariable`

`from_local` and `redistribute` both accept some non-trivial metadata as arguments (i.e. DeviceMesh, Placement) which fx.Graph does not support. In order to let these two APIs appear in the dynamo captured graph, we encode the metadata into a new function (like `functools.partial`) that only accepts prim args (i.e. tensors), then put a `call_function` node with this new function into the graph. This was suggested by @ezyang. The underlying rationale is that the metadata will not change across graph invocations, so it's safe to encode it.
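
For reference, a hedged reconstruction of the user-level code being compiled, pieced together from the comments in the captured graph below (assumes an initialized process group and a 1-D device mesh; import paths are assumptions):

```python
import torch
import torch.distributed as dist
from torch.distributed._tensor import DTensor, DeviceMesh, Shard, Replicate

# Assumed setup: torch.distributed is initialized; the mesh spans all ranks.
mesh = DeviceMesh("cuda", list(range(dist.get_world_size())))

@torch.compile
def fn(x):
    dt = DTensor.from_local(x, mesh, [Shard(0)], run_check=False)
    return dt.redistribute(mesh, [Replicate()]).to_local() + 2
```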

Captured graph:
```
    def forward(self, L_x_ : torch.Tensor):
        l_x_ = L_x_

        # File: /scratch/wanchaol/work/pytorch/test/distributed/_tensor/test_dtensor.py:685, code: dt = DTensor.from_local(x, mesh, [Shard(0)], run_check=False)
        prim_from_local = torch__dynamo_variables_torch_prim_from_local(l_x_, run_check = False);  l_x_ = None

        # File: /scratch/wanchaol/work/pytorch/test/distributed/_tensor/test_dtensor.py:686, code: return dt.redistribute(mesh, [Replicate()]).to_local() + 2
        prim_redistribute = torch__dynamo_variables_tensor_prim_redistribute(prim_from_local);  prim_from_local = None
        to_local = prim_redistribute.to_local();  prim_redistribute = None
        add = to_local + 2;  to_local = None
        return (add,)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103146
Approved by: https://github.com/voznesenskym
2023-07-19 16:01:12 +00:00
David Berard
ad6dad810e [dynamo][profiler] More verbose profiler warning (#105362)
torch.profiler.record_function and torch.profiler.profile are ignored by dynamo. In the common case, users have `record_function` in the middle of their program in order to annotate a section of the profile.

The previous error message was `Profiler will be ignored`. Users would think that profiling would be completely ignored.

Now the message will look like `Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105362
Approved by: https://github.com/yanboliang, https://github.com/aaronenyeshi
2023-07-18 04:42:13 +00:00
Michael Lazos
05eea20eb9 [dynamo] Simulate torch function enablement state (#105091)
Part of https://github.com/pytorch/pytorch/issues/93723

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105091
Approved by: https://github.com/voznesenskym, https://github.com/anijain2305
2023-07-13 17:42:20 +00:00
Michael Lazos
0433cb0596 [dynamo] simulate tracing tree_map_only (#104815)
Fixes #ISSUE_NUMBER
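
A minimal usage sketch (assumed; the issue number above was left unfilled):

```python
import torch
from torch.utils._pytree import tree_map_only

@torch.compile
def fn(inputs):
    # tree_map_only applies the function only to leaves of the given type.
    return tree_map_only(torch.Tensor, lambda t: t * 2, inputs)

fn({"a": torch.ones(2), "b": 3})
```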

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104815
Approved by: https://github.com/voznesenskym
2023-07-10 18:05:35 +00:00
Animesh Jain
4005152b92 [dynamo] Organize higherorderops variable trackers (#104565)
The main change is moving the higherorderops from torch.py to higher_order_ops.py. And creating smaller subclasses of HigherOrderOp for cond, map etc

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104565
Approved by: https://github.com/zou3519
2023-07-05 22:19:26 +00:00
William Wen
76a91075ea propagate pred guards in TorchHigherOrderOperatorVariable call_function for cond (#104379)
Fixes https://github.com/pytorch/pytorch/issues/104372

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104379
Approved by: https://github.com/voznesenskym, https://github.com/ydwu4, https://github.com/zou3519
2023-06-29 20:47:00 +00:00
Animesh Jain
c0aa442cb5 [dynamo][higher order op] Relaxing too restrictive check for output to be a list/tuple of tensors (#104221)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104221
Approved by: https://github.com/ydwu4, https://github.com/zou3519
2023-06-28 00:30:43 +00:00
Animesh Jain
75dab587ef [dynamo] FSDP + AC + torch.compile (#103953)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103953
Approved by: https://github.com/wanchaol
2023-06-24 01:40:56 +00:00
Michael Voznesensky
ec24f1e4cc Simulate treespec flattening/unflattening (#101896)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101896
Approved by: https://github.com/jansel, https://github.com/anijain2305
2023-06-23 10:53:15 +00:00
kshitij12345
d552c271db [pt2] grad support (#102264)
Teach dynamo about grad
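
A minimal usage sketch (assumed): `torch.func.grad` can now be traced by dynamo inside a compiled function.

```python
import torch
from torch.func import grad

@torch.compile
def fn(x):
    return grad(lambda t: (t ** 2).sum())(x)

fn(torch.randn(4))
```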

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102264
Approved by: https://github.com/zou3519
2023-06-21 10:13:09 +00:00
PyTorch MergeBot
e737a8486f Revert "[pt2] grad support (#102264)"
This reverts commit 85b83954c8.

Reverted https://github.com/pytorch/pytorch/pull/102264 on behalf of https://github.com/huydhn due to This is failing in trunk 85b83954c8 and looks like a landrace ([comment](https://github.com/pytorch/pytorch/pull/102264#issuecomment-1600001309))
2023-06-21 03:02:55 +00:00
kshitij12345
85b83954c8 [pt2] grad support (#102264)
Teach dynamo about grad

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102264
Approved by: https://github.com/zou3519
2023-06-21 01:37:08 +00:00
Tugsbayasgalan Manlaibaatar
d4b85f3031 Support params/buffers inside cond and map (#102310)
With #102022, params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.
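
A hedged sketch of the kind of program this supports (module shape and import path are assumptions, not from this PR): parameters referenced inside cond branches are treated as free variables during tracing.

```python
import torch
from functorch.experimental.control_flow import cond

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return cond(x.sum() > 0,
                    lambda x: self.linear(x),   # closes over a parameter
                    lambda x: x,
                    [x])

torch.compile(M())(torch.randn(2, 4))
```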

Differential Revision: [D46746202](https://our.internmc.facebook.com/intern/diff/D46746202)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri, https://github.com/zou3519
2023-06-20 05:33:10 +00:00
PyTorch MergeBot
2087d32811 Revert "Support params/buffers inside cond and map (#102310)"
This reverts commit 766f236bad.

Reverted https://github.com/pytorch/pytorch/pull/102310 on behalf of https://github.com/huydhn due to The test is failing in trunk 766f236bad ([comment](https://github.com/pytorch/pytorch/pull/102310#issuecomment-1592159710))
2023-06-15 00:29:20 +00:00
Tugsbayasgalan Manlaibaatar
766f236bad Support params/buffers inside cond and map (#102310)
With #102022, params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri, https://github.com/zou3519
2023-06-14 22:32:33 +00:00