Commit Graph

54 Commits

Author SHA1 Message Date
ydwu4
d84bcb9c8c [HigherOrderOp] expose torch.cond (#110293)
This PR exposes torch._higher_order_ops.cond as torch.cond.

1. We need to add #noqa: F811 to the _check calls in torch/__init__.py to address a confusing linter error, "Redefinition of unused 'cond'": only one cond is imported, and the lines flagged with this error don't define cond, they just use it as an argument.
2. Also add cond to the list of functions allowed to be traced through, so that dynamo triggers the CondHigherOrder logic instead of creating a TorchVariable.
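
For illustration, a minimal usage sketch of the newly exposed API (hedged: the exact operand-packing convention may vary slightly across versions):

```python
import torch

def true_fn(x):
    return torch.sin(x)

def false_fn(x):
    return torch.cos(x)

@torch.compile(fullgraph=True)
def f(pred, x):
    # torch.cond dispatches to true_fn/false_fn based on pred; operands are
    # passed as a tuple here (a list also worked in earlier versions)
    return torch.cond(pred, true_fn, false_fn, (x,))

print(f(torch.tensor(True), torch.randn(4)))
```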

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110293
Approved by: https://github.com/zou3519
2023-10-07 20:39:52 +00:00
PyTorch MergeBot
576b80d23e Revert "[HigherOrderOp] expose torch.cond (#110293)"
This reverts commit 601f872831.

Reverted https://github.com/pytorch/pytorch/pull/110293 on behalf of https://github.com/ydwu4 due to Sorry, didn't check the error carefully on the PR. A doc error is related to this pr ([comment](https://github.com/pytorch/pytorch/pull/110293#issuecomment-1751176719))
2023-10-06 17:44:17 +00:00
ydwu4
601f872831 [HigherOrderOp] expose torch.cond (#110293)
This PR exposes torch._higher_order_ops.cond as torch.cond.

1. We need to add #noqa: F811 to the _check calls in torch/__init__.py to address a confusing linter error, "Redefinition of unused 'cond'": only one cond is imported, and the lines flagged with this error don't define cond, they just use it as an argument.
2. Also add cond to the list of functions allowed to be traced through, so that dynamo triggers the CondHigherOrder logic instead of creating a TorchVariable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110293
Approved by: https://github.com/zou3519
2023-10-06 17:04:31 +00:00
eellison
98c8550158 Fix Triplet Margin Loss Opinfo (#110302)
Triplet Margin Loss takes in a Callable `distance_function` parameter which is not supported as an argument on the fx graph. See previous error:

> File "/scratch/eellison/work/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/scratch/eellison/work/pytorch/torch/_dynamo/variables/torch.py", line 723, in call_function
*proxy_args_kwargs(args, kwargs),
File "/scratch/eellison/work/pytorch/torch/_dynamo/utils.py", line 504, in proxy_args_kwargs
f"call_function args: {typestr(*args)} {typestr(*list(kwargs.values()))}"
File "/scratch/eellison/work/pytorch/torch/_dynamo/exc.py", line 143, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_function args: TensorVariable() TensorVariable() TensorVariable() ConstantVariable(float) NNModuleVariable()

This is fixable by just inlining into `triplet_margin_loss` and continuing to compile it. This required support for `has_torch_function_variadic`.
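
For context, a hedged sketch of the pattern the OpInfo exercises (the particular callable distance function shown here is just an illustrative choice):

```python
import torch
import torch.nn.functional as F

@torch.compile(backend="eager")
def loss_fn(anchor, positive, negative):
    # the Callable distance_function can't be placed on the FX graph, so
    # Dynamo now inlines into the loss instead of failing as above
    return F.triplet_margin_with_distance_loss(
        anchor, positive, negative,
        distance_function=torch.nn.PairwiseDistance(p=2),
    )

a, p, n = (torch.randn(8, 16) for _ in range(3))
print(loss_fn(a, p, n))
```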

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110302
Approved by: https://github.com/mlazos
2023-10-03 20:26:13 +00:00
Yanbo Liang
9bc5e10899 [New][1/N] Dynamo skipfiles refactor (#110330)
This is the replacement of #109567. Now I preserve all existing semantics and focus only on the API (for developers) and code structure changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110330
Approved by: https://github.com/ezyang
2023-10-03 16:50:33 +00:00
atalman
b253fc9c93 Revert "[1/N] Dynamo skipfiles refactor (#109567)" (#110296)
This reverts commit 84c5435b29.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110296
Approved by: https://github.com/yanboliang
2023-09-29 20:35:46 +00:00
Yanbo Liang
84c5435b29 [1/N] Dynamo skipfiles refactor (#109567)
This is 1/N of the dynamo skipfiles/allowed_functions refactor; the major changes in this PR include:
* Refactor & define the [skipfiles rules](https://github.com/pytorch/pytorch/pull/109567/files#diff-5aa3ce9db729bf0901ea97a5d3cc51924cc8575d9c516c1c8f572a35de92544aR56) and interface
* For every ```skipfiles.check```, we return both the check result and the skip/inline reason and log them for debugging.
* We found several latent issues/bugs and incorrect implementations in the codebase, but I'm planning to fix them in follow-up PRs to keep the refactor decoupled from bug fixes.
* More details in the inline comments.
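
A purely hypothetical sketch of the check-result-plus-reason pattern described above (names like `SkipResult` and `check_file` are illustrative only, not the actual torch._dynamo API):

```python
from dataclasses import dataclass

@dataclass
class SkipResult:
    skipped: bool   # True -> skip the frame, False -> inline it
    reason: str     # logged for debugging

def check_file(filename: str, inline_rules, skip_rules) -> SkipResult:
    # every check returns both the decision and the rule that produced it
    if any(filename.startswith(prefix) for prefix in inline_rules):
        return SkipResult(False, f"inlined: {filename} matches an inline rule")
    if any(filename.startswith(prefix) for prefix in skip_rules):
        return SkipResult(True, f"skipped: {filename} matches a skip rule")
    return SkipResult(True, "skipped: no rule matched")
```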

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109567
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/anijain2305
2023-09-28 18:36:46 +00:00
PyTorch MergeBot
75462fd870 Revert "[1/N] Dynamo skipfiles refactor (#109567)"
This reverts commit f8e0ebec8c.

Reverted https://github.com/pytorch/pytorch/pull/109567 on behalf of https://github.com/huydhn due to Many jobs are failing in trunk after this with FILENAME_ALLOWLIST is not defined error f8e0ebec8c. This looks like a landrace ([comment](https://github.com/pytorch/pytorch/pull/109567#issuecomment-1738344950))
2023-09-28 02:22:22 +00:00
Yanbo Liang
f8e0ebec8c [1/N] Dynamo skipfiles refactor (#109567)
This is 1/N of the dynamo skipfiles/allowed_functions refactor; the major changes in this PR include:
* Refactor & define the [skipfiles rules](https://github.com/pytorch/pytorch/pull/109567/files#diff-5aa3ce9db729bf0901ea97a5d3cc51924cc8575d9c516c1c8f572a35de92544aR56) and interface
* For every ```skipfiles.check```, we return both the check result and the skip/inline reason and log them for debugging.
* We found several latent issues/bugs and incorrect implementations in the codebase, but I'm planning to fix them in follow-up PRs to keep the refactor decoupled from bug fixes.
* More details in the inline comments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109567
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/anijain2305
2023-09-28 01:21:59 +00:00
Edward Z. Yang
518308a740 Trace through pytree API with dynamo. (#108533)
Fix: #107315

This PR enables dynamo to trace through the `pytree` API by inlining its functions. In
order to do so, a few details of `pytree` had to be changed.

In summary, this PR:

- Introduces `TreeSpecVariable` for representing `TreeSpec` instances
- Specializes `<type>.__bases__` call, returning a `TupleVariable`
- Enables calling the `id` builtin function on every variable that implements
  the `as_python_constant` method
- Specializes `ConstantVariable.call_method` for its (un)flatten functions
- Implements `UserDefinedObjectVariable.as_python_constant`
- Modifies `pytree` by:
    - Making `SUPPORTED_NODES` a map of ids (instead of types) to `NodeDef`
    - Removing the `functools.wraps` function, since it can't be inlined
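
A minimal hedged sketch of the behavior this enables (exact coverage may differ by version):

```python
import torch
import torch.utils._pytree as pytree

@torch.compile(backend="eager", fullgraph=True)
def add_one_to_leaves(inputs):
    # Dynamo now inlines through tree_flatten/tree_unflatten instead of
    # graph-breaking on the pytree helpers
    leaves, spec = pytree.tree_flatten(inputs)
    return pytree.tree_unflatten([leaf + 1 for leaf in leaves], spec)

print(add_one_to_leaves({"a": torch.zeros(2), "b": (torch.ones(2), torch.zeros(2))}))
```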

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108533
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
ghstack dependencies: #109201
2023-09-20 00:04:56 +00:00
Animesh Jain
2b6d983b8b Reland [dynamo][activation checkpointing] Trace through ActivationWrapper (#109327)
Fixes https://github.com/pytorch/pytorch/issues/108269
Original reverted PR - https://github.com/pytorch/pytorch/pull/108599

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109327
Approved by: https://github.com/aakhundov
2023-09-15 03:43:59 +00:00
Animesh Jain
5349615240 [dynamo] Unblock a model with jit.isinstance (#109178)
Prevents this error:

```
File "/tmp/jetter.azp5q59y/torch/fx/proxy.py", line 291, in create_arg
python/0     raise NotImplementedError(f"argument of type: {type(a)}")
python/0 torch._dynamo.exc.InternalTorchDynamoError: argument of type: <class 'typing._GenericAlias'>
```
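
A hedged sketch of the kind of call this unblocks (the actual model code is not shown in the PR, so this is illustrative only):

```python
from typing import List
import torch

@torch.compile(backend="eager")
def f(x, container):
    # List[torch.Tensor] is a typing._GenericAlias, which previously failed
    # when Dynamo tried to create an fx proxy argument for it
    if torch.jit.isinstance(container, List[torch.Tensor]):
        return x + container[0]
    return x

print(f(torch.ones(3), [torch.ones(3)]))
```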

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109178
Approved by: https://github.com/yanboliang
2023-09-15 01:19:46 +00:00
PyTorch MergeBot
77691e8bc3 Revert "[dynamo][activation checkpointing] Trace through ActivationWrapper (#108599)"
This reverts commit 9efe0f7bf2.

Reverted https://github.com/pytorch/pytorch/pull/108599 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but test_ddp_activation_checkpointing is failing distributed ROCm test in trunk ([comment](https://github.com/pytorch/pytorch/pull/108599#issuecomment-1710479387))
2023-09-07 16:47:40 +00:00
Animesh Jain
9efe0f7bf2 [dynamo][activation checkpointing] Trace through ActivationWrapper (#108599)
Fixes https://github.com/pytorch/pytorch/issues/108269

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108599
Approved by: https://github.com/rohan-varma
2023-09-07 00:32:18 +00:00
lezcano
612c8a8c84 Guard numpy imports in the dynamo folder (#107299)
Fixes https://github.com/pytorch/pytorch/issues/107228

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107299
Approved by: https://github.com/atalman
2023-08-21 19:07:20 +00:00
lezcano
4eac43d046 Trace through Tensor slots (#107159)
Namely
```
__delattr__
__delitem__
__getattribute__
__getitem__
__setattr__
__setitem__
__str__
```

We don't trace through `__init__`.
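
A minimal hedged sketch exercising two of the slots above:

```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    x[0] = x[1] * 2      # Tensor.__setitem__
    return x[0] + x[-1]  # Tensor.__getitem__

print(f(torch.arange(4.0)))
```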

Fixes https://github.com/pytorch/pytorch/issues/106648

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107159
Approved by: https://github.com/Skylion007
2023-08-19 08:56:25 +00:00
Animesh Jain
7cb2a6bfab [dynamo][fallback] Fallback to eager when backend fails with fake tensor exceptions (#107179)
Example (I think we should fix this test case for real, but I'm using it here to test the UX around fallbacks):

~~~
@torch.compile(backend="aot_eager")
def fn(x):
    return torch.sum(x, dim=1).tolist()

print(fn(torch.rand(4, 4).to(dtype=torch.int64)))
~~~

Running the script as is

~~~
[2023-08-14 14:53:48,863] torch._dynamo.output_graph: [WARNING] Backend compiler failed with a fake tensor exception at
[2023-08-14 14:53:48,863] torch._dynamo.output_graph: [WARNING]   File "/data/users/anijain/pytorch/examples/spl.py", line 5, in fn
[2023-08-14 14:53:48,863] torch._dynamo.output_graph: [WARNING]     return torch.sum(x, dim=1).tolist()
[2023-08-14 14:53:48,863] torch._dynamo.output_graph: [WARNING] Falling back to eager for this frame. Please use TORCH_LOGS=graph_breaks to see the full stack trace.
[0, 0, 0, 0]
~~~

Running the script with TORCH_LOGS="graph_breaks"

~~~
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] WON'T CONVERT fn /data/users/anijain/pytorch/examples/spl.py line 3
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] ========== TorchDynamo Stack Trace ==========
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] Traceback (most recent call last):
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_dynamo/output_graph.py", line 995, in call_user_compiler
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     compiled_fn = compiler_fn(gm, self.example_inputs())
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     compiled_gm = compiler_fn(gm, example_inputs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/__init__.py", line 1586, in __call__
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return self.compiler_fn(model_, inputs_, **self.kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_dynamo/backends/common.py", line 55, in compiler_fn
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     cg = aot_module_simplified(gm, example_inputs, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_functorch/aot_autograd.py", line 3795, in aot_module_simplified
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     compiled_fn = create_aot_dispatcher_function(
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_dynamo/utils.py", line 194, in time_wrapper
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     r = func(*args, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_functorch/aot_autograd.py", line 3283, in create_aot_dispatcher_function
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     fw_metadata = run_functionalized_fw_and_collect_metadata(
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_functorch/aot_autograd.py", line 757, in inner
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     flat_f_outs = f(*flat_f_args)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_functorch/aot_autograd.py", line 3400, in functional_call
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     out = Interpreter(mod).run(*args[params_len:], **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/fx/interpreter.py", line 138, in run
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     self.env[node] = self.run_node(node)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/fx/interpreter.py", line 195, in run_node
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return getattr(self, n.op)(n.target, args, kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/fx/interpreter.py", line 289, in call_method
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return getattr(self_obj, target)(*args_tail, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/utils/_stats.py", line 20, in wrapper
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return fn(*args, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 1233, in __torch_dispatch__
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return self.dispatch(func, types, args, kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 1470, in dispatch
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     op_impl_out = op_impl(self, func, *args, **kwargs)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 501, in local_scalar_dense
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     raise DataDependentOutputException(func)
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] While executing %item : [num_users=1] = call_method[target=item](args = (%getitem,), kwargs = {})
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG] Original traceback:
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]   File "/data/users/anijain/pytorch/examples/spl.py", line 5, in fn
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]     return torch.sum(x, dim=1).tolist()
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]
[2023-08-14 14:54:15,689] torch._dynamo.output_graph.__graph_breaks: [DEBUG]
~~~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107179
Approved by: https://github.com/ezyang
2023-08-16 14:57:42 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as a external dependency. This PR pulls all these into core.

In the next commits, I do a number of things in this order
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply graph breaks. There used to be one, but it stopped working after the merge and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was joint work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient) since this is a collaboration and ghstack doesn't allow for shared contributions.
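
For readers, a minimal hedged sketch of the NumPy-in-torch.compile usage this PR enables:

```python
import numpy as np
import torch

@torch.compile
def numpy_fn(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # NumPy calls are traced via the torch_np compatibility layer and lowered
    # to torch ops under the hood
    return np.sum(x * y, axis=1)

print(numpy_fn(np.ones((4, 4), dtype=np.float32), np.ones((4, 4), dtype=np.float32)))
```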

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
kshitij12345
cce2c52b0b [pt2] support vmap (#101707)
Teach dynamo about `vmap`
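
A minimal hedged sketch of what "teaching dynamo about vmap" looks like from the user side:

```python
import torch

def per_row_dot(a, b):
    # torch.vmap is now traced by Dynamo instead of causing a graph break
    return torch.vmap(torch.dot)(a, b)

compiled = torch.compile(per_row_dot, backend="eager", fullgraph=True)
print(compiled(torch.randn(5, 3), torch.randn(5, 3)))
```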

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101707
Approved by: https://github.com/zou3519
2023-08-09 03:39:33 +00:00
Kshiteej K
af78e139a8 [functorch] fix dynamo support for functorch.grad (#106610)
Ref: https://github.com/pytorch/pytorch/pull/106475#discussion_r1282384503

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106610
Approved by: https://github.com/zou3519
2023-08-07 17:44:49 +00:00
Wanchao Liang
f139aab2f4 [dynamo] add initial dynamo support for DTensor (#103146)
This PR adds initial dynamo support for DTensor, in particular, it:
- allows DTensor to be passed into a compiled function, and allows fakifying
DTensor during dynamo tracing by turning the inner local tensor into a meta
tensor.
- We use `allow_in_graph` to include `DTensor` and `DTensor.from_local` so that they are represented as `TorchVariable`
- The DTensor created becomes a normal `TensorVariable`, and it inserts tensor operations into the output graph just like torch.Tensor
- note that DTensor has an extra instance method `redistribute` compared to plain tensors, and we currently special-case it in `TensorVariable`

`from_local` and `redistribute` both accept some non-trivial metadata as arguments (i.e. DeviceMesh, Placement) which fx.Graph does not support. In order to let these two APIs appear in the dynamo captured graph, we encode the metadata into a new_function (like `functools.partial`) and the new function only accepts prim args (i.e. tensor); then we put a `call_function` with this new_function into the graph. This was suggested by @ezyang. The underlying rationale here is that the metadata will not change across graph invocations, so it's safe to encode it.

Captured graph:
```
    def forward(self, L_x_ : torch.Tensor):
        l_x_ = L_x_

        # File: /scratch/wanchaol/work/pytorch/test/distributed/_tensor/test_dtensor.py:685, code: dt = DTensor.from_local(x, mesh, [Shard(0)], run_check=False)
        prim_from_local = torch__dynamo_variables_torch_prim_from_local(l_x_, run_check = False);  l_x_ = None

        # File: /scratch/wanchaol/work/pytorch/test/distributed/_tensor/test_dtensor.py:686, code: return dt.redistribute(mesh, [Replicate()]).to_local() + 2
        prim_redistribute = torch__dynamo_variables_tensor_prim_redistribute(prim_from_local);  prim_from_local = None
        to_local = prim_redistribute.to_local();  prim_redistribute = None
        add = to_local + 2;  to_local = None
        return (add,)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103146
Approved by: https://github.com/voznesenskym
2023-07-19 16:01:12 +00:00
kshitij12345
e137ac6c59 [dynamo][torch_np] support linalg, random and fft module (#105320)
Support tracing through `np.linalg` with `torch_np` installed. Will update with other modules if this approach makes sense.
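
A hedged sketch, assuming the NumPy tracing layer from #106211 is available:

```python
import numpy as np
import torch

@torch.compile
def normalize_rows(x: np.ndarray) -> np.ndarray:
    # np.linalg (like np.random and np.fft) can now be traced through
    return x / np.linalg.norm(x, axis=1, keepdims=True)

print(normalize_rows(np.random.rand(4, 3).astype(np.float32)))
```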

TODO:
* [x] Add test for `fft` and `random`.

Fixes https://github.com/pytorch/pytorch/issues/105269

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105320
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-07-19 11:06:37 +00:00
Michael Lazos
86680a6c0b [dynamo] handle calls to typing.cast (#104799)
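A minimal hedged sketch of the call pattern this handles:

```python
from typing import cast
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    y = cast(torch.Tensor, x)  # a no-op at runtime; Dynamo now traces past it
    return y + 1

print(f(torch.zeros(3)))
```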

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104799
Approved by: https://github.com/jansel
2023-07-10 21:05:17 +00:00
kshitij12345
d552c271db [pt2] grad support (#102264)
Teach dynamo about grad
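
A minimal hedged sketch from the user side:

```python
import torch
from torch.func import grad

def g(x):
    # torch.func.grad is now traced by Dynamo instead of causing a graph break
    return grad(lambda t: (t.sin() ** 2).sum())(x)

compiled = torch.compile(g, backend="eager", fullgraph=True)
print(compiled(torch.randn(4)))
```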

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102264
Approved by: https://github.com/zou3519
2023-06-21 10:13:09 +00:00
PyTorch MergeBot
e737a8486f Revert "[pt2] grad support (#102264)"
This reverts commit 85b83954c8.

Reverted https://github.com/pytorch/pytorch/pull/102264 on behalf of https://github.com/huydhn due to This is failing in trunk 85b83954c8 and looks like a landrace ([comment](https://github.com/pytorch/pytorch/pull/102264#issuecomment-1600001309))
2023-06-21 03:02:55 +00:00
kshitij12345
85b83954c8 [pt2] grad support (#102264)
Teach dynamo about grad

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102264
Approved by: https://github.com/zou3519
2023-06-21 01:37:08 +00:00
Michael Lazos
c75e064dd6 Disallow _foreach_utils.py, but allow it to be inlined (#102221)
This function should not be allowed, but should be inlineable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102221
Approved by: https://github.com/anijain2305
2023-06-02 05:14:09 +00:00
PyTorch MergeBot
8aa48315de Revert "Disallow _foreach_utils.py, but allow it to be inlined (#102221)"
This reverts commit 552299c42c.

Reverted https://github.com/pytorch/pytorch/pull/102221 on behalf of https://github.com/huydhn due to Sorry for reverting your PR. It starts to break dynamo jobs in trunk 552299c42c and it looks like a landrace ([comment](https://github.com/pytorch/pytorch/pull/102221#issuecomment-1563694599))
2023-05-26 01:27:19 +00:00
Michael Lazos
552299c42c Disallow _foreach_utils.py, but allow it to be inlined (#102221)
This function should not be allowed, but should be inlineable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102221
Approved by: https://github.com/anijain2305
2023-05-25 23:48:36 +00:00
PyTorch MergeBot
d0bb8fdc64 Revert "[dynamo] Minor refactor to use is_allowed to decide inlining of NNModule methods (#101910)"
This reverts commit 8b2a9f81cc.

Reverted https://github.com/pytorch/pytorch/pull/101910 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/101910#issuecomment-1556782524))
2023-05-22 08:37:12 +00:00
Animesh Jain
8b2a9f81cc [dynamo] Minor refactor to use is_allowed to decide inlining of NNModule methods (#101910)
Fixes #101609

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101910
Approved by: https://github.com/yanboliang
2023-05-20 03:34:20 +00:00
Yanbo Liang
29de581764 [Dynamo] Graph break on torch.cuda.set_device() (#101668)
Fixes #97280

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101668
Approved by: https://github.com/jansel
2023-05-17 21:35:08 +00:00
Will Constable
2dca418112 Reland basic dynamo support for traceable collectives (#100476)
Relative to the original land, this also contains:
- A fix for the torchdeploy import of functional collectives
- A fix for torchdynamo utils not being importable due to torch._refs being missing

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100476
Approved by: https://github.com/kumpera
2023-05-04 04:25:35 +00:00
Richard Zou
3d10e748e7 [Reland] Initial version of Dynamo capture for HigherOrderOperator (#100544)
Original PR #99988

The problem was that we added `wrap` to torch._ops, which actually puts it on
`torch.ops.wrap`, a namespace that can be open-registered to. The fix is that
we now shove `wrap` into a new file.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100544
Approved by: https://github.com/voznesenskym
2023-05-03 20:49:05 +00:00
Shabab Ayub
287f74c4fc Revert D45387167: Multisect successfully blamed D45387167 for test or build failures (#100424)
Summary:
This diff is reverting D45387167
D45387167: Basic dynamo support for traceable collectives (#94440) by wconstab has been identified to be causing the following test or build failures (internal)

If you believe this diff has been generated in error you may Commandeer and Abandon it.

Test Plan: NA

Reviewed By: s4ayub

Differential Revision: D45448312

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100424
Approved by: https://github.com/rohan-varma, https://github.com/kumpera
2023-05-03 16:10:54 +00:00
PyTorch MergeBot
58f796ff5d Revert "Initial version of Dynamo capture for HigherOrderOperator (#99988)"
This reverts commit 4c99f9cdf2.

Reverted https://github.com/pytorch/pytorch/pull/99988 on behalf of https://github.com/atalman due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/99988#issuecomment-1533081452))
2023-05-03 14:02:40 +00:00
Animesh Jain
5fbb40669f [dynamo][moco] Disallow_in_graph distributed APIs (#100071)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100071
Approved by: https://github.com/jansel, https://github.com/H-Huang
2023-05-02 20:09:25 +00:00
Richard Zou
4c99f9cdf2 Initial version of Dynamo capture for HigherOrderOperator (#99988)
This PR introduces a `wrap(body_fn, *args)` higher order operator.
The semantics of `wrap(body_fn, *args)` are to just run `body_fn(*args)`.

Underneath Dynamo, this PR makes it so that we rewrite calls to
`wrap(body_fn, *args)` with `wrap(new_fn, *new_args)` where `new_fn` has
no free variables. This PR does not update cond/map to use the new
mechanism yet (we do not support nn.Modules yet; that will come in the future).
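
For illustration, a hedged sketch of the operator's semantics (the import path below is where `wrap` lives in recent releases and is an assumption; this PR originally placed it elsewhere):

```python
import torch
from torch._higher_order_ops.wrap import wrap  # location is an assumption

def body(x):
    return torch.sin(x) + 1

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    # semantically just body(x); under Dynamo the body is captured into a
    # nested subgraph by a fresh SubgraphTracer
    return wrap(body, x)

print(f(torch.randn(3)))
```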

The design we take is:
- OutputGraph represents the graph being built by Dynamo that may be
compiled and executed.
- OutputGraph owns a root SubgraphTracer, where it builds the FX graph.
- OutputGraph may own multiple nested SubgraphTracers.
- When we need to trace the body function of a HigherOrderOperator, we
construct a new SubgraphTracer to build the graph of the body function.

Mechanically, when Dynamo sees a new `wrap` HigherOrderOperator with a
body function, it:
- Creates a new SubgraphTracer via OutputGraph.new_subtracer
- Executes the body function
This captures the body function into the graph on the new
SubgraphTracer while modifying the state of the OutputGraph. For
example, the OutputGraph may receive new GraphArgs, new guards, and new
side effects.

If capture of the body function fails, then Dynamo graph breaks on the
HigherOrderOperator.

Test Plan:
- added test/dynamo/test_higher_order_ops.py

Future:
- We're not actually able to tell Dynamo to completely graph break on the
HigherOrderOperator. Instead, when we do graph break, Dynamo begins
introspecting `HigherOrderOperator.__call__`. It should probably not do
this.
- Ideally we would error out on new SideEffects. I don't know how to do
this yet.
- We don't support dealing with nn.Modules yet (e.g. calling nn.Modules
or accessing attributes of tracked nn.Modules from a body_fn). There's
an open question on what should actually happen here
- Ideally we would rewrite map/cond to use the new mechanism but we need
to fix the previous bullet point before we can get there.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99988
Approved by: https://github.com/voznesenskym, https://github.com/anijain2305
2023-05-02 17:11:02 +00:00
Will Constable
100a25d021 Basic dynamo support for traceable collectives (#94440)
Make traceable collectives work with torchdynamo,
bypassing problems with tracing the AsyncTensor subclass.

Accept a suboptimal solution for now, and optimize it later.
For now, wait happens immediately, which generally forces an early sync.

Later, find a way either in dynamo or AOT stack to handle
AsyncCollectiveTensor to get the wait in the optimal place.

Note on implementation:
- Dynamo traces 'user-level' functional collective (fc) APIs that are designed
  to behave differently in eager vs. compiled mode.  In eager, there will be
  work-obj registration and a wrapper subclass will insert a 'wait' call at
  the appropriate time. In compile/trace mode, wait will be immediately called,
  and work-obj registration is required to be handled by the compile backend at runtime.
- Dynamo needs to trace into some of the helper functions in the 'user-level'
  api, such as '_expand_group' which is essentially a constant transformation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94440
Approved by: https://github.com/kumpera
2023-04-27 05:38:36 +00:00
Animesh Jain
31eb9949e4 [dynamo] disallow_in_graph bugfix (#99600)
Testing if the minor change breaks other test cases.

For the added test case, TorchDynamo graph-breaks on `torch.ops.foo.custom` but then starts running again on the recursively invoked frame - `foo_cpu` on L48 in the test file. This raises an assertion like this:

~~~
Traceback (most recent call last):
  File "/scratch/anijain/work/pytorch/test/dynamo/test_decorators.py", line 65, in test_disallow_in_graph_for_custom_op
    res = opt_fn(x)
  File "/scratch/anijain/work/pytorch/torch/_dynamo/eval_frame.py", line 252, in _fn
    return fn(*args, **kwargs)
  File "/scratch/anijain/work/pytorch/test/dynamo/test_decorators.py", line 56, in fn
    b = torch.ops.foo.custom(a)
  File "/scratch/anijain/work/pytorch/torch/_ops.py", line 646, in __call__
    return self._op(*args, **kwargs or {})
  File "/scratch/anijain/work/pytorch/torch/_dynamo/eval_frame.py", line 401, in catch_errors
    return callback(frame, cache_size, hooks, frame_state)
  File "/scratch/anijain/work/pytorch/torch/_dynamo/convert_frame.py", line 495, in _convert_frame
    result = inner_convert(frame, cache_size, hooks, frame_state)
  File "/scratch/anijain/work/pytorch/torch/_dynamo/convert_frame.py", line 122, in _fn
    return fn(*args, **kwargs)
  File "/scratch/anijain/work/pytorch/torch/_dynamo/convert_frame.py", line 331, in _convert_frame_assert
    return _compile(
  File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 169, in time_wrapper
    r = func(*args, **kwargs)
  File "/scratch/anijain/work/pytorch/torch/_dynamo/convert_frame.py", line 401, in _compile
    out_code = transform_code_object(code, transform)
  File "/scratch/anijain/work/pytorch/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
    transformations(instructions, code_options)
  File "/scratch/anijain/work/pytorch/torch/_dynamo/convert_frame.py", line 371, in transform
    tracer = InstructionTranslator(
  File "/scratch/anijain/work/pytorch/torch/_dynamo/symbolic_convert.py", line 1890, in __init__
    self.symbolic_locals = collections.OrderedDict(
  File "/scratch/anijain/work/pytorch/torch/_dynamo/symbolic_convert.py", line 1893, in <genexpr>
    VariableBuilder(
  File "/scratch/anijain/work/pytorch/torch/_dynamo/variables/builder.py", line 165, in __call__
    return self._wrap(value).clone(**self.options())
  File "/scratch/anijain/work/pytorch/torch/_dynamo/variables/builder.py", line 290, in _wrap
    return type_dispatch(self, value)
  File "/scratch/anijain/work/pytorch/torch/_dynamo/variables/builder.py", line 776, in wrap_tensor
    tensor_variable = wrap_fx_proxy(
  File "/scratch/anijain/work/pytorch/torch/_dynamo/variables/builder.py", line 923, in wrap_fx_proxy
    return wrap_fx_proxy_cls(
  File "/scratch/anijain/work/pytorch/torch/_dynamo/variables/builder.py", line 983, in wrap_fx_proxy_cls
    example_value = wrap_to_fake_tensor_and_record(
  File "/scratch/anijain/work/pytorch/torch/_dynamo/variables/builder.py", line 1213, in wrap_to_fake_tensor_and_record
    fake_e = wrap_fake_exception(
  File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 835, in wrap_fake_exception
    return fn()
  File "/scratch/anijain/work/pytorch/torch/_dynamo/variables/builder.py", line 1214, in <lambda>
    lambda: tx.fake_mode.from_tensor(
  File "/scratch/anijain/work/pytorch/torch/_subclasses/fake_tensor.py", line 1434, in from_tensor
    return self.fake_tensor_converter(
  File "/scratch/anijain/work/pytorch/torch/_subclasses/fake_tensor.py", line 329, in __call__
    return self.from_real_tensor(
  File "/scratch/anijain/work/pytorch/torch/_subclasses/fake_tensor.py", line 283, in from_real_tensor
    out = self.meta_converter(
  File "/scratch/anijain/work/pytorch/torch/_subclasses/meta_utils.py", line 531, in __call__
    r = self.meta_tensor(
  File "/scratch/anijain/work/pytorch/torch/_subclasses/meta_utils.py", line 184, in meta_tensor
    assert not torch._C._dispatch_tls_local_exclude_set().has(
AssertionError:

~~~

It seems `_dynamo.disable` is the right option for custom ops added by `torch.library`.
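
A hedged sketch of that workaround (the `foo::custom` op mirrors the test case and is registered here only for illustration):

```python
import torch
import torch._dynamo
from torch.library import Library

lib = Library("foo", "DEF")  # illustrative namespace, mirroring the test
lib.define("custom(Tensor x) -> Tensor")
lib.impl("custom", lambda x: x.clone(), "CPU")

@torch._dynamo.disable
def call_custom_op(x):
    # Dynamo skips this frame entirely instead of re-entering on it
    return torch.ops.foo.custom(x)

@torch.compile(backend="eager")
def fn(x):
    return torch.sin(call_custom_op(torch.cos(x)))

print(fn(torch.randn(3)))
```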

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99600
Approved by: https://github.com/jansel
2023-04-22 12:40:33 +00:00
PyTorch MergeBot
629377ea8b Revert "Replace _dynamo.config with an object instead of module (#96455)"
This reverts commit 420104a886.

Reverted https://github.com/pytorch/pytorch/pull/96455 on behalf of https://github.com/jansel due to BC breaking, was landed prematurely
2023-04-12 15:06:14 +00:00
Han Qi
420104a886 Replace _dynamo.config with an object instead of module (#96455)
Summary:
    Replace _dynamo.config with an object instead of module

    Current usage patterns of setting and reading fields on config will work
    unchanged.

    Only changes needed going forward:
    1. import torch._dynamo.config will not work. However, just doing
       import torch._dynamo is sufficient to access dynamo config
       as torch._dynamo.config.

    2. Files inside of the _dynamo folder need to access config via
       from torch._dynamo.config_util import config instead of
       from torch._dynamo import config, because _dynamo/__init__.py
       imports some of those files, which would create a circular import.
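
A hedged sketch of the usage patterns described above (note this change was later reverted; cache_size_limit is just an example field):

```python
import torch._dynamo

# field reads and writes keep working unchanged
torch._dynamo.config.cache_size_limit = 16
print(torch._dynamo.config.cache_size_limit)

# per point 1 above, `import torch._dynamo.config` would no longer work once
# config is a plain object rather than a module
```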

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/williamwen42
2023-04-11 21:23:32 +00:00
Yanbo Liang
7fcf8b1829 [Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)
For Meta internal use cases.
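
A minimal hedged sketch of the supported pattern:

```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def matmul_autocast(x, w):
    # entering cpu/cuda amp autocast inside the compiled region no longer
    # breaks the graph
    with torch.cpu.amp.autocast(dtype=torch.bfloat16):
        return torch.mm(x, w)

print(matmul_autocast(torch.randn(4, 8), torch.randn(8, 4)))
```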

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95416
Approved by: https://github.com/jansel
2023-03-10 21:48:08 +00:00
PyTorch MergeBot
3ce1e15cf7 Revert "[Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)"
This reverts commit c88aa336aa.

Reverted https://github.com/pytorch/pytorch/pull/95416 on behalf of https://github.com/huydhn due to Sorry for reverting your PR. But it seems that the smoke test issue is related as it starts to fail consistently in trunk https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=inductor_torchbench_smoketest_perf
2023-03-08 06:51:57 +00:00
Yanbo Liang
c88aa336aa [Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)
For Meta internal use cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95416
Approved by: https://github.com/jansel
2023-03-08 01:40:27 +00:00
Kazuaki Ishizaki
46385b3e48 Fix typos under torch/_dynamo directory (#95599)
This PR fixes typos in comments and messages of `.py` files under the `torch/_dynamo` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95599
Approved by: https://github.com/ezyang
2023-02-28 03:44:24 +00:00
Yanbo Liang
057bc7191d [Dynamo] Remove torch.autograd.profiler.profile workaround in UserDefined (#95504)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95504
Approved by: https://github.com/williamwen42
2023-02-25 05:15:01 +00:00
min-jean-cho
900e09c872 [Dynamo] Support torch.Tensor.fn as TorchVariable, not UserDefinedObjectVariable, preventing graph break (#93243)
As found in #92709 (thanks to @ngimel and @jansel), `torch.Tensor.fn` currently points to `UserDefinedObjectVariable` rather than `TorchVariable`. The root cause is https://github.com/pytorch/pytorch/pull/92709#pullrequestreview-1273357406. To prevent this, build a `TorchVariable` for `torch.Tensor.fn` pointing to `torch.ops.aten.fn`.

This issue propagates to `torch.Tensor.fn`, causing a graph break with `nopython=True`.
```python
import torch
import torch._dynamo as dynamo

#op = torch.ops.aten.abs_ # no graph break
op = torch.Tensor.abs_ # graph break
args = torch.empty(10)

def foo(args):
    return op(args)

opt_foo = dynamo.optimize("inductor", nopython=True)(foo)
y_ = opt_foo(args)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93243
Approved by: https://github.com/jansel
2023-02-07 09:26:50 +00:00
blzheng
0c1777acec Dynamo benchmark: add CPU specific changes (#88477)
This PR adds some CPU-specific changes:

- Add support for IPEX backend
- https://github.com/pytorch/torchdynamo/issues/1618
- https://github.com/pytorch/torchdynamo/issues/1534
- Enable CPU launcher in runner.py.
- Fix the issue that some environment variables are not supported on CPU

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88477
Approved by: https://github.com/jgong5, https://github.com/jansel
2023-01-07 09:26:06 +00:00
Andrew M. James
7cd951c21e Properly guard all numpy usage within dynamo and remove UnspecializedNumpyVariable (#90795)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90795
Approved by: https://github.com/ngimel, https://github.com/cpuhrsch
2023-01-06 22:36:38 +00:00