Commit Graph

796 Commits

Author SHA1 Message Date
PyTorch MergeBot
60fe2f4420 Revert "Torch package support in dynamo (#91821)"
This reverts commit 3726d23219.

Reverted https://github.com/pytorch/pytorch/pull/91821 on behalf of https://github.com/huydhn due to The change causes flakiness on trunk. See https://github.com/pytorch/pytorch/issues/92196#issuecomment-1386368909 for more details
2023-01-18 02:17:25 +00:00
Michael Voznesensky
3726d23219 Torch package support in dynamo (#91821)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91821
Approved by: https://github.com/suo, https://github.com/malfet
2023-01-10 06:53:15 +00:00
PyTorch MergeBot
f6c7cf1bf5 Revert "Torch package support in dynamo (#91821)"
This reverts commit eeb3e49ed4.

Reverted https://github.com/pytorch/pytorch/pull/91821 on behalf of https://github.com/malfet due to breaking misc tests according to minihud; see eeb3e49ed4
2023-01-09 14:39:14 +00:00
Michael Voznesensky
eeb3e49ed4 Torch package support in dynamo (#91821)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91821
Approved by: https://github.com/suo
2023-01-08 01:46:24 +00:00
PyTorch MergeBot
6a3ddd0171 Revert "Don't graph break on patched module methods or aliased methods (#91018)"
This reverts commit d6fc2d82ca.

Reverted https://github.com/pytorch/pytorch/pull/91018 on behalf of https://github.com/kit1980 due to After this PR, inductor / cuda11.6-py3.10-gcc7-sm86 / test fails every time with CUDA out of memory during OPTForCausalLM
2022-12-21 19:54:15 +00:00
William Wen
d6fc2d82ca Don't graph break on patched module methods or aliased methods (#91018)
See added tests for the cases that were fixed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91018
Approved by: https://github.com/Morgan77523, https://github.com/anijain2305
2022-12-21 16:29:15 +00:00
Yanbo Liang
511fbad830 [Dynamo] Fix builder for class with metaclass (#90807)
Fixes a Meta-internal use case: a class with a metaclass couldn't be identified as ```UserDefinedClassVariable```.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90807
Approved by: https://github.com/jansel
2022-12-20 05:02:28 +00:00
Edward Z. Yang
dfe916ca88 Dynamo comptime, with public ComptimeContext API (#90983)
This PR adds `@comptime`, a decorator that causes a given function to be executed at compile time, while Dynamo is symbolically evaluating the user's program. To query Dynamo's state, we offer a public ComptimeContext API, which provides a limited set of methods for inspecting Dynamo's internal state. We intend for users to use this API and plan to keep it stable. Here are some things you can do with it:

* You want to breakpoint Dynamo compilation when it starts processing a particular line of user code; give comptime a function that calls breakpoint.
* You want to manually induce a graph break for testing purposes; give comptime a function that calls unimplemented.
* You want to perform a debug print without inducing a graph break; give comptime a function that prints.
* You can print what the symbolic locals at a given point in time are.
* You can print out the partial graph the Dynamo had traced at this point.
* (My original motivating use case.) You want to add some facts to the shape env, so that a guard evaluation on an unbacked SymInt doesn't error with data-dependent. Even if you don't know what the final user API for this should be, with comptime you can hack out something quick and dirty. (This is not in this PR, as it depends on some other in flight PRs.)

Check out the tests to see examples of comptime in action.

In short, comptime is a very powerful debugging tool that lets you drop into Dynamo from user code, without having to manually jerry-rig pdb inside Dynamo to trigger after N calls.
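
For illustration, a minimal sketch of the intended usage (the `ComptimeContext` method names here are assumptions based on the description above, not verified API):

```python
import torch
import torch._dynamo
from torch._dynamo.comptime import comptime  # assumed import path

def f(x):
    y = x * 2
    # Both lambdas run at compile time, while Dynamo symbolically evaluates f.
    comptime(lambda ctx: ctx.print_graph())   # partial FX graph traced so far
    comptime(lambda ctx: ctx.print_locals())  # current symbolic locals
    return y + 1

opt_f = torch._dynamo.optimize("eager")(f)
opt_f(torch.randn(3))
```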

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90983
Approved by: https://github.com/jansel
2022-12-19 11:06:01 +00:00
David Berard
5d70d12812 [dynamo] turn torch.backends.cudnn.is_acceptable into a constant (#90323)
Tracing `torch.backends.cudnn.is_acceptable(Tensor) -> bool:` fails with:

```
...
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/variables/functions.py", line 196, in call_function
    return super(UserFunctionVariable, self).call_function(tx, args, kwargs)
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/variables/functions.py", line 67, in call_function
    return tx.inline_user_function_return(
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/symbolic_convert.py", line 426, in inline_user_function_return
    result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/symbolic_convert.py", line 1698, in inline_call
    return cls.inline_call_(parent, func, args, kwargs)
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/symbolic_convert.py", line 1752, in inline_call_
    tracer.run()
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/symbolic_convert.py", line 485, in run
    and self.step()
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/symbolic_convert.py", line 455, in step
    getattr(self, inst.opname)(inst)
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/symbolic_convert.py", line 281, in wrapper
    return inner_fn(self, inst)
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/symbolic_convert.py", line 912, in CALL_FUNCTION
    self.call_function(fn, args, {})
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/symbolic_convert.py", line 389, in call_function
    self.push(fn.call_function(self, args, kwargs))
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/variables/torch.py", line 431, in call_function
    tensor_variable = wrap_fx_proxy(
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/variables/builder.py", line 662, in wrap_fx_proxy
    return wrap_fx_proxy_cls(
  File "/scratch/dberard/dynamo38/pytorch/torch/_dynamo/variables/builder.py", line 820, in wrap_fx_proxy_cls
    raise AssertionError(
AssertionError: torch.* op returned non-Tensor bool call_function <function is_acceptable at 0x7f00deefb790>
```

So instead, evaluate `is_acceptable()` and convert the result to a constant. The result of `is_acceptable(tensor) -> bool` depends on:
* dtype/device of the input tensor (this should already be guarded)
* properties of the build & whether cudnn is available
* some global state that gets initialized during the first call to `torch.backends.cudnn._init()` (this is NOT guarded in this PR)

Note: this fixes tts_angular with FSDP. This was an issue with FSDP because FSDP modules are interpreted as UnspecializedNNModules, and UnspecializedNNModules try to inline calls. In comparison, NNModules (e.g. when the tts_angular model is not wrapped in FSDP) do not inline calls and instead evaluate subsequent calls. In subsequent calls, cudnn.is_acceptable would be skipped by eval_frame.py:catch_errors because it is not in an allowlist.
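
For reference, a minimal sketch of the call being constant-folded (illustrative only):

```python
import torch

x = torch.randn(8, device="cuda" if torch.cuda.is_available() else "cpu")
# Under dynamo, this bool is now evaluated at trace time and baked in as a
# constant, instead of becoming a call_function node that returns a non-Tensor.
ok = torch.backends.cudnn.is_acceptable(x)
```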

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90323
Approved by: https://github.com/jansel
2022-12-16 23:26:54 +00:00
Bin Bao
93ac8c4aeb [dynamo] Refactor how autocast parameters are bound (#90953)
Summary: Use `inspect.signature` for unified args handling
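
A minimal sketch of the `inspect.signature` approach (illustrative, not the actual diff):

```python
import inspect
import torch

# Bind positional and keyword autocast arguments against a single signature
# so both call styles normalize to the same parameter mapping.
sig = inspect.signature(torch.autocast)
bound = sig.bind("cuda", dtype=torch.float16)
bound.apply_defaults()
print(dict(bound.arguments))
# e.g. {'device_type': 'cuda', 'dtype': torch.float16, 'enabled': True, ...}
```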

Test Plan: `test_dynamo`

Differential Revision: D42078621

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90953
Approved by: https://github.com/brad-mengchi
2022-12-16 23:12:49 +00:00
Michael Voznesensky
6c8ef6a4c2 Add tracing context, Integrate dynamo guards into torch._guards (#90647)
As defined here: https://docs.google.com/document/d/1oniZEgAaHE1IMByPRWRKbUHeaW06E2HMfCTCQyMRLek/edit#

This PR creates a new structure, a TracingContext, whose lifecycle matches that of the traced frame. It carries a GuardsContext and, eventually, a FakeTensorMode, and is the source of truth for all accumulated guards.

In this PR, we create the structure and integrate it into dynamo by mapping OutputGraph's guards structure onto the TracingContext's.
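
A conceptual sketch of the relationship described above (hypothetical names, for illustration only):

```python
class GuardsContext:
    """Source of truth for all guards accumulated while tracing."""
    def __init__(self):
        self.dynamo_guards = set()

class TracingContext:
    """Lifecycle matches that of the traced frame."""
    def __init__(self):
        self.guards_context = GuardsContext()  # carried for the whole frame
        self.fake_mode = None  # a FakeTensorMode will eventually live here too
```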

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90647
Approved by: https://github.com/ezyang
2022-12-14 07:35:32 +00:00
Yanbo Liang
e2674aafed [Dynamo] Supports calling parent class's non-classmethod from child class (#90682)
Fixes https://github.com/pytorch/pytorch/issues/90558

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90682
Approved by: https://github.com/jansel
2022-12-12 22:33:46 +00:00
Michael Lazos
9c4189f82d [dynamo] Add is_compiling for dynamo (#90329)
`is_compiling` returns True during dynamo tracing and False when run in eager mode.
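
A small usage sketch (assuming the function is exposed as `torch._dynamo.is_compiling`):

```python
import torch
import torch._dynamo

def f(x):
    # True only while dynamo is tracing this frame; False in eager.
    if torch._dynamo.is_compiling():
        return x + 1
    return x - 1

f(torch.ones(2))                                   # eager path: zeros
torch._dynamo.optimize("eager")(f)(torch.ones(2))  # traced path: twos
```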

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90329
Approved by: https://github.com/jansel
2022-12-09 20:19:41 +00:00
Michael Voznesensky
4cdc96fb4f Add hooks structure for passing around user provided hooks, add a new guard_failure_fn (#90371)
This PR introduces a new function we can pass to torch._dynamo.optimize: guard_failure_fn. Usage is shown in this PR and the one stacked on top of it, but the gist is that it emits failed-guard reason strings alongside the code. This is useful for tests and debugging, as it gives far finer-grained assertions and control than the compile counter alone.
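
A sketch of the intended usage (the keyword name follows the PR description; the released API may differ):

```python
import torch
import torch._dynamo

failures = []  # failed-guard reason strings land here

def f(x):
    return x * 2

opt_f = torch._dynamo.optimize("eager", guard_failure_fn=failures.append)(f)
opt_f(torch.randn(4))
opt_f(torch.randn(8))  # shape change -> guard failure recorded, then recompile
```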

This is a resubmit of https://github.com/pytorch/pytorch/pull/90129

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90371
Approved by: https://github.com/ezyang
2022-12-07 17:51:53 +00:00
Edward Z. Yang
962ebe88a2 Assert there are no outstanding side effects before calling cond (#90208)
The current cond implementation is silently incorrect when
there are outstanding side effects, since the locally tracked
side effects are lost when the recursive export call is made.
At least we raise an assert now.

I'm working on a refactor of cond which should be able to sidestep
this problem. Maybe.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D41746973](https://our.internmc.facebook.com/intern/diff/D41746973)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90208
Approved by: https://github.com/voznesenskym
2022-12-06 03:53:48 +00:00
Michael Lazos
342d78d1a2 Cache guards once per variable tracker, rather than re-propagating them repeatedly (#89827)
This improves optimizer tracing performance significantly (2x). In essence, this just removes the recursion from propagate, because it is not necessary: ListVariables and ConstDictVariables already contain the guards from the items contained in them.

Adds two other optimizations for special cases of `recursively_contains`

helps with https://github.com/pytorch/torchdynamo/issues/1803

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89827
Approved by: https://github.com/anijain2305, https://github.com/jansel
2022-12-02 01:45:05 +00:00
Michael Lazos
2d32e5dd09 add env/config flag to disable dynamo (#89828)
as title
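
Presumably usable along these lines (the exact flag and env-var names are assumptions, since the commit doesn't spell them out):

```python
# In Python, before running the model:
import torch._dynamo.config as config
config.disable = True  # assumed config name

# Or from the shell (assumed env var):
#   TORCHDYNAMO_DISABLE=1 python train.py
```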

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89828
Approved by: https://github.com/anijain2305
2022-11-30 01:59:44 +00:00
zhxchen17
a70082a863 [functorch] Move cond.py to _cond.py and expose cond() under functorch.experimental.control_flow. (#89819)
Summary:
Similar to https://github.com/pytorch/pytorch/pull/88767 we want to reduce the chance that users
accidentally import private functions from `functorch.experimental.cond` as if they were public
interfaces. We also move `cond()` under `control_flow.py` to stay consistent with the `map()` op.
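
A usage sketch against the new public path:

```python
import torch
from functorch.experimental.control_flow import cond

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

x = torch.randn(4)
out = cond(x.sum() > 0, true_fn, false_fn, [x])  # picks a branch by predicate
```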

Test Plan:
CI


Pull Request resolved: https://github.com/pytorch/pytorch/pull/89819
Approved by: https://github.com/zou3519
2022-11-30 01:50:44 +00:00
Edward Z. Yang
0c96841a20 Cond capture with fake tensors actually works; don't raise in this case (#89638)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89638
Approved by: https://github.com/anjali411
2022-11-24 22:46:40 +00:00
Yanbo Liang
e4ccec6eca [Dynamo] Fix bug of using customized torch.autograd.Function (#89397)
Fixes https://github.com/pytorch/torchdynamo/issues/1899

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89397
Approved by: https://github.com/jansel
2022-11-24 05:28:58 +00:00
Yanbo Liang
9eed6b7f9a [Dynamo] Several fixes on TensorVariable & TorchVariable (#89486)
This is a group of bug fixes for [7k github models](https://github.com/pytorch/torchdynamo/issues/1884), it would fix 30+ model tests.
* Support ```tensor.type()```.
* Support ```tensor.get_device()```.
* Support ```torch.nn.functional._Reduction.get_enum```.
* Support ```torch._utils._get_device_index()```.
* Fall back on ```tensor.data_ptr()```.
  * ```FakeTensor``` always returns 0.
  * Without fake tensor propagation, we ```clone``` the input tensor, so tracking the original ```data_ptr``` makes no sense. And I don't think this is a very popular API.
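
For reference, the calls covered above (illustrative; exact return values depend on the build):

```python
import torch

x = torch.randn(2, 2)
x.type()          # 'torch.FloatTensor'
x.get_device()    # -1 on CPU, device index on CUDA (recent PyTorch)
torch.nn.functional._Reduction.get_enum("mean")  # 1
torch._utils._get_device_index("cuda:0")         # 0
x.data_ptr()      # real pointer in eager; a FakeTensor reports 0
```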

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89486
Approved by: https://github.com/jansel
2022-11-23 19:39:45 +00:00
Yanbo Liang
186192bb26 [Dynamo] Fix bugs when calling tensor.data and tensor.layout (#89257)
Fix bugs in [7k github models](https://github.com/pytorch/torchdynamo/issues/1884).
* Legacy code still uses ```tensor.data```; I think we can rewrite it with ```tensor.detach```, though I'm not sure if there is anything I didn't anticipate.
* Support ```tensor.layout```.

The root cause of these issues is that dynamo wraps an unimplemented ```tensor.x``` call into ```GetAttrVariable(TensorVariable, x)```, but this op is not inserted into the FX graph. Hence, during fake tensor propagation, it throws ```KeyError: 'example_value'```.

Dynamo should support these two popular attributes anyway. However, whether dynamo should support ___all___ ```tensor.x``` calls rather than falling back to ```GetAttrVariable``` is debatable.
If I turn off fake tensor propagation, it works well even without this fix. So I'm curious whether we should improve fake propagation to cover similar cases. cc @mlazos @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @jansel @eellison

```
Traceback (most recent call last):
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 404, in _compile
    out_code = transform_code_object(code, transform)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
    transformations(instructions, code_options)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 392, in transform
    tracer.run()
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 1523, in run
    super().run()
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 389, in run
    and self.step()
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 359, in step
    getattr(self, inst.opname)(inst)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 193, in wrapper
    return inner_fn(self, inst)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 865, in CALL_FUNCTION_KW
    self.call_function(fn, args, kwargs)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 301, in call_function
    self.push(fn.call_function(self, args, kwargs))
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/torch.py", line 407, in call_function
    tensor_variable = wrap_fx_proxy(
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/builder.py", line 636, in wrap_fx_proxy
    return wrap_fx_proxy_cls(
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/builder.py", line 676, in wrap_fx_proxy_cls
    example_value = get_fake_value(proxy.node, tx)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1024, in get_fake_value
    args, kwargs = torch.fx.node.map_arg((node.args, node.kwargs), visit)
  File "/scratch/ybliang/work/repos/pytorch/torch/fx/node.py", line 613, in map_arg
    return map_aggregate(a, lambda x: fn(x) if isinstance(x, Node) else x)
  File "/scratch/ybliang/work/repos/pytorch/torch/fx/node.py", line 621, in map_aggregate
    t = tuple(map_aggregate(elem, fn) for elem in a)
  File "/scratch/ybliang/work/repos/pytorch/torch/fx/node.py", line 621, in <genexpr>
    t = tuple(map_aggregate(elem, fn) for elem in a)
  File "/scratch/ybliang/work/repos/pytorch/torch/fx/node.py", line 627, in map_aggregate
    return immutable_dict((k, map_aggregate(v, fn)) for k, v in a.items())
  File "/scratch/ybliang/work/repos/pytorch/torch/fx/node.py", line 627, in <genexpr>
    return immutable_dict((k, map_aggregate(v, fn)) for k, v in a.items())
  File "/scratch/ybliang/work/repos/pytorch/torch/fx/node.py", line 631, in map_aggregate
    return fn(a)
  File "/scratch/ybliang/work/repos/pytorch/torch/fx/node.py", line 613, in <lambda>
    return map_aggregate(a, lambda x: fn(x) if isinstance(x, Node) else x)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1022, in visit
    return n.meta["example_value"]
KeyError: 'example_value\n\nfrom user code:\n   File "./generated/test_BayesWatch_pytorch_prunes.py", line 108, in forward\n    return torch.zeros([x.size()[0], self.channels, x.size()[2] // self.spatial, x.size()[3] // self.spatial], dtype=x.dtype, layout=x.layout, device=x.device)\n\nSet torch._dynamo.config.verbose=True for more information\n\n\nYou can suppress this exception and fall back to eager by setting:\n    torch._dynamo.config.suppress_errors = True\n'

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89257
Approved by: https://github.com/jansel
2022-11-21 22:44:01 +00:00
Yanbo Liang
81a4aeabdf [Dynamo] Support Tensor.nelement & torch.cuda.is_available (#89164)
Fix several errors in [7k github models](https://github.com/pytorch/torchdynamo/issues/1198).
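
The two calls in question (illustrative):

```python
import torch

x = torch.ones(3, 4)
x.nelement()               # 12; alias of x.numel()
torch.cuda.is_available()  # plain bool dynamo can now handle while tracing
```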

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89164
Approved by: https://github.com/soumith
2022-11-18 18:43:15 +00:00
Yanbo Liang
b72f5b9ae3 [Dynamo] Support typing.Mapping & Support function as argument (#88963)
These missing features come from https://github.com/pytorch/benchmark/pull/1302, where we'd like to enable E2E hf_bert dynamo train/eval. The dependent [HuggingFace accelerate library](https://huggingface.co/docs/accelerate/index) requires these improvements.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88963
Approved by: https://github.com/jansel
2022-11-17 06:57:42 +00:00
Yanbo Liang
e70f446a16 [Dynamo] Fix bug in NamedTupleVariable (#89110)
Fixes https://github.com/pytorch/torchdynamo/issues/1866

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89110
Approved by: https://github.com/jansel
2022-11-16 21:59:31 +00:00
Yanbo Liang
848e7240a1 [Dynamo] Add a dummy profiler to avoid activating real profiler (#88930)
See context at https://github.com/pytorch/torchdynamo/issues/1721#issuecomment-1312396059

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88930
Approved by: https://github.com/jansel
2022-11-16 19:08:49 +00:00
Animesh Jain
9d2f5a2784 [dynamo] Support if cond on NNModuleVariable (#89095)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89095
Approved by: https://github.com/yanboliang, https://github.com/mlazos
2022-11-16 08:51:30 +00:00
Yanbo Liang
911a1349dd [Dynamo] Fix torch.is_tensor and torch.overrides.is_tensor_like (#88704)
Fixes error from 7k github models: https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_arashwan_matrixnet.py

Error:
```
AssertionError: torch.* op returned non-Tensor bool call_function <function is_tensor at 0x7fca94d0faf0>

from user code:
   File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_arashwan_matrixnet.py", line 749, in scatter
      return scatter_map(inputs)
   File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_arashwan_matrixnet.py", line 741, in scatter_map
      assert not torch.is_tensor(obj), 'Tensors not supported in scatter.'
```
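
Both predicates now resolve to plain Python bools during tracing rather than becoming graph nodes (illustrative):

```python
import torch

x = torch.randn(2)
torch.is_tensor(x)                 # True
torch.overrides.is_tensor_like(x)  # True; also True for __torch_function__ types
```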

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88704
Approved by: https://github.com/jansel
2022-11-14 22:45:50 +00:00
Michael Voznesensky
06ce1338bc [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88768
Approved by: https://github.com/jansel, https://github.com/ezyang
2022-11-13 04:50:21 +00:00
Yanbo Liang
6fe47b682f [Dynamo] Fix str(Guard.obj_weakref) bug to re-enable support for overriding __getattr__ (#88564)
See my inline comments!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88564
Approved by: https://github.com/ezyang, https://github.com/anijain2305
2022-11-11 22:31:32 +00:00
Yanbo Liang
b30222e0c4 [Dynamo] Add complete support for Tensor.is_contiguous (#88407)
Fixes https://github.com/pytorch/torchdynamo/issues/1783
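
"Complete support" presumably includes the `memory_format` keyword (an assumption; the PR doesn't enumerate the cases):

```python
import torch

x = torch.randn(4, 4).t()  # non-contiguous view
x.is_contiguous()          # False

y = torch.randn(1, 3, 8, 8).to(memory_format=torch.channels_last)
y.is_contiguous(memory_format=torch.channels_last)  # True
```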

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88407
Approved by: https://github.com/jansel
2022-11-10 23:47:21 +00:00
Michael Suo
c0e6b4329f [dynamo] only error out on nested fx trace if dynamo is optimizing (#88640)
I think this is the final resolution to the issue caused by
https://github.com/pytorch/pytorch/pull/87797. The nvfuser issue that PR
tripped over arose because, even though we're correctly disabling
torchdynamo via a `DisableContext`, the nested fx trace check was still
firing. This PR properly narrows the check to fire only if we're not disabled.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88640
Approved by: https://github.com/yf225
2022-11-08 23:52:21 +00:00
Yu Guo
a37524085d [torchdynamo] support torch.autograd._profiler_enabled (#88378)
fix https://github.com/pytorch/torchdynamo/issues/1826
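
The helper in question (illustrative):

```python
import torch

assert not torch.autograd._profiler_enabled()
with torch.autograd.profiler.profile():
    # True while an autograd profiler is active; dynamo can now trace this
    # instead of graph-breaking on it.
    assert torch.autograd._profiler_enabled()
```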

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88378
Approved by: https://github.com/voznesenskym
2022-11-07 20:36:26 +00:00
Animesh Jain
36582574f3 [dynamo] Skip mutation detection for inference mode (#88406)
Skip the mutation detection for inference_mode and raise a warning. This helps one internal model.

Related to https://github.com/pytorch/torchdynamo/issues/1768

@ezyang What do you think about this? The issue is that the Dynamo mutation detector uses the version counter to detect mutation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88406
Approved by: https://github.com/ezyang
2022-11-03 22:56:05 +00:00
Michael Suo
923a5e9685 [dynamo] Error when user nests FX with dynamo (#87797)
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a).

Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-11-02 17:38:56 +00:00
Yanbo Liang
ccf6b558a4 [Dynamo] UserFunctionVariable supports type & ABCMeta as arguments (#88257)
Fixes https://github.com/pytorch/torchdynamo/issues/1785

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88257
Approved by: https://github.com/ezyang
2022-11-02 06:58:04 +00:00
PyTorch MergeBot
c0761a835b Revert "[dynamo] Error when user nests FX with dynamo (#87797)"
This reverts commit 1da5aeb97b.

Reverted https://github.com/pytorch/pytorch/pull/87797 on behalf of https://github.com/ezyang due to breaks nvfuser stack, needs more investigation
2022-10-31 23:49:37 +00:00
Michael Suo
1da5aeb97b [dynamo] Error when user nests FX with dynamo (#87797)
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a).

Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-10-28 04:59:08 +00:00
PyTorch MergeBot
cda0d5a57b Revert "[dynamo] Error when user nests FX with dynamo (#87797)"
This reverts commit a485528a7e.

Reverted https://github.com/pytorch/pytorch/pull/87797 on behalf of https://github.com/kit1980 due to Broke linux-bionic-py3.7-clang9 / test (dynamo, 2, 2, linux.2xlarge), same error on pull
2022-10-27 21:16:58 +00:00
Michael Suo
a485528a7e [dynamo] Error when user nests FX with dynamo (#87797)
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a).

Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-10-27 17:17:59 +00:00
Edward Z. Yang
96691865b9 [dynamo] Unify raise_on_* config to suppress_errors and raise by default (#87440)
I noticed that a lot of bugs are being suppressed by torchdynamo's default
error suppression, and worse yet, there's no way to unsuppress them.  After
discussion with voz and soumith, we decided that we will unify error suppression
into a single option (suppress_errors) and default suppression to False.

If your model used to work and no longer works, try TORCHDYNAMO_SUPPRESS_ERRORS=1
to bring back the old suppression behavior.
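
Concretely:

```python
# Restore the old suppression behavior in-process:
import torch._dynamo.config as config
config.suppress_errors = True

# Or via the environment, as noted above:
#   TORCHDYNAMO_SUPPRESS_ERRORS=1 python train.py
```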

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

cc @jansel @lezcano @fdrocha @mlazos @soumith @voznesenskym @yanboliang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87440
Approved by: https://github.com/voznesenskym, https://github.com/albanD
2022-10-21 17:03:29 +00:00
Michael Voznesensky
2fd008ed43 [dynamo] Add support for invoking nn sequential (#87156)
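
Presumably this makes calls like the following traceable without a graph break (a sketch, not taken from the PR):

```python
import torch
import torch._dynamo

seq = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())

def f(x):
    return seq(x)  # invoking an nn.Sequential directly

opt_f = torch._dynamo.optimize("eager")(f)
opt_f(torch.randn(2, 4))
```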
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87156
Approved by: https://github.com/jansel
2022-10-20 18:14:40 +00:00
Jason Ansel
d45e99acf5 [dynamo] Put printing graph breaks behind a config option (#87026)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87026
Approved by: https://github.com/soumith, https://github.com/voznesenskym
2022-10-16 19:53:42 +00:00
Jason Ansel
054a2fd6c2 Sync changes from pytorch/torchdynamo (#87013)
This updates to:
6380959be2

Generated with:
https://github.com/pytorch/torchdynamo/blob/main/copy_to_core.sh
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87013
Approved by: https://github.com/voznesenskym
2022-10-15 21:00:57 +00:00
Jason Ansel
8f71e8de7e Sync changes from pytorch/torchdynamo, enable tests (#86950)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86950
Approved by: https://github.com/Chillee
2022-10-14 23:08:58 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00