Commit Graph

721 Commits

Author SHA1 Message Date
Steven Troxler
17fd4885aa [dynamo] Support custom dict constructor with kwargs (#112513)
Summary:

As of https://github.com/pytorch/pytorch/pull/103192, dynamo
supports code that creates OrderedDict instances using kwargs
for the key-value pairs rather than passing a dict literal.

But custom dicts (for example subclasses of OrderedDict) follow
a different codepath so that we can check for conditions such
as a custom `__init__` that need to force a graph break.

This commit allows kwargs for custom dict constructors: if the
args are empty and the class is not also a dataclass (the case
that, for example, a
`transformers.modeling_outputs.ModelOutput` instance will wind
up hitting), we treat the kwargs as the key-value pairs.

NOTE: For this to behave 100% correctly, we rely on the fact that
python dicts preserve insertion order (a language guarantee since
Python 3.7, with kwargs ordering guaranteed by PEP 468 since
Python 3.6), so the kwargs' ordering is preserved. If that behavior
ever changed, we would need to ensure that dynamo uses OrderedDict
for kwargs all the way down, in order to handle special cases like
OrderedDict where the kwargs' ordering does matter.
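
A minimal sketch of the pattern this enables under `torch.compile`
(the class, function, and backend here are illustrative, not from
the PR):

```
import collections

import torch

# Hypothetical subclass: any custom dict without its own __init__
# (and that is not also a dataclass) hits the new codepath.
class MyDict(collections.OrderedDict):
    pass

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    # kwargs-style construction no longer forces a graph break
    d = MyDict(a=x + 1, b=x * 2)
    return d["a"] + d["b"]

print(fn(torch.ones(3)))
```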

Test Plan:

```
pytest test/dynamo/test_functions.py
```

I also verified that the new test fails without the changes to
`dicts.py`.

Reviewers: yanboliang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112513
Approved by: https://github.com/yanboliang
2023-10-31 20:55:38 +00:00
PyTorch MergeBot
bc098c7fc2 Revert "[dynamo] ExecutorchCallDelegateHigherOrderVariable - add sanity check that input and output tensors are disjoint (#111960)"
This reverts commit 25f06ee51b.

Reverted https://github.com/pytorch/pytorch/pull/111960 on behalf of https://github.com/izaitsevfb due to Breaks internal tests, [T168506136](https://www.internalfb.com/intern/tasks/?t=168506136) ([comment](https://github.com/pytorch/pytorch/pull/111960#issuecomment-1787964742))
2023-10-31 20:14:20 +00:00
PyTorch MergeBot
b1b3d489f3 Revert "[dynamo] Be stricter about HigherOrderOperator kwargs (#111938)"
This reverts commit eb8af4dc67.

Reverted https://github.com/pytorch/pytorch/pull/111938 on behalf of https://github.com/izaitsevfb due to Reverting to unblock the revert of #111960 ([comment](https://github.com/pytorch/pytorch/pull/111938#issuecomment-1787960567))
2023-10-31 20:10:58 +00:00
rzou
1483097679 Update how Dynamo decides to graph break on an OpOverloadPacket (#112200)
Previously, under config.only_allow_pt2_compliant_ops, Dynamo graph
breaks when it sees an OpOverloadPacket where any overload is not
PT2 compliant. This is potentially brittle: if someone adds a new
overload for a custom operator (unlikely, but possible), a
previously non-graph-breaking call to the OpOverloadPacket would
start to graph break.

In this PR:
- When Dynamo is about to write a call to an operator to the FX graph,
we check if it is PT2 compliant.
- For OpOverload, we check to see if the tag is on it
- For OpOverloadPacket, we do overload resolution and check to see if
  the tag is on the OpOverload that it resolves to.
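
A rough sketch of the compliance check described above (this is not
Dynamo's actual code path, and overload resolution is elided):

```
import torch

def overload_is_compliant(overload: torch._ops.OpOverload) -> bool:
    # An OpOverload carries its tags directly.
    return torch.Tag.pt2_compliant_tag in overload.tags

# Most aten overloads carry the tag; exact coverage depends on version.
print(overload_is_compliant(torch.ops.aten.sin.default))
```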

Test Plan:
- new tests, existing tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112200
Approved by: https://github.com/bdhirsh
2023-10-31 19:10:37 +00:00
Jon Chuang
2e40e09d57 [dynamo] {*}Tensor.__init__ from list of Tensor/ndarray as torch.stack(List[FakeTensor]) (#111741)
Follow up to https://github.com/pytorch/pytorch/pull/111665

Fixes: https://github.com/pytorch/pytorch/issues/106207

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111741
Approved by: https://github.com/lezcano
2023-10-31 18:44:04 +00:00
voznesenskym
b91fcdf4aa [dynamo] Add support for register_post_accumulate_grad_hook (#112325)
lint

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112325
Approved by: https://github.com/jansel
2023-10-31 17:04:49 +00:00
Peter Bell
66c32d099a Use pytree.arg_tree_leaves everywhere (#112394)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112394
Approved by: https://github.com/lezcano
ghstack dependencies: #112391, #112392, #112393
2023-10-31 15:57:06 +00:00
Jon Chuang
479f5eb029 [dynamo] Remove dead code - real_value_tensor_positive_aliases (#111911)
(Legality) Because dedup guards ensure all static tensors are unique, it is currently impossible (and should remain impossible) to access the same **static** tensor value from a **different source**.

As for `getattr(nn.Module, tensor)` source collisions, we will never instantiate an `nn.Module getattr` source for a static tensor, due to:
- side-effect tracking (as long as we track all static tensors - see also https://github.com/pytorch/pytorch/pull/112025 for extra sanity check)
- See: c8a5bb451e/torch/_dynamo/variables/builder.py (L227)

(No worse) In any case, this field is currently unused.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111911
Approved by: https://github.com/voznesenskym
2023-10-30 23:10:52 +00:00
Jason Ansel
4b8a5e1854 [dynamo] Remove VariableTracker.as_specialized (#112363)
In my local testing, I couldn't find this function actually doing anything.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112363
Approved by: https://github.com/yanboliang
2023-10-30 20:07:55 +00:00
BowenBao
b97afc4018 Support 'BaseOutput' and subclasses from 'diffusers' in dynamo (#111978)
Extending the workarounds for `transformers` `ModelOutput` to cover `diffusers` `BaseOutput`. Together with https://github.com/huggingface/diffusers/pull/5459, it should unblock export for `diffusers` models.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111978
Approved by: https://github.com/jansel
2023-10-30 19:53:31 +00:00
Oguz Ulgen
219763c38d Support calling user defined triton kernels with kernel.run (#112292)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112292
Approved by: https://github.com/jansel
ghstack dependencies: #112290
2023-10-30 17:51:23 +00:00
Oguz Ulgen
1250032c2e [Inductor] Add triton.autotune support for user defined triton kernels with complex grids (#112290)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112290
Approved by: https://github.com/jansel
2023-10-30 17:48:27 +00:00
Peter Bell
bbd5b935e4 Use pytree.tree_leaves everywhere (#112324)
This changes all the instances I could find of `tree_flatten(...)[0]` or
`x, _ = tree_flatten` to use `tree_leaves`.
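
For reference, the before/after shape of the change on a toy tree
(not code from the PR):

```
import torch.utils._pytree as pytree

tree = {"a": [1, 2], "b": (3,)}

# Before: flatten and throw away the spec.
leaves, _ = pytree.tree_flatten(tree)

# After: ask for just the leaves.
leaves = pytree.tree_leaves(tree)  # [1, 2, 3]
```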

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112324
Approved by: https://github.com/lezcano
ghstack dependencies: #112327, #112323
2023-10-30 03:39:04 +00:00
Jason Ansel
f5088d2e45 [dynamo] fix None routing bug during var_getattr on UDO (#111614)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111614
Approved by: https://github.com/jansel
2023-10-29 01:57:43 +00:00
Jon Chuang
eb8af4dc67 [dynamo] Be stricter about HigherOrderOperator kwargs (#111938)
kwargs need to be handled carefully in speculate_subgraph. We should be clearer about the contract of what the inputs are.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111938
Approved by: https://github.com/zou3519
2023-10-28 18:54:33 +00:00
Oguz Ulgen
c14c4efc0e [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112228
Approved by: https://github.com/jansel
2023-10-28 17:30:35 +00:00
Jason Ansel
0948550c53 [dynamo] Remove mutation in AutogradFunctionContextVariable (#112216)
AutogradFunctionContextVariable was mutating self._saved_tensors, which is generally not allowed since VariableTracker objects should be read-only and are frequently copied via apply/clone.  This was causing some test failures up the PR stack.

This moves the mutation into a separate object that is not copied.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112216
Approved by: https://github.com/voznesenskym
ghstack dependencies: #112122
2023-10-28 06:46:48 +00:00
Jason Ansel
c7b78fb76c [dynamo] Replace recursively_contains with parents_tracker (#112122)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112122
Approved by: https://github.com/voznesenskym
2023-10-28 06:46:48 +00:00
Jon Chuang
25f06ee51b [dynamo] ExecutorchCallDelegateHigherOrderVariable - add sanity check that input and output tensors are disjoint (#111960)
Fixes https://github.com/pytorch/pytorch/issues/111917

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111960
Approved by: https://github.com/zou3519
2023-10-28 02:48:43 +00:00
PyTorch MergeBot
8d44999183 Revert "[Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)"
This reverts commit dbb31a2984.

Reverted https://github.com/pytorch/pytorch/pull/112228 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing ROCm test in trunk dbb31a2984 ([comment](https://github.com/pytorch/pytorch/pull/112228#issuecomment-1783660326))
2023-10-28 01:51:32 +00:00
Oguz Ulgen
dbb31a2984 [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112228
Approved by: https://github.com/jansel
2023-10-27 21:40:22 +00:00
PyTorch MergeBot
c67236a05d Revert "[dynamo] Be stricter about HigherOrderOperator kwargs (#111938)"
This reverts commit edafe2ddb9.

Reverted https://github.com/pytorch/pytorch/pull/111938 on behalf of https://github.com/izaitsevfb due to Fails meta internal executorch tests with `torch._dynamo.exc.InternalTorchDynamoError: name 'p_kwargs' is not defined` ([comment](https://github.com/pytorch/pytorch/pull/111938#issuecomment-1783538268))
2023-10-27 21:37:48 +00:00
PyTorch MergeBot
089e7aa4ac Revert "[dynamo] ExecutorchCallDelegateHigherOrderVariable - add sanity check that input and output tensors are disjoint (#111960)"
This reverts commit 27cf49549a.

Reverted https://github.com/pytorch/pytorch/pull/111960 on behalf of https://github.com/izaitsevfb due to Fails internal executorch tests with module 'torch.utils._pytree' has no attribute 'tree_flatten_only' ([comment](https://github.com/pytorch/pytorch/pull/111960#issuecomment-1783532843))
2023-10-27 21:32:30 +00:00
Yanbo Liang
061bf1a153 [5/N] Make torch context manager a TorchCtxManagerClassVariable (#111622)
The major change in this PR is to make the torch context manager classes a separate ```TorchCtxManagerClassVariable```, since we have a dynamo implementation for these ctx managers (a small illustration follows below).

I considered wrapping them as ```UserDefinedClassVariable``` and dispatching at ```USCVariable.call_function```, but it's almost the same amount of work and this way is clearer.

This is on the way to moving ```TorchVariable``` to ```TorchFunctionVariable```, which will only handle functions that are allowed in graph (e.g., ```torch.sin```) or constant folded (e.g., ```torch.is_floating_point```). All other torch functions will go through skip/inline rules and be wrapped as ```UserFunctionVariable``` (if inlined) or ```SkipFilesVariable``` (if skipped).
The next steps:
* Wrap torch modules, classes, objects as regular ```PythonModuleVariable```, ```UserDefinedClassVariable``` and ```UserDefinedObjectVariable```.
* Generate the allow in graph torch functions list and wrap them as ```TorchFunctionVariable```.
* Finally, merge ```skipfiles.check``` and ```is_allowed``` into one function ```allow_skip.check(fn)``` which would return an Enum of allow, skip, and inline.
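
For context, a minimal illustration of the kind of torch context
manager this variable covers (the backend choice is illustrative):

```
import torch

@torch.compile(backend="eager")
def fn(x):
    # Context managers like torch.no_grad have dedicated dynamo
    # implementations, which is what TorchCtxManagerClassVariable models.
    with torch.no_grad():
        return x + 1

fn(torch.ones(3))
```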

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111622
Approved by: https://github.com/jansel
2023-10-27 21:26:54 +00:00
lezcano
1774704fc1 [dynamo] Simplify add_dict in preparation to refactor it with call_set (#110523)
The previous implementation had a fair amount of repeated code, and did
things like calling `add_options` where options was always empty (which
is fine, as the guards are already set within ConstDictVariable).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110523
Approved by: https://github.com/yanboliang, https://github.com/jansel
ghstack dependencies: #110522
2023-10-27 20:17:10 +00:00
lezcano
1dcbd1c088 [dynamo] [easy] Move Set to dicts.py (#110522)
A set is more of a dict than a list if you ask me.
This comes before the refactor where we implement sets and dicts via the
same logic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110522
Approved by: https://github.com/jansel
2023-10-27 20:17:10 +00:00
Jon Chuang
d3bf6803b6 [dynamo] add sanity check that we do not wrap tracked tensors (#112025)
Identified as a result of https://github.com/pytorch/pytorch/pull/111911

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112025
Approved by: https://github.com/ezyang
2023-10-27 17:15:03 +00:00
Michael Lazos
a6e556f8b0 Support calling __torch_function__ attribute access (#111737)
Triggers `__torch_function__` tracing on attribute/method/property access, matching the eager behavior for non-overridden attributes/methods/properties that are present on `torch.Tensor`.

Some caveats:
1. For methods, there doesn't seem to be a way to check whether the original implementation of a method has been overridden via monkey patching. For example:
```
class LocalSubclass(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        return super().__torch_function__(func, types, args, kwargs)

x = torch.ones(2, 2).as_subclass(LocalSubclass)

> x.sigmoid
<built-in method sigmoid of LocalSubclass object at 0x7f8d305bb5e0>
```
There isn't a way to verify that this built-in method is equivalent to the base `torch.Tensor` implementation, as each instance will have a different built-in method object that can't be traced back to the original `torch.Tensor` impl. You can check that the class itself has the original implementation via
```
> inspect.getattr_static(LocalSubclass, "sigmoid")
<method 'sigmoid' of 'torch._C.TensorBase' objects>
```
But we can't detect if the user dynamically patches an object with a built-in method called sigmoid which does something completely different.

2. If a user overrides a method but calls the original implementation we will still graph break. This will require modifying `SuperVariable` (and any other way to get the original impl) to handle tensor subclasses.
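
A sketch of the newly traceable pattern, reusing the subclass from
the snippet above (the compiled function is illustrative):

```
import torch

class LocalSubclass(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        return super().__torch_function__(func, types, args, kwargs)

@torch.compile(backend="eager")
def fn(x):
    # Method and property access on the subclass is now traced through
    # __torch_function__ instead of graph breaking.
    return x.sigmoid() + x.T

fn(torch.ones(2, 2).as_subclass(LocalSubclass))
```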

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111737
Approved by: https://github.com/jansel, https://github.com/ezyang
2023-10-27 04:57:19 +00:00
Nikita Shulga
ac4cc5dbea [Dynamo] Do not crash if numpy is not installed (#112175)
`s/isinstance(value, np.generic)/np is not None and isinstance(value, np.generic)/`
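
Spelled out, the guarded pattern looks like this (the helper name is
hypothetical):

```
# numpy may be absent entirely, so bind np to None in that case.
try:
    import numpy as np
except ModuleNotFoundError:
    np = None

def is_numpy_scalar(value) -> bool:
    return np is not None and isinstance(value, np.generic)
```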

Found while looking at https://github.com/pytorch/pytorch/pull/110512

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112175
Approved by: https://github.com/ev-br, https://github.com/kit1980
2023-10-27 00:39:28 +00:00
lezcano
47ccf04885 Split SymNode into its own file (#112037)
This PR:

- Moves TrueDiv, LShift, RShift, IsNonOverlappingAndDenseIndicator to `_sympy.functions.py`
- Moves SymNode to `fx.experimental.sym_node`.
  - This file does not have any SymPy dependencies at import time
  - It installs the magic methods in Sym{Bool,Int,Float}.
  - N.b. With this split, we may be able to move Sym{Bool,Int,Float} to this file, and remove quite a few of the hacks around these classes
- Imports `sym_node` in `torch/__init__.py` rather than the whole `symbolic_shapes.py`.
  This breaks the import-time dependency between torch and SymPy
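
After the split, the new import path is (a minimal check, assuming
only the layout described above):

```
# Importing SymNode no longer drags in SymPy at import time.
from torch.fx.experimental.sym_node import SymNode
```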

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112037
Approved by: https://github.com/peterbell10
ghstack dependencies: #112035, #112036
2023-10-26 23:32:27 +00:00
Jon Chuang
27cf49549a [dynamo] ExecutorchCallDelegateHigherOrderVariable - add sanity check that input and output tensors are disjoint (#111960)
Fixes https://github.com/pytorch/pytorch/issues/111917

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111960
Approved by: https://github.com/zou3519
2023-10-26 21:13:05 +00:00
Edward Z. Yang
7da713bbaf Convert evaluate_expr GuardOnDataDependentSymNode into graph break (#111919)
Extracted this failure from
https://github.com/pytorch/pytorch/pull/110155

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111919
Approved by: https://github.com/lezcano
2023-10-26 16:28:00 +00:00
Jon Chuang
edafe2ddb9 [dynamo] Be stricter about HigherOrderOperator kwargs (#111938)
kwargs need to be handled carefully in speculate_subgraph. We should be clearer about the contract of what the inputs are.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111938
Approved by: https://github.com/zou3519
2023-10-26 03:51:30 +00:00
Evgeni Burovski
a4e4f41cce MAINT: graph break on numpy.__version__ (#112083)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112083
Approved by: https://github.com/lezcano
ghstack dependencies: #112081, #112082
2023-10-26 01:03:45 +00:00
Jon Chuang
0ed461ae4c [dynamo] Ensure Dynamo uses this graph's fakes for Tensor example_values (#111954)
Fixes https://github.com/pytorch/pytorch/issues/111869 and fully fixes https://github.com/pytorch/pytorch/issues/111873. A detailed list of the cases handled is at https://github.com/pytorch/pytorch/pull/111913#discussion_r1370267313.

Adds sanity checks ensuring that Dynamo uses this graph's fakes for Tensor `example_values`

Handles the main (and only?) entrypoints for new `FakeTensor`s in a Dynamo graph:
- `wrap_fx_proxy_cls`
- `VariableTracker.wrap_tensor`

Ensures that `get_fake_value` returns a fake except when we know we are going to properly wrap non-fakes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111954
Approved by: https://github.com/ezyang
2023-10-25 23:54:18 +00:00
rzou
e5049648be Add a "pt2 compliant" tag; add config to graph break on non-pt2_compliant ops (#111933)
This PR:
- adds the pt2 compliant tag. This tag specifies that the operator works
  with the PT2 compilation APIs. A custom op author should test their
  ops with opcheck if they choose to add this tag.
- adds a config for Dynamo to allow only pt2 compliant ops into the
  graph and graph break on all other OpOverload/OpOverloadPacket.

Bikeshedding help wanted on the name of the tag. It should be easily
grep-able so we can set up rules for it.

Test Plan:
- new tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111933
Approved by: https://github.com/ezyang
ghstack dependencies: #111912, #111915, #111948
2023-10-25 21:20:59 +00:00
Jon Chuang
f3b42ab5b9 feat(dynamo): remove inconsistent tracing histories by acknowledging possibility of inconsistent side-effects (#110804)
Fixes https://github.com/pytorch/pytorch/issues/110765

CC @voznesenskym  @yanboliang @Fidget-Spinner @anijain2305 @soulitzer @ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110804
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-10-25 19:27:11 +00:00
PyTorch MergeBot
7e654c8f88 Revert "WIP / TST: allow testing torch._numpy under Dynamo (#110401)"
This reverts commit 5ed4a423de.

Reverted https://github.com/pytorch/pytorch/pull/110401 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing dynamo job in trunk 5ed4a423de ([comment](https://github.com/pytorch/pytorch/pull/110401#issuecomment-1779811943))
2023-10-25 18:21:16 +00:00
Edward Z. Yang
07ccaabee7 Make profiler function will be ignored warn only once (#111921)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111921
Approved by: https://github.com/mlazos, https://github.com/oulgen
2023-10-25 17:45:31 +00:00
Evgeni Burovski
5ed4a423de WIP / TST: allow testing torch._numpy under Dynamo (#110401)
Use conditional imports: when running under dynamo, import the original NumPy, not torch._numpy; the original is what we want to trace, not our implementation.

With this, the test suite passes with and without `PYTORCH_TEST_WITH_DYNAMO=1` (modulo a couple of test modules which are not meant to be compiled, e.g. `test_nep50_examples`). There are two new decorators, `x{fail,pass}ifTorchDynamo`; the `xpass` in most cases indicates a graph break and a fallback to eager for things we do not implement.
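
A sketch of the conditional-import pattern described above (reading
the flag from the environment is an assumption about how the suite is
driven):

```
import os

if os.environ.get("PYTORCH_TEST_WITH_DYNAMO") == "1":
    import numpy as np          # trace real NumPy under Dynamo
else:
    import torch._numpy as np  # exercise the torch._numpy implementation
```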

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110401
Approved by: https://github.com/lezcano
2023-10-25 16:02:16 +00:00
Jon Chuang
e574a8ab55 [dynamo] Add sanity checks to ensure no double-wrapping of FakeTensors produced by the current graph (#111913)
Partially fixes: https://github.com/pytorch/pytorch/issues/111873

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111913
Approved by: https://github.com/ezyang
2023-10-25 01:18:32 +00:00
PyTorch MergeBot
5344468712 Revert "[dynamo] Properly track user-defined types for type() (#110794)"
This reverts commit ad4ccf9689.

Reverted https://github.com/pytorch/pytorch/pull/110794 on behalf of https://github.com/ezyang due to looks like this actually fails internal tests ([comment](https://github.com/pytorch/pytorch/pull/110794#issuecomment-1778002262))
2023-10-24 20:42:26 +00:00
Jon Chuang
4ac848cf77 [dynamo] Perf (MapHigherOrderVariable): do not unnecessarily get_real_value (#111920)
`get_real_value` will run the real tensor computation via the fx graph, which could be really expensive.

Let's just do the sensible thing and run the fx graph on the fake value instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111920
Approved by: https://github.com/ezyang, https://github.com/zou3519
2023-10-24 19:44:25 +00:00
ydwu4
cd034e1793 [HigherOrderOp] don't manually set input for cond (#111611)
We set manually_set_graph_inputs to False for CondHigherOrder. After that, it became necessary to deduplicate the inputs. We'll add pytree tests in a follow-up PR.

Test Plan:
existing tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111611
Approved by: https://github.com/zou3519
ghstack dependencies: #111610
2023-10-24 18:56:23 +00:00
Jon Chuang
6d78f34a06 fix regression which creates a new fake tensor (#111864)
Fixes regression identified here: ccd6b373b5 (r1369334484)

Now that `get_fake_value` will identify aliases, we should not try to wrap the fake value again.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111864
Approved by: https://github.com/eellison
2023-10-24 05:11:48 +00:00
Jon Chuang
36d34ce951 [dynamo] support comparing LHS constant with tensor (#111492)
Fixes https://github.com/pytorch/pytorch/issues/108582

Depends on https://github.com/pytorch/pytorch/pull/111557, which fixes integration tests broken by this PR unblocking an in-graph set membership.
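
A sketch of the kind of comparison this enables (the exact repro from
the issue is not reproduced here):

```
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    # A constant on the left-hand side of the comparison is now handled.
    return 2.0 < x

print(fn(torch.tensor([1.0, 3.0])))
```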

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111492
Approved by: https://github.com/Skylion007
2023-10-23 19:05:14 +00:00
Ken Jin
ad4ccf9689 [dynamo] Properly track user-defined types for type() (#110794)
Closes https://github.com/pytorch/pytorch/issues/110315.

Thanks to @ezyang for the easy repro!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110794
Approved by: https://github.com/ezyang
2023-10-23 17:34:23 +00:00
Michael Lazos
fb8876069d Support tracing base torch_function impl (#111731)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111731
Approved by: https://github.com/jansel
ghstack dependencies: #111730
2023-10-23 07:11:32 +00:00
Michael Lazos
1d9a7f9e43 [Reland] TensorWithTFOverride inheritance from TensorVariable (#111766)
Accidentally merged https://github.com/pytorch/pytorch/pull/111730 with ghstack, so relanding

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111766
Approved by: https://github.com/jansel
2023-10-23 04:33:16 +00:00
Jason Ansel
c65c0682b1 [dynamo] Expand _nonvar_fields names (#111749)
This should be a small compile-time optimization, since we won't need
to walk these fields in apply().

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111749
Approved by: https://github.com/yanboliang
2023-10-23 02:58:16 +00:00