Commit Graph

701 Commits

Author SHA1 Message Date
Oguz Ulgen
dbb31a2984 [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112228
Approved by: https://github.com/jansel
2023-10-27 21:40:22 +00:00
PyTorch MergeBot
c67236a05d Revert "[dynamo] Be stricter about HigherOrderOperator kwargs (#111938)"
This reverts commit edafe2ddb9.

Reverted https://github.com/pytorch/pytorch/pull/111938 on behalf of https://github.com/izaitsevfb due to Fails meta internal executorch tests with `torch._dynamo.exc.InternalTorchDynamoError: name 'p_kwargs' is not defined` ([comment](https://github.com/pytorch/pytorch/pull/111938#issuecomment-1783538268))
2023-10-27 21:37:48 +00:00
PyTorch MergeBot
089e7aa4ac Revert "[dynamo] ExecutorchCallDelegateHigherOrderVariable - add sanity check that input and output tensors are disjoint (#111960)"
This reverts commit 27cf49549a.

Reverted https://github.com/pytorch/pytorch/pull/111960 on behalf of https://github.com/izaitsevfb due to Fails internal executorch tests with module 'torch.utils._pytree' has no attribute 'tree_flatten_only' ([comment](https://github.com/pytorch/pytorch/pull/111960#issuecomment-1783532843))
2023-10-27 21:32:30 +00:00
Yanbo Liang
061bf1a153 [5/N] Make torch context manager a TorchCtxManagerClassVariable (#111622)
The major change in this PR is to make the torch context manager class a separate ```TorchCtxManagerClassVariable```, since we have Dynamo implementations for these ctx managers.

I considered wrapping them as ```UserDefinedClassVariable``` and dispatching at ```USCVariable.call_function```, but it's almost the same amount of work and this way is clearer.

This is a step toward moving ```TorchVariable``` to ```TorchFunctionVariable```, which will only handle functions that are allowed in the graph (e.g., ```torch.sin```) or constant folded (e.g., ```torch.is_floating_point```). All other torch functions would go through skip/inline rules and be wrapped as ```UserFunctionVariable``` (if inlined) or ```SkipFilesVariable``` (if skipped).
The next steps:
* Wrap torch modules, classes, and objects as regular ```PythonModuleVariable```, ```UserDefinedClassVariable``` and ```UserDefinedObjectVariable```.
* Generate the allow-in-graph torch functions list and wrap them as ```TorchFunctionVariable```.
* Finally, merge ```skipfiles.check``` and ```is_allowed``` into one function ```allow_skip.check(fn)``` which would return an Enum of allow, skip, and inline.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111622
Approved by: https://github.com/jansel
2023-10-27 21:26:54 +00:00
lezcano
1774704fc1 [dynamo] Simplify add_dict in preparation to refactor it with call_set (#110523)
The previous implementation had a fair amount of repeated code, and did
things like calling `add_options` where options was always empty (which
is fine, as the guards are already set within ConstDictVariable).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110523
Approved by: https://github.com/yanboliang, https://github.com/jansel
ghstack dependencies: #110522
2023-10-27 20:17:10 +00:00
lezcano
1dcbd1c088 [dynamo] [easy] Move Set to dicts.py (#110522)
A set is more of a dict than a list if you ask me.
This comes before the refactor where we implement sets and dicts via the
same logic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110522
Approved by: https://github.com/jansel
2023-10-27 20:17:10 +00:00
Jon Chuang
d3bf6803b6 [dynamo] add sanity check that we do not wrap tracked tensors (#112025)
Identified as a result of https://github.com/pytorch/pytorch/pull/111911

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112025
Approved by: https://github.com/ezyang
2023-10-27 17:15:03 +00:00
Michael Lazos
a6e556f8b0 Support calling __torch_function__ attribute access (#111737)
Triggers `__torch_function__` tracing on attribute/method/property access, matching the eager behavior for non-overridden attributes/methods/properties that are present on `torch.Tensor`.

Some caveats:
1. For methods, there doesn't seem to be a way to check whether the original implementation has been overridden via monkey patching. For example:
```
class LocalSubclass(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        return super().__torch_function__(func, types, args, kwargs)

x = torch.ones(2, 2).as_subclass(LocalSubclass)

> x.sigmoid
<built-in method sigmoid of LocalSubclass object at 0x7f8d305bb5e0>
```
There isn't a way to verify that this built-in method is equivalent to the base `torch.Tensor` implementation as each instance will have a different built-in method object that can't be traced back to the original `torch.Tensor` impl. You can check that the class itself has the original implementation via
```
> inspect.getattr_static(LocalSubclass, "sigmoid")
<method 'sigmoid' of 'torch._C.TensorBase' objects>
```
But we can't detect if the user dynamically patches an object with a built-in method called sigmoid which does something completely different.

2. If a user overrides a method but calls the original implementation, we will still graph break. This will require modifying `SuperVariable` (and any other way to get the original impl) to handle tensor subclasses.
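
A minimal sketch of this caveat (the subclass and method names are illustrative, not from the PR):
```
import torch

class LoggingTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        return super().__torch_function__(func, types, args, kwargs)

    def sigmoid(self):
        # overridden, but defers to the original impl -> still graph breaks for now
        return super().sigmoid()
```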

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111737
Approved by: https://github.com/jansel, https://github.com/ezyang
2023-10-27 04:57:19 +00:00
Nikita Shulga
ac4cc5dbea [Dynamo] Do not crash if numpy is not installed (#112175)
`s/isinstance(value, np.generic)/np is not None and isinstance(value, np.generic)/`

Found while looking at https://github.com/pytorch/pytorch/pull/110512
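
A minimal sketch of the guarded pattern (the helper name is illustrative):
```
try:
    import numpy as np
except ModuleNotFoundError:
    np = None  # torch should keep working without numpy installed

def _is_numpy_scalar(value):
    # guard the isinstance check so it never touches np when numpy is absent
    return np is not None and isinstance(value, np.generic)
```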

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112175
Approved by: https://github.com/ev-br, https://github.com/kit1980
2023-10-27 00:39:28 +00:00
lezcano
47ccf04885 Split SymNode into its own file (#112037)
This PR:

- Moves TrueDiv, LShift, RShift, IsNonOverlappingAndDenseIndicator to `_sympy.functions.py`
- Moves SymNode to `fx.experimental.sym_node`.
  - This file does not have any SymPy dependencies at import time
  - It installs the magic methods in Sym{Bool,Int,Float}.
  - N.b. With this split, we may be able to move Sym{Bool,Int,Float} to this file, and remove quite a few of the hacks around these classes
- Imports `sym_node` in `torch/__init__.py` rather than the whole `symbolic_shapes.py`.
  This breaks the import-time dependency between torch and SymPy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112037
Approved by: https://github.com/peterbell10
ghstack dependencies: #112035, #112036
2023-10-26 23:32:27 +00:00
Jon Chuang
27cf49549a [dynamo] ExecutorchCallDelegateHigherOrderVariable - add sanity check that input and output tensors are disjoint (#111960)
Fixes https://github.com/pytorch/pytorch/issues/111917

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111960
Approved by: https://github.com/zou3519
2023-10-26 21:13:05 +00:00
Edward Z. Yang
7da713bbaf Convert evaluate_expr GuardOnDataDependentSymNode into graph break (#111919)
Extracted this failure from
https://github.com/pytorch/pytorch/pull/110155

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111919
Approved by: https://github.com/lezcano
2023-10-26 16:28:00 +00:00
Jon Chuang
edafe2ddb9 [dynamo] Be stricter about HigherOrderOperator kwargs (#111938)
kwargs need to be handled carefully in speculate subgraph. We should be clearer about the contract of what the inputs are.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111938
Approved by: https://github.com/zou3519
2023-10-26 03:51:30 +00:00
Evgeni Burovski
a4e4f41cce MAINT: graph break on numpy.__version__ (#112083)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112083
Approved by: https://github.com/lezcano
ghstack dependencies: #112081, #112082
2023-10-26 01:03:45 +00:00
Jon Chuang
0ed461ae4c [dynamo] Ensure Dynamo uses this graph's fakes for Tensor example_values (#111954)
Fixes https://github.com/pytorch/pytorch/issues/111869; fixes (detailed list of cases handled) https://github.com/pytorch/pytorch/pull/111913#discussion_r1370267313; fully fixes https://github.com/pytorch/pytorch/issues/111873

Adds sanity checks ensuring that Dynamo uses this graph's fakes for Tensor `example_values`

Handles the main (and only?) entrypoints for new `FakeTensor`s in a Dynamo graph:
- `wrap_fx_proxy_cls`
- `VariableTracker.wrap_tensor`

Ensures that `get_fake_value` returns a fake except when we know we are going to properly wrap non-fakes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111954
Approved by: https://github.com/ezyang
2023-10-25 23:54:18 +00:00
rzou
e5049648be Add a "pt2 compliant" tag; add config to graph break on non-pt2_compliant ops (#111933)
This PR:
- adds the pt2 compliant tag. This tag specifies that the operator works
  with the PT2 compilation APIs. A custom op author should test their
  ops with opcheck if they choose to add this tag.
- adds a config for Dynamo to allow only pt2 compliant ops into the
  graph and graph break on all other OpOverload/OpOverloadPacket.

Bikeshedding help wanted on the name of the tag. It should be easily
grep-able so we can set up rules for it.
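
A rough sketch of how the Dynamo side might be used; the flag name `only_allow_pt2_compliant_ops` is an assumption (the exact names were still being bikeshedded in this PR):
```
import torch
import torch._dynamo

# assumption: the config knob lands under this name
torch._dynamo.config.only_allow_pt2_compliant_ops = True

@torch.compile(backend="eager")
def f(x):
    # any op without the pt2-compliant tag now causes a graph break
    # instead of being captured into the FX graph
    return torch.sin(x)

f(torch.randn(4))
```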

Test Plan:
- new tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111933
Approved by: https://github.com/ezyang
ghstack dependencies: #111912, #111915, #111948
2023-10-25 21:20:59 +00:00
Jon Chuang
f3b42ab5b9 feat(dynamo): remove inconsistent tracing histories by acknowledging possibility of inconsistent side-effects (#110804)
Fixes https://github.com/pytorch/pytorch/issues/110765

CC @voznesenskym  @yanboliang @Fidget-Spinner @anijain2305 @soulitzer @ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110804
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-10-25 19:27:11 +00:00
PyTorch MergeBot
7e654c8f88 Revert "WIP / TST: allow testing torch._numpy under Dynamo (#110401)"
This reverts commit 5ed4a423de.

Reverted https://github.com/pytorch/pytorch/pull/110401 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing dynamo job in trunk 5ed4a423de ([comment](https://github.com/pytorch/pytorch/pull/110401#issuecomment-1779811943))
2023-10-25 18:21:16 +00:00
Edward Z. Yang
07ccaabee7 Make profiler function will be ignored warn only once (#111921)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111921
Approved by: https://github.com/mlazos, https://github.com/oulgen
2023-10-25 17:45:31 +00:00
Evgeni Burovski
5ed4a423de WIP / TST: allow testing torch._numpy under Dynamo (#110401)
Use conditional imports: when running under Dynamo, import the original NumPy, not torch._numpy. That is what we want to trace, not our implementation.

With this, the test suite passes with and without `PYTORCH_TEST_WITH_DYNAMO=1` (modulo a couple of test modules which are not meant to be compiled, e.g. `test_nep50_examples`). There are two new decorators, `x{fail,pass}ifTorchDynamo`, the `xpass` in most cases indicates a graph break and a fallback to eager for things we do not implement.
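
A minimal sketch of the conditional-import pattern (the flag name is illustrative):
```
import os

TEST_WITH_TORCHDYNAMO = os.environ.get("PYTORCH_TEST_WITH_DYNAMO") == "1"

if TEST_WITH_TORCHDYNAMO:
    import numpy as np          # trace the real NumPy under dynamo
else:
    import torch._numpy as np  # exercise our implementation directly
```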

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110401
Approved by: https://github.com/lezcano
2023-10-25 16:02:16 +00:00
Jon Chuang
e574a8ab55 [dynamo] Add sanity checks to ensure no double-wrapping of FakeTensors produced by the current graph (#111913)
Partially fixes: https://github.com/pytorch/pytorch/issues/111873

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111913
Approved by: https://github.com/ezyang
2023-10-25 01:18:32 +00:00
PyTorch MergeBot
5344468712 Revert "[dynamo] Properly track user-defined types for type() (#110794)"
This reverts commit ad4ccf9689.

Reverted https://github.com/pytorch/pytorch/pull/110794 on behalf of https://github.com/ezyang due to looks like this actually fails internal tests ([comment](https://github.com/pytorch/pytorch/pull/110794#issuecomment-1778002262))
2023-10-24 20:42:26 +00:00
Jon Chuang
4ac848cf77 [dynamo] Perf (MapHigherOrderVariable): do not unnecessarily get_real_value (#111920)
`get_real_value` will run the real tensor computation via the fx graph, which could be really expensive.

Let's just do the sensible thing and run the fx graph on the fake value instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111920
Approved by: https://github.com/ezyang, https://github.com/zou3519
2023-10-24 19:44:25 +00:00
ydwu4
cd034e1793 [HigherOrderOp] don't mannually set input for cond (#111611)
We set `manually_set_graph_inputs` to False for CondHigherOrder. After that, it became necessary to deduplicate the inputs. We'll add pytree tests in a follow-up PR.

Test Plan:
existing tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111611
Approved by: https://github.com/zou3519
ghstack dependencies: #111610
2023-10-24 18:56:23 +00:00
Jon Chuang
6d78f34a06 fix regression which creates a new fake tensor (#111864)
Fixes regression identified here: ccd6b373b5 (r1369334484)

Now that `get_fake_value` will identify aliases, we should not try to wrap the fake value again.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111864
Approved by: https://github.com/eellison
2023-10-24 05:11:48 +00:00
Jon Chuang
36d34ce951 [dynamo] support comparing LHS constant with tensor (#111492)
Fixes https://github.com/pytorch/pytorch/issues/108582

Depends on https://github.com/pytorch/pytorch/pull/111557 for fixing broken integration tests (due to this PR unblocking an in-graph set membership).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111492
Approved by: https://github.com/Skylion007
2023-10-23 19:05:14 +00:00
Ken Jin
ad4ccf9689 [dynamo] Properly track user-defined types for type() (#110794)
Closes https://github.com/pytorch/pytorch/issues/110315.

Thanks to @ezyang for the easy repro!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110794
Approved by: https://github.com/ezyang
2023-10-23 17:34:23 +00:00
Michael Lazos
fb8876069d Support tracing base torch_function impl (#111731)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111731
Approved by: https://github.com/jansel
ghstack dependencies: #111730
2023-10-23 07:11:32 +00:00
Michael Lazos
1d9a7f9e43 [Reland] TensorWithTFOverride inheritance from TensorVariable (#111766)
Accidentally merged https://github.com/pytorch/pytorch/pull/111730 with ghstack, so relanding

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111766
Approved by: https://github.com/jansel
2023-10-23 04:33:16 +00:00
Jason Ansel
c65c0682b1 [dynamo] Expand _nonvar_fields names (#111749)
This should be a small compile time optimization, since we won't need to
walk these fields in apply().

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111749
Approved by: https://github.com/yanboliang
2023-10-23 02:58:16 +00:00
Jon Chuang
5af97fedd2 [dynamo] Fix context wrapping grad mode variable (#111534)
Fixes https://github.com/pytorch/pytorch/issues/111528

Makes use of `ContextWrappingVariable` so that the function enters the grad mode whenever it is called and exits once the call completes.
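
A minimal sketch of the kind of pattern this fixes (function names and backend are illustrative):
```
import torch

@torch.no_grad()
def helper(x):
    return x * 2            # should run with grad disabled on every call

@torch.compile(backend="eager")
def f(x):
    return helper(x) + x    # dynamo enters/exits no_grad around the wrapped call

f(torch.randn(3, requires_grad=True))
```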

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111534
Approved by: https://github.com/jansel
2023-10-22 20:55:48 +00:00
Jon Chuang
c4ab229a82 [dynamo] Implement set.__contains__ for Tensor as object match of FakeTensor (#111738)
Fixes https://github.com/pytorch/pytorch/issues/111556

Dynamo implementation of `set.__contains__` previously used `__eq__` match.

But this is wrong when `__eq__` match does not imply `__hash__` match, as is the case for `torch.Tensor`, leading to inconsistent results. See: https://github.com/pytorch/pytorch/issues/111542

Hence, implement it as a Tensor object match, i.e. a match on the proxy node's `'example_value'` FakeTensor.
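
A small eager-mode illustration of why `__eq__` match is the wrong criterion for tensors:
```
import torch

a = torch.ones(2)
b = torch.ones(2)
s = {a}

print(a in s)   # True  - same object
print(b in s)   # False - equal values, but a different object (hash is identity-based)
print(a == b)   # tensor([True, True]) - elementwise, not a usable membership test
```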

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111738
Approved by: https://github.com/lezcano
2023-10-22 17:40:34 +00:00
Chen, Zejun
8e60d646b9 [dynamo][stream]support device-agnostic stream in dynamo and capture stream/event method in fx graph (#108312)
This PR implements two things:
1. Support for device-agnostic stream and runtime APIs captured by Dynamo.
2. Support for the stream methods (including events) captured by Dynamo.

Here are the details for the first item.
Previously, the stream captured in Dynamo was tightly bound to CUDA. Here we implement a global singleton container named `StreamMethodContainer` for different backends to register their associated stream methods with Dynamo. When the backend's package is imported, the stream operations can be registered directly by calling

```
device_stream_method = {'current_stream': method_1,
                        'create_stream_context': method_2,
                        'set_stream': method_3,
                        'set_stream_by_id': method_4}
torch._dynamo.stream.register_stream_method(device_name, device_stream_method)
```

Stream methods need to be passed to this API according to the precise semantics represented by the dict keys in `device_stream_method`. After registration, these methods can be used by Dynamo to capture the stream operations in users' scripts, for example getting the current stream or setting a specific stream. Additionally, the wrapped stream variable and the stream context variable are now device-agnostic; their proxy functions are assigned from the associated methods in the container. This is illustrated below.

![image](https://github.com/pytorch/pytorch/assets/74231238/37ac7350-c539-4167-9886-c3744ecab65d)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108312
Approved by: https://github.com/jansel, https://github.com/jgong5
2023-10-22 13:22:58 +00:00
Yanbo Liang
bf01a7b023 [3/N] Merge skipfiles.check rules (#111451)
The major change in this PR is to consolidate the skipfiles.check rules; the main work is merging the original ```FILE_INLINELIST``` and ```SUBMOD_INLINELIST``` into a new ```MOD_INLINELIST``` and a legacy ```LEGACY_MOD_INLINELIST```.
Let's use the following example to illustrate the expected behavior of this force-inline list:
fa995626a8/torch/_dynamo/skipfiles.py (L344-L369)

The handling logic is:
* If f2 is inlined, we will check both ```MOD_INLINELIST``` and ```LEGACY_MOD_INLINELIST``` to consult the force-inline rules for f3.
* If f2 is skipped, we will check only ```LEGACY_MOD_INLINELIST``` for the inline rules for f3.

The reason behind this design: if f2 is skipped but we always traced all recursively called functions, we would end up in very low-level functions (e.g., ```super().__init__```), which caused graph breaks. We treat this as a signal that all functions f2 recursively calls should be skipped as well when f2 is skipped. This is also a feature many PyTorch developers have requested: they just want all recursive calls skipped once they mark the upper-level function as skipped.

For PyTorch developers, only ```MOD_INLINELIST``` should be used going forward. I think most of the modules in ```LEGACY_MOD_INLINELIST``` are legacy workarounds from when we didn't have a good skip/inline API.
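
A hypothetical sketch of the consultation order described above (not the actual skipfiles code; the list entries are illustrative):
```
MOD_INLINELIST = {"torch._dynamo.comptime"}
LEGACY_MOD_INLINELIST = {"torch._dynamo.external_utils"}

def force_inline_f3(f3_module, f2_was_inlined):
    if f2_was_inlined:
        # inlined caller: both lists are consulted for the callee
        return f3_module in MOD_INLINELIST or f3_module in LEGACY_MOD_INLINELIST
    # skipped caller: only the legacy list can override "skip recursively"
    return f3_module in LEGACY_MOD_INLINELIST
```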

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111451
Approved by: https://github.com/ezyang
2023-10-22 04:35:15 +00:00
Brian Hirsh
62942b075c dynamo: graph break on resize_ (#111553)
AOTAutograd's handling for resize_() isn't fully robust (and on top of that, functionalization can potentially give up and raise an error if the tensor you're resizing has outstanding views).

So given that, and given that resize_() is rare, I updated dynamo to graph break on resize_() instead.
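
A minimal sketch of the resulting behavior (backend and shapes are illustrative):
```
import torch

@torch.compile(backend="eager")
def f(x):
    x.resize_(8)      # dynamo now graph-breaks here instead of tracing resize_()
    return x.sum()

f(torch.randn(4))
```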

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111553
Approved by: https://github.com/ezyang
2023-10-22 02:27:14 +00:00
Jon Chuang
47eed65481 [dynamo] Add is_ support for Tensors, force get_fake_value to reuse previously computed example_value if available (#111565)
Use FakeTensor id match as equivalent to object identity match

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111565
Approved by: https://github.com/ezyang
2023-10-21 13:56:30 +00:00
Jon Chuang
344fc98991 [dynamo] fix: SetVariable should test Tensor identity based example_value FakeTensor, not fx.Node (#111696)
FX Node changes after in-place op. FakeTensor remains the same.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111696
Approved by: https://github.com/ezyang
2023-10-21 08:49:21 +00:00
Michael Lazos
62df159c3f move tf override tensor to torch_function.py (#111714)
Moves TensorWithTFOverride to torch_function.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111714
Approved by: https://github.com/eellison, https://github.com/voznesenskym
2023-10-21 02:29:01 +00:00
Jon Chuang
101210e2ce [dynamo] cast single-elem tensors to float and int (#111518)
Fixes https://github.com/pytorch/pytorch/issues/109538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111518
Approved by: https://github.com/ezyang
2023-10-20 22:53:58 +00:00
voznesenskym
303c54dbd9 [dynamo] share a subgraph tracer across fwd and bwd in autograd.Function (#111588)
Fixes https://github.com/pytorch/pytorch/issues/111031

The current design of autograd.Function tracing in dynamo is that we:

1) speculate fwd, and if it's fine,
2) speculate bwd, and if it's fine,
3) install the .apply in the graph alongside fwd guards

The mechanism for doing so involves creating HOPs for fwd, bwd, and apply. The speculation for fwd and bwd each creates its own subtracer. This is fine, until a proxy created in fwd is used in bwd.

For a simple example, consider:

```
import torch
from torch.autograd import Function

class Foo(Function):
    @staticmethod
    def forward(ctx, x):
        ctx.x0 = x.size(0)
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * ctx.x0
```
The value stored in `ctx.x0` is a proxy, but it is a proxy belonging to the fwd speculation subtracer. Rather than teaching the bwd subtracer about it, we choose to create a single subtracer that covers both fwd and bwd speculation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111588
Approved by: https://github.com/zou3519
2023-10-20 21:32:02 +00:00
Michael Lazos
a55ecec195 [dynamo][__torch_function__ 2/n] Refactor TensorWithTFOverrideVariable (#109556)
This is purely a refactor that preserves the existing behavior and tests.

The main contribution of this PR is refactoring the dispatch of `__torch_function__` to enable calling it with TF override objects in any argument position, matching the eager dispatch behavior.

This will allow for the following in upcoming PRs:

1) have TensorWithTFOverrideVariable inherit from TensorVariable
2) enable tracing through the base `__torch_function__` implementation.

Note: this depends on https://github.com/pytorch/pytorch/pull/109542

towards tracing for https://github.com/pytorch/pytorch/issues/93723

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109556
Approved by: https://github.com/jansel, https://github.com/ezyang
2023-10-20 18:53:38 +00:00
Aaron Gokaslan
cb856b08b2 [BE]: Attach cause to some exceptions and enable RUFF TRY200 (#111496)
Did some easy fixes from enabling TRY200. Most of these look like oversights rather than intentional choices. The proper way to silence intentional cases is `from None`, which signals that you considered whether the exception should carry its cause and decided against it.
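
A small sketch of the pattern TRY200 flags and its two accepted resolutions (the exception class is illustrative):
```
class ConfigError(ValueError):
    pass

def parse_port(raw):
    try:
        return int(raw)
    except ValueError as err:
        # keep the original cause attached for debugging
        raise ConfigError(f"bad port: {raw!r}") from err
        # or, if hiding the cause is intentional:
        # raise ConfigError(f"bad port: {raw!r}") from None
```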

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111496
Approved by: https://github.com/malfet
2023-10-19 21:56:36 +00:00
Jon Chuang
79529ef657 [dynamo] fix graph break when listlike of tensor contains const (#111572)
Fixes https://github.com/pytorch/pytorch/pull/111557#discussion_r1365620968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111572
Approved by: https://github.com/voznesenskym, https://github.com/lezcano
2023-10-19 19:51:28 +00:00
Yanbo Liang
e708de83b9 [4/N] Reorder VariableBuilder._wrap (#111409)
Reorganize the priority order inside ```VariableBuilder._wrap```:
* is_allowed returning True -> TorchVariable
* skipfiles.check returning True -> SkipFilesVariable
* UserFunctionVariable/UserMethodVariable (this means both is_allowed and skipfiles.check return False, so we inline by default)
* UserDefinedClassVariable
* UserDefinedObjectVariable (the ultimate default value)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111409
Approved by: https://github.com/jansel
2023-10-17 21:12:34 +00:00
Evgeni Burovski
5a8a89360d Handle the .tolist method of np.arrays in dynamo (#111382)
Fixes part 1 of https://github.com/pytorch/pytorch/issues/111370#issuecomment-1764730773

While at it, add a test for numpy ndarray `.size` attribute. This started as an attempt to remove the delegation of what looks like a `.size()` method --- which does not exist in numpy --- on the same line this patch adds a `tolist` to.
But this is apparently needed for something else, and existing tests start failing without it. Thus, declare it as _ain't broke, don't fix_, and only keep the test. The test can be removed if desired.
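
A minimal sketch of the usage this targets (whether the call stays in the graph or falls back to eager is an implementation detail):
```
import numpy as np
import torch

@torch.compile(backend="eager")
def f(a):
    return a.tolist()   # ndarray.tolist is now handled by dynamo

f(np.arange(3))         # -> [0, 1, 2]
```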

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111382
Approved by: https://github.com/lezcano
2023-10-16 22:56:52 +00:00
Evgeni Burovski
48989bc820 trace frames with np.ndarray (#110512)
Fixes #109604

Resubmit gh-109715 + several skips and small fixes to make tests pass.

The main fix here is by @ysiraichi: previously, Dynamo did not resume tracing numpy ndarrays after a graph break.
While at it, fix several small issues Yukio's fix uncovers:

- Graph break gracefully on numpy dtypes which do not map to torch dtypes (uint16 etc.)
- Recognize array scalars in dynamo and treat them as 0-D ndarrays
- Make sure that iterating over a `torch._numpy` ndarray generates arrays, not bare tensors (see the sketch below)
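
A minimal sketch of the end-to-end behavior these fixes enable (backend choice is illustrative):
```
import numpy as np
import torch

@torch.compile(backend="eager")
def f(a):
    b = np.sin(a) * 2        # traced via torch._numpy
    print("graph break")     # forces a graph break mid-function
    return (b + a).sum()     # tracing of the ndarray resumes after the break

f(np.arange(4.0))
```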

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110512
Approved by: https://github.com/lezcano
2023-10-15 00:56:10 +00:00
Yanbo Liang
da662248fb [Dynamo] Fix autograd.Function tracing errors loudly involving saved tensors (#111277)
Fixes #104792

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111277
Approved by: https://github.com/jansel, https://github.com/zou3519
2023-10-15 00:47:59 +00:00
Peter Bell
8747e4c8c1 [dynamo] Add specialized variable tracker for sys.modules (#110990)
`sys.modules` is currently treated as a constant dictionary and any reference to
it will result in guards on the full contents of `sys.modules`. This instead
adds a specialized variable tracker which tries to guard only on the modules
referenced by the code, e.g.

```
sys.modules["operator"].add(x, x)
```

will generate the guard
```
___dict_contains('operator', G['sys'].modules)
```

It does this with special support for `__contains__`, `__getitem__`, and `.get`,
which are probably the most commonly used with `sys.modules`. For anything else
we just fall back to building the dict tracker as normal.

While accessing `sys.modules` may seem unusual, it actually comes up when
inlining the `warnings.catch_warnings` context manager which internally accesses
`sys.modules["warnings"]`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110990
Approved by: https://github.com/ezyang
2023-10-13 20:08:40 +00:00
PyTorch MergeBot
2b6f281e5c Revert "Remove dead code (#111207)"
This reverts commit c2ed714f54.

Reverted https://github.com/pytorch/pytorch/pull/111207 on behalf of https://github.com/huydhn due to Sorry for reverting this, but it breaks lint c2ed714f54 ([comment](https://github.com/pytorch/pytorch/pull/111207#issuecomment-1762126366))
2023-10-13 19:56:11 +00:00
lezcano
c2ed714f54 Remove dead code (#111207)
This dictionary is not used anywhere. The `_make_dupe_guard` function does not
exist anymore.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111207
Approved by: https://github.com/Skylion007, https://github.com/voznesenskym
2023-10-13 18:46:27 +00:00