Commit Graph

106 Commits

Author SHA1 Message Date
Michael Voznesensky
10fbdcf72c Re-PR of 90269 - Force all nn_module associated tensors to be static (#99108)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99108
Approved by: https://github.com/ezyang
2023-04-14 05:53:48 +00:00
Edward Z. Yang
cd078d376e GraphArg is always length one, adjust APIs accordingly (#99059)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99059
Approved by: https://github.com/voznesenskym
2023-04-14 03:11:25 +00:00
Edward Z. Yang
e613a419ed Remove dead wrap_sym (#99049)
I'm pretty sure this isn't used by anything

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99049
Approved by: https://github.com/Skylion007, https://github.com/voznesenskym
2023-04-14 03:11:25 +00:00
Edward Z. Yang
cc345d181a Change unspec ints to not be duck-sized (#99010)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99010
Approved by: https://github.com/janeyx99
2023-04-14 03:09:05 +00:00
Michael Voznesensky
ccc9a3d726 Automatic Dynamic Shapes (#98923)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98923
Approved by: https://github.com/ezyang
2023-04-13 02:39:23 +00:00
PyTorch MergeBot
629377ea8b Revert "Replace _dynamo.config with an object instead of module (#96455)"
This reverts commit 420104a886.

Reverted https://github.com/pytorch/pytorch/pull/96455 on behalf of https://github.com/jansel due to BC breaking, was landed prematurely
2023-04-12 15:06:14 +00:00
Andrew Gu
c93ff384c3 [Easy] Reuse source variable in wrap_tensor (#98845)
2fab2893aa/torch/_dynamo/variables/builder.py (L759-L760)
We already save `source = self.get_source()` at the beginning of `wrap_tensor()`. Since the source should be fixed at `VariableBuilder` construction time, we should be okay to reuse the `source` variable instead of calling `get_source()` every time.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98845
Approved by: https://github.com/ezyang
2023-04-11 22:23:59 +00:00
PaliC
6cbe5c5ef7 Fix Lint (#98873)
Fixes lint errors introduced by [#98779](https://github.com/pytorch/pytorch/pull/98779)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98873
Approved by: https://github.com/huydhn, https://github.com/malfet
2023-04-11 21:47:21 +00:00
Han Qi
420104a886 Replace _dynamo.config with an object instead of module (#96455)
Summary:
    Replace _dynamo.config with an object instead of a module.

    Current usage patterns of setting and reading fields on config continue to
    work unchanged.

    Only two changes are needed going forward:
    1. `import torch._dynamo.config` will no longer work. However, just doing
       `import torch._dynamo` is sufficient to access dynamo config
       as `torch._dynamo.config`.

    2. Files inside the _dynamo folder need to access config via
       `from torch._dynamo.config_util import config` instead of
       `from torch._dynamo import config`, because `_dynamo/__init__.py`
       imports some of those files, which would create a circular import.
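
A minimal sketch of the two access patterns the summary describes (note this change was later reverted, per the MergeBot entry above, so current trees may differ):

```python
# External code: access config through the torch._dynamo attribute.
import torch._dynamo
torch._dynamo.config.verbose = True  # set/read fields as before

# Inside torch/_dynamo itself (per the summary; path comes from this PR):
# from torch._dynamo.config_util import config
```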

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/williamwen42
2023-04-11 21:23:32 +00:00
Avik Chaudhuri
88dae230d0 dynamic range constraint API (#98779)
This diff adds the ability to specify range constraints on dynamic dimensions. (Previously we only supported declaring a dynamic dimension, which gets the default range `[2, sympy.oo]`.)

One point worth calling out: our initial design called for compound expressions like `lower <= dynamic_dim(x, d) <= upper`. However, this seems difficult to support because of a combination of desugaring and overloading semantics for such compound expressions in Python. Rather than silently doing the wrong thing, we explicitly error in this case and recommend that users specify multiple constraints instead, which is supported.
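
A hedged sketch of the resulting constraint style; the `dynamic_dim` helper is named in the PR text, but the import locations and the `constraints=` keyword below are assumptions and may differ in the tree of this era:

```python
import torch
# Assumed location of the export entry point and dynamic_dim helper.
from torch._dynamo import export, dynamic_dim

def f(x):
    return x + 1

x = torch.randn(4, 8)

# A compound expression like `2 <= dynamic_dim(x, 0) <= 16` is rejected;
# specify the lower and upper bounds as two separate constraints instead.
constraints = [
    dynamic_dim(x, 0) >= 2,   # lower bound on dim 0
    dynamic_dim(x, 0) <= 16,  # upper bound on dim 0
]
export(f, x, constraints=constraints)
```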

Differential Revision: [D44847318](https://our.internmc.facebook.com/intern/diff/D44847318/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98779
Approved by: https://github.com/ezyang
2023-04-11 20:11:46 +00:00
Andrew Gu
c9adc4c376 [Dynamo] De-dup graph inputs (#98775)
###  Overview
This PR de-duplicates graph inputs in TorchDynamo, using the `Source` as the unique identifier for each input. This closes https://github.com/pytorch/pytorch/issues/98743 and https://github.com/pytorch/pytorch/issues/98625.

### Details
`VariableBuilder.wrap_tensor()` should return a `VariableTracker` for the passed-in `value: Tensor`. If `value` is duplicated, we should avoid calling `OutputGraph.create_graph_input()` and `OutputGraph.add_grapharg()`.
- Note that `create_graph_input()` and `add_grapharg()` are not 1:1. For a constant source and either `wrap_sym()` or `wrap_unspecialized_primitive()`, TorchDynamo still calls `create_graph_input()` but not `add_grapharg()`.
- Note that `create_graph_input()` should be called before constructing the corresponding `VariableTracker`. TorchDynamo needs the `fx.Proxy` object to pass to `wrap_fx_proxy()`.

In this PR, the `OutputGraph` saves an additional mapping `input_source_to_var` from each graph input's `Source` to its `VariableTracker`, which works because `Source` is now hashable. This mapping should be updated each time `create_graph_input()` is called. However, since we must construct the `VariableTracker` after `create_graph_input()` returns, we must have a separate call to the `OutputGraph` to update the mapping.

If anyone has any suggestion on how to coalesce this logic and avoid having to remember to update `input_source_to_var` for each `create_graph_input()`, I would love to hear it.
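
A standalone illustration of the de-dup-by-`Source` pattern described above (not dynamo's actual classes or signatures; names are simplified for the sketch):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)          # frozen -> hashable, like Source in this PR
class Source:
    name: str

@dataclass
class OutputGraphSketch:
    input_source_to_var: dict = field(default_factory=dict)
    graph_inputs: list = field(default_factory=list)

    def wrap_input(self, source, make_tracker):
        # Reuse the existing tracker if this Source was already wrapped,
        # so we never call create_graph_input()/add_grapharg() twice.
        if source in self.input_source_to_var:
            return self.input_source_to_var[source]
        self.graph_inputs.append(source)       # stands in for create_graph_input()
        tracker = make_tracker(source)         # stands in for wrap_fx_proxy(...)
        self.input_source_to_var[source] = tracker
        return tracker

g = OutputGraphSketch()
s = Source("L['x']")
t1 = g.wrap_input(s, lambda src: object())
t2 = g.wrap_input(s, lambda src: object())
assert t1 is t2 and len(g.graph_inputs) == 1   # one graph input, one tracker
```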

<details>
<summary> Alternate Approach</summary>

Initially, I tried having TorchDynamo construct a new but equivalent `VariableTracker` for the duplicated tensor. However, I abandoned this approach after hitting an assertion in `wrap_fx_proxy_cls()` due to `"example_value"` already being in the proxy node's metadata, because we were reusing the primary tensor's `Proxy` object. Reusing the exact `VariableTracker` also seems less error-prone than constructing a new but identical `VariableTracker`.
</details>

### Testing
#### Global Variable Test
```
import torch
@torch.compile()
def f():
    return x + x
x = torch.randn(3)
f()
```

Before:
```
====== Forward graph 0 ======
 <eval_with_key>.6 class <lambda>(torch.nn.Module):
    def forward(self, arg0_1: f32[3], arg1_1: f32[3]):
        # File: /data/users/ezyang/b/pytorch/ff.py:5, code: return x + x
        add: f32[3] = torch.ops.aten.add.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
        return (add,)
```

After (only `arg0_1` and no more `arg1_1`):
```
 ====== Forward graph 0 ======
 <eval_with_key>.4 class <lambda>(torch.nn.Module):
    def forward(self, arg0_1: f32[3]):
        # File: dynamo/test_dup_global.py:8, code: return x + x
        add: f32[3] = torch.ops.aten.add.Tensor(arg0_1, arg0_1);  arg0_1 = None
        return (add,)
```

#### FSDP Test
Before, we errored on
```
File "/.../pytorch/torch/_guards.py", line 244, in __post_init__
    assert self.input_source_a != self.input_source_b
```
and now there is no error.

---
The rename from `name_to_input` to `input_name_to_proxy` is not part of the core logic change and is a remnant from initial attempts. I can undo it later if desired, but I also feel that the new name is more informative. It also fixes the type annotation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98775
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-04-11 18:07:20 +00:00
Jason Ansel
f4858fa8ef Improve dynamo support for autograd.Function (#98158)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98158
Approved by: https://github.com/yanboliang, https://github.com/anijain2305
2023-04-10 00:33:51 +00:00
PyTorch MergeBot
e394f6db5a Revert "Improve dynamo support for autograd.Function (#98158)"
This reverts commit 4716fa2411.

Reverted https://github.com/pytorch/pytorch/pull/98158 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it seems to break the MacOS trunk job 4716fa2411. The signal was missing from the PR because we disabled the MacOS job yesterday due to https://github.com/pytorch/pytorch/issues/98362
2023-04-06 18:15:02 +00:00
Jason Ansel
4716fa2411 Improve dynamo support for autograd.Function (#98158)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98158
Approved by: https://github.com/yanboliang, https://github.com/anijain2305
2023-04-06 16:44:37 +00:00
Edward Z. Yang
f98c1809a4 Add mark_static (#98427)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98427
Approved by: https://github.com/voznesenskym
2023-04-06 12:58:16 +00:00
Edward Z. Yang
30d47e4520 Do not track parameters, do not generate guards (#98350)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98350
Approved by: https://github.com/voznesenskym
2023-04-05 03:48:46 +00:00
Michael Voznesensky
b1e60bfb6a Pass f_locals as a dict rather than kwargs (#98107)
Fixes https://github.com/pytorch/pytorch/issues/97688

One big problem is that instead of printing `x < y` we now print
`E["x"] < E["y"]`, so all of the tests wobbled, and I'm mad.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98107
Approved by: https://github.com/ezyang
2023-04-04 00:30:08 +00:00
Jason Ansel
b9d3b3f595 Improve support for contextlib.nullcontext (#98111)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98111
Approved by: https://github.com/anijain2305
2023-04-02 02:33:14 +00:00
Yanbo Liang
9be9592f28 [Dynamo] Code refactor: move context managers out of misc.py (#97958)
misc.py and test_misc.py are too big, so this moves the context managers to context.py and test_context.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97958
Approved by: https://github.com/ezyang, https://github.com/anijain2305, https://github.com/mlazos, https://github.com/voznesenskym
2023-03-31 23:15:39 +00:00
Edward Z. Yang
97fc8ea5f4 Run the benchmark suite with dynamic batch only (#97912)
Symbolic shapes compile time on full CI with inductor is horribly long (even though our aot_eager local runs seemed to suggest that the added latency was only 10s per model). To patch over the problem for now, run the benchmark suite with dynamic batch only. This should absolve a lot of sins.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97912
Approved by: https://github.com/janeyx99, https://github.com/desertfire
2023-03-30 18:04:48 +00:00
Aaron Gokaslan
47dca20d80 [BE] Enable flake8-comprehension rule C417 (#97880)
Enables flake8-comprehension rule C417. Ruff autogenerated these fixes to the codebase.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97880
Approved by: https://github.com/ezyang, https://github.com/kit1980, https://github.com/albanD
2023-03-30 14:34:24 +00:00
Michael Lazos
e6909f6ccc [Dynamo] Fix for tuple construction from tuple iterators (#97862)
Fixes #93405

In short: when calling the builtin function `tuple` on a list variable, we added a list-length guard. This, paired with converting tuple iterators to a `ListIteratorVariable`, resulted in the guard being improperly added.
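
A hedged minimal illustration of the pattern involved (the actual repro in #93405 may differ in detail): constructing a tuple from a tuple iterator inside a compiled function.

```python
import torch

@torch.compile
def f(t):
    it = iter((1, 2, 3))   # tuple iterator -> ListIteratorVariable in dynamo
    vals = tuple(it)       # builtin tuple(...) called on the iterator
    return t + sum(vals)

print(f(torch.zeros(3)))   # tensor([6., 6., 6.])
```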

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97862
Approved by: https://github.com/yanboliang, https://github.com/jansel
2023-03-29 19:20:05 +00:00
Edward Z. Yang
8372c5dc68 Refactor dynamic dims api, stateless internals, higher level export API (#96699)
The purpose of this PR is to carry out a few large pieces of work:

1) Refactor all the internals of plumbing dynamic dimension information after dynamo to be stateless
2) Decouple allocation controls around dynamic dimensions from verification
3) For (2), for allocation, create an enum that dictates whether we are in DUCK (the default today), STATIC (aka assume_static_default in the past), or DYNAMIC (user constrained; do not duck shape) mode, sketched below
4) For (2), for verification, we separate the list of dynamic ranges entirely from allocation. This means the shape_env does not track what we verify on; instead, it is the caller's job to invoke produce_guards() with the various things they want verified, specifically with the valid ranges. We do use constrained ranges to refine value ranges when doing analysis.
5) We have therefore decided, as an extension of (4), to double down on "late" checks versus "eager" checks, primarily because the mechanism for gathering what actually matters happens during guard creation, and that should be the purview of the caller seeking guards, not the shape env. However, for dynamo, these structures are essentially one and the same.
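
A hedged sketch of the allocation enum described in point (3); the real enum (later visible as `DimDynamic` in `torch.fx.experimental.symbolic_shapes`) may differ in naming and detail:

```python
from enum import Enum, auto

class DimDynamic(Enum):
    # Per-dimension allocation policy for symbolic shapes (sketch).
    DUCK = auto()     # default: duck-size equal-valued dims to the same symbol
    STATIC = auto()   # assume static (aka assume_static_default in the past)
    DYNAMIC = auto()  # user constrained: allocate a fresh symbol, no duck sizing
```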

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96699
Approved by: https://github.com/avikchaudhuri, https://github.com/ezyang
2023-03-29 16:55:49 +00:00
Will Constable
c1a6dde79e Make dynamo-FSDP skip guards (#97463)
Create a new GuardSource for FSDP modules, and use it
to opt out of guard installation.

Based on @awgu's work in https://github.com/pytorch/pytorch/pull/97091

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97463
Approved by: https://github.com/voznesenskym, https://github.com/jansel, https://github.com/awgu
2023-03-28 04:04:34 +00:00
Yanbo Liang
e3df6a7c8a [Dynamo] Unspec int list if enabling dynamic_shapes (#97557)
Fixes #97348

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97557
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-03-27 06:12:43 +00:00
Edward Z. Yang
fa4c77e39b Rename PyOperator to HigherOrderOperator (#97493)
Twice this week I have had people confuse "operator defined with Python
operator registration aka torch.library" and "PyOperator which is used
to define control flow operators and other operators that cannot be
represented in JIT schema."  Renaming PyOperator for clarity.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97493
Approved by: https://github.com/SherlockNoMad
2023-03-24 05:04:02 +00:00
Michael Voznesensky
f9ce593267 Extend aot autograd dedup guards to params, stop using positions (#96774)
The purpose of this PR is to remove reliance on argument positions in dedup guards, AND extend the functionality to params.

A version of this PR was stamped previously (https://github.com/pytorch/pytorch/pull/95831), but it was kinda gross because it was based on an underlying PR that did way too much with source names.

This PR leaves most of that alone, in favor of just reusing the same name standardization logic that dynamo module registration does.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96774
Approved by: https://github.com/ezyang
2023-03-21 05:59:33 +00:00
Michael Voznesensky
722c4e59a4 Replace source check with assert (#95640)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95640
Approved by: https://github.com/ezyang
2023-03-19 21:51:59 +00:00
Guang Yang
d4b8ed2b11 Fail fast when dynamo attempts to add unspecialized int/float as additional graph inputs (#96786)
Summary:
Verified the changes to catch unspecialized ints/floats being added as additional graph inputs in D44037548, prior to PR https://github.com/pytorch/pytorch/pull/95621.

However, with #95621 the issue originally being solved is no longer valid, because ints & floats in `forward` will always be specialized in export. This PR adds the assertion anyway *(though it should not be hit unless there is a regression)* to immediately catch any attempt to add an unspecialized int/float to the additional graphargs.

Test Plan:
Example of the error message would look like:
```
Dynamo attempts to add additional input: value=9.999999747378752e-06, source=NNModuleSource(inner=AttrSource(base=NNModuleSource(inner=AttrSource(base=LocalInputSource(local_name='self', pos=0), member='torch_module')), member='eps'))
```
Passed all export tests
```
Buck UI: https://www.internalfb.com/buck2/fea72653-5549-47e7-a9bf-740eb86a8e26
Test UI: https://www.internalfb.com/intern/testinfra/testrun/8725724422167257
RE: reSessionID-7b3470b1-c293-4c4a-9671-dd0b7a2839b8  Up: 6.0 KiB  Down: 0 B
Jobs completed: 101. Time elapsed: 115.7s.
Tests finished: Pass 98. Fail 0. Fatal 0. Skip 0. 0 builds failed
```

Differential Revision: D44075910

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96786
Approved by: https://github.com/tugsbayasgalan, https://github.com/ezyang
2023-03-17 21:15:18 +00:00
Yanbo Liang
166117e050 control_flow.{cond/map} allows tracked_fakes divergence (#96546)
Fixes #96473

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96546
Approved by: https://github.com/ezyang
2023-03-14 07:06:54 +00:00
Edward Z. Yang
507feb805f Don't specialize torch.Size with specialize_int = False (#96419)
Fixes https://github.com/pytorch/pytorch/issues/95868

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96419
Approved by: https://github.com/jansel, https://github.com/ngimel
2023-03-14 01:32:58 +00:00
Edward Z. Yang
384d3ec2b6 Extra CR comments from #95621 (#96043)
Specifically:
063e441471 (r1120306196)
https://github.com/pytorch/pytorch/pull/95621#discussion_r1125015510

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96043
Approved by: https://github.com/Chillee, https://github.com/albanD
2023-03-10 01:10:48 +00:00
Edward Z. Yang
02a18b1a97 Properly avoid wrapping numbers as tensors before backend (#96193)
This partially reverts https://github.com/pytorch/pytorch/pull/96051 with a proper fix.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96193
Approved by: https://github.com/jansel
2023-03-07 17:57:47 +00:00
Edward Z. Yang
d303665d33 Make int unspecialization actually work (#95621)
OK, so this PR used to be about reducing the number of constants we specialize on, but it turns out that unspecialization was ~essentially never used (because we still constant specialized way too aggressively) and I ended up having to fix a bunch of issues to actually get tests to pass. So this PR is now "make int unspecialization actually work". As part of this, I have to turn off unspecialization by default, as there are still latent bugs in inductor.

The general strategy is that an unspecialized int is represented as a SymInt. Representing it as a 0d tensor (which is what the code used to do) is untenable: (1) we often need unspecialized ints to participate in size computations, but we have no way of propagating sympy expressions through tensor compute, and (2) a lot of APIs work when passed SymInt, but not when passed a Tensor. However, I continue to represent Numpy scalars as Tensors, as they are rarely used for size computation and they have an explicit dtype, so they are more accurately modeled as 0d tensors.

* I folded in the changes from https://github.com/pytorch/pytorch/pull/95099 as I cannot represent unspecialized ints as SymInts without also turning on dynamic shapes. This also eliminates the necessity for test_unspec.py, as toggling specialization without dynamic shapes doesn't do anything. As dynamic shapes defaults to unspecializing, I just deleted this entirely; for the specialization case, I rely on regular static shape tests to catch it. (Hypothetically, we could also rerun all the tests with dynamic shapes, but WITH int/float specialization, but this seems... not that useful? I mean, I guess export wants it, but I'd kind of like our Source heuristic to improve enough that export doesn't have to toggle this either.)
* Only 0/1 integers get specialized by default now
* A hodgepodge of fixes. I'll comment on the PR about them.
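
A hedged illustration of the unspecialization described above, as of this PR (defaults have shifted in later releases): an int argument other than 0/1 is traced as a SymInt rather than baked into the graph as a constant.

```python
import torch

@torch.compile(dynamic=True)
def f(x, n: int):
    return x + n

f(torch.randn(3), 5)
f(torch.randn(3), 7)   # can reuse the same graph; passing 0 or 1 would specialize
```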

Fixes https://github.com/pytorch/pytorch/issues/95469

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95621
Approved by: https://github.com/jansel, https://github.com/Chillee
2023-03-04 01:22:08 +00:00
Michael Voznesensky
8093abce3e Always get attr static out (#95771)
Discussion here https://github.com/pytorch/pytorch/issues/95630#issuecomment-1449596766

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95771
Approved by: https://github.com/jansel
2023-03-01 23:05:44 +00:00
Michael Voznesensky
34a7c79eac Rename func (#95639)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95639
Approved by: https://github.com/ezyang
2023-03-01 23:03:09 +00:00
Michael Voznesensky
eff5ae8746 Better mark_dynamic assertions (#95566)
This PR allows us to reuse the static per tensor decision making we make at fake tensorification time. We can use this to avoid setting up dynamic dim guards later if the tensor was never a candidate.
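
A hedged usage sketch of the `mark_dynamic` API this assertion work builds on (introduced with the fine-grained dynamic shape controls further down this log); exact behavior on non-candidate tensors may differ by version.

```python
import torch
import torch._dynamo as dynamo

x = torch.randn(8, 16)
dynamo.mark_dynamic(x, 0)   # request that dim 0 be treated as dynamic

@torch.compile
def f(t):
    return t * 2

f(x)   # compiled with dim 0 marked dynamic, per the call above
```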

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95566
Approved by: https://github.com/ezyang
2023-02-28 00:02:22 +00:00
Edward Z. Yang
6624a73837 Move istype and object identity tests into a dispatching dictionary. (#95476)
The idea is to make it a little more obvious which branch you're going to go down in a subset of cases, and make it easier to detect if you've accidentally shadowed one condition with another (the reason I wrote this in the first place.) The type dictionary also makes it harder for people to accidentally use isinstance when they should have used istype.
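
A standalone illustration of the dispatching-dictionary idea (not dynamo's actual table): keying on the exact `type()` makes it obvious which branch handles which type and avoids accidental `isinstance()` matches on subclasses.

```python
def wrap_list(value):
    return ("list", value)

def wrap_tuple(value):
    return ("tuple", value)

# One entry per exact type; shadowed or duplicated conditions become visible.
TYPE_DISPATCH = {
    list: wrap_list,
    tuple: wrap_tuple,
}

def wrap(value):
    handler = TYPE_DISPATCH.get(type(value))   # exact-type lookup, i.e. istype
    if handler is not None:
        return handler(value)
    raise NotImplementedError(type(value))

print(wrap([1, 2]))   # ('list', [1, 2])
print(wrap((1, 2)))   # ('tuple', (1, 2))
```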

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95476
Approved by: https://github.com/jansel
2023-02-27 02:50:58 +00:00
Yanbo Liang
02d44e5de4 [Dynamo] Support CUDA stream passed from outside of torch.compile decorator (#94627)
Fixes #94499

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94627
Approved by: https://github.com/jansel
2023-02-25 19:15:59 +00:00
Edward Z. Yang
69d62373aa Move multi-line wrap functions to helper (#95472)
My intention is to collapse all of the istype() and isinstance() and object identity tests into a more structured form involving a dict lookup. To do this conveniently, I need every continuation to be expressible in a single expression. Thus, all multi-line wrap methods are moved. This is code motion only, no logic changes.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95472
Approved by: https://github.com/Skylion007
2023-02-25 02:23:40 +00:00
Edward Z. Yang
a33d8133a5 Slight cleanup of VariableBuilder giant if condition (#95471)
Some of these changes are semantics preserving, some are not. Please review carefully.

* Use `istype(x, y)` over `type(x) is y`
* Use istype over isinstance in the frozenset check. If the user subclassed the type in question, we must treat it as a user-defined class, as it may have custom behavior (see the sketch below)
* The `isinstance(value, (int, float))` condition for `wrap_unspecialized_primitive` is dead-ish; direct int/float values are caught by an earlier istype check. Technically, if you subclassed int/float it would pass through; however, this is almost assuredly not intended behavior
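
A small illustration of the istype-vs-isinstance distinction called out above; the `istype` helper here is a minimal stand-in for dynamo's, not its actual implementation.

```python
def istype(obj, allowed_types):
    # Exact type match (stand-in for dynamo's istype helper).
    if isinstance(allowed_types, (tuple, set, frozenset)):
        return type(obj) in allowed_types
    return type(obj) is allowed_types

class MyInt(int):
    pass

x = MyInt(3)
print(isinstance(x, int))   # True  -> would be treated like a plain int
print(istype(x, int))       # False -> treated as a user-defined class instead
```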

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95471
Approved by: https://github.com/Skylion007
2023-02-25 02:23:40 +00:00
Will Constable
a12e92d8e4 Support nn.Module forward hooks in torchdynamo (#92125)
Tweak dynamo behavior in 2 places when calling nn.Modules,
to route the call to __call__  instead of .forward(), since
__call__ is the codepath that eager users hit and will dispatch
to hooks correctly.
 (1) inside NNModuleVariable.call_function, which covers the common case
     of calling a module from code dynamo is already tracing
 (2) at the OptimizedModule layer, which is the entrypoint
     into a top-level nn.Module dynamo is about to compile

This exposes a new bug: NNModuleVariable used to special-case calling
module.forward() (which is a method) as a UserFunctionVariable with an extra
'self' arg.  After tracing into module.__call__, there is no longer a special
case for the eventual call into .forward, and it gets wrapped in a
UserDefinedObjectVariable following standard behavior of ._wrap().  UDOV can't be
called, so this broke some tests.

- Fix: add a new special case in _wrap() that treats methods as a UserDefinedMethod
  instead of UserDefinedObjectVariable.  Now, the forward method can be called.

Also, fix NNModuleVar.call_method routing forward back to __call__
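
A hedged end-user illustration of the behavior this enables: a forward hook firing when a compiled module is called, since the call now routes through `__call__`.

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def forward(self, x):
        return x * 2

m = M()
m.register_forward_hook(lambda mod, inp, out: out + 1)  # hook adds 1 to the output

compiled = torch.compile(m)
print(compiled(torch.ones(3)))   # tensor([3., 3., 3.]) -> the hook was dispatched
```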

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92125
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/voznesenskym
2023-02-24 05:10:29 +00:00
Will Constable
24dd37ef51 Add BOOL_FALSE guard to optimize empty container case (#95248)
There is a fast way to implement a guard for an empty dict, which is to check its bool() value.

However, we can't use this guard in general, since we can only safely apply it at runtime if the runtime value actually is a dict (or another type that works with `bool()` in the same way). A counterexample is when a tensor is passed instead of a dict and throws on the `bool()` operator.

So we can put a type check in the guard, but that is slow enough it defeats the purpose.

Instead, we note that for the case of NNModuleVariables (which are specialized NNModules not unspecialized ones), we already have a hook in place to invalidate the guards if setattr is called.  I am claiming that setattr is the only way that the type of a property on an NNModule could change.  If I'm right, then it's safe to (a) only use this guard for NNModuleVariables, (b) not do a type check inside the guard.
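
A standalone sketch of the idea (not dynamo's real guard machinery): the cheap emptiness check, and the counterexample that makes it unsafe without the NNModule-only restriction.

```python
import torch

def bool_false_guard(value):
    return not value                  # fast path: no type check at all

print(bool_false_guard({}))           # True  -> empty dict passes the guard
try:
    bool_false_guard(torch.randn(3))  # a multi-element Tensor throws on bool()
except RuntimeError as e:
    print("unsafe without a type check:", e)
```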

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95248
Approved by: https://github.com/voznesenskym
2023-02-23 21:35:15 +00:00
Michael Voznesensky
500ebb2cd6 Fine grained dynamic shape controls (#94787)
https://docs.google.com/document/d/1aoIyYE8_6cYpWqS25thzVoIiKsT5aaUEOiiPwbIXt8k/edit

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94787
Approved by: https://github.com/ezyang
2023-02-17 22:28:37 +00:00
PyTorch MergeBot
30c07722d1 Revert "Inductor: fix incorrect result of inplace unsqueeze (#94797)"
This reverts commit 6ae06e49ac.

Reverted https://github.com/pytorch/pytorch/pull/94797 on behalf of https://github.com/ezyang due to bad approach, and can lead to subtle further bugs
2023-02-17 22:22:27 +00:00
PyTorch MergeBot
e0ede1cc30 Revert "Fine grained dynamic shape controls (#94787)"
This reverts commit 2aa806608b.

Reverted https://github.com/pytorch/pytorch/pull/94787 on behalf of https://github.com/kit1980 due to After this PR, test_autocast_sdpa_dynamic_shapes_static_default started to fail with RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides: https://github.com/pytorch/pytorch/actions/runs/4206176846/jobs/7299657478
2023-02-17 19:52:16 +00:00
Michael Voznesensky
2aa806608b Fine grained dynamic shape controls (#94787)
https://docs.google.com/document/d/1aoIyYE8_6cYpWqS25thzVoIiKsT5aaUEOiiPwbIXt8k/edit

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94787
Approved by: https://github.com/ezyang
2023-02-17 17:39:22 +00:00
blzheng
6ae06e49ac Inductor: fix incorrect result of inplace unsqueeze (#94797)
This PR aims to fix the incorrect result in the following test case.
```
import torch
import torch._dynamo
from torch._dynamo.testing import rand_strided  # provides rand_strided used below

@torch._dynamo.optimize("inductor")
def fn(a):
    unsqueeze_ = torch.ops.aten.unsqueeze_.default(a, 0)
    return unsqueeze_

args = [
      ((1, 1, 1, 12, 11, 3), (396, 396, 396, 33, 3, 1), torch.int64, "cpu")
       ]
args = [rand_strided(sh, st, dt, dev) for (sh, st, dt, dev) in args]

with torch.no_grad():
    out = fn(*args)

# expected result: (396, 396, 396, 396, 33, 3, 1) torch.Size([1, 1, 1, 1, 12, 11, 3])
print(args[0].stride(), args[0].shape) # incorrect result: (396, 396, 396, 396, 396, 396, 33, 3, 1) torch.Size([1, 1, 1, 1, 1, 1, 12, 11, 3])
```
**Root cause**

1. [fake_tensor](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/variables/builder.py#L140) is changed during [tracer.run](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/convert_frame.py#L311), which then [passes incorrect inputs to inductor](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/output_graph.py#L670).
2. example_inputs are changed during [propagate](https://github.com/pytorch/pytorch/blob/master/torch/_inductor/mkldnn.py#L509)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94797
Approved by: https://github.com/jgong5, https://github.com/jansel
2023-02-16 05:57:42 +00:00
Jason Ansel
ae57bd6630 PT2/TorchScript interoperability fix (#94678)
Allows torch.compile() to inline into ScriptFunction

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94678
Approved by: https://github.com/ezyang
2023-02-15 01:21:10 +00:00
Aaron Gokaslan
3d82d8d0ed [BE] Enable more flake8-comprehensions checks (#94601)
I applied some flake8 fixes and enabled checking for them in the linter. I also enabled some checks for my previous comprehensions PR.

This is a follow-up to #94323 where I enabled the flake8 checkers for the fixes I made and fixed a few more of them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94601
Approved by: https://github.com/ezyang
2023-02-10 23:40:29 +00:00