Commit Graph

88 Commits

Author SHA1 Message Date
Jason Ansel
8858edad65 [dynamo] Refactor test cross importing (#113242)
Having tests import tests is a bit annoying because fbcode and OSS have different paths.  This moves that logic into a helper function.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113242
Approved by: https://github.com/yanboliang
2023-11-09 01:36:27 +00:00
Aaron Gokaslan
9c1fb2cbb3 [BE]: Enable ruff PIE794 and fix bugs it found in test suite (#112989)
Enables some tests that were incorrectly not being run and enables PIE794 globally. This rule checks if a classvar is defined twice and flags it, since a duplicate definition is likely a bug. In fact, we found several cases where it was a bug. It does have a couple of false positives, which I flagged upstream and replaced with noqas: https://github.com/astral-sh/ruff/issues/8497
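For illustration, the duplicate-classvar pattern PIE794 flags looks like this (names hypothetical):

```python
# Hypothetical example of what ruff's PIE794 flags: a class variable
# defined twice, so the first value is dead and silently overridden --
# in a test suite this can mean a test config or check never takes effect.
class TestConfig:
    rtol = 1e-5   # dead: overridden below without ever being used
    atol = 1e-8
    rtol = 1e-3   # PIE794: `rtol` is defined multiple times
```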

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112989
Approved by: https://github.com/malfet
2023-11-05 22:11:53 +00:00
Kazuaki Ishizaki
9089242048 Fix typo under test directory (#112346)
This PR fixes typos in comments and messages under the `test` directory. It also fixes related typos in messages under the `torch` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112346
Approved by: https://github.com/kit1980, https://github.com/ezyang
2023-11-03 07:53:33 +00:00
Jon Chuang
6d78f34a06 fix regression which creates a new fake tensor (#111864)
Fixes regression identified here: ccd6b373b5 (r1369334484)

Now that `get_fake_value` will identify aliases, we should not try to wrap the fake value again.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111864
Approved by: https://github.com/eellison
2023-10-24 05:11:48 +00:00
Jon Chuang
47eed65481 [dynamo] Add is_ support for Tensors, force get_fake_value to reuse previously computed example_value if available (#111565)
Use FakeTensor id match as equivalent to object identity match
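A hedged sketch of the user-visible pattern this enables (the FakeTensor matching itself is internal to dynamo):

```python
import torch

def f(x, y):
    # With this PR, dynamo answers `is` on tensor inputs by matching the
    # identities of their FakeTensors instead of graph-breaking.
    if x is y:
        return x * 2
    return x + y

opt_f = torch.compile(f)
a = torch.randn(3)
print(opt_f(a, a))               # aliased: takes the `is` branch
print(opt_f(a, torch.randn(3)))  # non-aliased: takes the other branch
```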

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111565
Approved by: https://github.com/ezyang
2023-10-21 13:56:30 +00:00
Michael Voznesensky
1e7947b3e0 Revert "Reland 3rd try [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#109323)" + Forward fixes + test (#110964)
This reverts commit f786fbdebd.

Forward fixes

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110964
Approved by: https://github.com/ezyang, https://github.com/anijain2305
2023-10-11 05:16:47 +00:00
Animesh Jain
ce8b4f56d8 [dynamo] Dont put nn module guards on torch inbuilt nn modules (#110230)
This is one way to fix https://github.com/pytorch/pytorch/issues/110048

Looking for feedback.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110230
Approved by: https://github.com/ezyang
2023-09-29 00:43:16 +00:00
PyTorch MergeBot
559d1f94a0 Revert "[Dynamo][Test] reland testcase with state (#109713)"
This reverts commit 5c897eacff.

Reverted https://github.com/pytorch/pytorch/pull/109713 on behalf of https://github.com/PaliC due to it creating an out-of-memory error for macOS tests ([comment](https://github.com/pytorch/pytorch/pull/109713#issuecomment-1728314478))
2023-09-20 19:34:07 +00:00
Kaichao You
5c897eacff [Dynamo][Test] reland testcase with state (#109713)
Reland the PR https://github.com/pytorch/pytorch/pull/108750 reverted by https://github.com/pytorch/pytorch/issues/108838 , since https://github.com/pytorch/pytorch/pull/108969 has been merged.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109713
Approved by: https://github.com/anijain2305
2023-09-20 18:19:18 +00:00
Animesh Jain
f786fbdebd Reland 3rd try [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#109323)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109323
Approved by: https://github.com/huydhn, https://github.com/voznesenskym
2023-09-15 08:44:14 +00:00
PyTorch MergeBot
92de1d2d02 Revert "[Dynamo][Test]Add a testcase for module with training state (#108750)"
This reverts commit f90444cf0b.

Reverted https://github.com/pytorch/pytorch/pull/108750 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it starts failing this test https://github.com/pytorch/pytorch/issues/108838 without https://github.com/pytorch/pytorch/pull/108883 and the latter has been reverted ([comment](https://github.com/pytorch/pytorch/pull/108750#issuecomment-1712708800))
2023-09-10 04:45:00 +00:00
PyTorch MergeBot
56c2386157 Revert "reland [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108883)"
This reverts commit d4230e5574.

Reverted https://github.com/pytorch/pytorch/pull/108883 on behalf of https://github.com/huydhn due to Per the discussion thread on D49122208, reverting this change ([comment](https://github.com/pytorch/pytorch/pull/108883#issuecomment-1712707853))
2023-09-10 04:40:02 +00:00
Michael Voznesensky
e4350d6d4e Functools partial support in dynamo (#108846)
The strategy for supporting functools partials is relatively straightforward.

There are 2 cases we need to support:

**1) Functools partials as input**
In this case, we are seeing the functools partial for the first time, and it is guaranteed to have a source. As such, the args, keywords, and func of the functools partial are passed through VariableBuilder. Since these objects are new to us (they arrive as inputs), we re-enter VariableBuilder with a source referencing the args, keywords, and func as attributes of the input to produce:

- func: A callable VariableTracker (UDF, TorchVariable, etc) depending on the value of `func`
- args: List[VariableTracker] - note, not ListVariableTracker!
- keywords: Dict[str, VariableTracker]

A major benefit of this structure is that it very elegantly matches the args to `call_function`.

We then compose a FunctoolsPartialVariable from the VariableTrackers made above.

**2) Functools partials created within compile**
In this case, we already have all the args as known VTs, and thus just compose a FunctoolsPartialVariable as we do for case (1).

For both (1) and (2), we propagate all guards from the func, args, and keyword VTs to the FunctoolsPartialVariable.
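A minimal sketch of the two cases in user-level code (the VariableTracker plumbing described above is internal to dynamo):

```python
import functools
import torch

def mul(x, scale):
    return x * scale

# Case (1): a functools.partial passed into the compiled region as an input.
scaled = functools.partial(mul, scale=2.0)

@torch.compile
def f(fn, x):
    # Case (2): a functools.partial created inside the compiled region.
    halved = functools.partial(mul, scale=0.5)
    return fn(x) + halved(x)

print(f(scaled, torch.ones(3)))  # tensor([2.5000, 2.5000, 2.5000])
```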

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108846
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-09-09 17:25:02 +00:00
Animesh Jain
d4230e5574 reland [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108883)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108883
Approved by: https://github.com/voznesenskym, https://github.com/huydhn
2023-09-09 03:12:31 +00:00
PyTorch MergeBot
72f24d0001 Revert "[dynamo][finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108528)"
This reverts commit 34bb74c4cf.

Reverted https://github.com/pytorch/pytorch/pull/108528 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it has some nasty merge conflicts after the revert of D48910794. I need to revert this so the conflict could be resolved. Please help rebase this tomorrow and reland the change ([comment](https://github.com/pytorch/pytorch/pull/108528#issuecomment-1711034781))
2023-09-08 03:49:41 +00:00
youkaichao
f90444cf0b [Dynamo][Test]Add a testcase for module with training state (#108750)
Add the problem mentioned in https://github.com/pytorch/pytorch/issues/105653 to the tests. This issue has been addressed by https://github.com/pytorch/pytorch/pull/108528.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108750
Approved by: https://github.com/anijain2305
2023-09-08 02:39:42 +00:00
Zhengxu Chen
c75aec90d3 [dynamo] Record nn_module_stack also for unspecialized nn modules. (#108281)
Summary: Currently node metadata "nn_module_stack" is only being used by export. For some exported models, we still want to retain nn_module_stack for unspecialized modules for various purposes. This diff adds a path to also record nn_module_stack when an unspecialized module has a source available.

Test Plan: test_export_nn_module_stack_patched_module

Differential Revision: D48841193

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108281
Approved by: https://github.com/yanboliang, https://github.com/tugsbayasgalan
2023-09-07 15:38:46 +00:00
Animesh Jain
34bb74c4cf [dynamo][finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108528)
**This PR is a 99% copy-paste of Sam Gross's** (@colesbury) work at https://github.com/pytorch/pytorch/pull/100642. Copied from there:

--------
The NN_MODULE guard now subsumes guards on Module attributes. The check_fn will fail if module attributes are changed (such as Module.training), if parameters, submodules, or buffers are added or removed, or if fields are changed on the type itself.

This gives up specificity in the guard check -- if any field is changed the check_fn fails -- for faster overall checks.
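A hedged sketch of the observable behavior: any attribute change on the module now fails the single module guard and triggers recompilation.

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2 if self.training else x

m = M()
opt = torch.compile(m)
opt(torch.ones(3))  # first compile; the NN_MODULE guard covers m's dict and type
m.eval()            # mutating .training fails the guard on the next call,
opt(torch.ones(3))  # which therefore recompiles
```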

-----

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108528
Approved by: https://github.com/ezyang
2023-09-07 01:45:47 +00:00
Jason Ansel
6d61d74545 [dynamo] Fix setattr nn.Module with new attribute (#108098)
This fixes one (but not all) of the issues in DALLE2_pytorch.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108098
Approved by: https://github.com/eellison
ghstack dependencies: #108096, #108087
2023-08-29 02:58:48 +00:00
Animesh Jain
9d2ffc5dfa [reland][Dynamo] cache_size policy #107496 (#108069)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108069
Approved by: https://github.com/yanboliang
2023-08-28 22:06:54 +00:00
PyTorch MergeBot
b4c6c4da88 Revert "[Dynamo] cache_size policy (#107496)"
This reverts commit 4175a6e944.

Reverted https://github.com/pytorch/pytorch/pull/107496 on behalf of https://github.com/ZainRizvi due to Breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/107496#issuecomment-1693590121))
2023-08-25 16:07:14 +00:00
Animesh Jain
4175a6e944 [Dynamo] cache_size policy (#107496)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107496
Approved by: https://github.com/ezyang
ghstack dependencies: #107645
2023-08-24 21:50:00 +00:00
Wanchao Liang
9c2b4a35a3 [dtensor] group all dynamo tests together (#107487)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107487
Approved by: https://github.com/fduwjj
ghstack dependencies: #107472, #107473
2023-08-21 23:56:00 +00:00
Jason Lu
bc88028e8e Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743)
Summary:
Original commit changeset: 81319beb97f3

Original Phabricator Diff: D47961182

Test Plan: revert to maintain backward compat with legacy ads_dper3 production package. Read details in: S357822

Reviewed By: atuljangra

Differential Revision: D48131623

@diff-train-skip-merge
(D48131623 landed internally)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106743
Approved by: https://github.com/malfet
2023-08-08 15:27:34 +00:00
Mikayla Gawarecki
d8e5f2aa6d Reland "Make adding buffers more like adding parameters (#104069)" (#106224)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106224
Approved by: https://github.com/atalman, https://github.com/albanD
2023-07-31 17:18:56 +00:00
Michael Voznesensky
8549abc347 Grab bag of DTensor enablement stuff (Enable whole graph capture for DTensor) (#105787)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105787
Approved by: https://github.com/ezyang
2023-07-30 00:17:45 +00:00
Elias Ellison
76a2ec49d7 [Dynamo] Ignore no-op tensor assignment (#106092)
Ignore no-op `self.attr = self.attr` on NN Modules when attr is a Tensor attribute.

This comes from a [llama pattern](https://github.com/pytorch/benchmark/blob/main/torchbenchmark/models/llama/model.py#L121-L122). Normally, when a setattr occurs on an nn module, we turn it into an `UnspecializedNNModuleVariable`, which prevents static buffers and parameters. In a subsequent PR I will add support for cudagraph mutation of buffers/params, which, together with this PR, takes llama from 1.6x to 4.4x in inference.
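A minimal sketch of the pattern being ignored (module and attribute names are illustrative, not the actual llama code):

```python
import torch

class Attention(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("cache_k", torch.zeros(8))

    def forward(self, x):
        # No-op tensor self-assignment: previously this setattr demoted the
        # module to an UnspecializedNNModuleVariable; it is now ignored.
        self.cache_k = self.cache_k
        return x + self.cache_k

torch.compile(Attention())(torch.randn(8))
```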

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106092
Approved by: https://github.com/yanboliang
2023-07-28 17:16:19 +00:00
Edward Z. Yang
7b9d250f06 Change _dynamo.export to be export(f)(*args, **kwargs) (#106109)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
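A sketch of the calling-convention change (the exact return type of export has varied across versions, so treat the unpacking as illustrative):

```python
import torch
import torch._dynamo as dynamo

def f(x):
    return x.sin()

# Old convention: dynamo.export(f, torch.randn(3))
# New convention after this PR: export(f)(*args, **kwargs)
gm, guards = dynamo.export(f)(torch.randn(3))
```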

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106109
Approved by: https://github.com/voznesenskym
2023-07-27 21:41:13 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.
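The kind of rewrite described, as a small illustration:

```python
d = {"a": 1, "b": 2}

# Before: the loop binds a value it never uses.
for key, value in d.items():
    print(key)

# After the automated fix: iterate over the keys directly.
for key in d:
    print(key)
```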

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Andrey Talman
c6653b65d8 Back out "Make adding buffers more like adding parameters (#104069)" (#105581)
Summary:
D47537831 is breaking pyper tests: https://fb.workplace.com/groups/802176577445480/posts/1018902842439518/

with `TypeError: register_buffer() takes 3 positional arguments but 4 were given`

Original commit changeset: d4b4069fbd38

Original Phabricator Diff: D47537831

Test Plan:
```
buck2 run //caffe2/torch/fb/training_toolkit/integration_tests/training_lifecycle/cogwheel_tests/pyper_release_v2:cogwheel_smallworld_inline_cvr_infer_pyper_pyper__canary_offline_training-launcher -- --run-harness-in-tupperware --build-fbpkg ads_dper3 --build-fbpkg training_platform
```

Reviewed By: atalman

Differential Revision: D47600140

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105581
Approved by: https://github.com/mikaylagawarecki
2023-07-20 03:39:53 +00:00
Justin Chu
8a688277a2 [BE] Enable ruff's UP rules and autoformat dynamo / functorch and refs (#105432)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105432
Approved by: https://github.com/ezyang
2023-07-19 13:48:44 +00:00
ekamiti
32d422f335 Make adding buffers more like adding parameters (#104069)
Add semantics for creating a buffer object similar to those for creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same, as the `register_buffer` method has not been changed. The `persistent` parameter of the `Buffer` type indicates whether a buffer object should be persistent or not. The other non-test changes have to do with getting the new `Buffer` type recognized by inductor and dynamo. The remaining changes are test changes to make sure that the `Buffer` type can be used as a drop-in replacement for `register_buffer`, as it just leads to `register_buffer` being called. This new functionality still allows normal tensors to be used as buffers, so these changes are intended to be backwards compatible.

Fixes #35735
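A hedged sketch of the new semantics described above (note this change was later backed out and relanded, per the surrounding commits, so `nn.Buffer` availability depends on the PyTorch version):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        # Old style: explicit registration.
        self.register_buffer("running_sum", torch.zeros(3))
        # New style: plain assignment of a Buffer object, mirroring
        # nn.Parameter; under the hood it still calls register_buffer.
        self.scale = nn.Buffer(torch.ones(3), persistent=False)

    def forward(self, x):
        return x * self.scale + self.running_sum
```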

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104069
Approved by: https://github.com/mikaylagawarecki
2023-07-17 17:59:05 +00:00
Aaron Gokaslan
2f95a3d0fc [BE]: Apply ruff PERF fixes to torch (#104917)
Applies automated ruff fixes for the PERF rules and enables all the automatic ones. I also updated ruff, which applied some additional fixes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104917
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-07-11 20:45:21 +00:00
Danni Li
db4aed6a03 Include nn.ParameterDict in dynamo __getitem__ (#99771)
Summary:

Fix: #99735

Test Plan: Please see GitHub tests.

Differential Revision: D45200616
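A minimal sketch of the indexing pattern this enables inside a compiled region:

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.ParameterDict({"w": nn.Parameter(torch.ones(3))})

    def forward(self, x):
        # __getitem__ on an nn.ParameterDict, now handled by dynamo.
        return x * self.weights["w"]

torch.compile(M())(torch.randn(3))
```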

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99771
Approved by: https://github.com/Skylion007, https://github.com/anijain2305
2023-07-11 08:19:04 +00:00
Yanbo Liang
1be1f5090e [Dynamo] Fix broken NNModule comparison (#103812)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103812
Approved by: https://github.com/msaroufim
2023-06-20 04:01:24 +00:00
Edward Z. Yang
9946499228 Continue simplifying dynamic shapes tests (#103592)
Remove the static by default / no automatic dynamic configuration as this is about to become the default.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103592
Approved by: https://github.com/voznesenskym, https://github.com/Skylion007
2023-06-14 19:35:51 +00:00
Mark Saroufim
790f5732f6 Fix Graph Break on builtin comparison on NNModule (#103176)
Fixes https://github.com/pytorch/pytorch/issues/102338

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103176
Approved by: https://github.com/anijain2305
2023-06-07 22:51:43 +00:00
Will Feng
61736679cd [Dynamo] No graph break for super(MyConv{1/2/3}d, self).forward and super(MyConvTranspose, self).forward (#102509)
Before this PR, when running super(MyConv1d, self).forward or super(MyConvTranspose, self).forward, dynamo would create a graph break when executing NNModuleVariable.call_method and raise an unimplemented error for name=_conv_forward / _output_padding. See the issue for full details: https://github.com/pytorch/pytorch/issues/101155

After this PR, for torch.nn conv modules with the function name _conv_forward / _output_padding, we inline the function with tx.inline_user_function_return, as sketched below.
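A minimal sketch of the previously graph-breaking pattern (class name illustrative):

```python
import torch
import torch.nn as nn

class MyConv1d(nn.Conv1d):
    def forward(self, x):
        # Explicit super().forward call; dynamo now inlines the underlying
        # _conv_forward helper instead of graph-breaking.
        return super(MyConv1d, self).forward(x) + 1

torch.compile(MyConv1d(4, 8, kernel_size=3))(torch.randn(2, 4, 16))
```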

Code refactor: added NNModuleVariable._inline_user_function_return_helper to consolidate tx.inline_user_function_return into one place and keep the code DRY; it replaces two previously unconsolidated inline_user_function_return call sites with different ```fn``` and ```source``` logic. For local testing, these are covered by test_modulelist, test_moduledict, test_conv_call_super_forward_directly, and test_conv_transpose_call_super_forward_directly in test_modules.py.

Differential Revision: [D46494460](https://our.internmc.facebook.com/intern/diff/D46494460)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102509
Approved by: https://github.com/yanboliang
2023-06-06 22:01:17 +00:00
Ali Moezzi
719584600b Merge original module attributes with attributes assigned by __setattr__ (#102910)
Fixes https://github.com/pytorch/pytorch/issues/94478 @davidberard98

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102910
Approved by: https://github.com/Skylion007, https://github.com/Neilblaze, https://github.com/davidberard98
2023-06-05 19:14:07 +00:00
David Berard
c36d235db0 Revert "implement __dir__ for dynamo (#102480)" (#102766)
This reverts commit b02f48b181.

If a user does this:

```
mod = torch.compile(mod)
mod.is_compiled = True
assert "is_compiled" in dir(mod)
```

it will fail after #102480.

Differential Revision: [D46368712](https://our.internmc.facebook.com/intern/diff/D46368712)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102766
Approved by: https://github.com/msaroufim
2023-06-02 19:40:44 +00:00
ALi
b02f48b181 implement __dir__ for dynamo (#102480)
Fixes #94478: modules' attributes are not included when `__dir__` is called on the optimized module.
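A sketch of the intended behavior (note the commit above reverts this change because it broke attributes set *after* compilation):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden_dim = 16

    def forward(self, x):
        return x

opt = torch.compile(M())
# With __dir__ forwarding, the wrapped module's attributes are visible:
assert "hidden_dim" in dir(opt)
```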

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102480
Approved by: https://github.com/msaroufim
2023-05-30 18:46:10 +00:00
Wanchao Liang
c1db235040 [dynamo] fix module buffers call (#102251)
This PR fixes the module buffers call, extracting module.buffers similarly to module.parameters.
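A minimal sketch of the now-working call, mirroring the existing module.parameters support:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("offset", torch.ones(3))

    def forward(self, x):
        # Iterating self.buffers() inside the compiled region.
        for buf in self.buffers():
            x = x + buf
        return x

torch.compile(M())(torch.randn(3))
```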

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102251
Approved by: https://github.com/wconstab
2023-05-25 21:26:09 +00:00
Animesh Jain
7a17e9d0b6 [dynamo] Bugfix for unspecialized nn module variable (#101859)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101859
Approved by: https://github.com/yanboliang, https://github.com/shingjan
2023-05-20 00:46:56 +00:00
Yanbo Liang
d855b6aed6 [Dynamo] Add unit test for explicitly calling __call__ (#100146)
@wconstab As we discussed last Friday, I added the unit test for explicitly calling __call__ and added a comment to explain why we redirect ```UserMethodVariable.call_function``` to ```NNModuleVariable.call_method``` for a certain case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100146
Approved by: https://github.com/wconstab
2023-04-27 15:47:11 +00:00
Yanbo Liang
2989d6c93d [Dynamo] Fix constructing lazy submodule inside of lazy module's initialize_parameters (#100047)
This PR fixes two issues:
* Constructing a lazy submodule inside of a lazy module's ```initialize_parameters``` - don't unspecialize the module if it's lazy.
* Fixes #100001

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100047
Approved by: https://github.com/jansel
2023-04-26 23:36:31 +00:00
David Berard
d976df49c5 [dynamo] don't use LazyModuleMixin.cls_to_become if it is None (#99943)
**TL;DR**: This PR fixes handling for lazy modules where `cls_to_become is None`. In those cases, we should leave the type of the lazy module as the old value.

**Details**:
Lazy modules are intended to be initialized at execution; some of them are also supposed to switch to a different type after they have been initialized. However, not all are supposed to switch; see this logic from `nn/modules/lazy.py`

```python
    def _infer_parameters(self, ...):
        ...
        if module.cls_to_become is not None:
            module.__class__ = module.cls_to_become
```

i.e., we should leave the module type as the old value if `module.cls_to_become is None`. This PR updates dynamo's handling to match this behavior.

Test `test_lazy_module_no_cls_to_become` added to `test/dynamo/test_module.py`.

Differential Revision: [D45253698](https://our.internmc.facebook.com/intern/diff/D45253698)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99943
Approved by: https://github.com/jansel
2023-04-25 21:34:11 +00:00
Aaron Gokaslan
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.jit allow simple generator expressions, which lets us enable rules that replace unnecessary list comprehensions with generators in any/all. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.
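The rewrite C419 performs, for illustration:

```python
nums = [1, 2, 3]

# Before (C419): the list comprehension is fully materialized first.
found = any([n > 2 for n in nums])

# After: the generator expression lets any() short-circuit.
found = any(n > 2 for n in nums)
```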

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00
Michael Voznesensky
04f7a2a5e1 Support module dict iter (#99503)
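The commit title alone describes the change; a hedged sketch of the now-supported pattern:

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleDict({"fc": nn.Linear(4, 4), "act": nn.ReLU()})

    def forward(self, x):
        # Iterating an nn.ModuleDict inside the compiled region.
        for name in self.layers:
            x = self.layers[name](x)
        return x

torch.compile(M())(torch.randn(2, 4))
```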
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99503
Approved by: https://github.com/Chillee, https://github.com/jansel
2023-04-19 21:54:35 +00:00
Will Constable
e6aa8e0729 Test and document dynamo backward hooks support (#99382)
No new support is added, but backward hooks are working, and there is now a test and some documentation about the limitations (hooks fire after the whole graph).
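A hedged sketch of the supported pattern, with the documented limitation noted in a comment:

```python
import torch

x = torch.randn(3, requires_grad=True)
# Hook registered on an input tensor; it fires only after the whole
# backward graph has run -- the limitation documented above.
x.register_hook(lambda g: g * 2)

@torch.compile
def f(x):
    return (x * 3).sum()

f(x).backward()
print(x.grad)  # gradient of 3, doubled by the hook
```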

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99382
Approved by: https://github.com/yanboliang
2023-04-18 03:03:29 +00:00
Yanbo Liang
05809c7d3b [Dynamo] No graph break for explicit calling Conv{1/2/3}d.forward & ConvTranspose{1/2/3}d.forward (#99015)
Before this PR, if users called ```Conv2d(x)```, dynamo handled it well (no graph break) and put a ```call_module``` op in the FX graph. However, if users explicitly called ```Conv2d.forward(x)``` in another ```forward``` function, the inlining would fail (causing a graph break). This PR fixes the issue by translating the explicit ```Conv2d.forward(x)``` into ```Conv2d(x)```.
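A minimal sketch of the fixed pattern:

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)

    def forward(self, x):
        # Explicit .forward call; dynamo now translates it to self.conv(x),
        # producing a call_module node instead of a graph break.
        return self.conv.forward(x)

torch.compile(M())(torch.randn(1, 3, 8, 8))
```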

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99015
Approved by: https://github.com/jansel, https://github.com/wconstab
2023-04-15 08:04:13 +00:00