Commit Graph

79 Commits

Author SHA1 Message Date
Animesh Jain
f786fbdebd Reland 3rd try [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#109323)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109323
Approved by: https://github.com/huydhn, https://github.com/voznesenskym
2023-09-15 08:44:14 +00:00
PyTorch MergeBot
92de1d2d02 Revert "[Dynamo][Test]Add a testcase for module with training state (#108750)"
This reverts commit f90444cf0b.

Reverted https://github.com/pytorch/pytorch/pull/108750 on behalf of https://github.com/huydhn due to Sorry for reverting you change, but it starts failing this test https://github.com/pytorch/pytorch/issues/108838 without https://github.com/pytorch/pytorch/pull/108883 and the latter has been reverted ([comment](https://github.com/pytorch/pytorch/pull/108750#issuecomment-1712708800))
2023-09-10 04:45:00 +00:00
PyTorch MergeBot
56c2386157 Revert "reland [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108883)"
This reverts commit d4230e5574.

Reverted https://github.com/pytorch/pytorch/pull/108883 on behalf of https://github.com/huydhn due to Per the discussion thread on D49122208, reverting this change ([comment](https://github.com/pytorch/pytorch/pull/108883#issuecomment-1712707853))
2023-09-10 04:40:02 +00:00
Michael Voznesensky
e4350d6d4e Functools partial support in dynamo (#108846)
The strategy for supporting functools partials is relatively straightforward.

There are 2 cases we need to support:

**1) Functools partials as input**
In this case, we are seeing the functools partial for the first time, and it is guaranteed to have a source. As such, the args, keywords, and func of the functools partial are passed through VariableBuilder. Since this is the first time we are seeing these objects (the partial is an input), we re-enter VariableBuilder with a source referencing the args, keywords, and func as attributes of the input to produce:

- func: A callable VariableTracker (UDF, TorchVariable, etc) depending on the value of `func`
- args: List[VariableTracker] - note, not ListVariableTracker!
- keywords: Dict[str, VariableTracker]

A major benefit of this structure is that it very elegantly matches the args to `call_function`.

We then compose a FunctoolsPartialVariable from the VariableTrackers made above.

**2) Functools partials created within compile**
In this case, we already have all the args as known VTs, and thus just compose a FunctoolsPartialVariable as we do for case (1).

For both (1) and (2), we propagate all guards from the func, args, and keyword VTs to the FunctoolsPartialVariable.
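A minimal sketch of the behavior this enables (the function names below are illustrative, not from the PR): a functools.partial passed in as an input and one created inside the compiled region should both trace without a graph break.

```python
import functools
import torch

def add(x, y, *, scale):
    return (x + y) * scale

# Case (1): the partial arrives as an input, so it has a source and its
# func/args/keywords are rebuilt through VariableBuilder.
partial_add = functools.partial(add, scale=2.0)

@torch.compile(backend="eager")
def call_input_partial(fn, x, y):
    return fn(x, y)

# Case (2): the partial is created inside the compiled region from
# already-known VariableTrackers.
@torch.compile(backend="eager")
def call_local_partial(x, y):
    fn = functools.partial(add, y, scale=0.5)
    return fn(x)

x, y = torch.randn(3), torch.randn(3)
print(call_input_partial(partial_add, x, y))
print(call_local_partial(x, y))
```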

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108846
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-09-09 17:25:02 +00:00
Animesh Jain
d4230e5574 reland [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108883)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108883
Approved by: https://github.com/voznesenskym, https://github.com/huydhn
2023-09-09 03:12:31 +00:00
PyTorch MergeBot
72f24d0001 Revert "[dynamo][finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108528)"
This reverts commit 34bb74c4cf.

Reverted https://github.com/pytorch/pytorch/pull/108528 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it has some nasty merge conflicts after the revert of D48910794. I need to revert this so the conflict could be resolved. Please help rebase this tomorrow and reland the change ([comment](https://github.com/pytorch/pytorch/pull/108528#issuecomment-1711034781))
2023-09-08 03:49:41 +00:00
youkaichao
f90444cf0b [Dynamo][Test]Add a testcase for module with training state (#108750)
Add the problem mentioned in https://github.com/pytorch/pytorch/issues/105653 to the tests. The issue has been addressed by https://github.com/pytorch/pytorch/pull/108528.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108750
Approved by: https://github.com/anijain2305
2023-09-08 02:39:42 +00:00
Zhengxu Chen
c75aec90d3 [dynamo] Record nn_module_stack also for unspecialized nn modules. (#108281)
Summary: Currently the node metadata "nn_module_stack" is only used by export. For some exported models, we still want to retain nn_module_stack for unspecialized modules for various purposes. This diff adds a path to also record nn_module_stack when an unspecialized module has a source available.

Test Plan: test_export_nn_module_stack_patched_module

Differential Revision: D48841193

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108281
Approved by: https://github.com/yanboliang, https://github.com/tugsbayasgalan
2023-09-07 15:38:46 +00:00
Animesh Jain
34bb74c4cf [dynamo][finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108528)
**This PR is a 99% copy-paste of Sam Gross's** (@colesbury) work at https://github.com/pytorch/pytorch/pull/100642. Copied from there:

--------
The NN_MODULE guard now subsumes guards on Module attributes. The check_fn will fail if module attributes (such as Module.training) are changed, if parameters, submodules, or buffers are added or removed, or if fields are changed on the type itself.

This gives up specificity in the guard check -- if any field is changed the check_fn fails -- for faster overall checks.
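A hedged illustration of the observable effect (module names are made up for this sketch): flipping a guarded attribute such as `training` should fail the check_fn and trigger a recompile instead of reusing a stale graph.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        # Behavior depends on state now covered by the NN_MODULE guard.
        if self.training:
            return self.linear(x) * 2
        return self.linear(x)

net = Net()
compiled = torch.compile(net, backend="eager")
x = torch.randn(2, 4)

out_train = compiled(x)  # traced with training=True
net.eval()               # changes Module.training, so the guard check fails
out_eval = compiled(x)   # expect a recompile rather than a stale graph
```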

-----

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108528
Approved by: https://github.com/ezyang
2023-09-07 01:45:47 +00:00
Jason Ansel
6d61d74545 [dynamo] Fix setattr nn.Module with new attribute (#108098)
This is one (but not all) of the issues in DALLE2_pytorch.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108098
Approved by: https://github.com/eellison
ghstack dependencies: #108096, #108087
2023-08-29 02:58:48 +00:00
Animesh Jain
9d2ffc5dfa [reland][Dynamo] cache_size policy #107496 (#108069)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108069
Approved by: https://github.com/yanboliang
2023-08-28 22:06:54 +00:00
PyTorch MergeBot
b4c6c4da88 Revert "[Dynamo] cache_size policy (#107496)"
This reverts commit 4175a6e944.

Reverted https://github.com/pytorch/pytorch/pull/107496 on behalf of https://github.com/ZainRizvi due to Breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/107496#issuecomment-1693590121))
2023-08-25 16:07:14 +00:00
Animesh Jain
4175a6e944 [Dynamo] cache_size policy (#107496)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107496
Approved by: https://github.com/ezyang
ghstack dependencies: #107645
2023-08-24 21:50:00 +00:00
Wanchao Liang
9c2b4a35a3 [dtensor] group all dynamo tests together (#107487)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107487
Approved by: https://github.com/fduwjj
ghstack dependencies: #107472, #107473
2023-08-21 23:56:00 +00:00
Jason Lu
bc88028e8e Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743)
Summary:
Original commit changeset: 81319beb97f3

Original Phabricator Diff: D47961182

Test Plan: revert to maintain backward compat with legacy ads_dper3 production package. Read details in: S357822

Reviewed By: atuljangra

Differential Revision: D48131623

@diff-train-skip-merge
(D48131623 landed internally)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106743
Approved by: https://github.com/malfet
2023-08-08 15:27:34 +00:00
Mikayla Gawarecki
d8e5f2aa6d Reland "Make adding buffers more like adding parameters (#104069)" (#106224)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106224
Approved by: https://github.com/atalman, https://github.com/albanD
2023-07-31 17:18:56 +00:00
Michael Voznesensky
8549abc347 Grab bag of DTensor enablement stuff (Enable whole graph capture for DTensor) (#105787)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105787
Approved by: https://github.com/ezyang
2023-07-30 00:17:45 +00:00
Elias Ellison
76a2ec49d7 [Dynamo] Ignore no-op tensor assignment (#106092)
Ignore no-op `self.attr = self.attr` on NN Modules when attr is a Tensor attribute.

This comes from a [llama pattern](https://github.com/pytorch/benchmark/blob/main/torchbenchmark/models/llama/model.py#L121-L122). Normally, when a setattr occurs on an nn.Module, we turn it into an `UnspecializedNNModuleVariable`, which prevents static buffers and parameters. In a subsequent PR I will add support for cudagraph mutation of buffers/params, which with this PR takes llama from 1.6x to 4.4x in inference.
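A small sketch of the pattern being ignored (a hypothetical module mirroring the llama-style cache assignment):

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self):
        super().__init__()
        # Buffers that llama-style code reassigns to themselves in forward.
        self.register_buffer("cache_k", torch.zeros(8, 16))
        self.register_buffer("cache_v", torch.zeros(8, 16))

    def forward(self, x):
        # No-op assignments: with this PR, dynamo ignores them instead of
        # demoting the module to UnspecializedNNModuleVariable.
        self.cache_k = self.cache_k
        self.cache_v = self.cache_v
        return x @ self.cache_k.t()

m = Attention()
print(torch.compile(m, backend="eager")(torch.randn(2, 16)).shape)
```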

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106092
Approved by: https://github.com/yanboliang
2023-07-28 17:16:19 +00:00
Edward Z. Yang
7b9d250f06 Change _dynamo.export to be export(f)(*args, **kwargs) (#106109)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
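A hedged usage sketch of the new calling convention (how the result is unpacked below is an assumption, not taken from the PR):

```python
import torch
import torch._dynamo

def f(x):
    return x.sin() + x.cos()

# New convention: export(f) returns a callable that is then invoked with the
# example inputs, i.e. export(f)(*args, **kwargs), instead of export(f, *args).
result = torch._dynamo.export(f)(torch.randn(3))
graph_module = result[0]  # assumption: the first element is the captured fx.GraphModule
graph_module.print_readable()
```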

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106109
Approved by: https://github.com/voznesenskym
2023-07-27 21:41:13 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Andrey Talman
c6653b65d8 Back out "Make adding buffers more like adding parameters (#104069)" (#105581)
Summary:
D47537831 is breaking pyper tests: https://fb.workplace.com/groups/802176577445480/posts/1018902842439518/

with `TypeError: register_buffer() takes 3 positional arguments but 4 were given`

Original commit changeset: d4b4069fbd38

Original Phabricator Diff: D47537831

Test Plan:
```
buck2 run //caffe2/torch/fb/training_toolkit/integration_tests/training_lifecycle/cogwheel_tests/pyper_release_v2:cogwheel_smallworld_inline_cvr_infer_pyper_pyper__canary_offline_training-launcher -- --run-harness-in-tupperware --build-fbpkg ads_dper3 --build-fbpkg training_platform
```

Reviewed By: atalman

Differential Revision: D47600140

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105581
Approved by: https://github.com/mikaylagawarecki
2023-07-20 03:39:53 +00:00
Justin Chu
8a688277a2 [BE] Enable ruff's UP rules and autoformat dynamo / functorch and refs (#105432)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105432
Approved by: https://github.com/ezyang
2023-07-19 13:48:44 +00:00
ekamiti
32d422f335 Make adding buffers more like adding parameters (#104069)
Add semantics for creating a buffer object similar to creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same, as the `register_buffer` method has not been changed. The `persistent` parameter on the `Buffer` type indicates whether the buffer should be persistent or not. The other non-test changes get the new `Buffer` type recognized by inductor and dynamo. The remaining changes are test changes to make sure that the `Buffer` type can be used as a drop-in replacement for `register_buffer`, as it just leads to `register_buffer` being called. This new functionality still allows normal tensors to be used as buffers, so these changes are intended to be backwards compatible.

Fixes #35735
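A hedged sketch of the new surface (assuming the class is exposed as torch.nn.Buffer, per the description above; treat this as illustrative rather than the PR's exact API):

```python
import torch
import torch.nn as nn
from torch.nn import Buffer  # assumption: the PR exports Buffer from torch.nn

class Norm(nn.Module):
    def __init__(self):
        super().__init__()
        # Existing style: explicit registration.
        self.register_buffer("running_mean", torch.zeros(4))
        # New style: assigning a Buffer, analogous to assigning a Parameter.
        self.running_var = Buffer(torch.ones(4), persistent=False)

    def forward(self, x):
        return (x - self.running_mean) / self.running_var.sqrt()

m = Norm()
print(dict(m.named_buffers()))  # both buffers end up registered
```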

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104069
Approved by: https://github.com/mikaylagawarecki
2023-07-17 17:59:05 +00:00
Aaron Gokaslan
2f95a3d0fc [BE]: Apply ruff PERF fixes to torch (#104917)
Applies automated ruff fixes for the PERF rules and enables all of the automatic ones. I also updated ruff, which applied some additional fixes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104917
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-07-11 20:45:21 +00:00
Danni Li
db4aed6a03 Include nn.ParameterDict in dynamo __getitem__ (#99771)
Summary:

Fix: #99735
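A hedged sketch of the now-supported pattern (module names are illustrative):

```python
import torch
import torch.nn as nn

class ScaleByKey(nn.Module):
    def __init__(self):
        super().__init__()
        self.scales = nn.ParameterDict({
            "a": nn.Parameter(torch.ones(4)),
            "b": nn.Parameter(2 * torch.ones(4)),
        })

    def forward(self, x):
        # __getitem__ on nn.ParameterDict inside a compiled region (#99735).
        return x * self.scales["a"] + self.scales["b"]

print(torch.compile(ScaleByKey(), backend="eager")(torch.randn(2, 4)))
```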

Test Plan: Please see GitHub tests.

Differential Revision: D45200616

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99771
Approved by: https://github.com/Skylion007, https://github.com/anijain2305
2023-07-11 08:19:04 +00:00
Yanbo Liang
1be1f5090e [Dynamo] Fix broken NNModule comparison (#103812)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103812
Approved by: https://github.com/msaroufim
2023-06-20 04:01:24 +00:00
Edward Z. Yang
9946499228 Continue simplifying dynamic shapes tests (#103592)
Remove the static by default / no automatic dynamic configuration as this is about to become the default.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103592
Approved by: https://github.com/voznesenskym, https://github.com/Skylion007
2023-06-14 19:35:51 +00:00
Mark Saroufim
790f5732f6 Fix Graph Break on builtin comparison on NNModule (#103176)
Fixes https://github.com/pytorch/pytorch/issues/102338

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103176
Approved by: https://github.com/anijain2305
2023-06-07 22:51:43 +00:00
Will Feng
61736679cd [Dynamo] No graph break for super(MyConv{1/2/3}d, self).forward and super(MyConvTranspose, self).forward (#102509)
Before the PR, running super(MyConv1d, self).forward or super(MyConvTranspose, self).forward makes dynamo create a graph break when executing NNModuleVariable.call_method and raise an unimplemented error for name=_conv_forward / _output_padding. See the issue for full detail: https://github.com/pytorch/pytorch/issues/101155

After the PR, for torch.nn.conv modules with the function name _conv_forward / _output_padding, we inline the function with tx.inline_user_function_return.

Code refactor: added NNModuleVariable._inline_user_function_return_helper to consolidate tx.inline_user_function_return into one place and keep the code DRY. After the refactor, there are two unconsolidated inline_user_function_return calls with different ```fn``` and ```source``` logic; the code is still DRY. For local testing, they are covered by test_modulelist, test_moduledict, test_conv_call_super_forward_directly and test_conv_transpose_call_super_forward_directly in test_modules.py
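A hedged sketch of the pattern this unblocks (class and tensor shapes are illustrative):

```python
import torch
import torch.nn as nn

class MyConv1d(nn.Conv1d):
    def forward(self, x):
        # nn.Conv1d.forward internally calls _conv_forward; before this PR the
        # explicit super().forward call graph-broke, now it is inlined.
        return super().forward(x) + 1.0

conv = MyConv1d(3, 8, kernel_size=3, padding=1)

@torch.compile(backend="eager")
def run(x):
    return conv(x)

print(run(torch.randn(2, 3, 16)).shape)
```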

Differential Revision: [D46494460](https://our.internmc.facebook.com/intern/diff/D46494460)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102509
Approved by: https://github.com/yanboliang
2023-06-06 22:01:17 +00:00
Ali Moezzi
719584600b Merge original module attributes with attributes assigned by __setattr__ (#102910)
Fixes https://github.com/pytorch/pytorch/issues/94478 @davidberard98

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102910
Approved by: https://github.com/Skylion007, https://github.com/Neilblaze, https://github.com/davidberard98
2023-06-05 19:14:07 +00:00
David Berard
c36d235db0 Revert "implement __dir__ for dynamo (#102480)" (#102766)
This reverts commit b02f48b181.

If a user does this:

```
mod = torch.compile(mod)
mod.is_compiled = True
assert "is_compiled" in dir(mod)
```

it will fail after #102480.

Differential Revision: [D46368712](https://our.internmc.facebook.com/intern/diff/D46368712)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102766
Approved by: https://github.com/msaroufim
2023-06-02 19:40:44 +00:00
ALi
b02f48b181 implement __dir__ for dynamo (#102480)
Fixes #94478: modules' attributes are not included when `__dir__` is called on the optimized module.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102480
Approved by: https://github.com/msaroufim
2023-05-30 18:46:10 +00:00
Wanchao Liang
c1db235040 [dynamo] fix module buffers call (#102251)
This PR fixes the module.buffers() call, handling it the same way as module.parameters().

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102251
Approved by: https://github.com/wconstab
2023-05-25 21:26:09 +00:00
Animesh Jain
7a17e9d0b6 [dynamo] Bugfix for unspecialized nn module variable (#101859)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101859
Approved by: https://github.com/yanboliang, https://github.com/shingjan
2023-05-20 00:46:56 +00:00
Yanbo Liang
d855b6aed6 [Dynamo] Add unit test for explicitly calling __call__ (#100146)
@wconstab As we discussed last Friday, I added a unit test for explicitly calling __call__ and added a comment to explain why we redirect ```UserMethodVariable.call_function``` to ```NNModuleVariable.call_method``` for a certain case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100146
Approved by: https://github.com/wconstab
2023-04-27 15:47:11 +00:00
Yanbo Liang
2989d6c93d [Dynamo] Fix constructing lazy submodule inside of lazy module's initialize_parameters (#100047)
This PR fixed two issues:
* Constructing a lazy submodule inside of the lazy module's ```initialize_parameters``` - don't unspecialize the module if it's lazy.
* Fixes #100001

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100047
Approved by: https://github.com/jansel
2023-04-26 23:36:31 +00:00
David Berard
d976df49c5 [dynamo] don't use LazyModuleMixin.cls_to_become if it is None (#99943)
**TL;DR**: This PR fixes handling for lazy modules where `cls_to_become is None`. In those cases, we should leave the type of the lazy module as the old value.

**Details**:
Lazy modules are intended to be initialized at execution; some of them are also supposed to switch to a different type after they have been initialized. However, not all are supposed to switch; see this logic from `nn/modules/lazy.py`

```python
    def _infer_parameters(self, ...):
        ...
        if module.cls_to_become is not None:
            module.__class__ = module.cls_to_become
```

i.e., we should leave the module type as the old value if `module.cls_to_become is None`. This PR updates dynamo's handling to match this behavior.

Test `test_lazy_module_no_cls_to_become` added to `test/dynamo/test_module.py`.

Differential Revision: [D45253698](https://our.internmc.facebook.com/intern/diff/D45253698)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99943
Approved by: https://github.com/jansel
2023-04-25 21:34:11 +00:00
Aaron Gokaslan
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.jit allow simple generator expressions, which lets us enable rules that replace unnecessary list comprehensions with generators in any/all. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.
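The kind of rewrite this rule performs (a small illustrative example):

```python
values = [3, -1, 2]

# Before: builds a full list just to feed any().
has_negative = any([x < 0 for x in values])

# After (C419): a generator expression short-circuits on the first match.
has_negative = any(x < 0 for x in values)
print(has_negative)
```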

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00
Michael Voznesensky
04f7a2a5e1 Support module dict iter (#99503)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99503
Approved by: https://github.com/Chillee, https://github.com/jansel
2023-04-19 21:54:35 +00:00
Will Constable
e6aa8e0729 Test and document dynamo backward hooks support (#99382)
No new support added, but backward hooks are working, and now there is a test and some documentation about the limitations (hooks fire after the whole graph runs).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99382
Approved by: https://github.com/yanboliang
2023-04-18 03:03:29 +00:00
Yanbo Liang
05809c7d3b [Dynamo] No graph break for explicit calling Conv{1/2/3}d.forward & ConvTranspose{1/2/3}d.forward (#99015)
Before this PR, if users call ```Conv2d(x)```, dynamo handles it well (no graph break) and puts a ```call_module``` op in the FX graph. However, if users explicitly call ```Conv2d.forward(x)``` in another ```forward``` function, the inlining would fail (causing a graph break). This PR fixes the issue by translating the explicit ```Conv2d.forward(x)``` to ```Conv2d(x)```.
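A hedged sketch of the translated pattern (module name and shapes are illustrative):

```python
import torch
import torch.nn as nn

class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        # Explicit .forward call; dynamo now treats it like self.conv(x)
        # instead of graph-breaking.
        return self.conv.forward(x)

compiled = torch.compile(Wrapper(), backend="eager")
print(compiled(torch.randn(1, 3, 32, 32)).shape)
```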

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99015
Approved by: https://github.com/jansel, https://github.com/wconstab
2023-04-15 08:04:13 +00:00
Will Constable
6eab5e88c8 Graph-break on allowed modules if they have hooks (#97184)
Allowed modules are stuck into dynamo's fx graph as call_module
nodes, without dynamo doing any tracing of the module.  This means
during AOT trace time, hooks will fire during tracing when the
call_module is executed, but the hooks themselves will disappear
after that and not be present in the compiled program.
  (worse, if they performed any tensor operations, those would get
   traced so you could end up with part of the hook's functionality).

To circumvent this, there are two options for 'allowed modules' with hooks.
1) don't treat them as 'allowed' - trace into them
2) graph-break, so the module is no longer part of the dynamo trace at all

(1) will fail for users that opted into allowed modules because they know
    their module has problems being traced by dynamo.
(2) causes graph breaks on common modules such as nn.Linear, just because they
    are marked as 'allowed'.

It would help matters if we could differentiate between types of allowed modules
  (A) allowed to avoid overheads - used for common ops like nn.Linear
  (B) allowed to avoid dynamo graphbreaks caused by unsupported code

Ideally, we'd use method (1) for group (A) and (2) for (B).

For now, graph-break on all cases of allowed modules.
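A hedged illustration of the situation described above (the hook and function names are made up for this sketch): an 'allowed' module such as nn.Linear carrying a forward hook is now graph-broken around rather than emitted as a call_module node that would drop the hook.

```python
import torch
import torch.nn as nn

linear = nn.Linear(4, 4)

def scale_hook(module, inputs, output):
    # If this only ran at AOT trace time, its effect would be baked into (or
    # lost from) the compiled program rather than re-executed on every call.
    return output * 2

linear.register_forward_hook(scale_hook)

@torch.compile(backend="eager")
def fn(x):
    # With this PR, dynamo graph-breaks around the hooked allowed module.
    return linear(x)

print(fn(torch.randn(2, 4)))
```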

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97184
Approved by: https://github.com/jansel
2023-04-15 01:46:15 +00:00
Yanbo Liang
e20981bda9 [Dynamo] Fix Lazy Module initialization with constant arg (#98996)
Fixes a Meta-internal use case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98996
Approved by: https://github.com/williamwen42
2023-04-13 17:37:25 +00:00
Yanbo Liang
78ff7ca24a [Dynamo] Fix Sequential nn module with duplicated submodule (#98880)
Fixes #98852

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98880
Approved by: https://github.com/ngimel
2023-04-12 23:09:50 +00:00
Yanbo Liang
3b6a78ea87 [Dynamo] Lazy Module support list/tuple input (#98809)
Fixes a Meta-internal use case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98809
Approved by: https://github.com/wconstab
2023-04-11 20:38:04 +00:00
Will Constable
390c51bf87 Skip nnmodule hook guards by default (#98371)
This PR makes basic nn.Module forward hooks work by default, without any overhead. But it leaves silent correctness issues if users modify/remove their hooks later, so it also emits a warning.

- the usual case is to not use hooks, so avoid guard overhead here
- registering any hook before compile will trigger a warning about hook support
- registering a hook later (or removing one) requires user knowledge and opting in,
  currently this isn't warnable (but maybe we can observe compiled nnmodules to make it
  warnable).

Why skip hook guards by default instead of not tracing __call__/hooks by default?
- avoid having a mode flag that alters dynamo tracing behavior (harder to test both codepaths
  in CI with full coverage)
- the most basic hook usecase (registering a hook before compile, and never removing it)
  will work by default with this PR, while it would require enablement and incur overhead
  in the 'not tracing __call__' proposal.
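A short sketch of the supported usage (illustrative code, not from the PR): a hook registered before compile fires by default, while removing it afterwards is not guarded.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

net = Net()
# Supported case: register the hook before compiling and never remove it.
handle = net.register_forward_hook(lambda mod, inp, out: out + 1)

compiled = torch.compile(net, backend="eager")
x = torch.randn(2, 4)
print(compiled(x))  # hook fires

# Not guarded by default: the compiled graph may silently keep adding 1.
handle.remove()
print(compiled(x))
```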

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98371
Approved by: https://github.com/jansel
2023-04-07 15:10:51 +00:00
Matthias Reso
96595617b9 Support Modules with custom __getitem__ method through fallback (#97932)
This PR allows torch.compile to work on torch.nn.Module subclasses with custom __getitem__ methods by falling back to Python for the __getitem__ call.

Fixes #97720
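An illustrative module with a custom __getitem__ (names are made up); per this PR, compilation proceeds and the indexing falls back to Python instead of erroring:

```python
import torch
import torch.nn as nn

class Container(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(4, 4) for _ in range(3)])

    def __getitem__(self, idx):
        # Custom indexing logic dynamo cannot model directly.
        return self.blocks[idx % len(self.blocks)]

    def forward(self, x):
        return self[1](x)

print(torch.compile(Container(), backend="eager")(torch.randn(2, 4)).shape)
```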

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97932
Approved by: https://github.com/yanboliang
2023-04-04 20:42:17 +00:00
Michael Voznesensky
b1e60bfb6a Pass f_locals as a dict rather than kwargs (#98107)
Fixes https://github.com/pytorch/pytorch/issues/97688

One big problem is that instead of printing x < y we now print
`E["x"] < E["y"]` and now all of the tests wobbled and I'm mad.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98107
Approved by: https://github.com/ezyang
2023-04-04 00:30:08 +00:00
Yanbo Liang
a6bd21d935 [Dynamo] Eagerly initializing Lazy Module to reduce graph breaks (#97946)
Fixes a Meta-internal use case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97946
Approved by: https://github.com/wconstab
2023-04-03 22:24:43 +00:00
Will Constable
f4ac8e0052 Add dynamo config skip_nnmodule_hook_guards (#97830)
This lets users that are sure they won't use hooks avoid overhead
related to dynamo guards on (assumedly) empty hook dicts on all
nn modules.

Only enable this flag if you are sure you won't change hook-behavior
after compiling.  It is ok to register a hook and then compile, if
you promise never to remove/alter the hook.  It is also ok to
not register a hook and compile, if you never register a hook later.

Note- this is not the best we can do, and hopefully in the future
we can avoid the need for this option following some of these paths
- make guards fast enough to not be an issue when guarding on hook
  dicts
- make a mode where dynamo actually skips tracing __call__ so
  hooks are consistently ignored by compiled programs
- use nnmodule versioning so hook changes can be guarded without
  explicit hook dict guards
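A minimal usage sketch of the flag (the config name comes from this PR's title; the surrounding code is illustrative):

```python
import torch
import torch._dynamo as dynamo

# Opt out of per-module hook-dict guards; only safe if hooks are never added,
# removed, or altered after compilation.
dynamo.config.skip_nnmodule_hook_guards = True

model = torch.nn.Linear(8, 8)
compiled = torch.compile(model, backend="eager")
print(compiled(torch.randn(2, 8)).shape)
```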

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97830
Approved by: https://github.com/jansel
2023-03-29 04:25:27 +00:00