Commit Graph

67 Commits

Author SHA1 Message Date
James Wu
9668210302 Allow bypasses for Precompile when guards, etc. cannot be serialized (#160902)
This adds a new function `bypass_package` and a new method `CompilePackage.bypass_current_entry()`, which allow us to safely bypass models with unserializable or incompatible parts. When we encounter something incompatible, we raise a bypass and ignore that particular code in `DynamoCodeEntry`.
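
A rough usage sketch of the bypass flow (only `bypass_current_entry()` comes from this PR's description; `serialize_guards` and `UnserializableGuardError` are hypothetical names for illustration):

```python
def serialize_entry(package, code_entry):
    try:
        # May fail on guards that cannot be serialized.
        return serialize_guards(code_entry)
    except UnserializableGuardError:
        # Skip just this code entry instead of failing the whole package.
        package.bypass_current_entry()
        return None
```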

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160902
Approved by: https://github.com/zhxchen17
2025-08-21 18:20:42 +00:00
James Wu
90fd06be71 Various bugfixes for running NanoGPT training (#159166)
Fixes various small bugs when running nanogpt from torchbenchmark in OSS under Python 3.10. After these changes, the following now succeeds:

```
tlp python benchmarks/dynamo/torchbench.py --only nanogpt --performance --training --backend inductor --caching-precompile --warm-start-latency
```

Cold start: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp12LuZ5/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000

Warm start (we are investigating the recompile):
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpT5YTB2/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159166
Approved by: https://github.com/zhxchen17
2025-07-30 16:30:22 +00:00
zhxchen17
1d8cdf373b [dynamo] Guard serialization for NAME_MATCH (#152332)
Differential Revision: [D73780430](https://our.internmc.facebook.com/intern/diff/D73780430/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152332
Approved by: https://github.com/jansel
ghstack dependencies: #152325, #152326, #152327, #152328, #152329, #152330, #152331
2025-04-29 20:16:00 +00:00
Animesh Jain
eb9c127341 [dynamo][optimizers] Install ID_GUARDED tensors into the Fx graph (#147824)
Earlier, with the inlining flag, we were lifting id-guarded tensors to be inputs to the Fx graph. But this offers no benefit. The main idea behind lifting parameters as inputs was to reuse the compilation unit across many instances of the nn module. However, if we are guarding on the `id`, we are explicitly specializing the compiled artifact to the parameter.

This PR installs the parameters back into the graph. The benefit is the removal of all pre-graph bytecode that extracted the id-guarded tensors from locals/globals. This increases the speedup from 1.67x to 1.75x for an internal model that has a large number of optimizer parameters.
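
Conceptually, the change looks something like this (a simplified sketch of the idea, not actual Dynamo output):

```python
import torch

# Before: the id-guarded parameter was lifted to a graph input, so extra
# pre-graph bytecode had to fetch it from locals/globals on every call.
def forward_lifted(x, param):
    return x @ param

# After: the guard is on id(param), so the artifact is already specialized
# to this exact tensor; install it on the graph module and read it directly.
class ForwardInstalled(torch.nn.Module):
    def __init__(self, param):
        super().__init__()
        self.param = param  # parameter installed into the graph

    def forward(self, x):
        return x @ self.param
```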

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147824
Approved by: https://github.com/jansel

Co-authored-by: Jason Ansel <jansel@meta.com>
2025-02-28 03:22:11 +00:00
Xuehai Pan
3ce352e389 [BE][PYFMT] migrate PYFMT for torch._dynamo to ruff format (#144549)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144549
Approved by: https://github.com/jansel
2025-02-28 03:03:53 +00:00
Raymond Li
21c2565f35 Document dynamo (#146736)
Many files in dynamo currently lack file/module-level documentation, which makes it hard to know what they do at a glance without digging into the code. This fixes that.

Note: documentation was AI-generated and could be incorrect, please review carefully.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146736
Approved by: https://github.com/jansel, https://github.com/StrongerXi, https://github.com/anijain2305, https://github.com/zou3519
2025-02-13 00:02:21 +00:00
Michael Lazos
616ac94175 [Dynamo] Fix spammy optimizer warning (#146374)
Fixes https://discuss.pytorch.org/t/torch-compile-optimizer-step-generates-excessive-warning-messages/216067/7

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146374
Approved by: https://github.com/anijain2305
2025-02-05 01:03:49 +00:00
Aaron Orenstein
a79100ab11 PEP585 update - torch/_dynamo (#145105)
See #145101 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145105
Approved by: https://github.com/bobrenjc93
2025-01-18 20:47:11 +00:00
Animesh Jain
dec1a6d0f0 [dynamo] Separate out GetItemSource and DictGetItemSource (#143926)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143926
Approved by: https://github.com/jansel
2025-01-01 02:39:41 +00:00
Tom Ritchford
dc23f1944a Remove unused Python variables in torch/[_-a]* (#133492)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133492
Approved by: https://github.com/albanD
2024-12-12 17:39:14 +00:00
PyTorch MergeBot
5c97ac9721 Revert "Remove unused Python variables in torch/[_-a]* (#133492)"
This reverts commit fda975a7b3.

Reverted https://github.com/pytorch/pytorch/pull/133492 on behalf of https://github.com/clee2000 due to Sorry, I need to revert this in order to revert something else.  The only thing you need to do is rebase and remerge ([comment](https://github.com/pytorch/pytorch/pull/133492#issuecomment-2536635516))
2024-12-11 17:29:12 +00:00
Tom Ritchford
fda975a7b3 Remove unused Python variables in torch/[_-a]* (#133492)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133492
Approved by: https://github.com/albanD
2024-12-10 21:48:44 +00:00
Michael Lazos
ffd5197138 Ensure index for state guard construction is a source (#140515)
Fixes https://github.com/pytorch/pytorch/issues/140393

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140515
Approved by: https://github.com/anijain2305, https://github.com/vmoens
2024-11-15 22:02:50 +00:00
Tom Ritchford
e1c4548441 [dynamo] Simplify creation of VariableTrackers (#135714)
## `VariableTracker::build()` hides the Builders

### The problem

In the current code, creating a `VariableTracker` involves choosing one of two `Builder` classes and either calling a method or calling a constructor that creates an object you immediately call, [like this](083c9149b7/torch/_dynamo/variables/functions.py (L761-L768)).

Variations on this code are repeated in many places.

Moreover, the `Builder` classes have many dependencies, so they must be loaded late in the import process to avoid circular imports; as a result, they end up being repeatedly imported at local scope.

### The solution

In this commit, the import from `builder` and the logic of choosing and calling the Builder class are hidden behind a single static factory method, `VariableTracker.build()`, which is easier to reason about and to import.

This commit is a net reduction of over 150 lines of code, achieved by removing repetitive logic and unnecessary local imports.

**CHANGES:** Originally the static method was named `VariableTracker.create()`, but a static method on a derived class, `LazyVariableTracker.create()`, already exists with an irreconcilable signature, so the new static method was renamed to `VariableTracker.build()`.
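
A before/after sketch of a typical call site (based on the description above; the exact signatures may differ from the real code):

```python
from torch._dynamo.variables.base import VariableTracker

def wrap_before(tx, value, source=None):
    # Before: each call site imported the builders locally and chose one.
    from torch._dynamo.variables.builder import SourcelessBuilder, VariableBuilder

    if source is not None:
        return VariableBuilder(tx, source)(value)
    return SourcelessBuilder.create(tx, value)

def wrap_after(tx, value, source=None):
    # After: one factory method hides both the import and the choice.
    return VariableTracker.build(tx, value, source)
```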

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135714
Approved by: https://github.com/jansel
2024-10-18 09:36:46 +00:00
Michael Lazos
d50d5df2fb Add warning for non static grads in optimizer variable (#137554)
Fixes https://github.com/pytorch/pytorch/issues/112548

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137554
Approved by: https://github.com/williamwen42
2024-10-10 01:23:21 +00:00
IvanKobzarev
9581508383 [aotd] Cleanup on subclasses in inductor freezing (#136549)
Cleanup:
1/ We do not need to call unwrap_subclasses() in the freezing wrapper, as it will be wrapped by the AOTD wrappers, which include SubclassesWrapper.
2/ No need to use weak references for the unwrapped list; dynamo optimizers need to clean the unwrapped list along with the original params_flat.
Verified fbcode tests compiled_optimizers.

Differential Revision: [D63393651](https://our.internmc.facebook.com/intern/diff/D63393651)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136549
Approved by: https://github.com/bdhirsh
2024-09-27 11:20:03 +00:00
Li, Xingyuan
dcfa415e6e [Inductor UT] Reuse inductor UT for intel GPU test/inductor/test_compiled_optimizers.py (#133083)
[Inductor UT] Reuse the Inductor test case `test/inductor/test_compiled_optimizers.py` for Intel GPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133083
Approved by: https://github.com/etaf, https://github.com/jansel, https://github.com/mlazos
2024-08-17 01:15:26 +00:00
Animesh Jain
acad2050c1 [easy][dynamo] Add tx as an arg in getitem_const (#132899)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132899
Approved by: https://github.com/yanboliang
ghstack dependencies: #132806
2024-08-07 21:35:41 +00:00
Oguz Ulgen
6e79932543 Add basic mypy annotations to dynamo (#132415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132415
Approved by: https://github.com/XuehaiPan, https://github.com/jamesjwu
2024-08-04 18:43:36 +00:00
PyTorch MergeBot
3558a8cf4a Revert "Add basic mypy annotations to dynamo (#132415)"
This reverts commit 71e22e0959.

Reverted https://github.com/pytorch/pytorch/pull/132415 on behalf of https://github.com/ZainRizvi due to Sorry, this PR has entered a weird state in the diff train. Trying to revert it to skip it, and then we can try relanding it ([comment](https://github.com/pytorch/pytorch/pull/132415#issuecomment-2267631785))
2024-08-04 18:39:29 +00:00
Oguz Ulgen
71e22e0959 Add basic mypy annotations to dynamo (#132415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132415
Approved by: https://github.com/XuehaiPan, https://github.com/jamesjwu
2024-08-01 20:14:25 +00:00
Xuehai Pan
e74ba1b34a [BE][Easy][15/19] enforce style for empty lines in import segments in torch/_d*/ (#129767)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129767
Approved by: https://github.com/anijain2305
2024-07-31 21:18:11 +00:00
Oguz Ulgen
7a42470bcb Annotate all InstructionTranslator (#131509)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131509
Approved by: https://github.com/zou3519
2024-07-24 23:45:53 +00:00
PyTorch MergeBot
5db5865614 Revert "Annotate all InstructionTranslator (#131509)"
This reverts commit eafbd20f23.

Reverted https://github.com/pytorch/pytorch/pull/131509 on behalf of https://github.com/clee2000 due to sorry need to revert this to revert something else, I think you only need to rebase and remerge ([comment](https://github.com/pytorch/pytorch/pull/131509#issuecomment-2249000843))
2024-07-24 22:29:49 +00:00
Oguz Ulgen
eafbd20f23 Annotate all InstructionTranslator (#131509)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131509
Approved by: https://github.com/zou3519
2024-07-24 05:31:01 +00:00
Xuehai Pan
973037be6a [BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): list() / tuple() / dict() (#130199)
This PR changes empty collection factory calls to Python literals:

- `list()` -> `[]`
- `tuple()` -> `()`
- `dict()` -> `{}`

The Python literals are more performant and safer. For example, the bytecode for building an empty dictionary:

```bash
$ python3 -m dis - <<EOS
import collections

d1 = {}
d2 = dict()

dict = collections.OrderedDict
d3 = dict()
EOS
```

```text
  0           0 RESUME                   0

  1           2 LOAD_CONST               0 (0)
              4 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (collections)
              8 STORE_NAME               0 (collections)

  3          10 BUILD_MAP                0
             12 STORE_NAME               1 (d1)

  4          14 PUSH_NULL
             16 LOAD_NAME                2 (dict)
             18 CALL                     0
             26 STORE_NAME               3 (d2)

  6          28 LOAD_NAME                0 (collections)
             30 LOAD_ATTR                8 (OrderedDict)
             50 STORE_NAME               2 (dict)

  7          52 PUSH_NULL
             54 LOAD_NAME                2 (dict)
             56 CALL                     0
             64 STORE_NAME               5 (d3)
             66 RETURN_CONST             1 (None)
```

The dict literal `{}` needs only a single bytecode instruction, `BUILD_MAP`, while the factory call `dict()` needs three: `PUSH_NULL + LOAD_NAME + CALL`. Also, the factory call is not safe if users override the `dict` name in `locals` or `globals` (see the example above that replaces it with `OrderedDict`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130199
Approved by: https://github.com/malfet
2024-07-11 17:30:28 +00:00
Michael Lazos
e72ef4f22a Fix capturable enablement conditions (#125826)
Only enable capturable if state hasn't been initialized and all parameters are on CUDA.
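
A minimal sketch of that condition (illustrative names, not the actual implementation):

```python
def should_enable_capturable(optimizer):
    state_uninitialized = len(optimizer.state) == 0
    all_params_on_cuda = all(
        p.is_cuda for group in optimizer.param_groups for p in group["params"]
    )
    return state_uninitialized and all_params_on_cuda
```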

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125826
Approved by: https://github.com/anijain2305
ghstack dependencies: #125825
2024-05-11 06:29:59 +00:00
Aaron Gokaslan
1dd42e42c4 [BE]: Try TCH autofixes on torch/ (#125536)
Tries TCH autofixes to see what breaks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125536
Approved by: https://github.com/ezyang
2024-05-05 23:13:59 +00:00
laithsakka
0b70026d3b Do not pass none to has_pending_mutation (#125359)
Fixes https://github.com/pytorch/pytorch/issues/125315

Several failures when inlining nn modules is enabled are due to passing `None` to `has_pending_mutation`. Judging from the previous code, the variable is expected to be `None` when it is not found; in that case we should skip it and not call `has_pending_mutation`. This is tested in https://github.com/pytorch/pytorch/pull/125354.
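
A minimal sketch of the fix described above (names here are illustrative, not the actual Dynamo internals):

```python
def maybe_handle_mutation(side_effects, variable):
    # `variable` may be None when the lookup does not find it; skip it
    # rather than passing None into has_pending_mutation.
    if variable is not None and side_effects.has_pending_mutation(variable):
        ...  # handle the pending mutation as before
```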

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125359
Approved by: https://github.com/mlazos
2024-05-02 09:08:22 +00:00
Michael Lazos
5050e627dc Defer marking_static_address (#124309)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124309
Approved by: https://github.com/anijain2305
ghstack dependencies: #123324, #123404, #123405
2024-04-19 17:20:57 +00:00
Xuehai Pan
93e249969b [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261)
Remove useless parentheses in `raise` statements if the exception type is raised with no argument.
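
For example (ruff rule RSE102):

```python
# Before: the parentheses add nothing when no arguments are passed.
raise NotImplementedError()

# After the autofix:
raise NotImplementedError
```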

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124261
Approved by: https://github.com/albanD
2024-04-17 19:29:34 +00:00
Michael Lazos
bff321716c Remove special handling of step with closure (#123620)
Implements https://github.com/pytorch/pytorch/issues/123479

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123620
Approved by: https://github.com/anijain2305
ghstack dependencies: #123496, #123497, #123551, #123552, #123618
2024-04-09 21:15:24 +00:00
Michael Lazos
7c23fed12c Move step to cpu if state is already initialized (#123618)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123618
Approved by: https://github.com/anijain2305
ghstack dependencies: #123496, #123497, #123551, #123552
2024-04-09 09:04:18 +00:00
Michael Lazos
89e6292d48 Defer setting capturable in optimizer variable (#123497)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123497
Approved by: https://github.com/anijain2305
ghstack dependencies: #123496
2024-04-08 19:31:25 +00:00
PyTorch MergeBot
d9ac80f80c Revert "Defer setting capturable in optimizer variable (#123497)"
This reverts commit 76b290344f.

Reverted https://github.com/pytorch/pytorch/pull/123497 on behalf of https://github.com/PaliC due to seems to have broken distributed/fsdp/test_fsdp_hybrid_shard.py as per 26bf05ccac ([comment](https://github.com/pytorch/pytorch/pull/123496#issuecomment-2043251234))
2024-04-08 17:06:05 +00:00
Michael Lazos
76b290344f Defer setting capturable in optimizer variable (#123497)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123497
Approved by: https://github.com/anijain2305
ghstack dependencies: #123496
2024-04-08 08:34:19 +00:00
Michael Lazos
d9d25076fe Reduce guards of optimizer state dict to guard once per param group (#123413)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123413
Approved by: https://github.com/anijain2305
2024-04-06 00:12:59 +00:00
Michael Lazos
3e2b7e6052 [dynamo][guard overhead] Data ptr guard optimizer state tensors (#122858)
Stricter (but faster) guarding on optimizer state tensors
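
Conceptually, a data-pointer guard replaces a full `TENSOR_MATCH` check (dtype, shape, strides, etc.) with a single pointer comparison, roughly:

```python
# Illustrative only: the guard passes iff the state tensor is backed by
# the exact same storage that was seen at compile time.
assert state_tensor.data_ptr() == recorded_data_ptr
```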

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122858
Approved by: https://github.com/anijain2305
2024-04-03 21:42:06 +00:00
Animesh Jain
d91db70295 [dynamo][cpp-guards] Optimize tensor.grad accessor (#123226)
For the LayoutLM model, this reduces C++ guard overhead by 1.48x. These are the numbers:

![image](https://github.com/pytorch/pytorch/assets/13822661/25cfc35b-b67d-4903-8403-71fa931dacdd)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123226
Approved by: https://github.com/jansel
2024-04-03 05:32:13 +00:00
Animesh Jain
5b42c41b19 [dynamo][improve-guard-overhead] Skip TENSOR_MATCH guards on parameters for optimizers (#122647)
**1.32x guard overhead reduction** (1.092 ms vs 0.827 ms) for MegatronBertForCausalLM with 394 params.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122647
Approved by: https://github.com/jansel, https://github.com/mlazos
ghstack dependencies: #122646
2024-03-27 19:40:43 +00:00
Jason Ansel
477d154ffd [dynamo] Add missing _nonvar_fields annotations (#122219)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122219
Approved by: https://github.com/anijain2305
ghstack dependencies: #122218
2024-03-20 07:53:18 +00:00
Jason Ansel
46bf37b3f7 [dynamo] Replace VariableTracker.apply with visit/realize_all (#122218)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122218
Approved by: https://github.com/anijain2305
2024-03-20 07:53:18 +00:00
Jason Ansel
153a01833b [dynamo] Optimize SourcelessBuilder (#122063)
Improves `benchmarks/dynamo/microbenchmarks/dynamo_microbenchmarks.py`
from 2.7s to 2.5s.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122063
Approved by: https://github.com/anijain2305
ghstack dependencies: #122039, #122043, #122055, #122058, #122060
2024-03-19 04:23:30 +00:00
Michael Lazos
15abc56bd5 Graph break on step closure in optimizer (#121777)
Fixes https://github.com/pytorch/pytorch/issues/116494

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121777
Approved by: https://github.com/yanboliang
2024-03-14 03:18:23 +00:00
Michael Lazos
c5ef4df274 guard on grads being None in compiled optimizers (#121291)
Fixes #115607

We were missing guards when the grads were set to `None`. So if we compiled the optimizer with grads set to their proper values, and then again with the grads set to `None`, we'd continuously run the `None` version: all of its guards would pass, and it would be ordered before the correct version in the cache.
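
A hypothetical repro of that scenario:

```python
import torch

model = torch.nn.Linear(4, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
step = torch.compile(opt.step)

model(torch.randn(2, 4)).sum().backward()
step()                                 # compiled with grads present

opt.zero_grad(set_to_none=True)
step()                                 # recompiled with grads == None

model(torch.randn(2, 4)).sum().backward()
step()  # without a guard on `grad is None`, this kept matching the None
        # version first in the cache and silently skipped the update
```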

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121291
Approved by: https://github.com/Skylion007, https://github.com/anijain2305
2024-03-06 18:33:23 +00:00
lezcano
cdbc29e91a [dynamo,optim] Use the actual sources from the parameters when tracing "params" in an optimizer (#118535)
Fixes the unnecessary guards described at https://github.com/pytorch/pytorch/pull/117983#discussion_r1467622149

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118535
Approved by: https://github.com/mlazos
ghstack dependencies: #117982, #118098, #117983, #117625, #118194, #118003, #118208, #118199
2024-02-02 14:42:56 +00:00
lezcano
75a5c41921 [dynamo,optim] Place guards on the args before assuming they exist (#117983)
This enables the new way of writing guards for dicts. Before, we were
doing things like
```
  L['self'].param_groups[0][___dict_keys_getitem(L['self'].param_groups[0], 0)][3] is L['self'].param_groups[0]['params'][3]
```
without knowing whether `L['self'].param_groups[0][___dict_keys_getitem(L['self'].param_groups[0], 0)]` was a list.

On a different note, I'll probably write a pass to recover the previous
way to place guards on dicts via something like `DICT_KEYS`  as an
optimisation, as it seems relevant for optimisers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117983
Approved by: https://github.com/mlazos
ghstack dependencies: #117982, #118098
2024-02-02 14:37:46 +00:00
Edward Z. Yang
d03173e88c Unify MYPYINDUCTOR and MYPY (#118432)
The original motivation for MYPYINDUCTOR was a faster type checking configuration that only checked a subset of files. With the removal of `follow_imports = ignore`, we are now able to use dmypy to do fast incremental typechecking, eliminating the need for this.

Perhaps erroneously, when I teed up this PR I elected to delete the `follow_imports = skip` designations in the mypy-inductor.ini. This led to a number of extra type-error suppressions that I manually edited. You will need to review.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118432
Approved by: https://github.com/Skylion007
ghstack dependencies: #118414, #118418
2024-01-27 17:23:20 +00:00
rzou
5e0ef84b01 [dynamo] Refactor install_global_once, remove usages of install_global_unsafe (#118100)
We split install_global_once into two APIs:
- `install_global_by_id(prefix, value) -> name`: installs a global if it hasn't
been installed yet
- `install_global(prefix, value) -> name`: always installs the global (and
  generates a unique name for it)

Then, we refactor most call sites of `install_global_unsafe` to use one of
the above. Some call sites cannot be refactored because we create the
global name first, do a lot of stuff with it, and then install it.

This fixes more test flakiness.
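
A usage sketch of the two APIs (the methods are assumed here to live on Dynamo's output-graph object, and the prefixes are illustrative):

```python
# Same prefix and same value: installed once, same name on every call.
name1 = output_graph.install_global_by_id("__compiled_fn", fn)
name2 = output_graph.install_global_by_id("__compiled_fn", fn)
assert name1 == name2

# Always installs, generating a fresh unique name each time.
name3 = output_graph.install_global("__tmp_value", value)
name4 = output_graph.install_global("__tmp_value", value)
assert name3 != name4
```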

Test Plan:
- Existing tests; I can't reliably repro the flakiness
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118100
Approved by: https://github.com/ezyang, https://github.com/mlazos
2024-01-24 23:25:44 +00:00
Michael Lazos
f302a0d380 Re-enable SGD (#117434)
Re-enables the SGD optimizer now that compile times are more reasonable. [Benchmark run](https://github.com/pytorch/pytorch/actions/runs/7511073761)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117434
Approved by: https://github.com/anijain2305, https://github.com/janeyx99
2024-01-19 04:28:50 +00:00