PyTorch MergeBot
c916a8efc5
Revert "Use the device interface for detecting Triton availability ( #139171 )"
This reverts commit 940b60db97.
Reverted https://github.com/pytorch/pytorch/pull/139171 on behalf of https://github.com/ZainRizvi due to Sorry but this is breaking internally. @jansel can you please help get these changes working? See D70946254 for more details. To validate the fixes internally, you can follow the instructions here: https://fburl.com/fixing-ghfirst-reverts ([comment](https://github.com/pytorch/pytorch/pull/139171#issuecomment-2715392451))
2025-03-11 18:49:21 +00:00
George White
940b60db97
Use the device interface for detecting Triton availability (#139171)
This allows each device type to check current devices for Triton compatibility and ensure its Triton backend is present.
This PR replaces the `has_triton()` global method, which was previously used for this task, and moves the initial check for each Inductor backend onto its associated `BaseScheduler` subclass. This means that other backends, such as Halide, can also implement their own availability checks.
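For illustration, a minimal sketch of the per-device pattern described above (the class layout and the `is_triton_capable` name are assumptions for illustration, not the PR's actual API):
```python
import torch


class DeviceInterface:
    """Hypothetical base class; each device type supplies its own check."""

    @staticmethod
    def is_triton_capable() -> bool:
        raise NotImplementedError


class CudaInterface(DeviceInterface):
    @staticmethod
    def is_triton_capable() -> bool:
        # No usable device means no Triton backend to run on.
        if not torch.cuda.is_available():
            return False
        # The Triton package itself must be importable.
        try:
            import triton  # noqa: F401
        except ImportError:
            return False
        # Triton's CUDA backend requires a reasonably modern architecture.
        major, _ = torch.cuda.get_device_capability()
        return major >= 7
```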
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139171
Approved by: https://github.com/jansel
2025-03-11 03:56:11 +00:00
Raymond Li
21c2565f35
Document dynamo (#146736)
Many files in dynamo currently lack file/module-level documentation, which makes it hard to know what they do at a glance without digging into the code. This fixes that.
Note: documentation was AI-generated and could be incorrect, please review carefully.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146736
Approved by: https://github.com/jansel, https://github.com/StrongerXi, https://github.com/anijain2305, https://github.com/zou3519
2025-02-13 00:02:21 +00:00
Aaron Orenstein
a79100ab11
PEP585 update - torch/_dynamo (#145105)
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145105
Approved by: https://github.com/bobrenjc93
2025-01-18 20:47:11 +00:00
Edward Z. Yang
c480a479b1
Make automatic_dynamic state live per CodeId, rather than on code object (#138740)
This is semantics-changing if you are dealing with multiple code objects which have exactly the same filename/firstlineno/name but are distinct objects and need non-aliasing automatic dynamic state. Otherwise, this should be equivalent (modulo lifetime). I want to do this because when I do PGO I can't index on code object identity; I need a stable identifier.
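A minimal sketch of the idea, assuming a frozen dataclass keyed on the triple named above (the field layout is an assumption):
```python
import types
from dataclasses import dataclass


@dataclass(frozen=True)
class CodeId:
    """Stable, hashable identifier for a code object's source location."""

    filename: str
    firstlineno: int
    name: str

    @classmethod
    def make(cls, code: types.CodeType) -> "CodeId":
        return cls(code.co_filename, code.co_firstlineno, code.co_name)


def f():
    pass


# Unlike id(f.__code__), this key is stable across distinct-but-identical
# code objects and can be used to index persistent (e.g. PGO) state.
assert CodeId.make(f.__code__) == CodeId.make(f.__code__)
```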
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138740
Approved by: https://github.com/bobrenjc93
ghstack dependencies: #138693, #138717
2024-10-27 03:08:41 +00:00
Bob Ren
d4cc2aaf1e
type _dynamo/logging.py (#136956)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136956
Approved by: https://github.com/Skylion007
2024-10-01 14:35:54 +00:00
Xuehai Pan
e74ba1b34a
[BE][Easy][15/19] enforce style for empty lines in import segments in torch/_d*/ (#129767)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.
You can review these PRs via:
```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129767
Approved by: https://github.com/anijain2305
2024-07-31 21:18:11 +00:00
Aaron Orenstein
dcfa7702c3
Flip default value for mypy disallow_untyped_defs [1/11] (#127838)
See #127836 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127838
Approved by: https://github.com/oulgen
2024-06-08 18:16:33 +00:00
Edward Z. Yang
8b95fb4eb8
Add stack trace to "start tracing" log (#118217)
When debugging problems on unfamiliar model code, I often want to know
"how did I end up in this compiled region." Printing the stack trace at
tracing start lets me find out this information.
Looks like this:
```
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing f /data/users/ezyang/c/pytorch/b.py:3
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] Stack (most recent call last):
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/b.py", line 9, in <module>
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] f(torch.randn(5))
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/eval_frame.py", line 437, in _fn
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] return fn(*args, **kwargs)
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/eval_frame.py", line 601, in catch_errors
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] return callback(frame, cache_entry, hooks, frame_state)
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/convert_frame.py", line 743, in _convert_frame
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] result = inner_convert(frame, cache_entry, hooks, frame_state)
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/convert_frame.py", line 386, in _convert_frame_assert
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] return _compile(
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/convert_frame.py", line 645, in _compile
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] guarded_code = compile_inner(code, one_graph, hooks, transform)
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/utils.py", line 248, in time_wrapper
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] r = func(*args, **kwargs)
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/convert_frame.py", line 526, in compile_inner
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] out_code = transform_code_object(code, transform)
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/bytecode_transformation.py", line 1033, in transform_code_object
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] transformations(instructions, code_options)
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/convert_frame.py", line 151, in _fn
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] return fn(*args, **kwargs)
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/convert_frame.py", line 473, in transform
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] tracer = InstructionTranslator(
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/symbolic_convert.py", line 2030, in __init__
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] _step_logger()(
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] File "/data/users/ezyang/c/pytorch/torch/_dynamo/logging.py", line 55, in log
[2024-01-24 12:07:11,819] [0/1] torch._dynamo.symbolic_convert: [INFO] logger.log(level, "Step %s: %s", step, msg, **kwargs)
```
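A minimal sketch of the technique using only the standard library (the helper name is hypothetical; per the trace above, the real change is emitted from `InstructionTranslator.__init__` via `_step_logger`):
```python
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("torch._dynamo.symbolic_convert")


def log_start_tracing(frame_desc: str) -> None:
    # Print where tracing begins plus the Python stack that led here, so an
    # unfamiliar model's compiled regions can be traced back to user code.
    log.info("Step 1: torchdynamo start tracing %s", frame_desc)
    log.info("Stack (most recent call last):\n%s", "".join(traceback.format_stack()))


log_start_tracing("f /data/users/ezyang/c/pytorch/b.py:3")
```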
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118217
Approved by: https://github.com/albanD
ghstack dependencies: #118215
2024-01-25 06:53:12 +00:00
Justin Chu
6e3cdcad08
Fix flake8 lint errors - part 2 - manual fixes (#99799)
### 🤖 Generated by Copilot at 8aef78f
### Summary
This pull request updates some logging calls to use old-style string formatting with `%s` placeholders instead of f-strings in `torch/_dynamo/logging.py`, `torch/_functorch/compilers.py`, and `torch/fx/passes/pass_manager.py` as part of a logging standardization effort. It also adds a `# noqa: F404` comment to the `import __future__` statement in `torch/overrides.py` to fix a flake8 warning.
### Walkthrough
* Standardize logging format and style to use old-style string formatting with `%s` placeholders instead of f-string syntax for performance and consistency ([link](https://github.com/pytorch/pytorch/pull/99799/files?diff=unified&w=0#diff-18807f7fd187b8bc8e69e93722566195b36d5bf269099b415a6f90b552228d6bL55-R55), [link](https://github.com/pytorch/pytorch/pull/99799/files?diff=unified&w=0#diff-fae8a66564055743ec031edb87eb22edeebf7fdebef9d21660d5e6a6252e5222L370-R373), [link](https://github.com/pytorch/pytorch/pull/99799/files?diff=unified&w=0#diff-5f3e37ded032f24e247dcf4a3be4b73ea0cf21382e342631742e5a04550202e1L72-R72))
* Suppress flake8 warning for `import __future__` statement in `torch/overrides.py` with `# noqa: F404` comment ([link](https://github.com/pytorch/pytorch/pull/99799/files?diff=unified&w=0#diff-4f601fe7f31e875ee4354882c0bb490bc35e51d3d413d058cc5fda3be8ca9f15L23-R23))
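For illustration, a generic example of the difference this standardization targets (not code from the PR):
```python
import logging

log = logging.getLogger(__name__)
expensive = {"weights": list(range(1000))}

# %s placeholders defer string interpolation until a handler actually emits
# the record, so a disabled DEBUG level costs almost nothing:
log.debug("state: %s", expensive)

# By contrast, an f-string builds the full message eagerly even when DEBUG
# is disabled; this is the pattern the PR removes:
log.debug(f"state: {expensive}")
```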
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99799
Approved by: https://github.com/Skylion007
2023-04-24 06:03:26 +00:00
Michael Lazos
ee9a9b7add
Remove old logging callsites (#98095)
Works around a ghfirst issue; OSS-only changes for https://github.com/pytorch/pytorch/pull/97182
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98095
Approved by: https://github.com/anijain2305
2023-04-01 00:57:37 +00:00
Michael Lazos
a1c46e5f8f
component-level configurable logging for dynamo, inductor, aot (#94858)
Summary:
Adds NNC-like logging that is configured through the env var `TORCH_LOGS`
Examples:
`TORCH_LOGS="dynamo,guards" python script.py` - prints dynamo logs at level INFO with guards of all functions that are compiled
`TORCH_LOGS="+dynamo,guards,graph" python script.py` - prints dynamo logs at level DEBUG with guards and graphs (in tabular) format of all graphs that are compiled
[More examples with full output](https://gist.github.com/mlazos/b17f474457308ce15e88c91721ac1cce )
Implementation:
The implementation parses the log settings from the environment, finds any components (aot, dynamo, inductor) or other loggable objects (guards, graph, etc.) and generates a log_state object. This object contains all of the enabled artifacts, and a qualified log name -> level mapping. _init_logs then adds handlers to the highest level logs (the registered logs), and sets any artifact loggers to level DEBUG if the artifact is enabled.
Note: set_logs is an alternative for manipulating the log_state, but if the environment contains TORCH_LOGS, the environment settings will be prioritized.
Adding a new log:
To add a new log, a dev should add their log name to torch._logging._registrations (there are examples there already).
Adding a new artifact:
To add a new artifact, a dev should add their artifact name to torch._logging._registrations as well.
Additionally, wherever the artifact is logged, `torch._logging.getArtifactLogger(__name__, <artifact_name>)` should be used instead of the standard logging implementation.
[design doc](https://docs.google.com/document/d/1ZRfTWKa8eaPq1AxaiHrq4ASTPouzzlPiuquSBEJYwS8/edit#)
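Putting the pieces together, a short usage sketch of the two entry points named above; `set_logs` and `getArtifactLogger` live in `torch._logging`, and "graph" is one of the registered artifact names (behavior when TORCH_LOGS is set is described in the note above):
```python
import logging
import torch

# Programmatic equivalent of TORCH_LOGS="+dynamo,guards,graph". If TORCH_LOGS
# is present in the environment, the environment settings take priority.
torch._logging.set_logs(dynamo=logging.DEBUG, guards=True, graph=True)

# Library code logs an artifact through its registered name rather than a
# plain module logger, so the artifact can be toggled independently:
graph_log = torch._logging.getArtifactLogger(__name__, "graph")
graph_log.debug("tabular graph output would go here")
```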
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94858
Approved by: https://github.com/ezyang
2023-03-18 04:17:31 +00:00
BowenBao
60a68477a6
Bump black version to 23.1.0 (#96578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96578
Approved by: https://github.com/ezyang
2023-03-15 06:27:59 +00:00
Avik Chaudhuri
178d2a38e0
debug shape guards (#95848)
Adds logging when shape guards are added and when symbols are specialized to constants.
Differential Revision: [D43719743](https://our.internmc.facebook.com/intern/diff/D43719743/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95848
Approved by: https://github.com/ezyang
2023-03-14 16:05:28 +00:00
Jason Ansel
45eadc2c4d
ConfigModule for _{dynamo,inductor}.config (#93252)
This refactors the way dynamo/inductor configs are handled to check for invalid configs and add options like patching and serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93252
Approved by: https://github.com/voznesenskym
2023-02-01 19:38:05 +00:00
Mark Saroufim
15af4b1cee
Dynamo, FX, Inductor Progress Bars (#88384)
There are 3 progress bars, each gated behind its own config, all off by default for now:
1. Dynamo: macro-level config covering dynamo, AOT, inductor
2. FX: a progress bar for each pass, showing its name
3. Inductor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88384
Approved by: https://github.com/wconstab, https://github.com/mlazos, https://github.com/malfet
2022-12-21 11:56:58 +00:00
Michael Lazos
730e44bbc7
Add logging for aot autograd and unified debug flag (#88987)
- Adds `log_level` to aot's config
- Outputs the log to `<graph_name>_<log_level>.log` in the aot_torchinductor subfolder of the debug directory
- Modifies the Inductor debug context to use the graph name when naming the folder instead of the OS pid
- Adds a `TORCH_COMPILE_DEBUG` flag to enable it (as well as separate dynamo and inductor logs)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88987
Approved by: https://github.com/Chillee
2022-12-09 17:28:10 +00:00
PyTorch MergeBot
6581063583
Revert "Dynamo, FX, Inductor Progress Bars ( #88384 )"
This reverts commit db0ce4acf3.
Reverted https://github.com/pytorch/pytorch/pull/88384 on behalf of https://github.com/malfet due to Broke test_public_bindings across the board
2022-12-09 16:32:25 +00:00
Mark Saroufim
db0ce4acf3
Dynamo, FX, Inductor Progress Bars (#88384)
There are 3 progress bars, each gated behind its own config, all off by default for now:
1. Dynamo: macro-level config covering dynamo, AOT, inductor
2. FX: a progress bar for each pass, showing its name
3. Inductor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88384
Approved by: https://github.com/wconstab, https://github.com/mlazos
2022-12-09 04:32:31 +00:00
William Wen
d224ac7f77
Remove logging.CODE (#90234)
Fixes https://github.com/pytorch/torchdynamo/issues/1932
Discussed with @mlazos: if we still want to separate streams for code logging and the rest of info, we can use a separate logger object with a unique name.
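A small sketch of that suggestion (the logger name is hypothetical):
```python
import logging

# Instead of a custom logging.CODE level, give code output its own uniquely
# named logger whose level and handlers are managed independently.
code_log = logging.getLogger("torch._dynamo.code")
code_log.setLevel(logging.INFO)
code_log.propagate = False  # keep code output out of the general info stream
code_log.addHandler(logging.StreamHandler())

code_log.info("def forward(self, x):\n    return x + 1")
```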
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90234
Approved by: https://github.com/ezyang
2022-12-06 22:24:43 +00:00
Jean Schmidt
f62e54df8f
Reland "Dynamo, FX, Inductor Progress Bars ( #88384 )" … ( #90055 )
This commit had an inconsistent internal land and PR merge. This caused merge conflicts that required reverting in both places, normalizing the internal commit stack, and then re-landing properly.
Original commit: #88384 (011452a2a1)
Inconsistent revert: #90018 (8566aa7c0b4bdca50bf85ca14705b4304de030b3)
Revert of the inconsistent revert to restore a healthy state (i.e. re-land of the original commit): cf3c3f2280
Landing the correct, internally congruent revert of the original commit: (this PR) #90055 (TBD)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90055
Approved by: https://github.com/DanilBaibak, https://github.com/malfet
2022-12-02 13:28:00 +00:00
PyTorch MergeBot
cf3c3f2280
Revert "Revert "Dynamo, FX, Inductor Progress Bars ( #88384 )" ( #90018 )"
This reverts commit bcf4292f04.
Reverted https://github.com/pytorch/pytorch/pull/90018 on behalf of https://github.com/jeanschmidt due to the landed internal commit not matching this one, causing a merge conflict and preventing importing and landing new commits
2022-12-02 09:57:31 +00:00
Eli Uriegas
bcf4292f04
Revert "Dynamo, FX, Inductor Progress Bars ( #88384 )" ( #90018 )
This breaks in environments that use the fake tqdm in torch/hub.py (015b05af18, line 26), which doesn't support the `desc` kwarg and is not iterable.
The original try using pytorchbot did not go through because of a merge conflict: https://github.com/pytorch/pytorch/pull/88384#issuecomment-1334272489
This reverts commit 011452a2a1.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90018
Approved by: https://github.com/drisspg, https://github.com/dbort
2022-12-01 20:17:07 +00:00
Mark Saroufim
011452a2a1
Dynamo, FX, Inductor Progress Bars (#88384)
There are 3 progress bars, each gated behind its own config, all off by default for now:
1. Dynamo: macro-level config covering dynamo, AOT, inductor
2. FX: a progress bar for each pass, showing its name
3. Inductor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88384
Approved by: https://github.com/wconstab, https://github.com/mlazos
2022-11-30 06:07:14 +00:00
William Wen
a605a30732
Fix CODE level usage in dynamo config.py (#87522)
Fixes https://github.com/pytorch/torchdynamo/issues/1718.
Tested by changing `log_level = logging.WARNING` in config.py to `log_level = logging.CODE` and running a test script that doesn't touch `log_level`.
cc @jansel @lezcano @fdrocha @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87522
Approved by: https://github.com/mlazos
2022-10-25 22:47:54 +00:00
Michael Suo
31e731e5ae
[dynamo] fix logging (#87239)
Currently, setting `torch._dynamo.config.log_level` doesn't do anything,
as the module name has changed during the move.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87239
Approved by: https://github.com/jansel, https://github.com/soumith, https://github.com/mlazos
2022-10-19 01:43:11 +00:00
Jason Ansel
054a2fd6c2
Sync changes from pytorch/torchdynamo (#87013)
This updates to:
6380959be2
Generated with:
https://github.com/pytorch/torchdynamo/blob/main/copy_to_core.sh
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87013
Approved by: https://github.com/voznesenskym
2022-10-15 21:00:57 +00:00
Jason Ansel
c7c09722ad
Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588
This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`
This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00