pytorch/torch/_dynamo/optimizations
Michael Lazos 730e44bbc7 Add logging for aot autograd and unified debug flag (#88987)
- Adds `log_level` to AOT Autograd's config
- Writes the log to `<graph_name>_<log_level>.log` in the aot_torchinductor subfolder of the debug directory
- Modifies the Inductor debug context to use the graph name when naming the folder instead of the OS pid
- Adds a `TORCH_COMPILE_DEBUG` flag to enable it (as well as separate dynamo and inductor logs); a usage sketch follows the commit metadata below

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88987
Approved by: https://github.com/Chillee
2022-12-09 17:28:10 +00:00
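
A minimal sketch of exercising the `TORCH_COMPILE_DEBUG` flag this commit adds. The toy function, tensor shapes, and the use of `torch.compile` as the entry point are illustrative assumptions, not part of the PR:

```python
import os

# Assumption: set the flag before importing torch, since the dynamo/inductor
# config modules read environment variables at import time.
os.environ["TORCH_COMPILE_DEBUG"] = "1"

import torch

def fn(x):
    # A trivial function so compilation produces a small graph to log.
    return torch.relu(x).sum()

# torch.compile is used here for brevity; around the time of this commit the
# equivalent entry point was torch._dynamo.optimize("inductor").
compiled = torch.compile(fn)
compiled(torch.randn(8, 8))

# With the flag set, dynamo, aot_autograd, and inductor debug artifacts are
# written under a per-run debug directory (e.g. ./torch_compile_debug/),
# with folders named by graph name rather than OS pid per this commit.
```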
__init__.py
analysis.py Revert "Dynamo, FX, Inductor Progress Bars (#88384)" 2022-12-09 16:32:25 +00:00
backends.py [Dynamo] Fix llvm target for meta schedule & add torch to tvm ndarray helper func (#90214) 2022-12-07 19:23:56 +00:00
distributed.py Fix AssertionError fake_mode is not None in distributed (#90392) 2022-12-07 20:12:39 +00:00
inference.py
log_args.py Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] (#90039) 2022-12-05 01:56:50 +00:00
normalize.py
subgraph.py
torchxla_integration.py dynamo/torchxla integration: trace on xla rather than eager (#88904) 2022-11-22 03:57:04 +00:00
training.py Add logging for aot autograd and unified debug flag (#88987) 2022-12-09 17:28:10 +00:00