| Name | Last commit message | Last commit date |
| --- | --- | --- |
| codegen | Emit torch.cuda.synchronize() after every kernel call in inductor (#90472) | 2022-12-12 04:35:10 +00:00 |
| triton_ops | [Dynamo] Fix llvm target for meta schedule & add torch to tvm ndarray helper func (#90214) | 2022-12-07 19:23:56 +00:00 |
| __init__.py | | |
| codecache.py | Revert "Dynamo, FX, Inductor Progress Bars (#88384)" | 2022-12-09 16:32:25 +00:00 |
| compile_fx.py | Always compile tiny graphs with AOTAutograd (#89775) | 2022-12-08 03:41:29 +00:00 |
| config.py | Emit torch.cuda.synchronize() after every kernel call in inductor (#90472) | 2022-12-12 04:35:10 +00:00 |
| cuda_properties.py | Fix TorchInductor benchmarking in fbcode (#88689) | 2022-11-09 18:13:06 +00:00 |
| debug.py | [pt2] Reset dynamo log level when exiting inductor debug context (#90473) | 2022-12-12 04:39:37 +00:00 |
| decomposition.py | [Inductor] GEMM Shape Padding Optimization (#90425) | 2022-12-09 22:48:02 +00:00 |
| dependencies.py | Added utility to count memory reads/written in Inductor (#89203) | 2022-11-19 04:18:26 +00:00 |
| exc.py | | |
| graph.py | Revert "Dynamo, FX, Inductor Progress Bars (#88384)" | 2022-12-09 16:32:25 +00:00 |
| ir.py | Revert "Dynamo, FX, Inductor Progress Bars (#88384)" | 2022-12-09 16:32:25 +00:00 |
| lowering.py | [inductor][Reland] Use decomposition for _to_copy (#90494) | 2022-12-09 16:51:50 +00:00 |
| metrics.py | [Inductor] Add test for Scheduler fusions (#90014) | 2022-12-07 01:33:25 +00:00 |
| overrides.py | inductor(CPU): add Conv+binary+unary fusion filter (#90259) | 2022-12-12 06:04:55 +00:00 |
| scheduler.py | Emit torch.cuda.synchronize() after every kernel call in inductor (#90472) | 2022-12-12 04:35:10 +00:00 |
| sizevars.py | Keep track of source name on all allocated SymInts (#90295) | 2022-12-10 13:17:34 +00:00 |
| utils.py | pad low precision matmuls when requested (#90235) | 2022-12-06 04:13:24 +00:00 |
| virtualized.py | Revert "Dynamo, FX, Inductor Progress Bars (#88384)" | 2022-12-09 16:32:25 +00:00 |