pytorch/torch/_inductor (latest commit: 2025-06-12 16:48:52 +00:00)
Name | Last commit message | Last commit date
autoheuristic/ | [BE][Ez]: Optimize unnecessary lambda with operator (#154722) | 2025-05-30 23:47:10 +00:00
codegen/ | [Inductor][CPU] Use AMX-based microkernels when M > 4 for GEMM template for INT4 weight (#155444) | 2025-06-12 02:28:48 +00:00
compile_worker/ | torch.compile: Supress stdout / stderr output from subprocesses when local (#153837) | 2025-05-22 05:49:43 +00:00
fx_passes/ | [inductor] handle -1 for pointless view pairs (#155295) | 2025-06-11 22:20:36 +00:00
kernel/ | unify symbolic_shapes and sizevars dynamic shapes APIs naming 1 (#154774) | 2025-06-12 16:11:55 +00:00
package/ | [export] Refactor pt2 save/load (#152495) | 2025-06-04 06:04:29 +00:00
runtime/ | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
__autotune_main__.py | |
__init__.py | Add optional device index to AOTIModelPackageLoader (#152093) | 2025-05-04 11:40:12 +00:00
analyze_preserves_zero_mask.py | |
aoti_eager.py | |
async_compile.py | Redo D75092426: [internal] Expose additional metadata to compilation callbacks (#155063) | 2025-06-05 23:40:31 +00:00
autotune_process.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
bounds.py | |
choices.py | |
codecache.py | [cutlass backend] Only consider to use re worker if nvcc doesn't exist (#155745) | 2025-06-12 15:23:52 +00:00
comm_analysis.py | |
comm_lowering.py | Revert "[inductor] Add typing to _inductor/ir.py (#149958)" | 2025-06-06 15:19:16 +00:00
comms.py | [PT2][comms] put visualize_overlap in a try-except block (#155222) | 2025-06-05 23:39:48 +00:00
compile_fx_async.py | |
compile_fx_ext.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
compile_fx_subproc.py | |
compile_fx.py | [inductor][invoke_subgraph] Mark invoke_subgraph outputs as user_visible to constrain output strides (#155395) | 2025-06-12 03:58:16 +00:00
compiler_bisector.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
config.py | inductor codecache: include private inductor configs in cache key (#153672) | 2025-06-11 01:33:24 +00:00
constant_folding.py | Add dont constant fold flag (#154945) | 2025-06-10 14:52:26 +00:00
cpp_builder.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
cpu_vec_isa.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
cudagraph_trees.py | Redo D75092426: [internal] Expose additional metadata to compilation callbacks (#155063) | 2025-06-05 23:40:31 +00:00
cudagraph_utils.py | |
custom_graph_pass.py | Revert "Custom FX pass for inductor's backend registration (#154841)" | 2025-06-09 16:56:45 +00:00
debug.py | [inductor][cutlass backend] Log prescreening elpase (#155508) | 2025-06-12 16:48:52 +00:00
decomposition.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
dependencies.py | Revert "[inductor] Add typing to _inductor/ir.py (#149958)" | 2025-06-06 15:19:16 +00:00
dtype_propagation.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
exc.py | |
extern_node_serializer.py | |
freezing_utils.py | |
freezing.py | [cudagraphs] Fix issue in collecting static_input_idxs (#152287) | 2025-04-30 03:24:05 +00:00
fuzzer.py | [AOTI][reland] Add an option to specify custom op C shim (#153968) | 2025-05-21 15:57:57 +00:00
fx_utils.py | Revert "Inductor logging + analysis of torch.profile (#149697)" | 2025-06-10 15:38:40 +00:00
graph.py | Update auto-tuning support for _scaled_grouped_mm (#150944) | 2025-06-11 19:12:52 +00:00
hooks.py | |
index_propagation.py | unify symbolic_shapes and sizevars dynamic shapes APIs naming 1 (#154774) | 2025-06-12 16:11:55 +00:00
inductor_prims.py | [inductor] lowering for fractional_max_pool3d (#148630) | 2025-05-22 16:06:29 +00:00
ir.py | unify symbolic_shapes and sizevars dynamic shapes APIs naming 1 (#154774) | 2025-06-12 16:11:55 +00:00
jagged_lowerings.py | Revert "[inductor] Add typing to _inductor/ir.py (#149958)" | 2025-06-06 15:19:16 +00:00
loop_body.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
lowering.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
memory.py | [PT2][memory] correct wait tensor output size (#153569) | 2025-06-04 17:49:25 +00:00
metrics.py | Replace runtime type parameterization (#155221) | 2025-06-05 21:43:54 +00:00
mkldnn_ir.py | Revert "[Inductor] Improve typing, and prepare for ABI-compatible AOTI C-shim dispatching (#154371)" | 2025-06-08 17:37:29 +00:00
mkldnn_lowerings.py | Revert "[inductor] Add typing to _inductor/ir.py (#149958)" | 2025-06-06 15:19:16 +00:00
mock_cache.py | |
ops_handler.py | Revert "[inductor] Add typing to _inductor/ir.py (#149958)" | 2025-06-06 15:19:16 +00:00
optimize_indexing.py | |
output_code.py | Reflect back mutation if we clone misaligned tensors (#154442) | 2025-05-29 13:36:48 +00:00
pattern_matcher.py | Migrate from lru_cache to cache (#155613) | 2025-06-11 19:44:18 +00:00
quantized_lowerings.py | [Inductor]Cleanup autotune_fallback_to_aten post-deprecation (#154331) | 2025-05-29 20:29:58 +00:00
remote_cache.py | [Indcutor Remote Cache] Raise an exception if redis module is required but not available (#151779) | 2025-04-26 11:21:54 +00:00
scheduler.py | Revert "Inductor logging + analysis of torch.profile (#149697)" | 2025-06-10 15:38:40 +00:00
script.ld | |
select_algorithm.py | [inductor][cutlass backend] Log prescreening elpase (#155508) | 2025-06-12 16:48:52 +00:00
sizevars.py | unify symbolic_shapes and sizevars dynamic shapes APIs naming 1 (#154774) | 2025-06-12 16:11:55 +00:00
standalone_compile.py | Add logging for guard miss failure (#153125) | 2025-05-09 16:51:04 +00:00
subgraph_lowering.py | |
template_heuristics.py | [Inductor] Add Additional Configs for persistent+TMA version of Triton mm and addmm (#150587) | 2025-04-23 18:21:35 +00:00
test_case.py | |
test_operators.py | |
tiling_utils.py | Turn on new tiling by default (#154768) | 2025-06-06 21:19:35 +00:00
triton_bundler.py | Keep raw cubin file around in case it gets deleted underneath us (#153064) | 2025-05-08 14:29:19 +00:00
utils.py | unify symbolic_shapes and sizevars dynamic shapes APIs naming 1 (#154774) | 2025-06-12 16:11:55 +00:00
virtualized.py | |
wrapper_benchmark.py | Revert "Inductor logging + analysis of torch.profile (#149697)" | 2025-06-10 15:38:40 +00:00
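Many entries above share the same latest commit, "Migrate from lru_cache to cache (#155613)". As a rough illustration only (hypothetical function name, not code from these files), such a migration presumably amounts to replacing functools.lru_cache(maxsize=None) with the equivalent functools.cache decorator available since Python 3.9:

```python
import functools

# Before (hypothetical example): an unbounded cache spelled the long way.
@functools.lru_cache(maxsize=None)
def _expensive_lookup_old(key: str) -> int:
    return len(key) * 2  # stand-in for a costly computation

# After: functools.cache is equivalent to lru_cache(maxsize=None),
# just shorter and without the unused LRU bookkeeping.
@functools.cache
def _expensive_lookup_new(key: str) -> int:
    return len(key) * 2

assert _expensive_lookup_old("inductor") == _expensive_lookup_new("inductor")
```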