pytorch/torch/_inductor/codegen
Aaron Gokaslan bd10fea79a [BE]: Enable F821 and fix bugs (#116579)
Fixes #112371

I fixed as many of the bugs as I could; for a few, I could not determine the proper fix, so I left them suppressed with noqa comments.
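
For illustration, here is a minimal sketch (the function names are hypothetical and not taken from this PR) of the kind of undefined-name bug that F821 flags, and of the per-line `# noqa: F821` suppression used when the correct fix is unclear:

```python
# Minimal sketch of what the F821 (undefined name) rule catches, and of a
# per-line noqa suppression for cases the linter cannot reason about.

import math  # without this import, `math` below would itself be an F821 hit


def vector_norm(x: float, y: float) -> float:
    # A typo such as `y * yy` would reference the undefined name `yy`,
    # which flake8/ruff report as F821 at lint time rather than at runtime.
    return math.sqrt(x * x + y * y)


def read_injected_value() -> int:
    # Names created dynamically (e.g. via exec or generated code) are
    # invisible to the linter, so the check is suppressed on this one line
    # while staying enabled everywhere else.
    exec("injected_value = 42", globals())
    return injected_value  # noqa: F821


if __name__ == "__main__":
    print(vector_norm(3.0, 4.0))   # 5.0
    print(read_injected_value())   # 42
```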

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116579
Approved by: https://github.com/ezyang
2024-01-01 08:40:46 +00:00
aoti_runtime [AOTInductor] Add updating constant buffer to active buffer. (#116001) 2023-12-18 11:49:03 +00:00
cuda [BE]: Enable F821 and fix bugs (#116579) 2024-01-01 08:40:46 +00:00
__init__.py
common.py [BE]: Enable F821 and fix bugs (#116579) 2024-01-01 08:40:46 +00:00
cpp_prefix.h [inductor cpp] support vectorization for index_expr that depends on tiling itervar or with indirect indexing (#114545) 2023-12-26 05:36:39 +00:00
cpp.py [inductor][cpp] load as scalar for the index invariant in the vector range (#116387) 2023-12-26 08:45:04 +00:00
cuda_combined_scheduling.py [Inductor CUTLASS backend] Epilogue fusion codegen (Step 1) (#110890) 2023-11-06 19:42:10 +00:00
memory_planning.py Properly type CachedFunction & rename to CachedMethod (#114161) 2023-11-25 01:30:23 +00:00
triton_foreach.py [inductor] make inductor work with new triton compile interface (#115878) 2023-12-22 00:09:29 +00:00
triton_utils.py [AOTI] Support ReinterpretView in abi mode (#114169) 2023-11-21 17:08:00 +00:00
triton.py [BE]: Use iterable.chain.from_iterable where possible (#116376) 2023-12-27 19:20:07 +00:00
wrapper.py [Inductor Intel GPU backend Upstream] Step 1/3: Generalize device-bias code in code generation. (#116020) 2023-12-22 08:42:51 +00:00