pytorch/torch/fx
jjsjann123 f903f1ab34 Patching getitem in partitioner (#86713)
1. Reject the getitem operator in the backend fusion query. getitem is merged in a special post-partition pass, so backends that accept getitem shouldn't affect that logic.
2. Added tests for the failing cases.
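The first point above can be sketched as an operator-support query that unconditionally rejects getitem. The function and parameter names below are illustrative, not the actual partitioner API; the point is only that a backend's claimed support for getitem is ignored, because getitem nodes are absorbed into partitions by a dedicated post-partition pass.

```python
import operator

def is_node_supported(backend_supported_ops, node_target):
    """Hypothetical sketch of a partitioner-side support check.

    getitem is always rejected here, regardless of what the backend
    reports, since getitem nodes are merged into partitions by a
    separate post-partition pass.
    """
    if node_target is operator.getitem:
        return False
    return node_target in backend_supported_ops
```

With this check, a backend that lists `operator.getitem` among its supported ops still does not influence partitioning: `is_node_supported({operator.getitem, operator.add}, operator.getitem)` returns `False`, while `operator.add` is still accepted.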

Fixes #86698

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86713
Approved by: https://github.com/SherlockNoMad
2022-10-12 07:50:46 +00:00
experimental/             Revert "Reland 2 of Merge more symbolic meta kernels and symint changes from branch (#86334) (#86488)"     2022-10-11 23:39:50 +00:00
passes/                   Patching getitem in partitioner (#86713)                                                                    2022-10-12 07:50:46 +00:00
__init__.py               Refactor FX codegen into extensible Codegen object (#72566)                                                 2022-02-11 18:13:29 +00:00
__init__.pyi
_compatibility.py
_pytree.py
_symbolic_trace.py        [torch.fx.wrap] Use callable / function.__name__ instead of function.__code__.co_name (#84373)             2022-09-09 05:44:29 +00:00
annotate.py
graph_module.py           Add __all__ to torch.{autograd, fx, cuda} submodules (#85343)                                               2022-10-09 14:46:54 +00:00
graph.py                  Add type and shape annotation for gm.print_readable() (#86562)                                              2022-10-12 05:39:54 +00:00
immutable_collections.py  Add __all__ to torch.{fx, distributed, backends} submodules (#85079)                                        2022-09-20 12:51:08 +00:00
interpreter.py            Docs: fix typo (#86273)                                                                                     2022-10-06 22:38:50 +00:00
node.py                   Docs: fx.Node docs incorrectly state that the self argument is included in args for module calls (#86685)  2022-10-11 18:05:56 +00:00
operator_schemas.py       Add __all__ to torch.{autograd, fx, cuda} submodules (#85343)                                               2022-10-09 14:46:54 +00:00
OVERVIEW.md
proxy.py                  better error message fix (#86422)                                                                           2022-10-08 00:06:05 +00:00
subgraph_rewriter.py      Introduce a match filter for SubgraphRewriter (#86430)                                                      2022-10-07 05:09:40 +00:00
tensor_type.py            Improve getitem syntax for TensorType (#84555)                                                              2022-09-06 18:36:24 +00:00
traceback.py              Preserve stack trace for backward nodes over AOTAutograd (#83558)                                           2022-08-18 22:13:04 +00:00