I'm cleaning this PR up as a proper way of disabling functionalization via config in AOTDispatcher. I removed the non-functionalization-related changes from the original version:

1. Preventing proxy mode (and functionalization) from incorrectly decomposing CIA ops (Ed has a PR for it here: https://github.com/pytorch/pytorch/pull/164939).
2. Preventing python-dispatcher-based decomps above autograd from running. I'm not doing this for now; I will likely do it in a followup.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164577
Approved by: https://github.com/ezyang
ghstack dependencies: #165372
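For context, here is a minimal sketch of what flipping an AOTDispatcher config knob looks like. The flag name `disable_functionalization` below is an assumption for illustration (the actual name is whatever this PR defines); `torch._functorch.config.patch` is the standard context manager for temporarily overriding these configs.

```python
# Minimal sketch, assuming the knob is exposed on torch._functorch.config.
# "disable_functionalization" is a hypothetical flag name for illustration.
import torch
import torch._functorch.config as functorch_config

def f(x):
    y = x.clone()
    y.add_(1)  # an in-place op that functionalization would normally rewrite
    return y

# Temporarily override the (hypothetical) flag while compiling and running.
with functorch_config.patch(disable_functionalization=True):
    out = torch.compile(f)(torch.zeros(3))
```

To illustrate point (1), CIA (CompositeImplicitAutograd) ops decompose when traced under proxy mode; `torch.nn.functional.linear` is one such op:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def g(x, w, b):
    # linear is CompositeImplicitAutograd: under proxy-mode tracing it
    # decomposes into lower-level aten ops (e.g. t + addmm) instead of
    # appearing as a single aten.linear node in the graph.
    return torch.nn.functional.linear(x, w, b)

gm = make_fx(g)(torch.randn(2, 3), torch.randn(4, 3), torch.randn(4))
print(gm.graph)
```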
| Name |
|---|
| codegen |
| data |
| distributed |
| generated |
| opinfo |
| optests |
| test_module |
| __init__.py |
| autocast_test_lists.py |
| autograd_function_db.py |
| check_kernel_launches.py |
| common_cuda.py |
| common_device_type.py |
| common_dist_composable.py |
| common_distributed.py |
| common_dtype.py |
| common_fsdp.py |
| common_jit.py |
| common_methods_invocations.py |
| common_mkldnn.py |
| common_modules.py |
| common_mps.py |
| common_nn.py |
| common_optimizers.py |
| common_pruning.py |
| common_quantization.py |
| common_quantized.py |
| common_subclass.py |
| common_utils.py |
| composite_compliance.py |
| custom_op_db.py |
| custom_tensor.py |
| dist_utils.py |
| dynamo_test_failures.py |
| fake_config_module.py |
| fake_config_module2.py |
| fake_config_module3.py |
| hop_db.py |
| hypothesis_utils.py |
| inductor_utils.py |
| jit_metaprogramming_utils.py |
| jit_utils.py |
| logging_tensor.py |
| logging_utils.py |
| quantization_torch_package_models.py |
| static_module.py |
| subclasses.py |
| torchbind_impls.py |
| triton_utils.py |
| two_tensor.py |