pytorch/torch
Tugsbayasgalan (Tugsuu) Manlaibaatar 28be47c267 [RELAND][export] Exempt autograd ops for predispatch export (#117448)
Summary: Reland of https://github.com/pytorch/pytorch/pull/116527/files

Test Plan: CI

Differential Revision: D52675324

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117448
Approved by: https://github.com/ydwu4
2024-01-16 19:32:15 +00:00
_awaits
_C Fix wrong class inheritance in pyi (#116404) 2024-01-12 21:25:29 +00:00
_C_flatbuffer
_custom_op Allow functionalization to work with optional mutable (#114803) 2023-11-30 23:48:03 +00:00
_decomp [export] Add unit test for SDPA export result (#117390) 2024-01-14 00:21:28 +00:00
_dispatch
_dynamo [Dynamo] Trace autograd.function in dynamo when inputs require grad (#116358) (#116897) 2024-01-16 03:57:13 +00:00
_export [RELAND][export] Exempt autograd ops for predispatch export (#117448) 2024-01-16 19:32:15 +00:00
_functorch [Dynamo] Trace autograd.function in dynamo when inputs require grad (#116358) (#116897) 2024-01-16 03:57:13 +00:00
_higher_order_ops [HigherOrderOp] change signature of map_impl (#117161) 2024-01-13 02:50:46 +00:00
_inductor [Dynamo] Trace autograd.function in dynamo when inputs require grad (#116358) (#116897) 2024-01-16 03:57:13 +00:00
_lazy
_library Refactor can_auto_functionalize (#115134) 2023-12-05 22:43:06 +00:00
_logging [export] Add TORCH_LOGS=export (#116993) 2024-01-11 03:02:23 +00:00
_numpy [dynamo] Fix np.issubdtype (#116459) 2024-01-05 01:48:07 +00:00
_prims Add support for torch.Generator type in TorchScript (#110413) 2023-11-21 23:07:21 +00:00
_prims_common Some basic support for uint{16,32,64} codegen in CPU inductor (#116810) 2024-01-12 23:13:28 +00:00
_refs Add decomposition for torch.block_diag (#115096) 2023-12-11 20:04:22 +00:00
_subclasses Experimental non-strict mode (#114658) 2024-01-04 12:24:58 +00:00
_vendor vendor packaging.version (#114108) 2023-11-21 11:51:23 +00:00
amp Add Half support for CPU autocast on eager mode (#112484) 2023-11-21 20:08:28 +00:00
ao [Quant] Add dynamic quantization config for x86 inductor backend (#115337) 2024-01-10 11:33:37 +00:00
autograd Compiled autograd: Lift autograd functions' backward and provide default key for custom autograd functions (#115573) 2024-01-10 18:01:28 +00:00
backends Add config to disable TransformerEncoder/MHA fastpath (#112212) 2024-01-02 23:59:30 +00:00
compiler Add a wrapper to transform a NumPy function into a PyTorch function (#114610) 2024-01-02 18:35:29 +00:00
contrib
cpu
csrc [BE] Delete unused is_dynamo_compiling (#117455) 2024-01-14 15:15:29 +00:00
cuda Try creating a bf16 tensor as a last resort of is_bf16_supported(). (#115924) 2024-01-01 01:15:30 +00:00
distributed Update state_dict.py to propagate cpu offload (#117453) 2024-01-15 22:13:37 +00:00
distributions Fix hang in VonMises rejection sampling for small values of concentration (#114498) 2023-12-04 23:07:06 +00:00
export [RELAND] Error grad mode op in export API (#117420) 2024-01-13 21:36:29 +00:00
fft
func
futures
fx [RELAND][export] Exempt autograd ops for predispatch export (#117448) 2024-01-16 19:32:15 +00:00
jit [BE]: Update flake8 to v6.1.0 and fix lints (#116591) 2024-01-03 06:04:44 +00:00
legacy
lib
linalg
masked make_fx can now SymIntify int inputs (#113452) 2023-11-18 06:39:09 +00:00
monitor
mps
multiprocessing Robustify torch.multiprocessing.spawn error reporting to be less deadlock prone (#114688) 2023-12-09 03:36:43 +00:00
nested [Nested Tensor]Support SDPA math fallback for jagged layout nested tensor (#116445) 2024-01-12 17:30:40 +00:00
nn Update BCEWithLogitsLoss documentation regarding pos_weight (#117046) 2024-01-12 18:26:25 +00:00
onnx Fix ONNXProgram.save to use torch.load(..., mmap=True) for large models (#117295) 2024-01-12 04:38:27 +00:00
optim Migrate nontensor step and CUDA params state_dict tests to OptimizerInfo (#116509) 2024-01-12 22:32:37 +00:00
package [BE]: Add better handling of pathlib.Path with os calls (#116564) 2023-12-31 01:46:03 +00:00
profiler
quantization
signal Fix NaN bug in torch.signal.windows.kaiser (#116470) 2024-01-08 22:24:52 +00:00
sparse Update F32 sparse semi-structured support for CUTLASS back-end (#116017) 2023-12-22 16:53:04 +00:00
special
testing Check invariants for dynamo_test_failures.py (#117391) 2024-01-16 17:14:43 +00:00
utils [dynamo] Added dyn shapes support for math trigo ops: sin(h), cos(h), tan(h) ... (#114866) 2024-01-11 11:52:28 +00:00
__config__.py
__future__.py
__init__.py [dynamo] Added dyn shapes support for math trigo ops: sin(h), cos(h), tan(h) ... (#114866) 2024-01-11 11:52:28 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py Add Stateful/Stateless symbolic contexts, use fresh fake mode for dynamo backends (#113926) (#114526) 2023-11-26 23:40:32 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py [CPU] Add flash attention mask version (#115913) 2024-01-07 04:58:23 +00:00
_namedtensor_internals.py
_ops.py pre_dispatch aot_export (#115188) 2023-12-25 04:51:21 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_streambase.py
_tensor_docs.py Pyi doc inclusion + fix (#117267) 2024-01-15 13:06:53 +00:00
_tensor_str.py Do not error when printing view created in no-grad modified in-place in no-grad (#113716) 2023-11-16 18:57:56 +00:00
_tensor.py Fix torch.detach doc-string (#115850) 2023-12-22 20:04:33 +00:00
_torch_docs.py Pyi doc inclusion + fix (#117267) 2024-01-15 13:06:53 +00:00
_utils_internal.py [inductor][Observability] Add log for Optimus to enable easier debug (#110452) 2023-12-01 18:25:56 +00:00
_utils.py pre_dispatch aot_export (#115188) 2023-12-25 04:51:21 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
abi-check.cpp
CMakeLists.txt [BE] [cuDNN] Always build assuming cuDNN >= 8.1 (#95722) 2024-01-03 15:41:28 +00:00
custom_class_detail.h
custom_class.h [Reland] [1/N] Fixes clang-tidy warnings in header files (#114668) 2023-11-29 07:11:51 +00:00
extension.h
functional.py make_fx can now SymIntify int inputs (#113452) 2023-11-18 06:39:09 +00:00
hub.py Increase hub download chunk size (#116536) 2024-01-03 17:38:45 +00:00
library.h
library.py Optimize inspect.stack() call in caffe2/torch/library.py (#114700) 2023-11-29 20:54:02 +00:00
overrides.py [dynamo] Added dyn shapes support for math trigo ops: sin(h), cos(h), tan(h) ... (#114866) 2024-01-11 11:52:28 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py [pytree] register pytree node type in both C++ pytree and Python pytree (#112111) 2023-11-28 11:41:38 +00:00
script.h
serialization.py [BE]: Use os.fspath and os.PathLike in torch serialization (#116562) 2023-12-30 20:53:10 +00:00
storage.py
torch_version.py vendor packaging.version (#114108) 2023-11-21 11:51:23 +00:00
types.py improve annotation device parameters where a device ordinal is allowed (#113647) 2023-11-17 14:41:22 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, external
clients should use the public functions (declared in headers like
`THTensor.h`, NOT `THTensor.hpp`) to manipulate the structs these headers
define.  However, there are a few places in torch/csrc where we violate
this abstraction.  Each such site is marked with a pointer to this note,
and each will have to be refactored when we refactor the guts of THTensor
and related structures.