pytorch/torch
Edward Yang 2d7119f943 Revert D26753571: [pytorch][PR] add submodules to sys.modules so their attributes can be pickled
Test Plan: revert-hammer

Differential Revision: D26753571 (fbf9745c85)

Original commit changeset: 2bda03bab39f

fbshipit-source-id: cc9cc4f508af122b0fdec7f8475343bd9badb9db
2021-03-02 11:11:31 -08:00
_C Fix GradBucket Typing (#52943) 2021-02-27 20:04:38 -08:00
autograd fix(docs): indent in docstring of key_averages (#53006) 2021-03-01 15:18:20 -08:00
backends Add size op in nnapi serializer (#52026) 2021-02-10 15:57:01 -08:00
contrib Add type annotations to _tensorboard_vis.py and hipify_python.py (#49834) 2021-01-04 09:29:51 -08:00
csrc Add streams boundary check to `torch::cuda::scatter` (#53057) 2021-03-02 10:58:10 -08:00
cuda Mypy fixes for pytorch master (#52090) 2021-02-17 10:39:51 -08:00
distributed Revert D26696938: Update and expose ZeroRedundancyOptimizer docs 2021-03-02 07:14:23 -08:00
distributions Add sample validation for LKJCholesky.log_prob (#52763) 2021-02-25 16:12:29 -08:00
fft [doc] Fix documentations of torch functions (#52982) 2021-03-01 09:59:57 -08:00
for_onnx
futures [JIT/Futures] support set_exception api (#50983) 2021-02-04 20:22:19 -08:00
fx [FX] Use precompiled regex in graph name processing (#52853) 2021-02-25 17:21:38 -08:00
jit Ge v1 (#52136) 2021-02-28 00:53:13 -08:00
legacy
lib Log nccl debug level in ProcessGroupNCCL (#52803) 2021-03-01 14:57:22 -08:00
linalg Revert D26375734: Implemented torch.linalg.multi_dot 2021-02-25 00:43:57 -08:00
multiprocessing Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
nn quant norm layers: move scale + zp to buffers (#52861) 2021-02-25 17:23:39 -08:00
onnx [ONNX] Update LayerNorm symbolic to handle autocasting (#52199) (#52350) 2021-02-19 10:57:15 -08:00
optim Added torch.no_grad() to update_bn (#52654) 2021-02-25 11:35:38 -08:00
package [package] catch exceptions from calling reduce function. (#53061) 2021-03-01 21:27:08 -08:00
profiler Add FLOPS support to the new profiler API. (#51734) 2021-02-05 15:03:35 -08:00
quantization [quant][graphmode][fx][fp16] Add fp16 support for silu (#52865) 2021-03-02 02:11:29 -08:00
sparse [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
testing Remove useless test_reference_numerics skip infos (#52890) 2021-03-02 10:49:21 -08:00
utils disable dill extension behavior (#53118) 2021-03-02 11:07:08 -08:00
__config__.py Expose CXX_FLAGS through __config__ (#47861) 2020-12-01 19:58:29 -08:00
__future__.py
__init__.py Revert D26753571: [pytorch][PR] add submodules to sys.modules so their attributes can be pickled 2021-03-02 11:11:31 -08:00
_appdirs.py
_autograd_functions.py make torch.lu differentiable. (#46284) 2020-10-23 10:13:46 -07:00
_classes.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
_deploy.py [package] Pull out _UnpicklerWrapper into PackageUnpickler (#53049) 2021-03-01 18:40:52 -08:00
_jit_internal.py [Usability] Capture argument names for traced functions and modules (#51775) 2021-02-10 18:28:08 -08:00
_linalg_utils.py Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
_lobpcg.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
_lowrank.py Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
_namedtensor_internals.py
_ops.py Back out "Revert D26077905: Back out "Revert D25850783: Add torch::deploy, an embedded torch-python interpreter"" (#51267) 2021-01-28 19:30:45 -08:00
_python_dispatcher.py Improve docs around Math/DefaultBackend & add PythonDispatcher class. (#50854) 2021-01-25 23:10:36 -08:00
_six.py Clean up usage of torch._six partially (#49785) 2021-02-08 13:58:34 -08:00
_storage_docs.py
_tensor_docs.py Updates rounding_mode documentation to remove "true" (#52202) 2021-02-12 09:19:39 -08:00
_tensor_str.py Reland: Add base forward grad logic (#49734) 2020-12-22 12:11:27 -08:00
_torch_docs.py [doc] Fix documentations of torch functions (#52982) 2021-03-01 09:59:57 -08:00
_utils_internal.py Back out "Revert D26077905: Back out "Revert D25850783: Add torch::deploy, an embedded torch-python interpreter"" (#51267) 2021-01-28 19:30:45 -08:00
_utils.py Introduce mlc device (ML Compute device) to PyTorch's device list (#50634) 2021-02-24 22:39:11 -08:00
_VF.py
_vmap_internals.py Beef up {jacobian, hessian} vectorize docs; eliminate a warning (#51638) 2021-02-03 17:15:16 -08:00
abi-check.cpp
CMakeLists.txt Use touch() in pathlib for better compatibility on Windows (#52729) 2021-02-25 13:46:21 -08:00
custom_class_detail.h [PyTorch] Remove reference_cast in make_boxed_from_unboxed_functor (#51319) 2021-02-17 10:58:44 -08:00
custom_class.h Add a demo backend with compiler (#52603) 2021-02-26 11:53:34 -08:00
deploy.h [deploy] torch::deploy API (#51754) 2021-02-18 02:30:08 -08:00
extension.h
functional.py [doc] Fix documentations of torch functions (#52982) 2021-03-01 09:59:57 -08:00
hub.py add close() method to tqdm mock (#46040) 2020-12-21 12:24:30 -08:00
library.h Introduce mlc device (ML Compute device) to PyTorch's device list (#50634) 2021-02-24 22:39:11 -08:00
overrides.py Revert D26375734: Implemented torch.linalg.multi_dot 2021-02-25 00:43:57 -08:00
py.typed
quasirandom.py [SobolEngine] Fix edge case of dtype of first sample (#51578) 2021-02-02 14:24:56 -08:00
random.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
README.txt
script.h
serialization.py Use doctest directly to get docstring examples (#50596) 2021-01-20 15:55:36 -08:00
storage.py Add type informations to torch/storage.py (#46876) 2020-11-06 11:34:10 -08:00
tensor.py Introduce mlc device (ML Compute device) to PyTorch's device list (#50634) 2021-02-24 22:39:11 -08:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  Although these headers are installed alongside the public
headers, they serve double duty as *internal implementation detail*
headers, whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.