pytorch/torch
Akifumi Imanishi aa1fd6b45a Add LazyBatchNormXd (#51548)
Summary:
This PR implements UninitializedBuffer and LazyBatchNormXd based on https://github.com/pytorch/pytorch/issues/44538; a usage sketch follows the commit details below. (cc. emcastillo and albanD)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51548

Reviewed By: zhangguanheng66

Differential Revision: D26276903

Pulled By: albanD

fbshipit-source-id: 0ac706974178363f8af075e59b41d5989418922f
2021-02-05 10:27:04 -08:00
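For context, here is a minimal usage sketch of the lazy module this PR adds, assuming it is exposed as `torch.nn.LazyBatchNorm2d` and, like other lazy modules, infers `num_features` from the first input it sees (the exact attribute names shown are assumptions):

```python
# Minimal sketch, assuming the module is exposed as torch.nn.LazyBatchNorm2d
# and follows the lazy-module pattern proposed in pytorch/pytorch#44538.
import torch
import torch.nn as nn

bn = nn.LazyBatchNorm2d()        # no num_features given up front

x = torch.randn(4, 16, 8, 8)     # first batch: 16 channels
y = bn(x)                        # parameters and buffers materialize here

print(bn.num_features)           # expected: 16, inferred from the input
print(bn.running_mean.shape)     # expected: torch.Size([16])
```

The new UninitializedBuffer plays the same role for non-learnable state (e.g. `running_mean`, `running_var`) that UninitializedParameter plays for learnable parameters: a placeholder that is replaced by a real buffer once the input shape is known.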
_C [JIT/Futures] support set_exception api (#50983) 2021-02-04 20:22:19 -08:00
autograd make forward AD API private (#51693) 2021-02-04 19:02:29 -08:00
backends
contrib Add type annotations to _tensorboard_vis.py and hipify_python.py (#49834) 2021-01-04 09:29:51 -08:00
csrc [JIT] Allow implicit boolean conversion of containers (#51683) 2021-02-05 00:34:35 -08:00
cuda torch.cuda.memory_allocated to return {} if not initialized (#51179) 2021-01-28 20:38:17 -08:00
distributed Revert D26249330: [Gradient Compression] Add a documentation page for DDP communication hooks 2021-02-04 22:38:06 -08:00
distributions Fix Dirichlet.arg_constraints event_dim (#51369) 2021-02-02 10:26:45 -08:00
fft Add centered FFT example to fftshift docs (#51223) 2021-01-27 23:50:48 -08:00
for_onnx
futures [JIT/Futures] support set_exception api (#50983) 2021-02-04 20:22:19 -08:00
fx [FX] Fix mypy error in FX for rewriter (#51740) 2021-02-04 13:15:51 -08:00
jit Add LazyBatchNormXd (#51548) 2021-02-05 10:27:04 -08:00
legacy
lib [Gradient Compression] Check if the backend is NCCL when a DDP communication hook is registered (#51759) 2021-02-05 09:59:12 -08:00
linalg [doc] Fix inconsistencies with torch.linalg.inv and deprecate torch.inverse (#51672) 2021-02-04 17:19:45 -08:00
multiprocessing Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
nn Add LazyBatchNormXd (#51548) 2021-02-05 10:27:04 -08:00
onnx fix bug (#51222) (#51527) 2021-02-04 12:44:44 -08:00
optim [optim] make functional api be private (#51316) (#51665) 2021-02-03 17:59:05 -08:00
package [package] use WeakValueDictionary for global imported module registry (#51666) 2021-02-04 09:42:18 -08:00
profiler Add 'repeat' argument to profiler.schedule (#51630) 2021-02-04 13:51:04 -08:00
quantization Numeric Suite: Swap with shadow modules only for quantized part of model (#51052) 2021-02-04 11:40:30 -08:00
sparse [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
testing [RPC] Add option to make rref.get_type not block. (#50977) 2021-02-04 20:18:50 -08:00
utils Expand benchmark utils docs (#51664) 2021-02-04 00:22:41 -08:00
__config__.py
__future__.py
__init__.py make forward AD API private (#51693) 2021-02-04 19:02:29 -08:00
_appdirs.py
_autograd_functions.py
_classes.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
_jit_internal.py [Bug] fix for module_has_exports (#50680) 2021-01-27 16:03:24 -08:00
_linalg_utils.py Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
_lobpcg.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
_lowrank.py Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
_namedtensor_internals.py
_ops.py Back out "Revert D26077905: Back out "Revert D25850783: Add torch::deploy, an embedded torch-python interpreter"" (#51267) 2021-01-28 19:30:45 -08:00
_python_dispatcher.py Improve docs around Math/DefaultBackend & add PythonDispatcher class. (#50854) 2021-01-25 23:10:36 -08:00
_six.py Remove redundant code for unsupported Python versions (#49486) 2021-01-06 12:45:46 -08:00
_storage_docs.py
_tensor_docs.py Add division overload with rounding_mode selection (#51706) 2021-02-04 13:08:36 -08:00
_tensor_str.py Reland: Add base forward grad logic (#49734) 2020-12-22 12:11:27 -08:00
_torch_docs.py [doc] Fix inconsistencies with torch.linalg.inv and deprecate torch.inverse (#51672) 2021-02-04 17:19:45 -08:00
_utils_internal.py Back out "Revert D26077905: Back out "Revert D25850783: Add torch::deploy, an embedded torch-python interpreter"" (#51267) 2021-01-28 19:30:45 -08:00
_utils.py add type annotations to torch._utils (#49705) 2021-01-07 16:20:16 -08:00
_VF.py
_vmap_internals.py Beef up {jacobian, hessian} vectorize docs; eliminate a warning (#51638) 2021-02-03 17:15:16 -08:00
abi-check.cpp
CMakeLists.txt Refactor build targets for torch::deploy (#50288) 2021-01-22 09:16:32 -08:00
custom_class_detail.h
custom_class.h
extension.h
functional.py Removed typographical error from tech docs (#51286) 2021-02-03 14:09:36 -08:00
hub.py add close() method to tqdm mock (#46040) 2020-12-21 12:24:30 -08:00
library.h [PyTorch Mobile] Skip inferring function schema from the C++ function type (#50457) 2021-02-03 00:37:35 -08:00
overrides.py make forward AD API private (#51693) 2021-02-04 19:02:29 -08:00
py.typed
quasirandom.py [SobolEngine] Fix edge case of dtype of first sample (#51578) 2021-02-02 14:24:56 -08:00
random.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
README.txt
script.h
serialization.py Use doctest directly to get docstring examples (#50596) 2021-01-20 15:55:36 -08:00
storage.py
tensor.py Fix pickling for Tensor subclasses (redo) (#47732) 2021-02-01 07:32:52 -08:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public headers, but they are really *internal implementation detail*
headers whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
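
As a rough illustration of the convention this note describes (the specific TH calls below are assumptions modeled on the generated `THFloatTensor_*` C API, not an authoritative reference):

```cpp
// Sketch of the convention above.  The particular functions used
// (THFloatTensor_nDimension / _nElement / _resize1d) are assumptions based on
// the generated TH C API; treat them as placeholders, not a prescription.
#include <TH/TH.h>   // public C-style interface: the .h headers, not the .hpp ones

void flatten_to_1d(THFloatTensor* t) {
  // Preferred: manipulate the tensor only through public TH functions.
  if (THFloatTensor_nDimension(t) > 1) {
    int64_t numel = THFloatTensor_nElement(t);
    THFloatTensor_resize1d(t, numel);
  }

  // Abstraction violation (what this note warns about): reaching directly into
  // struct fields that only the .hpp headers expose.  The few sites in
  // torch/csrc that do this carry a comment pointing back to
  // Note [TH abstraction violation].
}
```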