pytorch/torch
Will Feng a54416d208 [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508)
Summary:
This PR is BC-breaking in the following ways (a brief migration sketch follows the list):
- The deprecated `torch::nn::BatchNorm` is removed in favor of `torch::nn::BatchNorm{1,2,3}d`
- The deprecated `torch::nn::FeatureDropout` is removed in favor of `torch::nn::Dropout{2,3}d`
- The deprecated `torch::nn::modules_ordered_dict` is removed. Users should use `Sequential sequential({{"m1", MyModule(1)}, {"m2", MyModule(2)}})` instead.
- The deprecated `torch::nn::init::Nonlinearity` is removed, in favor of the following enums:
    - `torch::kLinear`
    - `torch::kConv1D`
    - `torch::kConv2D`
    - `torch::kConv3D`
    - `torch::kConvTranspose1D`
    - `torch::kConvTranspose2D`
    - `torch::kConvTranspose3D`
    - `torch::kSigmoid`
    - `torch::kTanh`
    - `torch::kReLU`
    - `torch::kLeakyReLU`
- The deprecated `torch::nn::init::FanMode` is removed, in favor of the following enums:
    - `torch::kFanIn`
    - `torch::kFanOut`
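
For reference, here is a minimal migration sketch against the new API (the module sizes, dropout probability, and init arguments below are illustrative, not taken from this PR):

```cpp
#include <torch/torch.h>

int main() {
  // torch::nn::BatchNorm -> dimension-specific torch::nn::BatchNorm{1,2,3}d
  torch::nn::BatchNorm2d bn(torch::nn::BatchNorm2dOptions(64));

  // torch::nn::FeatureDropout -> torch::nn::Dropout{2,3}d
  torch::nn::Dropout2d drop(torch::nn::Dropout2dOptions(0.5));

  // torch::nn::modules_ordered_dict -> Sequential constructed with named submodules
  torch::nn::Sequential sequential({
      {"conv", torch::nn::Conv2d(torch::nn::Conv2dOptions(3, 64, 3))},
      {"relu", torch::nn::ReLU()}});

  // torch::nn::init::Nonlinearity / FanMode -> torch::k* enum values
  auto w = torch::empty({64, 3, 3, 3});
  torch::nn::init::kaiming_normal_(w, /*a=*/0.0, torch::kFanOut, torch::kReLU);
}
```
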
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34508

Differential Revision: D20351601

Pulled By: yf225

fbshipit-source-id: cca0cd112f29a31bb023e348ca8f82780e42bea3
2020-03-12 10:09:58 -07:00
autograd [test all] Back out "Revert D20171428: [profiler] fix chrome tracing for profiler run with cuda" 2020-03-05 09:05:56 -08:00
backends Stop using ctypes to interface with CUDA libraries. (#33678) 2020-03-11 07:22:46 -07:00
contrib
csrc [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508) 2020-03-12 10:09:58 -07:00
cuda Stop using ctypes to interface with CUDA libraries. (#33678) 2020-03-11 07:22:46 -07:00
distributed Split deserialize from _run_function in RPC internal.py (#34494) 2020-03-09 20:41:00 -07:00
distributions Support cdf for mixture_same_family distribution (#33408) 2020-03-03 07:31:24 -08:00
for_onnx
jit [JIT] remove builtin interpolate functions (#34514) 2020-03-12 09:21:33 -07:00
legacy
lib fix more errors (#34480) 2020-03-09 14:54:15 -07:00
multiprocessing [torch/multiprocessing] Update documentation indicating that start_method is ignored for mp.spawn() (#33070) 2020-02-07 15:26:00 -08:00
nn add quantized ELU activation (#34267) 2020-03-12 09:31:00 -07:00
onnx small typos (#34589) 2020-03-11 11:01:31 -07:00
optim Fix typo in documentation (#34581) 2020-03-11 13:57:10 -07:00
quantization [quant] Speed up per-channel min-max observer (#34118) 2020-03-05 18:29:41 -08:00
sparse
testing [distributed] quicker exit in the case of failed tests in distributed (#34150) 2020-03-11 11:27:17 -07:00
utils Stop using ctypes to interface with CUDA libraries. (#33678) 2020-03-11 07:22:46 -07:00
__config__.py
__future__.py
__init__.py Revert D20193196: [pytorch][PR] PCA and SVD for low-rank matrices, LOBPCG for positive-definite generalized eigenvalue problem 2020-03-11 09:24:34 -07:00
__init__.pyi.in fix type stub errors (#33762) 2020-02-27 06:58:53 -08:00
_classes.py [JIT] Add torch.classes.load_library 2020-01-23 14:56:20 -08:00
_jit_internal.py [JIT] Preserve qualified names on traced modules (#34395) 2020-03-09 19:23:53 -07:00
_namedtensor_internals.py Remove all remaining usages of BUILD_NAMEDTENSOR (#31116) 2019-12-12 09:53:03 -08:00
_ops.py [JIT] Passing custom class as arg (#32260) 2020-01-23 14:54:59 -08:00
_overrides.py Revert D20195053: [pytorch][PR] Add API for listing functions overridable by __torch_function__ 2020-03-04 10:13:54 -08:00
_six.py Revert D20193196: [pytorch][PR] PCA and SVD for low-rank matrices, LOBPCG for positive-definite generalized eigenvalue problem 2020-03-11 09:24:34 -07:00
_storage_docs.py
_tensor_docs.py Fix deprecated python "add" calls (#33428) 2020-02-26 09:02:31 -08:00
_tensor_str.py Added tensor.is_complex(), is_complex and dtype.is_complex py binding, tensor printing, and fixed the scalar type returned for complex float (#33268) 2020-02-20 13:38:01 -08:00
_torch_docs.py Adds true_divide function, analogous to Python 's, JAX's, NumPy's (true) division (#34236) 2020-03-09 21:06:33 -07:00
_utils_internal.py Don't use RTLD_GLOBAL to load _C. (#31162) 2020-01-09 07:28:15 -08:00
_utils.py Preserve Backward compatibility of models serialized before #31040 (#33796) 2020-02-26 13:40:38 -08:00
_VF.py [JIT] fix resolving of functions in torch/functional. fix compilation of torch.stft (#33504) 2020-02-26 18:35:43 -08:00
abi-check.cpp
CMakeLists.txt Stop using ctypes to interface with CUDA libraries. (#33678) 2020-03-11 07:22:46 -07:00
custom_class_detail.h [JIT] Introduce BuiltinOpFunction and integrate into torchbind (#34098) 2020-03-07 10:03:56 -08:00
custom_class.h [jit] kill script namespace (#34515) 2020-03-11 23:32:48 -07:00
extension.h
functional.py Revert D20193196: [pytorch][PR] PCA and SVD for low-rank matrices, LOBPCG for positive-definite generalized eigenvalue problem 2020-03-11 09:24:34 -07:00
hub.py Consider hub_dir alongside TORCH_HOME env variable for storing hub models (#32844) 2020-02-05 15:35:53 -08:00
py.typed
quasirandom.py Fix crash of SobolEngine if default tensor type is cuda (#32496) 2020-01-29 08:49:18 -08:00
random.py
README.txt
script.h [jit] do the code reorg (#33851) 2020-02-27 13:02:51 -08:00
serialization.py Revert D20193196: [pytorch][PR] PCA and SVD for low-rank matrices, LOBPCG for positive-definite generalized eigenvalue problem 2020-03-11 09:24:34 -07:00
storage.py Fix all occurrences of C416. (#33429) 2020-02-21 08:32:22 -08:00
tensor.py avoid large vector copy when query per_channel q_params (#31040) 2020-02-19 16:24:24 -08:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  Although they are installed alongside the public headers, they
are really *internal implementation detail* headers, whose contents should
largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
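
For example, a minimal sketch of the intended pattern, assuming the classic TH
C entry points (e.g. `THFloatTensor_newWithSize2d`, `THFloatTensor_fill`,
`THFloatTensor_free`) are still exposed through the public headers; exact
names vary by version:

```cpp
// Uses only the public C API pulled in via TH/TH.h (which includes THTensor.h);
// it does NOT include THTensor.hpp or reach into the tensor struct's internals.
#include <TH/TH.h>

int main() {
  // All manipulation goes through public functions rather than .hpp internals.
  THFloatTensor* t = THFloatTensor_newWithSize2d(3, 4);
  THFloatTensor_fill(t, 1.0f);
  THFloatTensor_free(t);
  return 0;
}
```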