pytorch/torch
David Riazati fc6a9a19ea Add torch._C._nn built-in, more weak fns (#13322)
Summary:
This PR adds the functions defined in `torch._C._nn` as builtin functions (including in-place variants). This allows more functions to be converted to weak script.

NB: many `torch.nn.functional` functions will have to be slightly rewritten to avoid early returns (as with `threshold` in this PR).

Converts these functions to weak script:
* `threshold`
* `relu`
* `hardtanh`
* `relu6`
* `elu`
* `selu`
* `celu`
* `leaky_relu`
* `rrelu`
* `tanh`
* `sigmoid`
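A minimal sketch of the early-return rewrite mentioned in the NB above. These are hypothetical stand-in functions, not the actual `torch.nn.functional.threshold` source: they only illustrate the control-flow shape the weak-script compiler of this era required, namely branches that assign to a local variable with a single `return` at the end, instead of a `return` nested inside an `if`.

```python
# Illustrative sketch only -- hypothetical stand-ins, not real torch code.

def threshold_early_return(x, t, value):
    # Early-return style: a `return` inside the `if` body.
    # This is the shape the weak-script frontend could not compile.
    if x > t:
        return x
    return value


def threshold_single_return(x, t, value):
    # Weak-script-friendly rewrite: each branch assigns to a local,
    # and the function returns exactly once, at the end.
    if x > t:
        result = x
    else:
        result = value
    return result
```

Both variants compute the same thresholding; only the control-flow shape differs, which is why the functions listed above needed only mechanical rewrites.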
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13322

Differential Revision: D12852203

Pulled By: driazati

fbshipit-source-id: 220670df32cb1ff39d120bdc04aa1bd41209c809
2018-11-05 21:02:18 -08:00
| Name | Latest commit | Date |
|------|---------------|------|
| `_thnn` | Update from Facebook (#8887) | 2018-06-26 14:55:48 -07:00 |
| `autograd` | fix handling of single input in gradcheck (#13543) | 2018-11-04 20:28:34 -08:00 |
| `backends` | Add support for torch.backends.cudnn.enabled (#13057) | 2018-10-31 09:31:09 -07:00 |
| `contrib` | Remove stages from IR, they are not longer used | 2018-10-05 13:58:15 -07:00 |
| `csrc` | Fix erase_number_type pass, negative indices in c2 and some onnx symbolics (#12888) | 2018-11-05 19:13:49 -08:00 |
| `cuda` | Rewrite http://pytorch.org -> https://pytorch.org throughout project (#12636) | 2018-10-15 13:03:27 -07:00 |
| `distributed` | Error msg on TCP backend (#13596) | 2018-11-05 16:40:02 -08:00 |
| `distributions` | Rename potrf to cholesky (#12699) | 2018-11-01 15:10:55 -07:00 |
| `for_onnx` | | |
| `jit` | Add torch._C._nn built-in, more weak fns (#13322) | 2018-11-05 21:02:18 -08:00 |
| `legacy` | Remove torch/legacy (#11823) | 2018-09-20 14:00:54 -07:00 |
| `lib` | Replaces c10d's CUDAEvent with ATen's (#13464) | 2018-11-05 19:13:52 -08:00 |
| `multiprocessing` | Don't serialize hooks (#11705) | 2018-10-16 20:11:03 -07:00 |
| `nn` | Add torch._C._nn built-in, more weak fns (#13322) | 2018-11-05 21:02:18 -08:00 |
| `onnx` | Support new upsample in symbolic, caffe2 backend & caffe2 frontend (#13272) | 2018-11-05 19:13:57 -08:00 |
| `optim` | Add name for required optimizer parameter. (#13202) | 2018-10-29 15:02:21 -07:00 |
| `sparse` | Delete dead Tensor code paths (#5417) | 2018-02-27 17:58:09 -05:00 |
| `testing` | Codemod to update our codebase to 0.4 standard (#6641) | 2018-04-17 22:06:54 -04:00 |
| `utils` | Fix the bug when compile using nvcc compiler. (#13509) | 2018-11-02 11:09:43 -07:00 |
| `__init__.py` | Update '__all__' in '__init.py__' (#12762) | 2018-10-18 17:52:10 -07:00 |
| `_jit_internal.py` | Speed up resolution callback creation (#12859) | 2018-10-23 20:40:04 -07:00 |
| `_ops.py` | Resolve builtins using a dict rather than by name (#10927) | 2018-08-28 11:25:11 -07:00 |
| `_six.py` | Add weak script modules (#12682) | 2018-10-23 09:06:02 -07:00 |
| `_storage_docs.py` | [ready] General documentation improvements (#5450) | 2018-03-08 13:21:12 -05:00 |
| `_tensor_docs.py` | Add diag_embed to ATen and torch (#12447) | 2018-11-05 08:55:28 -08:00 |
| `_tensor_str.py` | Fix print precision and match numpy behavior (#12746) | 2018-10-24 18:12:51 -07:00 |
| `_torch_docs.py` | Add diag_embed to ATen and torch (#12447) | 2018-11-05 08:55:28 -08:00 |
| `_utils_internal.py` | Use fixed MASTER_PORT in test_distributed (#13109) | 2018-10-25 08:51:34 -07:00 |
| `_utils.py` | Don't serialize hooks (#11705) | 2018-10-16 20:11:03 -07:00 |
| `abi-check.cpp` | Fixes for Torch Script C++ API (#11682) | 2018-09-17 09:54:50 -07:00 |
| `CMakeLists.txt` | Remove size() from BatchDataset and templatize IndexType (#12960) | 2018-11-05 17:13:09 -08:00 |
| `extension.h` | Restructure torch/torch.h and extension.h (#13482) | 2018-11-05 16:46:52 -08:00 |
| `functional.py` | Rename potrf to cholesky (#12699) | 2018-11-01 15:10:55 -07:00 |
| `hub.py` | Hub Implementation (#12228) | 2018-10-29 18:43:14 -07:00 |
| `random.py` | [ready] General documentation improvements (#5450) | 2018-03-08 13:21:12 -05:00 |
| `README.txt` | Make all of TH and THC C++. (#6913) | 2018-04-28 07:45:02 -04:00 |
| `script.h` | Windows CI integration for custom ops (#12928) | 2018-10-23 09:18:09 -07:00 |
| `serialization.py` | Reimplement storage slicing. (#11314) | 2018-09-06 16:11:59 -07:00 |
| `storage.py` | Use torch.save in `_StorageBase.__reduce__` (#9184) | 2018-07-06 07:24:53 -07:00 |
| `tensor.py` | Rename potrf to cholesky (#12699) | 2018-11-01 15:10:55 -07:00 |

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.