pytorch/torch
David Riazati 4cdcbbf410 Use nn module tests in test_jit (#14238)
Summary:
This PR adds weak modules for all activation modules, and uses the `test_nn` module tests to exercise weak modules that have been annotated with `weak_module` (and are therefore registered in `torch._jit_internal._weak_types`).

Also depends on #14379
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14238

Differential Revision: D13192230

Pulled By: driazati

fbshipit-source-id: 36488960b6c91448b38c0fa65422539a93af8c5e
2018-11-27 21:19:51 -08:00
_thnn Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
autograd Allow cooperative structured objects to be passed modules in tracing (#13961) 2018-11-16 14:02:13 -08:00
backends Add support for torch.backends.cudnn.enabled (#13057) 2018-10-31 09:31:09 -07:00
contrib Remove stages from IR, they are not longer used 2018-10-05 13:58:15 -07:00
csrc Add boolean dispatch for function overloading (#14425) 2018-11-27 19:36:47 -08:00
cuda Give broadcast_coalesced tensors different version counters (#13594) 2018-11-07 21:49:35 -08:00
distributed Tensor type checking and informative error messages for torch.distributed (#14204) 2018-11-19 18:30:54 -08:00
distributions Batched cholesky decomposition (#14017) 2018-11-17 10:49:15 -08:00
for_onnx
jit Add boolean dispatch for function overloading (#14425) 2018-11-27 19:36:47 -08:00
legacy Remove torch/legacy (#11823) 2018-09-20 14:00:54 -07:00
lib Barrier synchronizes with prior work before completing (#14386) 2018-11-27 10:46:42 -08:00
multiprocessing Fixed torch.multiprocessing.spawn for not being able to spawn like dataloader workers (#14391) 2018-11-27 12:37:41 -08:00
nn Use nn module tests in test_jit (#14238) 2018-11-27 21:19:51 -08:00
onnx Speed-up "advanced" indexing operations (#13420) 2018-11-27 15:23:59 -08:00
optim Add name for required optimizer parameter. (#13202) 2018-10-29 15:02:21 -07:00
sparse fix doc for sparse.addmm (#14403) 2018-11-27 10:24:18 -08:00
testing Codemod to update our codebase to 0.4 standard (#6641) 2018-04-17 22:06:54 -04:00
utils Allow building libraries with setuptools that dont have abi suffix (#14130) 2018-11-27 17:35:53 -08:00
__init__.py Skip all builtin functions when importing names from _C._VariableFunctions to torch (#13884) 2018-11-15 13:23:57 -08:00
_jit_internal.py Add boolean dispatch for function overloading (#14425) 2018-11-27 19:36:47 -08:00
_ops.py Use realpath for loaded libraries (#13936) 2018-11-15 11:23:20 -08:00
_six.py Add weak script modules (#12682) 2018-10-23 09:06:02 -07:00
_storage_docs.py [ready] General documentation improvements (#5450) 2018-03-08 13:21:12 -05:00
_tensor_docs.py allow empty index for scatter_* methods (#14077) 2018-11-19 09:50:21 -08:00
_tensor_str.py Fix print precision and match numpy behavior (#12746) 2018-10-24 18:12:51 -07:00
_torch_docs.py roll along multiple dimensions 2018-11-27 20:32:30 -08:00
_utils_internal.py Use fixed MASTER_PORT in test_distributed (#13109) 2018-10-25 08:51:34 -07:00
_utils.py Don't serialize hooks (#11705) 2018-10-16 20:11:03 -07:00
abi-check.cpp Fixes for Torch Script C++ API (#11682) 2018-09-17 09:54:50 -07:00
CMakeLists.txt when BUILD_CAFFE2_OPS is OFF, torch-python needs a direct dep on nccl (#14430) 2018-11-27 15:53:31 -08:00
extension.h Restructure torch/torch.h and extension.h (#13482) 2018-11-05 16:46:52 -08:00
functional.py Add missing space in stft doc 2018-11-16 09:57:06 -08:00
hub.py Hub Implementation (#12228) 2018-10-29 18:43:14 -07:00
random.py [ready] General documentation improvements (#5450) 2018-03-08 13:21:12 -05:00
README.txt Make all of TH and THC C++. (#6913) 2018-04-28 07:45:02 -04:00
script.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
serialization.py Reimplement storage slicing. (#11314) 2018-09-06 16:11:59 -07:00
storage.py Use torch.save in _StorageBase.__reduce__ (#9184) 2018-07-06 07:24:53 -07:00
tensor.py Rename potrf to cholesky (#12699) 2018-11-01 15:10:55 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, external
clients should use the public functions (in headers like `THTensor.h`,
NOT `THTensor.hpp`) to manipulate these structs.  However, there are a
few places in torch/csrc where we violate this abstraction.  Each such
site is marked with a pointer to this note, and each will have to be
refactored when we refactor the guts of THTensor and related structures.