Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are used internally by
TH/THC itself, and they expose *internal implementation details* (such as
struct layouts) whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.