pytorch/torch
Zhengxu Chen 12daa4f663 [jit][edge] Enable CALL instruction in lite interpreter. (#65964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65964

ghstack-source-id: 141425519

Test Plan: buck run xplat/caffe2:test_lite_interpreter

Reviewed By: cccclai

Differential Revision: D31326149

fbshipit-source-id: 8a599d92f3fa4e6c125100adb36d89592e71e547
2021-10-25 14:44:33 -07:00
_C [ONNX] Deprecate fold_if pass (#65697) (#66145) 2021-10-22 13:46:20 -07:00
_masked Revert D31838513: Strided masked reduction: mean. 2021-10-21 18:32:42 -07:00
ao [quant] Fix docs build (#67169) 2021-10-25 08:02:26 -07:00
autograd [doc] typo (#66754) 2021-10-18 10:33:56 -07:00
backends Add quantized::convtranspose2d (#63914) 2021-09-24 17:07:29 -07:00
contrib
cpu
csrc [jit][edge] Enable CALL instruction in lite interpreter. (#65964) 2021-10-25 14:44:33 -07:00
cuda Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
distributed [ez] [Docs] Missing import in example for post_local_sgd (#67047) 2021-10-24 01:44:06 -07:00
distributions Prefer mT and mH over transpose(-2, -1) and transpose(-2, -1).conj() (#64181) 2021-10-18 13:02:25 -07:00
fft C++ API and docs for hfftn (#66127) 2021-10-07 12:48:36 -07:00
for_onnx
futures
fx [quant][graphmode][fx] Move quant-fx2trt unittests to test_quantize_fx.py (#67064) 2021-10-22 14:36:36 -07:00
jit [JIT] Freeze allows preservation of submodule attributes (#66102) 2021-10-25 07:56:20 -07:00
legacy
lib Update CMake and use native CUDA language support (#62445) 2021-10-11 09:05:48 -07:00
linalg Create linalg.matrix_exp (#62715) 2021-10-19 09:07:15 -07:00
multiprocessing Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
nn Accept 0-dim channel inputs in convolution layer (#66256) 2021-10-25 08:12:29 -07:00
onnx [ONNX] Symbolic: dynamic input for OneHot, bool for Einsum (#65940) (#66147) 2021-10-22 13:46:24 -07:00
optim [Pytorch][Bootcamp] Add fixes and vanilla testing for Adagrad non-vectorized and vectorized optimizers to handle complex numbers (#66671) 2021-10-25 10:13:21 -07:00
package [Codemod][FBSourceBlackLinter] Daily arc lint --take BLACK 2021-10-05 20:55:56 -07:00
profiler
quantization fx quant: enable linear-bn1d fusion for PTQ (#66484) 2021-10-18 10:14:28 -07:00
sparse
special [special] special alias for softmax (#62251) 2021-10-01 03:55:32 -07:00
testing Improve assert failure message in test_get_torch_func_signature_exhaustive (#67039) 2021-10-25 14:20:38 -07:00
utils Fix ArchiveReader to keep archive path (#67035) 2021-10-22 06:34:39 -07:00
__config__.py
__future__.py
__init__.py Strided masked reductions: sum, amax. Testing of masked reductions. (#65990) 2021-10-18 11:10:32 -07:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py
_linalg_utils.py
_lobpcg.py Prefer mT and mH over transpose(-2, -1) and transpose(-2, -1).conj() (#64181) 2021-10-18 13:02:25 -07:00
_lowrank.py Prefer mT and mH over transpose(-2, -1) and transpose(-2, -1).conj() (#64181) 2021-10-18 13:02:25 -07:00
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_tensor_docs.py Revert D31474901: [pytorch][PR] [numpy] add torch.argwhere 2021-10-21 15:50:54 -07:00
_tensor_str.py
_tensor.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_torch_docs.py Revert D31474901: [pytorch][PR] [numpy] add torch.argwhere 2021-10-21 15:50:54 -07:00
_utils_internal.py
_utils.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_VF.py
_vmap_internals.py Allow None to pass through for vmap (#65565) 2021-10-03 19:53:49 -07:00
abi-check.cpp
autocast_mode.py
CMakeLists.txt Do not enforce unused vars rule for torch_deploy (#66447) 2021-10-11 15:24:19 -07:00
custom_class_detail.h
custom_class.h [Pytorch Edge] Extend runtime compatibility to custom classes (#66972) 2021-10-25 13:42:26 -07:00
deploy.h
extension.h
functional.py Fix linter (#67122) 2021-10-22 16:02:36 -07:00
hub.py Torchhub: More robust assumption regarding main or master branch (#64364) 2021-09-20 10:36:13 -07:00
library.h
overrides.py Implement histogramdd on CPU (#65318) 2021-10-21 16:09:31 -07:00
py.typed
quasirandom.py
random.py
README.txt
script.h
serialization.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
storage.py [oss][pytorch] Add quint2x4 dtype (#65545) 2021-10-06 14:22:00 -07:00
torch_version.py
types.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.