pytorch/torch
Mike Iovine 057a01556c [Static Runtime] Do not use variadic_sigrid_transforms_torch_bind if out variant is disabled (#66221)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66221

JIT doesn't have an implementation for this op, so we can only use it when out variants are enabled.

Reviewed By: hlu1

Differential Revision: D31445887

fbshipit-source-id: 4565ac4df751d8ee4052647574c43efa05ea1452
2021-10-07 06:57:17 -07:00
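
The gating the commit above describes can be pictured as a small guard in the
Static Runtime graph-optimization pass. Below is a minimal sketch, assuming a
boolean flag in the style of StaticModuleOptions::enable_out_variant and the
UseVariadicOp rewrite from the JIT passes; the op names follow the commit
title, but the surrounding code is illustrative, not the actual patch:

    #include <torch/csrc/jit/ir/ir.h>
    #include <torch/csrc/jit/passes/variadic_ops.h>

    using namespace torch::jit;

    // Sketch of the relevant slice of the optimization pass. Rewriting to
    // the variadic op is only legal when out variants are enabled, because
    // only Static Runtime's out-variant kernel implements the variadic form;
    // the plain JIT interpreter has no fallback for it.
    void optimizeSigridTransforms(
        std::shared_ptr<Graph>& graph,
        bool enable_out_variant /* from StaticModuleOptions (assumed) */) {
      if (enable_out_variant) {
        UseVariadicOp(
            graph,
            Symbol::fromQualString("fb::sigrid_transforms_torch_bind"),
            Symbol::fromQualString(
                "fb::variadic_sigrid_transforms_torch_bind"));
      }
      // Otherwise keep the original op, which the JIT interpreter can still
      // execute.
    }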
_C [PyTorch Edge][type] Add type check in compatibility api (#63129) 2021-10-06 02:23:44 -07:00
ao [quant] Add support for quantization of Embedding{Bag} in dynamic quant APIs (#65674) 2021-10-06 23:19:38 -07:00
autograd Refactor functional api vectorized jacobian to use batched grad parameter (#65566) 2021-10-03 19:55:08 -07:00
backends Add quantized::convtranspose2d (#63914) 2021-09-24 17:07:29 -07:00
contrib
cpu Allow disabling cache in autocast (automatic mixed precision) (#63552) 2021-09-08 07:47:18 -07:00
csrc [Static Runtime] Do not use variadic_sigrid_transforms_torch_bind if out variant is disabled (#66221) 2021-10-07 06:57:17 -07:00
cuda Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
distributed (torch/elastic) add fqdn hostname to error printout (#66182) 2021-10-07 01:40:02 -07:00
distributions
fft Implement n-dimensional hermitian FFTs (#63890) 2021-09-30 16:02:28 -07:00
for_onnx
futures
fx GELU Converter (#66008) 2021-10-06 22:25:43 -07:00
jit [PyTorch Edge][type] Add type check in compatibility api (#63129) 2021-10-06 02:23:44 -07:00
legacy
lib
linalg Array API: Add torch.linalg.matmul alias to torch.matmul (#63227) 2021-09-07 12:35:32 -07:00
multiprocessing Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
nn [quant] Add support for quantization of Embedding{Bag} in dynamic quant APIs (#65674) 2021-10-06 23:19:38 -07:00
onnx [ONNX] Enable export of __xor_ (#64042) (#64581) 2021-09-30 21:09:01 -07:00
optim Added validation of mode parameter in AveragedModel (#65921) 2021-10-03 08:42:28 -07:00
package [Codemod][FBSourceBlackLinter] Daily arc lint --take BLACK 2021-10-05 20:55:56 -07:00
profiler
quantization [quant] AO migration of the torch/quantization/quantize_fx.py and torch/quantization/fx/* (#65033) 2021-09-22 09:29:15 -07:00
sparse
special [special] special alias for softmax (#62251) 2021-10-01 03:55:32 -07:00
testing [quant] Add support for quantization of Embedding{Bag} in dynamic quant APIs (#65674) 2021-10-06 23:19:38 -07:00
utils Compile without -Wno-unused-variable (take 2) (#66041) 2021-10-04 20:39:39 -07:00
__config__.py
__future__.py
__init__.py [oss][pytorch] Add quint2x4 dtype (#65545) 2021-10-06 14:22:00 -07:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Support Union in TorchScript (#64234) 2021-09-03 06:12:24 -07:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_tensor_docs.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_tensor_str.py
_tensor.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_torch_docs.py Clarified difference in behavior of empty_strided and as_strided (#64568) 2021-09-30 17:27:59 -07:00
_utils_internal.py
_utils.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_VF.py
_vmap_internals.py Allow None to pass through for vmap (#65565) 2021-10-03 19:53:49 -07:00
abi-check.cpp
autocast_mode.py Allow disabling cache in autocast (automatic mixed precision) (#63552) 2021-09-08 07:47:18 -07:00
CMakeLists.txt Don't build shared library for AOT Compiler (#66227) 2021-10-06 15:57:32 -07:00
custom_class_detail.h
custom_class.h
deploy.h
extension.h
functional.py implement "xy" indexing for torch.meshgrid (#62724) 2021-09-17 08:31:17 -07:00
hub.py Torchhub: More robust assumption regarding main or master branch (#64364) 2021-09-20 10:36:13 -07:00
library.h
overrides.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
py.typed
quasirandom.py
random.py Adds return type annotation for fork_rng function (#63724) 2021-08-27 09:03:40 -07:00
README.txt
script.h
serialization.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
storage.py [oss][pytorch] Add quint2x4 dtype (#65545) 2021-10-06 14:22:00 -07:00
torch_version.py Added more version comparison operations (#63848) 2021-09-09 10:30:20 -07:00
types.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public C headers, but their contents are *internal implementation
details* that should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
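
To make the distinction concrete, here is a hedged sketch of the two styles
of access.  THFloatTensor_resize1d is representative of the public TH C API
of that era, while the struct field poked in the second function is
hypothetical and stands in for any direct access to TH internals:

    #include <TH/THTensor.h>       // public C API: go through functions
    // #include <TH/THTensor.hpp>  // internal C++ header: avoid using it

    // Preferred: manipulate the tensor via the public functional interface
    // declared in THTensor.h.
    void resize_via_public_api(THFloatTensor* t) {
      THFloatTensor_resize1d(t, 10);
    }

    // Abstraction violation: reaching into the struct's guts requires the
    // .hpp header and couples this code to TH's internal layout, which is
    // why such sites are marked and must be refactored together with
    // THTensor itself.
    void resize_by_poking_fields(THFloatTensor* t) {
      // See Note [TH abstraction violation]
      t->size[0] = 10;  // hypothetical direct field access, for illustration
    }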