pytorch/torch
albanD cd4aa9c95c Fix inplace check logic to be triggered when written to Tensor does not require gradients (#46296)
Summary:
Fix https://github.com/pytorch/pytorch/issues/46242

This ensures that `check_inplace()` runs the proper checks even if the Tensor being modified in place does not require gradients, since the Tensor written into it might require gradients, which would make this in-place modification actually differentiable.
This contains:
- Codegen changes to tell `check_inplace()` whether the in-place operation will be differentiable
- Changes in `handle_view_on_rebase` to work properly even when called for an input that does not require gradients (previously assumed to require them)
- Corresponding tests (both the warnings and the error raise internal assert errors without this fix)
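A minimal sketch of the situation this fix covers (assumed reproduction based on the description above, not taken from the PR's tests): the destination tensor does not require gradients, but the source written into it does, so the in-place write must still go through the differentiability checks and gradients must flow back to the source.

```python
import torch

# Destination does not require grad; source does.
base = torch.zeros(3)
src = torch.ones(3, requires_grad=True)

# In-place write: even though `base` itself did not require gradients,
# copying a grad-requiring tensor into it makes the result differentiable,
# so check_inplace() must still run its checks here.
base.copy_(src)

out = base.sum()
out.backward()
# Gradients flow back through the in-place copy to `src`.
print(src.grad)
```

After the `copy_`, `base.requires_grad` is `True` and `src.grad` is a tensor of ones, confirming the in-place modification was treated as differentiable.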

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46296

Reviewed By: ezyang

Differential Revision: D24903770

Pulled By: albanD

fbshipit-source-id: 74e65dad3d2e3b9f762cbb7b39f92f19d9a0b094
2020-11-16 08:06:06 -08:00
_C Add type informations to torch.cuda (#47134) 2020-11-13 21:34:35 -08:00
autograd Add max_src_column_width to autograd profiler (#46257) 2020-11-10 18:51:39 -08:00
backends PyTorch NNAPI integration prototype (#46780) 2020-11-05 21:31:01 -08:00
contrib Fix exception chaining in torch/ (#43836) 2020-08-31 20:26:23 -07:00
csrc Fix inplace check logic to be triggered when written to Tensor does not require gradients (#46296) 2020-11-16 08:06:06 -08:00
cuda Add type informations to torch.cuda (#47134) 2020-11-13 21:34:35 -08:00
distributed Revert D24524219: Remove balance and devices parameter from Pipe. 2020-11-12 19:31:19 -08:00
distributions Annotate torch.nn.cpp (#46490) 2020-10-23 17:40:32 -07:00
fft torch.fft: Two dimensional FFT functions (#45164) 2020-10-17 16:23:06 -07:00
for_onnx
futures fix #45552 - adding add_done_callback(fn) to torch.futures.Future (#45675) 2020-10-13 07:47:36 -07:00
fx change file name to snake style (#47914) 2020-11-14 01:29:25 -08:00
jit Revert D24740727: torch.Assert: make it torch.jit.script'able 2020-11-13 08:31:40 -08:00
legacy
lib [resubmit] Providing more information while crashing process in async error handling (#47246) 2020-11-13 20:11:06 -08:00
linalg Added linalg.cholesky (#46083) 2020-11-13 16:50:40 -08:00
multiprocessing Cleanup unused code for Python < 3.6 (#47822) 2020-11-13 21:37:01 -08:00
nn Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949) 2020-11-14 08:40:30 -08:00
onnx Missing curly bracket. (#47855) 2020-11-13 21:17:24 -08:00
optim Revert D24262885: [pytorch][PR] Added foreach_zero_ API 2020-10-28 06:48:59 -07:00
package [packaging] simpler dependency plotting (#45686) 2020-10-06 23:40:00 -07:00
quantization Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949) 2020-11-14 08:40:30 -08:00
sparse Revised sparse tensor documentation. (#45400) 2020-10-22 02:07:54 -07:00
testing Pass in smaller timeout into init_process_group for distributed_test (#47896) 2020-11-14 13:38:20 -08:00
utils Cleanup unused code for Python < 3.6 (#47822) 2020-11-13 21:37:01 -08:00
__config__.py
__future__.py
__init__.py Revert D24891767: rename torch.Assert to torch._assert 2020-11-13 08:35:05 -08:00
_appdirs.py Delete Python <= 3.5 specific checks from the code (#39879) 2020-06-15 08:16:06 -07:00
_autograd_functions.py make torch.lu differentiable. (#46284) 2020-10-23 10:13:46 -07:00
_classes.py [BE] Use f-string in various Python functions (#44161) 2020-09-04 07:38:25 -07:00
_jit_internal.py [JIT] add support for torch.jit.Final in python 3.6 (#47393) 2020-11-06 14:30:44 -08:00
_linalg_utils.py Support custom exception message (#41907) 2020-08-01 13:03:45 -07:00
_lobpcg.py Backward support for generalized eigenvalue solver with LOBPCG in forward [only k-rank SYMEIG case] (#43002) 2020-09-28 07:22:35 -07:00
_lowrank.py Allow large inputs to svd_lowrank. Fix inaccuracy in torch.svd docs. (#47440) 2020-11-09 21:04:48 -08:00
_namedtensor_internals.py
_ops.py move misc implementation out of jit/__init__.py (#41154) 2020-07-13 16:59:55 -07:00
_six.py Delete raise_from from torch._six (#43981) 2020-09-01 15:46:18 -07:00
_storage_docs.py Python API for Complex Storage and storage copy logic (#35771) 2020-05-01 11:47:22 -07:00
_tensor_docs.py Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad) (#47225) 2020-11-09 08:31:01 -08:00
_tensor_str.py [quant] Create PerRowQuantizer for floating point scale and zero_point (#42612) 2020-08-13 11:20:53 -07:00
_torch_docs.py Revert D24543682: [pytorch][PR] Added support for complex input for torch.lu_solve 2020-11-13 08:24:53 -08:00
_utils_internal.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_utils.py [caffe2][torch] correctly re-raise Manifold StorageException 2020-08-28 11:41:10 -07:00
_VF.py Address JIT/Mypy issue with torch._VF (#43454) 2020-08-25 09:23:54 -07:00
_vmap_internals.py Allow vmap to accept nested python data structures as inputs (#46289) 2020-10-20 07:52:17 -07:00
abi-check.cpp
CMakeLists.txt make a way to disable callgrind (#46116) 2020-10-13 16:18:04 -07:00
custom_class_detail.h Test BC for built-in torchbind methods (#38560) 2020-05-15 19:06:59 -07:00
custom_class.h [TorchBind] Support using lambda function as TorchBind constructor (#47819) 2020-11-12 09:29:34 -08:00
extension.h
functional.py Revert "Fixed einsum compatibility/performance issues (#46398)" (#47821) 2020-11-12 08:11:40 -08:00
hub.py Cleanup unused code for Python < 3.6 (#47822) 2020-11-13 21:37:01 -08:00
library.h Rationalize inlining of kernels into the unboxing wrapper (#42845) 2020-10-15 04:02:51 -07:00
overrides.py Added linalg.cholesky (#46083) 2020-11-13 16:50:40 -08:00
py.typed remediation of S205607 2020-07-17 17:19:47 -07:00
quasirandom.py Type check quasirandom (#45434) 2020-09-28 16:49:38 -07:00
random.py Fix manual seed to unpack unsigned long (#42206) 2020-08-11 18:05:34 -07:00
README.txt
script.h
serialization.py Use storage.cpu() for moving storage to CPU in serialization. (#46028) 2020-10-13 12:51:10 -07:00
storage.py Add type informations to torch/storage.py (#46876) 2020-11-06 11:34:10 -08:00
tensor.py Fix output type of torch.max for Tensor subclasses. (#47110) 2020-11-10 19:45:36 -08:00
types.py Enable torch.tensor typechecks (#45077) 2020-09-24 08:22:06 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.