pytorch/torch
Sean Silva 640c1be900 Shape functions: Use friendlier clamping pattern
When start_val == 0, the comparison `start_val > self[dim]` can be folded easily (0 is never strictly greater than the result of `self[dim]`), but `start_val >= self[dim]` cannot. Since we assign `start_val = self[dim]` in the body anyway, the two are equivalent.
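A minimal sketch of the clamping pattern described above; the function name and standalone form are illustrative, not the actual torch/csrc shape-function code:

```python
def clamp_start(start_val: int, dim_size: int) -> int:
    # Using `>` instead of `>=` lets a constant-folder simplify the
    # branch when start_val == 0: zero can never be strictly greater
    # than a non-negative dim_size, so the comparison folds to False.
    if start_val > dim_size:
        start_val = dim_size
    # When start_val == dim_size the branch is skipped, but the body
    # would only assign start_val = dim_size anyway, so `>` and `>=`
    # are behaviorally equivalent here.
    return start_val
```

With `start_val == 0` the folded branch disappears entirely, which is what makes this variant friendlier to the shape-function optimizer.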

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74980
Approved by: https://github.com/eellison
2022-04-05 13:42:08 +00:00
_C Dynamo+LTC: merging related code from staging branch to master (#75046) 2022-04-02 00:23:15 +00:00
_C_flatbuffer [4/5]Testing jit module in flatbuffer in Python. (#74387) 2022-03-24 23:29:47 +00:00
_lazy Dynamo+LTC: merging related code from staging branch to master (#75046) 2022-04-02 00:23:15 +00:00
_masked Revert "Support masked sum on CSR tensors [CPU, CUDA]" 2022-04-04 22:06:19 +00:00
amp add autocast cpu doc 2022-03-22 02:02:43 +00:00
ao [quant][refactor] Refactor find_matches for easier future extension (#74878) 2022-04-05 06:53:35 +00:00
autograd record_function: update to use custom_class API 2022-03-30 15:57:28 +00:00
backends Resolve int[]? arguments to new OptionalIntArrayRef class 2022-03-26 01:45:50 +00:00
contrib
cpu add autocast cpu doc 2022-03-22 02:02:43 +00:00
csrc Shape functions: Use friendlier clamping pattern 2022-04-05 13:42:08 +00:00
cuda Virtualize <type>Storage classes (#66970) 2022-03-22 23:44:48 +00:00
distributed [Model Averaging] Code simplification for _find_process_group function (#75007) 2022-04-04 20:31:22 +00:00
distributions Improve numerical stability of torch.distributions.wishart.Wishart (#72993) 2022-03-15 18:30:08 +00:00
fft
futures
fx [fx][1/2] add PassManager and refactor AFG/AGM (#74972) 2022-04-01 09:12:47 +00:00
jit Revert "Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)" 2022-03-31 04:17:33 -07:00
legacy
lib
linalg
monitor
multiprocessing Virtualize <type>Storage classes (#66970) 2022-03-22 23:44:48 +00:00
nested [PyTorch] Delete NestedTensor Python wrapper (#74691) 2022-03-29 19:13:40 +00:00
nn [ao][sparsity] make sparsity and PTQ compose (#74845) 2022-04-05 03:35:41 +00:00
onnx [ONNX] Fix 1d case flatten export 2022-03-23 23:50:51 +00:00
optim Fix casting bug in state_step for optimizers when loading state dict 2022-04-05 01:27:18 +00:00
package [torch.package] add utility for determining where bad modules may come from (#74998) 2022-03-31 23:46:10 +00:00
profiler [pytorch profiler] enable iteration tracking for kineto (#72292) 2022-03-23 02:31:45 +00:00
quantization [quant] Rename _convert_do_not_use.py to convert.py (#74322) 2022-03-17 18:57:08 +00:00
sparse Add private conversion function from CSR to block CSR 2022-03-25 21:22:15 +00:00
special Implement torch.special.log_ndtr 2022-03-29 23:13:37 +00:00
testing Extend CSR constructor to support batched indices and values 2022-04-04 22:09:44 +00:00
utils [DataPipe] apply dill serialization for _Demux and add cache to traverse 2022-04-04 19:45:14 +00:00
__config__.py
__future__.py
__init__.py [PyTorch] Delete NestedTensor Python wrapper (#74691) 2022-03-29 19:13:40 +00:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Revert "Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)" 2022-03-31 04:17:33 -07:00
_linalg_utils.py
_lobpcg.py [codemod][type-comments] Convert type comments in _lobpcg.py (#73088) 2022-03-08 22:17:33 +00:00
_lowrank.py
_namedtensor_internals.py
_ops.py Update __torch_dispatch__ to return op overload instead of the opoverload packet function (#72673) 2022-03-07 22:38:42 +00:00
_python_dispatcher.py Reland: "free up dispatch key space (in C++)" (#74963) 2022-03-31 21:52:38 +00:00
_six.py
_sources.py Revert "Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)" 2022-03-31 04:17:33 -07:00
_storage_docs.py
_tensor_docs.py [BC-breaking] Use ScatterGatherKernel for scatter_reduce (CPU-only) (#74226) 2022-04-01 05:57:45 +00:00
_tensor_str.py [PyTorch] Move NestedTensor printing to _tensor_str.py (#74000) 2022-03-17 18:04:50 +00:00
_tensor.py Restore TestTorchFunctionOverride 2022-04-04 01:26:20 +00:00
_torch_docs.py [BC-breaking] Use ScatterGatherKernel for scatter_reduce (CPU-only) (#74226) 2022-04-01 05:57:45 +00:00
_utils_internal.py
_utils.py Fix serialization and deepcopying for wrapper subclasses 2022-02-24 18:21:25 +00:00
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Revert "Allow specifying tags for aten operators in native_functions.yaml" 2022-03-28 18:04:38 +00:00
custom_class_detail.h
custom_class.h
deploy.h
extension.h
functional.py stft: Implement center padding in ATen 2022-04-01 01:14:52 +00:00
hub.py torch.hub security improvement: add new trust_repo parameter 2022-04-05 09:29:25 +00:00
library.h Eliminate unused parameters in PyTorch (#73749) 2022-03-04 02:31:37 +00:00
overrides.py Restore TestTorchFunctionOverride 2022-04-04 01:26:20 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py
storage.py Virtualize <type>Storage classes (#66970) 2022-03-22 23:44:48 +00:00
torch_version.py
types.py Introducing SymInt to Pytorch (for tracing size arithmetic) (master rebase) (#74861) 2022-03-31 21:59:59 +00:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.