pytorch/torch
BowenBao cf70466970 [ONNX] Improve scope inference in function extraction
Cover more cases of scope inference where consecutive nodes lack valid scope information. These nodes are usually created in a pass whose author forgot to assign a meaningful scope to them.
* One rule of `InferScope` is to check whether the current node's output users all share the same scope. If a user node is missing its scope as well, `InferScope` runs on it recursively; because the graph is SSA, the recursion depth is finite. (A sketch of this rule follows the list below.)
* Fix one pass that failed to assign scope information to a new node.
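For illustration, here is a minimal sketch of that recursion. The `Node` and scope types below are simplified stand-ins, not the actual `torch::jit` IR classes used by the pass:

```cpp
// Sketch of the InferScope rule: a node's scope is inferred from the
// common scope of its outputs' users, recursing into unscoped users.
#include <optional>
#include <string>
#include <vector>

struct Node {
  std::optional<std::string> scope;  // empty if a pass forgot to assign one
  std::vector<Node*> users;          // consumers of this node's outputs
};

// Returns the common scope of the node's users, or nullopt if it cannot
// be determined. Recursion terminates because the graph is SSA: use-def
// chains are acyclic, so the user depth is finite.
std::optional<std::string> InferScope(Node* node) {
  std::optional<std::string> common;
  for (Node* user : node->users) {
    if (!user->scope) {
      user->scope = InferScope(user);  // recurse into unscoped users
    }
    if (!user->scope) {
      return std::nullopt;             // still unknown; give up
    }
    if (!common) {
      common = user->scope;
    } else if (*common != *user->scope) {
      return std::nullopt;             // users disagree; cannot infer
    }
  }
  return common;
}

int main() {
  Node user_a{std::string("foo/bar"), {}};
  Node user_b{std::string("foo/bar"), {}};
  Node orphan{std::nullopt, {&user_a, &user_b}};
  orphan.scope = InferScope(&orphan);  // both users agree: "foo/bar"
  return orphan.scope && *orphan.scope == "foo/bar" ? 0 : 1;
}
```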
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71897
2022-01-31 23:58:53 +00:00
_C torch/monitor: merge Interval and FixedCount stats (#72009) 2022-01-30 23:21:59 +00:00
_masked Strided masked var. (#68738) 2021-12-01 19:19:37 -08:00
ao [reland][bc-breaking][quant][be] Refactor fuser_method to include is_qat argument (#71956) 2022-01-31 23:02:22 +00:00
autograd Update docs for forward AD and make them public (#71643) 2022-01-28 03:33:00 +00:00
backends NNAPI: quant logistic fix (#70847) 2022-01-07 13:36:33 -08:00
contrib
cpu
csrc [ONNX] Improve scope inference in function extraction 2022-01-31 23:58:53 +00:00
cuda Document torch.cuda.ExternalStream, torch.cuda.caching_allocator_alloc and torch.cuda.caching_allocator_delete (#70126) 2022-01-12 15:44:40 -08:00
distributed Back out "Create torch.distributed.shard package." (#72062) 2022-01-31 18:29:27 +00:00
distributions TransformedDistribution.icdf: Fix erroneous icdf ValueError (#71393) 2022-01-28 00:34:08 +00:00
fft
for_onnx
futures Update min python version to 3.7 in setup.py and mypy configs (#71494) 2022-01-20 00:03:57 +00:00
fx [acc_ops] Move slice_tensor to consider single dim at a time (#5906) 2022-01-31 23:37:36 +00:00
jit Move bytecode generation to python (#71681) 2022-01-28 02:33:00 +00:00
legacy
lib
linalg Implement forward AD for linalg.svd and improve svd_backward (#70253) 2022-01-27 18:38:30 +00:00
monitor torch/monitor: TensorboardEventHandler (#71658) 2022-01-27 08:33:55 +00:00
multiprocessing make ProcessException pickleable (#70118) 2021-12-30 09:09:55 -08:00
nn Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation 2022-01-31 17:44:19 +00:00
onnx [ONNX] Improve scope inference in function extraction 2022-01-31 23:58:53 +00:00
optim Remove state_dict from AveragedModel and use buffers instead (#71763) 2022-01-26 13:31:30 +00:00
package [Codemod][FBSourceBlackLinter] Daily arc lint --take BLACK 2022-01-11 04:20:46 -08:00
profiler Add low level torch.profiler.kineto_profile base class (#63302) 2021-12-14 14:47:43 -08:00
quantization [quant][fx][graphmode][be] Change the type for output of convert to be torch.nn.Module (#69959) 2021-12-29 20:33:32 -08:00
sparse Sparse CSR CUDA: Add torch.sparse.sampled_addmm (#68007) 2021-11-29 15:43:29 -08:00
special
testing [reland][bc-breaking][quant][be] Refactor fuser_method to include is_qat argument (#71956) 2022-01-31 23:02:22 +00:00
utils Propagate full autocast state to CheckpointFunction's forward-inside-backward (#71169) 2022-01-27 00:31:53 +00:00
__config__.py
__future__.py
__init__.py expose return_types in Python (#66614) 2021-12-06 09:05:29 -08:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Back out "Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions" 2021-12-27 09:11:46 -08:00
_linalg_utils.py
_lobpcg.py Fix trivial typo at the doc of torch.lobpcg (#71464) 2022-01-20 00:07:39 +00:00
_lowrank.py
_namedtensor_internals.py
_ops.py backout D33469839 (#71443) 2022-01-18 23:51:51 +00:00
_python_dispatcher.py
_six.py Update min python version to 3.7 in setup.py and mypy configs (#71494) 2022-01-20 00:03:57 +00:00
_sources.py
_storage_docs.py
_tensor_docs.py Update docs for torch.real to indicate that it's supported for real tensors (#71962) 2022-01-28 18:46:40 +00:00
_tensor_str.py fixed compilations on xla tensor print (#71147) 2022-01-27 02:28:19 +00:00
_tensor.py Document torch.cuda.ExternalStream, torch.cuda.caching_allocator_alloc and torch.cuda.caching_allocator_delete (#70126) 2022-01-12 15:44:40 -08:00
_torch_docs.py Implement derivatives for torch.remainder and torch.fmod wrt the second argument and update the docs (#69908) 2022-01-27 23:13:16 +00:00
_utils_internal.py
_utils.py
_VF.py
_vmap_internals.py
abi-check.cpp
autocast_mode.py
CMakeLists.txt [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer (#70201) 2022-01-12 16:30:39 -08:00
custom_class_detail.h
custom_class.h [jit] Decouple ivalue.h from jit_type.h (#70119) 2022-01-07 18:34:17 -08:00
deploy.h
extension.h
functional.py Add linalg.lu_factor (#66933) 2022-01-05 20:32:12 -08:00
hub.py
library.h Codegen: python_torch_functions only include relevant operators (#68693) 2022-01-21 15:37:06 +00:00
overrides.py Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation 2022-01-31 17:44:19 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py expose return_types in Python (#66614) 2021-12-06 09:05:29 -08:00
script.h
serialization.py Avoid dtype mismatch error in torch.save if storages are unallocated (#68787) 2021-11-24 09:51:29 -08:00
storage.py
torch_version.py Lazy import packaging in torch_version (#71345) 2022-01-18 22:12:41 +00:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
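To make the boundary concrete, here is a schematic sketch of the split between the public C API and the internal hpp layout. The names follow TH's conventions but are illustrative, not the exact symbols:

```cpp
#include <cstdint>

// THTensor.hpp style (internal C++ header): the real struct layout,
// which external clients should not touch.
struct THTensor {
  int64_t dim_;  // internal field; may change without notice
};

// THTensor.h style (public header): clients see an opaque handle plus
// accessor functions like this one.
int64_t THTensor_nDimension(const THTensor* self) {
  return self->dim_;  // only the implementation reads the field
}

int main() {
  THTensor t{4};
  // External code should go through the public accessor ...
  int64_t d = THTensor_nDimension(&t);
  // ... whereas the sites marked by this note effectively do `t.dim_`
  // from torch/csrc, which is the abstraction violation.
  return d == 4 ? 0 : 1;
}
```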