pytorch/torch
Philip Meier 802dd2b725 change sparse COO comparison strategy in assert_close (#68728)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68728

This removes `assert_close`'s ability to `.coalesce()` the tensors internally. Additionally, we now also check `.sparse_dim()`. Sparse team: please make sure this is the behavior you want for all sparse COO comparisons in the future. #67796 will temporarily keep BC by always coalescing, but in the future `TestCase.assertEqual` will no longer do that.
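The stricter strategy can be sketched in plain Python. This is a hypothetical toy model, not the actual `torch.testing` implementation: COO tensors are represented here as `(sparse_dim, indices, values)` tuples, where `indices` is a list of coordinate tuples and `values` the matching list of entries. The key point is that duplicates are no longer summed together (no coalescing) before comparison, and the number of sparse dimensions must agree.

```python
def assert_coo_close(actual, expected):
    """Compare two toy COO tensors given as (sparse_dim, indices, values).

    Unlike the old behavior, duplicate index entries are NOT coalesced
    (summed together) before comparison: the raw indices and values must
    match element for element, and sparse_dim must agree.
    """
    a_dim, a_idx, a_val = actual
    e_dim, e_idx, e_val = expected
    if a_dim != e_dim:
        raise AssertionError(f"sparse_dim mismatch: {a_dim} != {e_dim}")
    if a_idx != e_idx:
        raise AssertionError("indices mismatch (tensors are compared uncoalesced)")
    if a_val != e_val:
        raise AssertionError("values mismatch")
```

For example, `(1, [(0,), (0,)], [1, 2])` and `(1, [(0,)], [3])` coalesce to the same tensor, but under this strategy comparing them raises, since the uncoalesced representations differ.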

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D33542996

Pulled By: mruberry

fbshipit-source-id: a8d2322c6ee1ca424e3efb14ab21787328cf28fc
2022-01-12 06:43:50 -08:00
_C Enable nested default hooks (#70932) 2022-01-11 15:03:49 -08:00
_masked Strided masked var. (#68738) 2021-12-01 19:19:37 -08:00
ao [quant] fix reduce_range warning (#71027) 2022-01-10 20:05:36 -08:00
autograd Enable nested default hooks (#70932) 2022-01-11 15:03:49 -08:00
backends NNAPI: quant logistic fix (#70847) 2022-01-07 13:36:33 -08:00
contrib
cpu
csrc Fix for tensor in list return added to wildcard set (#71170) 2022-01-11 22:12:39 -08:00
cuda Add nvidia-smi memory and utilization as native Python API (#69104) 2021-12-08 10:33:23 -08:00
distributed fix typo in debugging_hooks.py (#70956) 2022-01-10 12:59:42 -08:00
distributions [Reinstate] Wishart distribution (#70377) 2021-12-30 11:41:46 -08:00
fft
for_onnx
futures torch.futures doc formatting (#70630) 2022-01-07 15:22:22 -08:00
fx Per-overload torch.ops API (#67254) 2022-01-10 17:29:06 -08:00
jit Per-overload torch.ops API (#67254) 2022-01-10 17:29:06 -08:00
legacy
lib
linalg Remove unnecessary sync in linalg.det (#67014) 2022-01-05 20:33:34 -08:00
multiprocessing make ProcessException pickleable (#70118) 2021-12-30 09:09:55 -08:00
nn Fix docstring for nn.Hardshrink (#71012) 2022-01-07 18:56:47 -08:00
onnx Revert D32994274: [ONNX] Link to the wiki (#68505) 2022-01-11 07:40:14 -08:00
optim fix loading of older models that don't have maximize (#71023) 2022-01-10 06:01:24 -08:00
package [Codemod][FBSourceBlackLinter] Daily arc lint --take BLACK 2022-01-11 04:20:46 -08:00
profiler Add low level torch.profiler.kineto_profile base class (#63302) 2021-12-14 14:47:43 -08:00
quantization [quant][fx][graphmode][be] Change the type for output of convert to be torch.nn.Module (#69959) 2021-12-29 20:33:32 -08:00
sparse Sparse CSR CUDA: Add torch.sparse.sampled_addmm (#68007) 2021-11-29 15:43:29 -08:00
special
testing change sparse COO comparison strategy in assert_close (#68728) 2022-01-12 06:43:50 -08:00
utils Deprecating routed decoder (#70990) 2022-01-11 06:56:48 -08:00
__config__.py
__future__.py
__init__.py expose return_types in Python (#66614) 2021-12-06 09:05:29 -08:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Back out "Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions" 2021-12-27 09:11:46 -08:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_namedtensor_internals.py
_ops.py Per-overload torch.ops API (#67254) 2022-01-10 17:29:06 -08:00
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor_docs.py Fixes doc errors in Tensor.triu(), Tensor.tril(), Tensor.ravel(). (#71057) 2022-01-11 07:34:59 -08:00
_tensor_str.py added set_printoptions examples (#68324) 2021-12-14 07:40:52 -08:00
_tensor.py [quant] Remove warning for quantized Tensor in __dir__ (#69265) 2021-12-02 10:30:36 -08:00
_torch_docs.py Support Sparse CSR transpose. Fix clang-tidy warnings. (#70582) 2022-01-05 17:41:51 -08:00
_utils_internal.py
_utils.py
_VF.py
_vmap_internals.py
abi-check.cpp
autocast_mode.py
CMakeLists.txt Codegen: Generate separate headers per operator (#68247) 2021-12-14 06:40:08 -08:00
custom_class_detail.h
custom_class.h [jit] Decouple ivalue.h from jit_type.h (#70119) 2022-01-07 18:34:17 -08:00
deploy.h
extension.h
functional.py Add linalg.lu_factor (#66933) 2022-01-05 20:32:12 -08:00
hub.py
library.h [PyTorch] Outline destructor of CppFunction (#63688) 2022-01-07 09:16:23 -08:00
overrides.py [WIP] [ATen] Add native_multi_attention_self_attention CPU + GPU implementation (#70649) 2022-01-08 21:50:41 -08:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py expose return_types in Python (#66614) 2021-12-06 09:05:29 -08:00
script.h
serialization.py Avoid dtype mismatch error in torch.save if storages are unallocated (#68787) 2021-11-24 09:51:29 -08:00
storage.py
torch_version.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.