Commit Graph

364 Commits

nikitaved
01069ad4be sparse.mm.backward: fix for non-contiguous grad values on CPU (#106127)
Fixes https://github.com/pytorch/pytorch/issues/102493.
The problem was that the backward implementation assumed inputs to be contiguous.
This might supersede https://github.com/pytorch/pytorch/pull/104520.
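
A minimal sketch of the affected pattern — the shapes and the way the non-contiguous gradient is produced are assumptions for illustration, not the original issue's repro:
```python
import torch

# Backward through torch.sparse.mm with a non-contiguous incoming gradient.
a = torch.randn(3, 4).to_sparse().requires_grad_()
b = torch.randn(4, 5, requires_grad=True)
out = torch.sparse.mm(a, b)   # dense (3, 5) result
grad = torch.randn(5, 3).t()  # non-contiguous (3, 5) gradient
out.backward(grad)            # backward used to assume contiguous inputs
```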

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106127
Approved by: https://github.com/cpuhrsch
2023-07-28 01:25:00 +00:00
Justin Chu
73e1455327 [BE] Enable ruff's UP rules and autoformat test/ (#105434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
Aleksandar Samardžić
fc2f87b281 Add semi-structured sparse conversions (#103830)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103830
Approved by: https://github.com/amjames, https://github.com/jcaip, https://github.com/cpuhrsch
2023-07-13 21:09:09 +00:00
Nikita Vedeneev
437bc5b1b7 sparse_mask: backward support for sparse lhs (take 2) (#104341)
This is a copy of https://github.com/pytorch/pytorch/pull/95165 with some bug fixes.
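
A hedged sketch of the newly supported direction, with assumed shapes and dtype; it leans on the `masked_grad`/`masked` arguments from the related PRs further down this log:
```python
import torch

# gradcheck through sparse_mask where the lhs is itself a sparse COO tensor
# requiring grad (shapes/dtype are illustrative).
x = torch.randn(3, 3, dtype=torch.double).to_sparse().requires_grad_()
mask = torch.eye(3, dtype=torch.double).to_sparse()
torch.autograd.gradcheck(
    lambda t: t.sparse_mask(mask).to_dense(masked_grad=True), (x,), masked=True
)
```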

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104341
Approved by: https://github.com/albanD, https://github.com/pearu, https://github.com/amjames
2023-07-03 14:12:44 +00:00
PyTorch MergeBot
7274582390 Revert "sparse_mask: backward support for sparse lhs (#95165)"
This reverts commit f090fdf3b4.

Reverted https://github.com/pytorch/pytorch/pull/95165 on behalf of https://github.com/huydhn due to Sorry for reverting this. I think one of the tests test_sparse.py::TestSparseCUDA::test_sparse_mask_backward_cuda_complex128 is failing on slow gradcheck f090fdf3b4 ([comment](https://github.com/pytorch/pytorch/pull/95165#issuecomment-1604696109))
2023-06-23 18:40:15 +00:00
Nikita Vedeneev
f090fdf3b4 sparse_mask: backward support for sparse lhs (#95165)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95165
Approved by: https://github.com/pearu, https://github.com/cpuhrsch
2023-06-23 12:27:27 +00:00
Aleksandar Samardžić
ab8fc41e2f Support bfloat16 dtype for CUTLASS-based semi-structured sparsity (#103978)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103978
Approved by: https://github.com/cpuhrsch
2023-06-22 15:53:27 +00:00
Aleksandar Samardžić
09fdea8564 Fix autograd issue with identity conversions (#92022)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92022
Approved by: https://github.com/pearu, https://github.com/mtaaooby, https://github.com/amjames, https://github.com/cpuhrsch
2023-06-21 21:23:03 +00:00
Aleksandar Samardžić
8fc687f7ee Add activation functions (ReLU and SiLU for now) for structured sparse linear operator (#101339)
Differential Revision: [D46453476](https://our.internmc.facebook.com/intern/diff/D46453476)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101339
Approved by: https://github.com/cpuhrsch
2023-06-16 17:24:59 +00:00
Aleksandar Samardžić
2f893d04c8 Implement adding bias vector into structured sparse linear operator (#100881)
Differential Revision: [D46453477](https://our.internmc.facebook.com/intern/diff/D46453477)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100881
Approved by: https://github.com/cpuhrsch, https://github.com/malfet
2023-06-15 16:16:09 +00:00
Pearu Peterson
45401ef745 Enable float16 and complex32 support for sparse CSR elementwise multiplication operation. (#100394)
As in the title. In addition, the PR adds float16 addcmul support for the CPU device.
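
A minimal sketch of the newly supported dtypes (inputs are illustrative, not taken from the PR's tests):
```python
import torch

# float16 elementwise mul on sparse CSR inputs
a = torch.eye(3, dtype=torch.float16).to_sparse_csr()
b = torch.eye(3, dtype=torch.float16).to_sparse_csr()
c = a * b

# float16 addcmul on the CPU device
d = torch.addcmul(torch.zeros(3, 3, dtype=torch.float16),
                  torch.ones(3, 3, dtype=torch.float16),
                  torch.ones(3, 3, dtype=torch.float16))
```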

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100394
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2023-06-14 14:42:39 +00:00
Pearu Peterson
cbe270d233 Fix zeros_like for sparse tensors with batch dimensions. Add opinfo-based tests to like-functions. (#101215)
Fixes #101078
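
A hedged sketch of the fixed case, using a batched sparse CSR tensor with assumed shapes:
```python
import torch

# Batched sparse CSR: a batch of 2 matrices, each 2x2 with one value per row.
crow = torch.tensor([[0, 1, 2], [0, 1, 2]])
col = torch.tensor([[0, 1], [0, 1]])
vals = torch.ones(2, 2)
x = torch.sparse_csr_tensor(crow, col, vals, (2, 2, 2))
z = torch.zeros_like(x)  # should preserve the layout and the batch dimension
```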

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101215
Approved by: https://github.com/cpuhrsch
2023-06-13 16:02:10 +00:00
Nikita Vedeneev
2e2a74670d torch.sparse.softmax: allow negative dim (#102172)
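A minimal sketch, assuming an illustrative input: a negative `dim` now wraps as in the dense softmax.
```python
import torch

s = torch.randn(3, 4).relu().to_sparse()
out = torch.sparse.softmax(s, dim=-1)  # equivalent to dim=1 for a 2-D input
```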
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102172
Approved by: https://github.com/cpuhrsch
2023-05-24 19:43:47 +00:00
PyTorch MergeBot
a76c1af351 Revert "Implement adding bias vector into structured sparse linear operator (#100881)"
This reverts commit c3a893c659.

Reverted https://github.com/pytorch/pytorch/pull/100881 on behalf of https://github.com/izaitsevfb due to breaks internal builds, see D45972633 ([comment](https://github.com/pytorch/pytorch/pull/100881#issuecomment-1553621418))
2023-05-18 20:47:02 +00:00
PyTorch MergeBot
eb9ac9c156 Revert "Add activation functions (ReLU and SiLU for now) for structured sparse linear operator (#101339)"
This reverts commit bfb3941ad8.

Reverted https://github.com/pytorch/pytorch/pull/101339 on behalf of https://github.com/izaitsevfb due to Depends on #100881, which has to be reverted due to internal build breakage. ([comment](https://github.com/pytorch/pytorch/pull/101339#issuecomment-1553618216))
2023-05-18 20:42:44 +00:00
Aleksandar Samardžić
bfb3941ad8 Add activation functions (ReLU and SiLU for now) for structured sparse linear operator (#101339)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101339
Approved by: https://github.com/cpuhrsch
2023-05-18 01:53:18 +00:00
Aleksandar Samardžić
c3a893c659 Implement adding bias vector into structured sparse linear operator (#100881)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100881
Approved by: https://github.com/cpuhrsch
2023-05-17 05:46:22 +00:00
Pearu Peterson
65b15be04c Fix incorrect sparse_dim in COO.zero_() and in binary operations with zero-sized COO operands (#98292)
Fixes https://github.com/pytorch/pytorch/issues/97627
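
A hedged sketch of the restored invariant (shapes assumed):
```python
import torch

x = torch.randn(2, 3).to_sparse(2)
x.zero_()
assert x.sparse_dim() == 2  # zero_() previously reported an incorrect sparse_dim
```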

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98292
Approved by: https://github.com/nikitaved, https://github.com/cpuhrsch, https://github.com/amjames
2023-05-11 19:05:34 +00:00
Aleksandar Samardžić
a8c2cd1039 Add CUTLASS-based MM for structured sparse linear operator (#100485)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100485
Approved by: https://github.com/cpuhrsch
2023-05-09 21:05:15 +00:00
Pearu Peterson
92a7640b76 Add mul tests with sparse sample inputs (#100393)
This PR implements sparse sample inputs and error inputs for the mul OpInfo.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100393
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2023-05-09 16:13:14 +00:00
Larry Liu
687afeb686 [dynamo][numpy] Add NumpyTensorVariable to translate ndarray attribute calls to tensor attributes (#95849)
Issue: #93684

# Problem

Reduce graph breaks when dynamo compiles python functions containing numpy functions and ndarray operations.

# Design (as I know it)

* Use torch_np.ndarray (a wrapper around a tensor) to back a `VariableTracker`: `NumpyTensorVariable`.
* Translate all attribute and method calls on ndarray to their torch_np.ndarray equivalents.

This PR adds `NumpyTensorVariable` and supports:
1. tensor to ndarray, ndarray to tensor
2. numpy functions such as numpy.meshgrid()
3. ndarray attributes such as `itemsize`, `stride`

The next PR will handle returning `np.ndarray` and add support for ndarray methods; a sketch of the target pattern follows below.
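
A hedged sketch of the kind of mixed torch/numpy function this work targets — the function and its operations are assumptions for illustration:
```python
import numpy as np
import torch

def f(x):
    a = x.numpy()               # tensor -> ndarray
    b = np.multiply(a, 2.0)     # numpy function on ndarray
    return torch.from_numpy(b)  # ndarray -> tensor

opt_f = torch.compile(f)        # dynamo should trace this with fewer breaks
```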
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95849
Approved by: https://github.com/ezyang
2023-04-27 16:18:35 +00:00
Ilia Taraban
a1074ddf51 Enable cadd_sparse for BFloat16 on CPU (#96767)
Enable the **cadd_sparse** operation for BFloat16 on CPU to support BFloat16 operations in GNN libraries.
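
A minimal sketch, assuming illustrative inputs:
```python
import torch

# Sparse add with bfloat16 values on CPU (the cadd_sparse path).
a = torch.randn(3, 3, dtype=torch.bfloat16).to_sparse()
b = torch.randn(3, 3, dtype=torch.bfloat16).to_sparse()
c = torch.add(a, b, alpha=2.0)
```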

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96767
Approved by: https://github.com/jgong5, https://github.com/cpuhrsch
2023-04-14 19:50:49 +00:00
eqy
2fddcf0fc0 [CUDA][CUDA 11] Remove more CUDA 11 version checks (#92934)
This removes stragglers missed in previous CUDA version < 11.0 cleanup PRs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92934
Approved by: https://github.com/ngimel
2023-03-30 19:49:52 +00:00
Pearu Peterson
9d5ac03b9a Deprecate gradcheck check_sparse_nnz argument as duplicate of masked argument (#97187)
As in the title.
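
A hedged sketch of the replacement usage, assuming the `masked` argument carries the same intent as the deprecated one:
```python
import torch

# Use masked= instead of the deprecated check_sparse_nnz=.
x = torch.randn(3, 3, dtype=torch.double).to_sparse().requires_grad_()
torch.autograd.gradcheck(lambda t: t.to_dense(masked_grad=True), (x,), masked=True)
```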

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97187
Approved by: https://github.com/soulitzer
2023-03-22 14:11:03 +00:00
Huy Do
679dec847e Use is_available instead of device_count to check for CUDA availability (#97043)
There are some tests that incorrectly use the number of GPU devices `torch.cuda.device_count() > 0` to check for CUDA availability instead of the default `torch.cuda.is_available()` call. This makes these tests more brittle when encountering infra flakiness on G5 runners using A10G, for example [test_pytorch_np](https://hud.pytorch.org/failure/FAILED%20test_tensorboard.py%3A%3ATestTensorBoardPyTorchNumpy%3A%3Atest_pytorch_np%20-%20RuntimeError%3A%20No%20CUDA%20GPUs%20are%20available).

The underlying problem is that GPU devices could crash on these runners. While the root cause is unclear and we will try upgrading to a new NVIDIA driver https://github.com/pytorch/pytorch/pull/96904 to see if it helps, we can also make these tests more resilient by using the correct check so that they are skipped properly when the GPU crashes.
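
An illustrative rewrite of the pattern (the test and class names are assumptions):
```python
import unittest
import torch

class TestExample(unittest.TestCase):
    # before: @unittest.skipIf(torch.cuda.device_count() == 0, "no CUDA")
    @unittest.skipIf(not torch.cuda.is_available(), "no CUDA")
    def test_cuda_path(self):
        x = torch.ones(2, device="cuda")
        self.assertEqual(x.sum().item(), 2.0)
```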
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97043
Approved by: https://github.com/clee2000
2023-03-18 00:39:42 +00:00
Pearu Peterson
2abcafcfd8 Add masked_grad kw argument to to_dense (#96095)
As in the title.

The `masked_grad` kw argument is required for `to_dense` backward to distinguish the expected semantics of sparse tensors. `masked_grad=True` means that the `to_dense` backward will apply a mask to the returned gradient, where the mask is defined by the input indices. The default semantics imply `masked_grad==True` for BC, but see the [comment](https://github.com/pytorch/pytorch/pull/96095/files#diff-d4df180433a09071e891d552426911c227b30ae9b8a8e56da31046e7ecb1afbeR501-R513) in `to_dense_backward`.

As a consequence, existing code that is run through the autograd engine must replace `.to_dense()` calls with `.to_dense(masked_grad=False)`. For example,
```python
torch.autograd.gradcheck(lambda x: torch.sum(x, [0]).to_dense())
torch.autograd.gradcheck(lambda x: torch.sparse.sum(x, [0]).to_dense())
```
(recall that gradcheck has `masked=False` as default) must be updated to
```python
torch.autograd.gradcheck(lambda x: torch.sum(x, [0]).to_dense(masked_grad=False))
torch.autograd.gradcheck(lambda x: torch.sparse.sum(x, [0]).to_dense(masked_grad=True), masked=True)
```

Fixes https://github.com/pytorch/pytorch/issues/95550

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96095
Approved by: https://github.com/cpuhrsch
2023-03-16 21:38:11 +00:00
Nikita Vedeneev
0b5040b329 sparse_mask: remove syncs by removing calls to coalesce (#94406)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94406
Approved by: https://github.com/cpuhrsch, https://github.com/pearu
2023-03-13 16:30:27 +00:00
Andrew M. James
2bcc0e9e18 Expand sparse.softmax zero nnz tests to cover cases of previously reported FPE. (#95646)
- Test cases with zero `nnz` added for `sparse.log_softmax`.
- Test cases with zero `nnz` for both `sparse.log_softmax` and `torch.sparse.softmax` expanded to cover the backward pass.

These test additions confirm the resolution of #95371 and #82107.

Fixes #82107, #95371
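
A hedged sketch of the zero-`nnz` case the tests cover (shapes assumed):
```python
import torch

i = torch.empty(2, 0, dtype=torch.long)
v = torch.empty(0, dtype=torch.double)
x = torch.sparse_coo_tensor(i, v, (3, 4), requires_grad=True)
y = torch.sparse.log_softmax(x, dim=0)
y.to_dense().sum().backward()  # the zero-nnz backward previously hit an FPE
```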

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95646
Approved by: https://github.com/cpuhrsch, https://github.com/pearu, https://github.com/nikitaved
2023-03-01 17:26:51 +00:00
Pearu Peterson
b89fda51cd Implement sparse semantics support in gradcheck (2nd try) (#95405)
Replaces https://github.com/pytorch/pytorch/pull/94714, which was reverted due to https://github.com/pytorch/pytorch/pull/94714#issuecomment-1442355648

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95405
Approved by: https://github.com/albanD
2023-02-27 17:48:02 +00:00
Pearu Peterson
0c0694495b Fix a bug in nesting check_sparse_tensor_invariants context managers (#95372)
As in the title. The bug was reported in https://github.com/pytorch/pytorch/pull/94728#discussion_r1108892366 and has the following reproducer:
```python
>>> import torch
>>> check_ctx = torch.sparse.check_sparse_tensor_invariants(True)
>>> no_check_ctx = torch.sparse.check_sparse_tensor_invariants(False)
>>> with check_ctx:
...   assert torch.sparse.check_sparse_tensor_invariants.is_enabled()
...   with no_check_ctx:
...     assert not torch.sparse.check_sparse_tensor_invariants.is_enabled()
...   assert torch.sparse.check_sparse_tensor_invariants.is_enabled()
...
Traceback (most recent call last):
  File "<stdin>", line 5, in <module>
AssertionError
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95372
Approved by: https://github.com/cpuhrsch
2023-02-23 18:22:13 +00:00
Zain Rizvi
808879ec8b Revert "Implement sparse semantics support in gradcheck (#94714)" (#95386)
This reverts commit 7ac511c29a from https://github.com/pytorch/pytorch/pull/94714 since it breaks periodic jobs.

Git thinks there's a merge conflict due to an unfortunately located newline deletion, so this one is reverted manually.

Details behind the failure in https://github.com/pytorch/pytorch/pull/94714#issuecomment-1442160593
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95386
Approved by: https://github.com/clee2000
2023-02-23 18:02:37 +00:00
Pearu Peterson
cece63f197 Add warn-once deprecation warning to legacy sparse constructors (#94850)
Addresses https://github.com/pytorch/pytorch/issues/68323#issuecomment-1425174341
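
A hedged sketch of the new behavior (the exact warning text is an assumption):
```python
import warnings
import torch

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    t = torch.sparse.FloatTensor(2, 3)  # legacy sparse constructor
    print([str(x.message) for x in w])  # expect a deprecation warning here
```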

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94850
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2023-02-23 15:05:12 +00:00
kshitij12345
3b966a6ce3 [autograd] disable backward/grad for complex scalar output (#92753)
Fixes https://github.com/pytorch/pytorch/issues/92750
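
A minimal sketch of the new behavior, with assumed shapes:
```python
import torch

x = torch.randn(3, dtype=torch.cfloat, requires_grad=True)
out = x.sum()           # complex scalar output
try:
    out.backward()      # now raises; take .real or .abs() first instead
except RuntimeError as e:
    print("raised:", e)
```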

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92753
Approved by: https://github.com/ezyang
2023-02-23 11:38:27 +00:00
Pearu Peterson
7ac511c29a Implement sparse semantics support in gradcheck (#94714)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94714
Approved by: https://github.com/soulitzer, https://github.com/albanD
2023-02-22 20:03:25 +00:00
Nikita Vedeneev
3ace14eb8b [Bug fix] sparse_mask: wrong intersection on CUDA (#94829)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94829
Approved by: https://github.com/cpuhrsch
2023-02-15 13:22:39 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrite calls to the Python built-in `super()`. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Cases where the rewrite would change semantics are kept unchanged, e.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
Aaron Gokaslan
3d82d8d0ed [BE] Enable more flake8-comprehensions checks (#94601)
I applied some flake8 fixes and enabled checking for them in the linter. I also enabled some checks for my previous comprehensions PR.

This is a follow-up to #94323, where I enable the flake8 checkers for the fixes I made and fix a few more of them.
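
Illustrative fixes of the kind these checks flag (the rule codes are assumptions based on flake8-comprehensions' conventions):
```python
result = list(x * 2 for x in range(5))  # before (C400: unnecessary generator)
result = [x * 2 for x in range(5)]      # after
d = dict(a=1)                           # before (C408: unnecessary dict call)
d = {"a": 1}                            # after
```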

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94601
Approved by: https://github.com/ezyang
2023-02-10 23:40:29 +00:00
Huy Do
c53bd0dd30 Mitigate broken test_coalesce_reference_cycle test on dynamo (#94622)
The test has been disabled and shows up in https://github.com/pytorch/test-infra/blob/generated-stats/stats/disabled-tests-condensed.json, but the JSON file downloaded by the runner doesn't seem to include it.

Disable it explicitly to keep trunk green while investigating.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94622
Approved by: https://github.com/weiwangmeta
2023-02-10 21:59:36 +00:00
PyTorch MergeBot
76ed1a81d1 Revert "COO intersection kernel: respect value intersection order (#92242)"
This reverts commit b07c839b70.

Reverted https://github.com/pytorch/pytorch/pull/92242 on behalf of https://github.com/jeanschmidt due to breaking vs17
2023-02-09 14:44:32 +00:00
Aleksandar Samardžić
e1f17b3530 Add CSR->BSC and CSC->BSR conversions (#93301)
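A minimal sketch of the new conversion paths (the blocksize is chosen for illustration):
```python
import torch

dense = torch.randn(4, 4)
bsc = dense.to_sparse_csr().to_sparse_bsc(blocksize=(2, 2))  # CSR -> BSC
bsr = dense.to_sparse_csc().to_sparse_bsr(blocksize=(2, 2))  # CSC -> BSR
```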
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93301
Approved by: https://github.com/cpuhrsch
2023-02-07 19:22:05 +00:00
Nikita Vedeneev
b07c839b70 COO intersection kernel: respect value intersection order (#92242)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92242
Approved by: https://github.com/cpuhrsch, https://github.com/amjames
2023-02-07 17:05:28 +00:00
Nikita Vedeneev
994f85d639 sparse_mask: extend lhs to sparse COO tensors (#92248)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92248
Approved by: https://github.com/cpuhrsch, https://github.com/pearu
2023-02-01 09:00:07 +00:00
Aleksandar Samardžić
53f7fb9a22 Add CSC->BSC conversion (#92307)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92307
Approved by: https://github.com/cpuhrsch
2023-01-30 17:03:36 +00:00
Pearu Peterson
65d6802e2f Improve error messages for sparse methods on tensors with unsupported backends/layouts. (#93149)
Fixes https://github.com/pytorch/pytorch/issues/92790

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93149
Approved by: https://github.com/cpuhrsch
2023-01-27 19:50:23 +00:00
Pearu Peterson
0e92bbe5b1 Add sparse COO tensor support to torch.sum(dim=..., keepdim=...) (#92979)
Fixes #92757, #86232
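
A minimal sketch, with an assumed input:
```python
import torch

x = torch.randn(3, 4).to_sparse()
s = torch.sum(x, dim=0, keepdim=True)  # sparse COO result of shape (1, 4)
```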

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92979
Approved by: https://github.com/cpuhrsch
2023-01-26 18:42:51 +00:00
Eddie Yan
0bf7506051 [CUDA] Drop CUDA < 11.0 test flags (#92605)
Follow-up of #89582 to drop flags like `CUDA11OrLater` in tests. Note that in some places `TEST_WITH_ROCM` appears to be _implicitly_ guarded against via the `CUDA11OrLater` version check, based on my best guess of how `torch.version.cuda` behaves in ROCm builds, so I've added `not TEST_WITH_ROCM` in cases where ROCm wasn't previously explicitly allowed.

CC @ptrblck @malfet @ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92605
Approved by: https://github.com/ngimel
2023-01-24 04:34:06 +00:00
Nikita Vedeneev
9f381c9b7f sparse_sparse_matmul: simplify backward (#91712)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91712
Approved by: https://github.com/albanD
2023-01-23 19:24:28 +00:00
Yanbo Liang
0ab4ab9f8d [Dynamo] Fix calling UserDefinedObject.func should pass self object (#92050)
Fixes #90834

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92050
Approved by: https://github.com/jansel
2023-01-21 05:47:01 +00:00
Pearu Peterson
b3e4f5029b Add check-sparse-tensor-invariants flag to Context - 2nd try. (#92094)
This PR is a copy of https://github.com/pytorch/pytorch/pull/90849 whose merge was reverted.

The PR adds a "check sparse tensor invariants" flag to Context that, when enabled, triggers sparse tensor data invariant checks in the unsafe methods for constructing sparse COO/CSR/CSC/BSR/BSC tensors. The feature includes the following changes to the UI:

The `torch.sparse.check_sparse_tensor_invariants` class provides different ways to enable/disable the invariant checking.

The `torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor` functions have a new optional `check_invariants` argument to enable/disable the invariant checks explicitly. When the `check_invariants` argument is specified, the global state of the feature is temporarily overridden.
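
A hedged sketch of both UI surfaces (the tensor data is illustrative):
```python
import torch

i = torch.tensor([[0, 1], [0, 1]])
v = torch.tensor([1.0, 2.0])

# per-call override via the new optional argument:
t = torch.sparse_coo_tensor(i, v, (2, 2), check_invariants=True)

# scoped enablement via the context manager:
with torch.sparse.check_sparse_tensor_invariants():
    t = torch.sparse_coo_tensor(i, v, (2, 2))
```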

The PR fixes https://github.com/pytorch/pytorch/issues/90833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92094
Approved by: https://github.com/cpuhrsch
2023-01-13 14:50:33 +00:00
PyTorch MergeBot
c7a22bb7c7 Revert "Add check-sparse-tensor-invariants flag to Context. (#90849)"
This reverts commit b9a035c1c5.

Reverted https://github.com/pytorch/pytorch/pull/90849 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-01-12 09:58:16 +00:00