Commit Graph

24 Commits

Author SHA1 Message Date
Jane Xu
2aa0ba38a4 Make is_sparse a property of MaskedTensor (#110725)
Fixes #104574

Seeing that MaskedTensor is a prototype, the BC-breaking nature of this change seems okay?
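For illustration, a minimal sketch of what the break looks like for callers, assuming the change turns the former `is_sparse()` method into an attribute-style property mirroring `torch.Tensor.is_sparse` (the `torch.masked` import path is an assumption, not taken from this PR):

```python
import torch
from torch.masked import masked_tensor

mt = masked_tensor(torch.tensor([1.0, 2.0]), torch.tensor([True, False]))

# Before: callers invoked it as a method, e.g. mt.is_sparse().
# After:  it is a property, matching torch.Tensor.is_sparse, so the old call
#         form now fails with "TypeError: 'bool' object is not callable".
print(mt.is_sparse)  # False for a dense-layout MaskedTensor
```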

Locally tested:
https://github.com/pytorch/pytorch/assets/31798555/239e61ba-e0b9-4909-8c7a-0ce3869d7375

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110725
Approved by: https://github.com/cpuhrsch
2023-10-09 22:35:38 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.
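A sketch of the pattern being cleaned up (illustrative example, not an actual hunk from this PR):

```python
d = {"a": 1, "b": 2}

# Before: the value is bound on every iteration but never used.
for key, value in d.items():
    print(key)

# After: iterate over the keys directly.
for key in d:
    print(key)
```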

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Justin Chu
79c5e33349 [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
Aleksandar Samardžić
09fdea8564 Fix autograd issue with identity conversions (#92022)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92022
Approved by: https://github.com/pearu, https://github.com/mtaaooby, https://github.com/amjames, https://github.com/cpuhrsch
2023-06-21 21:23:03 +00:00
Aaron Gokaslan
47dca20d80 [BE] Enable flake8-comprehension rule C417 (#97880)
Enables flake8-comprehensions rule C417 (unnecessary use of `map`). Ruff autogenerated these fixes across the codebase.
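A representative rewrite of the kind C417 produces (illustrative, not taken from this PR):

```python
nums = [1, 2, 3]

# Before: C417 flags the map over a lambda.
doubled = list(map(lambda x: x * 2, nums))

# After: the equivalent list comprehension.
doubled = [x * 2 for x in nums]
```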

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97880
Approved by: https://github.com/ezyang, https://github.com/kit1980, https://github.com/albanD
2023-03-30 14:34:24 +00:00
Aaron Gokaslan
67d9790985 [BE] Apply almost all remaining flake8-comprehension checks (#94676)
Applies the remaining flake8-comprehensions fixes and checks. This change replaces all remaining unnecessary generator expressions with list/dict/set comprehensions, which are more succinct, more performant, and better supported by our torch.jit compiler. It also removes useless generators such as `set(a for a in b)`, resolving them into a direct call on the iterable.
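Two of the patterns this kind of cleanup targets, as a sketch (not the exact hunks from this PR):

```python
b = ["x", "y", "x"]
pairs = [("a", 1), ("b", 2)]

# Useless generator inside set(): the constructor takes the iterable directly.
unique = set(a for a in b)               # before
unique = set(b)                          # after

# Generator fed to dict(): a dict comprehension is clearer and faster.
lookup = dict((k, v) for k, v in pairs)  # before
lookup = {k: v for k, v in pairs}        # after
```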

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94676
Approved by: https://github.com/ezyang
2023-02-12 01:01:25 +00:00
Samantha Andow
a7749ae177 [reland] rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218) (#89221)
Summary: First half of #87990. This doesn't change any of the behavior and is just a rename

#88218 got reverted due to internal breakages. This is the reland, started from the internal diff.
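For reference, the renamed guard is used exactly like the old one. A minimal sketch of the usual `__torch_function__` subclass pattern with the new name (illustrative; not code from this PR):

```python
import torch

class LoggingTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print(f"intercepted {func}")
        # Suppress only this subclass's __torch_function__ handling while
        # re-dispatching, so the call below does not recurse back in here.
        with torch._C.DisableTorchFunctionSubclass():
            return func(*args, **kwargs)

t = LoggingTensor([1.0, 2.0, 3.0])
print(t + 1)
```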

Differential Revision:
D41268423

LaMa Project: L1098534

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89221
Approved by: https://github.com/meliy-meyada, https://github.com/zou3519
2023-01-04 18:32:49 +00:00
PyTorch MergeBot
ba4d5aae06 Revert "rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)"
This reverts commit 7f28be10e5.

Reverted https://github.com/pytorch/pytorch/pull/88218 on behalf of https://github.com/izaitsevfb due to BC-breaking change, D41211901
2022-11-11 19:13:05 +00:00
samdow
7f28be10e5 rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)
First half of #87990. This doesn't change any of the behavior and is just a rename

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88218
Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-11-10 14:51:13 +00:00
Tom Stein
fd60b818b9 [Python] refactor slices on sorted (#86995)
Sometimes you want the smallest element of a collection and reach for `sorted(elements)[0]` without a second thought. However, this is not optimal, since the entire list must be sorted first (`O(n log n)`). It is better to use `min(elements)`, which exists for exactly this purpose and runs in `O(n)`.
Likewise, `sorted(elements)[::-1]` is not very efficient: `sorted(elements, reverse=True)` gives the same result and saves the extra slice (and the copy it creates).

**TLDR: using `sorted(elements)[0]` is slow and can be replaced with `min(elements)`.**

I stumbled across these code snippets while playing around with CodeQL (see https://lgtm.com/query/4148064474379348546/).
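The two rewrites side by side (sketch):

```python
import random

elements = [random.random() for _ in range(10_000)]

# O(n log n): sorts the whole list just to read one element.
smallest = sorted(elements)[0]

# O(n): single pass over the data, same result.
smallest = min(elements)

# Descending order: let sorted() do it instead of reversing with a slice.
descending = sorted(elements, reverse=True)   # rather than sorted(elements)[::-1]
```
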
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86995
Approved by: https://github.com/jansel
2022-10-25 04:07:19 +00:00
Christian Puhrsch
ecd25df313 Add prototype warning to MaskedTensor constructor (#87107)
When a user constructs a MaskedTensor, we should signal its development status to set expectations.
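A minimal sketch of the kind of warning this adds; the constructor shape below is hypothetical and the real MaskedTensor construction path differs:

```python
import warnings
import torch

class MaskedTensor(torch.Tensor):
    def __init__(self, data, mask, requires_grad=False):
        # Emit on construction so users know the feature is a prototype.
        warnings.warn(
            "MaskedTensor is a prototype; its API may change without notice.",
            UserWarning,
        )
        # ...actual wrapper-subclass construction elided...
```
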
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87107
Approved by: https://github.com/bhosmer
2022-10-18 15:24:18 +00:00
George Qi
bc1d884061 [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85845
Approved by: https://github.com/cpuhrsch
2022-10-04 00:29:19 +00:00
Edward Z. Yang
3638089755 Ported reshape to symints and added a shim for BC (#85998)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85998
Approved by: https://github.com/ezyang
2022-10-02 17:46:00 +00:00
PyTorch MergeBot
db4c6fe54f Revert "[maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)"
This reverts commit a4d10342e9.

Reverted https://github.com/pytorch/pytorch/pull/85845 on behalf of https://github.com/huydhn due to Sorry for reverting your PR but it breaks CUDA test_softmax_cuda (main.TestBasicsCUDA)
2022-09-30 23:54:49 +00:00
Edward Z. Yang
3b6588ab74 Consistent compute numel/contiguous strategy with SymInts (#85858)
Previously, our handling for contiguity was inconsistent in the following ways:

- is_strides_like 2d/3d and is_non_overlapping_and_dense always were computed
  based on sizes_and_strides_, even if you had symbolic ints
- Furthermore, even if you set custom policy for strides, these quantities were
  not overridable by subclasses
- Furthermore, we didn't even store these fields on ExtraMeta
- We duplicate implementations of compute_contiguous (plain, channels last,
  channels last 3d)
- We inconsistently called refresh_numel()/refresh_contiguous(), versus
  recomputing it ourselves

This refactor establishes a consistent strategy for all of the boolean fields and
for the numel computation.  After this refactor:

- All layout boolean fields are interposable via strides policy
  and can be overridden from Python; you will never access a garbage field
- All layout boolean fields are on ExtraMeta
- You can always call refresh_numel/contiguous, no matter if your Tensor is
  contiguous or not
- The numel/layout boolean fields are always populated consistently with
  the sizes strides fields (either on Tensor or ExtraMeta), even if you
  have custom policy
- There is only one implementation of the actual computation logic (a rough
  Python sketch follows below)
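
As a rough Python analogue of that single computation (illustrative only; the real logic lives in C++ on TensorImpl and also covers the channels-last variants):

```python
def compute_contiguous(sizes, strides):
    """Row-major contiguity check from sizes/strides (dense layout only)."""
    expected_stride = 1
    for size, stride in zip(reversed(sizes), reversed(strides)):
        if size == 1:
            continue  # size-1 dimensions place no constraint on the stride
        if stride != expected_stride:
            return False
        expected_stride *= size
    return True

print(compute_contiguous([2, 3], [3, 1]))  # True: standard row-major strides
print(compute_contiguous([2, 3], [1, 2]))  # False: column-major strides
```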

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D39907696](https://our.internmc.facebook.com/intern/diff/D39907696)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85858
Approved by: https://github.com/albanD
2022-09-30 21:26:34 +00:00
George Qi
a4d10342e9 [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85845
Approved by: https://github.com/cpuhrsch
2022-09-30 21:05:57 +00:00
George Qi
b60ad2e529 [maskedtensor] negative testing (#85938)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85938
Approved by: https://github.com/cpuhrsch
2022-09-30 17:55:12 +00:00
George Qi
686555b663 [maskedtensor] port torch/_masked into torch/masked (#85515)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85515
Approved by: https://github.com/cpuhrsch
2022-09-26 23:41:13 +00:00
George Qi
e3766e9855 [maskedtensor] move __torch_function/dispatch__ functions to a map (#85529)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85529
Approved by: https://github.com/bhosmer
2022-09-23 18:31:20 +00:00
George Qi
0c46e3ec66 [maskedtensor] add basic tests and unary/binary/reduction tests from common_method_invocations (#82841)
Decided offline on the invariant that:

- `masked_tensor` calls `MaskedTensor()`, which is analogous to `torch.tensor`
- `as_masked_tensor` calls `MaskedTensor._from_values()`, which is analogous to `torch.as_tensor`
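
A small usage sketch of the two entry points (assuming the `torch.masked` import path; the copy vs. no-copy behavior follows the analogy above):

```python
import torch
from torch.masked import masked_tensor, as_masked_tensor

data = torch.tensor([1.0, 2.0, 3.0])
mask = torch.tensor([True, False, True])

mt = masked_tensor(data, mask)      # like torch.tensor: constructs a new MaskedTensor
amt = as_masked_tensor(data, mask)  # like torch.as_tensor: reuses the inputs where possible

print(mt)
print(amt)
```
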
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82841
Approved by: https://github.com/cpuhrsch, https://github.com/bhosmer
2022-09-22 07:37:04 +00:00
George Qi
5e9c26c8e2 [maskedtensor] adding reductions (#82839)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82839
Approved by: https://github.com/bhosmer
2022-09-06 15:01:35 +00:00
George Qi
e10c47a7d0 [maskedtensor] adding unary and binary operations (#82837)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82837
Approved by: https://github.com/bhosmer
2022-08-22 21:00:38 +00:00
joncrall
b136f3f310 More doctest refinements. (#83317)
Follow up to #82797

Now that the doctests themselves are in a better state, we should be able to enable xdoctest on the CI so they stay that way.

@ezyang @vadimkantorov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83317
Approved by: https://github.com/ezyang
2022-08-22 20:07:26 +00:00
George Qi
94ba085ce0 [maskedtensor] first commit, core and creation (#82836)
* __->__ #82836
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82836
Approved by: https://github.com/albanD, https://github.com/bhosmer
2022-08-16 20:10:34 +00:00