Commit Graph

18 Commits

Author SHA1 Message Date
Peter Bell
454361435c Implement correction argument in torch.masked.{std,var} (#87118)
This makes the signatures of `torch.masked.std` and `torch.masked.var` more consistent with their global-namespace variants, and updates the sample inputs to reuse the existing `sample_inputs_std_var` inputs, which fully exercise the `correction` argument.
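
A minimal usage sketch (the values and mask below are illustrative, and assume the updated signature mirrors `torch.var`'s `correction`):

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0, 4.0])
mask = torch.tensor([True, True, True, False])

# correction=1 requests the Bessel-corrected sample statistic (torch.var's
# default); correction=0 gives the population variant over unmasked elements.
v = torch.masked.var(t, dim=0, correction=1, mask=mask)
s = torch.masked.std(t, dim=0, correction=1, mask=mask)
```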

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87118
Approved by: https://github.com/cpuhrsch
2022-12-08 15:59:09 +00:00
PyTorch MergeBot
ba4d5aae06 Revert "rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)"
This reverts commit 7f28be10e5.

Reverted https://github.com/pytorch/pytorch/pull/88218 on behalf of https://github.com/izaitsevfb due to BC-breaking change, D41211901
2022-11-11 19:13:05 +00:00
samdow
7f28be10e5 rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)
First half of #87990. This doesn't change any behavior; it is just a rename.
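
A minimal sketch of the renamed context manager (spelling as introduced by this commit; note the revert directly above):

```python
import torch

# Within this block, __torch_function__ overrides defined on Tensor
# subclasses are skipped, so subclass inputs behave like plain Tensors.
with torch._C.DisableTorchFunctionSubclass():
    t = torch.ones(3) + torch.ones(3)
```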

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88218
Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-11-10 14:51:13 +00:00
Tom Stein
fd60b818b9 [Python] refactor slices on sorted (#86995)
Sometimes you want the smallest element of a collection and reach for `sorted(elements)[0]` without a second thought. However, this is not optimal, since the entire list must be sorted first, which is O(n log n); `min(elements)` is provided for exactly this purpose and runs in O(n).
Likewise, `sorted(elements)[::-1]` is inefficient; `sorted(elements, reverse=True)` produces the same result and saves the slice operation.

**TL;DR: `sorted(elements)[0]` is slow and can be replaced with `min(elements)`.**
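
A quick illustration of both patterns (toy data; absolute timings will vary by machine):

```python
import timeit

elements = list(range(100_000))

# sorted(elements)[0] sorts the whole list, O(n log n), just to read one value,
# while min(elements) finds the same value in a single O(n) pass.
print(timeit.timeit(lambda: sorted(elements)[0], number=100))
print(timeit.timeit(lambda: min(elements), number=100))

# reverse=True yields the same result without the extra slice and copy.
assert sorted(elements, reverse=True) == sorted(elements)[::-1]
```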

I stumbled across these code snippets while playing around with CodeQL (see https://lgtm.com/query/4148064474379348546/).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86995
Approved by: https://github.com/jansel
2022-10-25 04:07:19 +00:00
Christian Puhrsch
ecd25df313 Add prototype warning to MaskedTensor constructor (#87107)
When a user constructs a MaskedTensor we should signal its development status to set expectations.
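
A minimal sketch of what a user now sees (warning text paraphrased; exact wording may differ):

```python
import warnings
import torch
from torch.masked import masked_tensor

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    mt = masked_tensor(torch.tensor([1.0, 2.0]), torch.tensor([True, False]))

# Expect a UserWarning noting that MaskedTensor is a prototype feature.
assert any(issubclass(w.category, UserWarning) for w in caught)
```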
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87107
Approved by: https://github.com/bhosmer
2022-10-18 15:24:18 +00:00
George Qi
bc1d884061 [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85845
Approved by: https://github.com/cpuhrsch
2022-10-04 00:29:19 +00:00
Edward Z. Yang
3638089755 Ported reshape to symints and added a shim for BC (#85998)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85998
Approved by: https://github.com/ezyang
2022-10-02 17:46:00 +00:00
PyTorch MergeBot
db4c6fe54f Revert "[maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)"
This reverts commit a4d10342e9.

Reverted https://github.com/pytorch/pytorch/pull/85845 on behalf of https://github.com/huydhn due to Sorry for reverting your PR but it breaks CUDA test_softmax_cuda (`__main__.TestBasicsCUDA`)
2022-09-30 23:54:49 +00:00
Edward Z. Yang
3b6588ab74 Consistent compute numel/contiguous strategy with SymInts (#85858)
Previously, our handling for contiguity was inconsistent in the following ways:

- is_strides_like 2d/3d and is_non_overlapping_and_dense were always computed
  based on sizes_and_strides_, even if you had symbolic ints
- Furthermore, even if you set a custom policy for strides, these quantities were
  not overridable by subclasses
- Furthermore, we didn't even store these fields on ExtraMeta
- We duplicated the implementations of compute_contiguous (plain, channels last,
  channels last 3d)
- We inconsistently called refresh_numel()/refresh_contiguous(), versus
  recomputing these quantities ourselves

This refactor establishes a consistent strategy for all of the boolean fields and
for numel computation.  After this refactor:

- All layout boolean fields are interposable via strides policy
  and can be overridden from Python (see the sketch below); you will never access a garbage field
- All layout boolean fields are on ExtraMeta
- You can always call refresh_numel/contiguous, no matter if your Tensor is
  contiguous or not
- The numel/layout boolean fields are always populated consistently with
  the sizes/strides fields (either on Tensor or ExtraMeta), even if you
  have a custom policy
- There is only one implementation of the actual computation logic
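
A minimal sketch of the Python-side interposition this enables, using the wrapper-subclass spelling from recent PyTorch (`dispatch_sizes_strides_policy` and the subclass here are illustrative, not part of this diff):

```python
import torch

class LayoutAwareTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, elem):
        # With a "strides" sizes/strides policy, layout queries such as
        # is_contiguous dispatch to __torch_dispatch__ instead of reading
        # a cached (possibly stale) C++ field.
        return torch.Tensor._make_wrapper_subclass(
            cls, elem.shape, strides=elem.stride(), dtype=elem.dtype,
            device=elem.device, dispatch_sizes_strides_policy="strides",
        )

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        if func is torch.ops.aten.is_contiguous.default:
            return True  # the subclass decides; never a garbage field
        raise NotImplementedError(f"{func} is not handled in this sketch")
```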

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D39907696](https://our.internmc.facebook.com/intern/diff/D39907696)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85858
Approved by: https://github.com/albanD
2022-09-30 21:26:34 +00:00
George Qi
a4d10342e9 [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85845
Approved by: https://github.com/cpuhrsch
2022-09-30 21:05:57 +00:00
George Qi
b60ad2e529 [maskedtensor] negative testing (#85938)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85938
Approved by: https://github.com/cpuhrsch
2022-09-30 17:55:12 +00:00
George Qi
686555b663 [maskedtensor] port torch/_masked into torch/masked (#85515)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85515
Approved by: https://github.com/cpuhrsch
2022-09-26 23:41:13 +00:00
George Qi
e3766e9855 [maskedtensor] move __torch_function/dispatch__ functions to a map (#85529)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85529
Approved by: https://github.com/bhosmer
2022-09-23 18:31:20 +00:00
George Qi
0c46e3ec66 [maskedtensor] add basic tests and unary/binary/reduction tests from common_method_invocations (#82841)
Decided offline on the invariant that:

- `masked_tensor` calls `MaskedTensor()`, which is analogous to `torch.tensor`
- `as_masked_tensor` calls `MaskedTensor._from_values()`, which is analogous to `torch.as_tensor`
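
A minimal sketch of the two entry points (values below are illustrative):

```python
import torch
from torch.masked import masked_tensor, as_masked_tensor

values = torch.tensor([1.0, 2.0, 3.0])
mask = torch.tensor([True, False, True])

mt = masked_tensor(values, mask)      # copies its inputs, like torch.tensor
mt2 = as_masked_tensor(values, mask)  # shares them where possible, like torch.as_tensor
```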
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82841
Approved by: https://github.com/cpuhrsch, https://github.com/bhosmer
2022-09-22 07:37:04 +00:00
George Qi
5e9c26c8e2 [maskedtensor] adding reductions (#82839)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82839
Approved by: https://github.com/bhosmer
2022-09-06 15:01:35 +00:00
George Qi
e10c47a7d0 [maskedtensor] adding unary and binary operations (#82837)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82837
Approved by: https://github.com/bhosmer
2022-08-22 21:00:38 +00:00
joncrall
b136f3f310 More doctest refinements. (#83317)
Follow up to #82797

Now that the doctests themselves are in a better state, we should be able to enable xdoctest on the CI so they stay that way.

@ezyang @vadimkantorov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83317
Approved by: https://github.com/ezyang
2022-08-22 20:07:26 +00:00
George Qi
94ba085ce0 [maskedtensor] first commit, core and creation (#82836)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82836
Approved by: https://github.com/albanD, https://github.com/bhosmer
2022-08-16 20:10:34 +00:00