Commit Graph

12 Commits

Author SHA1 Message Date
Edward Z. Yang
3318a832b3 Tighten FakeTensor reentrancy asserts, add debugging (#102091)
When investigating failures in https://github.com/pytorch/pytorch/pull/100017, I realized that we were reentering FakeTensorMode even though there was already one on the stack. Although we have attempted asserts for these cases in the past, e.g., in https://github.com/pytorch/pytorch/pull/97186, it seems that the existing protections were insufficient.

In this particular case, the reapplication of FakeTensorMode was due to an interaction with NotImplemented multiple dispatch handling. If proxy tensor mode detects an unrecognized tensor type (this includes FakeTensor, if it is not tracked with a proxy), it will return NotImplemented to give that tensor a chance to unpack itself into a proxyable operation. However, this is never the right thing for FakeTensor, where no unpacking is possible. Instead, today, FakeTensor attempts to reapply FakeTensorMode, resulting in FakeTensorMode appearing twice on the stack.

This PR does a number of things:

* It adds an assert in `FakeTensorMode.__torch_dispatch__` that you must not already have this mode on the stack; this is ALWAYS an error
* It modifies `FakeTensor.__torch_dispatch__` to return `NotImplemented` if the mode is already active. This prevents us from re-adding the mode to the stack
* It adds a new logging artifact `not_implemented`, which you can use to get debug logs about all of the times a `__torch_dispatch__` handler returned NotImplemented and why it did so. Your subclass has to manually opt into this logging, but I inserted the necessary logs for ProxyTensorMode and FakeTensor(Mode)
* `with fake_mode` now no-ops if the fake mode is already on the stack, which is what users want anyway (a minimal sketch of this behavior follows the list)
* I am BREAKING pre-autograd tracing, because it is currently doing something weird with the original C++ mode stack. Brian is going to follow up with a fix next week.
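A minimal sketch of the "no-op if already on the stack" behavior, using the public `TorchDispatchMode` hook. `ReentrancyGuardMode` and its `_depth` counter are hypothetical and only illustrate the idea; this is not the actual FakeTensorMode implementation.

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode


class ReentrancyGuardMode(TorchDispatchMode):
    """Hypothetical mode whose `with` statement no-ops when already active."""

    def __init__(self):
        super().__init__()
        self._depth = 0

    def __enter__(self):
        # Only the outermost enter actually pushes onto the dispatch mode stack;
        # nested enters are no-ops, mirroring the behavior described above.
        if self._depth == 0:
            super().__enter__()
        self._depth += 1
        return self

    def __exit__(self, *exc):
        self._depth -= 1
        # Only the matching outermost exit pops the mode off the stack.
        if self._depth == 0:
            return super().__exit__(*exc)
        return False

    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        # The real change adds an assert here that the mode is not still on the
        # stack (re-entry is always an error); this toy version just runs the op.
        return func(*args, **(kwargs or {}))


mode = ReentrancyGuardMode()
with mode:
    with mode:  # no-op: the mode is already on the stack
        torch.ones(2) + 1
```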

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102091
Approved by: https://github.com/thiagocrepaldi, https://github.com/eellison, https://github.com/wanchaol, https://github.com/bdhirsh
2023-05-24 05:37:51 +00:00
Philip Meier
7602aade0f fix random mask creation in test_maskedtensor (#97017)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97017
Approved by: https://github.com/pearu, https://github.com/mruberry
2023-03-24 23:55:17 +00:00
George Qi
bc1d884061 [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85845
Approved by: https://github.com/cpuhrsch
2022-10-04 00:29:19 +00:00
PyTorch MergeBot
db4c6fe54f Revert "[maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)"
This reverts commit a4d10342e9.

Reverted https://github.com/pytorch/pytorch/pull/85845 on behalf of https://github.com/huydhn due to Sorry for reverting your PR but it breaks CUDA test_softmax_cuda (__main__.TestBasicsCUDA)
2022-09-30 23:54:49 +00:00
George Qi
a4d10342e9 [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85845
Approved by: https://github.com/cpuhrsch
2022-09-30 21:05:57 +00:00
George Qi
b60ad2e529 [maskedtensor] negative testing (#85938)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85938
Approved by: https://github.com/cpuhrsch
2022-09-30 17:55:12 +00:00
George Qi
686555b663 [maskedtensor] port torch/_masked into torch/masked (#85515)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85515
Approved by: https://github.com/cpuhrsch
2022-09-26 23:41:13 +00:00
George Qi
0c46e3ec66 [maskedtensor] add basic tests and unary/binary/reduction tests from common_method_invocations (#82841)
Decided offline on the invariant that:

* `masked_tensor` calls `MaskedTensor()`, which is analogous to `torch.tensor`
* `as_masked_tensor` calls `MaskedTensor._from_values()`, which is analogous to `torch.as_tensor`
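A short usage sketch of this invariant, assuming the `torch.masked` prototype API as exposed in later releases (the import path `from torch.masked import masked_tensor, as_masked_tensor` is an assumption here, not something introduced by this PR):

```python
import torch
from torch.masked import masked_tensor, as_masked_tensor

values = torch.tensor([1.0, 2.0, 3.0])
mask = torch.tensor([True, False, True])

# Analogous to torch.tensor: constructs a new MaskedTensor that is a leaf
# for autograd purposes.
mt = masked_tensor(values, mask)

# Analogous to torch.as_tensor: wraps the given values/mask, retaining any
# existing autograd history on them.
mt_alias = as_masked_tensor(values, mask)
```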
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82841
Approved by: https://github.com/cpuhrsch, https://github.com/bhosmer
2022-09-22 07:37:04 +00:00
Edward Z. Yang
e5fac7f5dc Optimize torch.ops.ns.opname.overload accessor in torch dispatch (#85132)
This doesn't actually seem to help all that much.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85132
Approved by: https://github.com/wconstab
2022-09-16 20:21:03 +00:00
George Qi
5e9c26c8e2 [maskedtensor] adding reductions (#82839)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82839
Approved by: https://github.com/bhosmer
2022-09-06 15:01:35 +00:00
George Qi
e10c47a7d0 [maskedtensor] adding unary and binary operations (#82837)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82837
Approved by: https://github.com/bhosmer
2022-08-22 21:00:38 +00:00
George Qi
94ba085ce0 [maskedtensor] first commit, core and creation (#82836)
* __->__ #82836
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82836
Approved by: https://github.com/albanD, https://github.com/bhosmer
2022-08-16 20:10:34 +00:00