Summary:
In case the inputs have different layouts, `assert_close(..., check_layout=False)` converts them to strided before comparison. This is helpful if you just want to compare the values of a sparse COO / CSR tensor against a strided reference.
This keeps BC, since the default `check_layout=True` was the old, hard-coded behavior.
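A minimal usage sketch of the relaxed layout check (assuming a current `torch` build in which `assert_close` exposes the `check_layout` parameter):

```python
import torch

sparse = torch.tensor([[0.0, 1.0], [2.0, 0.0]]).to_sparse()
strided = torch.tensor([[0.0, 1.0], [2.0, 0.0]])

# Default (check_layout=True): mismatching layouts fail outright.
try:
    torch.testing.assert_close(sparse, strided)
    layout_mismatch_raised = False
except AssertionError:
    layout_mismatch_raised = True

# Relaxed: both inputs are converted to strided and only the values compare.
torch.testing.assert_close(sparse, strided, check_layout=False)
```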
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65419
Reviewed By: H-Huang
Differential Revision: D31133629
Pulled By: mruberry
fbshipit-source-id: ca8918af81fb0e0ba263104836a4c2eeacdfc7e6
Summary:
This utilizes the feature introduced in https://github.com/pytorch/pytorch/issues/60091 to modify the header of the error message.
Before:
```python
AssertionError: Tensor-likes are not equal!
Mismatched elements: 1 / 2 (50.0%)
Greatest absolute difference: 1 at index 1
Greatest relative difference: 0.3333333432674408 at index 1
The failure occurred for the values.
```
After:
```python
AssertionError: Sparse COO values of tensor-likes are not equal!
Mismatched elements: 1 / 2 (50.0%)
Greatest absolute difference: 1 at index 1
Greatest relative difference: 0.3333333432674408 at index 1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61583
Reviewed By: malfet
Differential Revision: D30014797
Pulled By: cpuhrsch
fbshipit-source-id: 66e30645e94de5c8c96510822082ff9aabef5329
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56058
User facing changes:
1. Adds a negative bit and corresponding new API (`is_neg()`,`resolve_neg()`)
2. `tensor.conj().imag` now returns a floating point tensor with neg bit set to 1 instead of a tensor with no notion of negative bit. Note that imag is still a view and all the view properties still hold for imag.
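A small sketch of the user-facing API described above (the exact value assumes a default complex tensor; behavior as documented in this PR):

```python
import torch

t = torch.tensor(1.0 + 2.0j)
neg_view = t.conj().imag  # floating point view with the negative bit set

bit_set = neg_view.is_neg()            # True: the bit is set, no data negated yet
materialized = neg_view.resolve_neg()  # materializes the negation
```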
Non user facing changes:
1. Added a new Negative dispatch key and a backend fallback to handle it
2. Updated copy kernel to handle negative bit
3. Merged conjugate and negative bit fallback kernel
4. fixed https://github.com/pytorch/pytorch/issues/60478 (caused due to https://github.com/pytorch/pytorch/pull/54987)
Testing:
1. Added a new OpInfo based test `test_neg_view`. It verifies that out-of-place and in-place operations work correctly for all operations when the input is a neg view tensor, by checking the result against an actually negated tensor. It also verifies that autograd returns the same output for both neg view and actually negated tensors, and that it works fine when `grad_out` is a neg view.
2. Added a new test class containing `test_conj_view`, `test_neg_view`.
Test Plan: Imported from OSS
Reviewed By: soulitzer
Differential Revision: D29636403
fbshipit-source-id: 12214c9dc4806c51850f4a72a109db9527c0ca63
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60638
Initial proposal in https://github.com/pytorch/pytorch/pull/58981#issuecomment-866690334. In contrast to the proposal, this PR only allows relaxing the type equality constraint to a common superclass constraint, for example `torch.Tensor` vs `torch.nn.Parameter`. Inputs that do not share a common superclass will still fail.
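For illustration, a hedged sketch of the relaxed constraint (`torch.nn.Parameter` subclasses `torch.Tensor`, so the pair shares a common superclass):

```python
import torch

tensor = torch.ones(3)
param = torch.nn.Parameter(torch.ones(3), requires_grad=False)

# Passes under the relaxed constraint: Parameter is a subclass of Tensor.
torch.testing.assert_close(param, tensor)
```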
Test Plan: Imported from OSS
Reviewed By: soulitzer
Differential Revision: D29626811
Pulled By: mruberry
fbshipit-source-id: 1916c3b710d38889de7ce57eb0770c76cbbb8166
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60536
`torch.isclose` does not do this for bool tensors, which results in a test failure since subtraction (`abs(actual - expected)`) is not supported for them (see #58981). Since the `dtype` is already checked at this point, we can safely move the upcasting before `torch.isclose` is invoked.
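A hedged illustration of why the upcast is needed: subtraction is undefined for bool tensors, but the comparison itself should still work.

```python
import torch

actual = torch.tensor([True, False, True])
expected = torch.tensor([True, False, True])

# Subtraction raises for bool tensors, so isclose cannot be applied directly.
try:
    _ = actual - expected
    subtraction_supported = True
except RuntimeError:
    subtraction_supported = False

# With the upcast in place, the comparison works regardless.
torch.testing.assert_close(actual, expected)
```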
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D29556356
Pulled By: mruberry
fbshipit-source-id: 4c65fad4f06cf402d6aab9dde5b127235766d5e0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60254
Before, we only tested that the correct error message is returned if `msg` is passed as a callable. This adds tests that make sure that
- the inputs passed to the callable are the same inputs passed to `torch.testing.assert_close` and
- the `diagnostics` namespace has the same attributes and types as documented.
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D29556354
Pulled By: mruberry
fbshipit-source-id: 9793c6d86fda842b6329381fc03b945eee878464
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60163
Changes to the default error message in case of mismatching values need to be reflected in the examples given in the docstring. Normally this should be enforced by a [`doctest`](https://docs.python.org/3/library/doctest.html). mruberry do you know why we don't have such a check?
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D29556353
Pulled By: mruberry
fbshipit-source-id: 8dbc3f566f429618811b542a059d9abde9a6530b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60091
Closes #58383. (1) and (2) are implemented. (3) was rejected. No consensus was reached on (4) and (5).
Improvements:
- Instead of calling everything "Tensors" we now use "Scalars" and "Tensor-likes" depending on the shape. Plus, we now internally have the option to adapt this identifier for example to report "Imaginary components of complex tensor-likes", which is even more expressive.
- The reported conditions "not close" and "not equal" are now determined based on `rtol` and `atol`.
- The number of mismatched elements and the offending indices are only reported in case the inputs are not scalars.
- The allowed `rtol` and `atol` are only reported if `> 0`.
**Example 1**
```python
torch.testing.assert_close(1, 3, rtol=0, atol=1)
```
Before:
```
AssertionError: Tensors are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 2 at 0 (up to 1 allowed)
Greatest relative difference: 0.6666666865348816 at 0 (up to 0 allowed)
```
After:
```
AssertionError: Scalars are not close!
Absolute difference: 2 (up to 1 allowed)
Relative difference: 0.6666666865348816
```
**Example 2**
```python
torch.manual_seed(0)
t = torch.rand((2, 2), dtype=torch.complex64)
torch.testing.assert_close(t, t + complex(0, 1))
```
Before:
```
AssertionError: Tensors are not close!
Mismatched elements: 4 / 4 (100.0%)
Greatest absolute difference: 1.0000000596046448 at (0, 0) (up to 1e-05 allowed)
Greatest relative difference: 0.8833684352411922 at (0, 1) (up to 1.3e-06 allowed)
The failure occurred for the imaginary part.
```
After:
```
AssertionError: Imaginary components of tensor-likes are not close!
Mismatched elements: 4 / 4 (100.0%)
Greatest absolute difference: 1.0000000596046448 at index (0, 0) (up to 1e-05 allowed)
Greatest relative difference: 0.8833684352411922 at index (0, 1) (up to 1.3e-06 allowed)
```
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D29556357
Pulled By: mruberry
fbshipit-source-id: 559d4a19ad4fc069b2b4f8cb5fc2f6058621e33d
Summary:
This adds support for quantized tensors the same way `torch.testing._internal.common_utils.TestCase.assertEqual` does:
bf269fdc98/torch/testing/_internal/common_utils.py (L1314-L1341)
- `.qscheme()` is checked for equality
- `.q_scale` and `q_zero_point` are checked for equality (see comment below) for `.qscheme() == torch.per_tensor_affine`
- `.q_per_channel_scales`, `q_per_channel_zero_points`, and `q_per_channel_axis` are checked for equality (see comment below) for `.qscheme() == torch.per_channel_affine`
- values are checked with the default checks after a `.int_repr().to(torch.int32)` call
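A hedged sketch of what this enables (assuming a build with quantization support):

```python
import torch

x = torch.tensor([-1.0, 0.0, 1.0])
qa = torch.quantize_per_tensor(x, scale=0.1, zero_point=10, dtype=torch.quint8)
qb = torch.quantize_per_tensor(x, scale=0.1, zero_point=10, dtype=torch.quint8)

# qscheme, scale, and zero_point match; the values are then compared via
# their integer representation.
torch.testing.assert_close(qa, qb)
```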
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58926
Reviewed By: jerryzh168
Differential Revision: D29483532
Pulled By: mruberry
fbshipit-source-id: 003fde7e21cf844778a879c3de0a7c84d13877bd
Summary:
We need to resolve the conjugate bit for complex tensors, because otherwise we may not be able to access the imaginary component:
```python
>>> torch.tensor(complex(1, 1)).conj().imag
RuntimeError: view_as_real doesn't work on unresolved conjugated tensors. To resolve the conjugate tensor so you can view it as real, use self.resolve_conj(); however, be warned that the resulting tensor will NOT alias the original.
```
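A minimal sketch of the workaround described above:

```python
import torch

t = torch.tensor(complex(1, 1)).conj()
resolved = t.resolve_conj()  # materialized copy; does NOT alias `t`
imag = resolved.imag         # safe to view as real now
```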
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60522
Reviewed By: ngimel
Differential Revision: D29353095
Pulled By: mruberry
fbshipit-source-id: c36eaf883dd55041166f692f7b1d35cd2a34acfb
Summary:
This adds support for sparse tensors the same way `torch.testing._internal.common_utils.TestCase.assertEqual` does:
5c7dace309/torch/testing/_internal/common_utils.py (L1287-L1313)
- Tensors are coalesced before comparison.
- Indices and values are compared individually.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58844
Reviewed By: zou3519
Differential Revision: D29160250
Pulled By: mruberry
fbshipit-source-id: b0955656c2c7ff3db37a1367427ca54ca14f2e87
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58918
~Instead of a distinct `torch.testing.assert_close` and `torch.testing.assert_equal`, this makes `torch.testing.assert_equal` a special case of `torch.testing.assert_close` for `rtol=atol=0`. In this case the closeness definition `abs(actual - expected) <= atol + rtol * abs(expected)` boils down to `abs(actual - expected) <= 0`. Since `abs(x)` can never be `<0`, this is equivalent to `abs(a - b) == 0` and this again boils down to `a == b`.~
Following https://github.com/pytorch/pytorch/pull/58918#issuecomment-860642057 and some offline discussions, we opted to use `assert_equal` as an example of how to `partial` it.
This makes maintaining the module a lot easier, because we don't need to keep two functions in sync.
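The `partial` approach can be sketched as follows (the derived name is illustrative, not the shipped API):

```python
import functools

import torch

# An equality assertion derived from assert_close by zeroing both tolerances.
assert_equal = functools.partial(torch.testing.assert_close, rtol=0, atol=0)

assert_equal(torch.tensor([1.0, 2.0]), torch.tensor([1.0, 2.0]))
```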
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D29259404
Pulled By: mruberry
fbshipit-source-id: fa1a1fa93672a7ed1c5f0e4beb0dcd45b5c14fce
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58917
In #54780 we opted to return `Optional[Exception]` from all internal
helper functions. Since then multiple PRs added functionality that needs
to amend the error message. For this we recreate the error
09a1b1cf87/torch/testing/_asserts.py (L417-L430)
To untangle this a little, this PR introduces the `_TestingErrorMeta`,
which carries the exception type and the message. The idiom
```python
exc = check_foo()
if exc:
    return exc
```
is still valid although `exc` should be renamed to `error_meta` to
reflect the new nature. In the top-level functions
`assert_(equal|close)`
```python
exc = check_foo()
if exc:
    raise exc
```
changes to
```python
error_meta = check_foo()
if error_meta:
    raise error_meta.to_error()
```
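A hypothetical, simplified sketch of the idea (class and helper names are illustrative, not the actual `torch.testing` internals):

```python
from typing import List, Optional, Type


class TestingErrorMeta:
    """Carries an exception type plus message; the exception itself is only
    instantiated at the top level via to_error()."""

    def __init__(self, exc_type: Type[Exception], msg: str) -> None:
        self.exc_type = exc_type
        self.msg = msg

    def amend_msg(self, prefix: str = "", postfix: str = "") -> "TestingErrorMeta":
        # Returns a new meta object with the amended message.
        return TestingErrorMeta(self.exc_type, f"{prefix}{self.msg}{postfix}")

    def to_error(self) -> Exception:
        return self.exc_type(self.msg)


def check_len(actual: List, expected: List) -> Optional[TestingErrorMeta]:
    # Internal helpers return a meta object instead of raising.
    if len(actual) != len(expected):
        return TestingErrorMeta(AssertionError, "Lengths do not match!")
    return None


error_meta = check_len([1], [1, 2])
if error_meta:
    error = error_meta.amend_msg(prefix="Comparing sequences failed: ").to_error()
```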
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D29259405
Pulled By: mruberry
fbshipit-source-id: 9078fe326283d5aa3d0cf256bf007887df9bfbfb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58916
Using `pytest.UsageError` in case `pytest` is available adds almost
nothing as observed in
https://github.com/pytorch/pytorch/pull/53820#discussion_r593868752, but
makes it harder to maintain: due to the conditional import, `mypy` is
not able to handle `UsageError` in a type annotation.
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D29259409
Pulled By: mruberry
fbshipit-source-id: 82b00d13fa47db77383996d0caa69177804a48b6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58915
History:
- It was included for internal helper functions in the initial proposal
in #53820
- It was removed in #54780, since it is not honored when used with
`pytest`'s `--tb=native`, which is the default for PyTorch
Since PyTorch shouldn't be the only user of `assert_(equal|close)`, we
add it here to the top-level functions `assert_(equal|close)`. If
`pytest` is used without `--tb=native`, the tracebacks for
```python
assert torch.equal(actual, expected), "Tensors are not equal!"
torch.testing.assert_equal(actual, expected)
```
look the same, making the output more concise.
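For reference, a hedged sketch of the `__tracebackhide__` convention in a helper (the wrapper name is illustrative):

```python
import torch


def my_assert_equal(actual, expected):
    # pytest hides frames that set this flag when rendering non-native tracebacks.
    __tracebackhide__ = True
    torch.testing.assert_close(actual, expected, rtol=0, atol=0)


my_assert_equal(torch.tensor(1), torch.tensor(1))
```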
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D29259406
Pulled By: mruberry
fbshipit-source-id: acee47b30b7f14def27433f7d56a4b19d77393c0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58914
Only the top-level functions `assert_(equal|close)` should raise the
exception to keep the traceback manageable.
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D29259408
Pulled By: mruberry
fbshipit-source-id: 40dd52eec6f9e8166b3b239d5172ee44b749e8dc
Summary:
In contrast to the initial opinion in https://github.com/pytorch/pytorch/issues/55385, there are legitimate use cases for nested containers. One such example is the output of [`torch.nn.LSTM`](https://pytorch.org/docs/stable/generated/torch.nn.LSTM):
```python
output: Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]] = torch.nn.LSTM()(input)
assert_close(output, expected)
```
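A self-contained variant of the idea, using plain nested tensors instead of an actual `LSTM` output:

```python
import torch

actual = (torch.tensor(1.0), (torch.tensor(2.0), torch.tensor(3.0)))
expected = (torch.tensor(1.0), (torch.tensor(2.0), torch.tensor(3.0)))

# Nested sequences (and mappings) are traversed and compared pairwise.
torch.testing.assert_close(actual, expected)
```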
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57270
Reviewed By: albanD
Differential Revision: D28249303
Pulled By: mruberry
fbshipit-source-id: 75caa4414cc184ff0ce4cfc0dd5aafddfad42bcf
Summary:
Redo of https://github.com/pytorch/pytorch/issues/57135 out of stack
---
Currently all values are used for the reported absolute and relative differences. This usually works fine, but breaks down for the extremals:
```python
torch.testing.assert_close(torch.tensor([1.0, 0.0]), torch.tensor([2.0, 0.0]))
```
```
[...]
Greatest absolute difference: 1.0 at 0 (up to 1e-05 allowed)
Greatest relative difference: nan at 1 (up to 1.3e-06 allowed)
```
Although the second element matches, it is listed as the offender for the greatest relative difference. The `NaN` stems from the `0 / 0` division.
To overcome this, we should only use the values that were considered a mismatch for the reported stats.
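The fix can be sketched as follows: restrict the reported statistics to the mismatched elements, so a matching `0 / 0` pair can no longer contribute a `NaN`:

```python
import torch

actual = torch.tensor([1.0, 0.0])
expected = torch.tensor([2.0, 0.0])

mismatches = ~torch.isclose(actual, expected)
abs_diff = (actual - expected).abs()[mismatches]
rel_diff = abs_diff / expected.abs()[mismatches]

max_abs = abs_diff.max().item()  # only the mismatched first element contributes
max_rel = rel_diff.max().item()  # the matching 0 / 0 pair is excluded
```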
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57923
Reviewed By: ngimel
Differential Revision: D28317316
Pulled By: mruberry
fbshipit-source-id: 4c604493bbe13b37f41225ea9af9e839a7304161
Summary:
Redo of https://github.com/pytorch/pytorch/issues/56373 out of stack.
---
To reviewers: **please be nitpicky**. I've read this so often that I probably missed some typos and inconsistencies.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57247
Reviewed By: albanD
Differential Revision: D28247402
Pulled By: mruberry
fbshipit-source-id: 71142678ee5c82cc8c0ecc1dad6a0b2b9236d3e6
Summary:
Currently we require type equality for `torch.testing.assert_(equal|close)`:
3db45bcb91/torch/testing/_asserts.py (L509-L513)
That means `assert_equal(1, 1.0)` will correctly fail. Although the type of a scalar is similar to the dtype of a tensor, `assert_equal(1, 1.0, check_dtype=False)` will also fail, while `assert_equal(torch.as_tensor(1), torch.as_tensor(1.0), check_dtype=False)` will pass.
To make the interface more consistent, this PR relaxes the type equality constraint, by disabling it in case both inputs are scalars.
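A hedged before/after illustration of the relaxed behavior (using the shipped `assert_close` entry point):

```python
import torch

# Tensor inputs: the dtype check can be disabled, so this passes.
torch.testing.assert_close(
    torch.as_tensor(1), torch.as_tensor(1.0), check_dtype=False
)

# Scalar inputs: with the relaxed type constraint this now passes as well.
torch.testing.assert_close(1, 1.0, check_dtype=False)
```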
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57532
Reviewed By: ngimel
Differential Revision: D28242428
Pulled By: mruberry
fbshipit-source-id: b643c77f48b64fc2c8a43925120d2b634ec336b5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55890
Proof-of-concept for https://github.com/pytorch/pytorch/pull/55145#issuecomment-817297273
With this the user is able to pass a custom error message to `assert_(equal|close)` which will be used in case the values mismatch. Optionally, a callable can be passed which will be called with mismatch diagnostics and should return an error message:
```python
def make_msg(a, b, info):
return (
f"Argh, we found {info.total_mismatches} mismatches! "
f"That is {info.mismatch_ratio:.1%}!"
)
torch.testing.assert_equal(torch.tensor(1), torch.tensor(2), msg=make_msg)
```
If you imagine `a` and `b` as the outputs of binary ufuncs, the error message could look like this:
```python
def make_msg(input, torch_output, numpy_output, info):
return (
f"For input {input} torch.binary_op() and np.binary_op() do not match: "
f"{torch_output} != {numpy_output}"
)
torch.testing.assert_equal(
torch.binary_op(input),
numpy.binary_op(input),
msg=lambda a, b, info: make_msg(input, a, b, info),
)
```
This should make it much easier for developers to find out what is actually going wrong.
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D27903842
Pulled By: mruberry
fbshipit-source-id: 4c82e3d969e9a621789018018bec6399724cf388
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55786
Add support to compare scalars as well as `np.ndarray`'s with `torch.testing`. We are reusing the matching functionality that is already in place for tensors, by casting the inputs. The approach can easily be extended if we want to support other input types, as long as they can be cast to a tensor.
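Hedged examples of the extended input types (a current `torch` build is assumed; `numpy` arrays and Python scalars are cast to tensors internally):

```python
import numpy as np
import torch

# numpy arrays are cast to tensors and compared with the same machinery.
torch.testing.assert_close(np.array([1.0, 2.0]), np.array([1.0, 2.0]))

# Python scalars work as well; the default tolerances apply.
torch.testing.assert_close(1.0, 1.0 + 1e-7)
```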
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D27903814
Pulled By: mruberry
fbshipit-source-id: fe3d063d0c9513cbd8b3408a2023e94c490c817e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55385
This renames `assert_tensors_(equal|close)` to `_check_tensors_(equal|close)` and exposes two new functions: `assert_(equal|close)`. In addition to tensor pairs, the newly added functions also support the comparison of tensors in sequences or mappings. Otherwise their signature stays the same.
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D27903805
Pulled By: mruberry
fbshipit-source-id: 719d19a1d26de8d14cb25846e3d22a6ac828c80a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55145
Repeating the discussion from https://github.com/pytorch/pytorch/pull/54784#issuecomment-811792089
The error messages for mismatched values are directly adapted from the old `_compare_tensors_internal`:
50cb75edce/torch/testing/__init__.py (L104-L111)
A sample error message right now looks like this
```
With rtol=1.3e-06 and atol=1e-05, found 1 different element(s) out of 12 (8.3%). The greatest difference of 4.0 (5.0 vs. 9.0) occurred at index (2, 3)
```
Using the same data with `numpy.testing.assert_allclose` gives the following output:
```
Not equal to tolerance rtol=1.3e-06, atol=1e-05
Mismatched elements: 1 / 12 (8.33%)
Max absolute difference: 4.
Max relative difference: 0.44444445
x: array([[5., 5., 5., 5.],
[5., 5., 5., 5.],
[5., 5., 5., 5.]], dtype=float32)
y: array([[5., 5., 5., 5.],
[5., 5., 5., 5.],
[5., 5., 5., 9.]], dtype=float32)
```
Pros:
- The info is presented in a list instead of a sentence. IMO this makes it more readable
- The maximum relative difference is reported, which is beneficial in case a comparison fails due to the `rtol`
Cons:
- The values of the inputs are reported (this can be disabled by passing `verbose=False`, but let's face it: most users will use the default setting). In case the inputs are large, the output gets truncated with `...`. Not only is it hard to visually find the mismatching values, they could also live within the truncated part, making the output completely useless.
- Even when the offending values are found visually, it is hard to map them back to indices in the inputs.
This implements a mix of both to get a short but expressive message:
```
Tensors are not close according to rtol=1.3e-6 and atol=1e-05:
Mismatched elements: 1 / 12 (8.3%)
Max. rel. diff.: 4.44e-1 at (2, 3)
Max. abs. diff.: 4.0 at (2, 3)
```
Test Plan: Imported from OSS
Reviewed By: heitorschueroff
Differential Revision: D27877157
Pulled By: mruberry
fbshipit-source-id: 6898a995f116f127e3ae8ed0bcb1ada63eadc45a
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54211
This was a little more annoying than expected, because the `exclude = ` key in `mypy.ini` is weird. I'll file an upstream issue about that.
I ignored one file, `torch/distributed/elastic/agent/server/api.py` that had ~8 errors that were hard to figure out. This can be done in a follow-up.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55712
Reviewed By: walterddr
Differential Revision: D27694976
Pulled By: malfet
fbshipit-source-id: 228d8be6af040343ce46595dabaca212e69ccc68
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54781
Right now the functions have divergent names, with one postfixed `_equal` and the other `_allclose`. I've opted to use `_(equal|close)` over `_all(equal|close)` since I think it is a reasonable assumption that all values need to be equal or close for this to pass, even without explicitly naming the functions this way.
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D27438957
Pulled By: mruberry
fbshipit-source-id: 2951dac06d1430e15119ae94eafa234f3eb02f09
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54780
- In #53152 we opted to use `tb=native`. Thus, `__tracebackhide__` is not honored regardless of whether we use `pytest` to run the tests, and additional layers of helper functions make the traceback harder to parse. To overcome this, we change the internal helpers to return `ok: bool, msg: Optional[str]` and only raise the error in the top-level function. We already do that in the current implementation that we are trying to replace:
36ce673f16/torch/testing/__init__.py (L92-L93)
36ce673f16/torch/testing/__init__.py (L112)
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D27438849
Pulled By: mruberry
fbshipit-source-id: 3e7a33dabb45463c29e8b9736fad09efb523f18d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54769
Follow-up to #53820. This
- makes the `asserts.py` module private as per suggestion from rgommers in https://github.com/pytorch/pytorch/pull/53820#issuecomment-802661387. With this, the functions should only be accessible through `torch.testing`, giving us the option to change the underlying structure later.
- moves the code from `torch/testing/__init__.py` to `torch/testing/_core.py` (happy to accept other name suggestions). Otherwise we can't import the new `_asserts.py` in `torch/testing/__init__.py` due to circular imports.
Test Plan: Imported from OSS
Reviewed By: mrshenli
Differential Revision: D27438451
Pulled By: mruberry
fbshipit-source-id: c7292b4d5709185b42b4aac8016648562688040e