Fixes #76430
**The Problem:** `opt_dtype` was not being taken into account when checking whether the input dtype was floating point or complex.
**The Solution:** run those checks with the dtype returned by `get_dtype_from_self(self, opt_dtype, true)`.
This fix restores the original behavior from before #61643. It also improves the error message so that users can better understand what went wrong. Finally, two tests were added to ensure the issue stays fixed.
-----
#### Before
```python
>>> a = torch.randint(0, 5, (5, 5), dtype=torch.int64)
>>> b = torch.tensor([], dtype=torch.float32)
>>> a.mean() # no dtype
RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead.
>>> a.mean(dtype=torch.float32) # with dtype
RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead.
>>> torch.mean(a, [], dtype=torch.float64, out=b) # with mismatching dtype and out dtype
RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead.
```
#### After
```python
>>> a = torch.randint(0, 5, (5, 5), dtype=torch.int64)
>>> b = torch.tensor([], dtype=torch.float32)
>>> a.mean() # no dtype
RuntimeError: mean(): at least one of (i) the input dtype and (ii) the desired output dtype should be either floating point or complex. Got (i) Long and (ii) None instead.
>>> a.mean(dtype=torch.float32) # with dtype
tensor(1.6800)
>>> torch.mean(a, [], dtype=torch.float64, out=b) # with mismatching dtype and out dtype
RuntimeError: Expected out tensor to have dtype double, but got float instead
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76584
Approved by: https://github.com/ngimel
The bounds could overflow when the number of bins is larger than the input type can represent, e.g. when uint8 inputs request 256 bins.
Thank you, Yang Xiaobo, for reporting a reproducing example in the forums.
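A minimal sketch of the failure mode, assuming the bounds were computed in the input dtype (the snippet below is an illustration, not the kernel code that was changed):
```python
import torch

# With uint8 inputs and 256 requested bins, an exclusive upper bound such as
# `max_value + 1` computed in the input dtype wraps around instead of reaching 256.
max_value = torch.tensor(255, dtype=torch.uint8)
one = torch.tensor(1, dtype=torch.uint8)
print(max_value + one)  # tensor(0, dtype=torch.uint8) -- overflowed
```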
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76979
Approved by: https://github.com/ngimel
This PR makes the following improvements:
- moves the custom skip list for test_normalize_operator_exhaustive in test_fx_experimental to use the typical OpInfo skip architecture. The skips were updated to xfails, and that identified some operators which were no longer failing the test
- tests in test_jit.py that were redundant with OpInfo-based testing were removed
- test_dtypes was improved so its error messages are clear and it makes test_nondifferentiable redundant; the latter test has been removed
- OpInfo.supports_complex_autograd() is removed in favor of a more accurate and general test for whether the particular dtype is in the backward dtypes of the operator
- gradchecks have been improved to verify that an operator doesn't support grad if it claims not to
- gradchecks have been improved to test the gradient of all input tensors that require gradient
- the concept of "default test dtypes" has been removed
- excessive and mostly redundant out testing for elementwise unary operators has been removed
- metadata for whether an op supports nuanced "safe casting" to out behavior has been removed from OpInfos
- numerous skips have been converted to xfails (see the sketch after this list)
- numerous OpInfos have had their metadata fixed based on the new checks
- jit-specific utilities in common_methods_invocations.py have been moved to jit_programming_utils.py
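As referenced in the list above, a hedged sketch of what converting a skip to an xfail looks like in OpInfo metadata; the suite and test names follow the test_fx_experimental test mentioned above, but the fragment itself is illustrative rather than copied from the PR:
```python
import unittest
# DecorateInfo lives in torch.testing._internal.common_methods_invocations in the
# releases this PR targets; its module has moved around in later releases.
from torch.testing._internal.common_methods_invocations import DecorateInfo

# A skip silently omits the case; an expectedFailure ("xfail") starts reporting as
# soon as the underlying issue is fixed, which is how no-longer-failing operators
# were identified.
skip_entry = DecorateInfo(unittest.skip("Skipped!"),
                          'TestNormalizeOperators', 'test_normalize_operator_exhaustive')
xfail_entry = DecorateInfo(unittest.expectedFailure,
                           'TestNormalizeOperators', 'test_normalize_operator_exhaustive')
```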
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75951
Approved by: https://github.com/ngimel
Summary:
Partially fixes https://github.com/pytorch/pytorch/issues/66066
This PR:
- cleans up op-specific testing from test_autograd. test_autograd should be reserved for testing generic autograd functionality
- tests related to an operator are better colocated
- see the tracker for details
What to think about when moving tests to their correct test suite:
- naming; make sure it's not too generic
- how the test is parametrized; sometimes we need to add or remove a device/dtype parameter (see the sketch after this list)
- whether the test can be merged with existing tests
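A minimal sketch, using the device-generic test machinery, of the device/dtype parametrization the second bullet refers to; the class and test names are made up for illustration:
```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import instantiate_device_type_tests, dtypes

class TestFooAutograd(TestCase):
    @dtypes(torch.float32, torch.float64)
    def test_foo_backward(self, device, dtype):
        x = torch.randn(3, device=device, dtype=dtype, requires_grad=True)
        (x * 2).sum().backward()
        self.assertEqual(x.grad, torch.full_like(x, 2))

# Generates per-device variants (e.g. TestFooAutogradCPU) that pass `device` in.
instantiate_device_type_tests(TestFooAutograd, globals())

if __name__ == "__main__":
    run_tests()
```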
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67413
Reviewed By: jbschlosser, albanD
Differential Revision: D32031480
Pulled By: soulitzer
fbshipit-source-id: 8e13da1e58a38d5cecbfdfd4fe2b4fe6f816897f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64273
Reintroduced sample_inputs_prod and constrained the range of values for large reference tests.
This reverts commit e4fd2ab59c.
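A hedged sketch of a sample-inputs helper with a constrained value range; the actual `sample_inputs_prod` in this PR may differ in its cases and construction:
```python
import torch
from torch.testing import make_tensor
from torch.testing._internal.common_methods_invocations import SampleInput

def sample_inputs_prod_sketch(op_info, device, dtype, requires_grad, **kwargs):
    # Keep values in a small range so large reference reductions neither overflow
    # nor lose too much precision versus the reference implementation.
    def make_arg(shape):
        return make_tensor(shape, device=device, dtype=dtype,
                           low=-1, high=1, requires_grad=requires_grad)

    return [
        SampleInput(make_arg((4, 4))),
        SampleInput(make_arg((4, 4)), kwargs={'dim': 1, 'keepdim': True}),
    ]
```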
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D30672097
Pulled By: heitorschueroff
fbshipit-source-id: b44ed8dfd5eb0c74c194164dafc3242f6728a78f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63554
Following https://github.com/pytorch/pytorch/pull/61840#issuecomment-884087809, this deprecates all the dtype getters publicly exposed in the `torch.testing` namespace. The reason for this is twofold:
1. If someone is not familiar with the C++ dispatch macros PyTorch uses, the names are misleading. For example, `torch.testing.floating_types()` will only give you `float32` and `float64`, skipping `float16` and `bfloat16`.
2. The dtype getters provide very minimal functionality that can be easily emulated by downstream libraries (a minimal sketch follows below).
We thought about [providing a replacement](https://gist.github.com/pmeier/3dfd2e105842ad0de4505068a1a0270a), but ultimately decided against it. The major problem is BC: by keeping it, either the namespace gets messy again after a new dtype is added, or we need to somehow version the return values of the getters.
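As noted above, a minimal sketch of how a downstream library can emulate the removed getters; the constant names are illustrative:
```python
import torch

# Spell out exactly the dtypes a project cares about rather than relying on the
# deprecated torch.testing getters.
FLOATING_TYPES = (torch.float32, torch.float64)
FLOATING_TYPES_AND_HALF = FLOATING_TYPES + (torch.float16, torch.bfloat16)
COMPLEX_TYPES = (torch.complex64, torch.complex128)
FLOATING_AND_COMPLEX_TYPES = FLOATING_TYPES_AND_HALF + COMPLEX_TYPES
```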
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D30662206
Pulled By: mruberry
fbshipit-source-id: a2bdb10ab02ae665df1b5b76e8afa9af043bbf56
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62737
ReductionOpInfo is a specialization of OpInfo for reduction operators. For now, it is designed to work with reductions that return a single tensor and that reduce all elements along one or more dimensions to a single value. In particular, this excludes operators such as `max` and `min`, which return multiple tensors, and `quantile`, which can return multiple values.
fixes https://github.com/pytorch/pytorch/issues/49746
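For illustration, the distinction on a small tensor:
```python
import torch

x = torch.arange(6.).reshape(2, 3)

# Covered: reduces each slice along `dim` to a single value in a single tensor.
print(x.sum(dim=1))                                    # tensor([ 3., 12.])

# Excluded: returns multiple tensors (values and indices).
print(x.max(dim=1))

# Excluded: can return multiple values per reduced slice.
print(x.quantile(torch.tensor([0.25, 0.75]), dim=1))
```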
Test Plan: Imported from OSS
Reviewed By: ejguan
Differential Revision: D30406568
Pulled By: heitorschueroff
fbshipit-source-id: 218b1da1902f67bcf4c3681e2a0f0029a25d51f1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61083
It's already supported on CUDA, so it seems reasonable to support it on CPU as well. This also changes `test_nansum` to compare against `torch.sum`, since NumPy doesn't support BFloat16. Note that `test_nansum_vs_numpy` still checks against NumPy, so that comparison remains covered.
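For illustration, after this change the following also runs on CPU (a small usage sketch mirroring what the updated test checks):
```python
import torch

x = torch.tensor([1.0, float("nan"), 2.0], dtype=torch.bfloat16)
print(torch.nansum(x))                  # tensor(3., dtype=torch.bfloat16)

# NumPy has no BFloat16, so the test compares against torch.sum on a NaN-free copy:
print(torch.sum(torch.nan_to_num(x)))   # tensor(3., dtype=torch.bfloat16)
```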
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D30006227
Pulled By: heitorschueroff
fbshipit-source-id: 1449730e1936417e7de1f8b3cf8cdcc15518873c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50903
First part of #50010. Also fixes #51127.
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D27911345
Pulled By: mruberry
fbshipit-source-id: 7138fddc935802918ab9ff19f4bc1b9f4d745d41
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55640
Mean is broken for complex types: since #53218 it allocates the result as a real tensor, which discards the imaginary component. This wasn't picked up in testing because the per-op tests are defined as closures inside `_test_dim_ops` instead of as methods on the test class, so they never get run.
For best results, view diff with "Hide whitespace changes".
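A self-contained illustration (not the actual test code) of why such closures never run: unittest only collects `test_*` methods defined on the class, so inner functions are invisible to the runner.
```python
import unittest

class DimOpsExample(unittest.TestCase):
    def _test_dim_ops(self):
        # Closures like this are never collected, so their assertions never execute.
        def test_mean(self):
            self.fail("this would catch the regression, but it never runs")

if __name__ == "__main__":
    unittest.main()  # prints "Ran 0 tests" -- nothing above is executed
```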
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D27671127
Pulled By: mruberry
fbshipit-source-id: 4a1f6fea1048919fda7339c867ee78e88f2d7bd2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49267
This PR builds upon the PR https://github.com/pytorch/pytorch/pull/48711 by RockingJavaBean. The original PR introduced a BC breaking change by making the interpolation parameter positional. Thus, previous invocations of torch.quantile that did not include the interpolation parameter failed after the PR landed.
To avoid BC-breaking changes, we preserve the original signatures and make the interpolation parameter in the new signatures keyword-only. For now, interpolation cannot have a default value, to avoid ambiguity with the deprecated signature. However, due to limitations of codegen and C++, we cannot have a required argument after optional ones. Thus, this PR also makes dim and keepdim required arguments. Once we can remove the old signatures, the dim, keepdim, and interpolation parameters in the new signature will get their default values back.
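For reference, a small usage sketch against the public API as it exists in current releases (not code from this PR): existing calls without `interpolation` keep working, and `interpolation` is accepted only as a keyword argument.
```python
import torch

a = torch.randn(4, 4)
q = torch.tensor([0.25, 0.5, 0.75])

# Pre-existing call pattern, no interpolation argument:
print(torch.quantile(a, q, dim=1, keepdim=True).shape)           # torch.Size([3, 4, 1])

# New keyword-only interpolation argument:
print(torch.quantile(a, q, dim=1, interpolation="lower").shape)  # torch.Size([3, 4])
```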
__TODO__
---
- [ ] Run backward compat tests
This reverts commit 2f1d1eb7df.
Test Plan: Imported from OSS
Reviewed By: glaringlee
Differential Revision: D27337117
Pulled By: heitorschueroff
fbshipit-source-id: 7fe31f22027645e0d6cb3cab0392d532a4b362c9