`torch.norm` is very odd. Some notable issues are:
- The default value of `"fro"` in `torch.norm` behaves oddly when `dim=None`. This is handled in the new dispatch.
- The treatment of the `dtype` argument in `torch.norm` was completely wrong. This should fix it.
- Some `out=` variants in the previous implementation were also wrong. This should fix those.
- This new dispatch should make some paths much faster. For example, `torch.norm(x)` where `x` is complex.
I'll try to make the changes in these PRs as incremental as possible, since this is a tricky one.
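For illustration, a minimal sketch of two of the paths mentioned above (nothing here is from the PR itself; exact outputs depend on the build):
```python
import torch

# Default p="fro" with dim=None: the full reduction handled by the new dispatch.
x = torch.randn(3, 4, dtype=torch.complex64)
torch.norm(x)  # the path that should be much faster for complex inputs

# The dtype argument (previously mishandled) upcasts the computation.
y = torch.randn(3, 4)
torch.norm(y, dtype=torch.float64)
```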
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81761
Approved by: https://github.com/ngimel
Part of #29137
**BC Breaking Note**
This PR breaks C++ API backward compatibility for `at::std`. A call that has argument types `at::std(Tensor, OptionalIntArrayRef, int64_t, bool)` used to resolve to the `std.correction` overload, but now it resolves to the `std.dim` overload. In order to call the `std.correction` overload, the `int64_t` argument can be wrapped in a `c10::optional`, so that the call has the form `at::std(Tensor, OptionalIntArrayRef, optional<int64_t>, bool)`. The same is true for the corresponding arguments of the `std.out` and `std.correction_out` overloads of `at::std_out`.
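A hedged sketch of the resolution change (illustrative only; the variable names and values are assumptions, not code from the PR):
```cpp
#include <ATen/ATen.h>
#include <vector>

void example() {
  at::Tensor t = at::randn({3, 4});
  std::vector<int64_t> dims = {0};

  // Used to resolve to std.correction; now resolves to std.dim, with the
  // int64_t silently converted to the bool `unbiased` parameter.
  at::Tensor a = at::std(t, dims, int64_t{1}, /*keepdim=*/false);

  // To keep calling std.correction, wrap the correction in c10::optional:
  at::Tensor b = at::std(t, dims, c10::optional<int64_t>(1), /*keepdim=*/false);
}
```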
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81845
Approved by: https://github.com/albanD
Fixes #76430
**The Problem:** `opt_dtype` wasn't being taken into consideration when checking whether the input dtype was floating point or complex.
**The Solution:** run those checks with the dtype returned by `get_dtype_from_self(self, opt_dtype, true)`.
This fix restores the original behavior from before #61643. It also improves the error message so that the user can better understand what happened. Finally, I also added two tests to ensure the issue was fixed.
-----
#### Before
```python
>>> a = torch.randint(0, 5, (5, 5), dtype=torch.int64)
>>> b = torch.tensor([], dtype=torch.float32)
>>> a.mean() # no dtype
RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead.
>>> a.mean(dtype=torch.float32) # with dtype
RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead.
>>> torch.mean(a, [], dtype=torch.float64, out=b) # with mismatching dtype and out dtype
RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead.
```
#### After
```python
>>> a = torch.randint(0, 5, (5, 5), dtype=torch.int64)
>>> b = torch.tensor([], dtype=torch.float32)
>>> a.mean() # no dtype
RuntimeError: mean(): at least one of (i) the input dtype and (ii) the desired output dtype should be either floating point or complex. Got (i) Long and (ii) None instead.
>>> a.mean(dtype=torch.float32) # with dtype
tensor(1.6800)
>>> torch.mean(a, [], dtype=torch.float64, out=b) # with mismatching dtype and out dtype
RuntimeError: Expected out tensor to have dtype double, but got float instead
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76584
Approved by: https://github.com/ngimel
The bounds could overflow when the number of bins is larger than the input type can represent, e.g. when uint8 inputs want 256 bins.
Thank you, Yang Xiaobo, for reporting a reproducing example in the forums.
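A sketch of the failure mode (illustrative arithmetic only, not the actual kernel code):
```python
import torch

# With uint8 inputs and 256 bins, the exclusive upper bound 255 + 1 does not
# fit in uint8, so computing it in the input dtype wraps around to 0.
hi = torch.tensor(255, dtype=torch.uint8)
print(hi + 1)                  # tensor(0, dtype=torch.uint8) -- overflow
print(hi.to(torch.int64) + 1)  # tensor(256) -- computing in a wider type avoids it
```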
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76979
Approved by: https://github.com/ngimel
This PR makes the following improvements:
- moves the custom skip list for `test_normalize_operator_exhaustive` in `test_fx_experimental` to the typical OpInfo skip architecture (see the sketch after this list). The skips were updated to xfails, which identified some operators that were no longer failing the test
- tests in `test_jit.py` that were redundant with OpInfo-based testing were removed
- `test_dtypes` was improved so that its error messages are clear; it now makes `test_nondifferentiable` redundant, and the latter has been removed
- `OpInfo.supports_complex_autograd()` is removed in favor of a more accurate and general check for whether the particular dtype is in the backward dtypes of the operator
- gradchecks have been improved to verify that an operator doesn't support grad if it claims not to
- gradchecks have been improved to test the gradient of all input tensors that require gradient
- the concept of "default test dtypes" has been removed
- excessive and mostly redundant out testing for elementwise unary operators has been removed
- metadata for whether an op supports nuanced "safe casting" to out behavior has been removed from OpInfos
- numerous skips have been converted to xfails
- numerous OpInfos have had their metadata fixed based on the new checks
- jit-specific utilities in `common_methods_invocations.py` have been moved to `jit_programming_utils.py`
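For reference, a hedged sketch of the OpInfo skip/xfail pattern mentioned in the first bullet (names follow `common_methods_invocations.py` at the time of this PR; details may differ):
```python
import unittest
from torch.testing._internal.common_methods_invocations import DecorateInfo

# Inside an OpInfo entry: unlike a plain skip, an expectedFailure starts
# failing the suite as soon as the operator stops failing the test.
skips = (
    DecorateInfo(unittest.expectedFailure,
                 'TestNormalizeOperators', 'test_normalize_operator_exhaustive'),
)
```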
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75951
Approved by: https://github.com/ngimel
Summary:
Partially fixes https://github.com/pytorch/pytorch/issues/66066
This PR:
- cleans up op-specific testing from `test_autograd`; `test_autograd` should be reserved for testing generic autograd functionality
- tests related to an operator are better colocated
- see the tracker for details
What to think about when moving tests to their correct test suite:
- naming: make sure it's not too generic
- how the test is parametrized: sometimes we need to add/remove a device/dtype parameter (see the sketch below)
- whether it can be merged with existing tests
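A hedged sketch of the device/dtype parametrization pattern referred to above (the test body is a made-up placeholder):
```python
import torch
from torch.testing._internal.common_device_type import (
    dtypes, instantiate_device_type_tests)
from torch.testing._internal.common_utils import TestCase, run_tests

class TestExample(TestCase):
    # The device-type framework instantiates this per device, and @dtypes
    # parametrizes it per dtype; a moved test may need these parameters
    # added or removed to match its new suite.
    @dtypes(torch.float32, torch.float64)
    def test_something(self, device, dtype):
        x = torch.ones(3, device=device, dtype=dtype)
        self.assertEqual(x.sum().item(), 3)

instantiate_device_type_tests(TestExample, globals())

if __name__ == '__main__':
    run_tests()
```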
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67413
Reviewed By: jbschlosser, albanD
Differential Revision: D32031480
Pulled By: soulitzer
fbshipit-source-id: 8e13da1e58a38d5cecbfdfd4fe2b4fe6f816897f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64273
Reintroduced `sample_inputs_prod` and constrained the range of values for large reference tests.
This reverts commit e4fd2ab59c.
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D30672097
Pulled By: heitorschueroff
fbshipit-source-id: b44ed8dfd5eb0c74c194164dafc3242f6728a78f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63554
Following https://github.com/pytorch/pytorch/pull/61840#issuecomment-884087809, this deprecates all the dtype getters publicly exposed in the `torch.testing` namespace. The reason for this is twofold:
1. If someone is not familiar with the C++ dispatch macros PyTorch uses, the names are misleading. For example, `torch.testing.floating_types()` will only give you `float32` and `float64`, skipping `float16` and `bfloat16`.
2. The dtype getters provide very minimal functionality that can be easily emulated by downstream libraries.
We thought about [providing a replacement](https://gist.github.com/pmeier/3dfd2e105842ad0de4505068a1a0270a), but ultimately decided against it. The major problem is BC: by keeping the getters, either the namespace gets messy again after a new dtype is added, or we need to somehow version their return values.
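For downstream code, emulation is a matter of spelling out the tuples, e.g. (a sketch, not an official replacement):
```python
import torch

# What torch.testing.floating_types() returned (note: no float16/bfloat16):
FLOATING_TYPES = (torch.float32, torch.float64)

# An explicit tuple makes the "skipped" dtypes visible and easy to extend:
ALL_FLOATING_TYPES = (torch.float16, torch.bfloat16, torch.float32, torch.float64)
```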
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D30662206
Pulled By: mruberry
fbshipit-source-id: a2bdb10ab02ae665df1b5b76e8afa9af043bbf56
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62737
ReductionOpInfo is a specialization of OpInfo for reduction operators. For now, it is designed to work with reductions that return a single tensor and that reduce all elements along one or more dimensions to a single value. In particular this excludes operators such as `max` and `min` that return multiple tensors and `quantile` that can return multiple values.
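Concretely, a small illustration of the scope using public ops:
```python
import torch

x = torch.randn(3, 4)
torch.sum(x, dim=1)  # reduces a dim to a single tensor: in scope for ReductionOpInfo
torch.max(x, dim=1)  # returns (values, indices): out of scope
torch.quantile(x, torch.tensor([0.25, 0.75]), dim=1)  # multiple values: out of scope
```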
fixes https://github.com/pytorch/pytorch/issues/49746
Test Plan: Imported from OSS
Reviewed By: ejguan
Differential Revision: D30406568
Pulled By: heitorschueroff
fbshipit-source-id: 218b1da1902f67bcf4c3681e2a0f0029a25d51f1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61083
It's already supported on CUDA, so it seems reasonable to support it on CPU as well. This also changes `test_nansum` to compare against `torch.sum`, since NumPy doesn't support BFloat16. Note that `test_nansum_vs_numpy` still checks against NumPy, so that comparison remains covered.
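A quick sketch of the newly supported path (output is illustrative):
```python
import torch

x = torch.tensor([1.0, float('nan'), 2.0], dtype=torch.bfloat16)  # CPU tensor
torch.nansum(x)  # tensor(3., dtype=torch.bfloat16); previously CUDA-only
```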
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D30006227
Pulled By: heitorschueroff
fbshipit-source-id: 1449730e1936417e7de1f8b3cf8cdcc15518873c