Summary:
The documentation for `torch.logspace` doesn't explain how integer dtypes are handled.
Add clarification and some tests for the case where `dtype` is integral.
The CUDA implementation is also updated to be consistent with the CPU implementation.
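As a hedged illustration of the clarified behavior (assuming, as this PR documents, that values are computed in floating point and then cast to the integral dtype, truncating toward zero):
```python
>>> torch.logspace(start=0, end=1, steps=5, dtype=torch.int32)
tensor([ 1,  1,  3,  5, 10], dtype=torch.int32)
```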
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47647
Reviewed By: gchanan
Differential Revision: D25843351
Pulled By: walterddr
fbshipit-source-id: 45237574d04c56992c18766667ff1ed71be77ac3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48668
Combine tests for `fmod` and `remainder`.
## BC-breaking Note:
In order to make `remainder` operator have type promotion, we have to introduce BC breaking.
### 1.7.1:
In the case where the second argument is a python number, the result is cast to the dtype of the first argument.
```python
>>> x = torch.arange(1, 6, dtype=torch.int32)
>>> torch.remainder(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```
### This PR:
In the case where the second argument is a python number, the dtype of the result is determined by type promotion of both inputs.
```python
>>> torch.remainder(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```
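For reference, the promoted dtype can be inspected with `torch.result_type` (a minimal sketch, with `x` as in the examples above):
```python
>>> x = torch.arange(1, 6, dtype=torch.int32)
>>> torch.result_type(x, 1.2)
torch.float32
```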
Test Plan: Imported from OSS
Reviewed By: mruberry
Differential Revision: D25869136
Pulled By: ejguan
fbshipit-source-id: 8e5e87eec605a15060f715952de140f25644008c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48278
Remove various lines from tests that assumed no type promotion, following the type promotion introduced in #47323.
## BC-breaking Note:
In order to make `fmod` operator have type promotion, we have to introduce BC breaking.
### 1.7.1:
In the case where the second argument is a python number, the result is cast to the dtype of the first argument.
```python
>>> x = torch.arange(1, 6, dtype=torch.int32)
>>> torch.fmod(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```
### Prior PR:
Check the BC-breaking note of #47323
### This PR:
In the case where the second argument is a python number, the dtype of the result is determined by type promotion of both inputs.
```python
>>> torch.fmod(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```
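For context, a brief hedged sketch of how `fmod` and `remainder` still differ for negative inputs even with type promotion (`fmod` takes the sign of the dividend, `remainder` the sign of the divisor):
```python
>>> torch.fmod(torch.tensor([-3., 3.]), 2)
tensor([-1.,  1.])
>>> torch.remainder(torch.tensor([-3., 3.]), 2)
tensor([1., 1.])
```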
Test Plan: Imported from OSS
Reviewed By: mruberry
Differential Revision: D25869137
Pulled By: ejguan
fbshipit-source-id: bce763926731e095b75daf2e934bff7c03ff0832
Summary:
BC-breaking note:
This PR changes the behavior of the any and all functions to always return a bool tensor. Previously these functions were only defined on bool and uint8 tensors, and when called on uint8 tensors they would also return a uint8 tensor. (When called on a bool tensor they would return a bool tensor.)
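A minimal illustration of the new behavior (a sketch; the dtype change is the BC-breaking part):
```python
>>> t = torch.tensor([0, 1, 2], dtype=torch.uint8)
>>> torch.all(t)
tensor(False)
>>> torch.any(t).dtype
torch.bool
```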
PR summary:
https://github.com/pytorch/pytorch/pull/44790#issuecomment-725596687
Fixes items 2 and 3 from the linked comment.
Also Fixes https://github.com/pytorch/pytorch/issues/48352
Changes
* Output dtype is always `bool` (consistent with NumPy). **BC-breaking** (previously the output dtype matched the input dtype)
* Uses vectorized version for all dtypes on CPU
* Enables test for complex
* Updates docs for `torch.all` and `torch.any`
TODO
* [x] Update docs
* [x] Benchmark
* [x] Raise issue on XLA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47878
Reviewed By: albanD
Differential Revision: D25714324
Pulled By: mruberry
fbshipit-source-id: a87345f725297524242d69402dfe53060521ea5d
Summary:
This is related to https://github.com/pytorch/pytorch/issues/42666 .
I am opening this PR to have the opportunity to discuss things.
First, we need to consider the differences between `torch.svd` and `numpy.linalg.svd`:
1. `torch.svd` takes `some=True`, while `numpy.linalg.svd` takes `full_matrices=True`, which is effectively the opposite (and with the opposite default, too!)
2. `torch.svd` returns `(U, S, V)`, while `numpy.linalg.svd` returns `(U, S, VT)` (i.e., V transposed).
3. `torch.svd` always returns a 3-tuple; `numpy.linalg.svd` returns only `S` in case `compute_uv==False`
4. `numpy.linalg.svd` also takes an optional `hermitian=False` argument.
I think that the plan is to eventually deprecate `torch.svd` in favor of `torch.linalg.svd`, so this PR does the following:
1. Rename/adapt the old `svd` C++ functions into `linalg_svd`: in particular, now `linalg_svd` takes `full_matrices` and returns `VT`
2. Re-implement the old C++ interface on top of the new (by negating `full_matrices` and transposing `VT`).
3. The C++ version of `linalg_svd` *always* returns a 3-tuple (we can't do anything else). So, there is a python wrapper which manually calls `torch._C._linalg.linalg_svd` to tweak the return value in case `compute_uv==False`.
Currently, `linalg_svd_backward` is broken because it has not yet been adapted to the `V ==> VT` change, but before spending more time on it I wanted to make sure that the general approach is fine.
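For reference, a hedged sketch of the intended correspondence between the two interfaces (real input shown; for complex inputs the conjugate transpose applies):
```python
a = torch.randn(5, 3)
u, s, v = torch.svd(a, some=True)                      # old interface: returns V
u2, s2, vt = torch.linalg.svd(a, full_matrices=False)  # new interface: returns VT
assert torch.allclose(v, vt.transpose(-2, -1))         # V is the transpose of VT
```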
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45562
Reviewed By: H-Huang
Differential Revision: D25803557
Pulled By: mruberry
fbshipit-source-id: 4966f314a0ba2ee391bab5cda4563e16275ce91f
Summary:
Implements the very simple changes suggested in the short discussion of the issue. The documentation is updated to inform users that creating a tensor from a read-only, memory-mapped NumPy array will probably crash the program. The displayed warning message was also updated to mention the issues with using read-only memory-mapped NumPy arrays. Closes https://github.com/pytorch/pytorch/issues/46741.
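A hedged sketch of the scenario in question (the file path is illustrative):
```python
import numpy as np
import torch

arr = np.load("data.npy", mmap_mode="r")  # read-only memory-mapped array
t = torch.from_numpy(arr)   # emits the (updated) non-writable-array UserWarning
t[0] = 1.0                  # writing through the tensor may crash the program
```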
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49516
Reviewed By: mrshenli
Differential Revision: D25746115
Pulled By: mruberry
fbshipit-source-id: 3e534a8f2bc1f083a2835440d324bd6f30798ad4
Summary:
The first commit fixes the `MultiheadAttention` docstrings, which are causing a cryptic KaTeX crash.
The second commit fixes many documentation issues in `torch/_torch_docs.py` and closes gh-43667 (missing "Keyword arguments" headers). It also fixes a weird duplicate docstring for `torch.argmin`; there are more of these, and it looks like they were written based on whether the C++ implementation has an overload. That makes little sense to a Python user, though, and the content is simply duplicated.
The `Shape:` heading for https://pytorch.org/docs/master/generated/torch.nn.MultiheadAttention.html looked bad; here's what it looks like with this PR:
<img width="475" alt="image" src="https://user-images.githubusercontent.com/98330/102797488-09a44e00-43b0-11eb-8788-acdf4e936f2f.png">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49684
Reviewed By: ngimel
Differential Revision: D25730909
Pulled By: mruberry
fbshipit-source-id: d25bcf8caf928e7e8e918017d119de12e10a46e9
Summary:
I am opening this PR early to have a place to discuss design issues.
The biggest difference between `torch.qr` and `numpy.linalg.qr` is that the former takes a boolean parameter `some=True`, while the latter takes a string parameter `mode='reduced'`, which can be one of the following:
* `reduced`: completely equivalent to `some=True`; both are the default.
* `complete`: completely equivalent to `some=False`.
* `r`: returns only `r` instead of the tuple `(q, r)`. We have already decided that we don't want different return types depending on the parameters, so I propose to return `(r, empty_tensor)` instead. I **think** that in this mode it will be impossible to implement the backward pass, so we should raise an appropriate error in that case.
* `raw`: returns `(h, tau)` instead of `(q, r)`. Internally, `h` and `tau` are obtained by calling LAPACK's `dgeqrf` and are later used to compute the actual values of `(q, r)`. The NumPy docs suggest that these might be useful for calling other LAPACK functions, but at the moment none of them is exposed by NumPy, and I don't know how often this mode is used in the real world. Implementing the backward pass needs attention here: the most straightforward solution is to use `(h, tau)` to compute `(q, r)` and then use the normal logic for `qr_backward`, but there might be faster alternatives.
* `full`, `f`: alias for `reduced`, deprecated since NumPy 1.8.0.
* `economic`, `e`: similar to `raw`, but returns only `h` instead of `(h, tau)`. Deprecated since NumPy 1.8.0.
To summarize:
* `reduced`, `complete` and `r` are straightforward to implement.
* `raw` needs a bit of extra care, but I don't know how high a priority it is: since it is rarely used, we might choose not to support it right now and maybe implement it in the future.
* I think we should just leave `full` and `economic` out, and possibly add a note to the docs explaining what to use instead. (A sketch of the proposed mapping follows this list.)
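A hedged sketch of the proposed mode mapping (names as proposed in this PR, not a final API):
```python
a = torch.randn(5, 3)
q, r = torch.linalg.qr(a, mode='reduced')   # equivalent to torch.qr(a, some=True)
q, r = torch.linalg.qr(a, mode='complete')  # equivalent to torch.qr(a, some=False)
empty, r = torch.linalg.qr(a, mode='r')     # proposed: r plus an empty tensor
```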
/cc mruberry
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47764
Reviewed By: ngimel
Differential Revision: D25708870
Pulled By: mruberry
fbshipit-source-id: c25c70a23a02ec4322430d636542041e766ebe1b
Summary:
**BC-breaking Note:**
This PR updates PyTorch's digamma function to be consistent with SciPy's special.digamma function. This changes the result of the digamma function on the nonpositive integers, where the gamma function is not defined. Since the gamma function is undefined at these points, the (typical) derivative of the logarithm of the gamma function is also undefined at these points, and for negative integers this PR updates digamma to return NaN. For zero, however, it returns -inf to be consistent with SciPy.
Interestingly, SciPy made a similar change, which was noticed by at least one user: https://github.com/scipy/scipy/issues/9663#issue-396587679.
SciPy's returning of negative infinity at zero is intentional:
59347ae8b8/scipy/special/cephes/psi.c (L163)
This change is consistent with the C++ standard for the gamma function:
https://en.cppreference.com/w/cpp/numeric/math/tgamma
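A minimal illustration of the behavior described in this note:
```python
>>> torch.digamma(torch.tensor([0., -1., -2.]))
tensor([-inf, nan, nan])
```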
**PR Summary:**
Reference https://github.com/pytorch/pytorch/issues/42515
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48302
Reviewed By: ngimel
Differential Revision: D25664087
Pulled By: mruberry
fbshipit-source-id: 1168e81e218bf9fe5b849db0e07e7b22e590cf73
Summary:
**BC-Breaking Note:**
This PR updates PyTorch's angle operator to be consistent with NumPy's. Previously angle would return zero for all floating point values (including NaN). Now angle returns `pi` for negative floating point values, zero for non-negative floating point values, and propagates NaNs.
**PR Summary:**
Reference: https://github.com/pytorch/pytorch/issues/42515
TODO:
* [x] Add BC-breaking note (previously all real numbers returned `0`, even `nan`); fixed to match NumPy's behavior.
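A minimal illustration of the new behavior described in this note:
```python
>>> torch.angle(torch.tensor([-1.0, 0.0, 1.0, float('nan')]))
tensor([3.1416, 0.0000, 0.0000,    nan])
```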
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49163
Reviewed By: ngimel
Differential Revision: D25681758
Pulled By: mruberry
fbshipit-source-id: 54143fe6bccbae044427ff15d8daaed3596f9685
Summary:
Related https://github.com/pytorch/pytorch/issues/38349
Implement NumPy-like function `torch.broadcast_to` to broadcast the input tensor to a new shape.
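A minimal usage sketch of the new function:
```python
>>> x = torch.tensor([1, 2, 3])
>>> torch.broadcast_to(x, (3, 3))
tensor([[1, 2, 3],
        [1, 2, 3],
        [1, 2, 3]])
```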
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48997
Reviewed By: anjali411, ngimel
Differential Revision: D25663937
Pulled By: mruberry
fbshipit-source-id: 0415c03f92f02684983f412666d0a44515b99373
Summary:
BC-breaking note:
This PR changes the behavior of the any and all functions to always return a bool tensor. Previously these functions were only defined on bool and uint8 tensors, and when called on uint8 tensors they would also return a uint8 tensor. (When called on a bool tensor they would return a bool tensor.)
PR summary:
https://github.com/pytorch/pytorch/pull/44790#issuecomment-725596687
Fixes items 2 and 3 from the linked comment.
Also Fixes https://github.com/pytorch/pytorch/issues/48352
Changes
* Output dtype is always `bool` (consistent with NumPy). **BC-breaking** (previously the output dtype matched the input dtype)
* Uses vectorized version for all dtypes on CPU
* Enables test for complex
* Updates docs for `torch.all` and `torch.any`
TODO
* [x] Update docs
* [x] Benchmark
* [x] Raise issue on XLA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47878
Reviewed By: H-Huang
Differential Revision: D25421263
Pulled By: mruberry
fbshipit-source-id: c6c681ef94004d2bcc787be61a72aa059b333e69
Summary:
Fix https://github.com/pytorch/pytorch/issues/48523
Related https://github.com/pytorch/pytorch/issues/38349
**BC-breaking Note:**
This PR updates PyTorch's quantile function to add the interpolation methods `lower`, `higher`, `nearest`, and `midpoint`, matching those currently supported by NumPy.
A new parameter, `interpolation`, is added to the signature of both `torch.quantile` and `torch.nanquantile`:
- `quantile(input, q, dim=None, interpolation='linear', keepdim=False, *, out=None) -> Tensor`
- `nanquantile(input, q, dim=None, interpolation='linear', keepdim=False, *, out=None) -> Tensor`
The function signatures follow the NumPy style for the moment, keeping `out` at the end to be consistent with PyTorch.
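A hedged sketch of how the new interpolation methods behave (values follow NumPy's definitions):
```python
>>> t = torch.tensor([0., 1., 2., 3.])
>>> torch.quantile(t, 0.4, interpolation='linear')
tensor(1.2000)
>>> torch.quantile(t, 0.4, interpolation='lower')
tensor(1.)
>>> torch.quantile(t, 0.4, interpolation='higher')
tensor(2.)
>>> torch.quantile(t, 0.4, interpolation='midpoint')
tensor(1.5000)
```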
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48711
Reviewed By: H-Huang
Differential Revision: D25428587
Pulled By: heitorschueroff
fbshipit-source-id: e98d24f6a651d302eb94f4ff4da18e38bdbf0124
Summary:
`torch.cholesky_solve` now works for complex inputs on GPU.
I moved the existing tests to `test_linalg.py` and modified them to test complex and float32 dtypes.
Differentiation also works correctly with complex inputs now.
Ref. https://github.com/pytorch/pytorch/issues/33152
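A minimal sketch of the newly supported path (assuming a CUDA device is available):
```python
# build a Hermitian positive-definite matrix
a = torch.randn(3, 3, dtype=torch.complex128, device='cuda')
a = a @ a.conj().transpose(-2, -1) + 3 * torch.eye(3, dtype=torch.complex128, device='cuda')
u = torch.cholesky(a)               # lower-triangular Cholesky factor
b = torch.randn(3, 2, dtype=torch.complex128, device='cuda')
x = torch.cholesky_solve(b, u)
assert torch.allclose(a @ x, b)     # holds up to numerical tolerance
```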
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47047
Reviewed By: ngimel
Differential Revision: D24730020
Pulled By: mruberry
fbshipit-source-id: 95402da5789c56e5a682019790985207fa28fa1f
Summary:
Ref https://github.com/pytorch/pytorch/issues/42175
This removes the 4 deprecated spectral functions: `torch.{fft,rfft,ifft,irfft}`. The `torch.fft` module is also now imported by default.
The actual `at::native` functions are still used in `torch.stft`, so they can't be fully removed yet, but they will be once https://github.com/pytorch/pytorch/issues/47601 has been merged.
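A hedged migration sketch (the old and new APIs differ in details such as output format, so the mapping is not one-to-one):
```python
import torch

x = torch.randn(8)
# the removed torch.rfft(x, 1) has no drop-in replacement; the torch.fft
# module (now imported by default) provides the new-style transforms:
X = torch.fft.rfft(x)
x_back = torch.fft.irfft(X, n=x.numel())
```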
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48594
Reviewed By: heitorschueroff
Differential Revision: D25298929
Pulled By: mruberry
fbshipit-source-id: e36737fe8192fcd16f7e6310f8b49de478e63bf0
Summary:
Relanding https://github.com/pytorch/pytorch/pull/46862
There was an issue with the simultaneous merge of two slightly conflicting PRs.
This PR adds `torch.lu_solve` for complex inputs both on CPU and GPU.
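A minimal sketch of the newly supported complex path (CPU shown; CUDA is analogous):
```python
a = torch.randn(3, 3, dtype=torch.complex64)
b = torch.randn(3, 2, dtype=torch.complex64)
lu_data, pivots = torch.lu(a)          # LU factorization, as exposed at the time of this PR
x = torch.lu_solve(b, lu_data, pivots)
assert torch.allclose(a @ x, b, atol=1e-5)
```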
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48028
Reviewed By: linbinyu
Differential Revision: D25003700
Pulled By: zou3519
fbshipit-source-id: 24cd1babe9ccdbaa4e2ed23f08a9153d40d0f0cd