Summary:
Implements the simple changes suggested in the short discussion of the issue. The documentation was updated to inform users that creating a tensor from a memory-mapped, read-only NumPy array will probably crash the program. The displayed warning message was also updated to mention the issues with using a memory-mapped, read-only NumPy array. Closes https://github.com/pytorch/pytorch/issues/46741.
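For context, a minimal sketch of the scenario the updated documentation and warning cover (the file name is hypothetical):

```python
import numpy as np
import torch

# Save an array and reopen it as a read-only memory map (hypothetical file name).
np.save("data.npy", np.arange(10, dtype=np.float32))
arr = np.load("data.npy", mmap_mode="r")   # non-writable, memory-mapped

# torch.from_numpy shares memory with the array; since the array is not
# writable, PyTorch emits the (now updated) UserWarning, and writing to the
# resulting tensor is undefined behavior that may crash the program.
t = torch.from_numpy(arr)
```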
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49516
Reviewed By: mrshenli
Differential Revision: D25746115
Pulled By: mruberry
fbshipit-source-id: 3e534a8f2bc1f083a2835440d324bd6f30798ad4
Summary:
The first commit fixes the `MultiheadAttention` docstrings, which are causing a cryptic KaTeX crash.
The second commit fixes many documentation issues in `torch/_torch_docs.py` and closes gh-43667 (missing "Keyword arguments" headers). It also fixes a weird duplicate docstring for `torch.argmin`; there are more of these, and it looks like they were written based on whether the C++ implementation has an overload. That makes little sense to a Python user, though, and the content is simply duplicated.
The `Shape:` heading for https://pytorch.org/docs/master/generated/torch.nn.MultiheadAttention.html looked bad; here's what it looks like with this PR:
<img width="475" alt="image" src="https://user-images.githubusercontent.com/98330/102797488-09a44e00-43b0-11eb-8788-acdf4e936f2f.png">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49684
Reviewed By: ngimel
Differential Revision: D25730909
Pulled By: mruberry
fbshipit-source-id: d25bcf8caf928e7e8e918017d119de12e10a46e9
Summary:
I am opening this PR early to have a place to discuss design issues.
The biggest difference between `torch.qr` and `numpy.linalg.qr` is that the former takes a boolean parameter `some=True`, while the latter takes a string parameter `mode='reduced'` which can be one of the following:
`reduced`
this is completely equivalent to `some=True`, and both are the default.
`complete`
this is completely equivalent to `some=False`.
`r`
this returns only `r` instead of a tuple `(q, r)`. We have already decided that we don't want different return types depending on the parameters, so I propose to return `(r, empty_tensor)` instead. I **think** that in this mode it will be impossible to implement the backward pass, so we should raise an appropriate error in that case.
`raw`
in this mode, it returns `(h, tau)` instead of `(q, r)`. Internally, `h` and `tau` are obtained by calling lapack's `dgeqrf` and are later used to compute the actual values of `(q, r)`. The numpy docs suggest that these might be useful to call other lapack functions, but at the moment none of them is exposed by numpy and I don't know how often it is used in the real world.
I suppose that implementing the backward pass needs some attention: the most straightforward solution is to use `(h, tau)` to compute `(q, r)` and then use the normal logic for `qr_backward`, but there might be faster alternatives.
`full`, `f`
alias for `reduced`, deprecated since numpy 1.8.0
`economic`, `e`
similar to `raw`, but it returns only `h` instead of `(h, tau)`. Deprecated since numpy 1.8.0
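For reference, a rough sketch of the corresponding NumPy modes (deprecated aliases omitted; shapes are for a tall 5x3 input):

```python
import numpy as np

a = np.random.randn(5, 3)

q, r = np.linalg.qr(a, mode='reduced')   # q: (5, 3), r: (3, 3) -- the default
q, r = np.linalg.qr(a, mode='complete')  # q: (5, 5), r: (5, 3)
r = np.linalg.qr(a, mode='r')            # only r: (3, 3); no q is returned
h, tau = np.linalg.qr(a, mode='raw')     # LAPACK geqrf output; h: (3, 5), tau: (3,)
```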
To summarize:
* `reduced`, `complete` and `r` are straightforward to implement.
* `raw` needs a bit of extra care, but I don't know how high priority it is: since it is rarely used, we might not want to support it right now and maybe implement it in the future?
* I think we should just leave `full` and `economic` out, and possibly add a note to the docs explaining what you need to use instead
/cc mruberry
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47764
Reviewed By: ngimel
Differential Revision: D25708870
Pulled By: mruberry
fbshipit-source-id: c25c70a23a02ec4322430d636542041e766ebe1b
Summary:
**BC-breaking Note:**
This PR updates PyTorch's digamma function to be consistent with SciPy's special.digamma function. This changes the result of the digamma function on the nonpositive integers, where the gamma function is not defined. Since the gamma function is undefined at these points, the (typical) derivative of the logarithm of the gamma function is also undefined at these points, and for negative integers this PR updates digamma to return NaN. For zero, however, it returns -inf to be consistent with SciPy.
Interestingly, SciPy made a similar change, which was noticed by at least one user: https://github.com/scipy/scipy/issues/9663#issue-396587679.
SciPy's returning of negative infinity at zero is intentional:
59347ae8b8/scipy/special/cephes/psi.c (L163)
This change is consistent with the C++ standard for the gamma function:
https://en.cppreference.com/w/cpp/numeric/math/tgamma
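A small sketch of the behavior after this change (output shown approximately):

```python
import torch

x = torch.tensor([0.0, -1.0, -2.0, 1.0])
torch.digamma(x)
# tensor([   -inf,     nan,     nan, -0.5772])
# -inf at zero (matching SciPy), NaN at the negative integers, and
# digamma(1) == -(Euler-Mascheroni constant) ~= -0.5772 as before.
```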
**PR Summary:**
Reference https://github.com/pytorch/pytorch/issues/42515
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48302
Reviewed By: ngimel
Differential Revision: D25664087
Pulled By: mruberry
fbshipit-source-id: 1168e81e218bf9fe5b849db0e07e7b22e590cf73
Summary:
**BC-Breaking Note:**
This PR updates PyTorch's angle operator to be consistent with NumPy's. Previously angle would return zero for all floating point values (including NaN). Now angle returns `pi` for negative floating point values, zero for non-negative floating point values, and propagates NaNs.
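A small sketch of the new behavior for real inputs (output shown approximately):

```python
import torch

x = torch.tensor([-2.0, 0.0, 3.0, float('nan')])
torch.angle(x)
# tensor([3.1416, 0.0000, 0.0000,    nan])
# pi for negative values, zero for non-negative values, NaN propagated,
# matching numpy.angle for real inputs.
```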
**PR Summary:**
Reference: https://github.com/pytorch/pytorch/issues/42515
TODO:
* [x] Add BC-breaking note (previously all real numbers returned `0`, even `nan`); fixed to match the behavior of NumPy.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49163
Reviewed By: ngimel
Differential Revision: D25681758
Pulled By: mruberry
fbshipit-source-id: 54143fe6bccbae044427ff15d8daaed3596f9685
Summary:
Related https://github.com/pytorch/pytorch/issues/38349
Implement NumPy-like function `torch.broadcast_to` to broadcast the input tensor to a new shape.
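A minimal usage sketch:

```python
import torch

x = torch.tensor([1, 2, 3])      # shape (3,)
torch.broadcast_to(x, (2, 3))    # broadcasts to shape (2, 3)
# tensor([[1, 2, 3],
#         [1, 2, 3]])
```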
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48997
Reviewed By: anjali411, ngimel
Differential Revision: D25663937
Pulled By: mruberry
fbshipit-source-id: 0415c03f92f02684983f412666d0a44515b99373
Summary:
BC-breaking note:
This PR changes the behavior of the any and all functions to always return a bool tensor. Previously these functions were only defined on bool and uint8 tensors, and when called on uint8 tensors they would also return a uint8 tensor. (When called on a bool tensor they would return a bool tensor.)
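A small sketch of the new behavior:

```python
import torch

x = torch.tensor([0, 1, 2], dtype=torch.uint8)
torch.any(x)         # tensor(True)
torch.any(x).dtype   # torch.bool (previously torch.uint8 for uint8 inputs)
torch.all(x).dtype   # torch.bool
```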
PR summary:
https://github.com/pytorch/pytorch/pull/44790#issuecomment-725596687
Fixes 2 and 3
Also Fixes https://github.com/pytorch/pytorch/issues/48352
Changes
* Output dtype is always `bool` (consistent with numpy) **BC-breaking** (previously the output dtype matched the input dtype)
* Uses vectorized version for all dtypes on CPU
* Enables test for complex
* Update doc for `torch.all` and `torch.any`
TODO
* [x] Update docs
* [x] Benchmark
* [x] Raise issue on XLA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47878
Reviewed By: H-Huang
Differential Revision: D25421263
Pulled By: mruberry
fbshipit-source-id: c6c681ef94004d2bcc787be61a72aa059b333e69
Summary:
Fix https://github.com/pytorch/pytorch/issues/48523
Related https://github.com/pytorch/pytorch/issues/38349
**BC-breaking Note:**
This PR updates PyTorch's quantile function to add the interpolation methods `lower`, `higher`, `nearest`, and `midpoint`, which are the interpolation methods currently supported by NumPy.
A new `interpolation` parameter is added to the signature of both `torch.quantile` and `torch.nanquantile`:
- `quantile(input, q, dim=None, interpolation='linear', keepdim=False, *, out=None) -> Tensor`
- `nanquantile(input, q, dim=None, interpolation='linear', keepdim=False, *, out=None) -> Tensor`
The function signatures follow the NumPy-like style for the moment, keeping `out` at the end to be consistent with PyTorch.
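A usage sketch of the new parameter (passed by keyword here):

```python
import torch

x = torch.arange(10, dtype=torch.float32)   # 0, 1, ..., 9; the 0.4 quantile index is 3.6

torch.quantile(x, 0.4)                            # tensor(3.6000), default 'linear'
torch.quantile(x, 0.4, interpolation='lower')     # tensor(3.)
torch.quantile(x, 0.4, interpolation='higher')    # tensor(4.)
torch.quantile(x, 0.4, interpolation='midpoint')  # tensor(3.5000)
torch.quantile(x, 0.4, interpolation='nearest')   # tensor(4.)
```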
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48711
Reviewed By: H-Huang
Differential Revision: D25428587
Pulled By: heitorschueroff
fbshipit-source-id: e98d24f6a651d302eb94f4ff4da18e38bdbf0124
Summary:
`torch.cholesky_solve` now works for complex inputs on GPU.
I moved the existing tests to `test_linalg.py` and modified them to test complex and float32 dtypes.
Differentiation also works correctly with complex inputs now.
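A small usage sketch with complex inputs (the same pattern works on CUDA tensors with this change):

```python
import torch

# Hermitian positive-definite complex matrix A = M M^H + I.
m = torch.randn(3, 3, dtype=torch.complex128)
a = m @ m.conj().t() + torch.eye(3, dtype=torch.complex128)
b = torch.randn(3, 2, dtype=torch.complex128)

u = torch.cholesky(a)            # lower-triangular Cholesky factor (default upper=False)
x = torch.cholesky_solve(b, u)   # solves A x = b
torch.allclose(a @ x, b)         # True, up to numerical tolerance
```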
Ref. https://github.com/pytorch/pytorch/issues/33152
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47047
Reviewed By: ngimel
Differential Revision: D24730020
Pulled By: mruberry
fbshipit-source-id: 95402da5789c56e5a682019790985207fa28fa1f
Summary:
Ref https://github.com/pytorch/pytorch/issues/42175
This removes the 4 deprecated spectral functions: `torch.{fft,rfft,ifft,irfft}`. `torch.fft` is also now imported by by default.
The actual `at::native` functions are still used in `torch.stft` so can't be full removed yet. But will once https://github.com/pytorch/pytorch/issues/47601 has been merged.
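For reference, a sketch of the replacement now that the `torch.fft` module is imported by default:

```python
import torch

x = torch.randn(8)
# The removed torch.rfft/torch.irfft style calls are replaced by the torch.fft module:
spectrum = torch.fft.rfft(x)                        # one-sided FFT of a real signal
recovered = torch.fft.irfft(spectrum, n=x.numel())  # inverse transform back to length 8
torch.allclose(x, recovered)                        # True, up to numerical tolerance
```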
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48594
Reviewed By: heitorschueroff
Differential Revision: D25298929
Pulled By: mruberry
fbshipit-source-id: e36737fe8192fcd16f7e6310f8b49de478e63bf0
Summary:
Relanding https://github.com/pytorch/pytorch/pull/46862
There was an issue with the simultaneous merge of two slightly conflicting PRs.
This PR adds `torch.lu_solve` for complex inputs both on CPU and GPU.
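A small usage sketch with complex inputs (the same pattern works on CUDA tensors):

```python
import torch

a = torch.randn(3, 3, dtype=torch.complex128)
b = torch.randn(3, 2, dtype=torch.complex128)

lu, pivots = torch.lu(a)            # LU factorization of a
x = torch.lu_solve(b, lu, pivots)   # solves a @ x = b using the factorization
torch.allclose(a @ x, b)            # True, up to numerical tolerance
```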
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48028
Reviewed By: linbinyu
Differential Revision: D25003700
Pulled By: zou3519
fbshipit-source-id: 24cd1babe9ccdbaa4e2ed23f08a9153d40d0f0cd
Summary:
The approach is to simply reuse `torch.repeat`, adding one more piece of functionality to tile: prepending 1's to the reps array if the tensor has more dimensions than the reps given as input. Thus, for a tensor of shape (64, 3, 24, 24), reps of (2, 2) become (1, 1, 2, 2), which is what NumPy does.
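A rough sketch of that reps-prepending step (the helper name is hypothetical, not the actual implementation):

```python
def _prepend_ones(tensor_ndim, reps):
    # Pad reps with leading 1's when the tensor has more dimensions than
    # reps entries, so a (64, 3, 24, 24) tensor with reps (2, 2) effectively
    # uses reps (1, 1, 2, 2), matching NumPy's np.tile.
    if len(reps) < tensor_ndim:
        reps = (1,) * (tensor_ndim - len(reps)) + tuple(reps)
    return reps

_prepend_ones(4, (2, 2))   # (1, 1, 2, 2)
```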
I've encountered some instability with the test on my end, where it could fail randomly (sometimes due to a random value of `self.dim()`, and sometimes due to segfaults). I'd appreciate any feedback on the test or an explanation for this instability so I can fix it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47974
Reviewed By: ngimel
Differential Revision: D25148963
Pulled By: mruberry
fbshipit-source-id: bf63b72c6fe3d3998a682822e669666f7cc97c58
Summary:
Adds ldexp operator for https://github.com/pytorch/pytorch/issues/38349
I'm not entirely sure the changes to `NamedRegistrations.cpp` were needed, but I saw other operators in there so I added ldexp as well.
Normally the ldexp operator is used along with frexp to construct and deconstruct floating point values. This is useful for performing operations on either the mantissa or the exponent portion of a floating point value.
Sleef, std math.h, and CUDA support both ldexp and frexp, but not for all data types. I wasn't able to figure out how to get the iterators to play nicely with a vectorized kernel, so I have left this with just the normal CPU kernel for now.
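A small usage sketch of the new operator (`ldexp(x, n) == x * 2**n`):

```python
import torch

mantissa = torch.tensor([0.5, 0.75, 1.0])
exponent = torch.tensor([3, 2, 1])
torch.ldexp(mantissa, exponent)   # tensor([4., 3., 2.]) == mantissa * 2 ** exponent
```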
This is the first operator I'm adding so please review with an eye for errors.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45370
Reviewed By: mruberry
Differential Revision: D24333516
Pulled By: ranman
fbshipit-source-id: 2df78088f00aa9789aae1124eda399771e120d3f
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47313
This PR implements `torch.addr` function using `TensorIterator` with `cpu_kernel_vec` and `gpu_kernel`.
It helps reduce memory usage, improve performance, and fix the bug when `beta` or `alpha` is a complex number.
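A quick usage sketch:

```python
import torch

vec1 = torch.arange(1., 4.)   # [1., 2., 3.]
vec2 = torch.arange(1., 3.)   # [1., 2.]
m = torch.zeros(3, 2)

# addr computes: beta * m + alpha * outer(vec1, vec2)
torch.addr(m, vec1, vec2, beta=1, alpha=2)
# tensor([[ 2.,  4.],
#         [ 4.,  8.],
#         [ 6., 12.]])
```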
Todo
- [x] benchmark `torch.addr` with the changes in this PR against the legacy TH implementation used in PyTorch 1.6.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47664
Reviewed By: zhangguanheng66
Differential Revision: D25059693
Pulled By: ngimel
fbshipit-source-id: 20a90824aa4cb2240e81a9f17a9e2f16ae6e3437
Summary:
Reference https://github.com/pytorch/pytorch/issues/38349
Delegates to `torch.transpose` (not sure what the best way to alias is).
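A rough sketch of what delegating to `torch.transpose` means here (the `swapaxes` name is assumed from NumPy; this Python wrapper is illustrative, not the actual aliasing mechanism):

```python
import torch

# Illustrative only: the alias simply forwards to torch.transpose,
# which swaps the two given dimensions.
def swapaxes(input, axis0, axis1):
    return torch.transpose(input, axis0, axis1)

x = torch.randn(2, 3, 4)
swapaxes(x, 0, 2).shape   # torch.Size([4, 3, 2])
```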
TODO:
* [x] Add test
* [x] Add documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46041
Reviewed By: gchanan
Differential Revision: D25022816
Pulled By: mruberry
fbshipit-source-id: c80223d081cef84f523ef9b23fbedeb2f8c1efc5