Commit Graph

681 Commits

Author SHA1 Message Date
Nikita Shulga
22fff61f06 Revert D29794958: [pytorch][PR] changing trapz to trapezoid
Test Plan: revert-hammer

Differential Revision:
D29794958 (95cec8f4fa)

Original commit changeset: 60b9c07efd47

fbshipit-source-id: 2dcda2d62e01c2521a86ae5ed8246cfb686d3f64
2021-07-20 16:00:46 -07:00
Kevin Tse
95cec8f4fa changing trapz to trapezoid (#61475)
Summary:
This PR resolves issue https://github.com/pytorch/pytorch/issues/52606 while also adding support for complex numbers.
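
For illustration (an editor's sketch, not part of the original commit), the renamed function in use; the printed value is approximate:

```
import torch

# Integrate y = x^2 over [0, 2] with the trapezoidal rule.
# The exact integral is 8/3 ≈ 2.6667.
x = torch.linspace(0, 2, steps=101)
y = x ** 2
print(torch.trapezoid(y, x))  # ≈ tensor(2.6668)
```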

Stack from [ghstack](https://github.com/ezyang/ghstack):
* https://github.com/pytorch/pytorch/issues/61616
* https://github.com/pytorch/pytorch/issues/61615
* **https://github.com/pytorch/pytorch/issues/61475**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61475

Reviewed By: mruberry

Differential Revision: D29794958

Pulled By: NivekT

fbshipit-source-id: 60b9c07efd47fd85b9c8178768fc7828d7b57d29
2021-07-20 15:25:55 -07:00
Kurt Mohler
87334c40a7 Remove torch._bmm and remove torch.bmm deterministic arg documentation (#61629)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61571

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61629

Reviewed By: mrshenli

Differential Revision: D29774486

Pulled By: albanD

fbshipit-source-id: bfc9119c478f0244d5be681bcf4954a3eb97e542
2021-07-20 10:55:43 -07:00
lezcano
65616184bc [Docs] Bundle of errata and small corrections / improvements for torch.linalg docs (#61578)
Summary:
This PR bundles a number of errata detected in the linalg docs over the last few weeks.

- Simpler Cholesky deprecation rule
- Remove repeated consecutive words
- Correct cond with rcond in lstsq
- Correct examples of lstsq
- More concise examples
- Use the names of the inputs / outputs in the variables of the examples

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61578

Reviewed By: mrshenli

Differential Revision: D29757988

Pulled By: mruberry

fbshipit-source-id: a740a64826c065c1d7c1b8b498364d147008d76d
2021-07-20 09:58:09 -07:00
Tomasz Cheda
0263865bfe [Docs] Fix docs for torch.chunk (#61097)
Summary:
torch.chunk may silently return fewer chunks than requested if some undocumented division constraints are not met. The functionality that users expect is provided by another function: torch.tensor_split.

This has led to confusion countless times and who knows how many systems out there are fragile because of this.
My changes describe the discrepancy, show an example and direct users to the usually preferred function.

Issues mentioning this problem:
https://github.com/pytorch/pytorch/issues/9382
https://github.com/torch/torch7/issues/617

I considered documenting the constraint for when an unexpected number of chunks may be returned (it is `chunks * chunks > input.size[dim]`), so that users could quickly tell if their code may be affected. Please let me know if you think this should be in the docs or not.
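
An illustrative sketch of the discrepancy (editor's addition, not from the original commit):

```
import torch

t = torch.arange(7)
# torch.chunk silently returns fewer chunks than requested:
print(len(torch.chunk(t, 6)))         # 4
# torch.tensor_split always returns exactly the requested number:
print(len(torch.tensor_split(t, 6)))  # 6
```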

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61097

Reviewed By: heitorschueroff

Differential Revision: D29660280

Pulled By: ezyang

fbshipit-source-id: 675086bc8a8882c1685a50a2c083ae8dd1854384
2021-07-19 06:13:04 -07:00
Anjali Chourdia
287603f51c Revert D29698486: [pytorch][PR] Remove torch._bmm and remove torch.bmm deterministic arg documentation
Test Plan: revert-hammer

Differential Revision:
D29698486 (328606699f)

Original commit changeset: 5af2d3803ab1

fbshipit-source-id: ce954c13196b1fb8277d61a686ac351d3bf13903
2021-07-16 11:02:09 -07:00
Kurt Mohler
328606699f Remove torch._bmm and remove torch.bmm deterministic arg documentation (#61629)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61571

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61629

Reviewed By: zou3519

Differential Revision: D29698486

Pulled By: albanD

fbshipit-source-id: 5af2d3803ab1eb093616bcfc7e074d8b57ef6958
2021-07-16 09:18:34 -07:00
Kushashwa Ravi Shrimali
7e1f01d4c0 Alias for polygamma (#59691)
Summary:
See https://github.com/pytorch/pytorch/issues/50345

cc: mruberry kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59691

Reviewed By: gchanan

Differential Revision: D29707514

Pulled By: mruberry

fbshipit-source-id: 40c15e1fda3d9f7013977b0f36a77b228dda6aa5
2021-07-16 00:06:27 -07:00
kshitij12345
968a01a94a [special] migrate xlogy (#60641)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60641

Reviewed By: gchanan

Differential Revision: D29709306

Pulled By: mruberry

fbshipit-source-id: e8a5f64009a895a25618637de40b55cf36b8f794
2021-07-15 15:32:09 -07:00
Nikita Shulga
e2c3049e2a Delete stable-sort-only-works-on-cpu warning (#61685)
Summary:
stable GPU sorting is implemented by https://github.com/pytorch/pytorch/pull/56821
Fixes https://github.com/pytorch/pytorch/issues/61682

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61685

Reviewed By: gchanan

Differential Revision: D29704864

Pulled By: malfet

fbshipit-source-id: 3a5aa24bf6507be63844fe6016fb9e3c682f4d84
2021-07-15 13:34:41 -07:00
Anjali Chourdia
30e48bbeae Add neg bit (#56058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56058

User facing changes:
1. Adds a negative bit and a corresponding new API (`is_neg()`, `resolve_neg()`); see the usage sketch after the testing notes below.
2. `tensor.conj().imag` now returns a floating point tensor with neg bit set to 1 instead of a tensor with no notion of negative bit. Note that imag is still a view and all the view properties still hold for imag.

Non user facing changes:
1. Added a new Negative dispatch key and a backend fallback to handle it
2. Updated copy kernel to handle negative bit
3. Merged conjugate and negative bit fallback kernel
4. fixed https://github.com/pytorch/pytorch/issues/60478 (caused due to https://github.com/pytorch/pytorch/pull/54987)

Testing:
1. Added a new OpInfo based test `test_neg_view` (verifies that out-of-place and in-place operations work correctly for all operations when the input is a neg view tensor by checking the result against an actually negated tensor, verifies that autograd returns the same output for both neg view and actually negated tensors as well as it works fine when grad_out is a neg view).
2. Added a new test class containing `test_conj_view`, `test_neg_view`.
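
A minimal sketch of the user-facing API from item 1 above (editor's addition; the outputs are what the calls are expected to produce given the commit's description):

```
import torch

z = torch.tensor([1 + 2j, 3 - 4j])
v = z.conj().imag        # a view with the negative bit set
print(v.is_neg())        # True
print(v.resolve_neg())   # materialized: tensor([-2., 4.])
```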

Test Plan: Imported from OSS

Reviewed By: soulitzer

Differential Revision: D29636403

fbshipit-source-id: 12214c9dc4806c51850f4a72a109db9527c0ca63
2021-07-13 13:50:42 -07:00
Michael Dagitses
5897a60480 warn about SVD outputs not supporting backprop (#61037)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61037

* **#61037**

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D29491985

Pulled By: dagitses

fbshipit-source-id: 6322e7c86cade52671062ee97d2fcb8c15d8aa86
2021-07-12 12:55:37 -07:00
Kushashwa Ravi Shrimali
423523d8bb Alias for logsumexp to special namespace (#58838)
Summary:
See https://github.com/pytorch/pytorch/issues/50345

cc: kshitij12345 Lezcano mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58838

Reviewed By: malfet

Differential Revision: D29565033

Pulled By: mruberry

fbshipit-source-id: 9b715ea00c78f47b6f183357ee3c7d4c3abe4d01
2021-07-07 13:32:15 -07:00
Heitor Schueroff
f32f85e6da Implemented torch.corrcoef (#60420)
Summary:
Implements `torch.corrcoef` similar to [`np.corrcoef`](https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html) using `torch.cov` implemented in https://github.com/pytorch/pytorch/pull/58311.
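
An illustrative sketch (editor's addition): rows are treated as variables, matching `np.corrcoef`:

```
import torch

x = torch.tensor([[0., 1., 2.],
                  [2., 1., 0.]])  # two perfectly anti-correlated variables
print(torch.corrcoef(x))
# tensor([[ 1., -1.],
#         [-1.,  1.]])
```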

closes https://github.com/pytorch/pytorch/issues/1254

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60420

Reviewed By: mruberry

Differential Revision: D29474687

Pulled By: heitorschueroff

fbshipit-source-id: f3c7c5610363aebd88274a51fc77e3cf879cb611
2021-06-30 12:36:02 -07:00
Heitor Schueroff
ec9c03c234 Implemented torch.cov (#58311)
Summary:
Based on https://github.com/pytorch/pytorch/pull/50466

Adds the initial implementation of `torch.cov`, similar to `numpy.cov`. For simplicity, we removed support for many `numpy.cov` parameters that are either redundant, such as `bias`, or have simple workarounds, such as `y` and `rowvar`.
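
A minimal sketch (editor's addition): rows are variables, columns are observations, and the default correction is 1, as in `numpy.cov`:

```
import torch

x = torch.tensor([[0., 2., 4.],
                  [2., 1., 0.]])  # 2 variables, 3 observations each
print(torch.cov(x))
# tensor([[ 4., -2.],
#         [-2.,  1.]])
```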

cc PandaBoi

closes https://github.com/pytorch/pytorch/issues/19037

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58311

Reviewed By: jbschlosser

Differential Revision: D29431651

Pulled By: heitorschueroff

fbshipit-source-id: 167dea880f534934b145ba94291a9d634c25b01b
2021-06-29 14:02:39 -07:00
Kevin Tse
8cba365378 Fix incorrect doc about the dtype for torch.randint described in issue #56347 (#60507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60507

Fix incorrect documentation about the dtype for `torch.randint` described in issue #56347
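
For reference (editor's addition), the behavior the corrected documentation describes: `torch.randint` defaults to `torch.int64` rather than the global default dtype.

```
import torch

t = torch.randint(0, 10, (3,))
print(t.dtype)                    # torch.int64
print(torch.get_default_dtype())  # torch.float32 (unrelated to randint)
```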

Test Plan: Review documentation to make sure formatting is right

Reviewed By: bdhirsh

Differential Revision: D29321181

fbshipit-source-id: caae69a9bbb30052da518a3f5d22a7ed3504cdd2
2021-06-25 07:51:36 -07:00
lezcano
4e347f1242 [docs] Fix backticks in docs (#60474)
Summary:
There is a very common error when writing docs: One forgets to write a matching `` ` ``, and something like ``:attr:`x`` is rendered in the docs. This PR fixes most (all?) of these errors (and a few others).

I found these running ``grep -r ">[^#<][^<]*\`"`` on the `docs/build/html/generated` folder. The regex finds an HTML tag that does not start with `#` (as python comments in example code may contain backticks) and that contains a backtick in the rendered HTML.

This regex has not given any false positives in the current codebase, so I am inclined to suggest that we add this check to the CI. Would this be possible / reasonable / easy to do, malfet?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60474

Reviewed By: mrshenli

Differential Revision: D29309633

Pulled By: albanD

fbshipit-source-id: 9621e0e9f87590cea060dd084fa367442b6bd046
2021-06-24 06:27:41 -07:00
Akifumi Imanishi
26cdec6ce4 Support torch.bitwise_{left/right}_shift and __rlshift__, __rrshift__ (#59544)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58121

This PR implements `torch.bitwise_left_shift`, `torch.bitwise_right_shift`, and `torch.Tensor.{__rlshift__, __rrshift__}` for compatibility with the Python array API standard.
(cc: mruberry, rgommers, emcastillo, kmaehashi)
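
A quick sketch of the new functions (editor's addition):

```
import torch

a = torch.tensor([1, 2, 4])
print(torch.bitwise_left_shift(a, 1))   # tensor([2, 4, 8])
print(torch.bitwise_right_shift(a, 1))  # tensor([0, 1, 2])
print(2 << torch.tensor([1, 2]))        # via __rlshift__: tensor([4, 8])
```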

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59544

Reviewed By: ngimel

Differential Revision: D29348869

Pulled By: mruberry

fbshipit-source-id: 329aee296cf890735e8a9f858bccfe87c03d06ca
2021-06-23 23:57:16 -07:00
Ilqar Ramazanli
90cd57ee16 To add edge_order=2 and documentation for gradient operator (#58165)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56036
Fixes https://github.com/pytorch/pytorch/issues/56130

* All interior points of the gradient operator are computed with a second-order accurate central-differences method. However, we currently only have a first-order computation for the edge points. In this PR we add second-order methods for the edge points as well (see the sketch below).

* Currently, there is no detailed description of how the gradient operator is computed with the second-order method, nor of how to use its parameters correctly. We add a detailed explanation of the meaning of each parameter and of the gradient operator's return value, along with a description of the second-order computation.
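
The sketch referenced above (editor's addition): with `edge_order=2` the estimate is second-order accurate even at the boundaries, so a quadratic is differentiated exactly:

```
import torch

x = torch.tensor([0., 1., 2., 4.])  # non-uniform sample coordinates
y = x ** 2
(grad,) = torch.gradient(y, spacing=(x,), edge_order=2)
print(grad)  # ≈ tensor([0., 2., 4., 8.]), i.e. d/dx x^2 = 2x
```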

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58165

Reviewed By: mruberry

Differential Revision: D29305321

Pulled By: iramazanli

fbshipit-source-id: 0e0e418eed801c8510b8babe2ad3d064479fb4d6
2021-06-23 03:35:15 -07:00
Saketh Are
729f7cd52f Implement histogram operator on CPU (#58780)
Summary:
The existing [torch.histc](https://pytorch.org/docs/stable/generated/torch.histc.html) operator is limited in comparison to [numpy.histogram](https://numpy.org/doc/stable/reference/generated/numpy.histogram.html). This PR adds torch.histogram on CPU. The new operator replicates numpy.histogram's behavior, including support for caller-specified bin edges and weights. It was motivated by previous community requests for histogram.
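
An illustrative sketch mirroring `numpy.histogram` semantics (editor's addition):

```
import torch

t = torch.tensor([1., 2., 1., 4.])
hist, bin_edges = torch.histogram(t, bins=4, range=(0., 4.))
print(hist)       # tensor([0., 2., 1., 1.])
print(bin_edges)  # tensor([0., 1., 2., 3., 4.])
```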

The implementation was [benchmarked](https://docs.google.com/spreadsheets/d/1xCR0jODchVvwdVSAjiLsNCkmyictA6j1LNfDpWOafjw/edit?usp=sharing) against numpy.histogram as well as torch.histc. This implementation is weakly faster than numpy.histogram across all types of inputs tested, and performs in line with torch.histc for the limited inputs histc supports.

mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58780

Test Plan:
Added unit tests, OpInfo for the new torch.histogram operator.

Tested execution time on a variety of input sizes and compared to numpy.histogram performance: https://docs.google.com/spreadsheets/d/1xCR0jODchVvwdVSAjiLsNCkmyictA6j1LNfDpWOafjw/edit?usp=sharing

Reviewed By: ezyang

Differential Revision: D29134626

Pulled By: saketh-are

fbshipit-source-id: f2773085de1697f6bc6ffdeffe9a81267f51bdfc
2021-06-22 10:06:04 -07:00
kshitij12345
01e0296eb7 [special] migrate log1p, sinc, round to special namespace (#55878)
Summary:
Reference : https://github.com/pytorch/pytorch/issues/50345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55878

Reviewed By: zou3519, janeyx99

Differential Revision: D29160593

Pulled By: mruberry

fbshipit-source-id: f3ca9c541382bab33fb85d7817ce8ddc117c6826
2021-06-21 12:34:29 -07:00
Joel Schlosser
c645d39a77 Implementation of torch.isin() (#53125)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/3025

## Background

This PR implements a function similar to numpy's [`isin()`](https://numpy.org/doc/stable/reference/generated/numpy.isin.html#numpy.isin).

The op supports integral and floating point types on CPU and CUDA (+ half & bfloat16 for CUDA). Inputs can be one of:
* (Tensor, Tensor)
* (Tensor, Scalar)
* (Scalar, Tensor)
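
A minimal sketch of the Tensor/Tensor case (editor's addition):

```
import torch

elements = torch.tensor([1, 2, 3, 4])
test_elements = torch.tensor([2, 4, 6])
print(torch.isin(elements, test_elements))
# tensor([False,  True, False,  True])
print(torch.isin(elements, test_elements, invert=True))
# tensor([ True, False,  True, False])
```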

Internally, one of two algorithms is selected based on the number of elements vs. test elements. The heuristic for deciding which algorithm to use is taken from [numpy's implementation](fb215c7696/numpy/lib/arraysetops.py (L575)): if `len(test_elements) < 10 * len(elements) ** 0.145`, then a naive brute-force checking algorithm is used. Otherwise, a stablesort-based algorithm is used.

I've done some preliminary benchmarking to verify this heuristic on a devgpu, and determined for a limited set of tests that a power value of `0.407` instead of `0.145` is a better inflection point. For now, the heuristic has been left to match numpy's, but input is welcome for the best way to select it or whether it should be left the same as numpy's.

Tests are adapted from numpy's [isin and in1d tests](7dcd29aaaf/numpy/lib/tests/test_arraysetops.py).

Note: my locally generated docs look terrible for some reason, so I'm not including the screenshot for them until I figure out why.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53125

Test Plan:
```
python test/test_ops.py   # Ex: python test/test_ops.py TestOpInfoCPU.test_supported_dtypes_isin_cpu_int32
python test/test_sort_and_select.py   # Ex: python test/test_sort_and_select.py TestSortAndSelectCPU.test_isin_cpu_int32
```

Reviewed By: soulitzer

Differential Revision: D29101165

Pulled By: jbschlosser

fbshipit-source-id: 2dcc38d497b1e843f73f332d837081e819454b4e
2021-06-14 13:50:53 -07:00
Kushashwa Ravi Shrimali
cf38b20c61 Alias for digamma as psi to special namespace (#59143)
Summary:
See https://github.com/pytorch/pytorch/issues/50345

cc: mruberry kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59143

Reviewed By: jbschlosser

Differential Revision: D28986909

Pulled By: mruberry

fbshipit-source-id: bc8ff0375de968f3662b224689fa0a6b117f9c4e
2021-06-14 03:05:14 -07:00
Mike Ruberry
92513038e8 Revert D28994140: [pytorch][PR] Implemented torch.cov
Test Plan: revert-hammer

Differential Revision:
D28994140 (23c232554b)

Original commit changeset: 1890166c0a9c

fbshipit-source-id: 73dfe1b00464e38f004f99960cdeeb604ed4b20a
2021-06-13 02:33:37 -07:00
Heitor Schueroff
23c232554b Implemented torch.cov (#58311)
Summary:
Based on https://github.com/pytorch/pytorch/pull/50466

Adds the initial implementation of `torch.cov`, similar to `numpy.cov`. For simplicity, we removed support for many `numpy.cov` parameters that are either redundant, such as `bias`, or have simple workarounds, such as `y` and `rowvar`.

cc PandaBoi

TODO

- [x] Improve documentation

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58311

Reviewed By: mruberry

Differential Revision: D28994140

Pulled By: heitorschueroff

fbshipit-source-id: 1890166c0a9c01e0a536acd91571cd704d632f44
2021-06-11 09:40:50 -07:00
Saketh Are
05b571ee8e fix name of 'dims' kwarg in torch.tile docs (#59471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59471

Fixes #59150
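
For reference (editor's addition), the keyword argument the docs now correctly name, assuming the usual `torch.tile` schema:

```
import torch

x = torch.tensor([1, 2])
print(torch.tile(x, dims=(2,)))  # tensor([1, 2, 1, 2])
```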

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D28908569

Pulled By: saketh-are

fbshipit-source-id: 57d0e75d899a1d9979e8bdb20dfd2b136dd63d1b
2021-06-07 13:18:19 -07:00
Akifumi Imanishi
0a5bfa9919 Support __rmod__ (#58476)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58035.

This PR implements `torch.Tensor.__rmod__` and `torch.remainder(scalar, tensor)` for compatibility with NumPy's interface.
(cc: mruberry, rgommers, emcastillo, kmaehashi)
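
A quick sketch of the new scalar-tensor support (editor's addition):

```
import torch

t = torch.tensor([1, 2, 3])
print(5 % t)                  # via Tensor.__rmod__: tensor([0, 1, 2])
print(torch.remainder(5, t))  # same result: tensor([0, 1, 2])
```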

TODO:
  - [x] Update `tensor_binary_op` in test/test_binary_ufuncs.py after https://github.com/pytorch/pytorch/issues/58216 is merged.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58476

Reviewed By: ngimel

Differential Revision: D28776810

Pulled By: mruberry

fbshipit-source-id: 74f8aea80f439ef2cc370333524e39971eeb7bf4
2021-06-05 16:19:24 -07:00
anjali411
3607478ecd Conjugate View (#54987)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54987

Based off of ezyang (https://github.com/pytorch/pytorch/pull/44799) and bdhirsh (https://github.com/pytorch/pytorch/pull/43702) 's prototype:

Here's a summary of the changes in this PR:
This PR adds a new dispatch key called Conjugate. This enables us to make conjugate operation a view and leverage the specialized library functions that fast path with the hermitian operation (conj + transpose).

1. Conjugate operation will now return a view with conj bit (1) for complex tensors and returns self for non-complex tensors as before. This also means `torch.view_as_real` will no longer be a view on conjugated complex tensors and is hence disabled. To fill the gap, we have added `torch.view_as_real_physical` which would return the real tensor agnostic of the conjugate bit on the input complex tensor. The information about conjugation on the old tensor can be obtained by calling `.is_conj()` on the new tensor.
2. NEW API:
    a) `.conj()` -- now returning a view.
    b) `.conj_physical()` -- does the physical conjugate operation. If the conj bit for input was set, you'd get `self.clone()`, else you'll get a new tensor with conjugated value in its memory.
    c) `.conj_physical_()`, and `out=` variant
    d) `.resolve_conj()`  -- materializes the conjugation. returns self if the conj bit is unset, else returns a new tensor with conjugated values and conj bit set to 0.
    e) `.resolve_conj_()` in-place version of (d)
    f) `view_as_real_physical` -- as described in (1), it's functionally same as `view_as_real`, just that it doesn't error out on conjugated tensors.
    g) `view_as_real` -- existing function, but now errors out on conjugated tensors.
3. Conjugate Fallback
    a) Vast majority of PyTorch functions would currently use this fallback when they are called on a conjugated tensor.
    b) This fallback is well equipped to handle the following cases:
        - functional operation e.g., `torch.sin(input)`
        - Mutable inputs and in-place operations e.g., `tensor.add_(2)`
        - out-of-place operation e.g., `torch.sin(input, out=out)`
        - Tensorlist input args
        - NOTE: Meta tensors don't work with conjugate fallback.
4. Autograd
    a) `resolve_conj()` is an identity function w.r.t. autograd
    b) Everything else works as expected.
5. Testing:
    a) All method_tests run with conjugate view tensors.
    b) OpInfo tests that run with conjugate views
        - test_variant_consistency_eager/jit
        - gradcheck, gradgradcheck
        - test_conj_views (that only run for `torch.cfloat` dtype)

NOTE: functions like `empty_like`, `zeros_like`, `randn_like`, `clone` don't propagate the conjugate bit.
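
A minimal sketch of the new API from item 2 above (editor's addition, not part of the original commit):

```
import torch

z = torch.tensor([1 + 2j, 3 - 4j])
c = z.conj()                  # now a view; no data is copied
print(c.is_conj())            # True
m = c.resolve_conj()          # materializes: tensor([1.-2.j, 3.+4.j])
print(torch.view_as_real(m))  # fine once materialized; errors on c itself
```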

Follow up work:
1. conjugate view RFC
2. Add neg bit to re-enable view operation on conjugated tensors
3. Update linalg functions to call into specialized functions that fast path with the hermitian operation.

Test Plan: Imported from OSS

Reviewed By: VitalyFedyunin

Differential Revision: D28227315

Pulled By: anjali411

fbshipit-source-id: acab9402b9d6a970c6d512809b627a290c8def5f
2021-06-04 14:12:41 -07:00
Jeffrey Wan
4ae5764d47 Add is_inference to native functions (#58729)
Summary:
Adds `is_inference` as a native function w/ manual cpp bindings.
Also changes instances of `is_inference_tensor` to `is_inference` to be consistent with other properties such as `is_complex`.
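
A quick sketch of the new property (editor's addition):

```
import torch

with torch.inference_mode():
    t = torch.ones(3)
print(t.is_inference())  # True
```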

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58729

Reviewed By: mruberry

Differential Revision: D28874507

Pulled By: soulitzer

fbshipit-source-id: 0fa6bcdc72a4ae444705e2e0f3c416c1b28dadc7
2021-06-04 08:59:11 -07:00
Kushashwa Ravi Shrimali
44c20ce676 Alias for i0 to special namespace (#59141)
Summary:
See https://github.com/pytorch/pytorch/issues/50345

cc: mruberry kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59141

Reviewed By: ngimel

Differential Revision: D28784097

Pulled By: mruberry

fbshipit-source-id: 9b61a21906ef337292686fd40e328502a79e6f09
2021-06-01 23:04:09 -07:00
Kushashwa Ravi Shrimali
0c1420aa3c OpInfo: fmod and remainder (#57941)
Summary:
See https://github.com/pytorch/pytorch/issues/54261

cc: mruberry Lezcano kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57941

Reviewed By: mrshenli

Differential Revision: D28744464

Pulled By: mruberry

fbshipit-source-id: 19847277d4f8d3a39a706c2b3c9eddf0dedcb20c
2021-05-27 20:32:56 -07:00
Jeffrey Wan
9e60c7dee3 Add docstring for is_inference_mode_enabled (#59047)
Summary:
Fixes #{issue number}

Testing:
```
>>> import torch
>>> torch.is_inference_mode_enabled.__doc__
'\nis_inference_mode_enabled(input) -> (bool)\n\nReturns True if inference mode is currently enabled.\n\nArgs:\n    input (Tensor): the input tensor.\n'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59047

Reviewed By: ailzhang

Differential Revision: D28726991

Pulled By: soulitzer

fbshipit-source-id: c117c7d73e551a1b5f0e215f2aed528bf558ef7c
2021-05-26 19:27:33 -07:00
Serhat Yilmaz
b4f3a989da [torch][repeat_interleave] Fix ambiguous function call (#58881)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58881

PR https://github.com/pytorch/pytorch/pull/58417 recently added a new parameter to this function.

However, this introduced ambiguity when making the call below:
  some_tensor.repeat_interleave(some_integer_value)

Making the new parameter optional to avoid the issue.
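
For illustration (editor's addition), the call that became ambiguous and continues to work once the parameter is optional:

```
import torch

t = torch.tensor([1, 2, 3])
print(t.repeat_interleave(2))  # tensor([1, 1, 2, 2, 3, 3])
```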

Reviewed By: ezyang, ngimel

Differential Revision: D28653820

fbshipit-source-id: 5bc0b1f326f069ff505554b51e3b24d60e69c843
2021-05-25 00:31:32 -07:00
Serhat Yilmaz
4ca4640bae [torch][repeat_interleave] remove stream synchronization if output size is given (#58417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58417

Same as title.
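
A sketch of the new path (editor's addition): passing `output_size` (the sum of `repeats`) lets the kernel skip the device-to-host synchronization otherwise needed to size the output:

```
import torch

x = torch.tensor([10, 20, 30])
repeats = torch.tensor([2, 1, 2])
print(torch.repeat_interleave(x, repeats, output_size=5))
# tensor([10, 10, 20, 30, 30])
```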

Test Plan:
Rely on CI signal.

Update unit test to exercise new code path as well.

Reviewed By: ngimel

Differential Revision: D28482927

fbshipit-source-id: 3ec8682810ed5c8547b1e8d3869924480ce63dcd
2021-05-22 20:53:28 -07:00
lezcano
d8c6b74b0b Deprecate torch.solve (#57741)
Summary:
Deprecate deprecate deprecate.
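
The recommended migration, as an editor's sketch (note the swapped argument order between the old and new APIs):

```
import torch

A = torch.randn(3, 3)
B = torch.randn(3, 2)
# Deprecated: X, _ = torch.solve(B, A)
X = torch.linalg.solve(A, B)
print(torch.allclose(A @ X, B, atol=1e-5))  # True
```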

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57741

Reviewed By: agolynski

Differential Revision: D28379337

Pulled By: mruberry

fbshipit-source-id: a7a35ce1d3f25d8593698d89761c6c2d940db31a
2021-05-13 09:54:21 -07:00
lezcano
db13119fc4 Deprecate symeig (#57732)
Summary:
This one had a tricky usage of `torch.symeig` that had to be replaced. I tested the replacement locally though.
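
The typical replacement, as an editor's sketch:

```
import torch

A = torch.randn(3, 3)
A = A + A.T  # symmetrize
# Deprecated: w, v = torch.symeig(A, eigenvectors=True)
w, v = torch.linalg.eigh(A)  # preferred replacement
```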

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57732

Reviewed By: bdhirsh

Differential Revision: D28328189

Pulled By: mruberry

fbshipit-source-id: 7f000fcbf2b029beabc76e5a89ff158b47977474
2021-05-12 02:21:35 -07:00
Nikita Vedeneev
c790fd2bf8 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function (so its gradient can be implemented via `autograd.Function`), `torch.lu_solve` is a native function and cannot access `torch.lu_unpack`, which is implemented in Python.

Hence this PR presents a native (ATen) `lu_unpack` version. With this function it is also possible to update the gradients for `torch.lu` so that backward+JIT is supported (there is no JIT support for `autograd.Function`).

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~
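
A minimal usage sketch (editor's addition):

```
import torch

A = torch.randn(3, 3)
LU, pivots = torch.lu(A)
P, L, U = torch.lu_unpack(LU, pivots)
print(torch.allclose(P @ L @ U, A, atol=1e-5))  # True
```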

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: albanD

Differential Revision: D28355725

Pulled By: mruberry

fbshipit-source-id: 281260f3b6e93c15b08b2ba66d5a221314b00e78
2021-05-11 22:53:21 -07:00
Ilqar Ramazanli
8b816e9010 To implement gradient for Pytorch (#54617)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56129

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54617

Reviewed By: anjali411

Differential Revision: D28057452

Pulled By: iramazanli

fbshipit-source-id: 9bd86679282d34f5e5393e6447121586517eb4f0
2021-05-11 18:52:20 -07:00
Ivan Yashchuk
aaca12bcc2 Deprecate in docs torch.svd and change svd -> linalg_svd (#57981)
Summary:
This PR adds a note to the documentation that torch.svd is deprecated together with an upgrade guide on how to use `torch.linalg.svd` and `torch.linalg.svdvals` (Lezcano's instructions from https://github.com/pytorch/pytorch/issues/57549).
In addition, all usage of the old svd function is replaced with the new one from the torch.linalg module, except in the `at::linalg_pinv` function, which fails the XLA CI build (https://github.com/pytorch/xla/issues/2755; see the failure in draft PR https://github.com/pytorch/pytorch/pull/57772).
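
The migration in brief, as an editor's sketch (torch.svd's V corresponds to the conjugate transpose Vh returned by linalg.svd):

```
import torch

A = torch.randn(4, 3)
# Deprecated: U, S, V = torch.svd(A)
U, S, Vh = torch.linalg.svd(A, full_matrices=False)  # V = Vh.transpose(-2, -1).conj()
S_only = torch.linalg.svdvals(A)  # when only singular values are needed
```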

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57981

Reviewed By: ngimel

Differential Revision: D28345558

Pulled By: mruberry

fbshipit-source-id: 02dd9ae6efe975026e80ca128e9b91dfc65d7213
2021-05-11 18:04:10 -07:00
lezcano
7707efed8f Deprecate matrix_rank (#57734)
Summary:
This one's straightforward

**BC-breaking Note**

This PR deprecates matrix_rank in favor of linalg.matrix_rank. An upgrade guide from matrix_rank to linalg.matrix_rank is provided in the documentation of matrix_rank.

It DOES NOT remove matrix_rank.
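
The replacement call, as an editor's sketch:

```
import torch

A = torch.randn(4, 4)
# Deprecated: torch.matrix_rank(A)
print(torch.linalg.matrix_rank(A))  # preferred replacement
```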

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57734

Reviewed By: bdhirsh

Differential Revision: D28318301

Pulled By: mruberry

fbshipit-source-id: b9a27f58fdad72f408ca8b83a70c9b1fc2ef28e9
2021-05-10 23:58:46 -07:00
lezcano
415ae54c31 Deprecate torch.eig (#57727)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57727

Reviewed By: bdhirsh

Differential Revision: D28317984

Pulled By: mruberry

fbshipit-source-id: fa1aa1b78fd3611ac208bca93e2b745a1bac41f1
2021-05-10 23:31:02 -07:00
lezcano
24087d07ca Deprecate QR (#57745)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57745

Reviewed By: bdhirsh

Differential Revision: D28318164

Pulled By: mruberry

fbshipit-source-id: b8e3cb9d7ab33f30c8653ec39f932a8af8bd2a50
2021-05-10 22:56:37 -07:00
lezcano
4fef1c1d74 Deprecate torch.cholesky (#57725)
Summary:
**BC-breaking note:**

This PR deprecates torch.cholesky in favor of torch.linalg.cholesky. An upgrade guide is added to the documentation for torch.cholesky.

Note this PR DOES NOT remove torch.cholesky.
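
The migration, as an editor's sketch (both return the lower-triangular factor by default):

```
import torch

A = torch.randn(3, 3)
A = A @ A.T + 3 * torch.eye(3)  # make it positive definite
# Deprecated: L = torch.cholesky(A)
L = torch.linalg.cholesky(A)
print(torch.allclose(L @ L.T, A, atol=1e-5))  # True
```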

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57725

Reviewed By: bdhirsh

Differential Revision: D28318260

Pulled By: mruberry

fbshipit-source-id: e7ba049321810e70f4de08e6ac37ff800e576152
2021-05-10 22:44:25 -07:00
lezcano
a93314dec3 Alias det, slogdet, matrix_power, inverse, pinverse (#57821)
Summary:
When doing this, I realised that `torch.linalg.pinv` did not have a note on the problems of its derivative (`torch.pinverse` did have it), so I added that.

While I was at it, I made more explicit the recommendation, for some functions in `torch.linalg`, to prefer other functions. I also changed the mentions of "stable" to "numerically stable", as discussed with IvanYashchuk and mruberry.

If it seems like too much, I'm happy to move the recommendations part of `torch.linalg` to a different PR, but it was such a small thing that I figured it wouldn't be that big a deal if it was here.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57821

Reviewed By: bdhirsh

Differential Revision: D28317959

Pulled By: mruberry

fbshipit-source-id: 6b116561bf3cba46fadc5ac14448e5d28ea88039
2021-05-10 22:00:59 -07:00
lezcano
ba84c91197 Deprecate torch.lstsq (#57743)
Summary:
**BC-breaking note:**

This PR deprecates torch.lstsq; it adds an upgrade guide for how to use torch.linalg.lstsq instead.

It DOES NOT remove torch.lstsq, but warns once when it's called.
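
The migration, as an editor's sketch (note the reversed argument order):

```
import torch

A = torch.randn(5, 3)
B = torch.randn(5, 2)
# Deprecated: torch.lstsq(B, A)
X = torch.linalg.lstsq(A, B).solution
```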

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57743

Reviewed By: bdhirsh

Differential Revision: D28318196

Pulled By: mruberry

fbshipit-source-id: 0d6df29648a91a44c7d0ac58062c1099fcb61fb8
2021-05-10 21:39:19 -07:00
Mike Ruberry
3c87fe9b14 Revert D28117714: [pytorch][PR] ATen lu_unpack. Required for making torch.lu_solve differentiable.
Test Plan: revert-hammer

Differential Revision:
D28117714 (5c67d8dfd3)

Original commit changeset: befd33db12ec

fbshipit-source-id: 295b2134935542a903a73f90a7998239dfe6cc81
2021-05-09 23:20:06 -07:00
Nikita Vedeneev
5c67d8dfd3 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function (so its gradient can be implemented via `autograd.Function`), `torch.lu_solve` is a native function and cannot access `torch.lu_unpack`, which is implemented in Python.

Hence this PR presents a native (ATen) `lu_unpack` version. With this function it is also possible to update the gradients for `torch.lu` so that backward+JIT is supported (there is no JIT support for `autograd.Function`).

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: astaff

Differential Revision: D28117714

Pulled By: mruberry

fbshipit-source-id: befd33db12ecc147afacac792418b6f4948fa4a4
2021-05-09 19:12:56 -07:00
Peter Bell
2043093217 Add correction parameter to std/var (#50903)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50903

First part of #50010. Also fixes #51127.
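
A quick sketch of the new parameter (editor's addition): `correction` generalizes the old `unbiased` flag as the offset in the divisor N - correction:

```
import torch

x = torch.tensor([1., 2., 3., 4.])
print(torch.var(x, dim=0, correction=1))  # sample variance: tensor(1.6667)
print(torch.var(x, dim=0, correction=0))  # population variance: tensor(1.2500)
```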

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27911345

Pulled By: mruberry

fbshipit-source-id: 7138fddc935802918ab9ff19f4bc1b9f4d745d41
2021-05-07 14:40:28 -07:00
Ivan Yashchuk
59d794b2c3 Port CPU torch.ormqr to ATen (#57315)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57315

This PR ports `torch.ormqr` from TH to ATen.
The CUDA path will be implemented in a follow-up PR.
With the ATen port, support for complex and batched inputs is added.
The tests are rewritten and an OpInfo entry is added.

We can implement the least squares solver with geqrf + ormqr + triangular_solve, so it is useful to have this function renewed, at least for the internal code.

Resolves https://github.com/pytorch/pytorch/issues/24748
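
A minimal sketch of geqrf + ormqr (editor's addition): apply the implicit Q from a QR factorization to another matrix without forming Q:

```
import torch

A = torch.randn(4, 3)
a, tau = torch.geqrf(A)      # packed QR factorization of A
b = torch.randn(4, 2)
qb = torch.ormqr(a, tau, b)  # computes Q @ b without materializing Q
```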

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28242070

Pulled By: mruberry

fbshipit-source-id: f070bb6ac2f5a3269b163b22f7354e9089ed3061
2021-05-06 04:44:40 -07:00
Jeff Yang
03b5d87980 fix(docs): torch.add and torch.mul (#54672)
Summary:
fixes https://github.com/pytorch/pytorch/issues/39425
https://11813267-65600975-gh.circle-artifacts.com/0/docs/generated/torch.add.html
https://11813267-65600975-gh.circle-artifacts.com/0/docs/generated/torch.mul.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54672

Reviewed By: ailzhang

Differential Revision: D27328523

Pulled By: zou3519

fbshipit-source-id: c804e3312b63ee209fef8bdfd8a92d46a345aa21
2021-05-04 08:38:06 -07:00