Commit Graph

166 Commits

Ivan Yashchuk
e9e1bb1a4e Fix device of info tensor for torch.linalg.inv_ex with MAGMA backend (#59223)
Summary:
This PR fixes `torch.linalg.inv_ex` with MAGMA backend.
The `info` tensor was returned on the CPU even for CUDA inputs.
Now it's on the same device as the input.

Fixes https://github.com/pytorch/pytorch/issues/58769

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59223

Reviewed By: ngimel

Differential Revision: D28814876

Pulled By: mruberry

fbshipit-source-id: f66c6f06fb8bc305cb2e22b08750a25c8888fb65
2021-06-01 21:49:57 -07:00
Natalia Gimelshein
1871d4e604 avoid explicitly casting low precision inputs to fp32 in norm (#59134)
Summary:
Per title. Now `norm` with fp16/bfloat16 inputs and fp32 outputs on CUDA won't do an explicit cast.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59134

Reviewed By: mruberry

Differential Revision: D28775729

Pulled By: ngimel

fbshipit-source-id: 896daa4f02e8a817cb7cb99ae8a93c02fa8dd5e9
2021-05-29 00:48:18 -07:00
Heitor Schueroff
72ae924fad Added sublist support for torch.einsum (#56625)
Summary:
This PR adds an alternative way of calling `torch.einsum`. Instead of specifying the subscripts as letters in the `equation` parameter, one can now specify the subscripts as a list of integers as in `torch.einsum(operand1, subscripts1, operand2, subscripts2, ..., [subscripts_out])`. This would be equivalent to `torch.einsum('<subscripts1>,<subscripts2>,...,->[<subscript_out>]', operand1, operand2, ...)`
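
For illustration, a minimal sketch of the sublist calling convention (the operands here are arbitrary examples):

```python
import torch

a = torch.randn(3, 4)
b = torch.randn(4, 5)

# Classic equation-string form: matrix multiplication.
c1 = torch.einsum('ij,jk->ik', a, b)

# Equivalent sublist form: integers play the role of subscript letters,
# and the optional trailing list gives the output subscripts.
c2 = torch.einsum(a, [0, 1], b, [1, 2], [0, 2])

assert torch.allclose(c1, c2)
```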

TODO
- [x] Update documentation
- [x] Add more error checking
- [x] Update tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56625

Reviewed By: zou3519

Differential Revision: D28062616

Pulled By: heitorschueroff

fbshipit-source-id: ec50ad34f127210696e7c545e4c0675166f127dc
2021-05-21 08:36:45 -07:00
Xiao Wang
691c139144 Do not use TF32 matmul in linalg and DDP tests (#56114)
Summary:
This PR does several things to relax test tolerances:

- Do not use TF32 in CUDA matmul in test_c10d. See https://github.com/pytorch/pytorch/issues/52941.
- Do not use TF32 in CUDA matmul in test_linalg. Increase atol for float and cfloat. See https://github.com/pytorch/pytorch/issues/50453.
    The tolerance is increased because most linear algebra operators are not that stable in single precision.
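
For context, a minimal sketch of the global switch controlling TF32 matmul on CUDA (the tests use an equivalent per-test toggle):

```python
import torch

# On Ampere and newer GPUs, float32 matmuls may run in TF32 by default,
# trading precision for speed. Disabling it keeps test comparisons at
# full float32 precision.
torch.backends.cuda.matmul.allow_tf32 = False
```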

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56114

Reviewed By: ailzhang

Differential Revision: D28554467

Pulled By: ngimel

fbshipit-source-id: 90416be8e4c048bedb16903b01315584d344ecdf
2021-05-20 14:01:19 -07:00
Rong Rong (AI Infra)
64d23cc040 Revert D28379394: Update internal code for torch.linalg.solve
Test Plan: revert-hammer

Differential Revision: D28379394 (b0833533a7)

Original commit changeset: b47f66bc1ee1

fbshipit-source-id: c81b34f45a1d82a2b1cecc8987048fa1055203d6
2021-05-13 19:49:41 -07:00
Ivan Yashchuk
b0833533a7 Update internal code for torch.linalg.solve (#56613)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56613

Replace linalg_solve_helper with `lu_stub` + `lu_solve_stub`.
Once `lu_stub` and `lu_solve_stub` have a cuSOLVER-based codepath,
`torch.linalg.solve` will have it as well.
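
A minimal sketch of the equivalence this path relies on, written with the era's public LU helpers:

```python
import torch

A = torch.randn(4, 4, dtype=torch.float64)
b = torch.randn(4, 2, dtype=torch.float64)

x = torch.linalg.solve(A, b)

# The same computation as an explicit LU factorization followed by a
# solve, mirroring what the lu/lu_solve stubs do internally.
LU, pivots = torch.lu(A)
x_lu = torch.lu_solve(b, LU, pivots)

assert torch.allclose(x, x_lu)
```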

Test Plan: Imported from OSS

Reviewed By: agolynski

Differential Revision: D28379394

Pulled By: mruberry

fbshipit-source-id: b47f66bc1ee12715da11dcffc92e31e67fa8c8f6
2021-05-13 16:57:29 -07:00
Ivan Yashchuk
5e65428503 Fix NumPy compatibility issue for torch.linalg.cond (#58041)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58041

The shape of the returned result differed between NumPy and PyTorch for
`ord={-2, 2, None}`. Now it's fixed.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28405147

Pulled By: mruberry

fbshipit-source-id: 30293a017a0c0a7e9e3aabd470386235fef7b6a6
2021-05-13 09:42:18 -07:00
Ivan Yashchuk
a49406b331 Fixed batched version of torch.linalg.cond for singular inputs (#58040)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58040

This PR uses `torch.linalg.inv_ex` to determine the non-invertible
inputs and returns a condition number of infinity for such inputs.

Added OpInfo entry for `torch.linalg.cond`.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28405146

Pulled By: mruberry

fbshipit-source-id: 524b9a38309851fa6461cb787ef3fba5aa7d5328
2021-05-13 09:42:17 -07:00
Ivan Yashchuk
c1430c3425 Add torch.linalg.inv_ex without checking for errors by default (#58039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58039

The new function has the following signature:
`inv_ex(Tensor input, *, bool check_errors=False) -> (Tensor inverse, Tensor info)`.
When `check_errors=True`, an error is thrown if the matrix is not invertible; with `check_errors=False`, responsibility for checking the result is on the user.

`linalg_inv` is now implemented using calls to `linalg_inv_ex`.
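
A minimal usage sketch of the new function:

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)

# Default check_errors=False: no synchronizing error check is performed;
# the caller inspects `info` (0 means success, a nonzero value signals a
# singular input) before trusting `inverse`.
inverse, info = torch.linalg.inv_ex(A)
if int(info) != 0:
    raise RuntimeError("matrix is not invertible")
```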

Resolves https://github.com/pytorch/pytorch/issues/25095

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28405148

Pulled By: mruberry

fbshipit-source-id: b8563a6c59048cb81e206932eb2f6cf489fd8531
2021-05-13 09:42:15 -07:00
lezcano
db13119fc4 Deprecate symeig (#57732)
Summary:
This one had a tricky usage of `torch.symeig` that had to be replaced. I tested the replacement locally though.
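
The documented replacement, as a minimal sketch:

```python
import torch

A = torch.randn(4, 4, dtype=torch.float64)
A = A + A.T  # symeig/eigh expect a symmetric (Hermitian) input

# Deprecated:
#   eigenvalues, eigenvectors = torch.symeig(A, eigenvectors=True)
# Replacement:
eigenvalues, eigenvectors = torch.linalg.eigh(A)
```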

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57732

Reviewed By: bdhirsh

Differential Revision: D28328189

Pulled By: mruberry

fbshipit-source-id: 7f000fcbf2b029beabc76e5a89ff158b47977474
2021-05-12 02:21:35 -07:00
Nikita Vedeneev
c790fd2bf8 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function (so its gradient is implemented via `autograd.Function`),
`torch.lu_solve` is a native function and cannot access `torch.lu_unpack`, which is implemented in Python.

Hence this PR presents a native (ATen) version of `lu_unpack`. With this function it also becomes possible to update the gradients for `torch.lu` so that backward+JIT is supported (`autograd.Function` is not scriptable).

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~
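
A minimal sketch of what `lu_unpack` computes:

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
LU, pivots = torch.lu(A)

# Recover the permutation and triangular factors from the packed LU
# representation, so that P @ L @ U reconstructs A.
P, L, U = torch.lu_unpack(LU, pivots)

assert torch.allclose(P @ L @ U, A)
```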

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: albanD

Differential Revision: D28355725

Pulled By: mruberry

fbshipit-source-id: 281260f3b6e93c15b08b2ba66d5a221314b00e78
2021-05-11 22:53:21 -07:00
Ivan Yashchuk
aaca12bcc2 Deprecate in docs torch.svd and change svd -> linalg_svd (#57981)
Summary:
This PR adds a note to the documentation that torch.svd is deprecated, together with an upgrade guide on how to use `torch.linalg.svd` and `torch.linalg.svdvals` (Lezcano's instructions from https://github.com/pytorch/pytorch/issues/57549).
In addition, all usage of the old svd function is replaced with the new one from the torch.linalg module, except for the `at::linalg_pinv` function, which fails the XLA CI build (https://github.com/pytorch/xla/issues/2755; see the failure in draft PR https://github.com/pytorch/pytorch/pull/57772).
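
A minimal migration sketch in the spirit of that upgrade guide:

```python
import torch

A = torch.randn(5, 3)

# Deprecated:
#   U, S, V = torch.svd(A)  # reduced SVD, returns V
# Replacement: torch.linalg.svd returns Vh (V conjugate-transposed) and
# computes the full SVD unless full_matrices=False is passed.
U, S, Vh = torch.linalg.svd(A, full_matrices=False)
V = Vh.transpose(-2, -1).conj()

# Singular values only:
S2 = torch.linalg.svdvals(A)
assert torch.allclose(S, S2)
```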

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57981

Reviewed By: ngimel

Differential Revision: D28345558

Pulled By: mruberry

fbshipit-source-id: 02dd9ae6efe975026e80ca128e9b91dfc65d7213
2021-05-11 18:04:10 -07:00
Mike Ruberry
3c87fe9b14 Revert D28117714: [pytorch][PR] ATen lu_unpack. Required for making torch.lu_solve differentiable.
Test Plan: revert-hammer

Differential Revision: D28117714 (5c67d8dfd3)

Original commit changeset: befd33db12ec

fbshipit-source-id: 295b2134935542a903a73f90a7998239dfe6cc81
2021-05-09 23:20:06 -07:00
Ivan Yashchuk
d11cce4f5e Add cuSOLVER path for torch.linalg.lstsq (#57317)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57317

This PR implements a QR-based least squares solver using the geqrf, ormqr,
and triangular_solve operations.

The internal code of triangular_solve was fixed to correctly handle
larger rectangular arrays.
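
A minimal sketch (assuming a tall, full-rank `A`) of the least-squares path these three primitives compose into:

```python
import torch

A = torch.randn(6, 3, dtype=torch.float64)
b = torch.randn(6, 1, dtype=torch.float64)
n = A.shape[1]

# Factor A = QR (geqrf), apply Q^T to b (ormqr), then back-substitute
# against the triangular factor R.
QR, tau = torch.geqrf(A)
Qtb = torch.ormqr(QR, tau, b, left=True, transpose=True)  # Q^T @ b
x = torch.triangular_solve(Qtb[:n], QR[:n, :n].triu(), upper=True).solution

assert torch.allclose(x, torch.linalg.lstsq(A, b).solution)
```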

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28312683

Pulled By: mruberry

fbshipit-source-id: dc8ae837a5fb0685d85c8733a47d7d25dc46443a
2021-05-09 21:19:10 -07:00
Nikita Vedeneev
5c67d8dfd3 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function (so its gradient is implemented via `autograd.Function`),
`torch.lu_solve` is a native function and cannot access `torch.lu_unpack`, which is implemented in Python.

Hence this PR presents a native (ATen) version of `lu_unpack`. With this function it also becomes possible to update the gradients for `torch.lu` so that backward+JIT is supported (`autograd.Function` is not scriptable).

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: astaff

Differential Revision: D28117714

Pulled By: mruberry

fbshipit-source-id: befd33db12ecc147afacac792418b6f4948fa4a4
2021-05-09 19:12:56 -07:00
Heitor Schueroff
4cf2c646c2 Added torch.linalg.matrix_norm (#57127)
Summary:
This PR is focused on the API for `linalg.matrix_norm` and delegates computations to `linalg.norm` for the moment.

The main difference between the norms is when `dim=None`. In this case:
- `linalg.norm` will compute a vector norm on the flattened input if `ord=None`; otherwise it requires the input to be either 1D or 2D in order to disambiguate between vector and matrix norms
- `linalg.vector_norm` will flatten the input
- `linalg.matrix_norm` will compute the norm over the last two dimensions, treating the input as a batch of matrices

In future PRs, the computations will be moved to `torch.linalg.matrix_norm`, and `torch.norm` and `torch.linalg.norm` will delegate computations to either `linalg.vector_norm` or `linalg.matrix_norm` based on the arguments provided.
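
A minimal sketch contrasting the `dim=None` behaviors listed above:

```python
import torch

t = torch.randn(2, 3, 4)

# Vector norm over the flattened input (all 24 elements), scalar result.
v = torch.linalg.vector_norm(t)

# Matrix norm over the last two dims, batched over the rest:
# Frobenius norm of each 3x4 slice, result shape (2,).
m = torch.linalg.matrix_norm(t)
```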

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57127

Reviewed By: mrshenli

Differential Revision: D28186736

Pulled By: mruberry

fbshipit-source-id: 99ce2da9d1c4df3d9dd82c0a312c9570da5caf25
2021-05-09 04:50:33 -07:00
Ivan Yashchuk
18fed3dfbe Change name for namedtuple return of torch.linalg.svd (#57181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57181

Documentation for torch.linalg.svd says:
> The returned decomposition is a named tuple `(U, S, Vh)`

The documentation is correct while the implementation was wrong.
Renamed `V` -> `Vh`; the `h` stands for Hermitian (conjugate transpose).
This is a BC-breaking change, but our linalg module is beta, so we can do it without a deprecation notice or aliases.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28142162

Pulled By: mruberry

fbshipit-source-id: 5e6e0ae5a63300f2db1575ca3259df381f8e1a7e
2021-05-07 15:17:43 -07:00
Ivan Yashchuk
58f32fa5fd Remove compute_uv flag from torch.linalg.svd (#57180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57180

We now have a separate function for computing only the singular values.
The `compute_uv` argument is no longer needed, and it was decided in an
offline discussion to remove it. This is a BC-breaking change, but our
linalg module is beta, so we can do it without a deprecation notice.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28142163

Pulled By: mruberry

fbshipit-source-id: 3fac1fcae414307ad5748c9d5ff50e0aa4e1b853
2021-05-07 15:16:42 -07:00
Sam Estep
023ecc40ad Revert D28248766: Update internal code for torch.linalg.solve
Test Plan: revert-hammer

Differential Revision: D28248766 (5f2925074b)

Original commit changeset: 300366605653

fbshipit-source-id: 316b97791e57f9017d4bf87898aea8dc869cba79
2021-05-07 07:49:16 -07:00
Ivan Yashchuk
5f2925074b Update internal code for torch.linalg.solve (#56613)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56613

Replace linalg_solve_helper with `lu_stub` + `lu_solve_stub`.
Once `lu_stub` and `lu_solve_stub` have a cuSOLVER-based codepath,
`torch.linalg.solve` will have it as well.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D28248766

Pulled By: mruberry

fbshipit-source-id: 3003666056533d097d0ad659e0603f59fbfda9aa
2021-05-07 03:29:16 -07:00
Heitor Schueroff
1f1e2dab6b Remove optional type for ord parameter in vector_norm (#57662)
Summary:
As per discussion here https://github.com/pytorch/pytorch/pull/57127#discussion_r624948215

Note that we cannot remove the optional type from the `dim` parameter because the default is to flatten the input tensor, which cannot be easily captured by a value other than `None`.

### BC Breaking Note
This PR changes the `ord` parameter of `torch.linalg.vector_norm` so that it no longer accepts `None` arguments. The default behavior of `2` is equivalent to the previous default of `None`.
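
A minimal sketch of the new default described in the BC note:

```python
import torch

v = torch.randn(5)

# ord now defaults to 2 (Euclidean norm); None is no longer accepted.
torch.linalg.vector_norm(v)                    # same as ord=2
torch.linalg.vector_norm(v, ord=1)             # L1 norm
torch.linalg.vector_norm(v, ord=float('inf'))  # max norm
```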

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57662

Reviewed By: albanD, mruberry

Differential Revision: D28228870

Pulled By: heitorschueroff

fbshipit-source-id: 040fd8055bbe013f64d3c8409bbb4b2c87c99d13
2021-05-06 17:53:25 -07:00
Sam Estep
72ebdd68e1 Revert D28242069: Add cuSOLVER path for torch.linalg.lstsq
Test Plan: revert-hammer

Differential Revision: D28242069 (7b31d4262b)

Original commit changeset: 23979d19ccc7

fbshipit-source-id: edf26a78b3485790deb1a8f53e8c8d3989c28e1b
2021-05-06 09:28:15 -07:00
Ivan Yashchuk
7b31d4262b Add cuSOLVER path for torch.linalg.lstsq (#57317)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57317

This PR implements a QR-based least squares solver using the geqrf, ormqr,
and triangular_solve operations.

The internal code of triangular_solve was fixed to correctly handle
larger rectangular arrays.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28242069

Pulled By: mruberry

fbshipit-source-id: 23979d19ccc7f591afa8df4435d0db847e2d0d97
2021-05-06 04:45:55 -07:00
Ivan Yashchuk
35fab44eaf Add CUDA support for torch.ormqr (#57316)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57316

CUDA support is implemented using cuSOLVER.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28242071

Pulled By: mruberry

fbshipit-source-id: 6f0a1c50c21c376d2ee2907bddb618c6a600db1f
2021-05-06 04:45:54 -07:00
Ivan Yashchuk
59d794b2c3 Port CPU torch.ormqr to ATen (#57315)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57315

This PR ports `torch.ormqr` from TH to ATen.
The CUDA path will be implemented in a follow-up PR.
With the ATen port, support for complex and batched inputs is added.
The tests are rewritten and an OpInfo entry is added.

We can implement the least squares solver with geqrf + ormqr +
triangular_solve, so it's useful to have this function renewed, at least
for the internal code.

Resolves https://github.com/pytorch/pytorch/issues/24748

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28242070

Pulled By: mruberry

fbshipit-source-id: f070bb6ac2f5a3269b163b22f7354e9089ed3061
2021-05-06 04:44:40 -07:00
Jane Xu
76d9070d10 Replace windows CUDA 11.2 CI with 11.3 (#57223)
Summary:
Testing 11.3 with current CI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57223

Test Plan:
Relevant CI jobs (11.3) pass!

Disclaimer: Skipped test_inverse_errors_large for CUDA 11.3 as it failed. Issue documented at https://github.com/pytorch/pytorch/issues/57482.

Reviewed By: malfet

Differential Revision: D28169393

Pulled By: janeyx99

fbshipit-source-id: 9f5cf7b6737ee6196de92bd80918a5bfbe5510ea
2021-05-04 14:23:23 -07:00
Shen Li
6bc3ad28a3 Revert D28143091: [pytorch][PR] Add cross OpInfo
Test Plan: revert-hammer

Differential Revision: D28143091 (4a872f8539)

Original commit changeset: 0b98226a1811

fbshipit-source-id: eda38923f31ac5a79af5c78077ed0106d904f6da
2021-05-03 09:19:41 -07:00
Mike Ruberry
4a872f8539 Add cross OpInfo (#55483)
Summary:
One of the tasks in https://github.com/pytorch/pytorch/issues/54261.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55483

Reviewed By: ngimel

Differential Revision: D28143091

Pulled By: mruberry

fbshipit-source-id: 0b98226a1811f61cb90d2248dd4425135a096551
2021-05-02 16:23:02 -07:00
Ivan Yashchuk
75a2a92b02 Add torch.linalg.cholesky_ex without checking for errors by default (#56724)
Summary:
The new function has the following signature: `cholesky_ex(Tensor input, *, bool check_errors=False) -> (Tensor L, Tensor infos)`. When `check_errors=True`, an error is thrown if the decomposition fails; with `check_errors=False`, responsibility for checking the decomposition is on the user.

When `check_errors=False`, we don't have host-device memory transfers for checking the values of the `info` tensor.

Rewrote the internal code for `torch.linalg.cholesky`. Added a `cholesky_stub` dispatch. `linalg_cholesky` is now implemented using calls to `linalg_cholesky_ex`.
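
A minimal usage sketch:

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
A = A @ A.T + torch.eye(3, dtype=torch.float64)  # make it positive definite

# Default check_errors=False: no host-device sync; the caller inspects
# `info` (0 = success; k > 0 means the order-k leading minor is not
# positive definite).
L, info = torch.linalg.cholesky_ex(A)
if int(info) != 0:
    raise RuntimeError("input is not positive definite")
```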

Resolves https://github.com/pytorch/pytorch/issues/57032.

Ref. https://github.com/pytorch/pytorch/issues/34272, https://github.com/pytorch/pytorch/issues/47608, https://github.com/pytorch/pytorch/issues/47953

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56724

Reviewed By: ngimel

Differential Revision: D27960176

Pulled By: mruberry

fbshipit-source-id: f05f3d5d9b4aa444e41c4eec48ad9a9b6fd5dfa5
2021-05-01 18:48:27 -07:00
Ivan Yashchuk
2be115336b Fix torch.ormqr for non Fortran-contiguous inputs (#57314)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57314

Test Plan: Imported from OSS

Reviewed By: astaff

Differential Revision: D28118029

Pulled By: mruberry

fbshipit-source-id: e2ef65093cc5f77769adc7066c76f0607b5559a9
2021-05-01 17:50:06 -07:00
Arindam Roy
6d681d064f ROCM: Re-enable test_norm_fro_2_equivalence_old (#57170)
Summary:
This test was disabled for ROCm 3.9. With the latest updates, the test passes on ROCm 4.1, so it is re-enabled in test/test_linalg.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57170

Reviewed By: astaff

Differential Revision: D28118217

Pulled By: mruberry

fbshipit-source-id: 1b830eed944a664c3b1b3e936b87096fef0c0ca2
2021-05-01 16:41:41 -07:00
Wenlei Xie
20085f6d23 Support auto generation of device check (#56872)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56872

ghstack-source-id: 127914018

Test Plan: auto test

Reviewed By: ezyang

Differential Revision: D27986429

fbshipit-source-id: 0da8413b0b8e6810fcea27ed1de499f11f68bd1f
2021-05-01 12:02:09 -07:00
Sameer Deshmukh
293830bc19 Fix min() and max() for empty tensors (#52565)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/34907

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52565

Reviewed By: anjali411

Differential Revision: D27999955

Pulled By: ezyang

fbshipit-source-id: 30e88cc8d84806198500e3001ecf58fa764536dd
2021-04-30 15:55:10 -07:00
Ivan Yashchuk
f54aa85a6c Fix MAGMA qr for empty batched inputs (#56257)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56257

The CPU and cuSOLVER paths were fixed by refactoring
`_linalg_qr_helper_default`.

Resolves https://github.com/pytorch/pytorch/issues/50576

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27960157

Pulled By: mruberry

fbshipit-source-id: f923f3067a35e65218889e64c6a886364c3d1759
2021-04-30 11:15:03 -07:00
Ivan Yashchuk
03962bc7f1 Updated linalg.lstsq with NumPy compatible kwarg rcond (#54723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54723

Renamed "cond" -> "rcond" to be NumPy compatible. The default value for
rcond was changed to match non-legacy NumPy behavior.

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D27993741

Pulled By: mruberry

fbshipit-source-id: a4baf25aca6a8272f1af2f963600866bfda56fb3
2021-04-29 09:11:12 -07:00
Ivan Yashchuk
5a02f72fcf Modified batched residuals return of torch.linalg.lstsq (#54722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54722

SciPy and NumPy operate only on non-batched inputs and return an empty array with shape (0,) if rank(a) != n.
The behavior for non-batched inputs is NumPy- and SciPy-compatible and the same result is computed.
For batched inputs, if any matrix in the batch has rank less than `n`, an empty tensor is returned.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27993736

Pulled By: mruberry

fbshipit-source-id: 0d7cff967b322a5e816a23f282b6ce383c4468ef
2021-04-29 09:10:12 -07:00
Heitor Schueroff
57e37080cd Added OpInfo for torch.einsum (#56276)
Summary:
Adds OpInfo testing for torch.einsum.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56276

Reviewed By: mruberry

Differential Revision: D27967095

Pulled By: heitorschueroff

fbshipit-source-id: 60524273d2ca885e7eeb932db3e7fd697ae5ca8e
2021-04-27 07:39:38 -07:00
Ivan Yashchuk
f84f2063b4 Port CUDA torch.geqrf to ATen (#56251)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56251

This PR ports `torch.geqrf` from TH to ATen for CUDA path.

Resolves https://github.com/pytorch/pytorch/issues/24569

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27960155

Pulled By: mruberry

fbshipit-source-id: a8b010c41d703a5de4bf40b045c89e6b95b5a5ca
2021-04-26 09:50:41 -07:00
Ivan Yashchuk
6ba9fd5963 Added "Tensor tol" overload of torch.linalg.matrix_rank (#54157)
Summary:
Currently `torch.linalg.matrix_rank` accepts only a Python float for the `tol=` argument. This behavior is not NumPy compatible, and this PR adds the possibility to pass a Tensor of per-matrix tolerances.
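
A minimal sketch of the new overload (assuming the tolerance tensor broadcasts against the batch dimensions):

```python
import torch

A = torch.randn(2, 3, 3, dtype=torch.float64)

# Scalar tolerance (previously the only option):
torch.linalg.matrix_rank(A, tol=1e-8)

# New: a Tensor of tolerances, one per matrix in the batch.
tols = torch.tensor([1e-8, 1e-2], dtype=torch.float64)
torch.linalg.matrix_rank(A, tol=tols)
```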

Ref. https://github.com/pytorch/pytorch/issues/42666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54157

Reviewed By: ezyang

Differential Revision: D27961548

Pulled By: mruberry

fbshipit-source-id: 47318eefa07a7876e6360dae089e5389b9939489
2021-04-26 09:35:40 -07:00
Ivan Yashchuk
d5ff432615 Add torch.linalg.svdvals (#56684)
Summary:
This PR adds `torch.linalg.svdvals(input, out=None)` that computes only the singular values of `input`.

Resolves https://github.com/pytorch/pytorch/issues/54155.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56684

Reviewed By: albanD

Differential Revision: D27938229

Pulled By: mruberry

fbshipit-source-id: 5ea79ad9cccf818df0fbda1f431299ebf8de3798
2021-04-25 03:42:24 -07:00
Ivan Yashchuk
58fcf77712 Port CPU torch.geqrf to ATen (#56249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56249

This PR ports `torch.geqrf` from TH to ATen. The CUDA path will be
implemented in a follow-up PR.
With the ATen port, support for complex and batched inputs is added.
There were no correctness tests; they are added in this PR, along with an
OpInfo entry for this operation.

We can implement the QR decomposition as a composition of geqrf and
orgqr (torch.linalg.householder_product).
We can also implement the least squares solver with geqrf + ormqr +
trtrs, so it's useful to have this function renewed, at least for the
internal code.
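
A minimal sketch of the first composition mentioned above:

```python
import torch

A = torch.randn(5, 3, dtype=torch.float64)
n = A.shape[1]

# geqrf packs R in the upper triangle and the Householder reflectors
# below it; householder_product (orgqr) rebuilds the reduced Q.
QR, tau = torch.geqrf(A)
Q = torch.linalg.householder_product(QR, tau)  # shape (5, 3)
R = QR[:n].triu()                              # shape (3, 3)

assert torch.allclose(Q @ R, A)
```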

Resolves https://github.com/pytorch/pytorch/issues/24705

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27907357

Pulled By: mruberry

fbshipit-source-id: 94e1806078977417e7903db76eab9d578305f585
2021-04-25 01:17:00 -07:00
Heitor Schueroff
369e8bc4bc Added support for uppercase letters in torch.einsum (#56475)
Summary:
This PR adds support for uppercase letters in the `torch.einsum` equation string.

Addresses PR https://github.com/pytorch/pytorch/pull/55013.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56475

Reviewed By: ailzhang

Differential Revision: D27948362

Pulled By: heitorschueroff

fbshipit-source-id: 51cf57b17c4c23d88fab5343f17ba3bfbe3607a5
2021-04-23 08:13:58 -07:00
Kurt Mohler
1f04494c0e Consolidate nondeterministic error tests (#55631)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51498

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55631

Reviewed By: malfet

Differential Revision: D27909953

Pulled By: mruberry

fbshipit-source-id: 9115b2433f9c276555be55bd51b270a7a2846829
2021-04-22 23:37:01 -07:00
Jeffrey Wan
2ea3c24c06 Disable flaky tests (#56279)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56279

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27916606

Pulled By: soulitzer

fbshipit-source-id: 60c07024f6eb818f4aa6730a5f9ff90d7bc2b80f
2021-04-22 19:45:41 -07:00
Ivan Yashchuk
3d878dee45 Added out= variant for torch.linalg.lstsq (#54721)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54721

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27874711

Pulled By: mruberry

fbshipit-source-id: 696ebb6eb0bad81988e9cb7a081388a3a5ab3e2c
2021-04-20 07:09:06 -07:00
Winston Smith
7513455c74 Make tensordot resize output tensor's size if out= argument is specified & make it safely cast & copy output (#56286)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56022.
Fixes https://github.com/pytorch/pytorch/issues/56316

For `torch.tensordot`,
1. `tensordot`'s out variant now resizes the output tensor provided as the `out` argument if necessary.
2. Added a check to verify that the output tensor provided as the argument for `out` is on the same device as the input tensors.
3. Added a check to verify that the dtype of the result is castable to the dtype of the output tensor provided as an argument for `out`.
4. Because of (2) & (3), `tensordot`'s out variant now [safely casts & copies output](https://github.com/pytorch/pytorch/wiki/Developer-FAQ#how-does-out-work-in-pytorch).
5. `test_tensordot` in `test_linalg.py` had a bug: the output tensor wasn't being defined on the same device as the input tensors. It was fixed by simply using a `device` argument in its definition.
6. Added an `OpInfo` for `tensordot` and modified the `OpInfo` for `inner`.
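
A minimal sketch of the resizing behavior from (1):

```python
import torch

a = torch.arange(60.).reshape(3, 4, 5)
b = torch.arange(24.).reshape(4, 3, 2)

# An out tensor of the wrong size is now resized automatically
# instead of raising an error.
out = torch.empty(0)
torch.tensordot(a, b, dims=([1, 0], [0, 1]), out=out)
print(out.shape)  # torch.Size([5, 2])
```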

cc heitorschueroff mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56286

Reviewed By: ngimel

Differential Revision: D27845980

Pulled By: mruberry

fbshipit-source-id: 134ab163f05c31a6900dd65aefc745803019e037
2021-04-19 04:20:21 -07:00
Kurt Mohler
a3a75bd35e Add complex autograd support for torch.cross (#55854)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53512

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55854

Reviewed By: nikithamalgifb

Differential Revision: D27737571

Pulled By: anjali411

fbshipit-source-id: 38165b952cc4c9213d61c7d98b549b984c154927
2021-04-15 15:07:25 -07:00
Mike Ruberry
399b66c813 Ports logdet from method_tests() to op_db (#55743)
Summary:
Per title. Also updates some tensor construction helpers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55743

Reviewed By: ngimel

Differential Revision: D27702060

Pulled By: mruberry

fbshipit-source-id: f64b7bee855733ad1f4fd182819ceec5831d9878
2021-04-11 20:39:16 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
Arindam Roy
0dff0d1537 [ROCM] Disable few tests for Magma (#55534)
Summary:
After MAGMA was enabled, around 5k new tests are running now.
Of these, 5 tests (each with 4 datatypes) are failing on the latest ROCm
CI with ROCm 4.1. These tests are disabled for now so the ROCm CI does not fail.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55534

Reviewed By: ZolotukhinM

Differential Revision: D27630085

Pulled By: malfet

fbshipit-source-id: c48d124e6a2b4a4f3c6c4b6ac2bdf6c214f325c7
2021-04-07 22:22:43 -07:00