Commit Graph

232 Commits

Author SHA1 Message Date
Peter Bell
39717d3034 Remove histogramdd functional wrapper
Merge once the forward compatibility period has expired for the histogramdd
operator.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74201

Approved by: https://github.com/ezyang, https://github.com/albanD
2022-04-14 20:56:24 +00:00
PyTorch MergeBot
715e07b97f Revert "Remove histogramdd functional wrapper"
This reverts commit 8cc338e5c2.

Reverted https://github.com/pytorch/pytorch/pull/74201 on behalf of https://github.com/suo
2022-04-14 03:56:48 +00:00
Peter Bell
8cc338e5c2 Remove histogramdd functional wrapper
Merge once the forward compatibility period has expired for the histogramdd
operator.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74201

Approved by: https://github.com/ezyang
2022-04-14 02:47:39 +00:00
PyTorch MergeBot
3471b0eb3d Revert "Remove histogramdd functional wrapper"
This reverts commit 7c9017127f.

Reverted https://github.com/pytorch/pytorch/pull/74201 on behalf of https://github.com/malfet
2022-04-13 12:54:24 +00:00
Peter Bell
7c9017127f Remove histogramdd functional wrapper
Merge once the forward compatibility period has expired for the histogramdd
operator.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74201

Approved by: https://github.com/ezyang
2022-04-13 03:02:59 +00:00
Edward Z. Yang
0a1bc5f501 Miscellaneous __torch_function__ fixes
I figured these out by unconditionally turning on a no-op torch function
mode on the test suite and then fixing errors as they showed up.  Here's
what I found:

- _parse_to failed internal assert when __torch_function__'ed because it
  claims its name is "to" to the argument parser; added a name override
  so we know how to find the correct name

- Infix operator magic methods on Tensor did not uniformly handle
  __torch_function__ and the conversion of TypeError to NotImplemented.
  Now, we always do the __torch_function__ handling in
  _wrap_type_error_to_not_implemented and your implementation of
  __torch_function__ gets its TypeErrors converted to NotImplemented
  (for better or for worse; see
  https://github.com/pytorch/pytorch/issues/75462 )

- A few cases where code was testing whether a Tensor was Tensor-like
  in the wrong way now use is_tensor_like (in grad and in
  distributions).  Also updated the docs for has_torch_function to
  push people to use is_tensor_like.

- is_grads_batched was dropped from grad in handle_torch_function, now
  fixed

- Report that you have a torch function even if torch function is
  disabled if a mode is enabled.  This makes it possible for a mode
  to return NotImplemented, pass to a subclass which does some
  processing and then pass back to the mode even after the subclass
  disables __torch_function__ (so the tensors are treated "as if"
  they are regular Tensors).  This brings the C++ handling behavior
  in line with the Python behavior.

- Make the Python implementation of overloaded types computation match
  the C++ version: when torch function is disabled, there are no
  overloaded types (because they all report they are not overloaded).
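
For context, a minimal sketch of the kind of no-op torch function mode used to find these issues (assuming a PyTorch recent enough to expose `torch.overrides.TorchFunctionMode`; the class name here is made up for illustration):

```python
import torch
from torch.overrides import TorchFunctionMode

class NoopMode(TorchFunctionMode):
    # Forward every overridable call unchanged; running a test suite under this
    # mode surfaces places where __torch_function__ handling is missing or wrong.
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        return func(*args, **kwargs)

with NoopMode():
    x = torch.randn(3)
    y = x + 1  # infix operators also route through __torch_function__
```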

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75484

Approved by: https://github.com/zou3519
2022-04-11 16:52:16 +00:00
Peter Bell
c5023ea1d5 stft: Implement center padding in ATen
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73432

Approved by: https://github.com/ezyang
2022-04-01 01:14:52 +00:00
Kelvinthedrugger
9cc848c3f8 fix typo torch.functional.py
Fixes a typo in the torch.functional.py documentation; this does not correspond to an existing issue.
Thanks for your time.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74796
Approved by: https://github.com/mrshenli, https://github.com/osalpekar
2022-03-30 00:45:03 +00:00
Peter Bell
c7f9da5752 Add C++ implementation of histogramdd
This creates a `histogramdd` operator with overloads matching the `Union`
behaviour used in the functional variant. Moving into C++ is preferred because
it can handle torch function automatically instead of needing to differentiate
between the overloads manually.

This also adds a new return type: `std::tuple<Tensor, std::vector<Tensor>>`, for
which I've updated `wrap` to be completely generic over tuples and removed the
old manual definitions.
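
For illustration, a brief usage sketch of the operator (values chosen arbitrarily):

```python
import torch

x = torch.randn(1000, 2)

# `bins` may be a single int, one int per dimension, or explicit edge tensors;
# the bin edges come back as one tensor per dimension.
hist, edges = torch.histogramdd(x, bins=[10, 20], range=[-3., 3., -3., 3.], density=True)
print(hist.shape)                 # torch.Size([10, 20])
print([e.shape for e in edges])   # [torch.Size([11]), torch.Size([21])]
```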

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74200

Approved by: https://github.com/ezyang
2022-03-29 02:17:21 +00:00
Khushi Agrawal
905efa82ff [fix] torch.broadcast_shapes should not handle shapes with negative dimensions. (#72999)
Summary:
Hi,
The PR fixes https://github.com/pytorch/pytorch/issues/68957. It aims to include the following:
- Fixes the code in `torch/functional.py`.
- Add the missing tests for negative input values and non-iterable inputs.

~~#### TODO~~
~~- [x] Add OpInfo~~
EDIT: `broadcast_shapes` doesn't take any tensor inputs, so we don't need an OpInfo here. Thanks, kshitij12345, for the guidance.

#### Earlier
```python
>>> shapes = [1, -12]
>>> torch.broadcast_shapes(*shapes)
torch.Size([-12])    # MUST RAISE ERROR
```

#### Now
```python
>>> shapes = [1, -12]
>>> torch.broadcast_shapes(*shapes)
RuntimeError: Trying to create tensor with negative dimension -12: [-12]
```

#### NumPy's Output
```python
>>> shapes = [1, -12]
>>> numpy.broadcast_shapes(*shapes)
ValueError: negative dimensions are not allowed
```

#### `torch.broadcast_tensor()` Output
As mentioned in the [doc](https://pytorch.org/docs/stable/generated/torch.broadcast_shapes.html):
```python
>>> shapes = [1, -12]
>>> torch.broadcast_tensors(*map(torch.empty, shapes))[0].shape
RuntimeError: Trying to create tensor with negative dimension -12: [-12]
```

Looking forward to hearing from you and your questions. Thanks! :)

cc: mruberry kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72999

Reviewed By: albanD

Differential Revision: D34543995

Pulled By: ngimel

fbshipit-source-id: e32b1f266500a5e002c8f353b1e02f44c23d4f6e
(cherry picked from commit a6253ce6bb8455a3c89398f12b7d790a0b7e8d95)
2022-03-03 18:33:06 +00:00
Peter Bell
9ad0578c59 Remove istft python functional wrapper (#71993)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71993

I've kept the symbol `torch.functional.istft` because it looks like public API,
but it could just as easily be moved to `_torch_docs.py`.

Moving this into its own PR until TorchScript starts recognizing `input`
as a keyword argument.

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D34461399

Pulled By: anjali411

fbshipit-source-id: 3275fb74bef2fa0e030e61f7ee188daf8b5b2acf
(cherry picked from commit 5b4b083de894eba9ab16cea53b77746bcfd0fe32)
2022-03-03 17:31:00 +00:00
Peter Bell
0947521268 Update stft tests to support latest librosa (#72833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72833

Closes #72550

The latest version of librosa breaks backward compatibility in two
ways:
- Everything except the input tensor is now keyword-only
- `pad_mode` now defaults to `'constant'` for zero-padding

https://librosa.org/doc/latest/generated/librosa.stft.html

This changes the test to match the old behavior even when using the new
library and updates the documentation to explicitly say that
`torch.stft` doesn't exactly follow the librosa API. This was always
true (`torch.stft` has new arguments, a different default window
and supports complex input), but it can't hurt to be explicit.
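
For illustration, a sketch of how the two calls line up under the new librosa defaults (assuming a librosa version where the arguments after the signal are keyword-only):

```python
import numpy as np
import torch
import librosa

x = np.random.randn(4096).astype(np.float32)
win = np.hanning(512).astype(np.float32)

# Newer librosa: keyword-only arguments, and pad_mode now defaults to 'constant';
# pass pad_mode='reflect' explicitly to reproduce the old behavior.
ref = librosa.stft(x, n_fft=512, hop_length=128, window=win, pad_mode='reflect')

out = torch.stft(torch.from_numpy(x), n_fft=512, hop_length=128,
                 window=torch.from_numpy(win), pad_mode='reflect',
                 return_complex=True)
```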

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D34386897

Pulled By: mruberry

fbshipit-source-id: 6adc23f48fcb368dacf70602e9197726d6b7e0c1
(cherry picked from commit b5c5ed4196)
2022-02-23 02:31:42 +00:00
lezcano
a35b4b49d2 Add linalg.lu_factor (#66933)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66933

This PR exposes `torch.lu` as `torch.linalg.lu_factor` and
`torch.linalg.lu_factor_ex`.

This PR also adds support for inputs with zero elements, both in
the size of the matrix and in the batch dimensions. Note that this
function simply returns empty tensors of the correct size in this case.

We add a test and an OpInfo for the new function.

This PR also adds documentation for this new function in line with
the documentation of the rest of `torch.linalg`.
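
A short usage sketch of the new functions (`torch.lu_solve` is used only to show the factorization being reused):

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
b = torch.randn(3, 2, dtype=torch.float64)

LU, pivots = torch.linalg.lu_factor(A)
x = torch.lu_solve(b, LU, pivots)   # reuse the factorization to solve A x = b

# lu_factor_ex additionally returns an `info` tensor with per-matrix error codes
LU, pivots, info = torch.linalg.lu_factor_ex(A)
```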

Fixes https://github.com/pytorch/pytorch/issues/56590
Fixes https://github.com/pytorch/pytorch/issues/64014

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D32834069

Pulled By: mruberry

fbshipit-source-id: 51ef12535fa91d292f419acf83b800b86ee9c7eb
2022-01-05 20:32:12 -08:00
Mike Ruberry
6ae34ea6f8 Revert D32521980: Add linalg.lu_factor
Test Plan: revert-hammer

Differential Revision:
D32521980 (b10929a14a)

Original commit changeset: 26a49ebd87f8

fbshipit-source-id: e1a6bb9c2ece9bd78190fe17e16a46e3358c5c82
2021-11-28 17:22:15 -08:00
lezcano
b10929a14a Add linalg.lu_factor (#66933)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66933

This PR exposes `torch.lu` as `torch.linalg.lu_factor` and
`torch.linalg.lu_factor_ex`.

This PR also adds support for inputs with zero elements, both in
the size of the matrix and in the batch dimensions. Note that this
function simply returns empty tensors of the correct size in this case.

We add a test and an OpInfo for the new function.

This PR also adds documentation for this new function in line with
the documentation of the rest of `torch.linalg`.

Fixes https://github.com/pytorch/pytorch/issues/56590
Fixes https://github.com/pytorch/pytorch/issues/64014

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D32521980

Pulled By: mruberry

fbshipit-source-id: 26a49ebd87f8a41472f8cd4e9de4ddfb7f5581fb
2021-11-27 17:52:48 -08:00
Michael Suo
5c3529a86d [lint] small pass to make lint clean (#68367)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68367

- bmm_test.py was using syntax not allowed in 3.6
- Some suppressions were not placed on the correct line.

With this change,
```
lintrunner --paths-cmd='git grep -Il .'
```
passes successfully.

Test Plan: Imported from OSS

Reviewed By: janeyx99, mrshenli

Differential Revision: D32436644

Pulled By: suo

fbshipit-source-id: ae9300c6593d8564fb326822de157d00f4aaa3c2
2021-11-16 10:27:00 -08:00
Nikita Shulga
7b0408684b Fix linter (#67122)
Summary:
Fixes a regression introduced by 7e5aa0d35a

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67122

Reviewed By: seemethere

Differential Revision: D31872569

Pulled By: malfet

fbshipit-source-id: ada0137db9a46cbec573489c9c37a94f3a7576ae
2021-10-22 16:02:36 -07:00
samdow
7e5aa0d35a fixed unique arguments documentation (#66132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66132

Differential Revision: [D31397746](https://our.intern.facebook.com/intern/diff/D31397746/)

<img width="875" alt="Screen Shot 2021-10-05 at 12 10 39 PM" src="https://user-images.githubusercontent.com/17888388/136276286-3df20681-7b7a-4a91-97d6-4f1ac3722121.png">

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D31734476

Pulled By: samdow

fbshipit-source-id: 8999443c7f9b24394d7543652b8350261c1f8b3a
2021-10-22 14:50:02 -07:00
Saketh Are
33790c4e06 Implement histogramdd on CPU (#65318)
Summary:
Implements `torch.histogramdd` analogous to `numpy.histogramdd`.

Builds on https://github.com/pytorch/pytorch/pull/58780, generalizing the existing `torch.histogram` kernel to handle D-dimensional inputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65318

Reviewed By: soulitzer

Differential Revision: D31654555

Pulled By: saketh-are

fbshipit-source-id: 14b781fac0fd3698b052dbd6f0fda46e50d4c5f1
2021-10-21 16:09:31 -07:00
Michael Dagitses
aaffcfe9cd implement "xy" indexing for torch.meshgrid (#62724)
Summary:
This is step 4/7 of https://github.com/pytorch/pytorch/issues/50276. This allows the use of `"xy"` indexing but doesn't change any defaults.
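
For illustration, a small sketch of the two indexing modes:

```python
import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5])

gx, gy = torch.meshgrid(x, y, indexing='ij')  # matrix indexing: both outputs have shape (3, 2)
gx, gy = torch.meshgrid(x, y, indexing='xy')  # Cartesian indexing, numpy's default: shape (2, 3)
```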

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62724

Reviewed By: heitorschueroff

Differential Revision: D30995290

Pulled By: dagitses

fbshipit-source-id: 08a6a6144b20bc019f68bc3c52e3bbf967976d8f
2021-09-17 08:31:17 -07:00
Michael Dagitses
2c57bbf521 add support for indexing to meshgrid (#62722)
Summary:
This is step 3/7 of https://github.com/pytorch/pytorch/issues/50276. It only adds support for the argument but doesn't implement new indexing modes yet.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62722

Test Plan:
Verified this is not FC breaking by adding logging to both meshgrid
overloads and then calling meshgrid twice:

`meshgrid(*tensors)`
  and
`meshgrid(*tensors, indexing='ij')`

This confirmed that the former signature triggered the original native
function and the latter signature triggered the new native function.

Reviewed By: H-Huang

Differential Revision: D30394313

Pulled By: dagitses

fbshipit-source-id: e265cb114d8caae414ee2305dc463b34fdb57fa6
2021-09-16 09:59:49 -07:00
Kshiteej K
ff6b475d4a [fix] don't expose unique_dim in torch (#63080)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62793

This is mostly a quick fix. I think the more correct fix could be updating `unique_dim` to `_unique_dim`, which could be BC-breaking for C++ users (maybe). There may be something else I am missing.

~~Not sure how to add a test for it.~~ Have tested it locally.

We can add a test like the following. Tested this locally: it fails currently but passes with the fix.
```python
        def test_wildcard_import(self):
            exec('from torch import *')

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63080

Reviewed By: gchanan

Differential Revision: D30738711

Pulled By: zou3519

fbshipit-source-id: b86d0190e45ba0b49fd2cffdcfd2e3a75cc2a35e
2021-09-14 18:19:17 -07:00
Zhaoheng Ni
2c258d91cc Fix torch.istft length mismatch and window runtime error (#63469)
Summary:
The PR fixes two issues:
- See https://github.com/pytorch/pytorch/issues/62747 and https://github.com/pytorch/audio/issues/1409. Fixes the length mismatch when the given ``length`` parameter is longer than expected, by adding padding logic consistent with librosa (see the sketch after this list).
- See https://github.com/pytorch/pytorch/issues/62323. The current implementation checks whether the min value of window_envelop.abs() is greater than zero. In librosa, the signal is normalized only at non-zero values via indexing, like:
```
approx_nonzero_indices = ifft_window_sum > util.tiny(ifft_window_sum)
y[approx_nonzero_indices] /= ifft_window_sum[approx_nonzero_indices]
```
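
A minimal sketch of the `length` behavior from the first point (the sizes are chosen only for illustration):

```python
import torch

x = torch.randn(8000)
window = torch.hann_window(400)
spec = torch.stft(x, n_fft=400, hop_length=160, window=window, return_complex=True)

# Requesting a `length` longer than the frames cover is now handled by
# zero-padding the reconstruction, matching librosa, instead of raising.
y = torch.istft(spec, n_fft=400, hop_length=160, window=window, length=8192)
```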

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63469

Reviewed By: fmassa

Differential Revision: D30695827

Pulled By: nateanl

fbshipit-source-id: d034e53f0d65b3fd1dbd150c9c5acf3faf25a164
2021-09-02 09:31:47 -07:00
Michael Dagitses
d4593d9d08 document why wrappers exist in torch.functional (#62847)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62844.

These wrappers are not super obvious, but ultimately stem from the lack of support for functions with variadic args in native_functions.yaml. https://github.com/pytorch/pytorch/issues/62845 tracks that issue.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62847

Reviewed By: VitalyFedyunin

Differential Revision: D30305016

Pulled By: dagitses

fbshipit-source-id: 716fcecb0417b770bc92cfd8c54f7ead89070896
2021-08-18 11:51:21 -07:00
Michael Dagitses
0f2f6a79cb clarify the documentation of torch.meshgrid (#62977)
Summary:
Also warn about the behavior differences from `numpy.meshgrid`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62977

Reviewed By: mruberry, ngimel

Differential Revision: D30220930

Pulled By: dagitses

fbshipit-source-id: ae6587b41792721cae2135376c58121b4634e296
2021-08-18 04:01:22 -07:00
Mike Ruberry
22f78144c7 Extends warning on norm docs (#63310)
Summary:
torch.norm has a couple of documentation issues, like https://github.com/pytorch/pytorch/issues/44552 and https://github.com/pytorch/pytorch/issues/38595, but since it's deprecated this PR simply clarifies that the documentation (and implementation) of torch.norm may be incorrect. This should be additional encouragement for users to migrate to torch.linalg.vector_norm and torch.linalg.matrix_norm.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63310

Reviewed By: ngimel

Differential Revision: D30337997

Pulled By: mruberry

fbshipit-source-id: 0fdcc438f36e4ab29e21e0a64709e4f35a2467ba
2021-08-16 22:23:45 -07:00
Nikita Vedeneev
dbcfd7739f Make torch.lu differentiable for wide/tall inputs + jit (#61564)
Summary:
As per title.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61564

Reviewed By: astaff

Differential Revision: D30338136

Pulled By: mruberry

fbshipit-source-id: f01436fc90980544cdfa270feee16bb3dda21b93
2021-08-16 11:40:57 -07:00
Meghan Lele
acdad8bc63 [docs] Merge note block in torch.lu documentation (#63156)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63156

**Summary**
This commit merges the four successive `Note` blocks that appear in the
documentation for `torch.lu`. Each one only has one line in it, so all
of them have been merged into one block with a bulleted list that
contains the original items.

**Test Plan**
Continuous integration.

*Before*
<img width="888" alt="Captura de Pantalla 2021-08-12 a la(s) 10 48 39 a  m" src="https://user-images.githubusercontent.com/4392003/129244443-b7d1594e-8833-4c20-a911-e1bf7ca88a8d.png">

*After*
<img width="932" alt="Captura de Pantalla 2021-08-12 a la(s) 10 48 46 a  m" src="https://user-images.githubusercontent.com/4392003/129244462-1f39dcdb-90e0-4fd9-a95f-343b0b6be1f1.png">

**Fixes**
This commit fixes #62339.

Test Plan: Imported from OSS

Reviewed By: navahgar, pbelevich

Differential Revision: D30292633

Pulled By: SplitInfinity

fbshipit-source-id: cb9071165629bfe7316b1d2fe952e4354c75d48f
2021-08-13 12:11:35 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
gmagogsfm
a46d4212bf Allow dims=0 in torch.tensordot call (#61331)
Summary:
In one of my previous PRs rewriting the `tensordot` implementation, I mistakenly treated empty `dims_a` and `dims_b` as illegal values. This turns out not to be true: empty `dims_a` and `dims_b` are supported, and are in fact common when `dims` is passed as an integer. This PR removes the unnecessary check.
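
A quick sketch of the now-supported case:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(4)

# dims=0 contracts nothing, i.e. it computes an outer product; this used to be rejected.
out = torch.tensordot(a, b, dims=0)
print(out.shape)  # torch.Size([2, 3, 4])
```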

Fixes https://github.com/pytorch/pytorch/issues/61096

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61331

Reviewed By: eellison

Differential Revision: D29578910

Pulled By: gmagogsfm

fbshipit-source-id: 96e58164491a077ddc7a1d6aa6ccef8c0c9efda2
2021-07-10 17:05:20 -07:00
lezcano
4e347f1242 [docs] Fix backticks in docs (#60474)
Summary:
There is a very common error when writing docs: One forgets to write a matching `` ` ``, and something like ``:attr:`x`` is rendered in the docs. This PR fixes most (all?) of these errors (and a few others).

I found these running ``grep -r ">[^#<][^<]*\`"`` on the `docs/build/html/generated` folder. The regex finds an HTML tag that does not start with `#` (as python comments in example code may contain backticks) and that contains a backtick in the rendered HTML.

This regex has not given any false positives in the current codebase, so I am inclined to suggest that we should add this check to the CI. Would this be possible / reasonable / easy to do, malfet?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60474

Reviewed By: mrshenli

Differential Revision: D29309633

Pulled By: albanD

fbshipit-source-id: 9621e0e9f87590cea060dd084fa367442b6bd046
2021-06-24 06:27:41 -07:00
Heitor Schueroff
4caca7a15b Improved torch.einsum testing and fixed bug (#59731)
Summary:
Improved torch.einsum testing and fixed a bug where lower case letters appeared before upper case letters in the sorted order, which is inconsistent with NumPy.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59731

Reviewed By: SplitInfinity, ansley

Differential Revision: D29183078

Pulled By: heitorschueroff

fbshipit-source-id: a33980d273707da2d60a387a2af2fa41527ddb68
2021-06-17 04:48:47 -07:00
Heitor Schueroff
58412740ae Added doc for torch.einsum sublist format (#57038)
Summary:
Adds documentation for the new sublist format for `torch.einsum`

closes https://github.com/pytorch/pytorch/issues/21412

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57038

Reviewed By: mruberry

Differential Revision: D28994431

Pulled By: heitorschueroff

fbshipit-source-id: 3dfb154fe6e4c440ac67c2dd92727bb5ecfe289e
2021-06-10 08:01:56 -07:00
Heitor Schueroff
72ae924fad Added sublist support for torch.einsum (#56625)
Summary:
This PR adds an alternative way of calling `torch.einsum`. Instead of specifying the subscripts as letters in the `equation` parameter, one can now specify the subscripts as a list of integers as in `torch.einsum(operand1, subscripts1, operand2, subscripts2, ..., [subscripts_out])`. This would be equivalent to `torch.einsum('<subscripts1>,<subscripts2>,...,->[<subscript_out>]', operand1, operand2, ...)`

TODO
- [x] Update documentation
- [x] Add more error checking
- [x] Update tests
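
For illustration, a small sketch of the sublist form next to its string-equation equivalent:

```python
import torch

a = torch.randn(3, 4)
b = torch.randn(4, 5)

# string form
c1 = torch.einsum('ij,jk->ik', a, b)
# sublist form: operands interleaved with integer subscripts, optional output sublist last
c2 = torch.einsum(a, [0, 1], b, [1, 2], [0, 2])
torch.testing.assert_close(c1, c2)
```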

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56625

Reviewed By: zou3519

Differential Revision: D28062616

Pulled By: heitorschueroff

fbshipit-source-id: ec50ad34f127210696e7c545e4c0675166f127dc
2021-05-21 08:36:45 -07:00
Heitor Schueroff
9123229684 Cleanup functional.py after lu_unpack was removed (#58669)
Summary:
Remove code in functional.py that became unused after PR c790fd2bf8

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58669

Reviewed By: driazati

Differential Revision: D28572377

Pulled By: heitorschueroff

fbshipit-source-id: c90d80ead5f3d69100667488bc6b14ef54b95b54
2021-05-20 13:06:30 -07:00
Nikita Vedeneev
c790fd2bf8 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function (so its gradient is implemented via `autograd.Function`),
`torch.lu_solve` is a native function and therefore cannot access `torch.lu_unpack`, which is implemented in Python.

Hence this PR presents a native (ATen) version of `lu_unpack`. With this function, it is also possible to update the gradients for `torch.lu` so that backward+JIT is supported (there is no JIT for `autograd.Function`).
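
A brief usage sketch of `torch.lu_unpack` reconstructing the factors:

```python
import torch

A = torch.randn(4, 4, dtype=torch.float64)
LU, pivots = torch.lu(A)
P, L, U = torch.lu_unpack(LU, pivots)

# P @ L @ U reconstructs A up to floating-point error
torch.testing.assert_close(P @ L @ U, A)
```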

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: albanD

Differential Revision: D28355725

Pulled By: mruberry

fbshipit-source-id: 281260f3b6e93c15b08b2ba66d5a221314b00e78
2021-05-11 22:53:21 -07:00
Heitor Schueroff
0da5421837 Doc deprecate norm and add seealso to linalg.norm (#57986)
Summary:
**BC-breaking note**

This PR updates the deprecation notice for torch.norm to point users to the new torch.linalg.vector_norm and torch.linalg.matrix_norm functions.
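
For reference, a sketch of the suggested migration:

```python
import torch

A = torch.randn(4, 5)

# Instead of the deprecated torch.norm, use the more precisely specified functions:
v = torch.linalg.vector_norm(A, ord=2)       # treats A as a flat vector
m = torch.linalg.matrix_norm(A, ord='fro')   # Frobenius norm of the matrix
```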

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57986

Reviewed By: nikithamalgifb

Differential Revision: D28353625

Pulled By: heitorschueroff

fbshipit-source-id: 5de77d89f0e84945baa5fea91f73918dc7eeafd4
2021-05-11 12:02:12 -07:00
lezcano
43f6deb6e4 Deprecate chain_matmul (#57735)
Summary:
This one's easy. I also included a bugfix.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57735

Reviewed By: bdhirsh

Differential Revision: D28318277

Pulled By: mruberry

fbshipit-source-id: c3c4546a11ba5b555b99ee79b1ce6c0649fa7323
2021-05-11 00:09:36 -07:00
Mike Ruberry
3c87fe9b14 Revert D28117714: [pytorch][PR] ATen lu_unpack. Required for making torch.lu_solve differentiable.
Test Plan: revert-hammer

Differential Revision:
D28117714 (5c67d8dfd3)

Original commit changeset: befd33db12ec

fbshipit-source-id: 295b2134935542a903a73f90a7998239dfe6cc81
2021-05-09 23:20:06 -07:00
Nikita Vedeneev
5c67d8dfd3 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function (so its gradient is implemented via `autograd.Function`),
`torch.lu_solve` is a native function and therefore cannot access `torch.lu_unpack`, which is implemented in Python.

Hence this PR presents a native (ATen) version of `lu_unpack`. With this function, it is also possible to update the gradients for `torch.lu` so that backward+JIT is supported (there is no JIT for `autograd.Function`).

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: astaff

Differential Revision: D28117714

Pulled By: mruberry

fbshipit-source-id: befd33db12ecc147afacac792418b6f4948fa4a4
2021-05-09 19:12:56 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
Sam Estep
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.
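
For illustration, the three `noqa` forms discussed above (only the last one suppresses a single, specific error):

```python
# Blanket suppression: every error on the line is ignored.
import os  # noqa

# Missing colon: flake8 ignores the code and still treats this as a blanket noqa.
import sys  # noqa F401

# Qualified suppression: only F401 (imported but unused) is silenced.
import json  # noqa: F401
```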

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
Peter Bell
5c402d9026 STFT: Clarify output shape in documentation (#54877)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54631

I removed the phrase "When `onesided` is the default value `True`". It's not always the default, and it's also confusing because it doesn't seem to relate to the bullet points it introduces. It makes more sense in the sentence before, i.e. these frequencies are included "when the output is onesided". So I've rewritten it to convey that meaning and included the correct formula for the frequencies.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54877

Reviewed By: ngimel

Differential Revision: D27562785

Pulled By: mruberry

fbshipit-source-id: d7f36382611e8e176e3370393d1b371d577d46bb
2021-04-06 15:28:57 -07:00
Heitor Schueroff
d98072b027 Deprecate torch.chain_matmul in favor of torch.linalg.multi_dot (#53453)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53453
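
For reference, a sketch of the replacement suggested by the deprecation:

```python
import torch

a = torch.randn(3, 4)
b = torch.randn(4, 5)
c = torch.randn(5, 6)

# torch.chain_matmul(a, b, c) is deprecated; torch.linalg.multi_dot takes a sequence instead
out = torch.linalg.multi_dot([a, b, c])
```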

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D27406282

Pulled By: heitorschueroff

fbshipit-source-id: b6e715d1b88e0613ee6b6208cb28ba4757e31717
2021-04-01 04:50:51 -07:00
Nikita Vedeneev
dfc7fa03e5 lu_backward: more numerically stable and with complex support. (#53994)
Summary:
As per title.

Numerical stability is increased by replacing inverses with solutions to triangular linear systems.

Unblocks computing `torch.det` for FULL-rank inputs of complex dtypes via the LU decomposition once https://github.com/pytorch/pytorch/pull/48125/files is merged:
```
LU, pivots = input.lu()
P, L, U = torch.lu_unpack(LU, pivots)
det_input = P.det() * torch.prod(U.diagonal(0, -1, -2), dim=-1)  # P is not differentiable, so we are fine even if it is complex.
```

Unfortunately, since `lu_backward` is implemented as `autograd.Function`, we cannot support both autograd and scripting at the moment.
The solution would be to move all the lu-related methods to ATen, see https://github.com/pytorch/pytorch/issues/53364.

Resolves https://github.com/pytorch/pytorch/issues/52891
TODOs:
* extend lu_backward for tall/wide matrices of full rank.
* move lu-related functionality to ATen and make it differentiable.
* handle rank-deficient inputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53994

Reviewed By: pbelevich

Differential Revision: D27188529

Pulled By: anjali411

fbshipit-source-id: 8e053b240413dbf074904dce01cd564583d1f064
2021-03-25 13:33:58 -07:00
Yanan Cao
f48a9712b7 Rewrite functional.tensordot to be TorchScript-able (#53672)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53487

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53672

Reviewed By: VitalyFedyunin

Differential Revision: D26934392

Pulled By: gmagogsfm

fbshipit-source-id: f842af340e4be723bf90b903793b0221af158ca7
2021-03-20 23:03:30 -07:00
Philip Meier
b0afe945a7 Fix pylint error torch.tensor is not callable (#53424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53424

Fixes https://github.com/pytorch/pytorch/issues/24807 and supersedes the stale https://github.com/pytorch/pytorch/issues/25093 (Cc Microsheep). If you now run the reproduction

```python
import torch

if __name__ == "__main__":
    t = torch.tensor([1, 2, 3], dtype=torch.float64)
```

with `pylint==2.6.0`, you get the following output

```
test_pylint.py:1:0: C0114: Missing module docstring (missing-module-docstring)
test_pylint.py:4:8: E1101: Module 'torch' has no 'tensor' member; maybe 'Tensor'? (no-
member)
test_pylint.py:4:38: E1101: Module 'torch' has no 'float64' member (no-member)
```

Now `pylint` doesn't recognize `torch.tensor` at all, but it is promoted in the stub. Given that it also doesn't recognize `torch.float64`, I think fixing this is out of scope of this PR.

 ---

## TL;DR

This is BC-breaking only for users that rely on unintended behavior. Since `torch/__init__.py` loaded `torch/tensor.py`, it was populated in `sys.modules`. `torch/__init__.py` then overwrote `torch.tensor` with the actual function. Because of this, `import torch.tensor as tensor` did not fail, but returned the function rather than the module. Users that rely on this import need to change it to `from torch import tensor`.

Reviewed By: zou3519

Differential Revision: D26223815

Pulled By: bdhirsh

fbshipit-source-id: 125b9ff3d276e84a645cd7521e8d6160b1ca1c21
2021-03-09 11:32:53 -08:00
momohatt
3403babd94 [doc] Fix documentations of torch functions (#52982)
Summary:
This PR includes multiple small fixes of docstrings.

* Fix documentation for [`torch.atleast_2d`](https://pytorch.org/docs/master/generated/torch.atleast_2d.html) and [`torch.atleast_3d`](https://pytorch.org/docs/master/generated/torch.atleast_3d.html) by adding a new line before `Args::`.
* Fix indentation for [`torch.isfinite`](https://pytorch.org/docs/master/generated/torch.isfinite.html) and [`torch.isinf`](https://pytorch.org/docs/master/generated/torch.isinf.html). The "Arguments", "Parameters" and "Examples" sections need to be at the same level as the first description.
* Insert a new line after `Example::` where it is missing. This makes a difference in the way the documentation is rendered: see [this](https://pytorch.org/docs/master/generated/torch.gt.html) (with a new line) and [this](https://pytorch.org/docs/master/generated/torch.triu_indices.html) (without). As the majority of the docs seem to follow the former style, this PR amends the latter cases.
* Fix the "Returns" section of [`torch.block_diag`](https://pytorch.org/docs/master/generated/torch.block_diag.html) and [`torch.cartesian_prod`](https://pytorch.org/docs/master/generated/torch.cartesian_prod.html). The second and the subsequent lines shouldn't be indented, as can be seen in the docstring of [`torch.vander`](https://pytorch.org/docs/master/generated/torch.vander.html).
* Fix variable names in the example of `torch.fft.(i)fftn`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52982

Reviewed By: mruberry

Differential Revision: D26724408

Pulled By: H-Huang

fbshipit-source-id: c65aa0621f7858b05fd16f497caacf6ea8eb33c9
2021-03-01 09:59:57 -08:00
Helene
47557b95ef Removed typographical error from tech docs (#51286)
Summary:
Duplications removed from the tech docs.

![Screenshot](https://user-images.githubusercontent.com/71665475/106158807-6e5b8100-6184-11eb-9036-bccdf2086c31.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51286

Reviewed By: albanD

Differential Revision: D26227627

Pulled By: ailzhang

fbshipit-source-id: efa0cd90face458673b8530388378d5a7eb0f1cf
2021-02-03 14:09:36 -08:00