Commit Graph

612 Commits

kshitij12345
c9d0c855f7 [special] Alias for special.expm1 and special.exp2 (#54670)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345
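
A quick sketch of the intended alias behavior (an illustrative assumption, not from the commit message; it presumes the `torch.special` namespace from #52296 is importable):

```python
import torch

x = torch.tensor([0.0, 0.5, 1.0])
# the special.* aliases should match the existing torch functions exactly
assert torch.equal(torch.special.expm1(x), torch.expm1(x))  # exp(x) - 1
assert torch.equal(torch.special.exp2(x), torch.exp2(x))    # 2 ** x
```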

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54670

Reviewed By: H-Huang

Differential Revision: D27401440

Pulled By: mruberry

fbshipit-source-id: 02b1fd0e8ffd3f5a017d6b6b9229b76b92b4b745
2021-03-30 10:03:13 -07:00
Jeff Yang
6dedecc77c docs: add memory_format in torch.empty (#54664)
Summary:
fixes https://github.com/pytorch/pytorch/issues/43504
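
For illustration, a minimal sketch of the parameter being documented (not part of the original commit message):

```python
import torch

# memory_format selects the physical layout of the uninitialized tensor
x = torch.empty(2, 3, 8, 8, memory_format=torch.channels_last)
print(x.is_contiguous(memory_format=torch.channels_last))  # True
```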

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54664

Reviewed By: ailzhang

Differential Revision: D27328504

Pulled By: zou3519

fbshipit-source-id: 6c3e11473ada34f7e9fae7bae366328e50f71b0e
2021-03-29 10:23:36 -07:00
Jeff Yang
12a454788b docs: fix parameter in torch.take (#54667)
Summary:
fixes https://github.com/pytorch/pytorch/issues/43495
https://11812612-65600975-gh.circle-artifacts.com/0/docs/generated/torch.take.html
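
For context, `torch.take` indexes the input as if it were a flattened 1-D tensor; a small illustrative sketch (not from the commit):

```python
import torch

src = torch.tensor([[4, 3, 5],
                    [6, 7, 8]])
# indices refer to positions in the flattened input
print(torch.take(src, torch.tensor([0, 2, 5])))  # tensor([4, 5, 8])
```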

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54667

Reviewed By: ailzhang

Differential Revision: D27328252

Pulled By: zou3519

fbshipit-source-id: 5812ebdaba063ca0a9c0f4a9becd00a570d84d30
2021-03-29 10:01:23 -07:00
kshitij12345
0527d14248 [numpy] Add torch.take_along_dim (#52833)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/38349

Wrapper around the existing `torch.gather` with broadcasting logic.
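
A minimal usage sketch (mirroring `numpy.take_along_axis`; outputs assumed from the documented semantics):

```python
import torch

t = torch.tensor([[10, 30, 20],
                  [60, 40, 50]])
idx = t.argsort(dim=1)
# gather values along dim=1 using broadcastable indices
print(torch.take_along_dim(t, idx, dim=1))
# tensor([[10, 20, 30],
#         [40, 50, 60]])
```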

TODO:
* [x] Add Doc entry (see if phrasing can be improved)
* [x] Add OpInfo
* [x] Add test against numpy
* [x] Handle broadcasting behaviour and the case when `dim` is not given.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52833

Reviewed By: malfet

Differential Revision: D27319038

Pulled By: mruberry

fbshipit-source-id: 00f307825f92c679d96e264997aa5509172f5ed1
2021-03-28 05:22:51 -07:00
Heitor Schueroff
591084abb8 Deprecate torch.matrix_power in favor of torch.linalg.matrix_power (#53538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53538

* #52608 Added torch.linalg.matrix_power
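
A sketch of the migration this deprecation points users toward (assuming `torch.linalg.matrix_power` from #52608):

```python
import torch

A = torch.tensor([[2., 1.],
                  [0., 2.]])
# preferred going forward; torch.matrix_power is deprecated in its favor
print(torch.linalg.matrix_power(A, 2))  # same as A @ A
```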

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D27261531

Pulled By: heitorschueroff

fbshipit-source-id: 5a944b390f3cc6896c2aa92ba467319ddc9309e4
2021-03-23 15:11:24 -07:00
kshitij12345
afb560065c [testing] OpInfo for sgn and sign (#53885)
Summary:
Reference https://github.com/pytorch/pytorch/issues/42515

TODO:
* [x] Check rendered docs. https://11525594-65600975-gh.circle-artifacts.com/0/docs/generated/torch.sgn.html
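
For reference, the two ops covered by the new OpInfos differ on complex input; a small sketch (outputs assumed from the documented semantics):

```python
import torch

z = torch.tensor([3 + 4j, 0j])
print(torch.sgn(z))   # z / |z| elementwise: tensor([0.6000+0.8000j, 0.0000+0.0000j])

x = torch.tensor([-2.0, 0.0, 5.0])
print(torch.sign(x))  # real-only signum: tensor([-1., 0., 1.])
```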

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53885

Reviewed By: ejguan

Differential Revision: D27114318

Pulled By: mruberry

fbshipit-source-id: 678179d87741aacd3b50f03dc460207c5aa29589
2021-03-22 09:39:40 -07:00
Yukio Siraichi
27048c1dfa Remove legacy constructor calls from _torch_ folder. (#53889)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53146
Related to https://github.com/pytorch/pytorch/issues/47112

As mentioned in https://github.com/pytorch/pytorch/issues/47112, the plan is to:

1. Verify that all `torch.Tensor()` scenarios are covered by other functions
2. Scrub internal `torch.Tensor()` uses
3. Update the docs and throw `TORCH_WARN_ONCE` if someone uses `torch.Tensor()`

In this PR, I replaced all occurrences of the legacy `torch.Tensor` constructor present in the _torch_ folder.
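
A sketch of the kind of replacement involved (illustrative; the exact call sites are in the PR diff):

```python
import torch

# legacy, always produces float32 and is ambiguous between data and shape:
# t = torch.Tensor([1, 2, 3])
# preferred replacements:
t = torch.tensor([1, 2, 3])   # infers dtype from data (int64 here)
u = torch.empty(3)            # uninitialized float32, like torch.Tensor(3)
```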

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53889

Reviewed By: walterddr, zou3519

Differential Revision: D27190743

Pulled By: jbschlosser

fbshipit-source-id: 7ecc201d57935b8dbb98ae3718b60d95cb55a010
2021-03-19 15:20:19 -07:00
kshitij12345
bfd009836e [torch.special] Add special.erf{c, inv} (#53260)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Also adds `overrides` entry for module and the newly added functions.
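
A minimal sketch of the added functions (assumed to match the existing `torch.erfc`/`torch.erfinv` they mirror):

```python
import torch

x = torch.tensor([0.0, 0.5, 1.0])
# erfc(x) == 1 - erf(x); erfinv is the inverse of erf
assert torch.allclose(torch.special.erfc(x), 1 - torch.erf(x))
assert torch.allclose(torch.special.erfinv(torch.erf(x)), x)
```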

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53260

Reviewed By: agolynski

Differential Revision: D27114342

Pulled By: mruberry

fbshipit-source-id: b1dd88f373db251bb71df12d33b160382138f63f
2021-03-18 19:06:25 -07:00
Ivan Yashchuk
564456ac44 Added autograd support for torch.orgqr (#52637)
Summary:
This PR adds autograd support for `torch.orgqr`.

Since `torch.orgqr` is one of the few functions that expose LAPACK's naming, and all other linear algebra routines were renamed a long time ago, I also added a new function with a new name, and `torch.orgqr` is now an alias for it.

The new proposed name is `householder_product`. For a matrix `input` and a vector `tau`, LAPACK's orgqr operation takes columns of `input` (called Householder vectors or elementary reflectors) and scalars of `tau` that together represent Householder matrices, and then computes the product of these matrices. See https://www.netlib.org/lapack/lug/node128.html.
Other linear algebra libraries that I'm aware of do not expose this LAPACK function, so there is some freedom in naming it. It is usually used internally only for QR decomposition, but it can be useful for deep learning tasks now that it supports differentiation.
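
A sketch of the intended use, assuming the new name lands as `torch.linalg.householder_product` (with `torch.orgqr` as the alias):

```python
import torch

A = torch.randn(4, 3)
h, tau = torch.geqrf(A)  # Householder vectors and scalars from a QR factorization
h.requires_grad_()
tau.requires_grad_()
Q = torch.linalg.householder_product(h, tau)  # the Q factor of the QR decomposition
Q.sum().backward()  # autograd support is what this PR adds
```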

Resolves https://github.com/pytorch/pytorch/issues/50104

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52637

Reviewed By: agolynski

Differential Revision: D27114246

Pulled By: mruberry

fbshipit-source-id: 9ab51efe52aec7c137aa018c7bd486297e4111ce
2021-03-18 05:42:18 -07:00
mattip
ae154a8c2c various doc building cleanups (#53851)
Summary:
brianjo
- Add a javascript snippet to close the expandable left navbar sections 'Notes', 'Language Bindings', 'Libraries', 'Community'
- Fix two latex bugs that were causing output in the log that might have been misleading when looking for true doc build problems
- Change the way release versions interact with sphinx. I tested these via building docs twice: once with `export RELEASE=1` and once without.
  - Remove perl scripting to turn the static version text into a link to the versions.html document. Instead, put this where it belongs in the layout.html template. This is the way the domain libraries (text, vision, audio) do it.
  -  There were two separate templates for master and release, the only difference between them being that the master template has an admonition "You are viewing unstable developer preview docs....". Instead, toggle that with the value of `release`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53851

Reviewed By: mruberry

Differential Revision: D27085875

Pulled By: ngimel

fbshipit-source-id: c2d674deb924162f17131d895cb53cef08a1f1cb
2021-03-16 15:01:59 -07:00
Xiong Wei
da10ccd35f Implements cpu_kernel_multiple_outputs and torch.frexp (#51097)
Summary:
Closes https://github.com/pytorch/pytorch/issues/51108
Related https://github.com/pytorch/pytorch/issues/38349

This PR implements the `cpu_kernel_multiple_outputs` to support returning multiple values in a CPU kernel.
```c++
auto iter = at::TensorIteratorConfig()
  .add_output(out1)
  .add_output(out2)
  .add_input(in1)
  .add_input(in2)
  .build();

at::native::cpu_kernel_multiple_outputs(iter,
  [=](float a, float b) -> std::tuple<float, float> {
    float add = a + b;
    float mul = a * b;
    return std::tuple<float, float>(add, mul);
  }
);
```

`out1` will equal `torch.add(in1, in2)`, while `out2` will equal `torch.mul(in1, in2)`.
This helps developers more conveniently implement new torch functions that return two tensors, such as the NumPy-like functions [divmod](https://numpy.org/doc/1.18/reference/generated/numpy.divmod.html?highlight=divmod#numpy.divmod) and [frexp](https://numpy.org/doc/stable/reference/generated/numpy.frexp.html#numpy.frexp).

This PR adds the `torch.frexp` function to exercise the new functionality provided by `cpu_kernel_multiple_outputs`.
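
A sketch of the new Python-facing op:

```python
import torch

x = torch.tensor([1.0, 8.0, 0.5])
mantissa, exponent = torch.frexp(x)
# x == mantissa * 2**exponent, with 0.5 <= |mantissa| < 1
print(mantissa)  # tensor([0.5000, 0.5000, 0.5000])
print(exponent)  # tensor([1, 4, 0], dtype=torch.int32)
```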

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51097

Reviewed By: albanD

Differential Revision: D26982619

Pulled By: heitorschueroff

fbshipit-source-id: cb61c7f2c79873ab72ab5a61cbdb9203531ad469
2021-03-15 10:44:32 -07:00
Ivan Yashchuk
fe08671756 Added cuBLAS path for torch.triangular_solve (#53147)
Summary:
This PR adds the cuBLAS based path for `torch.triangular_solve`
The device dispatching helper function was removed from native_functions.yaml; it is replaced with DECLARE/DEFINE_DISPATCH.

`magmaTriangularSolve` is removed and replaced with cuBLAS calls, this is not a BC-breaking change because internally MAGMA just calls the same cuBLAS function and doesn't do anything else.

Batched cuBLAS is faster than batched MAGMA for matrices of size up to 512x512; beyond that, MAGMA is faster. For batches smaller than ~8 and matrix sizes larger than 64x64, a for-loop of cuBLAS calls is faster than the batched version.
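
The user-facing call is unchanged; for reference, a minimal sketch of the op whose CUDA backend this swaps:

```python
import torch

A = torch.tensor([[2., 0.],
                  [1., 3.]])  # lower-triangular
b = torch.tensor([[4.],
                  [7.]])
x, _ = torch.triangular_solve(b, A, upper=False)  # solves A @ x = b
```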

Ref. https://github.com/pytorch/pytorch/issues/47953

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53147

Reviewed By: heitorschueroff

Differential Revision: D27007416

Pulled By: mruberry

fbshipit-source-id: ddfc190346e6a56b84145ed0a9af67ca9cde3506
2021-03-12 13:38:42 -08:00
iramazanli
e90e773445 Fix to empty_like example (#53088)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/52375

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53088

Reviewed By: zou3519

Differential Revision: D26752772

Pulled By: iramazanli

fbshipit-source-id: 21e395c6bbfd8f2cc808ddc12aefb2a426bb50d0
2021-03-08 13:19:47 -08:00
Edward Yang
758fb94fcb Prefix assert_async with underscore, fix some bugs in assert_async CUDA testing (#53276)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53276

- One of the tests had a syntax error (but the test
  wasn't fine-grained enough to catch this; any error
  was a pass)
- Doesn't work on ROCm
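
A hedged sketch of the renamed (now private) entry point:

```python
import torch

# raises if the tensor's value is zero; on CUDA the check is queued asynchronously
torch._assert_async(torch.tensor(1.0))  # passes
```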

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D26820048

Test Plan: Imported from OSS

Reviewed By: mruberry

Pulled By: ezyang

fbshipit-source-id: b02c4252d10191c3b1b78f141d008084dc860c45
2021-03-05 17:36:01 -08:00
kshitij12345
e9d7137072 fixes #38775 #38779: complex support for linspace and logspace (#38875)
Summary:
Closes https://github.com/pytorch/pytorch/issues/38775, Closes https://github.com/pytorch/pytorch/issues/38779

TO-DO:
* [x] Add Tests

Quansight Tracking: q-38775, q-38779
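
A small sketch of the new behavior (values assumed from linear interpolation between the complex endpoints):

```python
import torch

print(torch.linspace(0j, 1 + 1j, steps=3))
# tensor([0.0000+0.0000j, 0.5000+0.5000j, 1.0000+1.0000j])
```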

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38875

Reviewed By: malfet

Differential Revision: D26628530

Pulled By: anjali411

fbshipit-source-id: ca4259b9f6725c4a4350f944465327169d12122e
2021-03-05 08:37:55 -08:00
Edward Yang
cfd9360d09 Revert D26837780: Revert D26819810: Revert D26815021: Revert D26744062: Add assert_async
Test Plan: revert-hammer

Differential Revision:
D26837780

Original commit changeset: 21567cab5c0f

fbshipit-source-id: 8ea735e5fdc97e32ae3fafd40297a1b8a7cd34b0
2021-03-04 20:45:35 -08:00
Edward Yang
1accffe450 Revert D26819810: Revert D26815021: Revert D26744062: Add assert_async
Test Plan: revert-hammer

Differential Revision:
D26819810

Original commit changeset: e528260e1aa9

fbshipit-source-id: 21567cab5c0ff5f5e60a699d4d4678773a567c30
2021-03-04 18:48:56 -08:00
Edward Yang
9e5e5a7d96 Revert D26815021: Revert D26744062: Add assert_async
Test Plan: revert-hammer

Differential Revision:
D26815021

Original commit changeset: 972eaafcdf14

fbshipit-source-id: e528260e1aa91df1873c73af00aa57addd671607
2021-03-04 09:28:25 -08:00
Mike Ruberry
b864457743 Revert D26744062: Add assert_async
Test Plan: revert-hammer

Differential Revision:
D26744062 (12d63cc2f5)

Original commit changeset: be6d2653afe5

fbshipit-source-id: 972eaafcdf14d96abdec3dea6bcbd5cac1f3d759
2021-03-04 04:11:25 -08:00
kshitij12345
c4c77e2001 [special] add torch.special namespace (#52296)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

* Add `torch.special` namespace
* Add `torch.special.gammaln` (alias to `torch.lgamma`)
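
A minimal sketch of the alias being added:

```python
import torch

x = torch.tensor([0.5, 1.5, 5.0])
# gammaln is an alias of torch.lgamma: log(|Gamma(x)|)
assert torch.equal(torch.special.gammaln(x), torch.lgamma(x))
```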

TODO:
* Add proper entries for docs.
   * [x] Add .rst file entry
   * [x] Add documentation
   * [x] Update `lgamma` OpInfo entry for alias to `special.gammaln`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52296

Reviewed By: ngimel

Differential Revision: D26754890

Pulled By: mruberry

fbshipit-source-id: 73479f68989d6443ad07b7b02763fa98973c15f6
2021-03-04 00:04:36 -08:00
Edward Yang
12d63cc2f5 Add assert_async (#53086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53086

Fixes #36853

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D26744062

Pulled By: ezyang

fbshipit-source-id: be6d2653afe584adf67a05b5d43185b40764650d
2021-03-03 16:18:07 -08:00
momohatt
3403babd94 [doc] Fix documentations of torch functions (#52982)
Summary:
This PR includes multiple small fixes of docstrings.

* Fix documentation for [`torch.atleast_2d`](https://pytorch.org/docs/master/generated/torch.atleast_2d.html) and [`torch.atleast_3d`](https://pytorch.org/docs/master/generated/torch.atleast_3d.html) by adding a new line before `Args::`.
* Fix indentation for [`torch.isfinite`](https://pytorch.org/docs/master/generated/torch.isfinite.html) and [`torch.isinf`](https://pytorch.org/docs/master/generated/torch.isinf.html). The "Arguments", "Parameters" and "Examples" sections need to be at the same level as the first description.
* Insert a new line after `Example::` where it is missing. This makes a difference in the way the documentation is rendered: see [this](https://pytorch.org/docs/master/generated/torch.gt.html) (with a new line) and [this](https://pytorch.org/docs/master/generated/torch.triu_indices.html) (without). As the majority of the docs seem to follow the former style, this PR amends the latter cases.
* Fix the "Returns" section of [`torch.block_diag`](https://pytorch.org/docs/master/generated/torch.block_diag.html) and [`torch.cartesian_prod`](https://pytorch.org/docs/master/generated/torch.cartesian_prod.html). The second and the subsequent lines shouldn't be indented, as can be seen in the docstring of [`torch.vander`](https://pytorch.org/docs/master/generated/torch.vander.html).
* Fix variable names in the example of `torch.fft.(i)fftn`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52982

Reviewed By: mruberry

Differential Revision: D26724408

Pulled By: H-Huang

fbshipit-source-id: c65aa0621f7858b05fd16f497caacf6ea8eb33c9
2021-03-01 09:59:57 -08:00
Nikita Vedeneev
9699c703c2 Stable sort for the CPU take 2. (#51790)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/38681.
A duplicate of https://github.com/pytorch/pytorch/pull/50052, created so it can be imported into the fb internal tests.
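
A sketch of what stability guarantees, assuming the `stable=True` flag this work enables:

```python
import torch

v = torch.tensor([2, 1, 2, 1])
vals, idx = torch.sort(v, stable=True)
# equal elements keep their original relative order
print(idx)  # tensor([1, 3, 0, 2])
```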

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51790

Reviewed By: agolynski

Differential Revision: D26279045

Pulled By: glaringlee

fbshipit-source-id: 348e171dee9c370a76002b65d0c82c329f57a421
2021-02-19 09:28:57 -08:00
Alban Desmaison
e0d9d0f248 update symeig backward note about similar eigenvalues (#52311)
Summary:
First part of https://github.com/pytorch/pytorch/issues/49886 to at least properly warn users of the current state

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52311

Reviewed By: soulitzer

Differential Revision: D26495644

Pulled By: albanD

fbshipit-source-id: 72abdfe41cdbcc1ac739a536eb85d1aa4ba90897
2021-02-17 19:07:25 -08:00
Mike Ruberry
1795398c24 Updates rounding_mode documentation to remove "true" (#52202)
Summary:
In design review the use of the word "true" for a "rounding mode" which actually performed no rounding was, understandably, considered confusing. This PR updates the documentation to remove references to "true." The signatures for torch.div and torch.divide are updated to reflect the future behavior where rounding_mode=None will be the default.

This is slightly inaccurate. Today, when rounding_mode is not specified, it is effectively None, but users cannot yet actually pass rounding_mode=None. That change was considered too disruptive to the 1.8 branch cut process.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52202

Reviewed By: gchanan

Differential Revision: D26424979

Pulled By: mruberry

fbshipit-source-id: db3cc769c0d9c6d7e42bfad294073c99fa9168d9
2021-02-12 09:19:39 -08:00
Mike Ruberry
594a66d778 Warn about floor_divide performing incorrect rounding (#50281) (#50281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50281

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51745

Test Plan: Imported from OSS

Reviewed By: ngimel

Pulled By: mruberry

Differential Revision: D26257855

fbshipit-source-id: e5d497cf07b0c746838ed081c5d0e82fb4cb701b
2021-02-10 03:13:34 -08:00
Jeffrey Wan
159c48b19b Fix triplet margin loss and reciprocal docs (#51650)
Summary:
Reciprocal: the note should be placed after the formula

Triplet-margin-loss (before):
![image](https://user-images.githubusercontent.com/13428986/106784863-cb3eb780-661a-11eb-8372-07b51e4cb2d4.png)
After:
![image](https://user-images.githubusercontent.com/13428986/106784948-e5789580-661a-11eb-890c-6185aab96e54.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51650

Reviewed By: izdeby

Differential Revision: D26314151

Pulled By: soulitzer

fbshipit-source-id: d7574e64e96a41a515231ba7e1008de8b2f292aa
2021-02-08 12:15:11 -08:00
Vasiliy Kuznetsov
8c48af822e pytorch docs: add fake_quantize functions documentation (#51748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51748

Adding docs for `fake_quantize_per_tensor_affine` and `fake_quantize_per_channel_affine`
functions.

Note: not documenting `fake_quantize_per_tensor_affine_cachemask` and
`fake_quantize_per_channel_affine_cachemask` since they are implementation details
of `fake_quantize_per_tensor_affine` and `fake_quantize_per_channel_affine`,
and do not need to be exposed to the user at the moment.

Test Plan: Build the docs locally on Mac OS, it looks good

Reviewed By: supriyar

Differential Revision: D26270514

Pulled By: vkuzo

fbshipit-source-id: 8e3c9815a12a3427572cb4d34a779e9f5e4facdd
2021-02-05 17:53:02 -08:00
Heitor Schueroff
e7ff0854c6 [doc] Fix inconsistencies with torch.linalg.inv and deprecate torch.inverse (#51672)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51672

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26240535

Pulled By: heitorschueroff

fbshipit-source-id: 16dbd0a8a8c0f851faa12bf092dbedfb7cb0b292
2021-02-04 17:19:45 -08:00
Heitor Schueroff
ff4848aaa1 [doc] Fix inconsistencies with linalg.pinv docs and deprecate pinverse (#51671)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51671

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26240534

Pulled By: heitorschueroff

fbshipit-source-id: 26e2a3cad2105e6e2b7779e785666b38597450c5
2021-02-04 17:19:41 -08:00
Heitor Schueroff
e7d7256f2d [doc] Fix inconsistencies with torch.linalg.matrix_rank doc (#51660)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51660

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26234100

Pulled By: heitorschueroff

fbshipit-source-id: b9c48c0e172461ed2770d52c07a147152d51d4b7
2021-02-04 17:19:37 -08:00
Heitor Schueroff
87504c3265 [doc] Fix inconsistencies with torch.linalg.eigh (#51658)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51658

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26234101

Pulled By: heitorschueroff

fbshipit-source-id: c1b5cc74ba0b32c49bfd843e97f957971d8be364
2021-02-04 17:19:29 -08:00
Heitor Schueroff
4835f203ec [doc] Fix inconsistencies with torch.linalg.det docs (#51651)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51651

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26234103

Pulled By: heitorschueroff

fbshipit-source-id: 00ec7dae942bda887f57cb76752f8b5ef25d276a
2021-02-04 17:19:25 -08:00
Peter Bell
b150f150ba Add division overload with rounding_mode selection (#51706)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51706

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50280

As mentioned in gh-43874, this adds a `rounding_mode={'true', 'trunc', 'floor'}`
argument so `torch.div` can be used as a replacement for `floor_divide` during
the transitional period.

I've included dedicated kernels for truncated and floor division which
aren't strictly necessary for float, but do perform significantly better (~2x) than
doing true division followed by a separate rounding kernel.

Note: I introduce new overloads for `aten::div` instead of just adding a default
`rounding_mode` because various JIT passes rely on the exact operator schema.
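
A sketch of the new overload in use:

```python
import torch

a = torch.tensor([ 7., -7.])
b = torch.tensor([ 2.,  2.])
print(torch.div(a, b))                         # true division: tensor([ 3.5000, -3.5000])
print(torch.div(a, b, rounding_mode='trunc'))  # toward zero:   tensor([ 3., -3.])
print(torch.div(a, b, rounding_mode='floor'))  # toward -inf:   tensor([ 3., -4.])
```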

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D26123271

Pulled By: mruberry

fbshipit-source-id: 51a83717602114597ec9c4d946e35a392eb01d46
2021-02-04 13:08:36 -08:00
Ilia Cherniavskii
f1f9b049d8 [profiler] Support top-level memory events (#51421)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51421

Mark memory events that did not happen within an operator context
explicitly in the profiler output.
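
A hedged sketch of how such events surface, using the legacy autograd profiler API of this era:

```python
import torch

with torch.autograd.profiler.profile(profile_memory=True) as prof:
    x = torch.randn(1024, 1024)
# memory events recorded outside any operator context are marked explicitly
print(prof.key_averages().table(sort_by="cpu_memory_usage"))
```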

Test Plan: python test/test_profiler.py -k test_memory_profiler

Reviewed By: ngimel

Differential Revision: D26166518

Pulled By: ilia-cher

fbshipit-source-id: 3c14d3ac25a7137733ea7cc65f0eb48693a98f5e
2021-02-04 04:14:15 -08:00
Jeffrey Wan
b18eeaa80a Implement np.diff for single order differences (#50569)
Summary:
Implements `np.diff` for single order differences only:
 - method and function variants for `diff` and function variant for `diff_out`
 - supports out variant, but not in-place since shape changes
 - adds OpInfo entry, and test in `test_torch`
 - automatic autograd because we are using the `Math` dispatch
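
A quick sketch of the resulting op (prepend/append restricted to Tensors, per the update below):

```python
import torch

t = torch.tensor([1, 4, 9, 16])
print(torch.diff(t))                             # tensor([3, 5, 7])
print(torch.diff(t, prepend=torch.tensor([0])))  # tensor([1, 3, 5, 7])
```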

_Update: we only support Tensors for prepend and append in this PR. See discussion below and comments for more details._

Currently there is a quirk in the c++ API based on how this is implemented: it is not possible to specify scalar prepend and appends without also specifying all 4 arguments.

That is because the goal is to match NumPy's diff signature of `diff(int n=1, int dim=-1, Union[Scalar, Tensor] prepend=None, Union[Scalar, Tensor] append=None)` where all arguments are optional, positional, and in the correct order.
There are a couple of blockers. One is C++ ambiguity: this prevents us from simply writing `diff(int n=1, int dim=-1, Scalar? prepend=None, Tensor? append=None)` etc. for all combinations of {Tensor, Scalar} x {Tensor, Scalar}.

You might ask: why not give prepend and append no default args and then write out the whole power set of {Tensor, Scalar, omitted} x {Tensor, Scalar, omitted}? Aside from requiring 18 overloads, this is actually illegal because arguments with defaults must come after arguments without defaults. This would mean having to write `diff(prepend, append, n, dim)`, which is not desired. Finally, writing out the entire power set of all arguments n, dim, prepend, append is out of the question because that would involve 2 * 2 * 3 * 3 = 36 combinations. And if we include the out variant, that would be 72 overloads!

With this in mind, the current way this is implemented is actually to still do `diff(int n=1, int dim=-1, Scalar? prepend=None, Tensor? append=None)`. But also make use of `cpp_no_default_args`. The idea is to only have one of the 4 {Tensor, Scalar} x {Tensor, Scalar} provide default arguments for the c++ api, and add `cpp_no_default_args` for the remaining 3 overloads. With this, Python api works as expected, but some calls such as `diff(prepend=1)` won't work on c++ api.

We can optionally add 18 more overloads that cover the {dim, n, no-args} x {scalar-tensor, tensor-scalar, scalar-scalar} x {out, non-out} cases for c++ api. _[edit: counting is hard - just realized this number is still wrong. We should try to count the cases we do cover instead and subtract that from the total: (2 * 2 * 3 * 3) - (3 + 2^4) = 17. 3 comes from the 3 of 4 combinations of {tensor, scalar}^2 that we declare to be `cpp_no_default_args`, and the one remaining case that has default arguments has covers 2^4 cases. So actual count is 34 additional overloads to support all possible calls]_

_[edit: thanks to https://github.com/pytorch/pytorch/issues/50767 hacky_wrapper is no longer necessary; it is removed in the latest commit]_
 hacky_wrapper was also necessary here because `Tensor?` will cause dispatch to look for the `const optional<Tensor>&` schema but also generate a `const Tensor&` declaration in Functions.h. hacky_wrapper allows us to define our function as `const Tensor&` but wraps it in optional for us, so this avoids both the errors while linking and loading.

_[edit: rewrote the above to improve clarity and correct the fact that we actually need 18 more overloads (26 total), not 18 in total to complete the c++ api]_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50569

Reviewed By: H-Huang

Differential Revision: D26176105

Pulled By: soulitzer

fbshipit-source-id: cd8e77cc2de1117c876cd71c29b312887daca33f
2021-02-02 20:25:16 -08:00
Heitor Schueroff
c6f37e50f2 [doc] Add deprecation message to torch.slogdet in favor of torch.linalg.slogdet (#51354)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51354

Re-created from https://github.com/pytorch/pytorch/pull/51301 because of issues with ghstack.

This PR is part of a larger effort to ensure torch.linalg documentation is consistent (see #50287).

Updated torch.slogdet documentation to add a deprecation message in favor of torch.linalg.slogdet.
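
The recommended replacement, sketched for reference:

```python
import torch

A = torch.randn(3, 3)
sign, logabsdet = torch.linalg.slogdet(A)  # preferred over the deprecated torch.slogdet
```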

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D26148679

Pulled By: heitorschueroff

fbshipit-source-id: 4d9f3386d9ba6dc735a4d1e86cfcd88eaba3cbcc
2021-02-02 07:58:01 -08:00
Heitor Schueroff
8fa328f88e [doc] Deprecate torch.cholesky in favor of torch.linalg.cholesky (#51460)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51460

This PR is part of a larger effort to ensure torch.linalg documentation is consistent (see #50287).

* #51459 [doc] Fix linalg.cholesky doc consistency issues

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26176130

Pulled By: heitorschueroff

fbshipit-source-id: cc89575db69cbfd5f87d970a2e71deb6522a35b1
2021-02-01 15:47:08 -08:00
Brian Skinn
fe645fdfc7 Update _torch_docs.py (#51212)
Summary:
Fix `torch.linalg.qr` reference where it's desired to render fully-qualified name into docs.

Suggested fix for https://github.com/pytorch/pytorch/pull/47764/files#r565368195

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51212

Reviewed By: ezyang

Differential Revision: D26142496

Pulled By: ailzhang

fbshipit-source-id: 052b2085099baa372e3b515b403f25d23cf50785
2021-01-29 13:03:09 -08:00
Ivan Yashchuk
ddf26816d3 Make torch.svd return V, not V.conj() for complex inputs (#51012)
Summary:
**BC-breaking note:**

torch.svd() added support for complex inputs in PyTorch 1.7, but was not documented as doing so. The complex "V" tensor returned was actually the complex conjugate of what's expected. This PR fixes the discrepancy.

This will silently break all users of torch.svd() with complex inputs.
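
A sketch of the post-change contract (reconstruction now conjugates V explicitly; tolerance and shapes are illustrative):

```python
import torch

a = torch.randn(3, 3, dtype=torch.complex64)
u, s, v = torch.svd(a)
# v is now V itself (not V.conj()), so A == U @ diag(S) @ V^H
assert torch.allclose(a, u @ torch.diag(s.to(a.dtype)) @ v.t().conj(), atol=1e-5)
```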

**Original PR Summary:**

This PR resolves https://github.com/pytorch/pytorch/issues/45821.

The problem was that when introducing the support of complex inputs for `torch.svd`, it was overlooked that LAPACK/MAGMA returns the conjugate transpose of the V matrix, not just the transpose of V. So `torch.svd` was silently returning U, S, V.conj() instead of U, S, V.

Behavior of `torch.linalg.pinv`, `torch.pinverse` and `torch.linalg.svd` (they depend on `torch.svd`) is not changed in this PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51012

Reviewed By: bdhirsh

Differential Revision: D26047593

Pulled By: albanD

fbshipit-source-id: d1e08dbc3aab9ce1150a95806ef3b5da98b5d3ca
2021-01-25 14:06:41 -08:00
Xiao Wang
186c3da037 Add cusolver gesvdj and gesvdjBatched to the backend of torch.svd (#48436)
Summary:
This PR adds cusolver `gesvdj` and `gesvdjBatched` to the backend of `torch.svd`.

I've tested the performance using CUDA 11.1 on 2070, V100, and A100. The cuSOLVER gesvdj and gesvdjBatched implementations are faster than MAGMA in all square-matrix cases, so the cuSOLVER backend replaces the MAGMA backend when available.

When both matrix dimensions are no greater than 32, `gesvdjBatched` is used. Otherwise, `gesvdj` is used.

Detailed benchmark is available at https://github.com/xwang233/code-snippet/tree/master/svd.

Some relevant code and discussions
- https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/linalg/svd_op_gpu.cu.cc
- https://github.com/google/jax/blob/master/jaxlib/cusolver.cc
- https://github.com/cupy/cupy/issues/3174
- https://github.com/tensorflow/tensorflow/issues/13603
- https://www.nvidia.com/en-us/on-demand/session/gtcsiliconvalley2019-s9226/

See also https://github.com/pytorch/pytorch/issues/42666 https://github.com/pytorch/pytorch/issues/47953

Closes https://github.com/pytorch/pytorch/pull/50516

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48436

Reviewed By: ejguan

Differential Revision: D25977046

Pulled By: heitorschueroff

fbshipit-source-id: c27e705cd29b6fd7c8ac674c1f9f490fa26ee1bf
2021-01-24 15:47:05 -08:00
Sam Estep
c147aa306c Use doctest directly to get docstring examples (#50596)
Summary:
This PR addresses [a two-year-old TODO in `test/test_type_hints.py`](12942ea52b/test/test_type_hints.py (L21-L22)) by replacing most of the body of our custom `get_examples_from_docstring` function with [a function from Python's built-in `doctest.DocTestParser` class](https://docs.python.org/3/library/doctest.html#doctest.DocTestParser.get_examples). This mostly made the parser more strict, catching a few errors in existing doctests:

- missing `...` in multiline statements
- missing space after `>>>`
- unmatched closing parenthesis
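
The core of the replacement, sketched with the standard-library API the PR switches to:

```python
import doctest

parser = doctest.DocTestParser()
examples = parser.get_examples(">>> x = torch.ones(2)\n>>> x + x\ntensor([2., 2.])\n")
print([(e.source, e.want) for e in examples])
# [('x = torch.ones(2)\n', ''), ('x + x\n', 'tensor([2., 2.])\n')]
```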

Also, as shown by [the resulting diff of the untracked `test/generated_type_hints_smoketest.py` file](https://pastebin.com/vC5Wz6M0) (also linked from the test plan below), this introduces a few incidental changes as well:

- standalone comments are no longer preserved
- indentation is now visually correct
- [`example_torch_promote_types`](4da9ceb743/torch/_torch_docs.py (L6753-L6772)) is now present
- an example called `example_torch_tensor___array_priority__` is added, although I can't tell where it comes from
- the last nine lines of code from [`example_torch_tensor_align_as`](5d45140d68/torch/_tensor_docs.py (L386-L431)) are now present
- the previously-misformatted third line from [`example_torch_tensor_stride`](5d45140d68/torch/_tensor_docs.py (L3508-L3532)) is now present

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50596

Test Plan:
Checkout the base commit, typecheck the doctests, and save the generated file:
```
$ python test/test_type_hints.py TestTypeHints.test_doc_examples
$ cp test/generated_type_hints_smoketest.py /tmp
```
Then checkout this PR, do the same thing, and compare:
```
$ python test/test_type_hints.py TestTypeHints.test_doc_examples
$ git diff --no-index {/tmp,test}/generated_type_hints_smoketest.py
```
The test should succeed, and the diff should match [this paste](https://pastebin.com/vC5Wz6M0).

Reviewed By: walterddr

Differential Revision: D25926245

Pulled By: samestep

fbshipit-source-id: 23bc379ff438420e556263c19582dba06d8e42ec
2021-01-20 15:55:36 -08:00
kiyosora
4803eaf502 Implement NumPy-like function torch.fmax() & torch.fmin() (#49312)
Summary:
- Implements the NumPy-like functions `torch.fmax()` and `torch.fmin()` recommended in https://github.com/pytorch/pytorch/issues/48440
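
A small sketch of the NaN-propagation behavior that distinguishes these from `torch.maximum`/`torch.minimum`:

```python
import torch

a = torch.tensor([1., float('nan'), 3.])
b = torch.tensor([2., 2., float('nan')])
print(torch.fmax(a, b))  # NaN-ignoring max: tensor([2., 2., 3.])
print(torch.fmin(a, b))  # NaN-ignoring min: tensor([1., 2., 3.])
```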

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49312

Reviewed By: izdeby

Differential Revision: D25887246

Pulled By: heitorschueroff

fbshipit-source-id: d762eeff8b328bfcbe7d48b7ee9d2da72c249691
2021-01-20 06:45:25 -08:00
Xinyu Li
7526e38cd3 Revert "Stable sort for CPU (#50052)" (#50752)
Summary:
This reverts commit c99f356051.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50752

Reviewed By: zou3519

Differential Revision: D25958146

Pulled By: glaringlee

fbshipit-source-id: f4068d038f9bd337bac8b673eaeb46a4646f6c77
2021-01-19 18:21:25 -08:00
nikitaved
c99f356051 Stable sort for CPU (#50052)
Summary:
Fixes [https://github.com/pytorch/pytorch/issues/38681](https://github.com/pytorch/pytorch/issues/38681) for the CPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50052

Reviewed By: mrshenli

Differential Revision: D25900823

Pulled By: glaringlee

fbshipit-source-id: 1a3fa336037d0aa2344d79f46dcacfd478a353d1
2021-01-15 19:34:27 -08:00
Rong Rong (AI Infra)
3df5f9c3b2 Revert D25843351: [pytorch][PR] Clarify, make consistent, and test the behavior of logspace when dtype is integral
Test Plan: revert-hammer

Differential Revision:
D25843351 (0ae0fac1bb)

Original commit changeset: 45237574d04c

fbshipit-source-id: fb5343d509b277158b14d1b61e10433793889842
2021-01-15 18:47:37 -08:00
Hong Xu
0ae0fac1bb Clarify, make consistent, and test the behavior of logspace when dtype is integral (#47647)
Summary:
The torch.logspace documentation doesn't explain how integers are handled.
Add some clarification and some tests for when dtype is integral.

The CUDA implementation is also updated to be consistent with the CPU implementation.
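
A sketch of the behavior being clarified (note this change was later reverted, per the entry above):

```python
import torch

print(torch.logspace(0, 3, steps=4, dtype=torch.int64))
# powers of 10, computed in floating point and then cast: tensor([1, 10, 100, 1000])
```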

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47647

Reviewed By: gchanan

Differential Revision: D25843351

Pulled By: walterddr

fbshipit-source-id: 45237574d04c56992c18766667ff1ed71be77ac3
2021-01-15 12:31:20 -08:00
kshitij12345
057be23168 [doc] Add note about torch.flip returning new tensor and not view. (#50041)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/38271
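
A minimal sketch of the behavior the note documents:

```python
import torch

x = torch.arange(4)
y = torch.flip(x, dims=[0])  # returns a new tensor, not a view
y[0] = -1
print(x)  # unchanged: tensor([0, 1, 2, 3])
```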

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50041

Reviewed By: izdeby

Differential Revision: D25883870

Pulled By: mruberry

fbshipit-source-id: 33cc28a2176e98f2f29077958782291609c7999b
2021-01-13 01:01:47 -08:00
Erjia Guan
ca5d9617ba Fix remainder type promotion (#48668)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48668

Combine tests for `fmod` and `remainder`.

## BC-breaking Note:
In order to give the `remainder` operator type promotion, we have to introduce a BC-breaking change.
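
The snippets below leave `x` undefined; a plausible setup consistent with both shown outputs (an assumption, not from the commit message) would be:

```python
import torch
x = torch.tensor([1, 2, 3, 4, 5], dtype=torch.int32)
```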
### 1.7.1:
In the case where the second argument is a python number, the result is casted to the dtype of the first argument.
```python
>>> torch.remainder(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```
### This PR:
In the case where the second argument is a python number, the dtype of result is determined by type promotion of both inputs.
```python
>>> torch.remainder(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25869136

Pulled By: ejguan

fbshipit-source-id: 8e5e87eec605a15060f715952de140f25644008c
2021-01-12 22:09:30 -08:00
Erjia Guan
a0f7b18391 Fix fmod type promotion (#48278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48278

Remove various lines from tests due to no type promotion introduced from #47323

## BC-breaking Note:
In order to give the `fmod` operator type promotion, we have to introduce a BC-breaking change.
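
As with `remainder` above, `x` is left undefined in the snippets below; a plausible setup consistent with the shown outputs (an assumption, not from the commit message) would be:

```python
import torch
x = torch.tensor([1, 2, 3, 4, 5], dtype=torch.int32)
```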
### 1.7.1:
In the case where the second argument is a python number, the result is casted to the dtype of the first argument.
```python
>>> torch.fmod(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```
### Prior PR:
Check the BC-breaking note of #47323

### This PR:
In the case where the second argument is a python number, the dtype of result is determined by type promotion of both inputs.
```python
>>> torch.fmod(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25869137

Pulled By: ejguan

fbshipit-source-id: bce763926731e095b75daf2e934bff7c03ff0832
2021-01-12 22:04:19 -08:00