Commit Graph

1343 Commits

Akifumi Imanishi
aa1fd6b45a Add LazyBatchNormXd (#51548)
Summary:
This PR implements UninitializedBuffer and LazyBatchNormXd based on https://github.com/pytorch/pytorch/issues/44538. (cc. emcastillo and albanD)
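
For context, a minimal sketch of the lazy-initialization pattern this adds (shapes and values here are illustrative, not from the PR):

```python
import torch
import torch.nn as nn

# num_features is left unspecified and inferred from the first forward pass
bn = nn.LazyBatchNorm2d()
x = torch.randn(4, 3, 8, 8)   # batch of 4, 3 channels
y = bn(x)                     # after this call, bn behaves like BatchNorm2d(3)
```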

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51548

Reviewed By: zhangguanheng66

Differential Revision: D26276903

Pulled By: albanD

fbshipit-source-id: 0ac706974178363f8af075e59b41d5989418922f
2021-02-05 10:27:04 -08:00
Natalia Gimelshein
d3023d86ba Revert D26249330: [Gradient Compression] Add a documentation page for DDP communication hooks
Test Plan: revert-hammer

Differential Revision:
D26249330 (e62aabac43)

Original commit changeset: ab973390ddb7

fbshipit-source-id: d508daed76219e7ca588cf7fb38aeaaffc61acfd
2021-02-04 22:38:06 -08:00
Yi Wang
e62aabac43 [Gradient Compression] Add a documentation page for DDP communication hooks (#51715)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51715

Add a documentation page for DDP communication hooks.

Screenshot:

(Screenshot: internal Phabricator attachment F369781049, not rendered here)
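
For context, a minimal sketch of registering one of the built-in hooks the new page documents (process-group initialization is assumed to have happened already):

```python
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
from torch.nn.parallel import DistributedDataParallel as DDP

# assumes dist.init_process_group(...) has already been called in each worker
model = DDP(nn.Linear(10, 10), device_ids=[dist.get_rank()])
# compress gradients to fp16 before all-reduce, decompress afterwards
model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
```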

Test Plan: View locally

Reviewed By: pritamdamania87

Differential Revision: D26249330

fbshipit-source-id: ab973390ddb785c5191f587a1b2b6de7d229e50e
2021-02-04 18:53:53 -08:00
guyang3532
ecfb73aaca Update docs for torch.profiler.tensorboard_trace_handler (#51636)
Summary:
![image](https://user-images.githubusercontent.com/62738430/106856207-17f8c000-66f9-11eb-80c9-844f79de423e.png)
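
For context, a minimal sketch of how the documented handler is typically wired into the profiler (the log directory and workload are arbitrary):

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

with profile(
    activities=[ProfilerActivity.CPU],
    schedule=schedule(wait=1, warmup=1, active=3),
    # writes a trace consumable by the TensorBoard PyTorch profiler plugin
    on_trace_ready=tensorboard_trace_handler("./log/example"),
) as prof:
    for _ in range(5):
        torch.randn(128, 128) @ torch.randn(128, 128)  # placeholder workload
        prof.step()  # mark a step boundary for the schedule
```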

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51636

Reviewed By: orionr

Differential Revision: D26246309

Pulled By: ilia-cher

fbshipit-source-id: 083868e9231727638238c5f5ca31e3566d5e2e7e
2021-02-04 13:32:59 -08:00
James Reed
949ab213dd Revert "Revert D26246231: [FX] Edits after comprehensive pass over docs" (#51728)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51728

This reverts commit 6c80fd005f.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26254130

Pulled By: jamesr66a

fbshipit-source-id: f301688f85c512076fee9b83a986677ef893d2c5
2021-02-04 13:01:09 -08:00
Joel Schlosser
a0137808a7 Note on Modules for 1.8 docs (#51536)
Summary:
A new note on Modules for 1.8 documentation.

Rendered form can be seen here: https://alband.github.io/doc_view/notes/modules.html
(thanks Alban!)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51536

Reviewed By: albanD

Differential Revision: D26254282

Pulled By: jbschlosser

fbshipit-source-id: 09cbd46aa268a29b6f54fd48ffe1d6b98db0ff31
2021-02-04 11:28:11 -08:00
Alban Desmaison
6c80fd005f Revert D26246231: [FX] Edits after comprehensive pass over docs
Test Plan: revert-hammer

Differential Revision:
D26246231 (c22bc4821d)

Original commit changeset: 8d6278a9fe1d

fbshipit-source-id: fdc83289f8fe7986bc02181eec55e4e72be2d812
2021-02-04 09:26:21 -08:00
James Reed
c22bc4821d [FX] Edits after comprehensive pass over docs (#51705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51705

Pull Request resolved: #51679

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26246231

Pulled By: jamesr66a

fbshipit-source-id: 8d6278a9fe1da5e6c34eff4fedc4c7e18533fe0f
2021-02-04 08:11:07 -08:00
Taylor Robie
c8af338407 Expand benchmark utils docs (#51664)
Summary:
Add some much needed documentation on the Timer callgrind output format, and expand what is shown on the website.
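
For reference, a minimal sketch of the Timer API whose output these docs describe (`collect_callgrind` additionally requires valgrind to be installed):

```python
from torch.utils.benchmark import Timer

t = Timer(stmt="x * x", setup="import torch; x = torch.ones(1024)")
print(t.timeit(100))             # wall-time measurement
# stats = t.collect_callgrind()  # instruction counts; needs valgrind
```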

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51664

Reviewed By: tugsbayasgalan

Differential Revision: D26246675

Pulled By: robieta

fbshipit-source-id: 7a07ff35cae07bd2da111029242a5dc8de21403c
2021-02-04 00:22:41 -08:00
Joel Schlosser
e60f18c2ad Generate header with version #defines for LibTorch (#50073)
Summary:
Uses CMake's `configure_file()` command to generate a new `torch/csrc/api/include/torch/version.h` header with `TORCH_VERSION_{MAJOR,MINOR,PATCH}` #defines from an input file `torch/csrc/api/include/torch/version.h.in`.

For Bazel builds, this is accomplished with `header_template_rule()`.

For Buck builds, this is accomplished with `fb_native.genrule()`.

Fixes https://github.com/pytorch/pytorch/issues/44365

<img width="1229" alt="Screen Shot 2021-01-05 at 3 19 24 PM" src="https://user-images.githubusercontent.com/75754324/103809279-3fd80380-5027-11eb-9039-fd23922cebd5.png">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50073

Reviewed By: glaringlee

Differential Revision: D25855877

Pulled By: jbschlosser

fbshipit-source-id: 6bb792718c97e2c2dbaa74b7b7b831a4f6938e49
2021-02-03 22:18:53 -08:00
Horace He
f1a63b7c10 [FX] Added how to write transformations section (#51278)
Summary:
![image](https://user-images.githubusercontent.com/6355099/106121588-b8614a00-6125-11eb-923f-fcdf575cd6cd.png)

I still need to add links to vmap/grad/decomposition, but those haven't been added to the examples folder yet.
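
For context, a minimal sketch of the trace-then-rewrite pattern such a section covers (the relu-to-gelu swap is purely illustrative, not an example taken from the docs):

```python
import torch
import torch.fx as fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

traced = fx.symbolic_trace(M())
for node in traced.graph.nodes:
    if node.op == "call_function" and node.target is torch.relu:
        node.target = torch.nn.functional.gelu  # swap the op in place
traced.recompile()  # regenerate the Python code from the mutated graph
```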

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51278

Reviewed By: zou3519

Differential Revision: D26223103

Pulled By: Chillee

fbshipit-source-id: 3ad9bf76cd3438743edecdc17c44f8d1e00e5ea1
2021-02-03 21:32:43 -08:00
Mike Ruberry
16cfe970e0 Updates linalg documentation per feature review process (#51620)
Summary:
Notes the module is in beta and that the policy for returning optionally computed tensors may change in the future.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51620

Reviewed By: heitorschueroff

Differential Revision: D26220254

Pulled By: mruberry

fbshipit-source-id: edf78fe448d948b43240e138d6d21b780324e41e
2021-02-03 16:11:57 -08:00
anjali411
34d4d79966 Autograd doc note fix (#51661)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51661

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D26230912

Pulled By: anjali411

fbshipit-source-id: 94323d7bce631a4c5781020e9650495461119ede
2021-02-03 15:08:35 -08:00
Ansley Ussery
ab4623da16 Document FX debugging (#51530)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51530

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26192641

Pulled By: ansley

fbshipit-source-id: c69ab1bb2451d8ee5a729445f52bccc66e6f431b
2021-02-02 23:17:51 -08:00
Gemfield
b48ee75507 Fix quantization doc issue (#50187)
Summary:
There was a description error in quantization.rst; this fixes it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50187

Reviewed By: mrshenli

Differential Revision: D25895294

Pulled By: soumith

fbshipit-source-id: c0b2e7ba3fadfc0977ab2d4d4e9ed4f93694cedd
2021-02-02 20:33:21 -08:00
Jeffrey Wan
b18eeaa80a Implement np.diff for single order differences (#50569)
Summary:
Implements `np.diff` for single order differences only:
 - method and function variants for `diff` and function variant for `diff_out`
 - supports out variant, but not in-place since shape changes
 - adds OpInfo entry, and test in `test_torch`
 - automatic autograd because we are using the `Math` dispatch

_Update: we only support Tensors for prepend and append in this PR. See discussion below and comments for more details._

Currently there is a quirk in the c++ API based on how this is implemented: it is not possible to specify scalar prepend and appends without also specifying all 4 arguments.

That is because the goal is to match NumPy's diff signature of `diff(int n=1, int dim=-1, Union[Scalar, Tensor] prepend=None, Union[Scalar, Tensor] append=None)` where all arguments are optional, positional and in the correct order.
There are a couple blockers. One is c++ ambiguity. This prevents us from simply doing `diff(int n=1, int dim=-1, Scalar? prepend=None, Tensor? append=None)` etc for all combinations of {Tensor, Scalar} x {Tensor, Scalar}.

One might ask: why not leave prepend and append without default args and write out the whole power set of {Tensor, Scalar, omitted} x {Tensor, Scalar, omitted}? Aside from requiring 18 overloads, this is actually illegal because arguments with defaults must come after arguments without defaults. It would mean having to write `diff(prepend, append, n, dim)`, which is not desired. Finally, writing out the entire power set of all arguments n, dim, prepend, append is out of the question because that would involve 2 * 2 * 3 * 3 = 36 combinations. And if we include the out variant, that would be 72 overloads!

With this in mind, the current way this is implemented is actually to still do `diff(int n=1, int dim=-1, Scalar? prepend=None, Tensor? append=None)`. But also make use of `cpp_no_default_args`. The idea is to only have one of the 4 {Tensor, Scalar} x {Tensor, Scalar} provide default arguments for the c++ api, and add `cpp_no_default_args` for the remaining 3 overloads. With this, Python api works as expected, but some calls such as `diff(prepend=1)` won't work on c++ api.

We can optionally add 18 more overloads that cover the {dim, n, no-args} x {scalar-tensor, tensor-scalar, scalar-scalar} x {out, non-out} cases for c++ api. _[edit: counting is hard - just realized this number is still wrong. We should try to count the cases we do cover instead and subtract that from the total: (2 * 2 * 3 * 3) - (3 + 2^4) = 17. 3 comes from the 3 of 4 combinations of {tensor, scalar}^2 that we declare to be `cpp_no_default_args`, and the one remaining case that has default arguments covers 2^4 cases. So the actual count, including out variants, is 34 additional overloads to support all possible calls]_

_[edit: thanks to https://github.com/pytorch/pytorch/issues/50767 hacky_wrapper is no longer necessary; it is removed in the latest commit]_
 hacky_wrapper was also necessary here because `Tensor?` will cause dispatch to look for the `const optional<Tensor>&` schema but also generate a `const Tensor&` declaration in Functions.h. hacky_wrapper allows us to define our function as `const Tensor&` but wraps it in optional for us, so this avoids both the errors while linking and loading.

_[edit: rewrote the above to improve clarity and correct the fact that we actually need 18 more overloads (26 total), not 18 in total to complete the c++ api]_
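
For reference, a minimal sketch of the resulting Python API (tensor-only prepend/append, per the note above):

```python
import torch

t = torch.tensor([1, 3, 6, 10])
torch.diff(t)                       # tensor([2, 3, 4])
t.diff(prepend=torch.tensor([0]))   # method variant: tensor([1, 2, 3, 4])
```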

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50569

Reviewed By: H-Huang

Differential Revision: D26176105

Pulled By: soulitzer

fbshipit-source-id: cd8e77cc2de1117c876cd71c29b312887daca33f
2021-02-02 20:25:16 -08:00
anjali411
642afcb168 Add sgn to torch.rst so that it appears in the built docs (#51479)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51479

Fixes https://github.com/pytorch/pytorch/issues/50146

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26179734

Pulled By: anjali411

fbshipit-source-id: 1cda9a3dc9ce600e585900eea70fbecac0635d5c
2021-02-01 12:43:06 -08:00
James Reed
609f76f27a [WIP][FX] Add Interpreter and Transformer (#50420)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50420

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25880330

Pulled By: jamesr66a

fbshipit-source-id: 27d34888e36e39924821fed891d79f969237a104
2021-02-01 11:40:12 -08:00
Mike Ruberry
40c0fffb4b Fixes docs (#51439)
Summary:
pytorch_python_doc_build is failing with:

```
Jan 31 04:30:45 /var/lib/jenkins/workspace/docs/source/notes/broadcasting.rst:6: WARNING: 'any' reference target not found: numpy.doc.broadcasting
```

this removes the incorrect reference and adds an updated link.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51439

Reviewed By: ngimel

Differential Revision: D26170232

Pulled By: mruberry

fbshipit-source-id: 829999db52e1e860d36d626d0d9f26e31283d14b
2021-01-31 22:00:26 -08:00
Natalia Gimelshein
7ab89f58be expose memory_fraction and gpu_process docs (#51372)
Summary:
Per title
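
For reference, a minimal sketch of the two newly documented APIs (device index 0 is assumed):

```python
import torch

if torch.cuda.is_available():
    # cap this process at half of device 0's total memory
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)
    print(torch.cuda.list_gpu_processes(device=0))
```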

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51372

Reviewed By: mruberry

Differential Revision: D26157787

Pulled By: ngimel

fbshipit-source-id: 97eac5f12881a2bf62c251f6f7eaf65fdbe34056
2021-01-29 18:22:34 -08:00
anjali411
fd9a85d21b Doc update for complex numbers (#51129)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51129

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26094947

Pulled By: anjali411

fbshipit-source-id: 4e1cdf8915a8c6a86ac3462685cdce881e1bcffa
2021-01-27 07:32:26 -08:00
mattip
b60494000b DOC: update left navbar links for vision and text (#51103)
Summary:
A tiny PR to update the links in the left-hand navbar under Libraries. The canonical links for vision and text are `https://pytorch.org/vision/stable` and `https://pytorch.org/text/stable` respectively. The links without `/stable` work via a redirect, but pointing at the canonical URLs is cleaner.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51103

Reviewed By: izdeby

Differential Revision: D26079760

Pulled By: heitorschueroff

fbshipit-source-id: df1fa64d7895831f4e6242445bae02c1faa5e4dc
2021-01-27 07:19:00 -08:00
Emilio Castillo
233e4ebdb6 Implement autograd functions for c10d communication operations (#40762)
Summary:
Closes https://github.com/pytorch/pytorch/issues/40702, Fixes https://github.com/pytorch/pytorch/issues/40690

Currently WIP, but I would appreciate some feedback. Functions should be double-differentiable.

Contrary to b35cdc5200/torch/nn/parallel/_functions.py, this PR generates a list of tensors instead of aggregating the received data in a single tensor. Is this behavior correct?

Thanks!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40762

Reviewed By: glaringlee

Differential Revision: D24758889

Pulled By: mrshenli

fbshipit-source-id: 79285fb4b791cae3d248f34e2aadb11c9ab10cce
2021-01-26 07:52:51 -08:00
Pritam Damania
68c218547c Add documentation page for pipeline parallelism. (#50791)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50791

Add a dedicated pipeline parallelism doc page explaining the APIs and
the overall value of the module.
ghstack-source-id: 120257168
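
As a rough sketch of the API the page documents (assumes `torch.distributed.rpc.init_rpc(...)` has been called and two GPUs are visible):

```python
import torch
import torch.nn as nn
from torch.distributed.pipeline.sync import Pipe

seq = nn.Sequential(nn.Linear(16, 8).cuda(0), nn.Linear(8, 4).cuda(1))
model = Pipe(seq, chunks=4)  # split each mini-batch into 4 micro-batches
out = model(torch.randn(32, 16).cuda(0)).local_value()  # forward returns an RRef
```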

Test Plan:
1) View locally
2) waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25967981

fbshipit-source-id: b607b788703173a5fa4e3526471140506171632b
2021-01-25 13:47:13 -08:00
Hameer Abbasi
f7b339d11c Clarify wording around overrides subclasses. (#51031)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47117

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51031

Reviewed By: bdhirsh

Differential Revision: D26047498

Pulled By: albanD

fbshipit-source-id: dd0a7d9f97c0f6469b3050d2e3b4473f1bee3820
2021-01-25 08:19:13 -08:00
James Reed
789f6f1250 [FX] Minor docs changes (#50966)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50966

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26029101

Pulled By: jamesr66a

fbshipit-source-id: 4374771be74d0a4d05fdd29107be5357130c2a76
2021-01-22 16:23:19 -08:00
Kurt Mohler
8ab1a1495d Rename set_deterministic to use_deterministic_algorithms (#49904)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49100
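
For reference, a one-line sketch of the renamed API:

```python
import torch

torch.use_deterministic_algorithms(True)  # formerly torch.set_deterministic(True)
```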

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49904

Reviewed By: ezyang, mrshenli

Differential Revision: D25956761

Pulled By: mruberry

fbshipit-source-id: 86a59289d50825a0ebbd7c358b483c8d8039ffa6
2021-01-22 11:27:07 -08:00
M.L. Croci
8eb90d4865 Add Gaussian NLL Loss (#50886)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48520.

cc albanD (This is a clean retry PR https://github.com/pytorch/pytorch/issues/49807)
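
For reference, a minimal usage sketch of the new loss (shapes and values are illustrative):

```python
import torch
import torch.nn as nn

loss_fn = nn.GaussianNLLLoss()
mean = torch.randn(5, 2, requires_grad=True)  # predicted means
var = torch.ones(5, 2)                        # predicted variances (must be positive)
target = torch.randn(5, 2)
loss_fn(mean, target, var).backward()
```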

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50886

Reviewed By: ejguan

Differential Revision: D26007435

Pulled By: albanD

fbshipit-source-id: 88fe91b40dea6f72e093e6301f0f04fcc842d2f0
2021-01-22 06:56:49 -08:00
Jerry Zhang
b5242d66b6 [quant][doc] Adding a table comparing eager and fx graph mode (#50413)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50413

Test Plan:
.

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D25886960

fbshipit-source-id: b99178d3900eedec920dbff28ab956f97be2661a
2021-01-21 13:43:42 -08:00
James Reed
d0e942f9a7 [FX][docs] Add limitations of symbolic tracing (#50638)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50638

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25933780

Pulled By: jamesr66a

fbshipit-source-id: 0aa97ea05203fbcb707b0e947a465e206104b7df
2021-01-20 21:42:16 -08:00
kiyosora
4803eaf502 Implement NumPy-like function torch.fmax() & torch.fmin() (#49312)
Summary:
- Implementing the NumPy-like functions `torch.fmax()` and `torch.fmin()` recommended in https://github.com/pytorch/pytorch/issues/48440
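
For reference, a minimal sketch of the NaN-aware semantics:

```python
import torch

a = torch.tensor([1., float('nan'), 3.])
b = torch.tensor([2., 2., float('nan')])
torch.fmax(a, b)  # tensor([2., 2., 3.]) -- NaN is ignored when the other operand is a number
torch.fmin(a, b)  # tensor([1., 2., 3.])
```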

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49312

Reviewed By: izdeby

Differential Revision: D25887246

Pulled By: heitorschueroff

fbshipit-source-id: d762eeff8b328bfcbe7d48b7ee9d2da72c249691
2021-01-20 06:45:25 -08:00
Meghan Lele
4aea007351 [JIT] Fix archive file extension in examples and docs (#50649)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50649

**Summary**
Tutorials, documentation and comments are not consistent with the file
extension they use for JIT archives. This commit modifies certain
instances of `*.pth` in `torch.jit.save` calls with `*.pt`.

**Test Plan**
Continuous integration.

**Fixes**
This commit fixes #49660.
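
For reference, a minimal sketch using the preferred extension:

```python
import torch

scripted = torch.jit.script(torch.nn.Linear(4, 2))
torch.jit.save(scripted, "linear.pt")  # .pt, not .pth, for JIT archives
loaded = torch.jit.load("linear.pt")
```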

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D25961628

Pulled By: SplitInfinity

fbshipit-source-id: a40c97954adc7c255569fcec1f389aa78f026d47
2021-01-20 02:04:46 -08:00
Himangshu
4ff1823fac Add Sparse support for torch.sqrt (#50088)
Summary:
Fixes #{issue number}
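
For reference, a minimal sketch of the new sparse path (values are illustrative):

```python
import torch

i = torch.tensor([[0, 1]])
v = torch.tensor([4., 9.])
s = torch.sparse_coo_tensor(i, v, (3,))
torch.sqrt(s).to_dense()  # tensor([2., 3., 0.]) -- sqrt(0) = 0, so sparsity is preserved
```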

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50088

Reviewed By: mrshenli

Differential Revision: D25894003

Pulled By: ezyang

fbshipit-source-id: 93688c33b2f9a355c331d6edb3e402935223f75b
2021-01-19 20:19:07 -08:00
Ivan Yashchuk
f9a5ba7398 Added linalg.slogdet (#49194)
Summary:
This PR adds `torch.linalg.slogdet`.

Changes compared to the original torch.slogdet:

- Complex input now works as in NumPy
- Added out= variant (allocates temporary and makes a copy for now)
- Updated `slogdet_backward` to work with complex input

Ref. https://github.com/pytorch/pytorch/issues/42666
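
For reference, a minimal sketch of the new function:

```python
import torch

a = torch.randn(3, 3, dtype=torch.complex64)  # complex input now supported
sign, logabsdet = torch.linalg.slogdet(a)
# det(a) == sign * exp(logabsdet)
```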

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49194

Reviewed By: VitalyFedyunin

Differential Revision: D25916959

Pulled By: mruberry

fbshipit-source-id: cf9be8c5c044870200dcce38be48cd0d10e61a48
2021-01-19 07:28:12 -08:00
Guilherme Leobas
0d981eea6c add type annotations to torch.nn.modules.conv (#49564)
Summary:
closes gh-49563

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49564

Reviewed By: albanD

Differential Revision: D25917441

Pulled By: walterddr

fbshipit-source-id: 491dc06cfc1bbf694dfd9ccefca4f55488a931b2
2021-01-15 11:16:11 -08:00
James Reed
d9f71b5868 [WIP][FX] new sections in docs (#50562)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50562

Adding new top-level sections to the docs to be filled out

![image](https://user-images.githubusercontent.com/4685384/104666703-5b778580-5689-11eb-80ab-7df07f816b5b.png)

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25919592

Pulled By: jamesr66a

fbshipit-source-id: 45f564eb8fddc7a42abb5501e160cca0dd0745c8
2021-01-14 21:34:36 -08:00
James Reed
6882f9cc1c [FX] Add wrap() docstring to docs and add decorator example (#50555)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50555

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25917564

Pulled By: jamesr66a

fbshipit-source-id: 20c7c8b1192fa80c6a0bb9e18910791bd7167232
2021-01-14 21:31:51 -08:00
Ivan Yashchuk
9384d31af5 Added linalg.pinv (#48399)
Summary:
This PR adds `torch.linalg.pinv`.

Changes compared to the original `torch.pinverse`:
 * New kwarg "hermitian": with `hermitian=True` eigendecomposition is used instead of singular value decomposition.
 * `rcond` argument can now be a `Tensor` of appropriate shape to apply matrix-wise clipping of singular values.
 * Added `out=` variant (allocates temporary and makes a copy for now)

Ref. https://github.com/pytorch/pytorch/issues/42666
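
For reference, a minimal sketch of the new function and its `hermitian` kwarg:

```python
import torch

a = torch.randn(4, 3)
a_pinv = torch.linalg.pinv(a)                  # SVD-based pseudo-inverse
h = a @ a.t()                                  # symmetric (Hermitian) matrix
h_pinv = torch.linalg.pinv(h, hermitian=True)  # uses eigendecomposition instead
```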

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48399

Reviewed By: zhangguanheng66

Differential Revision: D25869572

Pulled By: mruberry

fbshipit-source-id: 0f330a91d24ba4e4375f648a448b27594e00dead
2021-01-12 06:52:06 -08:00
Ansley Ussery
080a097935 Add docstring for Proxy (#50145)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50145

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25854281

Pulled By: ansley

fbshipit-source-id: d7af6fd6747728ef04e86fbcdeb87cb0508e1fd8
2021-01-11 13:47:55 -08:00
Ivan Yashchuk
4774c6800b Added linalg.inv (#48261)
Summary:
This PR adds `torch.linalg.inv` for NumPy compatibility.

`linalg_inv_out` uses in-place operations on provided `result` tensor.

I modified `apply_inverse` to accept tensor of Int instead of std::vector, that way we can write a function similar to `linalg_inv_out` but removing the error checks and device memory synchronization.

I fixed `lda` (leading dimension parameter which is max(1, n)) in many places to handle 0x0 matrices correctly.
Zero batch dimensions are also working and tested.

Ref https://github.com/pytorch/pytorch/issues/42666
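
For reference, a minimal sketch of the new function, including the batched and zero-batch support mentioned above (the random input is assumed well-conditioned):

```python
import torch

a = torch.randn(2, 3, 3)     # batched input
a_inv = torch.linalg.inv(a)
print(torch.allclose(a @ a_inv, torch.eye(3).expand(2, 3, 3), atol=1e-4))  # True
```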

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48261

Reviewed By: gchanan

Differential Revision: D25849590

Pulled By: mruberry

fbshipit-source-id: cfee6f1daf7daccbe4612ec68f94db328f327651
2021-01-10 04:00:51 -08:00
kshitij12345
5d45140d68 [numpy] torch.{all/any} : output dtype is always bool (#47878)
Summary:
BC-breaking note:

This PR changes the behavior of the any and all functions to always return a bool tensor. Previously these functions were only defined on bool and uint8 tensors, and when called on uint8 tensors they would also return a uint8 tensor. (When called on a bool tensor they would return a bool tensor.)

PR summary:

https://github.com/pytorch/pytorch/pull/44790#issuecomment-725596687

Fixes 2 and 3

Also Fixes https://github.com/pytorch/pytorch/issues/48352

Changes
* Output dtype is always `bool` (consistent with numpy). **BC-breaking** (previously used to match the input dtype; see the sketch after the TODO list)
* Uses vectorized version for all dtypes on CPU
* Enables test for complex
* Update doc for `torch.all` and `torch.any`

TODO
* [x] Update docs
* [x] Benchmark
* [x] Raise issue on XLA
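
A minimal sketch of the BC-breaking dtype change listed above:

```python
import torch

t = torch.tensor([0, 1, 2], dtype=torch.uint8)
torch.any(t).dtype  # torch.bool (previously torch.uint8 for uint8 input)
torch.all(t)        # tensor(False) -- the 0 entry fails the "all" test
```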

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47878

Reviewed By: albanD

Differential Revision: D25714324

Pulled By: mruberry

fbshipit-source-id: a87345f725297524242d69402dfe53060521ea5d
2021-01-08 11:05:39 -08:00
Antonio Cuni
5c5abd591d Implement torch.linalg.svd (#45562)
Summary:
This is related to https://github.com/pytorch/pytorch/issues/42666 .
I am opening this PR to have the opportunity to discuss things.
First, we need to consider the differences between `torch.svd` and `numpy.linalg.svd`:

1. `torch.svd` takes `some=True`, while `numpy.linalg.svd` takes `full_matrices=True`, which is effectively the opposite (and with the opposite default, too!)

2. `torch.svd` returns `(U, S, V)`, while `numpy.linalg.svd` returns `(U, S, VT)` (i.e., V transposed).

3. `torch.svd` always returns a 3-tuple; `numpy.linalg.svd` returns only `S` in case `compute_uv==False`

4. `numpy.linalg.svd` also takes an optional `hermitian=False` argument.

I think that the plan is to eventually deprecate `torch.svd` in favor of `torch.linalg.svd`, so this PR does the following:

1. Rename/adapt the old `svd` C++ functions into `linalg_svd`: in particular, now `linalg_svd` takes `full_matrices` and returns `VT`

2. Re-implement the old C++ interface on top of the new (by negating `full_matrices` and transposing `VT`).

3. The C++ version of `linalg_svd` *always* returns a 3-tuple (we can't do anything else). So, there is a python wrapper which manually calls `torch._C._linalg.linalg_svd` to tweak the return value in case `compute_uv==False`.

Currently, `linalg_svd_backward` is broken because it has not yet been adapted to the `V ==> VT` change, but before continuing and spending more time on it I wanted to make sure that the general approach is fine.
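
For reference, a minimal sketch of the proposed signature (note `full_matrices` and the transposed `VT` return; the random input is illustrative):

```python
import torch

a = torch.randn(4, 3)
u, s, vt = torch.linalg.svd(a, full_matrices=False)  # returns VT, unlike torch.svd
print(torch.allclose(a, u @ torch.diag(s) @ vt, atol=1e-5))  # True
```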

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45562

Reviewed By: H-Huang

Differential Revision: D25803557

Pulled By: mruberry

fbshipit-source-id: 4966f314a0ba2ee391bab5cda4563e16275ce91f
2021-01-08 06:46:16 -08:00
Richard Barnes
e606e60331 [Needs Review] Convert some files to Python3 (#49351)
Summary:
Uses the Python standard library 2to3 script to convert a number of Python 2 files to Python 3. This facilitates code maintenance such as dropping unused imports in D25500422.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49351

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25499576

fbshipit-source-id: 0c44718ac734771ce0758b1cb30676cc3d76ac10
2021-01-06 10:48:16 -08:00
Vasiliy Kuznetsov
ffbb68af8a quant docs: add common errors section (#49902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49902

Adds a common errors section detailing the two errors we often see on the discuss forums, with recommended solutions.

Test Plan: built the docs on macOS; the new section renders correctly.

Reviewed By: supriyar

Differential Revision: D25718195

Pulled By: vkuzo

fbshipit-source-id: c5ef2b24831d18d57bbafdb82d26d8fbf3a90781
2020-12-30 15:01:59 -08:00
Antonio Cuni
361f5ed91d Implement torch.linalg.qr (#47764)
Summary:
I am opening this PR early to have a place to discuss design issues.
The biggest difference between `torch.qr` and `numpy.linalg.qr` is that the former takes a boolean parameter `some=True`, while the latter takes a string parameter `mode='reduced'` which can be one of the following:

`reduced`
this is completely equivalent to `some=True`, and both are the default.

`complete`
this is completely equivalent to `some=False`.

`r`
this returns only `r` instead of a tuple `(q, r)`. We have already decided that we don't want different return types depending on the parameters, so I propose to return `(r, empty_tensor)` instead. I **think** that in this mode it will be impossible to implement the backward pass, so we should raise an appropriate error in that case.

`raw`
in this mode, it returns `(h, tau)` instead of `(q, r)`. Internally, `h` and `tau` are obtained by calling lapack's `dgeqrf` and are later used to compute the actual values of `(q, r)`. The numpy docs suggest that these might be useful to call other lapack functions, but at the moment none of them is exposed by numpy and I don't know how often it is used in the real world.
I suppose implementing the backward pass needs attention: the most straightforward solution is to use `(h, tau)` to compute `(q, r)` and then use the normal logic for `qr_backward`, but there might be faster alternatives.

`full`, `f`
alias for `reduced`, deprecated since numpy 1.8.0

`economic`, `e`
similar to `raw` but it returns only `h` instead of `(h, tau)`. Deprecated since numpy 1.8.0.

To summarize:
  * `reduced`, `complete` and `r` are straightforward to implement (see the sketch after this list).

  * `raw` needs a bit of extra care, but I don't know how high priority it is: since it is rarely used, we might choose not to support it right now and maybe implement it in the future?

  * I think we should just leave `full` and `economic` out, and possibly add a note to the docs explaining what you need to use instead
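
For reference, a minimal sketch of the proposed mode parameter (the random input is illustrative):

```python
import torch

a = torch.randn(5, 3)
q, r = torch.linalg.qr(a)                       # mode='reduced' is the default
q_c, r_c = torch.linalg.qr(a, mode='complete')
print(q_c.shape, r_c.shape)                     # torch.Size([5, 5]) torch.Size([5, 3])
```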

/cc mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47764

Reviewed By: ngimel

Differential Revision: D25708870

Pulled By: mruberry

fbshipit-source-id: c25c70a23a02ec4322430d636542041e766ebe1b
2020-12-28 17:28:17 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
Mike Ruberry
5acc27c00a Revert D25690129: [pytorch][PR] Added linalg.inv
Test Plan: revert-hammer

Differential Revision:
D25690129 (8554b58fbd)

Original commit changeset: edb2d03721f2

fbshipit-source-id: 8679ea18e637423d35919544d2b047a62ac3abd8
2020-12-23 15:27:52 -08:00
Jeffrey Wan
1833009202 Fix typo in complex autograd docs (#49755)
Summary:
Update complex autograd docs to fix a typo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49755

Reviewed By: mruberry

Differential Revision: D25692649

Pulled By: soulitzer

fbshipit-source-id: 43c2113b4c8f2d1828880102189a5a9b887dc784
2020-12-23 14:42:34 -08:00
Ralf Gommers
d99a0c3b3e Improve docs for scatter and gather functions (#49679)
Summary:
- Add warning about non-unique indices
- And note that these functions don't broadcast
- Add missing `torch.scatter` and `torch.scatter_add` doc entries
- Fix parameter descriptions
- Improve code examples to make indexing behaviour easier to understand (a sketch follows below)

Closes gh-48214
Closes gh-26191
Closes gh-37130
Closes gh-34062
xref gh-31776
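
A minimal sketch of the indexing behaviour the improved examples illustrate (values chosen for clarity, not taken from the docs):

```python
import torch

src = torch.arange(1, 7).reshape(2, 3)  # [[1, 2, 3], [4, 5, 6]]
index = torch.tensor([[0, 1, 2]])
# scatter along dim 0: out[index[i][j]][j] = src[i][j]
print(torch.zeros(3, 3, dtype=src.dtype).scatter_(0, index, src))
# gather along dim 1: out[i][j] = src[i][index[i][j]]
print(torch.gather(src, 1, torch.tensor([[2, 0], [1, 1]])))
```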

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49679

Reviewed By: mruberry

Differential Revision: D25693660

Pulled By: ngimel

fbshipit-source-id: 4983e7b4efcbdf1ab9f04e58973b4f983e8e43a4
2020-12-23 12:23:15 -08:00
Richard Barnes
b3387139b4 Mod lists to neutral+descriptive terms in caffe2/docs (#49803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49803

Per "https://fb.workplace.com/groups/e/permalink/3320810064641820/" we can no longer use the terms "whitelist" and "blacklist", and editing any file containing them results in a critical error signal. Let's embrace the change.
This diff changes "blacklist" to "blocklist" in a number of non-interface contexts (interfaces would require more extensive testing and might interfere with reading stored data, so those are deferred until later).

Test Plan: Sandcastle

Reviewed By: vkuzo

Differential Revision: D25686924

fbshipit-source-id: 117de2ca43a0ea21b6e465cf5082e605e42adbf6
2020-12-23 11:37:11 -08:00