Commit Graph

1575 Commits

mattip
1681323ddc DOC: Merge extraheader block from theme instead of override (#70187)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/70185

The extraheader block in docs/source/_templates/layout.html overrides the one from the pytorch theme. The theme's block adds Google Analytics, which was therefore missing from the `master` documentation. This came up in PR pytorch/pytorch.github.io#899.

cc brianjo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70187

Reviewed By: bdhirsh

Differential Revision: D33248466

Pulled By: malfet

fbshipit-source-id: b314916a3f0789b6617cf9ba6bd938bf5ca27242
2022-01-05 06:42:38 -08:00
Juhyeong Kim
bc40fb5639 [Reinstate] Wishart distribution (#70377)
Summary:
Implements https://github.com/pytorch/pytorch/issues/68050.
Reopens the previously merged and reverted PR https://github.com/pytorch/pytorch/issues/68588; worked on with neerajprad.
cc neerajprad

Sorry for the confusion.

TODO:

- [x] Unit Test
- [x] Documentation
- [x] Change constraint of matrix variables with 'torch.distributions.constraints.symmetric' if it is reviewed and merged. Debug positive definite constraints https://github.com/pytorch/pytorch/issues/68720
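
A minimal usage sketch of the reinstated distribution; the constructor arguments below are assumed to mirror `MultivariateNormal`'s parameterization, so treat the exact signature as unverified:

```python
import torch
from torch.distributions import Wishart

# df must exceed dim - 1; here dim = 2.
w = Wishart(df=torch.tensor(3.0), covariance_matrix=torch.eye(2))
sample = w.rsample()       # a random 2x2 positive-definite matrix
logp = w.log_prob(sample)
```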

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70377

Reviewed By: mikaylagawarecki

Differential Revision: D33355132

Pulled By: neerajprad

fbshipit-source-id: e968c0d9a3061fb2855564b96074235e46a57b6c
2021-12-30 11:41:46 -08:00
Arvind Kannan
6217fee96b Revert D33246843: [pytorch][PR] Implementation of Wishart distribution
Test Plan: revert-hammer

Differential Revision: D33246843 (a217a62e73)

Original commit changeset: 825fcddf4785

Original Phabricator Diff: D33246843 (a217a62e73)

fbshipit-source-id: 2c8063e8d10e9d3ac20fa44673e6011ed1160753
2021-12-21 18:55:49 -08:00
Kim Juhyeong
a217a62e73 Implementation of Wishart distribution (#68588)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68050

TODO:
- [x] Unit Test
- [x] Documentation
- [x] Change constraint of matrix variables with 'torch.distributions.constraints.symmetric' if it is reviewed and merged. https://github.com/pytorch/pytorch/issues/68720

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68588

Reviewed By: bdhirsh

Differential Revision: D33246843

Pulled By: neerajprad

fbshipit-source-id: 825fcddf478555235e7a66de0c18368c41e935cd
2021-12-21 14:07:30 -08:00
Jerry Zhang
9d3a6fa623 [quant][bc-breaking] Remove QConfigDynamic from quantization api (#69875)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69875

att
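
A hedged migration sketch, assuming dynamic quantization now goes through the unified `QConfig`; the eager-mode entry point below is unaffected by the removal:

```python
import torch
import torch.nn as nn

# Eager-mode dynamic quantization no longer needs a separate QConfigDynamic;
# quantize_dynamic picks a suitable default QConfig internally.
model = nn.Sequential(nn.Linear(4, 4))
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```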

Test Plan:
CI + regression tests:
```
python test/test_quantization.py TestPostTrainingStatic
python test/test_quantization.py TestPostTrainingDynamic
python test/test_quantization.py TestQuantizeFx
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D33079096

fbshipit-source-id: 1e73bb27c518eba62b60f3a8c4b532dddc8367cf
2021-12-17 23:10:06 -08:00
Philip Meier
de296d526f move torch.testing from prototype to beta (#69668)
Summary:
cc brianjo mruberry
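
A quick example of the module's flagship API now moving to beta:

```python
import torch

actual = torch.tensor([1.0, 2.0])
expected = torch.tensor([1.0, 2.0 + 1e-7])
torch.testing.assert_close(actual, expected)  # passes within default rtol/atol
```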

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69668

Reviewed By: albanD

Differential Revision: D33028213

Pulled By: mruberry

fbshipit-source-id: 3316b887d4c322cc1262feee651464da4124a6de
2021-12-17 09:52:47 -08:00
Jerry Zhang
043098ef7f [quant][graphmode] Rename backend_config_dict folder to backend (#69882)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69882

att

Test Plan:
```
python test/fx2trt/test_quant_trt.py
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33081761

fbshipit-source-id: c3178eec5798ac8587be09a963944b570c73e8ea
2021-12-16 21:13:04 -08:00
Nicolas Hug
73a6c36f1b Add more details to the known limitations section of torchhub docs (#69970)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69970

This is a follow up to https://github.com/pytorch/hub/issues/243

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D33124060

Pulled By: NicolasHug

fbshipit-source-id: 298fe14b39a1aff3e0b029044c9a0db8bc82336a
2021-12-16 02:43:48 -08:00
Mike Guo
d4f8313497 Add low level torch.profiler.kineto_profile base class (#63302)
Summary:
Refactor torch.profiler.profile by separating it into one low-level class and one high-level wrapper.

The PR includes the following changes:
1. Separate the class torch.profiler.profile into two classes: kineto_profiler and torch.profiler.profile.
2. The former class exposes the low-level functionality available at the C++ level: prepare_profiler, start_profiler, stop_profiler.
3. The original logic in torch.profiler.profile, including export_chrome_trace, export_stacks, key_averages, events, and add_metadata, all moves into kineto_profiler, since it is all exposed by torch.autograd.profiler.
4. The new torch.profiler.profile is fully backward-compatible with the original class, since it inherits from torch.profiler.kineto_profiler. Its only responsibility in the new implementation is maintaining the finite state machine of ProfilerAction.

With this refactoring, the responsibility boundary is clear and the new logic is simple to understand.
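
For context, a sketch of the unchanged high-level API; the `kineto_profiler` base class proposed here sits underneath it (its exact name is per this summary and may differ from what finally landed):

```python
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU]) as prof:
    torch.mm(torch.randn(64, 64), torch.randn(64, 64))

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```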

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63302

Reviewed By: albanD

Differential Revision: D33006442

Pulled By: robieta

fbshipit-source-id: 30d7c9f5c101638703f1243fb2fcc6ced47fb690
2021-12-14 14:47:43 -08:00
Brian Hirsh
457ba1dd3e Porting index_add to structured kernels, add an out variant (#65993)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65993

This PR attempts to port `index_add` to structured kernels, but does more than that:

* Adds an `out=` variant to `index_add`
* Revises `native_functions.yaml` registrations to avoid multiple entries, instead passing a default value to `alpha`.
* Updates the `derivatives.yaml` file for autograd support.
* Revises error messages; please see: https://github.com/pytorch/pytorch/pull/65993#issuecomment-945441615

Follow-up PRs in the near future will attempt to refactor the OpInfo test, and will take another look at the tests in `test/test_torch.py` for this function (hence the use of ghstack for this).

~This is WIP because there are tests failing for `Dimname` variant on mobile/android builds, and I'm working on fixing them.~

Issue tracker: https://github.com/pytorch/pytorch/issues/55070
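
A sketch of the new `out=` variant, assuming the functional form mirrors the method variant's signature:

```python
import torch

x = torch.ones(5, 3)
index = torch.tensor([0, 2, 4])
source = torch.arange(9, dtype=torch.float).reshape(3, 3)

out = torch.empty_like(x)
# Rows of `source`, scaled by alpha, are added into a copy of x at `index`;
# the result lands in `out`.
torch.index_add(x, 0, index, source, alpha=2.0, out=out)
```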

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D32646426

fbshipit-source-id: b035ecf843a9a27d4d1e18b202b035adc2a49ab5
2021-12-14 11:57:13 -08:00
Kevin Tse
b67eaec853 [DataLoader] more clearly expose 'default_collate' and 'default_convert' to users (#69862)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69862

Fixes #69445

cc SsnL VitalyFedyunin ejguan NivekT
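
A short sketch of the two helpers as exposed by this change (import path assumed from the linked issue):

```python
import torch
from torch.utils.data import default_collate, default_convert

batch = [{"x": torch.tensor(1)}, {"x": torch.tensor(2)}]
print(default_collate(batch))        # {'x': tensor([1, 2])}
print(default_convert([1.0, 2.0]))   # [tensor(1.), tensor(2.)]
```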

Test Plan: Imported from OSS

Reviewed By: ejguan, ngimel

Differential Revision: D33068792

Pulled By: NivekT

fbshipit-source-id: ef9791acdc23d014b8761fa7420062d454ce8969
2021-12-14 11:18:26 -08:00
Supriya Rao
b1ef56d646 [quant][docs] quantized model save/load instructions (#69789)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69789

Add details on how to save and load quantized models without hitting errors
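
A hedged sketch of the documented pattern; `build_quantized_model` is a hypothetical helper, and the point is that the load side must re-create the quantized model with the same steps before restoring the state_dict:

```python
import torch
import torch.nn as nn

def build_quantized_model():
    # Hypothetical helper: both the save and load sides must perform the
    # same quantization steps (here: dynamic quantization).
    m = nn.Sequential(nn.Linear(4, 4))
    return torch.quantization.quantize_dynamic(m, {nn.Linear}, dtype=torch.qint8)

qmodel = build_quantized_model()
torch.save(qmodel.state_dict(), "quantized.pth")

qmodel2 = build_quantized_model()            # re-create, then load weights
qmodel2.load_state_dict(torch.load("quantized.pth"))
```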

Test Plan:
CI autogenerated docs

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D33030991

fbshipit-source-id: 8ec4610ae6d5bcbdd3c5e3bb725f2b06af960d52
2021-12-13 20:23:59 -08:00
Mike Ruberry
dc87cf5fe1 Fixes mem_get_info when querying on a device other than the current device (#69640)
Summary:
Also fixes the documentation failing to appear, and adds a test to validate that the op works properly with multiple devices.
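
A usage sketch (requires at least two visible GPUs):

```python
import torch

# After this fix, querying a non-current device reports that device's numbers.
free_bytes, total_bytes = torch.cuda.mem_get_info(torch.device("cuda:1"))
print(free_bytes, total_bytes)
```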

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69640

Reviewed By: ngimel

Differential Revision: D32965391

Pulled By: mruberry

fbshipit-source-id: 4fe502809b353464da8edf62d92ca9863804f08e
2021-12-08 23:04:30 -08:00
Peter Bell
e279963eef Remove remaining THC code (#69039)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69039

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D32872476

Pulled By: ngimel

fbshipit-source-id: 7972aacc24aef9450fb59b707ed6396c501bcb31
2021-12-08 12:18:08 -08:00
Vincent-Pierre Berges
30bb4e0071 Add nvidia-smi memory and utilization as native Python API (#69104)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69104

Add nvidia-smi memory and utilization as native Python API
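
The function names below are assumptions based on this summary; they query the same counters nvidia-smi reports, via pynvml:

```python
import torch

print(torch.cuda.utilization())    # GPU utilization, in percent
print(torch.cuda.memory_usage())   # memory-controller utilization, in percent
```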

Test Plan:
Tested that the functions return the appropriate values.
Unit tests to come.

Reviewed By: malfet

Differential Revision: D32711562

fbshipit-source-id: 01e676203299f8fde4f3ed4065f68b497e62a789
2021-12-08 10:33:23 -08:00
Charles David Hernandez
fc2614537b Updating quantization documentation (#68907)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68907

Added information about symmetric
qschemes and corrected an error in reference to https://github.com/pytorch/pytorch/issues/68540

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32662033

fbshipit-source-id: 9052c597f61991934b86850fea8b6eab78397450
2021-12-08 08:32:33 -08:00
gmagogsfm
358e908162 Add Union type to TorchScript Language Ref (#69514)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69514

Reviewed By: tugsbayasgalan

Differential Revision: D32909371

Pulled By: gmagogsfm

fbshipit-source-id: af1c3040cd59ee913dc576cf8a8c759313f1e07f
2021-12-07 12:53:54 -08:00
Rodrigo Bermúdez Schettino
1a202b0c39 Docs: Fix broken code syntax in autograd.rst (#69362)
Summary:
The backticks around `nn.Parameters` were not rendered correctly because the word was enclosed in an italics block.
Spotted the issue on https://pytorch.org/docs/stable/notes/autograd.html#locally-disable-grad-doc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69362

Reviewed By: zou3519

Differential Revision: D32924093

Pulled By: albanD

fbshipit-source-id: 5a310ac3f3d13a5116f7aa911817b9452eee711d
2021-12-07 12:03:15 -08:00
Xiao Wang
bfe5ad28e6 [Linalg] Add a runtime switch to let pytorch prefer a backend impl in linalg functions on GPU (#67980)
Summary:
Per title.

This PR introduces a global flag that lets pytorch prefer one of the many backend implementations while calling linear algebra functions on GPU.

Usage:
```python
torch.backends.cuda.preferred_linalg_library('cusolver')
```

Available options (str): `'default'`, `'cusolver'`, `'magma'`.

Issue https://github.com/pytorch/pytorch/issues/63992 inspired me to write this PR. No heuristic is perfect on all devices, library versions, matrix shapes, workloads, etc. We can obtain better performance if we can conveniently switch linear algebra backends at runtime.

Performance of linear algebra operators after this PR should be no worse than before. The flag is set to **`'default'`** by default, which makes everything the same as before this PR.

The implementation of this PR is basically following that of https://github.com/pytorch/pytorch/pull/67790.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67980

Reviewed By: mruberry

Differential Revision: D32849457

Pulled By: ngimel

fbshipit-source-id: 679fee7744a03af057995aef06316306073010a6
2021-12-03 19:06:30 -08:00
Michael Carilli
da023611d7 [CUDA graphs] Fixes make_graphed_callables example typos (#69379)
Summary:
cc mcarilli
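
For reference, a sketch of the API whose example is being fixed; this follows the `torch.cuda.make_graphed_callables` docs and is not the corrected example itself:

```python
import torch

model = torch.nn.Linear(8, 8).cuda()
sample = torch.randn(4, 8, device="cuda", requires_grad=True)

# Warm-up and capture happen inside; later calls replay the CUDA graph.
graphed = torch.cuda.make_graphed_callables(model, (sample,))
out = graphed(torch.randn(4, 8, device="cuda", requires_grad=True))
out.sum().backward()
```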

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69379

Reviewed By: mruberry

Differential Revision: D32841260

Pulled By: ngimel

fbshipit-source-id: a7d0b9db0578526907547b201eddd55827812b63
2021-12-03 16:51:14 -08:00
Elio
088a4feb41 Update the documentation for AMP with DataParallel (#69218)
Summary:
Following https://github.com/pytorch/pytorch/issues/60540 and pull request https://github.com/pytorch/pytorch/issues/43102
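
The documented pattern, sketched: enter autocast inside `forward` so each `DataParallel` replica autocasts in its own side thread.

```python
import torch
from torch import nn

class MyModel(nn.Module):
    def forward(self, x):
        with torch.cuda.amp.autocast():
            return x @ x.transpose(0, 1)

model = nn.DataParallel(MyModel().cuda())
out = model(torch.randn(8, 8, device="cuda"))  # replicas autocast correctly
```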

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69218

Reviewed By: gchanan

Differential Revision: D32803814

Pulled By: ngimel

fbshipit-source-id: 06fdbbee2c7734153271be70ec4bc24263c8c367
2021-12-03 14:58:47 -08:00
Michael Suo
ad182479b0 [deploy] docs (#69251)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69251

This adds some actual documentation for deploy, which is probably useful
since we told everyone it was experimentally available, so they will
probably be looking at what the heck it is.

It also wires up various components of the OSS build to actually work
when used from an external project.

Differential Revision: D32783312

Test Plan: Imported from OSS

Reviewed By: wconstab

Pulled By: suo

fbshipit-source-id: c5c0a1e3f80fa273b5a70c13ba81733cb8d2c8f8
2021-12-01 21:55:18 -08:00
Nikul Patel
8f9f559453 Amend tensors.rst and torch.rst for doc generation (#69030)
Summary:
(This is my first contribution to PyTorch.) Added operations introduced in https://github.com/pytorch/pytorch/issues/64430 that were missing from the docs. Please let me know if I've done anything wrong.

Fixes https://github.com/pytorch/pytorch/issues/68928

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69030

Reviewed By: samdow

Differential Revision: D32706826

Pulled By: soulitzer

fbshipit-source-id: edcc175a8f9bc69450a39059580c05edce699312
2021-11-30 12:04:13 -08:00
mrshenli
b8c3693281 Remove autograd-enabled collective APIs from distributed docs (#69011)
Summary:
These APIs are not yet officially released and are still under discussion. Hence, this commit removes them from the docs; they will be added back when ready.

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69011

Reviewed By: fduwjj

Differential Revision: D32703124

Pulled By: mrshenli

fbshipit-source-id: ea049fc7ab6b0015d38cc40c5b5daf47803b7ea0
2021-11-29 18:14:50 -08:00
JUBIN CHHEDA
27228656e6 [FX][docs] Document gotcha about training flag (#68915)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68913
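
A sketch of the gotcha being documented, assuming the usual dropout example: `self.training` is read once during symbolic tracing, so the traced graph hard-codes whichever mode was active.

```python
import torch
import torch.fx
import torch.nn.functional as F

class DropoutModel(torch.nn.Module):
    def forward(self, x):
        return F.dropout(x, p=0.5, training=self.training)

traced = torch.fx.symbolic_trace(DropoutModel())  # training=True is baked in here
traced.eval()  # does NOT disable dropout in the traced graph
```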

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68915

Reviewed By: jamesr66a

Differential Revision: D32705410

Pulled By: jubinchheda

fbshipit-source-id: a44c17ab0e62465823ceb0ef983ae330b50fb073
2021-11-29 16:13:32 -08:00
Mike Ruberry
6ae34ea6f8 Revert D32521980: Add linalg.lu_factor
Test Plan: revert-hammer

Differential Revision: D32521980 (b10929a14a)

Original commit changeset: 26a49ebd87f8

fbshipit-source-id: e1a6bb9c2ece9bd78190fe17e16a46e3358c5c82
2021-11-28 17:22:15 -08:00
lezcano
b10929a14a Add linalg.lu_factor (#66933)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66933

This PR exposes `torch.lu` as `torch.linalg.lu_factor` and
`torch.linalg.lu_factor_ex`.

This PR also adds support for inputs with zero-sized dimensions, both in
the matrix dimensions and in the batch dimensions. Note that in this case
the function simply returns empty tensors of the correct size.

We add a test and an OpInfo for the new function.

This PR also adds documentation for this new function, in line with
the documentation in the rest of `torch.linalg`.

Fixes https://github.com/pytorch/pytorch/issues/56590
Fixes https://github.com/pytorch/pytorch/issues/64014

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano
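
A sketch of the two entry points added here (this commit is reverted in the entry above; return values are per this summary):

```python
import torch

A = torch.randn(3, 3)
LU, pivots = torch.linalg.lu_factor(A)           # errors out on singular inputs
LU, pivots, info = torch.linalg.lu_factor_ex(A)  # reports failure via `info` instead
```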

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D32521980

Pulled By: mruberry

fbshipit-source-id: 26a49ebd87f8a41472f8cd4e9de4ddfb7f5581fb
2021-11-27 17:52:48 -08:00
lezcano
cf54416925 Add docs entry for adjoint. (#68869)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68869

As per title.

cc brianjo mruberry anjali411

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D32647456

Pulled By: anjali411

fbshipit-source-id: 2cb053a6884e2b22d3decc058e86d10f355fcb84
2021-11-24 10:03:41 -08:00
Yutaro Sanada
74e6d2ce67 fix typos in jit_language_reference.rst (#68706)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68700

- indent problem

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68706

Reviewed By: mruberry

Differential Revision: D32598916

Pulled By: jbschlosser

fbshipit-source-id: 42af216e83fb48bbd311fc3d41fc3e8f5a2fef08
2021-11-22 19:09:06 -08:00
lezcano
b46c89d950 Add linalg.solve_triangular (#63568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63568

This PR adds the first solver with structure to `linalg`. This solver
has an API compatible with that of `linalg.solve`, preparing the two for a
possible future merge of the APIs. The new API:
- Just returns the solution, rather than the solution and a copy of `A`
- Removes the confusing `transpose` argument and replaces it with correct
handling of conj and strides within the call
- Adds a `left=True` kwarg. This can be achieved via transposes of the
inputs and the result, but it's exposed for convenience.

This PR also implements a dataflow that minimises the number of copies
needed before calling LAPACK / MAGMA / cuBLAS and takes advantage of the
conjugate and neg bits.

This algorithm is implemented for `solve_triangular` (which, for this, is
the most complex of all the solvers due to the `upper` parameters).
Once more solvers are added, we will factor out this calling algorithm,
so that all of them can take advantage of it.

Given the complexity of this algorithm, we implement some thorough
testing. We also added tests for all the backends, which was not done
before.

We also add forward AD support for `linalg.solve_triangular` and improve the
docs of `linalg.solve_triangular`. We also fix a few issues with those of
`torch.triangular_solve`.

Resolves https://github.com/pytorch/pytorch/issues/54258
Resolves https://github.com/pytorch/pytorch/issues/56327
Resolves https://github.com/pytorch/pytorch/issues/45734

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano
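
A minimal sketch of the new solver (keyword names per this summary):

```python
import torch

A = torch.randn(3, 3).triu()   # upper-triangular coefficient matrix
B = torch.randn(3, 2)

X = torch.linalg.solve_triangular(A, B, upper=True)  # solves A @ X == B
# Passing left=False would solve X @ A == B instead.
```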

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D32588230

Pulled By: mruberry

fbshipit-source-id: 69e484849deb9ad7bb992cc97905df29c8915910
2021-11-22 12:41:06 -08:00
Vansh Sharma
ff125a3624 Minor changes in documentation (#68557)
Summary:
Fixed some small typos

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68557

Reviewed By: mruberry

Differential Revision: D32538749

Pulled By: ngimel

fbshipit-source-id: 09a9cd4031463b6a40d7307bd8fcb7d364444ac3
2021-11-18 17:57:16 -08:00
Masaki Kozuki
9ce3c630ba [Docs] Mention torch.bfloat16 in torch.finfo (#68496)
Summary:
https://pytorch.org/docs/master/type_info.html#torch.torch.finfo seems to miss `torch.bfloat16`.
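
For reference, what `torch.finfo` reports for `torch.bfloat16`:

```python
import torch

info = torch.finfo(torch.bfloat16)
print(info.bits, info.eps, info.max)  # 16, 0.0078125, ~3.39e38
```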

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68496

Reviewed By: mruberry

Differential Revision: D32538806

Pulled By: ngimel

fbshipit-source-id: 1296b3eb34d024cfc7d85cf53efe771ee9f98ea2
2021-11-18 17:52:41 -08:00
Jane Xu
9f4e004abd Revert D32283178: Add linalg.solve_triangular
Test Plan: revert-hammer

Differential Revision: D32283178 (0706607abc)

Original commit changeset: deb672e6e52f

fbshipit-source-id: d2a3421292147426cc61c2f063b721acf9004755
2021-11-18 14:46:10 -08:00
lezcano
0706607abc Add linalg.solve_triangular (#63568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63568

This PR adds the first solver with structure to `linalg`. This solver
has an API compatible with that of `linalg.solve`, preparing the two for a
possible future merge of the APIs. The new API:
- Just returns the solution, rather than the solution and a copy of `A`
- Removes the confusing `transpose` argument and replaces it with correct
handling of conj and strides within the call
- Adds a `left=True` kwarg. This can be achieved via transposes of the
inputs and the result, but it's exposed for convenience.

This PR also implements a dataflow that minimises the number of copies
needed before calling LAPACK / MAGMA / cuBLAS and takes advantage of the
conjugate and neg bits.

This algorithm is implemented for `solve_triangular` (which, for this, is
the most complex of all the solvers due to the `upper` parameters).
Once more solvers are added, we will factor out this calling algorithm,
so that all of them can take advantage of it.

Given the complexity of this algorithm, we implement some thorough
testing. We also added tests for all the backends, which was not done
before.

We also add forward AD support for `linalg.solve_triangular` and improve the
docs of `linalg.solve_triangular`. We also fix a few issues with those of
`torch.triangular_solve`.

Resolves https://github.com/pytorch/pytorch/issues/54258
Resolves https://github.com/pytorch/pytorch/issues/56327
Resolves https://github.com/pytorch/pytorch/issues/45734

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Test Plan: Imported from OSS

Reviewed By: zou3519, JacobSzwejbka

Differential Revision: D32283178

Pulled By: mruberry

fbshipit-source-id: deb672e6e52f58b76536ab4158073927a35e43a8
2021-11-18 09:45:51 -08:00
Rok
952ca25daa Sparse CSR: add convert_indices_from_csr_to_coo (#66774)
Summary:
This PR adds conversion from CSR to COO.

Fixes https://github.com/pytorch/pytorch/issues/56959

cc nikitaved pearu cpuhrsch IvanYashchuk gchanan mruberry
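
A hedged sketch of exercising the conversion through a public entry point; whether `Tensor.to_sparse` routed through the new kernel at the time is an assumption:

```python
import torch

crow_indices = torch.tensor([0, 2, 3])
col_indices = torch.tensor([0, 1, 2])
values = torch.tensor([1.0, 2.0, 3.0])

csr = torch.sparse_csr_tensor(crow_indices, col_indices, values)
coo = csr.to_sparse()  # CSR -> COO conversion (assumed entry point)
```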

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66774

Reviewed By: zou3519

Differential Revision: D32288415

Pulled By: cpuhrsch

fbshipit-source-id: 683ba658dc46835fdf3c0e24645c0c2bb243b968
2021-11-17 22:28:30 -08:00
frgfm
693fe2fd9b docs: Added Union to supported types in documentation (#68435)
Summary:
This PR simply updates the documentation following up on https://github.com/pytorch/pytorch/pull/64234, by adding `Union` as a supported type.

Any feedback is welcome!

cc ansley albanD gmagogsfm
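
A minimal example of the documented support, using isinstance refinement as TorchScript unions require:

```python
from typing import Union

import torch

@torch.jit.script
def double(x: Union[int, float]) -> float:
    if isinstance(x, int):
        return float(x) * 2.0
    return x * 2.0

print(double(3), double(1.5))  # 6.0 3.0
```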

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68435

Reviewed By: davidberard98

Differential Revision: D32494271

Pulled By: ansley

fbshipit-source-id: c3e4806d8632e1513257f0295568a20f92dea297
2021-11-17 10:26:31 -08:00
Saketh Are
86399d8e0c Add histogramdd to torch.rst (#68273)
Summary:
The `torch.histogramdd` operator is documented in `torch/functional.py` but does not appear in the generated docs because it is missing from `docs/source/torch.rst`.
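
A quick example of the operator being added to the generated docs:

```python
import torch

points = torch.randn(100, 2)                      # 100 points in 2-D
hist, bin_edges = torch.histogramdd(points, bins=[5, 5])
print(hist.shape)                                 # torch.Size([5, 5])
```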

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68273

Reviewed By: cpuhrsch

Differential Revision: D32470522

Pulled By: saketh-are

fbshipit-source-id: a23e73ba336415457a30bae568bda80afa4ae3ed
2021-11-16 11:55:40 -08:00
Thomas Metcalfe
ba16b1eca7 [numpy] Alias arctan2 to atan2 (#67010)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/65906

Adds an alias `arctan2` to improve numpy compatibility

cc mruberry rgommers
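
Since this is a pure alias, the two spellings agree exactly:

```python
import torch

y = torch.tensor([1.0, -1.0])
x = torch.tensor([1.0, 1.0])
assert torch.equal(torch.arctan2(y, x), torch.atan2(y, x))
```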

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67010

Reviewed By: anjali411

Differential Revision: D32378998

Pulled By: mruberry

fbshipit-source-id: 424c5c10c12b49c20ee83ccd109325c480b5b6cf
2021-11-16 09:41:09 -08:00
Anirudh Dagar
b07a11929d Array API: Add torch.linalg.cross (#63285)
Summary:
### Create `linalg.cross`

Fixes https://github.com/pytorch/pytorch/issues/62810

As discussed in the corresponding issue, this PR adds `cross` to the `linalg` namespace (**Note**: There is no method variant) which is slightly different in behaviour compared to `torch.cross`.

**Note**: this is NOT an alias as suggested in mruberry's [https://github.com/pytorch/pytorch/issues/62810 comment](https://github.com/pytorch/pytorch/issues/62810#issuecomment-897504372) below
> linalg.cross being consistent with the Python Array API (over NumPy) makes sense because NumPy has no linalg.cross. I also think we can implement linalg.cross without immediately deprecating torch.cross, although we should definitely refer users to linalg.cross. Deprecating torch.cross will require additional review. While it's not used often it is used, and it's unclear if users are relying on its unique behavior or not.

The current default implementation of `torch.cross` is extremely weird and confusing. This has also been reported multiple times previously. (See https://github.com/pytorch/pytorch/issues/17229, https://github.com/pytorch/pytorch/issues/39310, https://github.com/pytorch/pytorch/issues/41850, https://github.com/pytorch/pytorch/issues/50273)

- [x] Add `torch.linalg.cross` with default `dim=-1`
- [x] Add OpInfo and other tests for `torch.linalg.cross`
- [x] Add broadcasting support to `torch.cross` and `torch.linalg.cross`
- [x] Remove out skip from `torch.cross` OpInfo
- [x] Add docs for `torch.linalg.cross`. Improve docs for `torch.cross` mentioning `linalg.cross` and the difference between the two. Also adds a warning to `torch.cross`, that it may change in the future (we might want to deprecate it later)
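
A short sketch contrasting the defaults described above:

```python
import torch

a = torch.randn(4, 3)
b = torch.randn(4, 3)

torch.linalg.cross(a, b)   # dim=-1 by default
torch.cross(a, b)          # searches for the first dimension of size 3 instead
```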

 ---

### Additional Fixes to `torch.cross`
- [x] Fix Doc for Tensor.cross
- [x] Fix torch.cross in `torch/overridres.py`

While working on `linalg.cross` I noticed these small issues with `torch.cross` itself.

[Tensor.cross docs](https://pytorch.org/docs/stable/generated/torch.Tensor.cross.html) still mentions `dim=-1` default which is actually wrong. It should be `dim=None` after the behaviour was updated in PR https://github.com/pytorch/pytorch/issues/17582 but the documentation for the `method` or `function` variant wasn’t updated. Later PR https://github.com/pytorch/pytorch/issues/41850 updated the documentation for the `function` variant i.e `torch.cross` and also added the following warning about the weird behaviour.
> If `dim` is not given, it defaults to the first dimension found with the size 3. Note that this might be unexpected.

But still, the `Tensor.cross` docs were missed and remained outdated. I’m finally fixing that here. Also fixing `torch/overrides.py` for `torch.cross` as well now, with `dim=None`.

To verify that the documented default behaviour of `dim=-1` should raise, you can try the following.

```python
a = torch.randn(3, 4)
b = torch.randn(3, 4)
b.cross(a)  # this works because the implementation finds 3 in the first dimension and the default behaviour as shown in documentation is actually not true.
>>> tensor([[ 0.7171, -1.1059,  0.4162,  1.3026],
        [ 0.4320, -2.1591, -1.1423,  1.2314],
        [-0.6034, -1.6592, -0.8016,  1.6467]])

b.cross(a, dim=-1)  # this raises as expected since the last dimension doesn't have a 3
>>> RuntimeError: dimension -1 does not have size 3
```

Please take a closer look (particularly the autograd part, this is the first time I'm dealing with `derivatives.yaml`). If there is something missing, wrong or needs more explanation, please let me know. Looking forward to the feedback.

cc mruberry Lezcano IvanYashchuk rgommers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63285

Reviewed By: gchanan

Differential Revision: D32313346

Pulled By: mruberry

fbshipit-source-id: e68c2687c57367274e8ddb7ef28ee92dcd4c9f2c
2021-11-11 12:49:41 -08:00
Kurt Mohler
db014b8529 Add set_deterministic_debug_mode and get_deterministic_debug_mode (#67778)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/67386
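
A usage sketch of the new pair of functions:

```python
import torch

torch.set_deterministic_debug_mode("warn")   # "default", "warn", or "error" (or 0/1/2)
print(torch.get_deterministic_debug_mode())  # 1
```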

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67778

Reviewed By: ngimel

Differential Revision: D32310661

Pulled By: mruberry

fbshipit-source-id: 300129e96ca51c22fa711182ce6a9f4d4d2ce57f
2021-11-11 12:48:29 -08:00
eqy
790763b0fe Add an option to disable reduced precision reductions for FP16 GEMM (#67946)
Summary:
https://github.com/pytorch/pytorch/issues/67578 disabled reduced-precision reductions for FP16 GEMMs. After benchmarking, we've found that this has substantial performance impacts for common GEMM shapes (e.g., those found in popular instantiations of multiheaded attention) on architectures such as Volta. As these performance regressions may come as a surprise to current users, this PR adds a toggle for disabling reduced-precision reductions,
`torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction`,
rather than making the stricter behavior the default.

CC ngimel ptrblck
stas00 Note that the behavior after the previous PR can be replicated with
`torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False`
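
A sketch of both settings of the flag (the flag name is taken from this summary):

```python
import torch

# Opt back into the stricter behavior introduced by #67578:
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False

# Default after this PR: reduced-precision reductions stay enabled for speed.
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = True
```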

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67946

Reviewed By: zou3519

Differential Revision: D32289896

Pulled By: ngimel

fbshipit-source-id: a1ea2918b77e27a7d9b391e030417802a0174abe
2021-11-09 17:27:20 -08:00
James Reed
eaf0457eef [distributed][docs] Delete distributed optimimzer section from RPC and add reference to namespace docs page (#68068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68068

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Test Plan: Imported from OSS

Reviewed By: pritamdamania87

Differential Revision: D32286554

Pulled By: jamesr66a

fbshipit-source-id: a43fe1f0cfa74721f467b128f2e878bd02f32546
2021-11-09 15:01:54 -08:00
Xiaoyu Zhang
273f7ae9b3 fx: Update fx.rst (#68043)
Summary:
When I ran this part of the code from the document with PyTorch 1.10.0, I found some differences between the output and the document, as follows:

```python
import torch
import torch.fx as fx

class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# Create an instance of `M`
m = M()

traced = fx.symbolic_trace(m)
print(traced)
print(traced.graph)
traced.graph.print_tabular()
```

I get the result:

```shell
def forward(self, x, y):
    add = x + y;  x = y = None
    return add

graph():
    %x : [#users=1] = placeholder[target=x]
    %y : [#users=1] = placeholder[target=y]
    %add : [#users=1] = call_function[target=operator.add](args = (%x, %y), kwargs = {})
    return add
opcode         name    target                   args    kwargs
-------------  ------  -----------------------  ------  --------
placeholder    x       x                        ()      {}
placeholder    y       y                        ()      {}
call_function  add     <built-in function add>  (x, y)  {}
output         output  output                   (add,)  {}
```

This PR updates the document.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68043

Reviewed By: driazati

Differential Revision: D32287178

Pulled By: jamesr66a

fbshipit-source-id: 48ebd0e6c09940be9950cd57ba0c03274a849be5
2021-11-09 14:00:45 -08:00
James Reed
3f048c637f [distributed] Render torch.distributed.optim members (#67885)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67885

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D32191952

Pulled By: jamesr66a

fbshipit-source-id: a9ed52da8e89b3491eab2e691f5571338f83e8e3
2021-11-08 16:20:55 -08:00
jcwchen
5b036d5f2b [Doc] [ONNX]Fix a broken url for ONNXRuntime custom op (#67944)
Summary:
**Description**
Replace the broken URL with a valid link: https://onnxruntime.ai/docs/reference/operators/add-custom-op.html.

**Motivation**
Closes https://github.com/pytorch/pytorch/issues/67849. The URL is broken.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67944

Reviewed By: NivekT

Differential Revision: D32252880

Pulled By: H-Huang

fbshipit-source-id: 400b0efa3d6f63e60b016c482fbbed1293c29806
2021-11-08 15:51:02 -08:00
andrewor
4a8f27445d [Quant] Add dynamic QAT Linear module (#67325)
Summary:
**Summary:** This commit adds the `torch.nn.qat.dynamic.modules.Linear`
module, the dynamic counterpart to `torch.nn.qat.modules.Linear`.
Functionally these are very similar, except the dynamic version
expects a memoryless observer and is converted into a dynamically
quantized module before inference.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67325

Test Plan:
`python3 test/test_quantization.py TestQuantizationAwareTraining.test_dynamic_qat_linear`

**Reviewers:** Charles David Hernandez, Jerry Zhang

**Subscribers:** Charles David Hernandez, Supriya Rao, Yining Lu

**Tasks:** 99696812

**Tags:** pytorch

Reviewed By: malfet, jerryzh168

Differential Revision: D32178739

Pulled By: andrewor14

fbshipit-source-id: 5051bdd7e06071a011e4e7d9cc7769db8d38fd73
2021-11-08 10:24:25 -08:00
Alban Desmaison
9cdd1d7e48 Docs module check (#67440)
Summary:
Add a check to make sure we do not add new submodules without documenting them in an rst file.
This is especially important because our doc coverage only runs for modules that are properly listed.

I temporarily removed "torch" from the list to make sure the CI failure looks as expected. EDIT: fixed now.

This is what a CI failure looks like for the top level torch module as an example:
![image](https://user-images.githubusercontent.com/6359743/139264690-01af48b3-cb2f-4cfc-a50f-975fca0a8140.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67440

Reviewed By: jbschlosser

Differential Revision: D32005310

Pulled By: albanD

fbshipit-source-id: 05cb2abc2472ea4f71f7dc5c55d021db32146928
2021-11-01 06:24:27 -07:00
kshitij12345
510e3026a9 [numpy] add torch.argwhere (#64257)
Summary:
Adds `torch.argwhere` as an alias to `torch.nonzero`

Currently, `torch.nonzero` actually provides functionality equivalent to `np.argwhere`.

From NumPy docs,
> np.argwhere(a) is almost the same as np.transpose(np.nonzero(a)), but produces a result of the correct shape for a 0D array.
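
A minimal example of the new alias:

```python
import torch

t = torch.tensor([[0, 1], [2, 0]])
print(torch.argwhere(t))  # tensor([[0, 1], [1, 0]]), same as torch.nonzero(t)
```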

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64257

Reviewed By: qihqi

Differential Revision: D32049884

Pulled By: saketh-are

fbshipit-source-id: 016e49884698daa53b83e384435c3f8f6b5bf6bb
2021-10-30 15:26:11 -07:00
Vasiliy Kuznetsov
99282126dc pytorch quantization: document the custom module APIs (#67449)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67449

Adds a description of what the current custom module API does
and API examples for Eager mode and FX graph mode to the main
PyTorch quantization documentation page.

Test Plan:
```
cd docs
make html
python -m http.server
// check the docs page, it renders correctly
```

Reviewed By: jbschlosser

Differential Revision: D31994641

Pulled By: vkuzo

fbshipit-source-id: d35a62947dd06e71276eb6a0e37950d3cc5abfc1
2021-10-29 05:22:17 -07:00
Kenichi Maehashi
6ed68f3f84 Document torch.jit.is_tracing() (#67326)
Summary:
This PR adds `torch.jit.is_tracing()` to the JIT API reference.
This function is widely used but left undocumented: https://github.com/search?q=torch.jit.is_tracing&type=code
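
A sketch of the typical use now being documented:

```python
import torch

def forward_fn(x):
    if torch.jit.is_tracing():
        return x * 2   # branch taken while torch.jit.trace records the graph
    return x + x

traced = torch.jit.trace(forward_fn, torch.randn(3))
```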

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67326

Reviewed By: tugsbayasgalan

Differential Revision: D31985251

Pulled By: Krovatkin

fbshipit-source-id: 852b432b08d63df8bd7a7a02c9555e61f5f37978
2021-10-28 09:56:09 -07:00