Commit Graph

1204 Commits

Author SHA1 Message Date
neerajprad
0f3a3f22af Add sample validation for LKJCholesky.log_prob (#52763)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/52724.

This fixes the following for the LKJCholesky distribution in master:
 - `log_prob` does sample validation when `validate_args=True`.
 - exposes documentation for the LKJCholesky distribution.

cc. fehiepsi, fritzo
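A minimal sketch of the resulting behavior (assuming the distribution is exposed as `torch.distributions.LKJCholesky`):

```python
import torch
from torch.distributions import LKJCholesky

d = LKJCholesky(dim=3, concentration=1.0, validate_args=True)
sample = d.sample()        # lower-triangular Cholesky factor of a correlation matrix
print(d.log_prob(sample))  # a valid sample passes validation

bad = 2.0 * torch.eye(3)   # rows have norm 2, violating the unit-row-norm constraint
try:
    d.log_prob(bad)        # with validate_args=True this is now rejected
except ValueError as err:
    print("rejected:", err)
```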

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52763

Reviewed By: anjali411

Differential Revision: D26657216

Pulled By: neerajprad

fbshipit-source-id: 12e8f8384cf0c3df8a29564c1e1718d2d6a5833f
2021-02-25 16:12:29 -08:00
Luca Wehrstedt
92a4ee1cf6 Revert D26375734: Implemented torch.linalg.multi_dot
Test Plan: revert-hammer

Differential Revision:
D26375734 (0396f492b9)

Original commit changeset: 839642692424

fbshipit-source-id: cb64db646010128d802e1930d5e9526c1f7aa6a2
2021-02-25 00:43:57 -08:00
Heitor Schueroff
0396f492b9 Implemented torch.linalg.multi_dot (#51807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51807

Implemented torch.linalg.multi_dot similar to [numpy.linalg.multi_dot](https://numpy.org/doc/stable/reference/generated/numpy.linalg.multi_dot.html).

This function does not support broadcasting or batched inputs at the moment.

**NOTE**
numpy.linalg.multi_dot allows the first and last tensors to have more than 2 dimensions despite their docs stating these must be either 1D or 2D. This PR diverges from NumPy in that it enforces this restriction.
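For reference, a usage sketch of the new function; the chain-ordering benefit is the main point, and exact numerics may differ slightly from a fixed-order product:

```python
import torch

A = torch.randn(10, 100)
B = torch.randn(100, 5)
C = torch.randn(5, 50)

# multi_dot picks an efficient multiplication order for the whole chain;
# here (A @ B) @ C needs ~7.5k multiplies vs ~75k for A @ (B @ C).
out = torch.linalg.multi_dot([A, B, C])

assert torch.allclose(out, A @ B @ C, atol=1e-5)
```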

**TODO**
- [ ] Benchmark against NumPy
- [x] Add OpInfo testing
- [x] Remove unnecessary copy for out= argument

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D26375734

Pulled By: heitorschueroff

fbshipit-source-id: 839642692424c4b1783606c76dd5b34455368f0b
2021-02-24 15:32:30 -08:00
Jeff Yang
f111ec48c1 docs: add fractional_max_pool in nn.functional (#52557)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51708
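A short sketch of the functional form being documented (argument values here are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 16, 16)
# 2x2 pooling windows placed at pseudo-random (fractional) strides,
# targeting a fixed output size.
out = F.fractional_max_pool2d(x, kernel_size=2, output_size=(8, 8))
print(out.shape)  # torch.Size([1, 3, 8, 8])
```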

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52557

Reviewed By: bdhirsh

Differential Revision: D26591388

Pulled By: jbschlosser

fbshipit-source-id: 42643864df92ea014e69a8ec5c29333735e98898
2021-02-22 20:45:07 -08:00
Jeff Yang
7f4dff5496 docs: add FractionalMaxPool3d in pooling layers (#52556)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51625

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52556

Reviewed By: smessmer

Differential Revision: D26593666

Pulled By: bdhirsh

fbshipit-source-id: 3d81d23fa70efa0f794dde47a34baad0aaa9c751
2021-02-22 17:04:09 -08:00
Jeff Yang
fd5792f857 docs: add :nosignatures: in torch.jit (#52555)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/52554

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52555

Reviewed By: ZolotukhinM

Differential Revision: D26573956

Pulled By: SplitInfinity

fbshipit-source-id: ce011c66ce771bc7e9357f98db9994d54faa7013
2021-02-22 16:19:07 -08:00
Joe Zhu
f2b43ddbf4 Update api doc for enabling TcpStore on Windows (#51847)
Summary:
Fixes #{issue number}
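A minimal sketch of the TcpStore usage the doc now covers on Windows (host/port values are placeholders):

```python
from datetime import timedelta
import torch.distributed as dist

# One process acts as the server (is_master=True); others connect as clients.
store = dist.TCPStore("127.0.0.1", 29500, 1, True, timedelta(seconds=30))
store.set("key", "value")
print(store.get("key"))  # b'value'
```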

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51847

Reviewed By: albanD

Differential Revision: D26405678

Pulled By: malfet

fbshipit-source-id: 073b675225b48d1732771583f8f2473e0fdcf35c
2021-02-11 14:44:03 -08:00
Nikita Shulga
76c6e12a5c Minor spelling updates (#52149)
Summary:
Add space between 'e.g.' and 'build'
'pacakge'->'package'

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52149

Reviewed By: osalpekar

Differential Revision: D26405824

Pulled By: malfet

fbshipit-source-id: 386390d3f31a9fc268b05902b9dca1deeaf626f9
2021-02-11 12:36:27 -08:00
Martin Jaggi
b6806308ac typo in docs ddp_comm_hooks.rst (#51986)
Summary:
Fixes a typo in ddp_comm_hooks.rst

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51986

Reviewed By: SciPioneer

Differential Revision: D26360314

Pulled By: mrshenli

fbshipit-source-id: 50349501c53823cbcbad0f72be7c6ac9d51a4120
2021-02-11 12:02:37 -08:00
Horace He
475278f1c0 [FX] Make some modifications to limitation section (#51928)
Summary:
![](https://i.imgur.com/P0Tq4xR.jpg)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51928

Reviewed By: jamesr66a

Differential Revision: D26329664

Pulled By: Chillee

fbshipit-source-id: 94fd7b03ca53f48b1e4633a462c6e02bb0fd2f3c
2021-02-09 18:32:28 -08:00
Jerry Zhang
0ec00c1292 [docs] Add docs for storage and tensors for quantized Tensor (#51817)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51817

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D26292464

Pulled By: jerryzh168

fbshipit-source-id: c5992deda4af949de4ea2e40edee8f22bd59b9e1
2021-02-09 13:20:56 -08:00
Akifumi Imanishi
b3fda95fe7 Add LazyBatchNormXd (#51862)
Summary:
Same diff as https://github.com/pytorch/pytorch/issues/51548 (cc. albanD)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51862

Reviewed By: izdeby

Differential Revision: D26312289

Pulled By: albanD

fbshipit-source-id: 9cdec0e0c9021c33d10d85010978c7fa5cb4dc60
2021-02-09 10:29:03 -08:00
Yi Wang
9e4f3b89c4 [Gradient Compression] Add register_comm_hook API to DDP communication hooks documentation page (#51846)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51846

The `register_comm_hook` method is defined in the DistributedDataParallel module, but it is not covered in `distributed.rst`. Since it is closely related to DDP communication hooks, add the docstrings to `ddp_comm_hooks.rst` rather than a cross-reference.
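A registration sketch using one of the built-in hooks (assumes an initialized process group; `module` and `rank` are placeholders):

```python
import torch.distributed as dist
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes dist.init_process_group(...) has already run.
model = DDP(module, device_ids=[rank])

# Compress gradients to fp16 before allreduce to cut communication volume.
model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
```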

ghstack-source-id: 121278173

Test Plan:
view locally

python_doc_test:
https://app.circleci.com/pipelines/github/pytorch/pytorch/271234/workflows/dc0b443d-8a62-4334-9b42-800c33a68553/jobs/10770636

Reviewed By: rohan-varma

Differential Revision: D26298191

fbshipit-source-id: 32e0685fd3c935cf9a2d129e6c520a94aa3e3817
2021-02-08 15:12:39 -08:00
mattip
b97a040f71 ENH: toggle TORCH_WARN_ONCE to TORCH_WARN for tests (#48560)
Summary:
Toward fixing https://github.com/pytorch/pytorch/issues/47624

~Step 1: add `TORCH_WARN_MAYBE` which can either warn once or every time in c++, and add a c++ function to toggle the value.
Step 2 will be to expose this to python for tests. Should I continue in this PR or should we take a different approach: add the python level exposure without changing any c++ code and then over a series of PRs change each call site to use the new macro and change the tests to make sure it is being checked?~

Step 1: add a python and c++ toggle to convert TORCH_WARN_ONCE into TORCH_WARN so the warnings can be caught in tests
Step 2: add a python-level decorator to use this toggle in tests
Step 3 (in future PRs): use the decorator to catch the warnings instead of `maybeWarnsRegex`
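A sketch of the intended test-side usage, assuming the toggle is exposed as `torch.set_warn_always` (as it is in current PyTorch):

```python
import warnings
import torch

torch.set_warn_always(True)   # make TORCH_WARN_ONCE fire on every call
try:
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        # ... invoke an op that emits a TORCH_WARN_ONCE warning ...
    # assert any("expected text" in str(w.message) for w in caught)
finally:
    torch.set_warn_always(False)
```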

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48560

Reviewed By: ngimel

Differential Revision: D26171175

Pulled By: mruberry

fbshipit-source-id: d83c18f131d282474a24c50f70a6eee82687158f
2021-02-08 08:21:19 -08:00
Yi Wang
4b3c99ce4a [Resubmission] Add a documentation page for DDP communication hooks (#51773)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51773

Resubmission of #51715.

Minor changes:
1) Removed "Note [Guidance to Tune ``matrix_approximation_rank`` And ``start_powerSGD_iter``]" in powerSGD_hook.py.

2) Removed the duplicate description of `torch.nn.parallel.DistributedDataParallel.register_comm_hook` in ddp_comm_hooks.rst, because it is already covered by distributed.rst.

Also updated the doc based on the comments from PowerSGD paper author Thijs Vogels.
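For context, the knobs mentioned above are configured on the hook's state object; a sketch (assumes `model` is already wrapped in DistributedDataParallel):

```python
import torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook as powerSGD

state = powerSGD.PowerSGDState(
    process_group=None,            # use the default process group
    matrix_approximation_rank=1,   # lower rank => more compression, more error
    start_powerSGD_iter=1000,      # plain allreduce for the first 1000 iterations
)
model.register_comm_hook(state, powerSGD.powerSGD_hook)
```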

It seems that `python_doc_test` was flaky. The previous error message was not informative:
https://app.circleci.com/pipelines/github/pytorch/pytorch/270682/workflows/8d186a3c-d682-46bf-b617-ad4eef5991e2/jobs/10739143, and all of the warnings also appeared on the master branch.

Rebasing to a new master branch seems to get this fixed:
https://app.circleci.com/pipelines/github/pytorch/pytorch/270696/workflows/1a3adbea-6443-4876-b87b-e17d90d41428/jobs/10740021/steps

ghstack-source-id: 121199613

Test Plan: View locally

Reviewed By: mingzhe09088

Differential Revision: D26272687

fbshipit-source-id: 6677db496a68171798940a80343f4d9a508e15db
2021-02-06 21:22:04 -08:00
Natalia Gimelshein
6c0bf28da6 [wip] doc_fix (#51825)
Summary:
tries to fix doc_test

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51825

Reviewed By: bertmaher

Differential Revision: D26295583

Pulled By: ngimel

fbshipit-source-id: 13f6e7f1675d810adfd4abd2d579e2812fe54c80
2021-02-06 11:36:36 -08:00
Vasiliy Kuznetsov
8c48af822e pytorch docs: add fake_quantize functions documentation (#51748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51748

Adding docs for `fake_quantize_per_tensor_affine` and `fake_quantize_per_channel_affine`
functions.

Note: not documenting `fake_quantize_per_tensor_affine_cachemask` and
`fake_quantize_per_channel_affine_cachemask` since they are implementation details
of `fake_quantize_per_tensor_affine` and `fake_quantize_per_channel_affine`,
and do not need to be exposed to the user at the moment.
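A usage sketch of the documented function (scale/zero_point values are illustrative):

```python
import torch

x = torch.randn(4)
# Quantize to the uint8 grid and immediately dequantize: the output stays
# fp32 but carries the quantization error, which is what QAT simulates.
y = torch.fake_quantize_per_tensor_affine(
    x, scale=0.1, zero_point=0, quant_min=0, quant_max=255
)
```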

Test Plan: Build the docs locally on Mac OS, it looks good

Reviewed By: supriyar

Differential Revision: D26270514

Pulled By: vkuzo

fbshipit-source-id: 8e3c9815a12a3427572cb4d34a779e9f5e4facdd
2021-02-05 17:53:02 -08:00
Alban Desmaison
a930162c69 Revert D26276903: [pytorch][PR] Add LazyBatchNormXd
Test Plan: revert-hammer

Differential Revision:
D26276903 (aa1fd6b45a)

Original commit changeset: 0ac706974178

fbshipit-source-id: bfe01b01cd460f1e2845ea5ef1fc1514e6b6ba54
2021-02-05 12:37:29 -08:00
Supriya Rao
59cb693c90 [quant] add docs for embedding/embedding_bag (#51770)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51770

Test Plan:
tested locally on mac

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D26279112

fbshipit-source-id: 8675d3ef712ecbe545bad0d3502181b3ccdd7f89
2021-02-05 11:43:15 -08:00
Horace He
9c2dd5775a Fixed slight bug in FX docs (#51779)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51779

Reviewed By: ngimel

Differential Revision: D26279623

Pulled By: Chillee

fbshipit-source-id: 0cd2a487ce6b80ce0d3f81e2b2334ade20d816bb
2021-02-05 11:27:39 -08:00
Akifumi Imanishi
aa1fd6b45a Add LazyBatchNormXd (#51548)
Summary:
This PR implements UninitializedBuffer and LazyBatchNormXd based on https://github.com/pytorch/pytorch/issues/44538. (cc. emcastillo and albanD)
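A sketch of the lazy-initialization behavior this adds:

```python
import torch
import torch.nn as nn

bn = nn.LazyBatchNorm2d()      # num_features is not known yet; weight/bias are
                               # UninitializedParameter, running stats are
                               # UninitializedBuffer
bn(torch.randn(4, 16, 8, 8))   # first forward infers num_features=16
print(bn.weight.shape)         # torch.Size([16])
print(bn.running_mean.shape)   # torch.Size([16])
```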

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51548

Reviewed By: zhangguanheng66

Differential Revision: D26276903

Pulled By: albanD

fbshipit-source-id: 0ac706974178363f8af075e59b41d5989418922f
2021-02-05 10:27:04 -08:00
Natalia Gimelshein
d3023d86ba Revert D26249330: [Gradient Compression] Add a documentation page for DDP communication hooks
Test Plan: revert-hammer

Differential Revision:
D26249330 (e62aabac43)

Original commit changeset: ab973390ddb7

fbshipit-source-id: d508daed76219e7ca588cf7fb38aeaaffc61acfd
2021-02-04 22:38:06 -08:00
Yi Wang
e62aabac43 [Gradient Compression] Add a documentation page for DDP communication hooks (#51715)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51715

Add a documentation page for DDP communication hooks.


Test Plan: View locally

Reviewed By: pritamdamania87

Differential Revision: D26249330

fbshipit-source-id: ab973390ddb785c5191f587a1b2b6de7d229e50e
2021-02-04 18:53:53 -08:00
guyang3532
ecfb73aaca Update docs for torch.profiler.tensorboard_trace_handler (#51636)
Summary:
![image](https://user-images.githubusercontent.com/62738430/106856207-17f8c000-66f9-11eb-80c9-844f79de423e.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51636

Reviewed By: orionr

Differential Revision: D26246309

Pulled By: ilia-cher

fbshipit-source-id: 083868e9231727638238c5f5ca31e3566d5e2e7e
2021-02-04 13:32:59 -08:00
James Reed
949ab213dd Revert "Revert D26246231: [FX] Edits after comprehensive pass over docs" (#51728)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51728

This reverts commit 6c80fd005f.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26254130

Pulled By: jamesr66a

fbshipit-source-id: f301688f85c512076fee9b83a986677ef893d2c5
2021-02-04 13:01:09 -08:00
Joel Schlosser
a0137808a7 Note on Modules for 1.8 docs (#51536)
Summary:
A new note on Modules for 1.8 documentation.

Rendered form can be seen here: https://alband.github.io/doc_view/notes/modules.html
(thanks Alban!)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51536

Reviewed By: albanD

Differential Revision: D26254282

Pulled By: jbschlosser

fbshipit-source-id: 09cbd46aa268a29b6f54fd48ffe1d6b98db0ff31
2021-02-04 11:28:11 -08:00
Alban Desmaison
6c80fd005f Revert D26246231: [FX] Edits after comprehensive pass over docs
Test Plan: revert-hammer

Differential Revision:
D26246231 (c22bc4821d)

Original commit changeset: 8d6278a9fe1d

fbshipit-source-id: fdc83289f8fe7986bc02181eec55e4e72be2d812
2021-02-04 09:26:21 -08:00
James Reed
c22bc4821d [FX] Edits after comprehensive pass over docs (#51705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51705

Pull Request resolved: #51679

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26246231

Pulled By: jamesr66a

fbshipit-source-id: 8d6278a9fe1da5e6c34eff4fedc4c7e18533fe0f
2021-02-04 08:11:07 -08:00
Taylor Robie
c8af338407 Expand benchmark utils docs (#51664)
Summary:
Add some much needed documentation on the Timer callgrind output format, and expand what is shown on the website.
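The output formats in question come from the benchmark Timer; a sketch (`collect_callgrind` requires Valgrind to be installed):

```python
from torch.utils.benchmark import Timer

t = Timer(stmt="x @ x", setup="x = torch.randn(64, 64)")
print(t.timeit(100))           # wall-time Measurement

stats = t.collect_callgrind()  # deterministic instruction counts via Valgrind
print(stats)
```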

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51664

Reviewed By: tugsbayasgalan

Differential Revision: D26246675

Pulled By: robieta

fbshipit-source-id: 7a07ff35cae07bd2da111029242a5dc8de21403c
2021-02-04 00:22:41 -08:00
Horace He
f1a63b7c10 [FX] Added how to write transformations section (#51278)
Summary:
![image](https://user-images.githubusercontent.com/6355099/106121588-b8614a00-6125-11eb-923f-fcdf575cd6cd.png)

I still need to add links to vmap/grad/decomposition, but those haven't been added to the examples folder yet.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51278

Reviewed By: zou3519

Differential Revision: D26223103

Pulled By: Chillee

fbshipit-source-id: 3ad9bf76cd3438743edecdc17c44f8d1e00e5ea1
2021-02-03 21:32:43 -08:00
Mike Ruberry
16cfe970e0 Updates linalg documentation per feature review process (#51620)
Summary:
Notes the module is in beta and that the policy for returning optionally computed tensors may change in the future.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51620

Reviewed By: heitorschueroff

Differential Revision: D26220254

Pulled By: mruberry

fbshipit-source-id: edf78fe448d948b43240e138d6d21b780324e41e
2021-02-03 16:11:57 -08:00
anjali411
34d4d79966 Autograd doc note fix (#51661)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51661

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D26230912

Pulled By: anjali411

fbshipit-source-id: 94323d7bce631a4c5781020e9650495461119ede
2021-02-03 15:08:35 -08:00
Ansley Ussery
ab4623da16 Document FX debugging (#51530)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51530

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26192641

Pulled By: ansley

fbshipit-source-id: c69ab1bb2451d8ee5a729445f52bccc66e6f431b
2021-02-02 23:17:51 -08:00
Gemfield
b48ee75507 Fix quantization doc issue (#50187)
Summary:
There was a description error in quantization.rst; this fixes it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50187

Reviewed By: mrshenli

Differential Revision: D25895294

Pulled By: soumith

fbshipit-source-id: c0b2e7ba3fadfc0977ab2d4d4e9ed4f93694cedd
2021-02-02 20:33:21 -08:00
Jeffrey Wan
b18eeaa80a Implement np.diff for single order differences (#50569)
Summary:
Implements `np.diff` for single order differences only:
 - method and function variants for `diff` and function variant for `diff_out`
 - supports out variant, but not in-place since shape changes
 - adds OpInfo entry, and test in `test_torch`
 - automatic autograd because we are using the `Math` dispatch

_Update: we only support Tensors for prepend and append in this PR. See discussion below and comments for more details._
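A usage sketch of the Python API as landed (Tensor-only prepend/append, per the note above):

```python
import torch

t = torch.tensor([1, 3, 6, 10])
print(torch.diff(t))   # tensor([2, 3, 4])

# prepend/append take Tensors; the Scalar variants are the subject below.
print(torch.diff(t, n=1, dim=-1, prepend=torch.tensor([0])))  # tensor([1, 2, 3, 4])
```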

Currently there is a quirk in the C++ API based on how this is implemented: it is not possible to specify scalar prepend and append values without also specifying all 4 arguments.

That is because the goal is to match NumPy's diff signature of `diff(int n=1, int dim=-1, Union[Scalar, Tensor] prepend=None, Union[Scalar, Tensor] append=None)`, where all arguments are optional, positional, and in the correct order.
There are a couple of blockers. One is C++ ambiguity. This prevents us from simply doing `diff(int n=1, int dim=-1, Scalar? prepend=None, Tensor? append=None)` etc. for all combinations of {Tensor, Scalar} x {Tensor, Scalar}.

Why not make prepend and append non-default arguments and then write out the whole power set of {Tensor, Scalar, omitted} x {Tensor, Scalar, omitted}, you might ask? Aside from having to write 18 overloads, this is actually illegal because arguments with defaults must come after arguments without defaults. That would mean having to write `diff(prepend, append, n, dim)`, which is not desired. Finally, writing out the entire power set of all arguments n, dim, prepend, append is out of the question because that would involve 2 * 2 * 3 * 3 = 36 combinations. And if we include the out variant, that would be 72 overloads!

With this in mind, the current implementation actually still does `diff(int n=1, int dim=-1, Scalar? prepend=None, Tensor? append=None)`, but also makes use of `cpp_no_default_args`. The idea is to have only one of the 4 {Tensor, Scalar} x {Tensor, Scalar} overloads provide default arguments for the C++ API, and add `cpp_no_default_args` for the remaining 3 overloads. With this, the Python API works as expected, but some calls such as `diff(prepend=1)` won't work in the C++ API.

We can optionally add 18 more overloads that cover the {dim, n, no-args} x {scalar-tensor, tensor-scalar, scalar-scalar} x {out, non-out} cases for the C++ API. _[edit: counting is hard - just realized this number is still wrong. We should try to count the cases we do cover instead and subtract that from the total: (2 * 2 * 3 * 3) - (3 + 2^4) = 17. 3 comes from the 3 of 4 combinations of {tensor, scalar}^2 that we declare to be `cpp_no_default_args`, and the one remaining combination that has default arguments covers 2^4 cases. So the actual count is 34 additional overloads to support all possible calls]_

_[edit: thanks to https://github.com/pytorch/pytorch/issues/50767 hacky_wrapper is no longer necessary; it is removed in the latest commit]_
hacky_wrapper was also necessary here because `Tensor?` will cause dispatch to look for the `const optional<Tensor>&` schema but also generate a `const Tensor&` declaration in Functions.h. hacky_wrapper allows us to define our function as `const Tensor&` but wraps it in optional for us, which avoids errors at both link and load time.

_[edit: rewrote the above to improve clarity and correct the fact that we actually need 18 more overloads (26 total), not 18 in total to complete the c++ api]_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50569

Reviewed By: H-Huang

Differential Revision: D26176105

Pulled By: soulitzer

fbshipit-source-id: cd8e77cc2de1117c876cd71c29b312887daca33f
2021-02-02 20:25:16 -08:00
anjali411
642afcb168 Add sgn to torch.rst so that it appears in the built docs (#51479)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51479

Fixes https://github.com/pytorch/pytorch/issues/50146
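For reference, the documented behavior (`torch.sgn` extends `sign` to complex inputs):

```python
import torch

z = torch.tensor([3 + 4j, 0j])
print(torch.sgn(z))  # tensor([0.6000+0.8000j, 0.0000+0.0000j]); z / |z|, and 0 for z == 0
```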

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26179734

Pulled By: anjali411

fbshipit-source-id: 1cda9a3dc9ce600e585900eea70fbecac0635d5c
2021-02-01 12:43:06 -08:00
James Reed
609f76f27a [WIP][FX] Add Interpreter and Transformer (#50420)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50420

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25880330

Pulled By: jamesr66a

fbshipit-source-id: 27d34888e36e39924821fed891d79f969237a104
2021-02-01 11:40:12 -08:00
Mike Ruberry
40c0fffb4b Fixes docs (#51439)
Summary:
pytorch_python_doc_build is failing with:

```
Jan 31 04:30:45 /var/lib/jenkins/workspace/docs/source/notes/broadcasting.rst:6: WARNING: 'any' reference target not found: numpy.doc.broadcasting
```

This removes the incorrect reference and adds an updated link.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51439

Reviewed By: ngimel

Differential Revision: D26170232

Pulled By: mruberry

fbshipit-source-id: 829999db52e1e860d36d626d0d9f26e31283d14b
2021-01-31 22:00:26 -08:00
Natalia Gimelshein
7ab89f58be expose memory_fraction and gpu_process docs (#51372)
Summary:
Per title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51372

Reviewed By: mruberry

Differential Revision: D26157787

Pulled By: ngimel

fbshipit-source-id: 97eac5f12881a2bf62c251f6f7eaf65fdbe34056
2021-01-29 18:22:34 -08:00
anjali411
fd9a85d21b Doc update for complex numbers (#51129)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51129

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26094947

Pulled By: anjali411

fbshipit-source-id: 4e1cdf8915a8c6a86ac3462685cdce881e1bcffa
2021-01-27 07:32:26 -08:00
mattip
b60494000b DOC: update left navbar links for vision and text (#51103)
Summary:
A tiny PR to update the links in the left-hand navbar under Libraries. The canonical links for vision and text are `https://pytorch.org/vision/stable` and `https://pytorch.org/text/stable`, respectively. The links without `/stable` work via a redirect; linking directly is cleaner.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51103

Reviewed By: izdeby

Differential Revision: D26079760

Pulled By: heitorschueroff

fbshipit-source-id: df1fa64d7895831f4e6242445bae02c1faa5e4dc
2021-01-27 07:19:00 -08:00
Emilio Castillo
233e4ebdb6 Implement autograd functions for c10d communication operations (#40762)
Summary:
Closes https://github.com/pytorch/pytorch/issues/40702, Fixes https://github.com/pytorch/pytorch/issues/40690

Currently WIP, but I would appreciate some feedback. Functions should be double-differentiable.

Contrary to b35cdc5200/torch/nn/parallel/_functions.py, this PR generates a list of tensors instead of aggregating the received data in a single tensor. Is this behavior correct?

Thanks!
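A sketch of the resulting autograd-aware collectives, assuming they are exposed under `torch.distributed.nn` as in current PyTorch (an initialized process group is required, so the calls are shown commented):

```python
import torch
import torch.distributed.nn as dist_nn

# x = torch.randn(4, requires_grad=True)
# gathered = dist_nn.all_gather(x)              # one tensor per rank
# torch.stack(list(gathered)).sum().backward()  # gradients flow through the collective
```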

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40762

Reviewed By: glaringlee

Differential Revision: D24758889

Pulled By: mrshenli

fbshipit-source-id: 79285fb4b791cae3d248f34e2aadb11c9ab10cce
2021-01-26 07:52:51 -08:00
Pritam Damania
68c218547c Add documentation page for pipeline parallelism. (#50791)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50791

Add a dedicated pipeline parallelism doc page explaining the APIs and
the overall value of the module.
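A sketch of the API the page documents (requires two GPUs and an initialized RPC framework; values are illustrative):

```python
import torch
import torch.nn as nn
from torch.distributed.pipeline.sync import Pipe

# torch.distributed.rpc.init_rpc("worker", rank=0, world_size=1)  # required first

fc1 = nn.Linear(16, 8).cuda(0)   # stage 0 on GPU 0
fc2 = nn.Linear(8, 4).cuda(1)    # stage 1 on GPU 1
model = Pipe(nn.Sequential(fc1, fc2), chunks=8)   # 8 micro-batches per input
output_rref = model(torch.randn(32, 16).cuda(0))  # forward returns an RRef
```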
ghstack-source-id: 120257168

Test Plan:
1) View locally
2) waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25967981

fbshipit-source-id: b607b788703173a5fa4e3526471140506171632b
2021-01-25 13:47:13 -08:00
Hameer Abbasi
f7b339d11c Clarify wording around overrides subclasses. (#51031)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47117

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51031

Reviewed By: bdhirsh

Differential Revision: D26047498

Pulled By: albanD

fbshipit-source-id: dd0a7d9f97c0f6469b3050d2e3b4473f1bee3820
2021-01-25 08:19:13 -08:00
James Reed
789f6f1250 [FX] Minor docs changes (#50966)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50966

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26029101

Pulled By: jamesr66a

fbshipit-source-id: 4374771be74d0a4d05fdd29107be5357130c2a76
2021-01-22 16:23:19 -08:00
Kurt Mohler
8ab1a1495d Rename set_deterministic to use_deterministic_algorithms (#49904)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49100
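The renamed API in use, a minimal sketch:

```python
import torch

# Replaces the old torch.set_deterministic name.
torch.use_deterministic_algorithms(True)
# Ops without a deterministic implementation now raise a RuntimeError
# instead of silently running nondeterministically.
```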

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49904

Reviewed By: ezyang, mrshenli

Differential Revision: D25956761

Pulled By: mruberry

fbshipit-source-id: 86a59289d50825a0ebbd7c358b483c8d8039ffa6
2021-01-22 11:27:07 -08:00
M.L. Croci
8eb90d4865 Add Gaussian NLL Loss (#50886)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48520.

cc albanD (This is a clean retry PR https://github.com/pytorch/pytorch/issues/49807)
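A usage sketch of the new loss (shapes are illustrative; `var` must be positive):

```python
import torch
import torch.nn as nn

loss_fn = nn.GaussianNLLLoss()
mean = torch.randn(5, 2, requires_grad=True)  # predicted means
target = torch.randn(5, 2)                    # observed values
var = torch.ones(5, 2, requires_grad=True)    # predicted variances
loss = loss_fn(mean, target, var)
loss.backward()
```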

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50886

Reviewed By: ejguan

Differential Revision: D26007435

Pulled By: albanD

fbshipit-source-id: 88fe91b40dea6f72e093e6301f0f04fcc842d2f0
2021-01-22 06:56:49 -08:00
Jerry Zhang
b5242d66b6 [quant][doc] Adding a table comparing eager and fx graph mode (#50413)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50413

Test Plan:
.

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D25886960

fbshipit-source-id: b99178d3900eedec920dbff28ab956f97be2661a
2021-01-21 13:43:42 -08:00
James Reed
d0e942f9a7 [FX][docs] Add limitations of symbolic tracing (#50638)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50638

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25933780

Pulled By: jamesr66a

fbshipit-source-id: 0aa97ea05203fbcb707b0e947a465e206104b7df
2021-01-20 21:42:16 -08:00
kiyosora
4803eaf502 Implement NumPy-like function torch.fmax() & torch.fmin() (#49312)
Summary:
- Implements the NumPy-like functions `torch.fmax()` and `torch.fmin()` recommended in https://github.com/pytorch/pytorch/issues/48440
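A sketch of the NaN-ignoring semantics:

```python
import torch

a = torch.tensor([1.0, float("nan"), 3.0])
b = torch.tensor([2.0, 2.0, float("nan")])

print(torch.fmax(a, b))  # tensor([2., 2., 3.]) -- a NaN is ignored if the other value is real
print(torch.fmin(a, b))  # tensor([1., 2., 3.])
```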

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49312

Reviewed By: izdeby

Differential Revision: D25887246

Pulled By: heitorschueroff

fbshipit-source-id: d762eeff8b328bfcbe7d48b7ee9d2da72c249691
2021-01-20 06:45:25 -08:00