Commit Graph

425 Commits

Author SHA1 Message Date
Natalia Gimelshein
d3023d86ba Revert D26249330: [Gradient Compression] Add a documentation page for DDP communication hooks
Test Plan: revert-hammer

Differential Revision:
D26249330 (e62aabac43)

Original commit changeset: ab973390ddb7

fbshipit-source-id: d508daed76219e7ca588cf7fb38aeaaffc61acfd
2021-02-04 22:38:06 -08:00
Rohan Varma
c3f2f3294e [RPC] Add option to make rref.get_type not block. (#50977)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50977

Adds a `blocking` flag that can be set to False to make this API return a `Future` to the type. This makes the function non-blocking, mostly in preparation for a future change that will allow `rref.rpc_async()` to be completely non-blocking (it currently calls, and waits for, this function, which issues an RPC inline).
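For illustration, a minimal sketch of how the new flag could be used (assumes RPC has already been initialized on this worker and that the flag is exposed on the private `rref._get_type()` helper as described above):

```python
import torch
import torch.distributed.rpc as rpc

# Assumes rpc.init_rpc(...) has already been called and "worker1" exists.
rref = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))

# Existing behavior: blocks until the type of the referenced value is known.
typ = rref._get_type()

# New behavior: returns a torch.futures.Future to the type instead of blocking.
fut = rref._get_type(blocking=False)
assert fut.wait() == typ
```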
ghstack-source-id: 121021433

Test Plan: Modified UT

Reviewed By: mrshenli

Differential Revision: D25944582

fbshipit-source-id: e3b48a52af2d4578551a30ba6838927b489b1c03
2021-02-04 20:18:50 -08:00
Yi Wang
e62aabac43 [Gradient Compression] Add a documentation page for DDP communication hooks (#51715)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51715

Add a documentation page for DDP communication hooks.

Screenshot:

{F369781049}

Test Plan: View locally

Reviewed By: pritamdamania87

Differential Revision: D26249330

fbshipit-source-id: ab973390ddb785c5191f587a1b2b6de7d229e50e
2021-02-04 18:53:53 -08:00
Vincent Quenneville-Belair
50d903f19f [optim] make functional api be private (#51316) (#51665)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51665

This reverts commit 896f82aa92.

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D26232608

Pulled By: vincentqb

fbshipit-source-id: ca006baf4fb672c11c1bb003c39a29cbadb63dd3
2021-02-03 17:59:05 -08:00
Yi Wang
43df03de13 [Gradient Compression] Replace torch.sqrt(torch.sum(col ** 2)) by torch.norm() (#51629)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51629

Leverage the existing util functions as much as possible for a potential performance gain.
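For reference, a standalone sketch of the equivalence this change relies on (illustrative only, not the hook code itself):

```python
import torch

col = torch.randn(1024)

# Hand-rolled L2 norm vs. the built-in util; torch.norm() runs as a single fused kernel.
manual = torch.sqrt(torch.sum(col ** 2))
builtin = torch.norm(col)

assert torch.allclose(manual, builtin)
```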

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120919883

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

No performance regression:
f248664994 uses `torch.norm()`
```
total:
  32 GPUs -- 32 GPUs: p25:  1.050    30/s  (batch size 32)
                      p50:  1.230    26/s  (batch size 32)
                      p75:  1.449    22/s  (batch size 32)
                      p90:  1.611    19/s  (batch size 32)
                      p95:  1.702    18/s  (batch size 32)

backward:
  32 GPUs -- 32 GPUs: p25:  0.769    41/s  (batch size 32)
                      p50:  0.920    34/s  (batch size 32)
                      p75:  1.139    28/s  (batch size 32)
                      p90:  1.322    24/s  (batch size 32)
                      p95:  1.440    22/s  (batch size 32)
```

f248678690 does not use `torch.norm()`
```
total:
  32 GPUs -- 32 GPUs: p25:  1.056    30/s  (batch size 32)
                      p50:  1.249    25/s  (batch size 32)
                      p75:  1.443    22/s  (batch size 32)
                      p90:  1.608    19/s  (batch size 32)
                      p95:  1.711    18/s  (batch size 32)

backward:
  32 GPUs -- 32 GPUs: p25:  0.777    41/s  (batch size 32)
                      p50:  0.939    34/s  (batch size 32)
                      p75:  1.127    28/s  (batch size 32)
                      p90:  1.322    24/s  (batch size 32)
                      p95:  1.448    22/s  (batch size 32)
```

Reviewed By: pritamdamania87

Differential Revision: D26219835

fbshipit-source-id: 31d8ad3401d4efced4a6069f4f1e169ea3372697
2021-02-03 13:39:11 -08:00
Vincent Quenneville-Belair
896f82aa92 [optim] make functional api be private (#51316)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51316

Make the optim functional API private until we release it as beta

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D26213469

fbshipit-source-id: b0fd001a8362ec1c152250bcd57c7205ed893107
2021-02-03 09:29:33 -08:00
Alban Desmaison
c311b8961a Revert D26113953: [pytorch][PR] [ZeroRedundancyOptimizer] Elastic and pytorch compatible checkpoints
Test Plan: revert-hammer

Differential Revision:
D26113953 (bbe18e3527)

Original commit changeset: 030bfeee2c34

fbshipit-source-id: 6c1494ad01c2f96a15601329b4fce3fef4b38a01
2021-02-03 06:12:21 -08:00
Benjamin Lefaudeux
bbe18e3527 [ZeroRedundancyOptimizer] Elastic and pytorch compatible checkpoints (#50956)
Summary:
- Makes it possible to use non-sharded optimizer checkpoints (as long as the model/param groups are the same, of course)
- Makes it possible to save with a given world size and load with another world size (see the sketch below)
- Use Torch Distributed's built-in broadcast object list instead of an ad-hoc version
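A rough sketch of the checkpoint flow described above (method names follow the current `torch.distributed.optim.ZeroRedundancyOptimizer` API and are assumptions for this commit; `optimizer` is assumed to be a `ZeroRedundancyOptimizer` instance):

```python
import torch
import torch.distributed as dist

# Rank 0 consolidates the sharded optimizer state and saves a regular,
# non-sharded checkpoint that can be reloaded under a different world size.
optimizer.consolidate_state_dict(to=0)          # gather all shards onto rank 0
if dist.get_rank() == 0:
    torch.save(optimizer.state_dict(), "optim_ckpt.pt")

# Later, possibly with a different number of ranks:
optimizer.load_state_dict(torch.load("optim_ckpt.pt"))
```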

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50956

Reviewed By: malfet

Differential Revision: D26113953

Pulled By: blefaudeux

fbshipit-source-id: 030bfeee2c34c2d987590d45dc8efe05515f2e5c
2021-02-02 14:32:13 -08:00
Yi Wang
79e7544cb4 [Gradient Compression] Check start_PowerSGD_iter > 1 and add guidance on tuning PowerSGD configs. (#51427)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51427

A user reported that `start_PowerSGD_iter` failed when it was set to 1. This is because allocating memory for error tensors somehow overlaps with the bucket rebuilding process at iteration 1.

Check `start_PowerSGD_iter > 1` instead of `start_PowerSGD_iter >= 1`.

Also add a unit test, `test_invalid_powerSGD_state`, and some guidance on tuning PowerSGD configs. A configuration sketch follows.
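A minimal sketch of configuring the hook with a valid `start_powerSGD_iter` (argument and hook names follow the current `powerSGD_hook` module and may have differed at this commit; `ddp_model` is assumed to be an already-constructed DistributedDataParallel module):

```python
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# Assumes the default process group has been initialized.
state = powerSGD.PowerSGDState(
    process_group=None,            # use the default process group
    matrix_approximation_rank=1,
    start_powerSGD_iter=2,         # must be > 1; a value of 1 is rejected after this change
)
ddp_model.register_comm_hook(state, powerSGD.powerSGD_hook)
```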

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120834126

Test Plan: buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_invalid_powerSGD_state

Reviewed By: rohan-varma

Differential Revision: D26166897

fbshipit-source-id: 34d5b64bb3dd43acb61d792626c70e6c8bb44a5d
2021-02-02 04:30:24 -08:00
Yi Wang
c08078031f [Gradient Compression] Allow BatchedPowerSGD to run vanilla allreduce for the first K iterations (#51270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51270

Similar to #50973, allow the batched version to run vanilla allreduce for the first K iterations.

This may be useful if the batched version can be applied to some use cases where the accuracy requirement is not very strict.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120725858

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

baseline: f248001754
batched PowerSGD: f246960752

The training time was reduced from 54m48s to 30m33s, and the accuracy is approximately the same: 44.21 vs 44.35

Reviewed By: rohan-varma

Differential Revision: D26077709

fbshipit-source-id: 6afeefad7a3fbdd7da2cbffb56dfbad855a96cb5
2021-02-01 15:26:29 -08:00
Yi Wang
0831984ed5 [Resubmission][Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future (#51400)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51400

Resubmission of #51094

Address https://github.com/pytorch/pytorch/pull/50973#discussion_r564229818

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120725690

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_default_ddp_comm_hooks_nccl

Reviewed By: rohan-varma

Differential Revision: D26162333

fbshipit-source-id: ccc2eae5383a23673e00d61cb5570fb8bf749cd0
2021-02-01 11:34:41 -08:00
Rohan Varma
c255628134 [Collective APIs] Make python object collective API args consistent (#50625)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50625

Make API signatures consistent and provide default arguments similar to
the tensor collectives. A usage sketch follows.
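A hedged sketch of the object collectives with the now-consistent defaults (assumes the default process group has been initialized; when `group` is omitted it falls back to the default WORLD group, matching the tensor collectives):

```python
import torch.distributed as dist

# Gather an arbitrary picklable object from every rank; `group` is omitted.
gathered = [None] * dist.get_world_size()
dist.all_gather_object(gathered, {"rank": dist.get_rank()})

# Broadcast a list of objects from rank 0 with the same defaulting behavior.
objs = [{"lr": 0.1}] if dist.get_rank() == 0 else [None]
dist.broadcast_object_list(objs, src=0)
```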
ghstack-source-id: 120718121

Test Plan: CI

Reviewed By: wanchaol

Differential Revision: D25932012

fbshipit-source-id: d16267e236a65ac9d55e19e2178f9d9267b08a20
2021-01-30 19:47:16 -08:00
Wanchao Liang
662b6d2115 [dist_optim] update the doc of DistributedOptimizer (#51314)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51314

Update the doc of DistributedOptimizer to include TorchScript enablement information

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26156032

Pulled By: wanchaol

fbshipit-source-id: 1f3841f55918a5c2ed531cf6aeeb3f6e3a09a6a8
2021-01-29 17:12:52 -08:00
Iurii Zdebskyi
5a406c023e Revert D26070147: [Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future
Test Plan: revert-hammer

Differential Revision:
D26070147 (e7b3496232)

Original commit changeset: 8c9339f1511e

fbshipit-source-id: fa1e9582baec9759a73b3004be9bb19bdeb6cd34
2021-01-29 09:06:24 -08:00
Yi Wang
e7b3496232 [Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future (#51094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51094

Address https://github.com/pytorch/pytorch/pull/50973#discussion_r564229818

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120619680

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_default_ddp_comm_hooks_nccl

Reviewed By: rohan-varma

Differential Revision: D26070147

fbshipit-source-id: 8c9339f1511e8f24cc906b9411cfe4850a5a6d81
2021-01-28 19:03:18 -08:00
Yi Wang
9d731e87de [Gradient Compression] Explicitly specify the dtype of the error tensor (#50985)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50985

Explicitly specify the dtype of the error tensor when it is initialized with zeros.

Previously, if the dtype of the input tensor was FP16, the error tensor was still created in FP32, although it would later be assigned from another FP16 tensor (`input_tensor_cp` - `input_tensor`).

This change makes the dtype of the error tensor explicit.

Additionally, explicitly specify the dtype when the rank-1 tensor buffer is empty.
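A minimal sketch of the dtype fix described above (tensor names are illustrative, not the hook's actual variables):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
input_tensor = torch.randn(1024, device=device).to(torch.float16)

# Before: the error tensor defaulted to float32 even for FP16 inputs.
error_fp32 = torch.zeros(input_tensor.shape, device=input_tensor.device)

# After: match the input's dtype explicitly.
error = torch.zeros(
    input_tensor.shape, device=input_tensor.device, dtype=input_tensor.dtype
)
assert error.dtype == input_tensor.dtype
```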

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120377786

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D26034988

fbshipit-source-id: e0d323d0b77c6a2478cdbe8b31a1946ffd1a07da
2021-01-28 19:03:14 -08:00
Yi Wang
b619d37bb4 [Gradient Compression] Simplify the implementation of error feedback and warm-start (#50981)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50981

Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect the caching of per-variable tensors.

Previously, the cached tensors used for error feedback and warm-start needed to be rebuilt later, because their corresponding input tensors' shapes change after the bucket rebuilding process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120617971

Test Plan: real run

Reviewed By: rohan-varma

Differential Revision: D26034418

fbshipit-source-id: e8744431c7f3142d75b77b60110e6861c2ff5c14
2021-01-28 18:59:40 -08:00
Pritam Damania
96cedefd8e [Pipe] Refactor convert_to_balance under non-test package. (#50860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50860

Since fairscale.nn.Pipe still uses the 'balance' and 'devices' parameters, other
frameworks like fairseq still rely on them. As a result, the
`convert_to_balance` method is a useful utility for migrating to PyTorch
Pipe without changing a lot of code in those frameworks.

In addition, I've renamed the method to better describe what it
does and allowed an optional devices parameter.
ghstack-source-id: 120430775

Test Plan:
1) waitforbuildbot
2) Tested with fairseq

Reviewed By: SciPioneer

Differential Revision: D25987273

fbshipit-source-id: dccd42cf1a74b08c876090d3a10a94911cc46dd8
2021-01-28 12:10:21 -08:00
Wanchao Liang
3562ca2da2 [dist_optim] add warning to distributed optimizer (#50630)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50630

Add a warning log to the distributed optimizer, to warn the user when the optimizer
is created without TorchScript support.

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932777

Pulled By: wanchaol

fbshipit-source-id: 8db3b98bdd27fc04c5a3b8d910b028c0c37f138d
2021-01-26 10:30:55 -08:00
Emilio Castillo
233e4ebdb6 Implement autograd functions for c10d communication operations (#40762)
Summary:
Closes https://github.com/pytorch/pytorch/issues/40702, Fixes https://github.com/pytorch/pytorch/issues/40690

Currently wip. But I would appreciate some feedback. Functions should be double-differentiable.

Contrary to b35cdc5200/torch/nn/parallel/_functions.py,
this PR generates a list of tensors instead of aggregating the received data into a single tensor. Is this behavior correct?

Thanks!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40762

Reviewed By: glaringlee

Differential Revision: D24758889

Pulled By: mrshenli

fbshipit-source-id: 79285fb4b791cae3d248f34e2aadb11c9ab10cce
2021-01-26 07:52:51 -08:00
Yi Wang
9f19843d19 [Gradient Compression] Typo fixes in PowerSGD (#50974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50974

Typo fixes.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120257221

Test Plan: N/A

Reviewed By: rohan-varma

Differential Revision: D26031679

fbshipit-source-id: 9d049b50419a3e40e53f7f1275a441e31b87717b
2021-01-25 22:55:54 -08:00
Yi Wang
ffaae32d60 [Gradient Compression] Allow PowerSGD to run vanilla allreduce for the first K iterations (#50973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50973

This can extend the original PowerSGD method to a hybrid approach: vanilla allreduce + PowerSGD. This can help further improve the accuracy, at the cost of a lower speedup.

Also add more comments on the fields in `PowerSGDState`.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120257202

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D26031478

fbshipit-source-id: d72e70bb28ba018f53223c2a4345306980b3084e
2021-01-25 22:38:39 -08:00
Yanli Zhao
250c71121b Create a DDPLoggingData and expose it to python interface (#50622)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50622

1. Define a DDPLoggingData struct that is the placeholder for all the DDP-related logging fields
2. Put the DDPLoggingData struct in the C10 directory so that it can be easily imported by c10 and torch files
3. Expose the get_ddp_logging_data() method in Python so that users can retrieve the logging data and dump it in their applications (a usage sketch follows this list)
4. Unit tests verify that the logging data can be set and retrieved as expected
5. Follow-ups will add more logging fields such as perf stats, internal states, env variables, etc.
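A hedged usage sketch for item 3 above (the exact method name and visibility are assumptions; later releases expose it as the private `_get_ddp_logging_data()` on the DDP module):

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes the default process group has been initialized.
model = DDP(nn.Linear(8, 8))

# Retrieve the DDPLoggingData fields collected by DDP and dump them.
logging_data = model._get_ddp_logging_data()
print(logging_data)
```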
ghstack-source-id: 120275870

Test Plan: unit tests

Reviewed By: SciPioneer

Differential Revision: D25930527

fbshipit-source-id: 290c200161019c58e28eed9a5a2a7a8153113f99
2021-01-25 15:23:07 -08:00
Pritam Damania
68c218547c Add documentation page for pipeline parallelism. (#50791)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50791

Add a dedicated pipeline parallelism doc page explaining the APIs and
the overall value of the module.
ghstack-source-id: 120257168

Test Plan:
1) View locally
2) waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25967981

fbshipit-source-id: b607b788703173a5fa4e3526471140506171632b
2021-01-25 13:47:13 -08:00
Wanchao Liang
2c3c2a4b7a [dist_optim] add distributed functional AdamW optimizer (#50620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50620

Add TorchScript compatible AdamW functional optimizer to distributed optimizer

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932774

Pulled By: wanchaol

fbshipit-source-id: 64eb4aeaa3cab208d0ebbec7c4d91a9d43951947
2021-01-23 01:04:45 -08:00
Wanchao Liang
3f982e56b1 [dist_optim] add distributed functional RMSprop optimizer (#50619)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50619

Add TorchScript compatible RMSprop functional optimizer to distributed optimizer

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932775

Pulled By: wanchaol

fbshipit-source-id: bd4854f9f95a740e02a1bebe24f780488460ba4d
2021-01-23 01:04:41 -08:00
Wanchao Liang
6c81b4d917 [dist_optim] add distributed functional Adadelta optimizer (#50623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50623

Add TorchScript compatible Adadelta functional optimizer to distributed optimizer

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932772

Pulled By: wanchaol

fbshipit-source-id: d59b04e5f0b6bab7e0d1c5f68e66249a65958e0b
2021-01-23 01:04:36 -08:00
Wanchao Liang
cd2067539e [dist_optim] add distributed functional sgd optimizer (#50618)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50618

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932778

Pulled By: wanchaol

fbshipit-source-id: 8df3567b477bc5ba3556b8c5294cd3da5db963ad
2021-01-23 01:04:32 -08:00
Wanchao Liang
5cbe1e4933 [dist_optim] add distributed functional Adam optimizer (#50624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50624

Add TorchScript compatible Adam functional optimizer to distributed optimizer

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932770

Pulled By: wanchaol

fbshipit-source-id: cab3f1164c76186969c284a2c52481b79bbb7190
2021-01-23 01:01:37 -08:00
Rohan Varma
7e10fbfb71 Add note about TCP init in RPC tests to contributing doc. (#50861)
Summary:
We added this option in https://github.com/pytorch/pytorch/pull/48248, but it would be good to document it somewhere as well, hence adding it to this contributing doc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50861

Reviewed By: mrshenli

Differential Revision: D26014505

Pulled By: rohan-varma

fbshipit-source-id: c1321679f01dd52038131ff571362ad36884510a
2021-01-22 13:28:03 -08:00
Yi Wang
439afda090 [Gradient Compression] Fix warm-start for PowerSGD layerwise compression (#50283)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50283

Realized that, for the layerwise compression, the previous warm-start implementation only skips memory allocations, but does not skip filling Qs with random values.

Also fix the unit test in distributed_test.py. Previously the process group was not created correctly, and no communication occurred in test_DistributedDataParallel_powerSGD_ddp_comm_hook.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120101220

Test Plan:
Verified the fix by adding some logging locally.

Also verified no NE diff on Ads 1x.

Reviewed By: rohan-varma

Differential Revision: D25846222

fbshipit-source-id: 1ebeeb55ceba64d4d904ea6ac1bb42b1b2241520
2021-01-20 22:31:44 -08:00
Benjamin Lefaudeux
87fb3707d9 ZeroRedundancyOptimizer: an implementation of a standalone sharded optimizer wrapper (#46750)
Summary:
Implement the first stage of ZeRO, sharding of the optimizer state, as described in [this blog post](https://www.microsoft.com/en-us/research/blog/zero-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/) and [this paper](https://arxiv.org/abs/1910.02054). This implementation is completely independent from the [DeepSpeed](https://github.com/microsoft/DeepSpeed) framework, and aims at providing ZeRO-compliant building blocks within the PyTorch scheme of things.

This works by:
- acting as a wrapper to a PyTorch optimizer. ZeROptimizer does not optimize anything by itself; it only shards optimizer state for distributed jobs
- each rank distributes parameters according to a given partitioning scheme (could be updated), and owns the update of a given shard only
- the .step() is called on each rank as expected; the fact that the optimizer actually works on a shard of the model is not visible from the outside
- when the update is completed, each rank broadcasts the updated model shard to all the other ranks (a usage sketch follows this list)
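A minimal usage sketch of the wrapper described in the bullets above (the constructor argument name `optimizer_class` follows the current torch.distributed.optim API and is an assumption for this commit):

```python
import torch
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes the default process group has been initialized (e.g. via init_process_group).
model = DDP(torch.nn.Linear(32, 32))

# Each rank keeps optimizer state only for its own shard of the parameters;
# .step() updates that shard and broadcasts the refreshed parameters to the peers.
optimizer = ZeroRedundancyOptimizer(
    model.parameters(), optimizer_class=torch.optim.SGD, lr=0.01
)

loss = model(torch.randn(4, 32)).sum()
loss.backward()
optimizer.step()
```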

This can be used with DDP, although some communications are wasted in that case (gradients are all-reduced to all ranks). This implementation was initially developed in [Fairscale](https://github.com/facebookresearch/fairscale), and can also be used with an optimized DDP which only reduces to the relevant ranks. More context on ZeRO and PyTorch can be found in [this RFC](https://github.com/pytorch/pytorch/issues/42849)

The API with respect to loading and saving the state is a known pain point and should probably be discussed and updated. Other possible follow-ups include integrating more closely to a [modularized DDP](https://github.com/pytorch/pytorch/issues/37002), [making the checkpoints partition-agnostic](https://github.com/facebookresearch/fairscale/issues/164), [exposing a gradient clipping option](https://github.com/facebookresearch/fairscale/issues/98) and making sure that mixed precision states are properly handled.

original authors include msbaines, min-xu-ai and myself

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46750

Reviewed By: mruberry

Differential Revision: D25958918

Pulled By: blefaudeux

fbshipit-source-id: 14280f2fd90cf251eee8ef9ac0f1fa6025ae9c50
2021-01-20 14:36:16 -08:00
chengjun
4a8ef4525e Add new backend type for Intel heterogeneous computation platform. (#49786)
Summary:
Add a new device type 'XPU' ('xpu' in lower case) to PyTorch. Changes are needed for code related to the device model and kernel dispatch, e.g. DeviceType, Backend and DispatchKey.

https://github.com/pytorch/pytorch/issues/48246

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49786

Reviewed By: mrshenli

Differential Revision: D25893962

Pulled By: ezyang

fbshipit-source-id: 7ff0a316ee34cf0ed6fc7ead08ecdeb7df4b0052
2021-01-20 08:15:18 -08:00
Luca Wehrstedt
05036564cf Remove workaround for TensorPipe failing to get device of CUDA ptr (#50580)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50580

Due to what looked like a bug in CUDA, TensorPipe was sometimes failing to auto-detect the device of a CUDA pointer. A workaround, on the PyTorch side, was to always initialize a CUDA context on device 0. Now that TensorPipe has fixed that we can undo the workaround.

Reviewed By: mrshenli

Differential Revision: D25952929

fbshipit-source-id: 57a5f73241f7371661855c767e44a64ca3b84a74
2021-01-19 12:18:00 -08:00
Rohan Varma
d64184ef4c [RPC] Support timeout for RRef proxy functions (#50499)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50499

Adds a timeout API to the following functions:
```
rref.rpc_sync()
rref.rpc_async()
rref.remote()
```
so that RPCs initiated by these proxy calls can be appropriately timed out similar to the regular RPC APIs. Timeouts are supported in the following use cases:

1. rpc.remote finishes in time and successfully, but the function run by rref.rpc_async() is slow and times out. A timeout error will be raised.
2. The function run by rref.rpc_async() is fast, but rpc.remote() is slow/hanging. When rref.rpc_async() is called, it will still time out with the passed-in timeout (and won't block waiting for rpc.remote() to succeed, which is what happens currently). However, the timeout will occur during the future creation itself (and not the wait), since it calls `rref._get_type`, which blocks. We can consider making this non-blocking by modifying rref._get_type to return a future, although that is likely a larger change. A usage sketch follows these cases.
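A usage sketch of the timeout-enabled proxy calls (assumes an initialized RPC framework and an existing `rref`; `some_method` and `arg` are illustrative names, and the timeout is in seconds, matching the regular RPC APIs):

```python
# Each proxy call below issues an RPC that is timed out after 5 seconds.
result = rref.rpc_sync(timeout=5.0).some_method(arg)     # blocking call
fut = rref.rpc_async(timeout=5.0).some_method(arg)       # returns a Future
other_rref = rref.remote(timeout=5.0).some_method(arg)   # returns another RRef

result_async = fut.wait()
```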

Test Plan: Added UT

Reviewed By: wanchaol

Differential Revision: D25897495

fbshipit-source-id: f9ad5b8f75121f50537677056a5ab16cf262847e
2021-01-15 13:23:23 -08:00
Rohan Varma
ab1ba8f433 [RPC] Support timeout in rref._get_type() (#50498)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50498

This change is mostly needed for the next diff in this stack, where
rref._get_type() is called in the rpc_async/rpc_sync RRef proxy function and
can block indefinitely if there is no timeout. It will also be useful to have a
timeout argument when we publicize this API to keep it consistent with other
RPC APIs.
ghstack-source-id: 119859767

Test Plan: Added UT

Reviewed By: pritamdamania87

Differential Revision: D25897588

fbshipit-source-id: 2e84aaf7e4faecf80005c78ee2ac8710f387503e
2021-01-15 13:18:39 -08:00
Shen Li
30e45bb133 Enable GPU-to-GPU comm in TensorPipeAgent (#44418)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44418

This commit uses TensorPipe's cuda_ipc channel to conduct
cross-process same-machine GPU-to-GPU communication. On the sender
side, `TensorPipeAgent` grabs a stream to each device used by the
message, let these streams wait for current streams, and passes
the streams to TensorPipe `CudaBuffer`. On the receiver side, it
also grabs a stream for each device used in the message, and uses
these streams to receive tensors and run user functions. After that,
these streams are then used for sending the response back to the
sender. When receiving the response, the sender will grab a new set
of streams and use them for TensorPipe's `CudaBuffer`.

If device maps are provided, `TensorPipeAgent::send` will return a
derived class of `CUDAFuture`, which is specifically tailored for
RPC Messages.
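For context, a hedged sketch of enabling the GPU-to-GPU path via a device map (API names follow the TensorPipe backend options in later releases and are assumptions for this commit):

```python
import torch.distributed.rpc as rpc

options = rpc.TensorPipeRpcBackendOptions()
# Map this worker's cuda:0 to worker1's cuda:1, so CUDA tensors in RPC
# messages travel over the cuda_ipc channel instead of being staged on CPU.
options.set_device_map("worker1", {0: 1})

rpc.init_rpc("worker0", rank=0, world_size=2, rpc_backend_options=options)
```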

TODOs:
1. Enable sending CUDA RPC to the same process.
2. Add a custom CUDA stream pool.
3. When TensorPipe has addressed the error for `cudaPointerGetAttributes()`,
remove the `cuda:0` context initialization code in `backend_registry.py`.
4. When TensorPipe can detect availability of peer access, enable all
tests on platforms without peer access.

Differential Revision: D23626207

Test Plan: Imported from OSS

Reviewed By: lw

Pulled By: mrshenli

fbshipit-source-id: d30e89e8a98bc44b8d237807b84e78475c2763f0
2021-01-14 13:55:41 -08:00
Pavel Belevich
d2c3733ca1 Reorder torch.distributed.rpc.init_rpc docstring arguments (#50419)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50419

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D25911561

Pulled By: pbelevich

fbshipit-source-id: 62c9a5c3f5ec5eddcbd149821ebdf484ff392158
2021-01-14 07:58:09 -08:00
Guilherme Leobas
5f8e1a1da9 add type annotations to torch.nn.modules.module (#49045)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49044

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49045

Reviewed By: malfet

Differential Revision: D25767092

Pulled By: walterddr

fbshipit-source-id: a81ba96f3495943af7bb9ee3e5fc4c94c690c405
2021-01-11 17:01:47 -08:00
Pritam Damania
f39f258dfd Ensure DDP + Pipe works with find_unused_parameters. (#49908)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49908

As described in https://github.com/pytorch/pytorch/issues/49891, DDP +
Pipe doesn't work with find_unused_parameters.

This PR adds a simple fix to enable this functionality. This currently works
only for Pipe within a single host and needs to be reworked once we support
cross-host Pipe.
ghstack-source-id: 119573413

Test Plan:
1) unit tests added.
2) waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25719922

fbshipit-source-id: 948bcc758d96f6b3c591182f1ec631830db1b15c
2021-01-11 16:52:37 -08:00
Alex Henrie
2c4b6ec457 Unused exception variables (#50181)
Summary:
These unused variables were identified by [pyflakes](https://pypi.org/project/pyflakes/). They can be safely removed to simplify the code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50181

Reviewed By: gchanan

Differential Revision: D25844270

fbshipit-source-id: 0e648ffe8c6db6daf56788a13ba89806923cbb76
2021-01-08 13:33:18 -08:00
Yi Wang
ce370398cc [Gradient Compression] Remove the extra comma after "bucket" in PowerSGD hook signatures (#50197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50197

Remove the extra comma after "bucket".
ghstack-source-id: 119513484

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25823117

fbshipit-source-id: acf048f7cb732c23cba3a81ccce1e70f6b9f4299
2021-01-07 15:56:20 -08:00
Pritam Damania
16e5af41da Fix store based barrier to only use 'add'. (#49930)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49930

Certain store implementations don't work well when we use get() and
add() on the same key. To avoid this issue, we only use add() in the store-based
barrier. The buggy store implementations can't be properly fixed due to
legacy reasons.
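A sketch of the idea (an illustrative add()-only barrier, not the actual c10d implementation; the function and key names are hypothetical):

```python
import time

def store_based_barrier(store, world_size, key="store_barrier_key", poll_interval=0.01):
    """Block until all ranks have incremented the shared counter, using only add()."""
    store.add(key, 1)                       # register this rank's arrival
    while store.add(key, 0) < world_size:   # add(0) reads the counter without get()
        time.sleep(poll_interval)
```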

Test Plan:
1) unit tests.
2) waitforbuildbot

Reviewed By: osalpekar

Differential Revision: D25725386

fbshipit-source-id: 1535e2629914de7f78847b730f8764f92cde67e7
2021-01-05 12:46:24 -08:00
Jagadish Krishnamoorthy
c115957df0 [distributed] Provide parameter to pass GPU ID in barrier function (#49069)
Summary:
For a multi-GPU node, the rank and its corresponding GPU mapping can differ.
Provide an optional parameter to specify the GPU device number for the
allreduce operation in the barrier function. A usage sketch follows.
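A hedged usage sketch (assumes an initialized NCCL process group and that the launcher sets the LOCAL_RANK environment variable):

```python
import os
import torch.distributed as dist

# LOCAL_RANK is assumed to be provided by the launcher.
local_rank = int(os.environ["LOCAL_RANK"])

# Pin the barrier's allreduce to the GPU this rank actually uses,
# rather than assuming GPU id == rank.
dist.barrier(device_ids=[local_rank])
```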

Add test cases to validate barrier device_ids.

Signed-off-by: Jagadish Krishnamoorthy <jagdish.krishna@gmail.com>

Fixes https://github.com/pytorch/pytorch/issues/48110

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49069

Reviewed By: mrshenli

Differential Revision: D25658528

Pulled By: rohan-varma

fbshipit-source-id: 418198b6224c8c1fd95993b80c072a8ff8f02eec
2021-01-05 11:27:54 -08:00
Ilia Cherniavskii
749f8b7850 Remove flops warnings from the default profiler use case (#49896)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49896

Add missing check for with_flops option set

Test Plan:
python test/test_profiler.py
CI

Reviewed By: xuzhao9, ngimel

Differential Revision: D25716930

Pulled By: ilia-cher

fbshipit-source-id: 0da0bbb6c1a52328f665237e503406f877b41449
2020-12-30 23:49:29 -08:00
Ansley Ussery
c619892482 Fix errata (#49903)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49903

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25718411

Pulled By: ansley

fbshipit-source-id: 0cc365c5a53077752dc1c5a5c4a65b873baa3604
2020-12-28 20:40:41 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces `Arguments:` with `Args:` throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
Yi Wang
55b431b17a [Gradient Compression] Directly let world_size = group_to_use.size() (#49715)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49715

Address the comment on https://github.com/pytorch/pytorch/pull/49417#discussion_r545388351
ghstack-source-id: 119049598

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25673997

fbshipit-source-id: 44eb2540e5a77331c34ba503285cbd0bd63c2c0a
2020-12-22 23:24:54 -08:00
Yi Wang
88c33ff8ab [Gradient Compression] Explicitly restrict the scope of torch.cuda.synchronize to the current device (#49711)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49711

`torch.cuda.synchronize` uses the current device by default. Explicitly specify this device for better readability.
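A one-line sketch of the change in spirit (the device variable is illustrative, not the hook's actual code):

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda", torch.cuda.current_device())
    # Scope the synchronization to the device the hook's tensors live on,
    # instead of relying implicitly on the current device.
    torch.cuda.synchronize(device=device)
```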

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 119017654

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D25672267

fbshipit-source-id: 62a2266727a2ea76175f3c438daf20951091c771
2020-12-22 23:21:45 -08:00
Yi Wang
af1b636b89 [Gradient Compression] Change wait() to value() in some callbacks of PowerSGD communication hook (#49709)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49709

Since wait() has already been called in the return statements of the precursor callbacks, there is no need to wait again.
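An illustrative sketch of the pattern (not the hook's actual callbacks; assumes `fut` is the future returned by the preceding allreduce step and was obtained via `get_future()`):

```python
def decompress(fut):
    # Inside a chained callback the future is already complete, so value()
    # is sufficient; calling wait() again would be redundant.
    aggregated = fut.value()[0]
    return aggregated

# Hypothetical chaining, mirroring how comm hooks compose callbacks:
# fut = dist.all_reduce(tensor, async_op=True).get_future()
# fut.then(decompress)
```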

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 119015237

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl
buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D25672068

fbshipit-source-id: da136327db4c4c0e3b846ba8d6885629f1044374
2020-12-22 21:37:04 -08:00