Commit Graph

74 Commits

Yi Wang
2b398d0537 [Reland][Gradient Compression] Apply division first to avoid overflow (#59576)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59576

If the gradients before allreduce are large, then the sum after allreduce may overflow, especially for FP16. Therefore, apply the division before allreduce.

This fix is applied to both C++ and Python comm hooks.
ghstack-source-id: 130754510
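
As an illustration of the ordering described above, a minimal sketch of a divide-first FP16 hook (not the actual hook code; the `GradBucket` accessor name is an assumption):

```
import torch
import torch.distributed as dist

def fp16_compress_divide_first(process_group, bucket):
    # Divide by world size *before* the allreduce so each FP16 summand
    # stays small enough that the reduced sum is unlikely to overflow.
    group = process_group if process_group is not None else dist.group.WORLD
    world_size = group.size()

    compressed = (bucket.get_tensor() / world_size).to(torch.float16)  # accessor name assumed

    fut = dist.all_reduce(compressed, group=group, async_op=True).get_future()

    def decompress(fut):
        # Cast back to the original dtype; no further division is needed here.
        return fut.value()[0].to(bucket.get_tensor().dtype)

    return fut.then(decompress)
```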

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_ddp_comm_hook_allreduce_hook_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_default_ddp_comm_hooks_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_fp16_compress_wrapper_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_builtin_ddp_comm_hooks_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_ddp_comm_hook_allreduce_hook_nccl_grad_is_view
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_default_ddp_comm_hooks_nccl_is_view
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_fp16_compress_wrapper_is_view
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_builtin_ddp_comm_hooks_nccl_grad_is_view
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl_grad_is_view

Reviewed By: rohan-varma

Differential Revision: D28941327

fbshipit-source-id: 932e8ddbdb2bfd609a78943f6dc390d3d6ca333f
2021-06-08 10:03:21 -07:00
Mike Ruberry
f998e63dca Revert D28922548: [Gradient Compression] Apply division first to avoid overflow
Test Plan: revert-hammer

Differential Revision:
D28922548 (459270ac01)

Original commit changeset: 442bd3cc7a35

fbshipit-source-id: 7e4361b4eb283cdb21f15a36d6eebf558dd7386f
2021-06-07 03:57:10 -07:00
Yi Wang
459270ac01 [Gradient Compression] Apply division first to avoid overflow (#59522)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59522

If the gradients before allreduce are large, then the sum after allreduce may overflow, especially for FP16. Therefore, apply the division before allreduce.

This fix is applied to both C++ and Python comm hooks.
ghstack-source-id: 130686229

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_ddp_comm_hook_allreduce_hook_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_default_ddp_comm_hooks_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_fp16_compress_wrapper_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_builtin_ddp_comm_hooks_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_ddp_comm_hook_allreduce_hook_nccl_grad_is_view
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_default_ddp_comm_hooks_nccl_is_view
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_fp16_compress_wrapper_is_view
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_builtin_ddp_comm_hooks_nccl_grad_is_view
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl_grad_is_view

Reviewed By: rohan-varma

Differential Revision: D28922548

fbshipit-source-id: 442bd3cc7a35a8b948f626062fa7ad2e3704c5be
2021-06-07 01:43:10 -07:00
Yi Wang
9bfc1c4e0e [Gradient Compression] Update the docstring of fp16_compress_hook (#58168)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58168

Update the documentation to be consistent to https://github.com/pytorch/pytorch/pull/57410.
ghstack-source-id: 128797174

Test Plan: N/A

Reviewed By: agolynski, zhengwy888

Differential Revision: D28388160

fbshipit-source-id: 6ba13ad9f9d7b4d003cdc112545573e452df8b65
2021-05-12 14:28:41 -07:00
lezcano
24087d07ca Deprecate QR (#57745)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57745

Reviewed By: bdhirsh

Differential Revision: D28318164

Pulled By: mruberry

fbshipit-source-id: b8e3cb9d7ab33f30c8653ec39f932a8af8bd2a50
2021-05-10 22:56:37 -07:00
Weiyi Zheng
c07babbcf1 [Gradient Compression] Divide by world size before all_reduce to avoid overflow (#57410)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57410

FP16 gradient compression may run into an 'inf' issue. Switching to division before allreduce avoids this problem.
ghstack-source-id: 127877083
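
A quick toy check (not part of this diff) of why the ordering matters for FP16:

```
import torch

world_size = 8
grads = [torch.full((1,), 8192.0, dtype=torch.float16) for _ in range(world_size)]

# Sum first, then divide: the FP16 sum exceeds the max finite value (~65504)
# and overflows, so the "average" comes out as inf.
print(sum(grads) / world_size)             # tensor([inf], dtype=torch.float16)

# Divide each summand first: every intermediate value stays in range.
print(sum(g / world_size for g in grads))  # tensor([8192.], dtype=torch.float16)
```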

Test Plan:
before change:

f268909897

after change:
f270950609

If you still see 'grad_norm = inf' after enabling the fp16 hook, you can resume the training with the hook turned off.

Reviewed By: SciPioneer

Differential Revision: D28128628

fbshipit-source-id: 0b6648637713e4f321e39c9ccb645a6b6f1750a0
2021-05-07 12:23:21 -07:00
Sam Estep
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.
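
For reference, the three `noqa` forms discussed above (illustrative lines, not taken from the codebase):

```
# Bare `noqa`: silences every flake8 warning on the line (what the new lint rejects).
import os  # noqa

# Missing colon: flake8 ignores the code entirely, so this also silences everything.
import os  # noqa F401

# Qualified form with the colon: only F401 (imported but unused) is suppressed.
import os  # noqa: F401
```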

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
Yi Wang
b4cb020c0f [Gradient Compression] Make orthogonalization_epsilon configurable in PowerSGDState (#55738)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55738

Per title, and use 0 as the default value.

It turns out that setting this epsilon to 0 can accelerate convergence and improve accuracy for some use cases.

Test Plan:
unit tests
f264687105
f264675194

Reviewed By: shuyingsunshine21

Differential Revision: D27694971

fbshipit-source-id: b61528c6c817127974acdc4635bccf607532287f
2021-04-13 02:52:56 -07:00
Yi Wang
2496a09314 [Gradient Compression] Fix PowerSGD docstring by removing an extra whitespace (#55666)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55666

{F590513307}

Some code is not properly displayed due to an extra space before `(num_rows + num_cols)`.
ghstack-source-id: 126148569

Test Plan: Locally viewed

Reviewed By: rohan-varma

Differential Revision: D27673663

fbshipit-source-id: 603ae4ddbe86ceaefc311885b82b0f6b48b57b27
2021-04-09 21:11:40 -07:00
Yi Wang
1b4bb3691c [Gradient Compression] Update _powerSGD_comm_hook_wrapper to only expose 2 most critical hyperparameters (#55295)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55295

Update `_powerSGD_comm_hook_wrapper` to only expose the 2 most critical hyperparameters, to make this API clearer to any future user (although the second hyperparameter `start_powerSGD_iter` is not in use yet).

Test Plan: waitforbuildbot

Reviewed By: shuyingsunshine21

Differential Revision: D27561734

fbshipit-source-id: b661981cc033b109f4f2fc92b435567a184a7fb5
2021-04-06 01:29:10 -07:00
Yi Wang
cc4036905c [Gradient Compression] Update the default value of start_powerSGD_iter and update the docstring (#55272)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55272

1. Set 1K as the default value of `start_powerSGD_iter` for practicality. The original default value of 10 is usually too small for real use cases. The new default value of 1K is also consistent with PyTorch Lightning (see the sketch below).
2. Update the docstring of `start_powerSGD_iter` to remind users to set a value no less than the number of warm-up steps, if any.
3. Update some unit tests to start PowerSGD early.
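
A hedged usage sketch of the new default (the hook lives in `torch.distributed.algorithms.ddp_comm_hooks`; `ddp_model` and the exact values are assumptions):

```
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# Illustrative values only; start_powerSGD_iter should be no smaller than
# any warm-up schedule, and 1_000 mirrors the new default described above.
state = powerSGD.PowerSGDState(
    process_group=None,            # use the default process group
    matrix_approximation_rank=1,
    start_powerSGD_iter=1_000,     # vanilla allreduce for the first 1K iterations
)
# ddp_model is an already-constructed DistributedDataParallel instance (assumed):
# ddp_model.register_comm_hook(state, powerSGD.powerSGD_hook)
```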

ghstack-source-id: 125707662

Test Plan: waitforbuildbot

Reviewed By: shuyingsunshine21

Differential Revision: D27553388

fbshipit-source-id: 40076419bc85755c0c0b64b79ba914b241085fcc
2021-04-06 01:27:29 -07:00
Yi Wang
6a2f046504 [SPMD] Restrict DDP communication hooks to SPSD mode (#55253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55253

Previously DDP communication hooks took a tensor list as the input. Now they take only a single tensor, in preparation for retiring SPMD and providing only a single model replica to DDP communication hooks.

The next step is limiting the Reducer to a single model replica.
ghstack-source-id: 125677637

Test Plan: waitforbuildbot

Reviewed By: zhaojuanmao

Differential Revision: D27533898

fbshipit-source-id: 5db92549c440f33662cf4edf8e0a0fd024101eae
2021-04-05 16:46:47 -07:00
Yi Wang
058357a439 [Gradient Compression] Report compression rate for batched PowerSGD hook (#55103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55103

Previously the compression rate was only reported by the PowerSGD hook. Also report this metric here for comprehensive experimentation.

It is very easy to compute the sizes before and after compression, because there is only one matrix factorization per bucket, and no accumulation within the bucket is needed.
1) The size before compression is the input tensor size.
2) The size after compression is the size of P + Q, where each has a size of `square_side_length * state.matrix_approximation_rank` (see the toy calculation below).
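
A toy calculation of the reported sizes, assuming the batched hook flattens the bucket into a square matrix of side `ceil(sqrt(numel))`:

```
import math

total_numel = 10_000_000                 # elements in the flattened bucket (assumed)
rank = 4                                 # state.matrix_approximation_rank (assumed)

square_side_length = math.ceil(math.sqrt(total_numel))
size_before = total_numel
size_after = 2 * square_side_length * rank    # P and Q: square_side_length * rank each

print(size_before / size_after)               # compression rate (uncompressed / compressed)
```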
ghstack-source-id: 125399028

Test Plan: Tested by running scripts/wayi/torch/power_sgd.py locally.

Reviewed By: deadlybulb

Differential Revision: D27474295

fbshipit-source-id: a2225e85be03ab20238f01014d5ec9ae1787c4fb
2021-03-31 22:17:05 -07:00
Yi Wang
7c0941ee63 Clang-format powerSGD_hook.py (#54839)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54839

ghstack-source-id: 125089465

Test Plan: N/A

Reviewed By: rohan-varma

Differential Revision: D27384796

fbshipit-source-id: 8312059f6a47d60ca29f75041141bb88804e1b32
2021-03-30 09:28:45 -07:00
Yi Wang
6c31f56bf4 [Gradient Compression] Add cuda.synchronize back to batched powerSGD (#54838)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54838

Realized that an explicit sync is somehow still needed for the batched PowerSGD hook. A job failure is fixed by this change.

The sync was once removed by #54482.

Test Plan:
f260900882
f260899693

Reviewed By: rohan-varma

Differential Revision: D27384738

fbshipit-source-id: 3efd738b9fd375e2ceb36ed3a6bf99cd8ce8ff95
2021-03-30 09:27:11 -07:00
Mark Astley
4bf90558e0 [Gradient Compression] Add logging for gradient compression stats. (#54647)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54647

Regularly log stats showing the effect of gradient compression when using the PowerSGD DDP communication hook.

Test Plan:
buck run mode/dev-nosan scripts/wayi/torch:power_sgd

Play with the layer sizes of the input model (you can just use linear layers for convenience), and check the log that shows compression stats. For convenience, you can change `logging.info` to `print` locally.

You can create some test diffs on top of this diff, to show that the compression stats are correct in different cases.

Run with power_sgd script:
{F537381542}

Diff with example using a simple linear model: D27299934
sample output:
{F538486535}

Reviewed By: SciPioneer

Differential Revision: D27240254

fbshipit-source-id: 9e142b2f7957cc874804f799b7bb3bffdf824858
2021-03-25 07:44:17 -07:00
Yi Wang
c22fc448cd [Gradient Compression] Remove cuda.synchronize in batched powerSGD (#54482)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54482

`cuda.synchronize` is unnecessary for `batched_powerSGD_hook`.
ghstack-source-id: 124607761

Test Plan:
f259607860
f259563921

Reviewed By: rohan-varma

Differential Revision: D27254314

fbshipit-source-id: 4744c07a6f0c8939e766ffa935ddbf3c47e85d18
2021-03-23 00:55:53 -07:00
Yi Wang
de70cdb66b Clang format default_hooks.py (#53956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53956

ghstack-source-id: 123852987

Test Plan: N/A

Reviewed By: iseessel

Differential Revision: D27032713

fbshipit-source-id: 11d831fa0f08b1c8bc2e44acd144bf85a69a1211
2021-03-13 10:41:11 -08:00
Yi Wang
ca4aae85fa [Gradient Compression] Update the docstring of fp16_compress_wrapper (#53955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53955

Per title
ghstack-source-id: 123852836

Test Plan: N/A

Reviewed By: iseessel

Differential Revision: D27032700

fbshipit-source-id: 6f9bbc028efe6cc9b54f4ec729fea745368efb2e
2021-03-13 10:39:40 -08:00
Isaac Seessel
3078233e9a [Gradient Compression] Make FP16 compression as a wrapper that can be combined with other communication hooks (#53808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53808

Create an FP16 wrapper that can combine FP16 gradient compression with any gradient compression algorithm.
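
A hedged usage sketch of the wrapper (assumes an existing `ddp_model` and a PowerSGD `state`):

```
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks as default
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# The wrapper casts the bucket to FP16 before the inner hook runs and
# decompresses the result afterwards, so any hook can be combined with it.
wrapped_hook = default.fp16_compress_wrapper(powerSGD.powerSGD_hook)
# ddp_model.register_comm_hook(state, wrapped_hook)
```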

Test Plan:
Unit test:
```
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_fp16_compress_wrapper
```

Performance Test on DDP QPS Benchmark: Check if AllReduce + FP16 Wrapper = FP16 Compression
1) FP16 Compression:
f256897690

2) FP16 Wrapper + AllReduce (after patching D26960986):
f256897289

Reviewed By: SciPioneer

Differential Revision: D26978832

fbshipit-source-id: 0dcd18b050c02f5e9f3cff56344d1f39a04e20c0
2021-03-12 17:31:07 -08:00
Yi Wang
8016d28c0b [Gradient Compression] Update the comment on fp16_compress_hook (#53780)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53780

Update the comment, because the input data type of `fp16_compress_hook` does not have to be FP32. For example, the input dtype can also be FP64, as long as it can be cast to FP16.
ghstack-source-id: 123680621

Test Plan: N/A

Reviewed By: iseessel

Differential Revision: D26967224

fbshipit-source-id: 26d79a3629a597e6335b6f59c97d25a764a8ed80
2021-03-11 13:40:32 -08:00
Yi Wang
68b62493b8 [Gradient Compression] Make GradBucket class public (#53099)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53099

Make the GradBucket APIs public in preparation for publishing DDP communication hooks.

s/_GradBucket/GradBucket
ghstack-source-id: 123030921
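
A minimal sketch of a custom hook written against the now-public `GradBucket` (the tensor accessor name is an assumption):

```
import torch

def noop_hook(state, bucket) -> torch.futures.Future:
    # Do-nothing hook: return the bucket's tensor unchanged inside an
    # already-completed future, the shape every comm hook must follow.
    fut = torch.futures.Future()
    fut.set_result(bucket.get_tensor())  # GradBucket accessor name assumed
    return fut
```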

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D26721121

fbshipit-source-id: ee5f68e33095b9965b51937b86cdeb331fd2419a
2021-03-03 19:22:15 -08:00
Yi Wang
b59075eced [Gradient Compression] Refactor tensor grouping in PowerSGD (#52981)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52981

No need to create a hard boundary between rank-1 tensors and high-rank tensors, since some high-rank tensors will not be compressed anyway if the compression cannot save enough bandwidth, according to the `_should_compress` function.

Therefore, refactor and simplify the tensor grouping logic, which addresses the comment in https://github.com/pytorch/pytorch/pull/52541#discussion_r580867311
ghstack-source-id: 122997032

Test Plan:
waitforbuildbot

Already LGTMed by PowerSGD paper author.

Ads1x (completed):
https://www.internalfb.com/intern/tupperware/details/job/?handle=priv3_global%2Fmast_hpc%2Ftsm_hpc-wayi_ads_10x_POWER_SGD_gpu8_2021-02-28_15-29.trainer&tatwTabs=tasks&task_id=0&task_tab=TASK_LOGS

Detectron2:
1) Before refactoring:
f254353864
Accuracy: 39.972
Overall training speed: 67498 iterations in 6:15:42 (0.3340 s / it)

2) After refactoring:
f254353380
Accuracy: 39.944
Overall training speed: 67498 iterations in 6:09:41 (0.3286 s / it)

Reviewed By: rohan-varma

Differential Revision: D26713689

fbshipit-source-id: 12cfcb65feaa2a2d94e3c7793073031f13828305
2021-03-03 19:20:41 -08:00
Yi Wang
ba36e32406 [Gradient Compression] Correct the usage of min_compression_rate (#52979)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52979

Compression rate = uncompressed size / compressed size, so the compression rate is usually greater than 1.

Previously the compression rate was perceived as compressed size / uncompressed size, which can be very confusing.
ghstack-source-id: 122996272

Test Plan: unit tests

Reviewed By: zhaojuanmao

Differential Revision: D26713349

fbshipit-source-id: 83b7f8908c101954cf01f56a22161047fbfeaa53
2021-03-03 15:35:40 -08:00
Yi Wang
b05dd931ee [Gradient Compression] Add is_the_last_bucket_to_allreduce method to GradBucket class (#53010)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53010

To determine the boundary between different iterations in a DDP communication hook, the user code currently needs to check `bucket.get_index() == 0`, which exposes internal bucketization implementation details and undermines the usability of DDP communication hooks.

Create an API to hide the details and improve the usability before publishing GradBucket APIs.
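
A sketch of how a hook can use the new accessor instead of `bucket.get_index() == 0` (the `state` dict and the tensor accessor name are assumptions):

```
import torch
import torch.distributed as dist

def allreduce_counting_hook(state, bucket) -> torch.futures.Future:
    # Detect iteration boundaries with the new accessor instead of peeking
    # at bucket indices, the bucketization detail the API now hides.
    if bucket.is_the_last_bucket_to_allreduce():
        state["iter"] = state.get("iter", 0) + 1   # `state` assumed to be a plain dict

    tensor = bucket.get_tensor() / dist.get_world_size()   # accessor name assumed
    fut = dist.all_reduce(tensor, async_op=True).get_future()
    return fut.then(lambda f: f.value()[0])
```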
ghstack-source-id: 122723081

Test Plan: buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

Reviewed By: rohan-varma

Differential Revision: D26720813

fbshipit-source-id: f4a3147382c1f970534d7f0dee0cd599156c8b8c
2021-03-02 14:39:12 -08:00
Yi Wang
ecb5ac90ed [Gradient Compression] Add get_per_parameter_tensors method to GradBucket class (#53009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53009

Applying layer-wise operations over per-parameter tensors is a common pattern in a DDP communication hook.

Create a util method in GradBucket class before publishing GradBucket APIs.
ghstack-source-id: 122833594
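
A sketch of a layer-wise operation built on the new accessor (the scaling itself is just a placeholder transform):

```
def scale_small_layers(bucket, threshold: int = 1_000, factor: float = 0.5):
    # Iterate over the per-parameter views of the bucket and apply an
    # in-place, layer-wise transform to each one.
    for tensor in bucket.get_per_parameter_tensors():
        if tensor.numel() < threshold:
            tensor.mul_(factor)
```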

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

f254364097

Reviewed By: rohan-varma

Differential Revision: D26717893

fbshipit-source-id: 916db319de8b85dd22bc4e35db5671bf4e34740f
2021-03-02 14:39:03 -08:00
Yi Wang
890e051047 Clang-format quantization_hooks.py (#53100)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53100

ghstack-source-id: 122723751

Test Plan: N/A

Reviewed By: rohan-varma

Differential Revision: D26721146

fbshipit-source-id: 985057fc02c997124b676854eb0a55e569971a3f
2021-03-02 12:48:43 -08:00
Shen Li
729d88119a Fix GradBucket Typing (#52943)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52943

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D26699759

Pulled By: mrshenli

fbshipit-source-id: 712165a29d114da761ef4f161096ca46a958df03
2021-02-27 20:04:38 -08:00
Seung-Jae Bang
2d75346c25 [Gradient Compression] Add a minimum compression rate threshold for PowerSGD communication hook (#52541)
Summary:
Fixes #52034
- Add a minimum compression rate threshold to `PowerSGDState`
- Use the threshold to determine whether to compress high-rank tensors or not (sketched below)
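
A sketch of the threshold check, with assumed names, following the description above:

```
def should_compress(num_rows, num_cols, matrix_approximation_rank, min_compression_rate):
    # Compress a tensor only when the rank-k factors P and Q are sufficiently
    # smaller than the original num_rows x num_cols matrix.
    uncompressed_size = num_rows * num_cols
    compressed_size = (num_rows + num_cols) * matrix_approximation_rank
    return compressed_size * min_compression_rate < uncompressed_size
```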

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52541

Test Plan:
No performance regression using rank-8 compression:
baseline: f253000411
updated one: f253010955

Reviewed By: rohan-varma

Differential Revision: D26594862

Pulled By: SciPioneer

fbshipit-source-id: 2859a91b4ca6bd1862bf6cd6441dc2a89badb2d5
2021-02-23 22:03:02 -08:00
Yi Wang
03ae6d9903 Remove useless _allgather_then_aggregate_hook (#52593)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52593

This hook is not used at all, and it probably can only be used for demonstrating that allgather is slower than allreduce, so it should never be used in practice.

However, this hook and its helper function live in the same file as the communication hook public APIs, and it is better to keep the public API file as concise as possible.

Since I don't think we will use this hook in the future, prefer deleting it to moving it to a separate file.
ghstack-source-id: 122180969

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D26575318

fbshipit-source-id: b258154a7c92e33236c34104bd79bc244ecdb158
2021-02-22 12:12:53 -08:00
Yi Wang
4b3c99ce4a [Resubmission] Add a documentation page for DDP communication hooks (#51773)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51773

Resubmission of #51715.

Minor changes:
1) Removed "Note [Guidance to Tune ``matrix_approximation_rank`` And ``start_powerSGD_iter``]" in powerSGD_hook.py.

2) Removed the duplicate description of `torch.nn.parallel.DistributedDataParallel.register_comm_hook` in ddp_comm_hooks.rst, because it is already covered by distributed.rst.

Also updated the doc based on the comments from PowerSGD paper author Thijs Vogels.

It seems that `python_doc_test` was flaky. The previous error message was not informative:
https://app.circleci.com/pipelines/github/pytorch/pytorch/270682/workflows/8d186a3c-d682-46bf-b617-ad4eef5991e2/jobs/10739143, and all the warnings did also appear on the master branch.

Rebasing to a new master branch seems to get this fixed:
https://app.circleci.com/pipelines/github/pytorch/pytorch/270696/workflows/1a3adbea-6443-4876-b87b-e17d90d41428/jobs/10740021/steps

Screenshot:

{F369899792}
ghstack-source-id: 121199613

Test Plan: View locally

Reviewed By: mingzhe09088

Differential Revision: D26272687

fbshipit-source-id: 6677db496a68171798940a80343f4d9a508e15db
2021-02-06 21:22:04 -08:00
Natalia Gimelshein
d3023d86ba Revert D26249330: [Gradient Compression] Add a documentation page for DDP communication hooks
Test Plan: revert-hammer

Differential Revision:
D26249330 (e62aabac43)

Original commit changeset: ab973390ddb7

fbshipit-source-id: d508daed76219e7ca588cf7fb38aeaaffc61acfd
2021-02-04 22:38:06 -08:00
Yi Wang
e62aabac43 [Gradient Compression] Add a documentation page for DDP communication hooks (#51715)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51715

Add a documentation page for DDP communication hooks.

Screenshot:

{F369781049}

Test Plan: View locally

Reviewed By: pritamdamania87

Differential Revision: D26249330

fbshipit-source-id: ab973390ddb785c5191f587a1b2b6de7d229e50e
2021-02-04 18:53:53 -08:00
Yi Wang
43df03de13 [Gradient Compression] Replace torch.sqrt(torch.sum(col ** 2)) by torch.norm() (#51629)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51629

Leverage the existing util functions as much as possible for potential performance gain.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120919883
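
The equivalence being relied on, as a quick self-contained check:

```
import torch

col = torch.randn(1024)

# The two expressions compute the same L2 norm; the commit switches the hook
# to the single fused call.
manual = torch.sqrt(torch.sum(col ** 2))
fused = torch.norm(col)
assert torch.allclose(manual, fused)
```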

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

No performance regression:
f248664994 uses `torch.norm()`
```
total:
  32 GPUs -- 32 GPUs: p25:  1.050    30/s  (batch size 32)
p50:  1.230    26/s  (batch size 32)
p75:  1.449    22/s  (batch size 32)
p90:  1.611    19/s  (batch size 32)
p95:  1.702    18/s  (batch size 32)

backward:
  32 GPUs -- 32 GPUs: p25:  0.769    41/s  (batch size 32)
p50:  0.920    34/s  (batch size 32)
p75:  1.139    28/s  (batch size 32)
p90:  1.322    24/s  (batch size 32)
p95:  1.440    22/s  (batch size 32)
```

f248678690 does not use `torch.norm()`
```
total:
  32 GPUs -- 32 GPUs: p25:  1.056    30/s  (batch size 32)
p50:  1.249    25/s  (batch size 32)
p75:  1.443    22/s  (batch size 32)
p90:  1.608    19/s  (batch size 32)
p95:  1.711    18/s  (batch size 32)

backward:
  32 GPUs -- 32 GPUs: p25:  0.777    41/s  (batch size 32)
p50:  0.939    34/s  (batch size 32)
p75:  1.127    28/s  (batch size 32)
p90:  1.322    24/s  (batch size 32)
p95:  1.448    22/s  (batch size 32)
```

Reviewed By: pritamdamania87

Differential Revision: D26219835

fbshipit-source-id: 31d8ad3401d4efced4a6069f4f1e169ea3372697
2021-02-03 13:39:11 -08:00
Yi Wang
79e7544cb4 [Gradient Compression] Check start_PowerSGD_iter > 1 and add guidance on tuning PowerSGD configs. (#51427)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51427

A user reported that `start_PowerSGD_iter` failed when it was set to 1. This is because allocating memory for the error tensors somehow overlaps with the bucket rebuilding process at iteration 1.

Check `start_PowerSGD_iter > 1` instead of `start_PowerSGD_iter >= 1`.

Also add a unit test, `test_invalid_powerSGD_state`, and some guidance on tuning PowerSGD configs (see the sketch below).
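
A sketch of the stricter validation described above (parameter names assumed from `PowerSGDState`):

```
def validate_start_powerSGD_iter(start_powerSGD_iter, use_error_feedback, warm_start):
    # Deferring compression must start at iteration 2 or later when error
    # feedback or warm-start is on, since their tensor allocation clashes
    # with DDP's bucket rebuilding at iteration 1.
    if (use_error_feedback or warm_start) and start_powerSGD_iter <= 1:
        raise ValueError(
            "Expect `start_powerSGD_iter` > 1 if `use_error_feedback` or `warm_start` is enabled."
        )
```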

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120834126

Test Plan: buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_invalid_powerSGD_state

Reviewed By: rohan-varma

Differential Revision: D26166897

fbshipit-source-id: 34d5b64bb3dd43acb61d792626c70e6c8bb44a5d
2021-02-02 04:30:24 -08:00
Yi Wang
c08078031f [Gradient Compression] Allow BatchedPowerSGD to run vanilla allreduce for the first K iterations (#51270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51270

Similar to #50973, allow the batched version to run vanilla allreduce for the first K iterations.

This may be useful if the batched version can be applied to some use cases where the accuracy requirement is not very strict.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120725858

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

baseline: f248001754
batched PowerSGD: f246960752

The training time was reduced from 54m48s to 30m33s, and the accuracy is approximately the same: 44.21 vs 44.35

Reviewed By: rohan-varma

Differential Revision: D26077709

fbshipit-source-id: 6afeefad7a3fbdd7da2cbffb56dfbad855a96cb5
2021-02-01 15:26:29 -08:00
Yi Wang
0831984ed5 [Resubmission][Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future (#51400)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51400

Resubmission of #51094

Address https://github.com/pytorch/pytorch/pull/50973#discussion_r564229818

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120725690

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_default_ddp_comm_hooks_nccl

Reviewed By: rohan-varma

Differential Revision: D26162333

fbshipit-source-id: ccc2eae5383a23673e00d61cb5570fb8bf749cd0
2021-02-01 11:34:41 -08:00
Iurii Zdebskyi
5a406c023e Revert D26070147: [Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future
Test Plan: revert-hammer

Differential Revision:
D26070147 (e7b3496232)

Original commit changeset: 8c9339f1511e

fbshipit-source-id: fa1e9582baec9759a73b3004be9bb19bdeb6cd34
2021-01-29 09:06:24 -08:00
Yi Wang
e7b3496232 [Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future (#51094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51094

Address https://github.com/pytorch/pytorch/pull/50973#discussion_r564229818

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120619680

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_default_ddp_comm_hooks_nccl

Reviewed By: rohan-varma

Differential Revision: D26070147

fbshipit-source-id: 8c9339f1511e8f24cc906b9411cfe4850a5a6d81
2021-01-28 19:03:18 -08:00
Yi Wang
9d731e87de [Gradient Compression] Explicitly specify the dtype of the error tensor (#50985)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50985

Explicitly specify the dtype of the error tensor when it is initialized with zeros.

Previously, if the dtype of the input tensor was FP16, the error tensor was still created in FP32, even though it would later be assigned another FP16 tensor (`input_tensor_cp` - `input_tensor`).

This change makes the dtype of the error tensor clearer.

Additionally, explicitly specify the dtype when the rank-1 tensor buffer is empty.
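
A minimal before/after sketch of the dtype change (toy tensor, not the hook's actual buffer management):

```
import torch

input_tensor = torch.randn(1024, dtype=torch.float16)

# Before: zeros() defaulted to float32 even though the input (and the tensor
# later assigned into the buffer) is FP16.
error_implicit = torch.zeros(input_tensor.shape)

# After: spell out the device and dtype so the buffer visibly matches the input.
error_explicit = torch.zeros(
    input_tensor.shape, device=input_tensor.device, dtype=input_tensor.dtype
)
```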

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120377786

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D26034988

fbshipit-source-id: e0d323d0b77c6a2478cdbe8b31a1946ffd1a07da
2021-01-28 19:03:14 -08:00
Yi Wang
b619d37bb4 [Gradient Compression] Simplify the implementation of error feedback and warm-start (#50981)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50981

Since vanilla allreduce is applied in the first few iterations, the bucket rebuilding process will not affect the caching of per-variable tensors.

Previously, the cached tensors used for error feedback and warm-start needed to be rebuilt later, because their corresponding input tensors' shapes would change after the bucket rebuilding process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120617971

Test Plan: real run

Reviewed By: rohan-varma

Differential Revision: D26034418

fbshipit-source-id: e8744431c7f3142d75b77b60110e6861c2ff5c14
2021-01-28 18:59:40 -08:00
Yi Wang
9f19843d19 [Gradient Compression] Typo fixes in PowerSGD (#50974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50974

Typo fixes.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120257221

Test Plan: N/A

Reviewed By: rohan-varma

Differential Revision: D26031679

fbshipit-source-id: 9d049b50419a3e40e53f7f1275a441e31b87717b
2021-01-25 22:55:54 -08:00
Yi Wang
ffaae32d60 [Gradient Compression] Allow PowerSGD to run vanilla allreduce for the first K iterations (#50973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50973

This can extend the original PowerSGD method to a hybrid approach: vanilla allreduce + PowerSGD. This can help further improve the accuracy, at the cost of a lower speedup.

Also add more comments on the fields in `PowerSGDState`.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120257202

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D26031478

fbshipit-source-id: d72e70bb28ba018f53223c2a4345306980b3084e
2021-01-25 22:38:39 -08:00
Yi Wang
439afda090 [Gradient Compression] Fix warm-start for PowerSGD layerwise compression (#50283)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50283

Realized that for the layerwise compression, the previous warm-start implementation only skips memory allocations, but does not skip filling random values into the Qs.

Also fix the unit test in distributed_test.py. Previously the process group was not created correctly, so no communication occurred in test_DistributedDataParallel_powerSGD_ddp_comm_hook.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120101220

Test Plan:
Verified the fix by adding some logging locally.

Also verified no NE diff on Ads 1x.

Reviewed By: rohan-varma

Differential Revision: D25846222

fbshipit-source-id: 1ebeeb55ceba64d4d904ea6ac1bb42b1b2241520
2021-01-20 22:31:44 -08:00
Yi Wang
ce370398cc [Gradient Compression] Remove the extra comma after "bucket" in PowerSGD hook signatures (#50197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50197

Remove the extra comma after "bucket".
ghstack-source-id: 119513484

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25823117

fbshipit-source-id: acf048f7cb732c23cba3a81ccce1e70f6b9f4299
2021-01-07 15:56:20 -08:00
Ansley Ussery
c619892482 Fix errata (#49903)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49903

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25718411

Pulled By: ansley

fbshipit-source-id: 0cc365c5a53077752dc1c5a5c4a65b873baa3604
2020-12-28 20:40:41 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
Yi Wang
55b431b17a [Gradient Compression] Directly let world_size = group_to_use.size() (#49715)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49715

Address the comment on https://github.com/pytorch/pytorch/pull/49417#discussion_r545388351
ghstack-source-id: 119049598

Test Plan: waitforbuildbot

Reviewed By: rohan-varma

Differential Revision: D25673997

fbshipit-source-id: 44eb2540e5a77331c34ba503285cbd0bd63c2c0a
2020-12-22 23:24:54 -08:00
Yi Wang
88c33ff8ab [Gradient Compression] Explicitly restrict the scope of torch.cuda.synchronize to the current device (#49711)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49711

`torch.cuda.synchronize` uses the current device by default. Explicitly specify this device for better readability.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 119017654
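
The two forms, side by side (guarded so it is a no-op without a GPU):

```
import torch

if torch.cuda.is_available():
    device = torch.cuda.current_device()
    torch.cuda.synchronize()        # implicit: synchronizes the current device
    torch.cuda.synchronize(device)  # explicit form used after this change; same behavior
```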

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl

buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D25672267

fbshipit-source-id: 62a2266727a2ea76175f3c438daf20951091c771
2020-12-22 23:21:45 -08:00
Yi Wang
af1b636b89 [Gradient Compression] Change wait() to value() in some callbacks of PowerSGD communication hook (#49709)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49709

Since wait() has already been called in the return statements of the precursor callbacks, no need to wait again.
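
A small self-contained illustration with `torch.futures` (not the hook code itself):

```
import torch

fut = torch.futures.Future()
fut.set_result([torch.ones(4)])

def decompress(prior_fut):
    # Inside a chained callback the prior future is already complete, so
    # value() is enough; wait() would only re-confirm completion.
    return prior_fut.value()[0] * 2

chained = fut.then(decompress)
print(chained.wait())
```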

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 119015237

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl
buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D25672068

fbshipit-source-id: da136327db4c4c0e3b846ba8d6885629f1044374
2020-12-22 21:37:04 -08:00