pytorch/docs/source/notes
Michael Carilli 40246fa63c Gradient scaling API (#26512)
Summary:
This PR implements the gradient scaling API that mruberry, jjsjann123, ngimel, zdevito, gchanan and I have been discussing.  Relevant issue/RFC: https://github.com/pytorch/pytorch/issues/25081.

Volume-wise, this PR is mostly documentation and tests.  The Python API (found entirely in `torch/cuda/amp/amp_scaler.py`) is lightweight.  The exposed functions are intended to make the implementation and control flow of gradient scaling convenient, intuitive, and performant.
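
To make the control flow concrete, here is a minimal sketch of the intended training loop.  It uses `torch.cuda.amp.GradScaler`, the name under which the class eventually shipped; in this PR the implementation lives in `torch/cuda/amp/amp_scaler.py`, so treat the exact spelling as illustrative:

```python
import torch

model = torch.nn.Linear(10, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    inputs = torch.randn(8, 10, device="cuda")
    targets = torch.randn(8, 10, device="cuda")
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    # scale() multiplies the loss so small gradients survive fp16 representation.
    scaler.scale(loss).backward()
    # step() unscales the gradients and skips optimizer.step() if any are inf/NaN.
    scaler.step(optimizer)
    # update() grows or shrinks the scale factor for the next iteration.
    scaler.update()
```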

The API is probably easiest to digest by looking at the documentation and examples. `docs/source/amp.rst` is the homepage for the Automatic Mixed Precision package.  `docs/source/notes/amp_examples.rst` includes several examples demonstrating common but not-immediately-obvious use cases.  Examples are backed by tests in `test_cuda.py` (and thankfully the tests pass :P).
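
One of those not-immediately-obvious cases is gradient clipping, which must see gradients at their true magnitude.  Continuing the loop body from the sketch above, and again using the method names of the shipped API (which may differ from this PR's snapshot):

```python
scaler.scale(loss).backward()
# Unscale in place so the clipping threshold applies to the real gradient norms.
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
# step() detects that this optimizer's gradients were already unscaled
# and does not unscale them a second time.
scaler.step(optimizer)
scaler.update()
```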

Two small utility kernels have been added in `native/cuda/AmpKernels.cu` to improve performance and avoid host-device synchronizations wherever possible.
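
The kernels aren't named in this summary, but as a rough pure-Python reference for what a fused unscale-and-inf-check step computes (illustration only, not the actual CUDA code in `AmpKernels.cu`):

```python
import torch

def unscale_and_check_(grads, inv_scale, found_inf):
    """grads: list of CUDA tensors; inv_scale and found_inf: one-element
    CUDA tensors.  Keeping found_inf on the GPU lets later GPU work (the
    scale update, or the skip decision) consume it without a sync."""
    for g in grads:
        g.mul_(inv_scale)
        nonfinite = (~torch.isfinite(g)).any().to(found_inf.dtype)
        torch.maximum(found_inf, nonfinite, out=found_inf)
```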

Existing optimizers, both in the wild and in PyTorch core, do not need to change to use the scaling API.

However, the API was also designed to establish a contract between user scripts and optimizers, so that writers of _new_ custom optimizers have the control points they need to implement fast, optionally sync-free updates.  User scripts that obey the scaling API can drop such custom optimizers in and reap the performance benefits without changing anything aside from the optimizer constructor itself.  [I know what the contract with custom optimizers should be](35829f24ef/torch/cuda/amp/amp_scaler.py (L179-L184)), but I'm waiting for review on the rest of the API before documenting it (it will be given a dedicated section in `docs/source/notes/amp_examples.rst`).
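
Purely as a hypothetical illustration of the kind of control point such a contract would give custom optimizers (none of these names come from the PR): a `step` that accepts the inverse scale and the inf-check tensor can fuse unscaling into its own update and stay sync-free by folding the skip decision into arithmetic instead of a host-side branch.

```python
import torch

class SyncFreeSGD(torch.optim.Optimizer):
    """Hypothetical custom optimizer obeying a gradient-scaling contract."""

    def __init__(self, params, lr=0.1):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()
    def step(self, inv_scale, found_inf):
        # (1 - found_inf) zeroes the update if any gradient overflowed,
        # so no .item() call (and no host-device sync) is ever needed.
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    update = p.grad * inv_scale * (1.0 - found_inf)
                    p.add_(update, alpha=-group["lr"])
```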

Currently, the gradient scaling examples do not include the autocasting API discussed in https://github.com/pytorch/pytorch/issues/25081.  The gradient scaling API is intended to be orthogonal/modular relative to autocasting.  Without autocasting, the gradient scaling API is fully use-_able_ but not terribly use-_ful_, so it's up to you guys whether you want to wait until autocasting is ready before merging the scaling API as well.
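
For context, here is how the two pieces would eventually compose, continuing the first sketch above (this uses `torch.cuda.amp.autocast`, which postdates this PR): autocast chooses per-op precision for the forward pass, while the scaler independently manages the backward pass.

```python
with torch.cuda.amp.autocast():
    # The forward pass runs each op in fp16 or fp32 as autocast sees fit.
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
# The backward pass and step are handled by the scaler exactly as before;
# the two APIs compose without knowing about each other.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```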

### Todo
- [ ] How do I get c10 registered status for my two custom kernels?  They're very simple.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26512

Differential Revision: D19859905

Pulled By: mruberry

fbshipit-source-id: bb8ae6966214718dfee11345db824389e4286923
2020-02-13 11:06:06 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| amp_examples.rst | Gradient scaling API (#26512) | 2020-02-13 11:06:06 -08:00 |
| autograd.rst | Update distributed autograd design doc with appropriate links. (#29927) | 2019-11-15 21:10:53 -08:00 |
| broadcasting.rst | [docs] Update broadcasting and cuda semantics notes (#6904) | 2018-04-24 13:41:24 -04:00 |
| cpu_threading_runtimes.svg | Update CPU threading doc (#33083) | 2020-02-11 14:13:51 -08:00 |
| cpu_threading_torchscript_inference.rst | Update CPU threading doc (#33083) | 2020-02-11 14:13:51 -08:00 |
| cpu_threading_torchscript_inference.svg | Threading and CPU Inference note | 2019-07-29 15:45:49 -07:00 |
| cuda.rst | Comprehensive-ish instrumentation for CUDA memory allocator (#27361) | 2019-10-08 15:42:48 -07:00 |
| ddp.rst | Adding DDP Design Note | 2020-01-15 14:10:45 -08:00 |
| distributed_autograd.rst | minor doc tweak to use mp.spawn in example (#30381) | 2020-01-06 22:19:01 -08:00 |
| extending.rst | Fix typos, via a Levenshtein-type corrector (#31523) | 2020-01-17 16:03:19 -08:00 |
| faq.rst | Use "length of the RNN input" instead of "length of the RNN" | 2019-05-24 09:03:50 -07:00 |
| large_scale_deployments.rst | Thread local debug info | 2019-08-12 14:53:57 -07:00 |
| multiprocessing.rst | Add IterableDataset (#19228) | 2019-06-20 20:12:44 -07:00 |
| randomness.rst | Update randomness.rst (#21337) | 2019-06-04 07:38:00 -07:00 |
| rref.rst | Fix RRef design doc warning (#30240) | 2019-11-21 16:22:39 -08:00 |
| serialization.rst | code syntax error in document (serialization.rst) (#937) | 2017-03-06 10:06:04 -05:00 |
| windows.rst | Fix typos, via a Levenshtein-type corrector (#31523) | 2020-01-17 16:03:19 -08:00 |