pytorch/docs/source
Michael Carilli 40246fa63c Gradient scaling API (#26512)
Summary:
This PR implements the gradient scaling API that mruberry, jjsjann123, ngimel, zdevito, gchanan and I have been discussing.  Relevant issue/RFC: https://github.com/pytorch/pytorch/issues/25081.

Volume-wise, this PR is mostly documentation and tests.  The Python API (found entirely in `torch/cuda/amp/amp_scaler.py`) is lightweight.  The exposed functions are intended to make the implementation and control flow of gradient scaling convenient, intuitive, and performant.
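
For orientation, here's a minimal sketch of the control flow the API is built around.  (Names are hedged: `GradScaler` and the `scale`/`step`/`update` methods are the names the API eventually shipped under in `torch.cuda.amp`; the class inside this PR's `amp_scaler.py` may be named differently.)

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()
data = [(torch.randn(4, 10, device="cuda"),
         torch.randn(4, 10, device="cuda")) for _ in range(10)]

for input, target in data:
    optimizer.zero_grad()
    loss = F.mse_loss(model(input), target)
    scaler.scale(loss).backward()  # scale the loss so small fp16 grads don't underflow
    scaler.step(optimizer)         # unscales grads; skips the step if any are inf/NaN
    scaler.update()                # adjusts the scale factor for the next iteration
```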

The API is probably easiest to digest by looking at the documentation and examples. `docs/source/amp.rst` is the homepage for the Automatic Mixed Precision package.  `docs/source/notes/amp_examples.rst` includes several examples demonstrating common but not-immediately-obvious use cases.  Examples are backed by tests in `test_cuda.py` (and thankfully the tests pass :P).
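
One of those use cases is gradient clipping, which only makes sense on unscaled gradients.  A sketch, continuing the names from the loop above (`unscale_` is the method name as it later shipped, so treat it as an assumption against this PR's exact surface):

```python
scaler.scale(loss).backward()
# Clip in terms of real gradient values: unscale first, otherwise the
# threshold would apply to the scaled gradients.
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
scaler.step(optimizer)  # step() detects the prior unscale_ and won't unscale twice
scaler.update()
```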

Two small utility kernels have been added in `native/cuda/AmpKernels.cu` to improve performance and avoid host-device synchronizations wherever possible.
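
For context, a rough eager-mode equivalent of the fused unscale-and-check step is sketched below (a conceptual stand-in, not the actual kernel).  Done tensor-by-tensor in Python it costs extra kernel launches, and inspecting the verdict from the host would force a device-to-host sync:

```python
import torch

def unscale_and_check(grads, inv_scale):
    # Conceptual stand-in for the fused CUDA kernel: multiply each grad
    # by 1/scale and record whether any element is inf/NaN, keeping the
    # verdict on-device so the host never has to synchronize.
    found_inf = torch.zeros(1, device=grads[0].device)
    for g in grads:
        g.mul_(inv_scale)
        found_inf += (~torch.isfinite(g)).any().float()
    return found_inf  # a GPU tensor; no .item() call, hence no sync
```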

Existing optimizers, both in the wild and in PyTorch core, do not need to change to use the scaling API.

However, the API was also designed to establish a contract between user scripts and optimizers, so that writers of _new_ custom optimizers have the control points they need to implement fast, optionally sync-free updates.  User scripts that obey the scaling API can drop such custom optimizers in and reap the performance benefits without changing anything aside from the optimizer constructor itself.  [I know what the contract with custom optimizers should be](35829f24ef/torch/cuda/amp/amp_scaler.py (L179-L184)), but I'm waiting for review on the rest of the API before I go about documenting it (it will be given a dedicated section in `docs/source/notes/amp_examples.rst`).

Currently, the gradient scaling examples do not include the autocasting API discussed in https://github.com/pytorch/pytorch/issues/25081.  The gradient scaling API is intended to be orthogonal/modular relative to autocasting.  Without autocasting, the gradient scaling API is fully use-_able_ but not terribly use-_ful_, so it's up to you guys whether you want to wait until autocasting is ready and merge the scaling API alongside it.
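
For reference, the combined pattern as it eventually shipped looks like the following, reusing the names from the earlier sketch (`torch.cuda.amp.autocast` does not exist yet at the time of this PR, so this is purely illustrative).  The scaler calls are identical with or without the autocast context, which is the orthogonality claimed above:

```python
with torch.cuda.amp.autocast():
    loss = F.mse_loss(model(input), target)  # forward pass runs in mixed precision
# backward() runs outside the autocast region; the scaling logic is unchanged.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```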

### Todo
- [ ] How do I get c10 registered status for my two custom kernels?  They're very simple.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26512

Differential Revision: D19859905

Pulled By: mruberry

fbshipit-source-id: bb8ae6966214718dfee11345db824389e4286923
2020-02-13 11:06:06 -08:00
| File | Last commit | Last updated |
|------|-------------|--------------|
| _static | Improve documentation around builtin functions (#30347) | 2019-12-04 13:50:40 -08:00 |
| _templates | Generate sphinx docs with secure content. (#18508) | 2019-03-27 11:01:48 -07:00 |
| community | Fix broken links in governance.rst | 2020-02-04 14:26:09 -08:00 |
| notes | Gradient scaling API (#26512) | 2020-02-13 11:06:06 -08:00 |
| org/pytorch | Revert D19320493: Javadoc changes | 2020-01-09 14:23:30 -08:00 |
| scripts | Add torch.nn.GELU for GELU activation (#28944) | 2019-11-03 21:55:05 -08:00 |
| __config__.rst | Allow a non-OpenMP based build (#19749) | 2019-05-06 19:34:48 -07:00 |
| amp.rst | Gradient scaling API (#26512) | 2020-02-13 11:06:06 -08:00 |
| autograd.rst | Added docs for context method mixins. Fixes issue #27365 (#28643) | 2019-10-28 08:31:35 -07:00 |
| bottleneck.rst | [docs] Clarify more CUDA profiling gotchas in bottleneck docs (#6763) | 2018-04-19 13:15:27 -04:00 |
| checkpoint.rst | Stashing checkpointing RNG states based on devices of arg tensors (#14518) | 2018-12-11 09:48:45 -08:00 |
| conf.py | Revert D19320493: Javadoc changes | 2020-01-09 14:23:30 -08:00 |
| cpp_extension.rst | Inline JIT C++ Extensions (#7059) | 2018-04-30 11:48:44 -04:00 |
| cuda_deterministic_backward.rst | Typo correction in cuda_deterministic_backward.rst (#25011) | 2019-08-22 21:19:39 -07:00 |
| cuda_deterministic.rst | Amend nondeterminism notes (#12217) | 2018-10-16 23:59:26 -07:00 |
| cuda.rst | Fix most documentation warnings (#27782) | 2019-10-13 10:34:01 -07:00 |
| cudnn_deterministic.rst | Amend nondeterminism notes (#12217) | 2018-10-16 23:59:26 -07:00 |
| cudnn_persistent_rnn.rst | don't copy weight gradients in rnn (#12600) | 2018-10-12 13:34:10 -07:00 |
| data.rst | Fix typo in data.rst docs | 2019-12-18 09:52:10 -08:00 |
| distributed.rst | Fix typos (#30606) | 2019-12-02 20:17:42 -08:00 |
| distributions.rst | Revert D18249048: Moved VonMises distribution with sampling upstream from Pyro. | 2019-11-04 09:50:50 -08:00 |
| dlpack.rst | document torch.utils.dlpack (#9343) | 2018-07-11 07:46:09 -07:00 |
| hub.rst | Fix typos, via a Levenshtein-type corrector (#31523) | 2020-01-17 16:03:19 -08:00 |
| index.rst | Gradient scaling API (#26512) | 2020-02-13 11:06:06 -08:00 |
| jit_builtin_functions.rst | Fix builtin function reference (#24056) | 2019-08-09 15:58:15 -07:00 |
| jit_language_reference.rst | Cleanup after moving language reference (#31146) | 2019-12-18 15:09:35 -08:00 |
| jit_python_reference.rst | Add Python language reference docs (#30686) | 2019-12-26 13:21:36 -08:00 |
| jit_unsupported.rst | add unsupported section (#31329) | 2019-12-18 13:56:02 -08:00 |
| jit.rst | Fix typos, via a Levenshtein-type corrector (#31523) | 2020-01-17 16:03:19 -08:00 |
| math-quantizer-equation.png | adding quantization.rst file for quantization feature (#27559) | 2019-10-09 16:45:09 -07:00 |
| model_zoo.rst | add/move a few apis in torch.hub (#18758) | 2019-04-10 23:10:39 -07:00 |
| multiprocessing.rst | Bag of documentation fixes; fix more sphinx warnings (#27850) | 2019-10-15 07:31:14 -07:00 |
| name_inference.rst | Fix typos (#30606) | 2019-12-02 20:17:42 -08:00 |
| named_tensor.rst | Bag of documentation fixes; fix more sphinx warnings (#27850) | 2019-10-15 07:31:14 -07:00 |
| nn.functional.rst | Breaks up NN module in docs so it loads faster. | 2019-06-11 09:38:41 -07:00 |
| nn.init.rst | Bag of documentation fixes; fix more sphinx warnings (#27850) | 2019-10-15 07:31:14 -07:00 |
| nn.rst | Pruning Functionality (#24076) | 2019-11-08 19:38:00 -08:00 |
| onnx.rst | [ONNX] Update ONNX landing page since 1.3 (#32805) | 2020-02-03 10:38:29 -08:00 |
| optim.rst | Fix capitalization inconsistency in optim.rst | 2019-12-04 08:17:03 -08:00 |
| packages.rst | Revert D19320493: Javadoc changes | 2020-01-09 14:23:30 -08:00 |
| quantization.rst | Updates to quantization documentation (#30288) | 2019-11-23 09:29:30 -08:00 |
| random.rst | Fix most documentation warnings (#27782) | 2019-10-13 10:34:01 -07:00 |
| rpc.rst | Explain RPC behavior when using Tensor as arg or return value | 2020-01-09 16:42:24 -08:00 |
| sparse.rst | Bag of documentation fixes; fix more sphinx warnings (#27850) | 2019-10-15 07:31:14 -07:00 |
| storage.rst | | |
| tensor_attributes.rst | Expose a torch.result_type and simplify tensor iterator | 2019-09-25 06:52:23 -07:00 |
| tensorboard.rst | Add method add_hparams to API doc (#27344) | 2019-10-03 17:07:45 -07:00 |
| tensors.rst | Added cummin | 2020-01-17 10:51:58 -08:00 |
| torch.rst | Added cummin | 2020-01-17 10:51:58 -08:00 |
| type_info.rst | Allow converting char tensor to numpy; add [fi]info.min (#15046) | 2018-12-24 09:11:24 -08:00 |