[Doc] Deprecation notice for only_inputs argument (#63631)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/63544.

Changed the docstring accordingly. I'm new here and not sure whether the style is okay; please check.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63631

Reviewed By: ejguan

Differential Revision: D30459439

Pulled By: soulitzer

fbshipit-source-id: 8df3c509d1dd39764815b099ab47229550126cbe
北海若 2021-08-20 15:45:12 -07:00 committed by Facebook GitHub Bot
parent bcf8e2f57e
commit efe01c59e3

@@ -173,17 +173,18 @@ def grad(
gradients w.r.t. each of the outputs. If an output doesn't require_grad,
then the gradient can be ``None``).
If ``only_inputs`` is ``True``, the function will only return a list of gradients
w.r.t. the specified inputs. If it's ``False``, then gradients w.r.t. all remaining
leaves will still be computed, and will be accumulated into their ``.grad``
attribute.
.. note::
If you run any forward ops, create ``grad_outputs``, and/or call ``grad``
in a user-specified CUDA stream context, see
:ref:`Stream semantics of backward passes<bwd-cuda-stream-semantics>`.
.. note::
The ``only_inputs`` argument is deprecated and is now ignored (it defaults to
``True``). To accumulate gradients for other parts of the graph, please use
``torch.autograd.backward``.
Args:
outputs (sequence of Tensor): outputs of the differentiated function.
inputs (sequence of Tensor): Inputs w.r.t. which the gradient will be
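The behavioral difference the note describes can be illustrated with a minimal sketch (not part of the patch; the tensors and values here are made up for illustration): ``torch.autograd.grad`` only returns gradients w.r.t. the inputs you pass, while ``torch.autograd.backward`` accumulates gradients into the ``.grad`` attribute of all leaf tensors.

```python
import torch

# Hypothetical example: y = sum(x**2), so dy/dx = 2*x.
x = torch.tensor([3.0], requires_grad=True)
y = (x ** 2).sum()

# torch.autograd.grad returns gradients w.r.t. the specified inputs only;
# it does NOT populate x.grad.
(gx,) = torch.autograd.grad(y, (x,))
print(gx)      # tensor([6.])
print(x.grad)  # None: nothing was accumulated

# torch.autograd.backward instead accumulates into .grad of all leaves.
y2 = (x ** 2).sum()
torch.autograd.backward(y2)
print(x.grad)  # tensor([6.])
```

This is why passing ``only_inputs=False`` to ``grad`` is no longer supported: the accumulate-into-``.grad`` behavior belongs to ``backward``.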