pytorch/docs/source/index.rst
Richard Zou 41846e205e [torch.func] Setup torch.func, populate it with all transforms (#91016)
This PR sets up torch.func and populates it with the following APIs:
- grad
- grad_and_value
- vjp
- jvp
- jacrev
- jacfwd
- hessian
- functionalize
- vmap
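The semantics of these transforms can be sketched without PyTorch. Below is a minimal pure-Python illustration (not the torch.func implementation, which is built on PyTorch's dispatcher) of what `jvp` and `grad` compute for scalar functions, using dual numbers for forward-mode AD:

```python
# Conceptual sketch only: mimics torch.func.jvp / torch.func.grad for
# scalar functions via dual numbers (forward-mode autodiff).
from dataclasses import dataclass

@dataclass
class Dual:
    primal: float   # f(x)
    tangent: float  # (df/dx) * v, carried alongside the value

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.primal * other.primal,
                    self.primal * other.tangent + self.tangent * other.primal)
    __rmul__ = __mul__

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.primal + other.primal, self.tangent + other.tangent)
    __radd__ = __add__

def jvp(f, x, v):
    """Return (f(x), Jf(x) @ v) for scalar x, like torch.func.jvp."""
    out = f(Dual(x, v))
    return out.primal, out.tangent

def grad(f):
    """grad(f)(x) for scalar-valued f, like torch.func.grad."""
    return lambda x: jvp(f, x, 1.0)[1]

f = lambda x: 3.0 * x * x + 2.0 * x   # f'(x) = 6x + 2
print(jvp(f, 2.0, 1.0))   # (16.0, 14.0)
print(grad(f)(3.0))       # 20.0
```

The real transforms are composable and operate on tensors; this sketch only shows the contract (value plus directional derivative) that the names refer to.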

It also renames all instances of `functorch` to `torch.func` in the docs
for those APIs.


We rewrite the `__module__` fields on some of the above APIs so that the
APIs fit PyTorch's public API definition.
- For an API to be public, it must have a `__module__` that points to a
  public PyTorch submodule. However, `torch._functorch.eager_transforms`
  is not public due to the leading underscore.
- The solution is to rewrite `__module__` to point to where the API is
  exposed (torch.func). This is what both Numpy and JAX do for their
  APIs.
- h/t pmeier in
  https://github.com/pytorch/pytorch/issues/90284#issuecomment-1348595246
  for the idea and code.
- The helper function, `exposed_in`, is confined to
  `torch._functorch.utils` for now because we're not completely sure if
  this should be the long-term solution.
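A minimal sketch of what such an `exposed_in` helper could look like (the actual helper lives in a private module and may differ in details; the `grad` below is a toy stand-in, not the real transform):

```python
# Sketch of an ``exposed_in`` decorator: rewrite ``__module__`` so that an
# API defined in a private module reports the public module it is exposed
# in. This is the same trick NumPy and JAX use for their public APIs.
def exposed_in(module_name):
    def decorator(fn):
        fn.__module__ = module_name
        return fn
    return decorator

@exposed_in("torch.func")
def grad(f):
    """Toy stand-in for the real transform."""
    ...

# Tools that inspect ``__module__`` (docs, public-API checks) now see the
# public location rather than the private defining module.
print(grad.__module__)  # "torch.func"
```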

Implications for `functorch.*` APIs:
- `functorch.grad` is the same object as `torch.func.grad`.
- This means that the `functorch.grad` docstring is actually the
  `torch.func.grad` docstring and will refer to torch.func instead of
  functorch.
- This isn't really a problem, since the plan of record is to deprecate
  functorch in favor of torch.func. We can fix these if we really want
  to, but I'm not sure a solution is worth maintaining.
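The aliasing described above can be reproduced with plain modules. A hypothetical sketch (module names invented for illustration) showing why a re-exported function carries one shared docstring:

```python
import types

# Build two in-memory modules; ``legacy`` re-exports ``public``'s function,
# mimicking how functorch.grad is the same object as torch.func.grad.
public = types.ModuleType("public")
legacy = types.ModuleType("legacy")

def grad(f):
    """Docstring written for the public module."""
    ...

public.grad = grad
legacy.grad = public.grad  # re-export: the same object, not a copy

print(legacy.grad is public.grad)  # True: one object, two access paths
print(legacy.grad.__doc__)         # one shared docstring for both paths
```

Because there is a single function object, patching the docstring per access path would require wrapper objects, which is the maintenance cost alluded to above.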

Test Plan:
- view docs preview

Future:
- vmap should actually just be torch.vmap. This requires an extra step
  where I need to test internal callsites, so, I'm separating it into a
  different PR.
- `make_fx` should be in torch.func to be consistent with `import
  functorch`. This one is a bit more of a headache to deal with w.r.t.
  the public API, so I'm going to deal with it separately.
- Beef up func.rst with everything else currently on the functorch
  documentation website. func.rst is currently just an empty shell.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91016
Approved by: https://github.com/samdow
2022-12-20 00:00:52 +00:00


.. PyTorch documentation master file, created by
   sphinx-quickstart on Fri Dec 23 13:31:47 2016.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

:github_url: https://github.com/pytorch/pytorch

PyTorch documentation
===================================

PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.

Features described in this documentation are classified by release status:

*Stable:* These features will be maintained long-term and there should generally
be no major performance limitations or gaps in documentation.
We also expect to maintain backwards compatibility (although
breaking changes can happen and notice will be given one release ahead
of time).

*Beta:* These features are tagged as Beta because the API may change based on
user feedback, because the performance needs to improve, or because
coverage across operators is not yet complete. For Beta features, we are
committing to seeing the feature through to the Stable classification.
We are not, however, committing to backwards compatibility.

*Prototype:* These features are typically not available as part of
binary distributions like PyPI or Conda, except sometimes behind run-time
flags, and are at an early stage for feedback and testing.

.. toctree::
   :glob:
   :maxdepth: 1
   :caption: Community

   community/*

.. toctree::
   :glob:
   :maxdepth: 1
   :caption: Developer Notes

   notes/*

.. toctree::
   :glob:
   :maxdepth: 1
   :caption: torch.compile
   :hidden:

   dynamo/index
   dynamo/installation
   dynamo/get-started
   dynamo/guards-overview
   dynamo/custom-backends
   dynamo/deep-dive
   dynamo/troubleshooting
   dynamo/faq
   ir

.. toctree::
   :maxdepth: 1
   :caption: Language Bindings

   cpp_index
   Javadoc <https://pytorch.org/javadoc/>
   torch::deploy <deploy>

.. toctree::
   :glob:
   :maxdepth: 2
   :caption: Python API

   torch
   nn
   nn.functional
   tensors
   tensor_attributes
   tensor_view
   torch.amp <amp>
   torch.autograd <autograd>
   torch.library <library>
   cuda
   torch.backends <backends>
   torch.distributed <distributed>
   torch.distributed.algorithms.join <distributed.algorithms.join>
   torch.distributed.elastic <distributed.elastic>
   torch.distributed.fsdp <fsdp>
   torch.distributed.optim <distributed.optim>
   torch.distributed.tensor.parallel <distributed.tensor.parallel>
   torch.distributed.checkpoint <distributed.checkpoint>
   torch.distributions <distributions>
   torch._dynamo <_dynamo>
   torch.fft <fft>
   torch.func <func>
   futures
   fx
   torch.hub <hub>
   torch.jit <jit>
   torch.linalg <linalg>
   torch.monitor <monitor>
   torch.signal <signal>
   torch.special <special>
   torch.overrides
   torch.package <package>
   profiler
   nn.init
   onnx
   onnx_diagnostics
   optim
   complex_numbers
   ddp_comm_hooks
   pipeline
   quantization
   rpc
   torch.random <random>
   masked
   torch.nested <nested>
   sparse
   storage
   torch.testing <testing>
   torch.utils.benchmark <benchmark_utils>
   torch.utils.bottleneck <bottleneck>
   torch.utils.checkpoint <checkpoint>
   torch.utils.cpp_extension <cpp_extension>
   torch.utils.data <data>
   torch.utils.jit <jit_utils>
   torch.utils.dlpack <dlpack>
   torch.utils.mobile_optimizer <mobile_optimizer>
   torch.utils.model_zoo <model_zoo>
   torch.utils.tensorboard <tensorboard>
   type_info
   named_tensor
   name_inference
   torch.__config__ <config_mod>

.. toctree::
   :maxdepth: 1
   :caption: Libraries

   torchaudio <https://pytorch.org/audio/stable>
   TorchData <https://pytorch.org/data>
   TorchRec <https://pytorch.org/torchrec>
   TorchServe <https://pytorch.org/serve>
   torchtext <https://pytorch.org/text/stable>
   torchvision <https://pytorch.org/vision/stable>
   PyTorch on XLA Devices <http://pytorch.org/xla/>

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`