pytorch/docs/source
Richard Zou 41846e205e [torch.func] Setup torch.func, populate it with all transforms (#91016)
This PR sets up torch.func and populates it with the following APIs:
- grad
- grad_and_value
- vjp
- jvp
- jacrev
- jacfwd
- hessian
- functionalize
- vmap
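
The transforms above compose with one another. As a minimal sketch (assuming a PyTorch build, >= 2.0, where `torch.func` is importable), `grad` turns a scalar-valued function into one that returns the gradient, and `vmap` lifts it over a batch dimension:

```python
import torch
from torch.func import grad, vmap

# A scalar-valued loss; its gradient w.r.t. x is 2*x.
def f(x):
    return (x ** 2).sum()

x = torch.tensor([1.0, 2.0, 3.0])
g = grad(f)(x)
print(g)  # tensor([2., 4., 6.])

# vmap maps grad over a leading batch dimension.
batch = torch.stack([x, 2 * x])
gb = vmap(grad(f))(batch)
print(gb)
```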

It also renames all instances of `functorch` to `torch.func` in the docs
for those APIs.

We rewrite the `__module__` fields on some of the above APIs so that they
fit PyTorch's definition of a public API.
- For an API to be public, it must have a `__module__` that points to a
  public PyTorch submodule. However, `torch._functorch.eager_transforms`
  is not public due to the leading underscore.
- The solution is to rewrite `__module__` to point to where the API is
  exposed (torch.func). This is what both Numpy and JAX do for their
  APIs.
- h/t pmeier in
  https://github.com/pytorch/pytorch/issues/90284#issuecomment-1348595246
  for the idea and code.
- The helper function, `exposed_in`, is confined to
  `torch._functorch/utils` for now because we're not completely sure
  whether this should be the long-term solution.
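
The mechanism is just a decorator that overwrites `__module__`. A hypothetical re-implementation of an `exposed_in`-style helper (the real one lives in `torch._functorch/utils`; this sketch is not its exact code):

```python
def exposed_in(module):
    """Decorator that rewrites a function's __module__ so that
    introspection and docs tooling report the public location the API
    is exposed from, not the private module where it is defined."""
    def decorator(fn):
        fn.__module__ = module
        return fn
    return decorator

@exposed_in("torch.func")
def grad(func):
    """Stand-in for the real transform."""
    raise NotImplementedError

print(grad.__module__)  # "torch.func"
```

Without the decorator, `grad.__module__` would report the defining module, which for the real APIs is the private `torch._functorch.eager_transforms`.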

Implications for functorch.* APIs:
- functorch.grad is the same object as torch.func.grad
- this means that the functorch.grad docstring is actually the
  torch.func.grad docstring and will refer to torch.func instead of
  functorch.
- This isn't really a problem since the plan of record is to deprecate
  functorch in favor of torch.func. We could fix these if we really
  wanted to, but I'm not sure a fix is worth maintaining.
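
Why the docstrings are shared: a re-export binds the same function object under a second name, so there is only one `__doc__`. A small simulation (module and function names here are stand-ins, not the real import machinery):

```python
import types

# torch.func defines grad; functorch merely aliases the same object.
torch_func = types.ModuleType("torch.func")

def grad(func):
    """torch.func.grad docstring (refers to torch.func)."""
    raise NotImplementedError

torch_func.grad = grad

functorch = types.ModuleType("functorch")
functorch.grad = torch_func.grad  # an alias, not a copy

print(functorch.grad is torch_func.grad)  # True
print(functorch.grad.__doc__)             # the torch.func docstring
```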

Test Plan:
- view docs preview

Future:
- vmap should actually just be torch.vmap. This requires an extra step
  where I need to test internal callsites, so, I'm separating it into a
  different PR.
- make_fx should be in torch.func to be consistent with `import
  functorch`. This one is a bit more of a headache to deal with w.r.t.
  the public API, so I'm going to handle it separately.
- beef up func.rst with everything else currently on the functorch
  documentation website. func.rst is currently just an empty shell.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91016
Approved by: https://github.com/samdow
2022-12-20 00:00:52 +00:00
_static Move Dynamo docs back to core (#89769) 2022-11-29 04:38:53 +00:00
_templates Fix left nav (#78552) 2022-06-01 00:49:53 +00:00
community Update Persons of Interest (#90069) 2022-12-02 23:06:57 +00:00
dynamo Replace TORCHINDUCTOR_TRACE with TORCH_COMPILE_DEBUG in documentation (#91011) 2022-12-19 14:45:27 +00:00
elastic Add watchdog to TorchElastic agent and trainers (#84081) 2022-09-07 00:17:20 +00:00
notes Improve Autograd Documentation Clarity (#89401) 2022-12-06 06:45:04 +00:00
rpc Support Union in TorchScript (#64234) 2021-09-03 06:12:24 -07:00
scripts Doc for Canonical Aten and Prims IR (#90644) 2022-12-13 21:30:47 +00:00
_dynamo.rst Add torch._dynamo to docs (#89510) 2022-11-23 16:33:13 +00:00
amp.rst Remove deprecated torch.matrix_rank (#70981) 2022-09-22 17:40:46 +00:00
autograd.rst Change torch.autograd.graph.disable_saved_tensors_hooks to be public API (#85994) 2022-10-03 16:25:01 +00:00
backends.rst Create native function for determining which implementation of SDP to call (#89029) 2022-11-16 03:07:54 +00:00
benchmark_utils.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
bottleneck.rst add itt unit test and docstrings (#84848) 2022-09-28 01:39:58 +00:00
checkpoint.rst
complex_numbers.rst Add a note on CUDA 11.6 (#80363) 2022-06-27 21:34:24 +00:00
conf.py [ONNX] Document ONNX diagnostics (#88371) 2022-11-16 19:21:46 +00:00
config_mod.rst rename config module file to work with gh pages better 2022-03-10 20:41:44 +00:00
cpp_extension.rst Check clang++/g++ version when compiling CUDA extensions (#63230) 2022-02-24 08:32:32 +00:00
cpp_index.rst Add C++ Landing Page (#38450) 2020-05-14 16:02:01 -07:00
cuda._sanitizer.rst Fix typos under docs directory (#88033) 2022-10-31 19:31:56 +00:00
cuda.rst Add Pluggable CUDA allocator backend (#86786) 2022-11-23 17:54:36 +00:00
cudnn_persistent_rnn.rst Remove orphan from cuDNN persistent note (#65160) 2021-09-21 11:09:47 -07:00
cudnn_rnn_determinism.rst Forbid trailing whitespace (#53406) 2021-03-05 17:22:55 -08:00
data.rst [DataLoader] Removing DataLoader2 related code (#88848) 2022-11-11 22:27:01 +00:00
ddp_comm_hooks.rst Fix two small typos in ddp_comm_hooks.rst (#82047) 2022-07-23 19:10:57 +00:00
deploy.rst Delete torch::deploy from pytorch core (#85953) 2022-10-06 07:20:16 +00:00
distributed.algorithms.join.rst Add tutorial link (#62785) 2021-08-05 17:28:02 -07:00
distributed.checkpoint.rst [PT-D][Checkpointing] Move distributed checkpointing from torch.distributed._shard.checkpoint to torch.distributed.checkpoint (#88698) 2022-11-16 21:06:38 +00:00
distributed.elastic.rst [1/n][torch/elastic] Move torchelastic docs *.rst (#148) 2021-05-04 00:57:56 -07:00
distributed.optim.rst [distributed][docs] Delete distributed optimizer section from RPC and add reference to namespace docs page (#68068) 2021-11-09 15:01:54 -08:00
distributed.rst [Doc][Distributed] Add missing functions to distributed.rst (#89905) 2022-12-04 07:22:54 +00:00
distributed.tensor.parallel.rst Move tensor_parallel out to distributed.tensor folder (#89878) 2022-11-30 22:13:10 +00:00
distributions.rst [Reinstate] Wishart distribution (#70377) 2021-12-30 11:41:46 -08:00
dlpack.rst Lint trailing newlines (#54737) 2021-03-30 13:09:52 -07:00
docutils.conf
fft.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
fsdp.rst [FSDP()][3/N] Refactor public APIs (#87917) 2022-10-31 16:45:21 +00:00
func.api.rst [torch.func] Setup torch.func, populate it with all transforms (#91016) 2022-12-20 00:00:52 +00:00
func.rst [torch.func] Setup torch.func, populate it with all transforms (#91016) 2022-12-20 00:00:52 +00:00
futures.rst Update docs to mention CUDA support for Future (#50048) 2021-05-11 08:26:33 -07:00
fx.rst prepare removal of deprecated functionality in torch.testing (#87969) 2022-11-02 14:04:48 +00:00
hub.rst Add more details to the known limitations section of torchhub docs (#69970) 2021-12-16 02:43:48 -08:00
index.rst [torch.func] Setup torch.func, populate it with all transforms (#91016) 2022-12-20 00:00:52 +00:00
ir.rst Doc for Canonical Aten and Prims IR (#90644) 2022-12-13 21:30:47 +00:00
jit_builtin_functions.rst Lint trailing newlines (#54737) 2021-03-30 13:09:52 -07:00
jit_language_reference_v2.rst Fix typos in docs (#80602) 2022-08-29 23:32:44 +00:00
jit_language_reference.rst (Re-open) Adds cudaMallocAsync as an alternative backend for the CUDA allocator (#82682) 2022-10-12 03:44:21 +00:00
jit_python_reference.rst [JIT] improve documentation (#57991) 2021-05-19 11:47:32 -07:00
jit_unsupported.rst (Re-open) Adds cudaMallocAsync as an alternative backend for the CUDA allocator (#82682) 2022-10-12 03:44:21 +00:00
jit_utils.rst Create __init__.py (#78629) 2022-06-03 18:14:21 +00:00
jit.rst torch.jit doc link for nvfuser readme.md (#77780) 2022-07-07 23:25:35 +00:00
library.rst Add docs for Python Registration 2022-06-13 23:21:23 +00:00
linalg.rst Add a note on the stability of linalg functions. (#88313) 2022-11-07 22:44:23 +00:00
masked.rst Update masked.rst (#89758) 2022-11-28 17:55:43 +00:00
math-quantizer-equation.png
mobile_optimizer.rst [Vulkan] Add Vulkan Rewrite to Transfer Inputs and Outputs to Vulkan and CPU Backends Respectively (#87432) 2022-10-31 14:18:45 +00:00
model_zoo.rst
monitor.rst torch/monitor: merge Interval and FixedCount stats (#72009) 2022-01-30 23:21:59 +00:00
multiprocessing.rst Forbid trailing whitespace (#53406) 2021-03-05 17:22:55 -08:00
name_inference.rst Abladawood patch 1 (#58496) 2021-05-20 10:32:18 -07:00
named_tensor.rst Add torch.unflatten and improve its docs (#81399) 2022-07-29 15:02:42 +00:00
nested.rst Revert "remove torch.equal usages (#89527)" 2022-12-02 21:36:13 +00:00
nn.functional.rst Add Dropout1d module 2022-06-15 14:39:07 +00:00
nn.init.rst update nn.init doc to reflect the no_grad (#80882) 2022-07-07 17:19:29 +00:00
nn.rst Add Dropout1d module 2022-06-15 14:39:07 +00:00
onnx_diagnostics.rst [ONNX] Document ONNX diagnostics (#88371) 2022-11-16 19:21:46 +00:00
onnx_supported_aten_ops.rst [ONNX] Update ONNX documentation to include unsupported operators (#84496) 2022-09-16 23:48:37 +00:00
onnx.rst [ONNX] Add onnx-script into ONNX docs (#89078) 2022-11-17 06:27:17 +00:00
optim.rst [doc] LR scheduler example fix (#86629) 2022-10-11 21:41:50 +00:00
package.rst Fix typos in torch.package documentation (#82994) 2022-08-08 20:19:17 +00:00
pipeline.rst Minor changes in documentation (#68557) 2021-11-18 17:57:16 -08:00
profiler.rst Fix ITT unit-tests if PyTorch is compiled with USE_ITT=OFF (#86199) 2022-10-04 21:57:05 +00:00
quantization-accuracy-debugging.rst Fix typo under docs directory (#87583) 2022-10-24 23:52:44 +00:00
quantization-backend-configuration.rst update quantization doc: add x86 backend as default backend of server inference (#86794) 2022-12-02 02:10:25 +00:00
quantization-support.rst [ao] quantize.py fixing public v private (#87521) 2022-12-14 22:50:39 +00:00
quantization.rst update quantization doc: add x86 backend as default backend of server inference (#86794) 2022-12-02 02:10:25 +00:00
random.rst Remove duplicated entries in random.rst (#39725) 2020-06-10 16:51:15 -07:00
rpc.rst Fix typo under docs directory and RELEASE.md (#85896) 2022-09-29 21:41:59 +00:00
signal.rst Nuttall window (#90103) 2022-12-16 09:05:53 +00:00
sparse.rst Fix typos in .md and .rst files (#88962) 2022-11-17 03:37:02 +00:00
special.rst [primTorch] special: j0, j1, spherical_j0 (#86049) 2022-10-04 18:21:46 +00:00
storage.rst Deprecate TypedStorage, its derived classes, and all of their public methods (#85303) 2022-11-08 18:11:01 +00:00
tensor_attributes.rst Chore: Add 'mps' to the docs of tensor_attributes (#86585) 2022-10-14 19:59:33 +00:00
tensor_view.rst Correcting a minor typo: "Users should pay" instead of "Users should be pay" (#72500) 2022-02-08 23:08:25 +00:00
tensorboard.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
tensors.rst Remove deprecated torch.lstsq (#70980) 2022-09-23 00:16:55 +00:00
testing.rst document torch.testing.assert_allclose (#89526) 2022-12-01 11:22:50 +00:00
torch.ao.ns._numeric_suite_fx.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.ao.ns._numeric_suite.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.overrides.rst Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"" 2022-05-18 18:40:57 +00:00
torch.rst Add torch.compile implementation (#89607) 2022-12-01 20:17:52 +00:00
type_info.rst ENH: Convert finfo.tiny to finfo.smallest_normal (#76292) 2022-05-20 00:59:48 +00:00