pytorch/docs/source
BowenBao 1e04ffd2fd [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491

* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic functions to access extra context when needed, through `SymbolicFunctionState`.
  * In particular, the `prim::PythonOp` special case can access its node without the node being passed through inputs. Downstream code will be updated accordingly, and a follow-up PR will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc. are moved out of `_run_symbolic_function` in utils.py and into symbolic_opset9.py.
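The per-domain dispatch described in the bullets above can be sketched as follows. This is a minimal illustrative sketch only: the registry layout, `register_symbolic`, and the fields of `SymbolicFunctionState` are assumptions made for illustration, not the exporter's actual internals.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Tuple

# Registry keyed by (domain, op) rather than assuming every op lives in `aten`.
_registry: Dict[Tuple[str, str], Callable] = {}

def register_symbolic(domain: str, op: str):
    """Register a symbolic function for `domain::op` (e.g. prim::PythonOp)."""
    def decorator(fn):
        _registry[(domain, op)] = fn
        return fn
    return decorator

@dataclass
class SymbolicFunctionState:
    """Extra context a symbolic function may need (hypothetical fields)."""
    node: Any
    opset_version: int

@register_symbolic("prim", "PythonOp")
def prim_python_op(state: SymbolicFunctionState, *inputs):
    # The node is available through the state object, so it no longer has
    # to be smuggled in through `inputs`.
    return ("PythonOp", state.node, inputs)

def run_symbolic_function(domain: str, op: str, state, *inputs):
    # Dispatch purely by (domain, op); prim ops are ordinary registry
    # entries instead of special cases hard-coded in this function.
    fn = _registry.get((domain, op))
    if fn is None:
        raise NotImplementedError(f"no symbolic for {domain}::{op}")
    return fn(state, *inputs)
```

With this shape, adding a symbolic for a new prim op is just another registration; `run_symbolic_function` itself never grows new special cases.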

Motivation for this change:
- Better maintainability and reduced complexity. It becomes easier to add symbolic functions for operators, both simple ones and complex ones (that need additional context), without the former needing to know about the existence of the latter.
- The original design is long outdated: prim ops are no longer rare special cases, and they shouldn't all be handled inside `_run_symbolic_function`, which had grown too unwieldy as a result. There were also prim op symbolic functions added in symbolic_opset#.py under the naming scheme `prim_[opname]`, creating separation and confusion.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483782

Pulled By: malfet

fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a

Co-authored-by: BowenBao <bowbao@microsoft.com>
2022-02-11 10:32:46 -08:00
_static clarify the documentation of torch.meshgrid (#62977) 2021-08-18 04:01:22 -07:00
_templates DOC: Merge extraheader block from theme instead of override (#70187) 2022-01-05 06:42:38 -08:00
community Update contribution_guide.rst (#64142) 2021-08-30 19:26:59 -07:00
elastic (torchelastic) make --max_restarts explicit in the quickstart and runner docs (#65838) 2021-09-29 19:29:01 -07:00
notes Fixes jiterator cache macro include + updates CUDA note with cache variables (#71452) 2022-01-18 19:42:11 -08:00
rpc Support Union in TorchScript (#64234) 2021-09-03 06:12:24 -07:00
scripts [docs] Add images to some activation functions (#65415) 2021-09-22 11:05:29 -07:00
__config__.rst
amp.rst rebase for autocast updates to include device_type and dtype flags (#61002) 2021-08-10 20:03:12 -07:00
autograd.rst Targeted documentation updates in autograd.functional (#72111) 2022-02-01 19:16:53 -08:00
backends.rst [Linalg] Add a runtime switch to let pytorch prefer a backend impl in linalg functions on GPU (#67980) 2021-12-03 19:06:30 -08:00
benchmark_utils.rst
bottleneck.rst
checkpoint.rst
complex_numbers.rst Grammatical update of tech docs (#61547) 2021-07-14 14:01:59 -07:00
conf.py Add transformation using cdf of distribution. (#72495) 2022-02-09 06:37:56 -08:00
cpp_extension.rst
cpp_index.rst
cuda.rst Document torch.cuda.ExternalStream, torch.cuda.caching_allocator_alloc and torch.cuda.caching_allocator_delete (#70126) 2022-01-12 15:44:40 -08:00
cudnn_persistent_rnn.rst Remove orphan from cuDNN persistent note (#65160) 2021-09-21 11:09:47 -07:00
cudnn_rnn_determinism.rst
data.rst [DataLoader] more clearly expose 'default_collate' and 'default_convert' to users (#69862) 2021-12-14 11:18:26 -08:00
ddp_comm_hooks.rst [DDP Comm Hook] Add debugging communication hooks to ddp_comm_hooks.rst (#64352) 2021-09-01 17:37:19 -07:00
deploy.rst [deploy] docs (#69251) 2021-12-01 21:55:18 -08:00
distributed.algorithms.join.rst Add tutorial link (#62785) 2021-08-05 17:28:02 -07:00
distributed.elastic.rst [1/n][torch/elastic] Move torchelastic docs *.rst (#148) 2021-05-04 00:57:56 -07:00
distributed.optim.rst [distributed][docs] Delete distributed optimizer section from RPC and add reference to namespace docs page (#68068) 2021-11-09 15:01:54 -08:00
distributed.rst Implement gather primitive for ProcessGroupNCCL (#66745) 2022-01-27 11:35:01 -08:00
distributions.rst [Reinstate] Wishart distribution (#70377) 2021-12-30 11:41:46 -08:00
dlpack.rst Lint trailing newlines (#54737) 2021-03-30 13:09:52 -07:00
docutils.conf
fft.rst C++ API and docs for hfftn (#66127) 2021-10-07 12:48:36 -07:00
fsdp.rst make fsdp folder to be public (#72084) 2022-02-02 07:47:17 -08:00
futures.rst Update docs to mention CUDA support for Future (#50048) 2021-05-11 08:26:33 -07:00
fx.rst Fix for retracing documentation which would break for n-ary operators (#71599) 2022-01-24 12:04:25 -08:00
hub.rst Add more details to the known limitations section of torchhub docs (#69970) 2021-12-16 02:43:48 -08:00
index.rst make fsdp folder to be public (#72084) 2022-02-02 07:47:17 -08:00
jit_builtin_functions.rst Lint trailing newlines (#54737) 2021-03-30 13:09:52 -07:00
jit_language_reference_v2.rst Add Union type to TorchScript Language Ref (#69514) 2021-12-07 12:53:54 -08:00
jit_language_reference.rst fix typos in jit_language_reference.rst (#68706) 2021-11-22 19:09:06 -08:00
jit_python_reference.rst [JIT] improve documentation (#57991) 2021-05-19 11:47:32 -07:00
jit_unsupported.rst
jit.rst Back out "D30740897 Add fusion enabled apis" (#64500) 2021-09-04 20:55:58 -07:00
linalg.rst [Array API] Add linalg.diagonal (#70599) 2022-01-26 00:05:37 -08:00
math-quantizer-equation.png
mobile_optimizer.rst
model_zoo.rst
monitor.rst torch/monitor: merge Interval and FixedCount stats (#72009) 2022-01-30 15:19:09 -08:00
multiprocessing.rst
name_inference.rst Abladawood patch 1 (#58496) 2021-05-20 10:32:18 -07:00
named_tensor.rst
nn.functional.rst Revert D34154832: [pytorch][PR] Add multi_head_attention_forward to functional rst docs 2022-02-10 21:05:52 -08:00
nn.init.rst
nn.rst Implements the orthogonal parametrization (#62089) 2021-08-30 13:12:07 -07:00
onnx.rst [ONNX] Refactor _run_symbolic_function (#67573) (#68491) 2022-02-11 10:32:46 -08:00
optim.rst To add SequentialLR to PyTorch Core Schedulers (#64037) 2021-09-09 09:36:32 -07:00
package.rst Minor changes in documentation (#68557) 2021-11-18 17:57:16 -08:00
pipeline.rst Minor changes in documentation (#68557) 2021-11-18 17:57:16 -08:00
profiler.rst Add low level torch.profiler.kineto_profile base class (#63302) 2021-12-14 14:47:43 -08:00
quantization-support.rst [quant][bc-breaking] Remove QConfigDynamic from quantization api (#69875) 2021-12-17 23:10:06 -08:00
quantization.rst [quant][docs] quantized model save/load instructions (#69789) 2021-12-13 20:23:59 -08:00
random.rst
rpc.rst [distributed][docs] Delete distributed optimizer section from RPC and add reference to namespace docs page (#68068) 2021-11-09 15:01:54 -08:00
sparse.rst Add missing entry for sampled_addmm in sparse.rst (#72312) 2022-02-07 15:59:06 -08:00
special.rst [special] special alias for softmax (#62251) 2021-10-01 03:55:32 -07:00
storage.rst Lint trailing newlines (#54737) 2021-03-30 13:09:52 -07:00
tensor_attributes.rst Remove legacy constructor calls from pytorch codebase. (#54142) 2021-04-11 15:45:17 -07:00
tensor_view.rst Correcting a minor typo: "Users should pay" instead of "Users should be pay" (#72500) 2022-02-08 15:04:44 -08:00
tensorboard.rst
tensors.rst amend tensors.rst and torch.rst for doc generation (#69030) 2021-11-30 12:04:13 -08:00
testing.rst move torch.testing from prototype to beta (#69668) 2021-12-17 09:52:47 -08:00
torch.ao.ns._numeric_suite_fx.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.ao.ns._numeric_suite.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.overrides.rst
torch.rst Structured Kernels for index_copy, add out variant (#67329) 2022-02-08 14:51:06 -08:00
type_info.rst [Docs] Mention torch.bfloat16 in torch.finfo (#68496) 2021-11-18 17:52:41 -08:00