pytorch/docs/source
Jing Xu f988aa2b3f Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289)
A more detailed description of the benefits can be found in #41001. This is Intel's counterpart to NVIDIA's NVTX (https://pytorch.org/docs/stable/autograd.html#torch.autograd.profiler.emit_nvtx).

ITT is an API for labeling trace data during application execution across different Intel tools.
To integrate Intel® VTune™ Profiler into Kineto, ITT needs to be integrated into PyTorch first. It works with the standalone VTune Profiler (https://www.intel.com/content/www/us/en/developer/tools/oneapi/vtune-profiler.html) today, and will work with Kineto-integrated VTune functionality in the future.
It works on both Intel CPU and Intel XPU devices.

Pitch
Add VTune Profiler's ITT API function calls to annotate PyTorch ops, as well as developer-customized code scopes on CPU, analogous to NVTX for NVIDIA GPUs.

This PR rebases the code changes at https://github.com/pytorch/pytorch/pull/61335 to the latest master branch.

Usage example:
```python
with torch.autograd.profiler.emit_itt():
    for i in range(10):
        torch.itt.range_push(f'step_{i}')
        model(input)
        torch.itt.range_pop()
```

cc @ilia-cher @robieta @chaekit @gdankel @bitfort @ngimel @orionr @nbcsm @guotuofeng @guyang3532 @gaoteng-git
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63289
Approved by: https://github.com/malfet
2022-06-30 05:14:03 +00:00
_static clarify the documentation of torch.meshgrid (#62977) 2021-08-18 04:01:22 -07:00
_templates Fix left nav (#78552) 2022-06-01 00:49:53 +00:00
community Adjust wording for consistency (#79758) 2022-06-17 01:39:30 +00:00
elastic (torchelastic) make --max_restarts explicit in the quickstart and runner docs (#65838) 2021-09-29 19:29:01 -07:00
notes [AMP] Use generic autocast in example, specify dtype (#79579) 2022-06-17 21:32:51 +00:00
rpc Support Union in TorchScript (#64234) 2021-09-03 06:12:24 -07:00
scripts [ONNX] Clean up onnx_supported_ops (#79424) 2022-06-23 20:44:51 +00:00
amp.rst Remove operators that support BFloat16 in the fp32 cast policy list of AutocastCPU (#77623) 2022-05-17 16:49:17 +00:00
autograd.rst Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289) 2022-06-30 05:14:03 +00:00
backends.rst Revert "[cuDNN V8 API] (reopen) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limitedCudnnv8 benchmark limit (#77002)" 2022-05-24 21:52:35 +00:00
benchmark_utils.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
bottleneck.rst Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289) 2022-06-30 05:14:03 +00:00
checkpoint.rst
complex_numbers.rst Add a note on CUDA 11.6 (#80363) 2022-06-27 21:34:24 +00:00
conf.py Wconstab/reland pysymint (#79795) 2022-06-20 22:55:06 +00:00
config_mod.rst rename config module file to work with gh pages better 2022-03-10 20:41:44 +00:00
cpp_extension.rst Check clang++/g++ version when compiling CUDA extensions (#63230) 2022-02-24 08:32:32 +00:00
cpp_index.rst
cuda.rst Python Jiterator supports multiple outputs (#78139) 2022-05-24 21:52:56 +00:00
cudnn_persistent_rnn.rst Remove orphan from cuDNN persistent note (#65160) 2021-09-21 11:09:47 -07:00
cudnn_rnn_determinism.rst
data.rst [DataLoader] Minor documentation improvement 2022-05-31 15:59:46 +00:00
ddp_comm_hooks.rst Functionality/pickling for commhooks (#79334) 2022-06-16 23:15:34 +00:00
deploy.rst Back out "Back out "[torch deploy] Update deploy.rst with working simple example"" (#76713) 2022-05-03 14:12:18 +00:00
distributed.algorithms.join.rst Add tutorial link (#62785) 2021-08-05 17:28:02 -07:00
distributed.elastic.rst
distributed.optim.rst [distributed][docs] Delete distributed optimimzer section from RPC and add reference to namespace docs page (#68068) 2021-11-09 15:01:54 -08:00
distributed.rst Add TORCH_CPP_LOG_LEVEL to the docs 2022-05-03 17:01:11 +00:00
distributions.rst [Reinstate] Wishart distribution (#70377) 2021-12-30 11:41:46 -08:00
dlpack.rst
docutils.conf
fft.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
fsdp.rst make fsdp folder to be public (#72084) 2022-02-02 15:50:14 +00:00
futures.rst
fx.rst Introduce Z3 types and utility functions for constraint generation (#80084) 2022-06-25 22:27:33 +00:00
hub.rst Add more details to the known limitations section of torchhub docs (#69970) 2021-12-16 02:43:48 -08:00
index.rst Add docs for Python Registration 2022-06-13 23:21:23 +00:00
jit_builtin_functions.rst
jit_language_reference_v2.rst Add Union type to TorchScript Language Ref (#69514) 2021-12-07 12:53:54 -08:00
jit_language_reference.rst fix typos in jit_language_reference.rst (#68706) 2021-11-22 19:09:06 -08:00
jit_python_reference.rst
jit_unsupported.rst
jit_utils.rst Create __init__.py (#78629) 2022-06-03 18:14:21 +00:00
jit.rst adding a quick link to nvfuser README.md in jit doc for 1.12 release (#78160) 2022-06-09 17:28:17 +00:00
library.rst Add docs for Python Registration 2022-06-13 23:21:23 +00:00
linalg.rst Add linalg.lu_solve 2022-06-07 22:28:28 +00:00
math-quantizer-equation.png
mobile_optimizer.rst
model_zoo.rst
monitor.rst torch/monitor: merge Interval and FixedCount stats (#72009) 2022-01-30 23:21:59 +00:00
multiprocessing.rst
name_inference.rst
named_tensor.rst
nested.rst to_padded_tensor doc v0 (#78657) 2022-06-03 14:27:31 +00:00
nn.functional.rst Add Dropout1d module 2022-06-15 14:39:07 +00:00
nn.init.rst add trunc_normal_ function to doc of torch.nn.init 2022-05-06 14:33:08 +00:00
nn.rst Add Dropout1d module 2022-06-15 14:39:07 +00:00
onnx_supported_aten_ops.rst Add list of supported ATen ops by ONNX converter into torch.onnx page 2022-04-07 00:05:44 +00:00
onnx.rst [ONNX] Fix case in type annotation in docs (#78388) 2022-05-31 19:27:34 +00:00
optim.rst Remove misleading statement in optim.Optimizer docs (#76967) 2022-05-10 14:39:53 +00:00
package.rst Fix typo in torch.package code and docs (#77604) 2022-05-17 17:35:39 +00:00
pipeline.rst Minor changes in documentation (#68557) 2021-11-18 17:57:16 -08:00
profiler.rst Add low level torch.profiler.kineto_profile base class (#63302) 2021-12-14 14:47:43 -08:00
quantization-accuracy-debugging.rst quant docs: best practices for quantization accuracy debugging 2022-05-17 12:16:52 +00:00
quantization-backend-configuration.rst quantization: autogenerate quantization backend configs for documentation (#75126) 2022-04-04 22:22:30 +00:00
quantization-support.rst [quant] Quantizable documentation (#79957) 2022-06-24 16:55:15 +00:00
quantization.rst [quant] Quantizable documentation (#79957) 2022-06-24 16:55:15 +00:00
random.rst
rpc.rst Add note in RPC docs about retries. (#73601) 2022-03-03 00:29:31 +00:00
sparse.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
special.rst torch.special.scaled_modified_bessel_k0 (#78900) 2022-06-29 14:53:37 +00:00
storage.rst Virtualize <type>Storage classes (#66970) 2022-03-22 23:44:48 +00:00
tensor_attributes.rst fix wrong indexing of class names in docs 2022-03-02 22:21:21 +00:00
tensor_view.rst Correcting a minor typo: "Users should pay" instead of "Users should be pay" (#72500) 2022-02-08 23:08:25 +00:00
tensorboard.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
tensors.rst Unprivate _index_reduce and add documentation 2022-05-13 19:48:38 +00:00
testing.rst promote torch.testing to stable (#73348) 2022-02-25 06:30:31 +00:00
torch.ao.ns._numeric_suite_fx.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.ao.ns._numeric_suite.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.overrides.rst Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"" 2022-05-18 18:40:57 +00:00
torch.rst Autogen Tags enum, and allow specifying tags while defining an op 2022-06-11 00:29:32 +00:00
type_info.rst ENH: Convert finfo.tiny to finfo.smallest_normal (#76292) 2022-05-20 00:59:48 +00:00