pytorch/docs/source/conf.py
Edward Z. Yang 5c6f5439b7 Implement SymBool (#92149)
We have known for a while that we should in principle support SymBool as a separate concept from SymInt and SymFloat (in particular, every distinct numeric type should get its own API). However, recent work with unbacked SymInts in, e.g., https://github.com/pytorch/pytorch/pull/90985 has made this a priority to implement. The essential problem is that our logic for computing the contiguity of tensors branches on the passed-in input sizes, and this causes us to require guards when constructing tensors from unbacked SymInts. Morally, this should not be a big deal because we only really care about the regular (non-channels-last) contiguity of the tensor, which should be guaranteed since most people aren't calling `empty_strided` on the tensor. However, because we store a bool (not a SymBool, which does not exist prior to this PR) on TensorImpl, we are forced to *immediately* compute these values, even if the value ends up not being used at all. In particular, even when a user allocates a contiguous tensor, we still must compute channels-last contiguity (as some contiguous tensors are also channels-last contiguous, but others are not).

This PR implements SymBool, and makes TensorImpl use SymBool to store the contiguity information in ExtraMeta. There are a number of knock-on effects, which I discuss below.

* I introduce a new C++ type SymBool, analogous to SymInt and SymFloat. This type supports logical and, logical or, and logical negation. I support the bitwise operations on this class (but not the conventional logic operators) to make it clear that logical operations on SymBool are NOT short-circuiting. I also, for now, do NOT support implicit conversion of SymBool to bool (which would create a guard in this case). This does not matter too much in practice, as in this PR I did not modify the equality operations (e.g., `==` on SymInt) to return SymBool, so all preexisting implicit guards did not need to be changed. I also introduced symbolic comparison functions `sym_eq`, etc. on SymInt to make it possible to create SymBool. The current implementation of the comparison functions makes it unfortunately easy to accidentally introduce guards when you do not mean to (as both `s0 == s1` and `s0.sym_eq(s1)` are valid spellings of the equality operation); in the short term, I intend to prevent excess guarding in this situation by unit testing; in the long term, making the equality operators return SymBool is probably the correct fix.
* ~~I modify TensorImpl to store SymBool for the `is_contiguous` fields and friends on `ExtraMeta`. In practice, this essentially meant reverting most of the changes from https://github.com/pytorch/pytorch/pull/85936 . In particular, the fields on ExtraMeta are no longer strongly typed; at the time I was particularly concerned about the giant lambda I was using as the setter getting a desynchronized argument order, but now that I have individual setters for each field the only "big list" of boolean arguments is in the constructor of ExtraMeta, which seems like an acceptable risk. The semantics of TensorImpl are now that we guard only when you actually attempt to access the contiguity of the tensor via, e.g., `is_contiguous`. By and large, the contiguity calculation in the implementations now needs to be duplicated (as the boolean version can short-circuit, but the SymBool version cannot); you should carefully review the duplicated new implementations. I typically use the `identity` template to disambiguate which version of the function I need, and rely on overloading to allow for implementation sharing. The changes to the `compute_` functions are particularly interesting; for most of the functions, I preserved their original non-symbolic implementation, and then introduced a new symbolic implementation that is branch-free (making use of our new SymBool operations). However, `compute_non_overlapping_and_dense` is special; see the next bullet.~~ This appears to cause performance problems, so I am leaving this to a follow-up PR.
* (Update: the Python-side pieces for this are still in this PR, but they are not wired up until later PRs.) While the contiguity calculations are relatively easy to write in a branch-free way, `compute_non_overlapping_and_dense` is not: it involves a sort on the strides. While in principle we could still make it go through by using a data-oblivious sorting network, this seems like too much complication for a field that is likely never used (because typically it will be obvious that a tensor is non-overlapping and dense, because the tensor is contiguous). So we take a different approach: instead of trying to trace through the logic that computes non-overlapping and dense, we introduce a new opaque operator IsNonOverlappingAndDenseIndicator which represents all of the compute that would have been done here. This function returns the integer 0 if `is_non_overlapping_and_dense` would have returned `False`, and the integer 1 otherwise; it returns an integer rather than a boolean for technical reasons (Sympy does not easily allow defining custom functions that return booleans). The function only knows how to evaluate itself if all of its arguments are integers; otherwise it is left unevaluated (see the first sketch after this list). This means we can always guard on it (as `size_hint` will always be able to evaluate through it), but otherwise its insides are left a black box. We typically do NOT expect this custom function to show up in actual boolean expressions, because we will typically short-circuit it due to the tensor being contiguous. It's possible we should apply this treatment to all of the other `compute_` operations; more investigation is necessary. As a technical note, because this operator takes a pair of lists of SymInts, we need to support converting `ArrayRef<SymNode>` to Python, and I also unpack the pair of lists into a single list because I don't know if Sympy operations can validly take lists of Sympy expressions as inputs; see for example `_make_node_sizes_strides`.
* On the Python side, we also introduce a SymBool class, and update SymNode to track bool as a valid pytype. There is some subtlety here: bool is a subclass of int, so one has to be careful about `isinstance` checks (in fact, in most cases I replaced `isinstance(x, int)` with `type(x) is int` for expressly this reason). Additionally, unlike C++, I do NOT define bitwise inverse on SymBool, because it does not do the correct thing when run on booleans, e.g., `~True` is `-2`. (For that matter, it doesn't do the right thing in C++ either, but at least in principle the compiler can warn you about it with `-Wbool-operation`, and so the rule in C++ is simple: only use the logical operations if the types are statically known to be SymBool.) Alas, logical negation is not overridable, so we have to introduce `sym_not`, which must be used in place of `not` whenever a SymBool can turn up. To avoid confusion with `__not__`, which might imply that `operator.__not__` is acceptable to use (it isn't), our magic method is called `__sym_not__`. The other bitwise operators, `&` and `|`, do the right thing with booleans and are acceptable to use; see the second sketch after this list.
* There is some annoyance in working with booleans in Sympy. Unlike int and float, booleans live in their own algebra and support fewer operations than regular numbers. In particular, `sympy.expand` does not work on them. To get around this, I introduce `safe_expand`, which only calls expand on operations that are known to be expandable.
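
To make the opaque-operator idea concrete, here is a minimal sketch, not the actual PyTorch implementation: the `dense_check` helper below is a simplified stand-in that ignores the size-0/1 corner cases the real code handles. It shows a Sympy function that stays unevaluated on symbolic arguments and folds to 0/1 once every argument is a concrete integer:

```python
import sympy

def dense_check(sizes, strides):
    # Simplified density check (illustrative only): the strides must be
    # the contiguous strides for some permutation of the sizes.
    dims = sorted(range(len(sizes)), key=lambda i: strides[i])
    expected = 1
    for i in dims:
        if strides[i] != expected:
            return 0
        expected *= sizes[i]
    return 1

class IsNonOverlappingAndDenseIndicator(sympy.Function):
    is_integer = True

    @classmethod
    def eval(cls, *args):
        # Evaluate only when every argument is a concrete integer;
        # returning None keeps the application symbolic (a black box).
        if all(isinstance(a, sympy.Integer) for a in args):
            dim = len(args) // 2
            sizes = [int(a) for a in args[:dim]]
            strides = [int(a) for a in args[dim:]]
            return sympy.Integer(dense_check(sizes, strides))

u0, u1 = sympy.symbols("u0 u1", integer=True, positive=True)
expr = IsNonOverlappingAndDenseIndicator(u0, u1, u1, 1)  # sizes (u0, u1), strides (u1, 1)
print(expr)                       # stays unevaluated: arguments are symbolic
print(expr.subs({u0: 2, u1: 3}))  # 1: substitution re-triggers eval on concrete ints
```

A size hint that substitutes concrete values for the unbacked symbols can therefore always evaluate through the indicator, which is what makes it guardable even though its insides are opaque.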
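And a small sketch of the Python-side pitfalls that drive the design above; everything here is plain Python semantics, except that the `sym_not` shown is a simplified stand-in illustrating the dispatch, not the real helper:

```python
# bool is a subclass of int, so isinstance() cannot tell them apart;
# this is why `isinstance(x, int)` was replaced with `type(x) is int`.
assert isinstance(True, int)
assert type(True) is not int

# Bitwise inverse is NOT logical negation on Python bools (int semantics):
assert ~True == -2
assert ~False == -1

# `not` cannot be overloaded (`__bool__` must return a real bool), hence a
# sym_not() helper that dispatches to `__sym_not__` when it is available:
def sym_not(x):
    if hasattr(x, "__sym_not__"):
        return x.__sym_not__()
    return not x

# `&` and `|`, by contrast, do the right thing on plain bools:
assert (True & False) is False
assert (True | False) is True
```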

TODO: this PR appears to greatly regress performance of symbolic reasoning. In particular, `python test/functorch/test_aotdispatch.py -k max_pool2d` performs really poorly with these changes. Need to investigate.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92149
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-01-21 02:21:56 +00:00

# -*- coding: utf-8 -*-
#
# PyTorch documentation build configuration file, created by
# sphinx-quickstart on Fri Dec 23 13:31:47 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
from os import path
import re
# import sys
import pkgutil
# source code directory, relative to this file, for sphinx-autobuild
# sys.path.insert(0, os.path.abspath('../..'))
import torch
try:
    import torchvision  # noqa: F401
except ImportError:
    import warnings
    warnings.warn('unable to load "torchvision" package')
RELEASE = os.environ.get('RELEASE', False)
import pytorch_sphinx_theme
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
needs_sphinx = '3.1.2'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.autosummary',
    'sphinx.ext.doctest',
    'sphinx.ext.intersphinx',
    'sphinx.ext.todo',
    'sphinx.ext.coverage',
    'sphinx.ext.napoleon',
    'sphinx.ext.viewcode',
    'sphinxcontrib.katex',
    'sphinx.ext.autosectionlabel',
    'sphinx_copybutton',
    'sphinx_panels',
    'myst_parser',
]
# build the templated autosummary files
autosummary_generate = True
numpydoc_show_class_members = False
# Theme has bootstrap already
panels_add_bootstrap_css = False
# autosectionlabel throws warnings if section names are duplicated.
# The following tells autosectionlabel to not throw a warning for
# duplicated section names that are in different documents.
autosectionlabel_prefix_document = True
# katex options
#
#
katex_prerender = True
napoleon_use_ivar = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# TODO: document these and remove them from here.
coverage_ignore_functions = [
    # torch
    "typename",
    # torch.autograd
    "register_py_tensor_class_for_device",
    "variable",
    # torch.cuda
    "check_error",
    "cudart",
    "is_bf16_supported",
    # torch.cuda._sanitizer
    "format_log_message",
    "zip_arguments",
    "zip_by_key",
    # torch.distributed.autograd
    "is_available",
    # torch.distributed.elastic.events
    "construct_and_record_rdzv_event",
    "record_rdzv_event",
    # torch.distributed.elastic.metrics
    "initialize_metrics",
    # torch.distributed.elastic.rendezvous.registry
    "get_rendezvous_handler",
    # torch.distributed.launch
    "launch",
    "main",
    "parse_args",
    # torch.distributed.rpc
    "is_available",
    # torch.distributed.run
    "config_from_args",
    "determine_local_world_size",
    "get_args_parser",
    "get_rdzv_endpoint",
    "get_use_env",
    "main",
    "parse_args",
    "parse_min_max_nnodes",
    "run",
    "run_script_path",
    # torch.distributions.constraints
    "is_dependent",
    # torch.hub
    "import_module",
    # torch.jit
    "export_opnames",
    # torch.jit.unsupported_tensor_ops
    "execWrapper",
    # torch.onnx
    "unregister_custom_op_symbolic",
    # torch.ao.quantization
    "default_eval_fn",
    # torch.backends
    "disable_global_flags",
    "flags_frozen",
    # torch.distributed.algorithms.ddp_comm_hooks
    "register_ddp_comm_hook",
    # torch.nn
    "factory_kwargs",
    # torch.nn.parallel
    "DistributedDataParallelCPU",
    # torch.utils
    "set_module",
    # torch.utils.model_dump
    "burn_in_info",
    "get_info_and_burn_skeleton",
    "get_inline_skeleton",
    "get_model_info",
    "get_storage_info",
    "hierarchical_pickle",
]
coverage_ignore_classes = [
    # torch
    "FatalError",
    "QUInt2x4Storage",
    "Size",
    "Storage",
    "Stream",
    "Tensor",
    "finfo",
    "iinfo",
    "qscheme",
    "AggregationType",
    "AliasDb",
    "AnyType",
    "Argument",
    "ArgumentSpec",
    "BenchmarkConfig",
    "BenchmarkExecutionStats",
    "Block",
    "BoolType",
    "BufferDict",
    "CallStack",
    "Capsule",
    "ClassType",
    "Code",
    "CompleteArgumentSpec",
    "ComplexType",
    "ConcreteModuleType",
    "ConcreteModuleTypeBuilder",
    "DeepCopyMemoTable",
    "DeserializationStorageContext",
    "DeviceObjType",
    "DictType",
    "DispatchKey",
    "DispatchKeySet",
    "EnumType",
    "ExcludeDispatchKeyGuard",
    "ExecutionPlan",
    "FileCheck",
    "FloatType",
    "FunctionSchema",
    "Gradient",
    "Graph",
    "GraphExecutorState",
    "IODescriptor",
    "InferredType",
    "IntType",
    "InterfaceType",
    "ListType",
    "LockingLogger",
    "MobileOptimizerType",
    "ModuleDict",
    "Node",
    "NoneType",
    "NoopLogger",
    "NumberType",
    "OperatorInfo",
    "OptionalType",
    "ParameterDict",
    "PyObjectType",
    "PyTorchFileReader",
    "PyTorchFileWriter",
    "RRefType",
    "ScriptClass",
    "ScriptClassFunction",
    "ScriptDict",
    "ScriptDictIterator",
    "ScriptDictKeyIterator",
    "ScriptList",
    "ScriptListIterator",
    "ScriptMethod",
    "ScriptModule",
    "ScriptModuleSerializer",
    "ScriptObject",
    "ScriptObjectProperty",
    "SerializationStorageContext",
    "StaticModule",
    "StringType",
    "SymIntType",
    "ThroughputBenchmark",
    "TracingState",
    "TupleType",
    "Type",
    "UnionType",
    "Use",
    "Value",
    # torch.cuda
    "BFloat16Storage",
    "BFloat16Tensor",
    "BoolStorage",
    "BoolTensor",
    "ByteStorage",
    "ByteTensor",
    "CharStorage",
    "CharTensor",
    "ComplexDoubleStorage",
    "ComplexFloatStorage",
    "CudaError",
    "DeferredCudaCallError",
    "DoubleStorage",
    "DoubleTensor",
    "FloatStorage",
    "FloatTensor",
    "HalfStorage",
    "HalfTensor",
    "IntStorage",
    "IntTensor",
    "LongStorage",
    "LongTensor",
    "ShortStorage",
    "ShortTensor",
    "cudaStatus",
    # torch.cuda._sanitizer
    "Access",
    "AccessType",
    "CUDASanitizer",
    "CUDASanitizerDispatchMode",
    "CUDASanitizerErrors",
    "EventHandler",
    "SynchronizationError",
    "UnsynchronizedAccessError",
    # torch.distributed.elastic.multiprocessing.errors
    "ChildFailedError",
    "ProcessFailure",
    # torch.distributions.constraints
    "cat",
    "greater_than",
    "greater_than_eq",
    "half_open_interval",
    "independent",
    "integer_interval",
    "interval",
    "less_than",
    "multinomial",
    "stack",
    # torch.distributions.transforms
    "AffineTransform",
    "CatTransform",
    "ComposeTransform",
    "CorrCholeskyTransform",
    "CumulativeDistributionTransform",
    "ExpTransform",
    "IndependentTransform",
    "PowerTransform",
    "ReshapeTransform",
    "SigmoidTransform",
    "SoftmaxTransform",
    "SoftplusTransform",
    "StackTransform",
    "StickBreakingTransform",
    "TanhTransform",
    "Transform",
    # torch.jit
    "CompilationUnit",
    "Error",
    "Future",
    "ScriptFunction",
    # torch.onnx
    "CheckerError",
    "ExportTypes",
    # torch.backends
    "ContextProp",
    "PropModule",
    # torch.backends.cuda
    "cuBLASModule",
    "cuFFTPlanCache",
    "cuFFTPlanCacheAttrContextProp",
    "cuFFTPlanCacheManager",
    # torch.distributed.algorithms.ddp_comm_hooks
    "DDPCommHookType",
    # torch.jit.mobile
    "LiteScriptModule",
    # torch.ao.nn.quantized.modules
    "DeQuantize",
    "Quantize",
    # torch.utils.backcompat
    "Warning",
]
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'PyTorch'
copyright = '2022, PyTorch Contributors'
author = 'PyTorch Contributors'
torch_version = str(torch.__version__)
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# TODO: change to [:2] at v1.0
version = 'master (' + torch_version + ' )'
# The full version, including alpha/beta/rc tags.
# TODO: verify this works as expected
release = 'master'
# Customized html_title here.
# Default is " ".join(project, release, "documentation") if not set
if RELEASE:
    # Turn 1.11.0aHASH into 1.11
    # Note: the release candidates should no longer have the aHASH suffix, but in any
    # case we wish to leave only major.minor, even for rc builds.
    version = '.'.join(torch_version.split('.')[:2])
    html_title = " ".join((project, version, "documentation"))
    release = version
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# Disable docstring inheritance
autodoc_inherit_docstrings = False
# Show type hints in the description
autodoc_typehints = 'description'
# Add parameter types if the parameter is documented in the docstring
autodoc_typehints_description_target = 'documented_params'
# Type aliases for common types
# Sphinx type aliases only work with Postponed Evaluation of Annotations
# (PEP 563) enabled (via `from __future__ import annotations`), which keeps the
# type annotations in string form instead of resolving them to actual types.
# However, PEP 563 does not work well with JIT, which uses the type information
# to generate the code. Therefore, the following dict does not have any effect
# until PEP 563 is supported by JIT and enabled in files.
autodoc_type_aliases = {
    "_size_1_t": "int or tuple[int]",
    "_size_2_t": "int or tuple[int, int]",
    "_size_3_t": "int or tuple[int, int, int]",
    "_size_4_t": "int or tuple[int, int, int, int]",
    "_size_5_t": "int or tuple[int, int, int, int, int]",
    "_size_6_t": "int or tuple[int, int, int, int, int, int]",
    "_size_any_opt_t": "int or None or tuple",
    "_size_2_opt_t": "int or None or 2-tuple",
    "_size_3_opt_t": "int or None or 3-tuple",
    "_ratio_2_t": "float or tuple[float, float]",
    "_ratio_3_t": "float or tuple[float, float, float]",
    "_ratio_any_t": "float or tuple",
    "_tensor_list_t": "Tensor or tuple[Tensor]",
}
# Enable overriding of function signatures in the first line of the docstring.
autodoc_docstring_signature = True
# -- katex javascript in header
#
# def setup(app):
# app.add_javascript("https://cdn.jsdelivr.net/npm/katex@0.10.0-beta/dist/katex.min.js")
# -- Options for HTML output ----------------------------------------------
#
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
#
#
html_theme = 'pytorch_sphinx_theme'
html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
    'pytorch_project': 'docs',
    'canonical_url': 'https://pytorch.org/docs/stable/',
    'collapse_navigation': False,
    'display_version': True,
    'logo_only': True,
    'analytics_id': 'UA-117752657-2',
}
html_logo = '_static/img/pytorch-logo-dark-unstable.png'
if RELEASE:
    html_logo = '_static/img/pytorch-logo-dark.svg'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_css_files = [
    'css/jit.css',
]
from sphinx.ext.coverage import CoverageBuilder

def coverage_post_process(app, exception):
    if exception is not None:
        return

    # Only run this test for the coverage build
    if not isinstance(app.builder, CoverageBuilder):
        return

    if not torch.distributed.is_available():
        raise RuntimeError("The coverage tool cannot run with a version "
                           "of PyTorch that was built with USE_DISTRIBUTED=0 "
                           "as this module's API changes.")

    # These are all the modules that have "automodule" in an rst file
    # These modules are the ones for which coverage is checked
    # Here, we make sure that no module is missing from that list
    modules = app.env.domaindata['py']['modules']

    # We go through all the torch submodules and make sure they are
    # properly tested
    missing = set()

    def is_not_internal(modname):
        split_name = modname.split(".")
        for name in split_name:
            if name[0] == "_":
                return False
        return True

    # The walk function does not return the top module
    if "torch" not in modules:
        missing.add("torch")

    for _, modname, ispkg in pkgutil.walk_packages(path=torch.__path__,
                                                   prefix=torch.__name__ + '.'):
        if ispkg and is_not_internal(modname):
            if modname not in modules:
                missing.add(modname)

    output = []

    if missing:
        mods = ", ".join(missing)
        output.append(f"\nYou added the following module(s) to the PyTorch namespace '{mods}' "
                      "but they have no corresponding entry in a doc .rst file. You should "
                      "either make sure that the .rst file that contains the module's documentation "
                      "properly contains either '.. automodule:: mod_name' (if you do not want "
                      "the paragraph added by the automodule, you can simply use '.. py:module:: mod_name') "
                      " or make the module private (by appending an '_' at the beginning of its name).")

    # The output file is hard-coded by the coverage tool
    # Our CI is set up to fail if any line is added to this file
    output_file = path.join(app.outdir, 'python.txt')

    if output:
        with open(output_file, "a") as f:
            for o in output:
                f.write(o)
def process_docstring(app, what_, name, obj, options, lines):
    """
    Custom process to transform docstring lines.
    Remove "Ignore" blocks.

    Args:
        app (sphinx.application.Sphinx): the Sphinx application object

        what (str):
            the type of the object which the docstring belongs to (one of
            "module", "class", "exception", "function", "method", "attribute")

        name (str): the fully qualified name of the object

        obj: the object itself

        options: the options given to the directive: an object with
            attributes inherited_members, undoc_members, show_inheritance
            and noindex that are true if the flag option of same name was
            given to the auto directive

        lines (List[str]): the lines of the docstring, see above

    References:
        https://www.sphinx-doc.org/en/1.5.1/_modules/sphinx/ext/autodoc.html
        https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html
    """
    import re
    remove_directives = [
        # Remove all xdoctest directives
        re.compile(r'\s*>>>\s*#\s*x?doctest:\s*.*'),
        re.compile(r'\s*>>>\s*#\s*x?doc:\s*.*'),
    ]
    filtered_lines = [
        line for line in lines
        if not any(pat.match(line) for pat in remove_directives)
    ]
    # Modify the lines in place
    lines[:] = filtered_lines

    # make sure there is a blank line at the end
    if lines and lines[-1].strip():
        lines.append('')
# Called automatically by Sphinx, making this `conf.py` an "extension".
def setup(app):
    # NOTE: in Sphinx 1.8+ `html_css_files` is an official configuration value
    # and can be moved outside of this function (and the setup(app) function
    # can be deleted).
    html_css_files = [
        'https://cdn.jsdelivr.net/npm/katex@0.10.0-beta/dist/katex.min.css'
    ]

    # In Sphinx 1.8 it was renamed to `add_css_file`; in 1.7 and prior it is
    # `add_stylesheet` (deprecated in 1.8).
    add_css = getattr(app, 'add_css_file', app.add_stylesheet)
    for css_file in html_css_files:
        add_css(css_file)

    app.connect("build-finished", coverage_post_process)
    app.connect('autodoc-process-docstring', process_docstring)
# From PyTorch 1.5, we now use autogenerated files to document classes and
# functions. This breaks older references since
# https://pytorch.org/docs/stable/torch.html#torch.flip
# moved to
# https://pytorch.org/docs/stable/generated/torch.flip.html
# which breaks older links from blog posts, stack overflow answers and more.
# To mitigate that, we add an id="torch.flip" in an appropriate place
# in torch.html by overriding the visit_reference method of html writers.
# Someday this can be removed, once the old links fade away
from sphinx.writers import html, html5

def replace(Klass):
    old_call = Klass.visit_reference

    def visit_reference(self, node):
        if 'refuri' in node and 'generated' in node.get('refuri'):
            ref = node.get('refuri')
            ref_anchor = ref.split('#')
            if len(ref_anchor) > 1:
                # Only add the id if the node href and the text match,
                # i.e. the href is "torch.flip#torch.flip" and the content is
                # "torch.flip" or "flip" since that is a signal the node refers
                # to autogenerated content
                anchor = ref_anchor[1]
                txt = node.parent.astext()
                if txt == anchor or txt == anchor.split('.')[-1]:
                    self.body.append('<p id="{}"/>'.format(ref_anchor[1]))
        return old_call(self, node)

    Klass.visit_reference = visit_reference

replace(html.HTMLTranslator)
replace(html5.HTML5Translator)
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'PyTorchdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'pytorch.tex', 'PyTorch Documentation',
     'Torch Contributors', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'PyTorch', 'PyTorch Documentation',
     [author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'PyTorch', 'PyTorch Documentation',
     author, 'PyTorch', 'One line description of project.',
     'Miscellaneous'),
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
    'python': ('https://docs.python.org/3', None),
    'numpy': ('https://numpy.org/doc/stable', None),
}
# -- A patch that prevents Sphinx from cross-referencing ivar tags -------
# See http://stackoverflow.com/a/41184353/3343043
from docutils import nodes
from sphinx.util.docfields import TypedField
from sphinx import addnodes
import sphinx.ext.doctest
# Without this, doctest adds any example with a `>>>` as a test
doctest_test_doctest_blocks = ''
doctest_default_flags = sphinx.ext.doctest.doctest.ELLIPSIS
doctest_global_setup = '''
import torch
try:
    import torchvision
except ImportError:
    torchvision = None
'''
def patched_make_field(self, types, domain, items, **kw):
    # `kw` catches `env=None` needed for newer sphinx while maintaining
    # backwards compatibility when passed along further down!

    # type: (List, unicode, Tuple) -> nodes.field
    def handle_item(fieldarg, content):
        par = nodes.paragraph()
        par += addnodes.literal_strong('', fieldarg)  # Patch: this line added
        # par.extend(self.make_xrefs(self.rolename, domain, fieldarg,
        #                            addnodes.literal_strong))
        if fieldarg in types:
            par += nodes.Text(' (')
            # NOTE: using .pop() here to prevent a single type node to be
            # inserted twice into the doctree, which leads to
            # inconsistencies later when references are resolved
            fieldtype = types.pop(fieldarg)
            if len(fieldtype) == 1 and isinstance(fieldtype[0], nodes.Text):
                typename = fieldtype[0].astext()
                builtin_types = ['int', 'long', 'float', 'bool', 'type']
                for builtin_type in builtin_types:
                    pattern = fr'(?<![\w.]){builtin_type}(?![\w.])'
                    repl = f'python:{builtin_type}'
                    typename = re.sub(pattern, repl, typename)
                par.extend(self.make_xrefs(self.typerolename, domain, typename,
                                           addnodes.literal_emphasis, **kw))
            else:
                par += fieldtype
            par += nodes.Text(')')
        par += nodes.Text(' -- ')
        par += content
        return par

    fieldname = nodes.field_name('', self.label)
    if len(items) == 1 and self.can_collapse:
        fieldarg, content = items[0]
        bodynode = handle_item(fieldarg, content)
    else:
        bodynode = self.list_type()
        for fieldarg, content in items:
            bodynode += nodes.list_item('', handle_item(fieldarg, content))
    fieldbody = nodes.field_body('', bodynode)
    return nodes.field('', fieldname, fieldbody)

TypedField.make_field = patched_make_field
copybutton_prompt_text = r'>>> |\.\.\. '
copybutton_prompt_is_regexp = True