The BaseDataScheduler is the abstract scheduler class specifically for the
BaseDataSparsifier class. This class controls a specific hyperparameter of
the sparsifier class and varies it across the training process (or across time).
Args:
data_sparsifier (instance of BaseDataSparsifier)
An instance of a concrete data sparsifier class in which update_mask is implemented
schedule_param (str)
A specific hyperparameter of the passed sparsifier that needs to be scheduled/varied
last_epoch (int, default=-1)
This is specifically passed when training needs to be resumed from a particular
point.
verbose (bool, default=False)
Verbosity of the BaseDataScheduler
The *get_schedule_param()* function needs to be implemented by the user.
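For illustration, a minimal sketch of a subclass is below; the import path and the data_groups layout are assumptions (these experimental modules have moved between releases), and the linear ramp is just an example schedule.
```
# Hedged sketch: the import path and the data_groups layout are assumptions.
from torch.ao.pruning._experimental.data_scheduler import BaseDataScheduler

class LinearSparsityScheduler(BaseDataScheduler):
    """Ramps the scheduled hyperparameter (e.g. sparsity_level) per epoch."""

    def __init__(self, data_sparsifier, schedule_param="sparsity_level",
                 step_size=0.05, last_epoch=-1, verbose=False):
        self.step_size = step_size
        super().__init__(data_sparsifier, schedule_param, last_epoch, verbose)

    def get_schedule_param(self):
        # Return the new value of schedule_param for every registered data name.
        return {
            name: min(0.9, config[self.schedule_param] + self.step_size)
            for name, config in self.data_sparsifier.data_groups.items()
        }
```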
Test Plan:
```python test/test_ao_sparsity.py TestBaseDataScheduler```
Differential Revision: [D37358608](https://our.internmc.facebook.com/intern/diff/D37358608)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79817
Approved by: https://github.com/jerryzh168, https://github.com/z-a-f
Summary: per https://github.com/pytorch/pytorch/issues/79135 the code
snippets in the docs don't run. This is a recurring problem since
previously there was no unit test to check that these code snippets
actually ran. This PR adds support for such a test, importing the
snippet as a string and evaluating it to make sure that it actually runs.
If the code snippet has user-defined code, you can pass in dummy
versions using global_inputs. Sometimes the imports of the code snippets
behave oddly, but you can pass them in as in test_quantization_doc_custom,
where nnq is passed in.
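As a rough, hedged sketch of the idea (not the actual test helper; run_doc_snippet is an illustrative name):
```
import torch

def run_doc_snippet(snippet: str, global_inputs=None):
    # Execute a docs code snippet in an isolated namespace, injecting dummy
    # stand-ins for any user-defined names via global_inputs.
    scope = {"torch": torch}
    scope.update(global_inputs or {})
    exec(compile(snippet, "<doc-snippet>", "exec"), scope)
    return scope

snippet = "model = torch.nn.Linear(4, 4)\nout = model(example_input)\n"
run_doc_snippet(snippet, global_inputs={"example_input": torch.randn(2, 4)})
```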
Test Plan: python test/test_quantization.py TestQuantizationDocs
Also see https://github.com/pytorch/pytorch/pull/79994 for what shows up in CI when the docs get broken.
Reviewers:
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79923
Approved by: https://github.com/z-a-f, https://github.com/vspenubarthi
Base Data Sparsifier class for all Data sparsifiers.
The abstract class accepts raw torch tensors / embedding / embedding bags (refer to SUPPORTED_TYPES above)
to prepare for sparsification.
In this case, the mask (and parametrizations) are owned by the class and not by the user.
Specifically, the container object inside the class maintains the mask and parametrizations of the input data.
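For illustration only (this is not the BaseDataSparsifier API itself), a hedged sketch of the mask-as-parametrization idea using torch.nn.utils.parametrize:
```
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class MaskParametrization(nn.Module):
    """Applies a class-owned boolean mask to the parametrized tensor."""
    def __init__(self, shape):
        super().__init__()
        self.register_buffer("mask", torch.ones(shape, dtype=torch.bool))

    def forward(self, x):
        return x * self.mask

emb = nn.Embedding(10, 4)
parametrize.register_parametrization(emb, "weight", MaskParametrization(emb.weight.shape))
# An update_mask() implementation would edit the owned mask, e.g. zero out row 0:
emb.parametrizations.weight[0].mask[0] = False
print(emb.weight[0])  # reads back as zeros through the parametrization
```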
Test Plan:
```python test/test_ao_sparsity.py TestBaseDataSparsifier```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79251
Approved by: https://github.com/z-a-f, https://github.com/HDCharles
Summary: https://github.com/pytorch/pytorch/pull/78452 replaced
qconfig_dict with QConfigMapping as the default API for prepare_fx,
prepare_qat_fx, and convert_fx. We should update the docs to reflect
this change as well.
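For reference, a hedged sketch of the QConfigMapping-based flow (exact entry points may differ by release):
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 4),)

# QConfigMapping replaces the old qconfig_dict argument.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
prepared(*example_inputs)        # calibration
quantized = convert_fx(prepared)
```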
Test Plan:
```
cd docs
make html
cd build/html
python -m http.server
```
Reviewers: jerryzh168, vkuzo
Subscribers: jerryzh168, vkuzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78533
Approved by: https://github.com/vkuzo
Summary:
This PR creates a best practices guideline for debugging quantization
accuracy. The content here comes from https://fburl.com/gdoc/nzlzxeaf,
with experimental and Meta-only parts left out.
For now, a lot of the debugging is manual, with the Numeric Suite being the
only tool we have to help the user find root causes of quantization
inaccuracies. As we build additional tools for equalization detection,
outlier detection, etc., we will add them to this page.
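As a hedged sketch of the kind of manual comparison meant here (the Numeric Suite automates a workflow along these lines; the helper below is illustrative):
```
import torch

def capture_outputs(model, example_input):
    """Record each leaf module's output via forward hooks."""
    outputs, handles = {}, []
    for name, module in model.named_modules():
        if name and len(list(module.children())) == 0:
            def hook(mod, inp, out, name=name):
                outputs[name] = out
            handles.append(module.register_forward_hook(hook))
    model(example_input)
    for h in handles:
        h.remove()
    return outputs

# float_outs = capture_outputs(float_model, x)
# quant_outs = capture_outputs(quantized_model, x)  # dequantize() before comparing
# then compare per-layer, e.g. with SQNR or cosine similarity
```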
Test plan:
```
cd docs
make html
cd build/html
python -m http.server
// result renders well in browser
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77536
Approved by: https://github.com/hx89
There seems to be a typo in the main quantization docs.
In the table comparing "Eager Mode Quantization" against "FX Graph Mode Quantization", in the row named "Quantization Mode Support", both modes say they are "Quantiztion aware" instead of "Quantization aware".
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77300
Approved by: https://github.com/H-Huang
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76223
Small formatting fixes that were missed because I didn't check the generated doc last time
Test Plan:
visual inspection of the generated docs for this PR
Imported from OSS
Reviewed By: HDCharles
Differential Revision: D35853174
fbshipit-source-id: 4454a4bf5d0c998d866bbae1d6b5286827082033
(cherry picked from commit 125f60356ccc9cd6888c515889bd27ff9860ec74)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75998
Add more details to the user-facing docs in quantization.rst, which will be displayed on the official quantization doc page: https://pytorch.org/docs/stable/quantization.html
This includes:
* docs for the quantization stack (quantized tensor, quantized operators and modules, observer, fake_quantize, QConfig, quantization flow); a small quantized tensor example follows after this list
* added a support table for quantization mode, quantization flow mode, and backend (also moved around the operator support table)
* restructured the eager mode and FX mode docs as well
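For example, a small hedged illustration of one piece of the stack, the quantized tensor:
```
import torch

x = torch.randn(2, 3)
# A quantized tensor stores integer values plus quantization parameters.
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
print(xq.int_repr())    # underlying uint8 storage
print(xq.dequantize())  # approximate float values
```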
Test Plan:
inspect the doc that's built by github ci
Imported from OSS
Reviewed By: dzdang
Differential Revision: D35739111
fbshipit-source-id: 3762d387479bdd37472cb17d5c49da2f520effbb
(cherry picked from commit db5e6411c52c08dd9c45f841ab86713d36a75d51)
Summary:
Following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md, we implemented
the backend configuration for the fbgemm/qnnpack backends. It currently lives under the fx folder, but we'd like to use it for all the different
workflows, including eager, FX graph, and define-by-run quantization, so this PR moves it to the torch.ao.quantization namespace so that
it can be shared by the different workflows.
Also moves some fx-specific utility functions to fx/backend_config_utils.py; some files are kept in the fx folder (quantize_handler.py and fuse_handler.py).
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestQuantizeFxModels
python test/test_quantization.py TestAOMigrationQuantization
python test/test_quantization.py TestAOMigrationQuantizationFx
Reviewers:
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75823
Approved by: https://github.com/vkuzo
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75126
Quantization has a high volume of configurations for how to quantize an
op for a reference model representation, which is useful for a lowering
step for a backend. An example of this is
```
{'dtype_configs': [{'input_dtype': torch.quint8,
                    'output_dtype': torch.quint8}],
 'observation_type': <ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT: 0>,
 'pattern': <class 'torch.nn.modules.conv.ConvTranspose1d'>},
```
These configs are checked into master, and they are created with Python functions.
Therefore, there is no easy way for the user to see what the configs actually
are without running some Python code.
This PR is one approach to document these configs. Here is what this is doing:
1. During the documentation build, write a text file of the configs.
2. Render that text file on a quantization page, with some additional context.
In the future, this could be extended to autogenerate better looking tables
such as: op support per backend and dtype, op support per valid quantization settings per backend,
etc.
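A hypothetical sketch of step 1 (the real script lives in the docs build; get_backend_configs() is a stand-in name):
```
import pprint

def write_backend_config_text(configs, path="quantization-backend-configuration.txt"):
    """Dump each backend config dict into a text file for the docs to include."""
    with open(path, "w") as f:
        for cfg in configs:
            f.write(pprint.pformat(cfg, width=88) + "\n\n")

# write_backend_config_text(get_backend_configs())  # hypothetical accessor
```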
Test Plan:
```
cd docs
make html
cd build/html
python -m http.server 8000
// render http://[::]:8000/quantization-backend-configuration.html
// it renders correctly
```
Reviewed By: ejguan
Differential Revision: D35365461
Pulled By: vkuzo
fbshipit-source-id: d60f776ccb57da9db3d09550e4b27bd5e725635a
(cherry picked from commit 14865c0e23bc080120342c8f9278f0fae8eb8fbd)
Summary:
Working towards https://docs.google.com/document/d/10yx2-4gs0gTMOimVS403MnoAWkqitS8TUHX73PN8EjE/edit?pli=1#
This PR:
- Ensure that all the submodules are listed in an rst file (this ensures they are considered by the coverage tool)
- Remove some long-deprecated code that just errors out on import
- Remove the allow list altogether to ensure nothing gets added back there
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73983
Reviewed By: anjali411
Differential Revision: D34787908
Pulled By: albanD
fbshipit-source-id: 163ce61e133b12b2f2e1cbe374f979e3d6858db7
(cherry picked from commit c9edfead7a01dc45bfc24eaf7220d2a84ab1f62e)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69789
Add details on how to save and load quantized models without hitting errors.
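A hedged sketch of one save/load path that avoids the usual pitfall of loading a quantized state_dict into a freshly constructed float model (this is not necessarily the exact recipe the new docs section uses):
```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4)).eval()
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Serialize the scripted quantized module rather than only its state_dict.
scripted = torch.jit.script(qmodel)
torch.jit.save(scripted, "quantized_model.pt")
loaded = torch.jit.load("quantized_model.pt")
```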
Test Plan:
CI autogenerated docs
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D33030991
fbshipit-source-id: 8ec4610ae6d5bcbdd3c5e3bb725f2b06af960d52
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67449
Adds a description of what the current custom module API does
and API examples for Eager mode and FX graph mode to the main
PyTorch quantization documentation page.
Test Plan:
```
cd docs
make html
python -m http.server
// check the docs page, it renders correctly
```
Reviewed By: jbschlosser
Differential Revision: D31994641
Pulled By: vkuzo
fbshipit-source-id: d35a62947dd06e71276eb6a0e37950d3cc5abfc1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66380
Description:
1. creates doc pages for Eager and FX numeric suites
2. adds a link from main quantization doc to (1)
3. formats docblocks in Eager NS to render well
4. adds example code and docblocks to FX numeric suite
Test Plan:
```
cd docs
make html
python -m http.server
// renders well
```
Reviewed By: jerryzh168
Differential Revision: D31543173
Pulled By: vkuzo
fbshipit-source-id: feb291bcbe92747495f45165f738631fa5cbffbd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66379
Description:
Creates a quantization API reference and fixes all the docblock errors.
This is #66122 to #66210 squashed together
Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, looks good
```
Reviewed By: ejguan
Differential Revision: D31543172
Pulled By: vkuzo
fbshipit-source-id: 9131363d6528337e9f100759654d3f34f02142a9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66222
Description:
1. creates doc pages for Eager and FX numeric suites
2. adds a link from main quantization doc to (1)
3. formats docblocks in Eager NS to render well
4. adds example code and docblocks to FX numeric suite
Test Plan:
```
cd docs
make html
python -m http.server
// renders well
```
Reviewed By: jerryzh168
Differential Revision: D31447610
Pulled By: vkuzo
fbshipit-source-id: 441170c4a6c3ddea1e7c7c5cc2f1e1cd5aa65f2f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66210
Description:
Moves the backend section of the quantization page further down,
to ensure that the API description and reference sections are closer
to the top.
Test Plan:
```
cd docs
make html
python -m http.server
// renders well
```
Reviewed By: jerryzh168
Differential Revision: D31447611
Pulled By: vkuzo
fbshipit-source-id: 537b146559bce484588b3c78e6b0cdb4c274e8dd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66198
Consolidates all API reference material for quantization on a single
page, to reduce duplication of information.
Future PRs will improve the API reference page itself.
Test Plan:
```
cd docs
make html
python -m http.server
// renders well
```
Reviewed By: jerryzh168
Differential Revision: D31447616
Pulled By: vkuzo
fbshipit-source-id: 2f9c4dac2b2fb377568332aef79531d1f784444a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66129
Adds a documentation page for `torch.ao.quantization.QConfig`. It is useful
for this to have a separate page since it is shared between Eager and FX graph
mode quantization.
Also, ensures that all important functions and module attributes in this
module have docstrings, so users can discover these without reading the
source code.
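For context, a small hedged example of constructing a QConfig from observer factories, which is shared verbatim between the Eager and FX flows:
```
import torch
from torch.ao.quantization import QConfig, MinMaxObserver, default_observer, default_weight_observer

# Either use the provided defaults...
qconfig = QConfig(activation=default_observer, weight=default_weight_observer)

# ...or build one from observer factories via with_args.
custom_qconfig = QConfig(
    activation=MinMaxObserver.with_args(dtype=torch.quint8),
    weight=MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric),
)
```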
Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, renders correctly
```
Reviewed By: jerryzh168
Differential Revision: D31447614
Pulled By: vkuzo
fbshipit-source-id: 5d9dd2a4e8647fa17b96cefbaae5299adede619c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66125
Before this PR, the documentation for observers and fake_quants was inlined in the
Eager mode quantization page. This was hard to discover, especially
since that page is really long, and we now have FX graph mode quantization reusing
all of this code.
This PR moves observers and fake_quants into their own documentation pages. It also
adds docstrings to all user facing module attributes such as the default observers
and fake_quants, so people can discover them from documentation without having
to inspect the source code.
For now, enables autoformatting (which means all public classes, functions, members
with docstrings will get docs). If we need to exclude something in these files from
docs in the future, we can go back to manual docs.
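As a reference point, a hedged example of using an observer directly; the default observers and fake_quants documented here are preconfigured variants of these classes:
```
import torch
from torch.ao.quantization.observer import MinMaxObserver

obs = MinMaxObserver(dtype=torch.quint8)
obs(torch.randn(4, 4))            # observers record statistics on forward
scale, zero_point = obs.calculate_qparams()
print(scale, zero_point)
```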
Test Plan:
```
cd docs
make html
python -m http.server
// inspect docs on localhost, renders correctly
```
Reviewed By: dagitses
Differential Revision: D31447613
Pulled By: vkuzo
fbshipit-source-id: 63b4cf518badfb29ede583a5c2ca823f572c8599
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66122
Description:
Adds a documentation page for FX graph mode quantization APIs which
reads from the docstrings in `quantize_fx`, and links it from the main
quantization documentation page.
Also, updates the docstrings in `quantize_fx` to render well with reStructuredText.
Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, looks good
```
Reviewed By: dagitses
Differential Revision: D31447612
Pulled By: vkuzo
fbshipit-source-id: 07d0a6137f1537af82dce0a729f9617efaa714a0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63582
Current quantization docs do not define qconfig and qengine. Added text to define these concepts before they are used.
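For concreteness, a hedged example of both concepts (a qconfig describes how to observe/quantize a layer; the qengine selects the kernel backend used at inference time; module paths have moved between releases):
```
import torch
from torch.ao.quantization import get_default_qconfig

torch.backends.quantized.engine = "fbgemm"   # qengine: which quantized kernels to run
qconfig = get_default_qconfig("fbgemm")      # qconfig: how to observe/quantize modules
print(qconfig)
```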
ghstack-source-id: 137051719
Test Plan: Imported from OSS
Reviewed By: HDCharles
Differential Revision: D30658656
fbshipit-source-id: a45a0fcdf685ca1c3f5c3506337246a430f8f506
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58925
Cleans up documentation on natively supported backends. In particular:
* adds a section title
* deduplicates information about fbgemm/qnnpack
* clarifies what `torch.backends.quantized.engine` does
* adds code samples with default settings for `fbgemm` and `qnnpack`
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D28681840
Pulled By: vkuzo
fbshipit-source-id: 51a6ab66934f657553351f6c84a638fd5f7b4e12
Summary:
No outstanding issue; can create one if needed.
Was looking for that resource and it was moved without fixing the documentation.
Cheers
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56776
Reviewed By: heitorschueroff
Differential Revision: D27967020
Pulled By: ezyang
fbshipit-source-id: a5cd7d554da43a9c9e44966ccd0b0ad9eef2948c
Summary:
The preamble here is misformatted, at the very least, and is hard to make sense of: https://pytorch.org/docs/master/quantization.html#prototype-fx-graph-mode-quantization
This PR is trying to make things easier to understand.
As I'm new to this, please verify that my modifications remain in line with what may have been meant originally.
Thanks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52192
Reviewed By: ailzhang
Differential Revision: D27941730
Pulled By: vkuzo
fbshipit-source-id: 6c4bbf7c87d8fb87ab5d588b690a72045752e47a
Summary:
There was a description error in quantization.rst; fixed it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50187
Reviewed By: mrshenli
Differential Revision: D25895294
Pulled By: soumith
fbshipit-source-id: c0b2e7ba3fadfc0977ab2d4d4e9ed4f93694cedd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49902
Adds a common errors section, and details the two errors
we see often on the discuss forums, with recommended solutions.
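One classic error in this area (a hedged sketch, not necessarily the exact wording of the new section) is feeding a quantized tensor to an op that only has a float kernel; bracketing the quantized region with QuantStub/DeQuantStub is the usual fix:
```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()      # float -> quantized
        self.conv = nn.Conv2d(1, 1, 1)
        self.dequant = torch.ao.quantization.DeQuantStub()  # quantized -> float

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        return self.dequant(x)  # dequantize before any float-only ops downstream
```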
Test Plan: build the docs on Mac OS, the new section renders correctly.
Reviewed By: supriyar
Differential Revision: D25718195
Pulled By: vkuzo
fbshipit-source-id: c5ef2b24831d18d57bbafdb82d26d8fbf3a90781
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45306
Adds details to the main quantization doc on how specifically
users can skip or customize quantization of layers.
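A hedged sketch of the Eager mode mechanism for this, per-submodule qconfig assignment (a full flow would also insert QuantStub/DeQuantStub around the quantized region):
```
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig, prepare, convert

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 4)).eval()
model.qconfig = get_default_qconfig("fbgemm")  # quantize everything by default...
model[2].qconfig = None                        # ...but skip this layer entirely

prepared = prepare(model)
prepared(torch.randn(1, 4))   # calibrate
quantized = convert(prepared)
```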
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D23917034
Pulled By: vkuzo
fbshipit-source-id: ccf71ce4300c1946b2ab63d1f35a07691fd7a2af
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45305
Adds an explanation for reduce_range to the main quantization
doc page.
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D23916669
Pulled By: vkuzo
fbshipit-source-id: ef93fb774cb15741cd92889f114f6ab76c39f051
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45135
The previous quantization summary had steps on what to do for
dynamic, static, and QAT quantization. This PR moves these steps to comments in the
example code, so it is clearer how to accomplish them.
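For example, the QAT variant of those steps might look like this (a hedged sketch; fusion and the real training loop are elided):
```
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qat_qconfig, prepare_qat, convert

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU()).train()
model.qconfig = get_default_qat_qconfig("fbgemm")  # step 1: attach a QAT qconfig

prepared = prepare_qat(model)                      # step 2: insert fake-quant modules
prepared(torch.randn(8, 4))                        # step 3: stand-in for the training loop

quantized = convert(prepared.eval())               # step 4: convert to a quantized model
```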
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D23842456
Pulled By: vkuzo
fbshipit-source-id: db2399e51e9ae33c8a1ac610e3d7dbdb648742b0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45093
This adds a tl;dr-style summary of the quantization API
to the documentation. Hopefully this will make this easier
for new folks to learn how to use quantization.
This is not meant to be all-encompassing. Future PRs
can improve the documentation further.
Test Plan:
1. build the doc as specified in https://github.com/pytorch/pytorch#building-the-documentation
2. inspect the quantization page in Chrome, format looks good
Reviewed By: jerryzh168
Differential Revision: D23828257
Pulled By: vkuzo
fbshipit-source-id: 9311ee3f394cd83af0aeafb6e2fcdc3e0321fa38
Summary:
xref gh-38010 and gh-38011.
After this PR, there should be only two warnings:
```
pytorch/docs/source/index.rst:65: WARNING: toctree contains reference to nonexisting \
document 'torchvision/index'
WARNING: autodoc: failed to import class 'tensorboard.writer.SummaryWriter' from module \
'torch.utils'; the following exception was raised:
No module named 'tensorboard'
```
If tensorboard and torchvision are prerequisites to building docs, they should be added to the `requirements.txt`.
As for breaking up quantization into smaller pieces: I split out the list of supported operations and the list of modules to separate documents. I think this makes the page flow better, makes it much "lighter" in terms of page cost, and also removes some warnings since the same class names appear in multiple sub-modules.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41321
Reviewed By: ngimel
Differential Revision: D22753099
Pulled By: mruberry
fbshipit-source-id: d504787fcf1104a0b6e3d1c12747ec53450841da
Summary:
solves most of gh-38011 in the framework of solving gh-32703.
These should only be formatting fixes; I did not try to fix grammar and syntax.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41068
Differential Revision: D22411919
Pulled By: zou3519
fbshipit-source-id: 25780316b6da2cfb4028ea8a6f649bb18b746440
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40377
Cleans up the docstring for quantized ELU and adds it to the quantization docs.
Test Plan: * build on Mac OS and inspect
Differential Revision: D22162834
Pulled By: vkuzo
fbshipit-source-id: e548fd4dc8d67db27ed19cac4dbdf2a942586759
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40346
Cleans up docstrings for quantized BatchNorm and adds to quantization docs
Test Plan: * build on Mac OS and inspect
Differential Revision: D22152633
Pulled By: vkuzo
fbshipit-source-id: e0bf02194158231e0205b5b2df7f6f1ffc3c4d65
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40345
Fixes docstrings and adds to quantization docs for quantized InstanceNorm.
Test Plan: * build on Mac OS and inspect
Differential Revision: D22152637
Pulled By: vkuzo
fbshipit-source-id: 7a485311ead20796b7a0944827d1d04e14ec8dcd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40343
Cleans up the quantized GroupNorm docstring and adds it to quantization docs.
Test Plan: * build on Mac OS and inspect
Differential Revision: D22152635
Pulled By: vkuzo
fbshipit-source-id: 5553b841c7a5d77f1467f0c40657db9e5d730a12
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40342
Cleans up the docstrings for quantized LayerNorm, and adds it to the docs.
Test Plan: * build on Mac OS and inspect
Differential Revision: D22152639
Pulled By: vkuzo
fbshipit-source-id: 38adf14b34675d1983ac4ed751938aa396e5400b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40341
Cleans up the hardtanh docstring and adds it to quantization docs.
Test Plan: * build and inspect on Mac OS
Differential Revision: D22152636
Pulled By: vkuzo
fbshipit-source-id: c98e635199c8be332aa6958664ff23faad834908
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40340
Adds and simplifies quantization docs for hardsigmoid
Test Plan:
* build docs on Mac OS
* inspect
Differential Revision: D22152634
Pulled By: vkuzo
fbshipit-source-id: 18da273023fb00e5f0bc1e881b00536492c606d3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40323
Cleans up the naming and the function param docs for quantized hardswish.
Remove redundant docstrings and link to floating point modules instead.
Test Plan:
* build the docs on Mac OS
* verify that every link works as expected
Differential Revision: D22152638
Pulled By: vkuzo
fbshipit-source-id: fef04874ae460b449c677424a6a1c6dd47054795
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39331
Fixes gh-37590
Adds an extra `make coverage` step to the documentation build, which uses the built-in facility in sphinx to check docstring coverage. Also fixes a failure to import `torch/jit/supported_ops.py`, which broke the [Torchscript Builtins](https://pytorch.org/docs/stable/jit_builtin_functions.html) page.
This also adds the required `SPHINXOPTS` to turn warnings into errors, but this is commented out. Note that since documentation of `torchvision` is merged in here, failures there would cause failures here if this is made active. Some thought might be needed about pinning the torchvision version merged into documentation.
The first commit should fail, since the "ScriptModule" class is commented out. I did that in order to check that a CI failure is properly reported.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38244
Differential Revision: D21640589
Pulled By: ezyang
fbshipit-source-id: 1e240d81669b5f21404d596de4a27d192dc9fd8a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38449
Also update docs to reflect conv1d op support
Test Plan:
python test/test_quantization.py TestQuantizedFunctional.test_conv1d_api
Imported from OSS
Differential Revision: D21575921
fbshipit-source-id: 21c9f6b49ad456cd9d93e97f17cf5b8d87f0da6b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38283
Adds support for the modules and tests
Test Plan:
python test/test_quantization.py TestStaticQuantizedModule.test_conv1d_api
Imported from OSS
Differential Revision: D21553665
fbshipit-source-id: 7ea28da024bdf59f87f300d616c266f2b41f0bcd
Summary:
xref gh-32838, gh-34032
This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature, which will build out `autofunction` and `autoclass` stub files and link to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like tables of contents for the actual single-class or single-function documentation pages.
Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective is to add names to `__all__` when adding them to `globals()` in `torch.__init__.py`.
I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419
Differential Revision: D21337640
Pulled By: ezyang
fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27850
Many of these are real problems in the documentation (e.g., a link or
bullet point doesn't display correctly).
Test Plan: - built and viewed the documentation for each change locally.
Differential Revision: D17908123
Pulled By: zou3519
fbshipit-source-id: 65c92a352c89b90fb6b508c388b0874233a3817a
Summary:
People get confused with partial support otherwise: https://github.com/pytorch/pytorch/issues/27811#27729
Suggestions on where else to put warnings are welcome (probably in tutorials - cc SethHWeidman )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27829
Differential Revision: D17910931
Pulled By: dzhulgakov
fbshipit-source-id: 37a169a4bef01b94be59fe62a8f641c3ec5e9b7c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27782
Warnings show up when running `make html` to build documentation. All of
the warnings are very reasonable and point to bugs in our docs. This PR
attempts to fix most of those warnings.
In the future we will add something to the CI that asserts that there
are no warnings in our docs.
Test Plan: - build and view changes locally
Differential Revision: D17887067
Pulled By: zou3519
fbshipit-source-id: 6bf4d08764759133b20983d6cd7f5d27e5ee3166
Summary:
This was written by Raghu, Jessica, Dmytro and myself.
This PR will accumulate additional changes (there are a few more things we need to add to this actual rst file). I'll probably add the related image files to this PR as well.
I'm breaking draft PR https://github.com/pytorch/pytorch/pull/27553 into more easily digestible pieces.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27559
Differential Revision: D17843414
Pulled By: gottbrath
fbshipit-source-id: 434689f255ac1449884acf81f10e0148d0d8d302