Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39331
Fixes gh-37590
Adds an extra `make coverage` step to documentation building, which uses Sphinx's built-in facility for checking docstring coverage. Also fixes a failure to import `torch/jit/supported_ops.py`, which broke the [Torchscript Builtins](https://pytorch.org/docs/stable/jit_builtin_functions.html) page.
This also adds the `SPHINXOPTS` needed to turn warnings into errors, but leaves it commented out. Note that since `torchvision` documentation is merged in here, failures there would cause failures here if this is activated. Some thought may be needed about pinning the `torchvision` version merged into the documentation.
The first commit should fail, since the "ScriptModule" class is commented out. I did that in order to check that a CI failure is properly reported.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38244
Differential Revision: D21640589
Pulled By: ezyang
fbshipit-source-id: 1e240d81669b5f21404d596de4a27d192dc9fd8a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39216
The `rpc.functions.async_execution` decorator specifies that the
wrapped function is guaranteed to return a `torch.futures.Future`.
The decorator adds a `_wrapped_async_rpc_function` attribute to
the wrapper function. The caller retrieves this information and
sets the `isAsyncFunction` argument accordingly, which is later
added to the PythonCall RPC message as a field. On the callee side,
if the PythonCall carries an asynchronous function, the function's
return value is cast to a `jit::PythonFutureWrapper` object, and
response creation and communication are installed as a callback
on that `jit::PythonFutureWrapper`.
For applications, this feature is useful when a function needs to
wait for IO or additional signaling. In those cases, marking the
user function with `rpc.functions.async_execution` prevents it
from blocking one callee thread for too long.
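A minimal usage sketch (the function and worker names here are illustrative, not from this PR):
```python
import torch
import torch.distributed.rpc as rpc

# The decorated function returns a Future immediately, so the callee
# thread is released while the nested RPC is in flight.
@rpc.functions.async_execution
def async_add(to, x, y):
    return rpc.rpc_async(to, torch.add, args=(x, y)).then(
        lambda fut: fut.wait() + 1
    )

# On the caller (assuming workers "worker1" and "worker2" are initialized):
# ret = rpc.rpc_sync("worker1", async_add, args=("worker2", torch.ones(2), 1))
```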
Test Plan: Imported from OSS
Reviewed By: rohan-varma
Differential Revision: D21779962
fbshipit-source-id: 6b6aa698bf6f91dad6ed2a7ee433df429b59e941
Summary:
Continuation of issue gh-36064 and PR gh-38042, which removed the unmaintained javasphinx extension. The unknown Sphinx directives caused warnings when building the documentation.
Edit: link to PR as well as issue
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38920
Differential Revision: D21818297
Pulled By: ezyang
fbshipit-source-id: 2c1d007a7689b26653d7dee081b0b969b8a731a2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39033
Added `real` and `imag` views as tensor attributes. Right now, `tensor.imag` is disabled for real tensors. This is because if we returned a new tensor of zeros, the user would be able to update the tensor returned by `tensor.imag`, which should not be allowed: NumPy returns a read-only array here, and PyTorch doesn't support read-only tensors yet. (A small sketch follows the TODO list below.)
TODO in follow-up PRs:
1. add a setter for `real` and `imag`
2. add a special case in codegen for the `real` and `imag` backward functions
3. remove the `copy_real` and `copy_imag` methods
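A sketch of the current behavior, assuming a build with complex dtype support (values are illustrative):
```python
import torch

z = torch.tensor([1 + 2j, 3 - 4j])
print(z.real)   # tensor([1., 3.])
print(z.imag)   # tensor([ 2., -4.])

x = torch.tensor([1.0, 2.0])
x.real          # fine: a real tensor is its own real part
# x.imag        # raises: disabled for real tensors until read-only
#               # (or settable) views are supported
```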
Test Plan: Imported from OSS
Differential Revision: D21767542
Pulled By: anjali411
fbshipit-source-id: 539febf01f01ff055e3fbc7e9ff01fd3fe729056
Summary:
Fixes https://github.com/pytorch/pytorch/issues/38401
* `torch.hub.load_state_dict_from_url()` now downloads to `$TORCH_HOME/hub/checkpoints` instead of `$TORCH_HOME/checkpoints`, matching `torch.hub.load()` and the other hub functions.
* Make `hub_dir` private; add and use `get_dir()` instead.
Also updated the docs. Did not see a need for additional unit tests.
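A hedged sketch of the resulting behavior (the checkpoint URL is just an example):
```python
import torch

# get_dir() replaces direct use of the now-private hub_dir
hub_dir = torch.hub.get_dir()  # e.g. ~/.cache/torch/hub

# downloads now land under <hub_dir>/checkpoints
state_dict = torch.hub.load_state_dict_from_url(
    "https://download.pytorch.org/models/resnet18-5c106cde.pth"
)
```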
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38969
Differential Revision: D21725880
Pulled By: ailzhang
fbshipit-source-id: 58cc6b32ddbda91e58c1c1433cc3916223556ea1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38149
This is for (#21290) (#31894)
Instead of putting "PyTorch master documentation" in the header's HTML title, we now use "PyTorch 1.x.x documentation", similar to the TensorFlow and NumPy doc pages.
In Google search results, we will get
"PyTorch Documentation - PyTorch 1.x.x documentation" instead.
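A sketch of how this could look in Sphinx's `conf.py`, assuming the standard `html_title` option (the exact wiring in this PR may differ):
```python
# conf.py (sketch)
import torch

version = torch.__version__  # e.g. "1.5.0"
html_title = f"PyTorch {version} documentation"
```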
Test Plan: Imported from OSS
Differential Revision: D21586559
Pulled By: glaringlee
fbshipit-source-id: 2995709ac3c22dbb0183b5b4abfde7d795f1f8eb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38449
Also update docs to reflect conv1d op support
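A hedged usage sketch of the functional op (shapes, scales, and zero points are illustrative):
```python
import torch
from torch.nn.quantized import functional as qF

# NCL input and OIL weight, quantized per tensor
x = torch.quantize_per_tensor(torch.randn(1, 4, 16),
                              scale=0.1, zero_point=0, dtype=torch.quint8)
w = torch.quantize_per_tensor(torch.randn(8, 4, 3),
                              scale=0.1, zero_point=0, dtype=torch.qint8)
b = torch.zeros(8)

y = qF.conv1d(x, w, b, scale=0.2, zero_point=0)
```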
Test Plan:
python test/test_quantization.py TestQuantizedFunctional.test_conv1d_api
Imported from OSS
Differential Revision: D21575921
fbshipit-source-id: 21c9f6b49ad456cd9d93e97f17cf5b8d87f0da6b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38283
Adds support for the quantized `Conv1d` module, plus tests.
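A hedged sketch of the module-level API (parameters are illustrative):
```python
import torch
import torch.nn.quantized as nnq

m = nnq.Conv1d(in_channels=4, out_channels=8, kernel_size=3)
x = torch.quantize_per_tensor(torch.randn(1, 4, 16),
                              scale=0.1, zero_point=0, dtype=torch.quint8)
y = m(x)  # quantized output
```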
Test Plan:
python test/test_quantization.py TestStaticQuantizedModule.test_conv1d_api
Imported from OSS
Differential Revision: D21553665
fbshipit-source-id: 7ea28da024bdf59f87f300d616c266f2b41f0bcd
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986
Follows the stack in https://github.com/pytorch/pytorch/pull/33783 to make functions in `torch/functional.py` resolve to their Python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our `boolean_dispatch` mechanism.
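For context, the return type really does vary with the flags, which is what `boolean_dispatch` has to paper over so TorchScript sees a fixed signature per branch:
```python
import torch

t = torch.tensor([1, 3, 2, 3])

u = torch.unique(t)                             # Tensor
u, inv = torch.unique(t, return_inverse=True)   # (Tensor, Tensor)
u, inv, cnt = torch.unique(t, return_inverse=True,
                           return_counts=True)  # (Tensor, Tensor, Tensor)
```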
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156
Differential Revision: D21504449
Pulled By: eellison
fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
Summary:
Make the `Linear` layer work correctly when `bias` is `False`.
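A quick sanity check of the fixed path:
```python
import torch

m = torch.nn.Linear(3, 4, bias=False)
assert m.bias is None
y = m(torch.randn(2, 3))  # forward works without a bias term
```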
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38002
Differential Revision: D21509679
Pulled By: malfet
fbshipit-source-id: c7077992cf414ecc557b39e5ed1e39ef01c8b347
Summary:
Related to gh-36318
Mention the `bfloat16` dtype and `BFloat16Tensor` in the documentation. The real fix would be to implement CPU operations on the 16-bit float `half`. I also couldn't help but notice that `torch.finfo(torch.bfloat16).xxx` crashes for `xxx in ['max', 'min', 'eps']`.
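For reference, a small sketch of the documented dtype (the `finfo` calls shown are the ones observed to crash at the time of this PR):
```python
import torch

x = torch.tensor([1.0, 2.0], dtype=torch.bfloat16)
print(x.dtype)  # torch.bfloat16

# Crashed at the time this PR was written:
# torch.finfo(torch.bfloat16).max
# torch.finfo(torch.bfloat16).min
# torch.finfo(torch.bfloat16).eps
```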
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37051
Differential Revision: D21476851
Pulled By: ngimel
fbshipit-source-id: fef601d3116d130d67cd3a5654077f31b699409b
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38120
Test Plan: build docs locally and attach a screenshot to this PR.
Differential Revision: D21477815
Pulled By: zou3519
fbshipit-source-id: 420bbcfcbd191d1a8e33cdf4a90c95bf00a5d226
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37548
Moves RecordFunction from the `torch::autograd::profiler` namespace into the `at` namespace.
Test Plan:
CI
Imported from OSS
Differential Revision: D21315852
fbshipit-source-id: 4a4dbabf116c162f9aef0da8606590ec3f3847aa
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37491
This PR modernizes the RecordFunction API and adds thread-local callbacks
in addition to the global ones.
Changes:
- support for TLS callbacks; this is going to be the foundation of the profiler and other tools
- modernize the interface around a simple set of functions, (`add|remove|has|clear`)(`Global|ThreadLocal`)`Callback`, adding `RecordFunctionCallback` to easily construct the callbacks to be passed
- add `.setShouldRun` to the callback interface to support cases when simple uniform sampling is not enough
- to properly support add/remove, introduce the idea of a callback handle returned by `add`
- the internal implementation still uses `SmallVector` to store intermediate state (as before); in this case these are vectors of handles of the callbacks that were picked to run
- to speed up the runtime we keep these vectors sorted, so we can quickly enumerate the callbacks that need to run
- added tests for the new functionality
Test Plan:
BUILD_BINARY=1 USE_BLAS=MKL USE_MKLDNN=0 USE_CUDA=0 python setup.py develop install
./build/bin/test_jit
CI
record_function_benchmark: https://gist.github.com/ilia-cher/f1e094dae47fe23e55e7672ac4dcda2f
Imported from OSS
Differential Revision: D21300448
fbshipit-source-id: 6d55c26dbf20b33d35c3f1604dcc07bb063c8c43
Summary:
xref gh-32838, gh-34032
This is a major refactor of parts of the documentation, splitting it up with Sphinx's `autosummary` feature, which builds out `autofunction` and `autoclass` stub files and links to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like tables of contents pointing to the actual single-class or single-function documentation pages.
Along the way, I modified many of the docstrings to eliminate Sphinx warnings when building. I think the only thing I changed from a non-documentation perspective is adding names to `__all__` when adding them to `globals()` in `torch/__init__.py`.
I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419
Differential Revision: D21337640
Pulled By: ezyang
fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37666
Add `:orphan:` to avoid "WARNING: document isn't included in any toctree".
Test Plan: Imported from OSS
Differential Revision: D21351053
Pulled By: mrshenli
fbshipit-source-id: 6ff67c418fc1de410c7dc39ad9a0be5c30d07122
Summary:
Adds support for generating Vandermonde matrices, based on the NumPy implementation found [here](https://github.com/numpy/numpy/blob/v1.17.0/numpy/lib/twodim_base.py#L475-L563).
Adds a test to ensure the generated matrix matches the expected NumPy output. Note the tests are limited to `torch.long` and `torch.double` due to differences in how PyTorch and NumPy deal with type promotion.
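A hedged example of the new op (following the NumPy convention of decreasing powers by default):
```python
import torch

x = torch.tensor([1, 2, 3, 5])
torch.vander(x)
# tensor([[  1,   1,   1,   1],
#         [  8,   4,   2,   1],
#         [ 27,   9,   3,   1],
#         [125,  25,   5,   1]])

torch.vander(x, N=3, increasing=True)
# tensor([[ 1,  1,  1],
#         [ 1,  2,  4],
#         [ 1,  3,  9],
#         [ 1,  5, 25]])
```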
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36725
Differential Revision: D21075138
Pulled By: jessebrizzi
fbshipit-source-id: 6bb1559e8247945714469b0e2b07c6f4d5fd1fd0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37382
After adding `c10::DispatchKey::Profiler`, the behavior of RecordFunction
observers is also controlled by the dispatch key.
This PR moves that logic out of the profiler and into the record function.
Reviewed By: jamesr66a
Differential Revision: D21268320
fbshipit-source-id: 93207e3b55325d20dcc5b1e8f448ab86933321da
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37292
After adding `c10::DispatchKey::Profiler`, the behavior of RecordFunction
observers is also controlled by the dispatch key.
This PR moves that logic out of the profiler and into the record function.
Reviewed By: jamesr66a
Differential Revision: D21245094
fbshipit-source-id: 595e41b18206d2ba4cf639cb320f630907868b3f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37195
After adding `c10::DispatchKey::Profiler`, the behavior of RecordFunction
observers is also controlled by the dispatch key.
This PR moves that logic out of the profiler and into the record function.
Reviewed By: ngimel
Differential Revision: D21213786
fbshipit-source-id: e618254da74a4f1ce16c51a3869bbd75a4f561ad