Options to address the "undocumented python objects":
1. Reference the functions in the .rst via the torch.nn.modules namespace. Note that this changes the generated doc filenames / locations for most of these functions!
2. [Not an option] Monkeypatch `__module__` for these objects (broke several tests in CI due to `inspect.findsource` failing after this change)
3. Update the .rst files to also document the torch.nn.modules forms of these functions, duplicating docs.
#### [Docs page added by this PR](https://docs-preview.pytorch.org/pytorch/pytorch/158491/nn.aliases.html)
This PR takes option 3: it adds an .rst page, `nn.aliases`, that documents the aliases in the nested namespaces, and removes all the `torch.nn.modules.*` entries from the coverage skiplist except:
- NLLLoss2d (deprecated)
- Container (deprecated)
- CrossMapLRN2d (what is this?)
- NonDynamicallyQuantizableLinear
This mostly required adding docstrings to `forward`, `extra_repr` and `reset_parameters`. Since the forward arguments are already documented in each module's docstring, I just added a very basic docstring to each `forward`.
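The basic docstrings in question look roughly like this (a sketch with a hypothetical `MyPool` class, not the exact wording or modules touched by the PR):

```python
class MyPool:  # hypothetical stand-in for an nn.Module subclass
    def forward(self, input):
        """Run the forward pass."""
        return max(input, 0)

    def extra_repr(self) -> str:
        """Return the extra representation of the module."""
        return ""

    def reset_parameters(self) -> None:
        """Reset all learnable parameters of the module."""
```

A one-line docstring like this is enough to satisfy doc-coverage checks while leaving the detailed argument documentation in the class-level docstring.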
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158491
Approved by: https://github.com/janeyx99
Fixes #112599
Fixed pydocstyle errors in the following files. The remaining errors relate to missing docstrings at the module level and on methods within each module: `forward()`, `reset_parameters()`, `__init__()`, etc.
`pydocstyle torch/nn/modules/pooling.py --count`
before: 49
after: 29
**remaining errors:**
```
torch/nn/modules/pooling.py:1 at module level:
D100: Missing docstring in public module
torch/nn/modules/pooling.py:90 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:163 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:240 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:315 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pooling.py:321 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:402 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pooling.py:408 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:472 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pooling.py:478 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:541 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pooling.py:550 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:620 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pooling.py:630 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:706 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pooling.py:716 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:720 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/nn/modules/pooling.py:774 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pooling.py:792 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:845 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pooling.py:863 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:925 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:979 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:1026 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:1068 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:1111 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:1150 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:1189 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pooling.py:1228 in public method `forward`:
D102: Missing docstring in public method
```
`pydocstyle torch/nn/modules/upsampling.py --count`
before: 14
after: 7
**remaining:**
```
torch/nn/modules/upsampling.py:1 at module level:
D100: Missing docstring in public module
torch/nn/modules/upsampling.py:142 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/upsampling.py:156 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/upsampling.py:160 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/nn/modules/upsampling.py:166 in public method `extra_repr`:
D102: Missing docstring in public method
torch/nn/modules/upsampling.py:216 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/upsampling.py:263 in public method `__init__`:
D107: Missing docstring in __init__
```
`pydocstyle torch/nn/modules/rnn.py --count`
before: 47
after: 40
**remaining:**
```
torch/nn/modules/rnn.py:1 at module level:
D100: Missing docstring in public module
torch/nn/modules/rnn.py:59 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:160 in public method `__setattr__`:
D105: Missing docstring in magic method
torch/nn/modules/rnn.py:225 in public method `reset_parameters`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:230 in public method `check_input`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:242 in public method `get_expected_hidden_size`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:256 in public method `check_hidden_size`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:272 in public method `check_forward_args`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:278 in public method `permute_hidden`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:284 in public method `extra_repr`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:305 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/nn/modules/rnn.py:313 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/nn/modules/rnn.py:355 in public method `all_weights`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:471 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:478 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:481 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:503 in public method `forward` (skipping F811):
D102: Missing docstring in public method
torch/nn/modules/rnn.py:762 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:768 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:771 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:774 in public method `get_expected_cell_size`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:786 in public method `check_forward_args`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:798 in public method `permute_hidden`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:809 in public method `forward` (skipping F811):
D102: Missing docstring in public method
torch/nn/modules/rnn.py:820 in public method `forward` (skipping F811):
D102: Missing docstring in public method
torch/nn/modules/rnn.py:1030 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1036 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1039 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1046 in public method `forward` (skipping F811):
D102: Missing docstring in public method
torch/nn/modules/rnn.py:1054 in public method `forward` (skipping F811):
D102: Missing docstring in public method
torch/nn/modules/rnn.py:1123 in public class `RNNCellBase`:
D101: Missing docstring in public class
torch/nn/modules/rnn.py:1134 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1152 in public method `extra_repr`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:1160 in public method `reset_parameters`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:1224 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1230 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:1327 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1332 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/rnn.py:1422 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1427 in public method `forward`:
D102: Missing docstring in public method
```
`pydocstyle torch/nn/modules/pixelshuffle.py --count`
before: 13
after: 8
**remaining:**
```
torch/nn/modules/pixelshuffle.py:1 at module level:
D100: Missing docstring in public module
torch/nn/modules/pixelshuffle.py:52 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pixelshuffle.py:56 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pixelshuffle.py:59 in public method `extra_repr`:
D102: Missing docstring in public method
torch/nn/modules/pixelshuffle.py:105 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/pixelshuffle.py:109 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/pixelshuffle.py:112 in public method `extra_repr`:
D102: Missing docstring in public method
```
`pydocstyle torch/nn/modules/sparse.py --count`
before: 14
after: 8
**remaining errors:**
```
torch/nn/modules/sparse.py:1 at module level:
D100: Missing docstring in public module
torch/nn/modules/sparse.py:124 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/sparse.py:153 in public method `reset_parameters`:
D102: Missing docstring in public method
torch/nn/modules/sparse.py:162 in public method `forward`:
D102: Missing docstring in public method
torch/nn/modules/sparse.py:167 in public method `extra_repr`:
D102: Missing docstring in public method
torch/nn/modules/sparse.py:320 in public method `__init__`:
D107: Missing docstring in __init__
torch/nn/modules/sparse.py:350 in public method `reset_parameters`:
D102: Missing docstring in public method
torch/nn/modules/sparse.py:396 in public method `extra_repr`:
D102: Missing docstring in public method
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113177
Approved by: https://github.com/ezyang
Fixes #100935, adding handling for the `recompute_scale_factor` field. I'd be happy to write a test for this, but might need some advice on where it should go and how to reliably reproduce the given issue. I'd also be happy to iterate on the proposed changes.
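The underlying problem is that `Upsample` modules serialized before the `recompute_scale_factor` field existed lack the attribute after loading. A minimal sketch of the defensive-access pattern (the `LegacyUpsample` class and `scale_factor_policy` helper are illustrative, not the PR's actual code):

```python
class LegacyUpsample:  # hypothetical: an instance unpickled from an old checkpoint
    """Stand-in for an Upsample module saved before the field existed."""

def scale_factor_policy(module):
    # Attributes added in later releases may be absent on modules loaded
    # from old checkpoints, so fall back to a default instead of raising
    # AttributeError during forward().
    return getattr(module, "recompute_scale_factor", None)
```

The same `getattr`-with-default pattern is a common way to keep `forward()` working across serialization-format changes.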
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101248
Approved by: https://github.com/albanD
This is a new version of #15648 based on the latest master branch.
Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.
In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip were causing segfaults. Running xdoctest results in 293 failed and 201 passed tests. The next commits will disable the failing tests (unfortunately I don't have a tool that will insert the `# xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually).
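For reference, the skip directive goes inside the docstring example itself; a minimal sketch (the `launch_training` call is hypothetical):

```python
def flaky_example():
    """Demonstrate disabling a failing doctest under xdoctest.

    Example:
        >>> # xdoctest: +SKIP
        >>> launch_training()  # hypothetical call that would otherwise segfault
    """
```

xdoctest parses the directive comment and skips the example without removing it from the documentation.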
Fixes https://github.com/pytorch/pytorch/issues/71105
@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
Summary:
The documentation of torch.nn.Upsample stated that `align_corners` only affects `linear`, `bilinear` and `trilinear`.
This PR updates the documentation for the Python `Upsample` module and the C++ `UpsampleOptions` struct to reflect that `bicubic` is also affected by `align_corners`.
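The reason `bicubic` is affected is that it uses the same source-coordinate mapping that `align_corners` toggles for the other linearly interpolating modes. A pure-Python sketch of the two mappings, as commonly documented for `interpolate` (the `src_coord` helper is illustrative):

```python
def src_coord(dst: int, in_size: int, out_size: int, align_corners: bool) -> float:
    """Map an output pixel index to a (fractional) input coordinate."""
    if align_corners:
        # Corner pixels of input and output are aligned exactly.
        if out_size == 1:
            return 0.0
        return dst * (in_size - 1) / (out_size - 1)
    # Otherwise pixel *centers* are aligned; this branch is shared by
    # linear, bilinear, trilinear, and bicubic modes alike.
    scale = in_size / out_size
    return (dst + 0.5) * scale - 0.5
```

Because the branch taken depends on `align_corners`, bicubic output changes with the flag just as bilinear output does.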
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66756
Reviewed By: zou3519
Differential Revision: D31731148
Pulled By: jbschlosser
fbshipit-source-id: 3ec277fc3fbdf8414d0de327d8c57ba07342a5b9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38211
Just because the annotations are inline doesn't mean the files type check; most of the newly annotated files have type errors, and I added exclusions for them in mypy.ini. The payoff of moving all of these modules inline is that I can delete the relevant code generation logic for the .pyi files (which was generating ignore annotations that weren't actually relevant anymore).
For the most part the translation was completely mechanical, but there were two hairy issues. First, I needed to work around a bug in Python 3.6 and earlier where Generic has a nontrivial metaclass; this fix is in torch/jit/__init__.py. Second, in module.py, we need to apply the same fix for avoiding contravariance checks that the .pyi file used to have; this is done by declaring `forward` as a variable (rather than a function), which appears to be sufficient to keep mypy from contravariantly checking input arguments.
Because we aren't actually typechecking these modules in most
cases, it is inevitable that some of these type annotations are wrong.
I slavishly copied the old annotations from the pyi files unless there
was an obvious correction I could make. These annotations will probably
need fixing up later.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D21497397
Pulled By: ezyang
fbshipit-source-id: 2b08bacc152c48f074e7edc4ee5dce1b77d83702
Summary:
* Deletes all weak script decorators / associated data structures / methods
* In order to keep supporting the standard library in script, this enables recursive script on any function defined in `torch.nn`
* Most changes in `torch/nn` are the result of `ag -Q "weak" torch/nn/ -l | xargs sed -i '/weak/d'`; only `rnn.py` needed manual editing, using `ignore` and `export` to keep supporting the overloaded `forward` methods
* `Sequential`/`ModuleList` no longer need to be added to constants since they are compiled on demand
This should also fix https://github.com/pytorch/pytorch/issues/22212
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22212
Differential Revision: D15988346
Pulled By: driazati
fbshipit-source-id: af223e3ad0580be895377312949997a70e988e4f
Summary:
Fixes #20523.
`nn.Upsample` was unable to accept tuple inputs for the `scale_factor` argument due to a direct cast to float, which was introduced in #17732.
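A sketch of the kind of normalization that avoids the bad `float()` cast (illustrative, with a hypothetical `normalize_scale_factor` helper, not the exact fix):

```python
def normalize_scale_factor(scale_factor, dim: int):
    """Accept a single number or a per-dimension sequence of scale factors."""
    if scale_factor is None:
        return None
    if isinstance(scale_factor, (list, tuple)):
        if len(scale_factor) != dim:
            raise ValueError(
                f"expected {dim} scale factors, got {len(scale_factor)}"
            )
        return tuple(float(s) for s in scale_factor)
    # A bare int/float applies uniformly to every spatial dimension;
    # casting the *tuple* case directly with float() is what broke.
    return tuple(float(scale_factor) for _ in range(dim))
```

Branching on the container type first keeps both the scalar and per-dimension forms working.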
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20581
Differential Revision: D15392622
Pulled By: ezyang
fbshipit-source-id: b56ba8197a5bbf8891bc7e1bebf5cad63dcab04d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**
This was requested by someone at Facebook; this lint is turned
on for Facebook by default. "Sure, why not."
I had to `noqa` a number of imports in `__init__`. Hypothetically we're supposed to use `__all__` in this case, but I was too lazy to fix it; left for future work.
Be careful! flake8-2 and flake8-3 behave differently with respect to import resolution for `# type:` comments: flake8-3 will report the import as unused, while flake8-2 will not. For now, I just `noqa`'d all these sites.
All the changes were done by hand.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14687478
fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
Summary:
IIRC we decided to remove the in-code warning in #11568. This got reverted accidentally in #14123.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17921
Differential Revision: D14422811
Pulled By: ailzhang
fbshipit-source-id: 7067264bd1d3e3b7861d29e18ade2969ed705ca1
Summary:
Addresses #918; interpolation results should be similar to TF.
* Adds bicubic interpolation operator to `nn.functional.interpolate`
* Corresponding test in `test_nn.py`
The operator is added in legacy `TH` to be aligned with the other upsampling operators; they can all be refactored/moved to ATen at once when #10482 is resolved
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9849
Differential Revision: D9007525
Pulled By: driazati
fbshipit-source-id: 93ef49a34ce4e5ffd4bda94cd9a6ddc939f0a4cc
Summary:
Add support for interpolate and upsampling in weak_script mode.
Because the function parameters are overloaded, I had to add it as a builtin op. For `interpolate`, `size` can be `int?` or `int[]?`, and `scale_factor` can be `float?` or `float[]?`; every combination of the two parameters needs to be supported.
The same logic applies to `upsample_nearest`, `upsample_bilinear`, and `upsample`.
There are a few fixes that I came to along the way.
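A pure-Python sketch of the argument combinations the builtin op has to accept (the `check_interpolate_args` helper is illustrative; the real overload resolution lives in the script compiler):

```python
from typing import Optional, Sequence, Union

Size = Optional[Union[int, Sequence[int]]]
Scale = Optional[Union[float, Sequence[float]]]

def check_interpolate_args(size: Size = None, scale_factor: Scale = None) -> str:
    """Validate that exactly one of size / scale_factor is given."""
    if (size is None) == (scale_factor is None):
        raise ValueError("exactly one of size or scale_factor must be set")
    if size is not None:
        return "size" if isinstance(size, int) else "size-list"
    return "scale" if isinstance(scale_factor, float) else "scale-list"
```

Each of the four accepted shapes (scalar or list, for either parameter) maps to a distinct overload, which is why a plain script function signature wasn't enough.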
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14123
Differential Revision: D13278923
Pulled By: eellison
fbshipit-source-id: e59729034369be4ce4b747291a3d1c74e135b869
Summary:
Fixes #11452.
Based on the discussion with SsnL and soumith, we want to bring back `Upsample` as a module instead of introducing a new `nn.interpolate` module for now. Anyone who wants to downsample should use `nn.functional.interpolate` instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11568
Differential Revision: D9804359
Pulled By: ailzhang
fbshipit-source-id: 2b232d55fc83c2b581bf336f1ee8d1cf1c1159ca
Summary:
This PR addresses #5823.
* Fix docstring: upsample doesn't support `LongTensor`
* Enable float-scale up- and downsampling for linear/bilinear/trilinear modes (following SsnL's commit)
* Enable float-scale up- and downsampling for nearest mode. Note that our implementation differs slightly from TF in that there is no "align_corners" concept in this mode.
* Add a new `interpolate` function API to replace `upsample`, and add a deprecation warning for `upsample`
* Add an "area" mode, which is essentially adaptive average pooling, to `resize_images`
* Add test cases for `interpolate` in test_nn.py
* Add a few comments to help understand the *linear interpolation code
* The only mode still missing from the `resize_images` API is "*cubic", which is pretty useful in practice; it's labeled as hackamonth in #1552. I discussed with SsnL that we probably want to implement all new ops in ATen instead of THNN/THCUNN. Depending on the priority, I could either put it in my queue or leave it for a HAMer.
* After the change, the files named *Upsampling*.c work for both up- and downsampling. I could rename the files if needed.
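The relationship between "area" mode and adaptive average pooling can be sketched in 1-D (pure Python, for intuition only; the real operator works on tensors):

```python
def area_downsample_1d(values, out_size):
    """Downsample by averaging each output bin, like adaptive avg pooling."""
    n = len(values)
    out = []
    for i in range(out_size):
        # Bin boundaries: partition the input evenly across output slots,
        # the same partitioning adaptive_avg_pool1d uses.
        start = (i * n) // out_size
        end = ((i + 1) * n) // out_size
        out.append(sum(values[start:end]) / (end - start))
    return out
```

Each output element is the mean of its input bin, so downsampling preserves the overall signal energy better than nearest-neighbor subsampling.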
Differential Revision: D8729635
Pulled By: ailzhang
fbshipit-source-id: a98dc5e1f587fce17606b5764db695366a6bb56b
This PR enables users to print extra information about their subclassed nn.Module.
For now I simply insert the user-defined string at the end of the module name; this should be discussed in this PR.
Before this PR, users had to redefine __repr__ and copy-paste the source code from Module.
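The mechanism can be sketched without torch: the extra string is spliced into the module's repr (a simplified mock of the pattern, with hypothetical `MiniModule`/`MiniLinear` classes, not nn.Module itself):

```python
class MiniModule:
    """Minimal stand-in for nn.Module's repr machinery."""

    def extra_repr(self) -> str:
        # Subclasses override this instead of rewriting __repr__.
        return ""

    def __repr__(self) -> str:
        return f"{type(self).__name__}({self.extra_repr()})"

class MiniLinear(MiniModule):
    def __init__(self, in_features: int, out_features: int) -> None:
        self.in_features = in_features
        self.out_features = out_features

    def extra_repr(self) -> str:
        return f"in_features={self.in_features}, out_features={self.out_features}"
```

Overriding one small hook keeps the surrounding formatting (name, parentheses, child indentation in the real nn.Module) consistent across all modules.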
* Add support for extra information on Module
* Rewrite the repr method of Module
* Fix flake8
* Change the __repr__ to get_extra_repr in Linear
* Fix extra new-line for empty line
* Add test for __repr__ method
* Fix bug of block string indent
* Add indent for multi-line repr test.
* Address review comments
* Update tutorial for creating nn.Module
* Fix flake8, add extra_repr of bilinear
* Refactor DropoutNd
* Change to extra_repr in some Modules
* Fix flake8
* Refactor padding modules
* Refactor pooling module
* Fix typo
* Change to extra_repr
* Fix bug for GroupNorm
* Fix bug for LayerNorm
* Changes in bilinear upsampling
* Add align_corners option to upsampling module & functional when using linearly interpolating modes
When align_corners=True, the old original upsampling scheme is used, which gives visually better results but doesn't properly align input and output pixels, and thus causes the output to vary depending on the input size. This PR adds the align_corners option and changes the default behavior to align_corners=False, with a proper warning if the option is not specified when using nn.Upsample or nn.functional.upsample, to make users aware of this change.
Adds tests in test_nn.py for spatial invariance when align_corners=False, and the usual module tests for align_corners=False.
* remove redundant checks and unnecessary variables; fix the cast
* fix negative indices
* Fix some minor errors in existing docs.
* Fix Convolution and Pooling docs in torch.nn.functional
* Cleaned up torch.nn.functional docs
* Address @SsnL 's comments
* Add multiplication sign missing in docs
* Fix more typos, and clear some warnings
* Change infinity symbol in LPPool2d
* Revert some changes in torch.nn.functional
* Few more minor changes
* Improve documentation
1. Add formula for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan
* Fix minor nit in the docstring
* More doc improvements
1. Added some formulae
2. Complete scanning till "Other Operations" in Tensor docs
* Add more changes
1. Modify all torch.Tensor wherever required
* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines
* Improve Pooling docs
1. Fix lint error
* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling
* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation
* Fix lint error
* Improve docstrings in torch.nn.init
* Fix lint error
* Fix minor error in torch.nn.init.sparse
* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add explicit clean to Makefile in docs to prevent running graph generation script
while cleaning
3. Fix utils docs
* Make PYCMD a Makefile argument, clear up prints in the build_activation_images.py
* Fix batch norm doc error
The nn.* counterpart of #5443. Mostly removed the Variable wrapper. Also added docs for nn.RReLU.
Notice that torch.randn(*, requires_grad=True) isn't documented until #5462 is done.
* Add more detail to CUDA documentation
Also adds better cross-linking to the pages that discuss relevant topics.
* Adds recommendation to torch.save docs
* Make the version numbers for the docs dynamic
Might need tweaks for beta, 1.0, etc.
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8 handle everything it can handle safely, and to disable any rules that are tricky or controversial to address. We may want to come back and re-enable some of these rules later, but I'm trying to make this patch as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.