Commit Graph

49 Commits

Mikayla Gawarecki
9e8f27cc79 [BE] Make torch.nn.modules.* satisfy the docs coverage test (#158491)
Options to address the "undocumented python objects":

1. Reference the functions in the .rst via the torch.nn.modules namespace. Note that this changes the generated doc filenames / locations for most of these functions!
2. [Not an option] Monkeypatch `__module__` for these objects (broke several tests in CI due to `inspect.findsource` failing after this change)
3. Update the .rst files to also document the torch.nn.modules forms of these functions, duplicating docs.

#### [This is the docs page that was added](https://docs-preview.pytorch.org/pytorch/pytorch/158491/nn.aliases.html)
This PR takes option 3, adding an .rst page `nn.aliases` that documents the aliases in nested namespaces and removing all the torch.nn.modules.* entries from the coverage skiplist except:
- NLLLoss2d (deprecated)
- Container (deprecated)
- CrossMapLRN2d (what is this?)
- NonDynamicallyQuantizableLinear

This mostly required adding docstrings to `forward`, `extra_repr` and `reset_parameters`. Since forward arguments are already part of the module docstrings, I just added a very basic docstring; see the sketch below.
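For illustration, a minimal sketch (hypothetical module, not taken from the PR) of the style of docstring added:

```python
from torch import Tensor, nn

class Doubler(nn.Module):
    """Hypothetical module showing the one-line docstrings added in this PR."""

    def forward(self, input: Tensor) -> Tensor:
        """Run the forward pass."""
        return input * 2

    def extra_repr(self) -> str:
        """Return the extra representation of the module."""
        return ""
```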

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158491
Approved by: https://github.com/janeyx99
2025-07-25 22:03:55 +00:00
Xuehai Pan
62ccf6d7cd [BE] enable UFMT for torch/nn/modules (#128594)
Part of #123062

- #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128594
Approved by: https://github.com/mikaylagawarecki
2024-06-23 05:37:57 +00:00
PyTorch MergeBot
d4022b4658 Revert "[BE] enable UFMT for torch/nn/modules (#128594)"
This reverts commit 95ac2d6482.

Reverted https://github.com/pytorch/pytorch/pull/128594 on behalf of https://github.com/fbgheith due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/128594#issuecomment-2181788935))
2024-06-21 00:50:08 +00:00
Xuehai Pan
95ac2d6482 [BE] enable UFMT for torch/nn/modules (#128594)
Part of #123062

- #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128594
Approved by: https://github.com/mikaylagawarecki
ghstack dependencies: #128596
2024-06-17 16:29:25 +00:00
Aaron Orenstein
27f9d3b0a1 Flip default value for mypy disallow_untyped_defs [8/11] (#127845)
See #127836 for details.
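For context, a minimal sketch (hypothetical functions) of what flipping the default enforces:

```python
# With disallow_untyped_defs = True, mypy rejects the first definition
# ("Function is missing a type annotation") and accepts the second.
def scale(x, factor):  # error under the flipped default
    return x * factor

def scale_typed(x: float, factor: float) -> float:
    return x * factor
```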

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127845
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843, #127844
2024-06-08 18:49:56 +00:00
zabboud
53e7de4b65 Issue 112599 - fix pydocstyle errors (#113177)
Fixes #112599

Fixed errors relating to pydocstyle in the following files. The remaining errors relate to docstrings at the module level and in methods within each module (`forward()`, `reset_parameters`, `__init__`, etc.).

pydocstyle torch/nn/modules/pooling.py --count
before: 49
after: 29

**remaining errors:**
```
torch/nn/modules/pooling.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/modules/pooling.py:90 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:163 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:240 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:315 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pooling.py:321 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:402 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pooling.py:408 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:472 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pooling.py:478 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:541 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pooling.py:550 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:620 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pooling.py:630 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:706 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pooling.py:716 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:720 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/nn/modules/pooling.py:774 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pooling.py:792 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:845 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pooling.py:863 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:925 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:979 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:1026 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:1068 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:1111 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:1150 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:1189 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pooling.py:1228 in public method `forward`:
        D102: Missing docstring in public method
```

pydocstyle torch/nn/modules/upsampling.py --count
before: 14
after: 7

**remaining:**
```
torch/nn/modules/upsampling.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/modules/upsampling.py:142 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/upsampling.py:156 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/upsampling.py:160 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/nn/modules/upsampling.py:166 in public method `extra_repr`:
        D102: Missing docstring in public method
torch/nn/modules/upsampling.py:216 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/upsampling.py:263 in public method `__init__`:
        D107: Missing docstring in __init__
```

pydocstyle torch/nn/modules/rnn.py --count
before: 47
after: 40

**remaining**
```
torch/nn/modules/rnn.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/modules/rnn.py:59 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:160 in public method `__setattr__`:
        D105: Missing docstring in magic method
torch/nn/modules/rnn.py:225 in public method `reset_parameters`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:230 in public method `check_input`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:242 in public method `get_expected_hidden_size`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:256 in public method `check_hidden_size`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:272 in public method `check_forward_args`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:278 in public method `permute_hidden`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:284 in public method `extra_repr`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:305 in public method `__getstate__`:
        D105: Missing docstring in magic method
torch/nn/modules/rnn.py:313 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/nn/modules/rnn.py:355 in public method `all_weights`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:471 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:478 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:481 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:503 in public method `forward` (skipping F811):
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:762 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:768 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:771 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:774 in public method `get_expected_cell_size`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:786 in public method `check_forward_args`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:798 in public method `permute_hidden`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:809 in public method `forward` (skipping F811):
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:820 in public method `forward` (skipping F811):
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:1030 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1036 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1039 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1046 in public method `forward` (skipping F811):
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:1054 in public method `forward` (skipping F811):
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:1123 in public class `RNNCellBase`:
        D101: Missing docstring in public class
torch/nn/modules/rnn.py:1134 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1152 in public method `extra_repr`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:1160 in public method `reset_parameters`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:1224 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1230 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:1327 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1332 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/rnn.py:1422 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/rnn.py:1427 in public method `forward`:
        D102: Missing docstring in public method
```

pydocstyle torch/nn/modules/pixelshuffle.py --count
before: 13
after: 8

**remaining:**
```
torch/nn/modules/pixelshuffle.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/modules/pixelshuffle.py:52 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pixelshuffle.py:56 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pixelshuffle.py:59 in public method `extra_repr`:
        D102: Missing docstring in public method
torch/nn/modules/pixelshuffle.py:105 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/pixelshuffle.py:109 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/pixelshuffle.py:112 in public method `extra_repr`:
        D102: Missing docstring in public method
```

pydocstyle torch/nn/modules/sparse.py --count
before: 14
after: 8

**remaining errors:**
```
torch/nn/modules/sparse.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/modules/sparse.py:124 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/sparse.py:153 in public method `reset_parameters`:
        D102: Missing docstring in public method
torch/nn/modules/sparse.py:162 in public method `forward`:
        D102: Missing docstring in public method
torch/nn/modules/sparse.py:167 in public method `extra_repr`:
        D102: Missing docstring in public method
torch/nn/modules/sparse.py:320 in public method `__init__`:
        D107: Missing docstring in __init__
torch/nn/modules/sparse.py:350 in public method `reset_parameters`:
        D102: Missing docstring in public method
torch/nn/modules/sparse.py:396 in public method `extra_repr`:
        D102: Missing docstring in public method
```
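For reference, the typical shape of these fixes is a short imperative-mood docstring (hypothetical stand-in class, not from the PR):

```python
class Pooler:
    def forward(self, input):  # pydocstyle previously flagged D102 here
        """Run the forward pass."""  # a one-line docstring resolves it
        return input
```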
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113177
Approved by: https://github.com/ezyang
2023-11-14 20:55:22 +00:00
ts
dfd822d756 Fix deserialization for UpsamplingBilinear2d (#101248)
Fixes #100935 by adding handling for the `recompute_scale_factor` field. I would be happy to write a test for this but might need advice on where it should go and how to reliably reproduce the issue. I'd also be happy to iterate on the proposed changes.
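A plausible shape for such a fix, as a hedged sketch (not necessarily the merged patch):

```python
import torch.nn as nn

class TolerantUpsample(nn.Upsample):
    """Hypothetical sketch: tolerate checkpoints pickled before the field existed."""

    def __setstate__(self, state):
        # Older serialized modules lack recompute_scale_factor; default it
        # so deserialization doesn't fail on attribute access later.
        state.setdefault("recompute_scale_factor", None)
        super().__setstate__(state)
```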

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101248
Approved by: https://github.com/albanD
2023-05-12 15:40:17 +00:00
Xuehai Pan
5b1cedacde [BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```
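The mechanical rewrite itself replaces the two-argument Python 2 form with the zero-argument form:

```diff
 class MyModule(nn.Module):
     def __init__(self):
-        super(MyModule, self).__init__()
+        super().__init__()
```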

Cases where the rewrite would change semantics are left unchanged, e.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
Edward Z. Yang
caf1b27196 Fix Upsample/EmbeddingBag module printing (#93850)
The fix generalizes but I want someone else to holistically figure this out.

Fixes https://github.com/pytorch/pytorch/issues/93233
Fixes https://github.com/pytorch/pytorch/issues/93512

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93850
Approved by: https://github.com/albanD
2023-02-02 02:50:29 +00:00
joncrall
b136f3f310 More doctest refinements. (#83317)
Follow up to #82797

Now that the doctests themselves are in a better state, we should be able to enable xdoctest on the CI so they stay that way.

@ezyang @vadimkantorov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83317
Approved by: https://github.com/ezyang
2022-08-22 20:07:26 +00:00
joncrall
4618371da5 Integrate xdoctest - Rebased (#82797)
This is a new version of #15648 based on the latest master branch.

Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.

In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed, 201 passed tests. The next commits will be to disable those tests. (unfortunately I don't have a tool that will insert the `#xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)
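For reference, the directive looks like this inside a docstring (hypothetical function):

```python
def frobnicate(x):
    """Hypothetical example of skipping a failing doctest.

    Example:
        >>> # xdoctest: +SKIP
        >>> frobnicate(some_input_that_currently_fails)
    """
    return x
```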

Fixes https://github.com/pytorch/pytorch/issues/71105

@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
2022-08-12 02:08:01 +00:00
PyTorch MergeBot
9db3c517de Add __all__ for torch.nn.modules, torch.distributed.elastic, torch.nn.utils submodules (#80240)
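For illustration, the pattern this adds (a hedged sketch; the actual per-module lists differ):

```python
# Each submodule now declares its public names explicitly, e.g.:
__all__ = ["Upsample", "UpsamplingNearest2d", "UpsamplingBilinear2d"]
```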
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80240
Approved by: https://github.com/rohan-varma
2022-06-27 17:11:12 +00:00
vfdev
62ca5a81c0 Exposed recompute_scale_factor into nn.Upsample (#66419)
Summary:
Description:
- Exposed `recompute_scale_factor` in `nn.Upsample` so that the `recompute_scale_factor=True` option can be used

Context: https://github.com/pytorch/pytorch/pull/64501#discussion_r710205190
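A short usage sketch of the exposed option:

```python
import torch
from torch import nn

x = torch.randn(1, 3, 10, 10)
up = nn.Upsample(scale_factor=2.3, mode="nearest", recompute_scale_factor=True)
print(up(x).shape)  # torch.Size([1, 3, 23, 23])
```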

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66419

Reviewed By: gchanan

Differential Revision: D31731276

Pulled By: jbschlosser

fbshipit-source-id: 2118489e6f5bc1142f2a64323f4cfd095a9f3c42
2021-10-20 07:59:25 -07:00
Jannik Bamberger
c994a7fc2d Update documentation of torch.nn.Upsample (#66756)
Summary:
The documentation of torch.nn.Upsample stated that `align_corners` only affects `linear`, `bilinear` and `trilinear`.

This PR updates the documentation for the Python `Upsample` module and the C++ `UpsampleOptions` struct to reflect that `bicubic` is also affected by `align_corners`.
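A short sketch demonstrating the documented behavior:

```python
import torch
from torch import nn

x = torch.randn(1, 1, 4, 4)
a = nn.Upsample(scale_factor=2, mode="bicubic", align_corners=True)(x)
b = nn.Upsample(scale_factor=2, mode="bicubic", align_corners=False)(x)
print(torch.allclose(a, b))  # False: bicubic output depends on align_corners
```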

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66756

Reviewed By: zou3519

Differential Revision: D31731148

Pulled By: jbschlosser

fbshipit-source-id: 3ec277fc3fbdf8414d0de327d8c57ba07342a5b9
2021-10-18 13:07:17 -07:00
Nikita Shulga
442684cb25 Enable typechecks for torch.nn.modules.[activation|upsampling] (#44093)
Summary:
Add missing `hardsigmoid`, `silu`, `hardswish` and `multi_head_attention_forward` to functional.pyi.in.
Embed some typing annotations into functional.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44093

Reviewed By: ezyang

Differential Revision: D23494384

Pulled By: malfet

fbshipit-source-id: 27023c16ff5951ceaebb78799c4629efa25f7c5c
2020-09-03 13:20:04 -07:00
Edward Yang
eace053398 Move all torch.nn.modules type annotations inline (#38211)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38211

Just because the annotations are inline doesn't mean the files type
check; most of the newly annotated files have type errors and I
added exclusions for them in mypy.ini. The payoff of moving
all of these modules inline is that I can delete the relevant code
generation logic for the pyi files (which had been adding ignore
annotations that weren't actually relevant anymore).

For the most part the translation was completely mechanical, but there
were two hairy issues. First, I needed to work around a Python 3.6 and
earlier bug where Generic has a nontrivial metaclass. This fix is in
torch/jit/__init__.py. Second, in module.py, we need to apply the same
fix for avoiding contravariance checks that the pyi file used to have;
this is done by declaring forward as a variable (rather than a
function), which appears to be sufficient to get mypy to not
contravariantly check input arguments.
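The trick, as a minimal sketch:

```python
from typing import Any, Callable

class Module:
    # Declared as a variable of callable type rather than as a method, so
    # mypy does not contravariantly check forward() arguments in subclasses.
    forward: Callable[..., Any]
```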

Because we aren't actually typechecking these modules in most
cases, it is inevitable that some of these type annotations are wrong.
I slavishly copied the old annotations from the pyi files unless there
was an obvious correction I could make.  These annotations will probably
need fixing up later.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21497397

Pulled By: ezyang

fbshipit-source-id: 2b08bacc152c48f074e7edc4ee5dce1b77d83702
2020-06-11 15:59:57 -07:00
David Riazati
10c4b98ade Remove weak script (#22212)
Summary:
* Deletes all weak script decorators / associated data structures / methods
   * In order to keep supporting the standard library in script, this enables recursive script on any function defined in `torch.nn`
   * Most changes in `torch/nn` are the result of `ag -Q "weak" torch/nn/ -l | xargs sed -i '/weak/d'`; only `rnn.py` needed manual editing, using `ignore` and `export` to continue supporting the overloaded `forward` methods
* `Sequential`/`ModuleList` no longer need to be added to constants since they are compiled on demand

This should also fix https://github.com/pytorch/pytorch/issues/22212
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22212

Differential Revision: D15988346

Pulled By: driazati

fbshipit-source-id: af223e3ad0580be895377312949997a70e988e4f
2019-07-03 17:28:25 -07:00
vishwakftw
4c806a9e8a Allow tuples for scale_factor argument in nn.Upsample (#20581)
Summary:
Fixes #20523 .

nn.Upsample was unable to accept tuple inputs for the scale_factor argument due to direct casting to float, which was done in #17732.
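A short usage sketch of the restored behavior:

```python
import torch
from torch import nn

x = torch.randn(1, 3, 8, 8)
up = nn.Upsample(scale_factor=(2, 3), mode="nearest")  # per-dimension factors
print(up(x).shape)  # torch.Size([1, 3, 16, 24])
```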
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20581

Differential Revision: D15392622

Pulled By: ezyang

fbshipit-source-id: b56ba8197a5bbf8891bc7e1bebf5cad63dcab04d
2019-05-17 07:14:18 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.
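The noqa pattern in question looks roughly like this (illustrative import, not a specific changed line):

```python
# Keep a re-exported name without triggering F401; the cleaner fix is __all__.
from torch.nn.modules.upsampling import Upsample  # noqa: F401
```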

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
ZhuBaohe
8c3285bf11 Fix loss functions doc (#18420)
Summary:
Correct a docstring display error on the web page caused by my previous PR
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18420

Differential Revision: D14642467

Pulled By: soumith

fbshipit-source-id: 16fdd3301a4c5bad27fbcd8686f7fbfcc1e908ee
2019-03-27 10:23:24 -07:00
Ailing Zhang
3e00f79a1e remove warning for upsample code (#17921)
Summary:
IIRC we decided to remove the warning in code in #11568. This was accidentally reverted in #14123.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17921

Differential Revision: D14422811

Pulled By: ailzhang

fbshipit-source-id: 7067264bd1d3e3b7861d29e18ade2969ed705ca1
2019-03-12 12:16:33 -07:00
David Riazati
0955592243 Cast nn.Upsample.scale_factor to a float (#17732)
Summary:
Fixes #17106
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17732

Differential Revision: D14388192

Pulled By: driazati

fbshipit-source-id: d9c9e87a7c6db63c1de3ddebbb8dcf619f0dc34d
2019-03-08 15:29:35 -08:00
ZhuBaohe
19a6de328f Correct docstring of vision/init functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17351

Differential Revision: D14276355

Pulled By: soumith

fbshipit-source-id: 9b572b6a04eeb1e44cd93961edac76ed10f7b24e
2019-03-01 11:40:23 -08:00
David Riazati
48943c3b7a Update Upsample docs to match nn.interpolate
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17134

Reviewed By: ezyang

Differential Revision: D14095694

Pulled By: driazati

fbshipit-source-id: 79afec9ddd50b3b8ce39acf98c2543cf1a3d1127
2019-02-15 06:38:41 -08:00
David Riazati
59d71b9664 Bicubic interpolation for nn.functional.interpolate (#9849)
Summary:
Addresses #918; interpolation results should be similar to TensorFlow's.

* Adds bicubic interpolation operator to `nn.functional.interpolate`
* Corresponding test in `test_nn.py`

The operator is added in legacy `TH` to be aligned with the other upsampling operators; they can be refactored/moved to ATen all at once when #10482 is resolved
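A short usage sketch of the new mode:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
y = F.interpolate(x, scale_factor=2, mode="bicubic", align_corners=False)
print(y.shape)  # torch.Size([1, 3, 16, 16])
```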
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9849

Differential Revision: D9007525

Pulled By: driazati

fbshipit-source-id: 93ef49a34ce4e5ffd4bda94cd9a6ddc939f0a4cc
2018-12-17 15:31:48 -08:00
Elias Ellison
862b8cae51 interpolate (#14123)
Summary:
Add support for interpolate and upsampling in weak_script mode.

Because the function parameters are overloaded, I had to add it as a builtin op. For interpolate,
size can be `int?` or `int[]?`, and scale_factor can be `float?` or `float[]?`. Every combination of the two parameters needs to be supported (see the sketch below).

The same logic applies for upsample_nearest, upsample_bilinear, and upsample.
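A sketch of the combinations that must resolve (all valid calls today):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
F.interpolate(x, size=16)                   # int
F.interpolate(x, size=(16, 12))             # int[]
F.interpolate(x, scale_factor=2.0)          # float
F.interpolate(x, scale_factor=(2.0, 1.5))   # float[]
```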

There are a few fixes that I made along the way.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14123

Differential Revision: D13278923

Pulled By: eellison

fbshipit-source-id: e59729034369be4ce4b747291a3d1c74e135b869
2018-12-04 00:01:43 -08:00
Ailing Zhang
f09054f8d0 Remove deprecate warning for Upsampling (#11568)
Summary:
Fixes #11452.

Based on the discussion with SsnL and soumith, we want to bring back Upsample as a module instead of introducing a new nn.interpolate module for now. If anyone wants to downsample, they should use `nn.functional.interpolate` instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11568

Differential Revision: D9804359

Pulled By: ailzhang

fbshipit-source-id: 2b232d55fc83c2b581bf336f1ee8d1cf1c1159ca
2018-09-14 17:54:48 -07:00
Rob Kunkle
6e85112f12 Adding katex rendering of equations, and required edits to equations. (#8848)
Summary:
This fixes issue #8529.

- Adds Katex extension to conf.py and requirements.txt
- Fixes syntax differences in docs
- Should allow documentation pages to render faster
Pull Request resolved: https://github.com/pytorch/pytorch/pull/8848

Reviewed By: soumith

Differential Revision: D8677702

Pulled By: goodlux

fbshipit-source-id: c4a832c5879e0eebcb14763b35a41663331ba23f
2018-08-02 12:25:17 -07:00
Ailing Zhang
227c8f2654 Implement nn.functional.interpolate based on upsample. (#8591)
Summary:
This PR addresses #5823.

* Fix docstring: upsample doesn't support LongTensor.

* Enable float-scale up- and downsampling for linear/bilinear/trilinear modes (following SsnL's commit).

* Enable float-scale up- and downsampling for nearest mode. Note that our implementation differs slightly from TF in that there is no "align_corners" concept in this mode.

* Add a new interpolate function API to replace upsample, and add a deprecation warning for upsample.

* Add an area mode, which is essentially adaptive average pooling, for resizing images (see the sketch after this list).

* Add test cases for interpolate in test_nn.py.

* Add a few comments to help understand the *linear interpolation code.

* Only the "*cubic" mode is missing from the resize_images API, and it's pretty useful in practice; it's labeled as a hackamonth task in #1552. I discussed with SsnL that we probably want to implement all new ops in ATen instead of THNN/THCUNN. Depending on the priority, I could either put it in my queue or leave it for a HAMer.

* After the change, the files named *Upsampling*.c work for both up- and downsampling. I could rename the files if needed.
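A short usage sketch of the area mode:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
y = F.interpolate(x, size=(4, 4), mode="area")  # behaves like adaptive avg pooling
print(y.shape)  # torch.Size([1, 3, 4, 4])
```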

Differential Revision: D8729635

Pulled By: ailzhang

fbshipit-source-id: a98dc5e1f587fce17606b5764db695366a6bb56b
2018-07-06 15:28:11 -07:00
li-roy
d564ecb4a5 Update docs with new tensor repr (#6454)
* Update docs with new tensor repr

* remove cuda in dtype

* remove changes to gloo submodule

* [docs] document tensor.new_* ctor

* [docs] Add docs for tensor.to(), tensor.float(), etc

* [docs] Moar examples for docs.

* [docs] Warning for tensor ctor copy behavior

* Quick fix

* [docs] Document requires_grad_()

* [docs] Add example for requires_grad_()

* update slogdet and *fft

* update tensor rst

* small fixes

* update some docs

* additional doc changes

* update torch and tensor docs

* finish changing tensor docs

* fix flake8

* slogdet with negative det

* Update functional.py tensor ctors

* Fix nll_loss docs

* reorder to move device up

* torch.LongTensor -> torch.tensor or torch.empty in docs

* update tensor constructors in docs

* change tensor constructors

* change constructors

* change more Tensor() to tensor()

* Show requires_grads_ docs

* Fix set_default_dtype docs

* Link to torch.no_grad, etc, from torch doc

* Add dtype aliases to table

* regen docs again

* Tensor attributes stub page

* link to inplace sampling

* Link torch.dtype, device, and layout

* fix dots after nonfinite floats

* better layout docs
2018-04-21 07:35:37 -04:00
Kaiyu Shi
605307f8f3 Add support for printing extra information in Module and refactor redundant codes (#5936)
This PR enables users to print extra information about their subclassed nn.Module.
For now I simply insert the user-defined string at the end of the module name, which should be discussed in this PR.

Before this PR, users had to redefine __repr__ and copy-paste the source code from Module.
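As the hook eventually landed (the PR initially named it get_extra_repr), a subclass overrides extra_repr; a minimal sketch:

```python
from torch import nn

class Affine(nn.Module):
    """Hypothetical module using the extra_repr hook introduced here."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features

    def extra_repr(self) -> str:
        # nn.Module inserts this string into repr(module) automatically.
        return f"in_features={self.in_features}, out_features={self.out_features}"

print(Affine(3, 4))  # Affine(in_features=3, out_features=4)
```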

* Add support for extra information on Module

* Rewrite the repr method of Module

* Fix flake8

* Change the __repr__ to get_extra_repr in Linear

* Fix extra new-line for empty line

* Add test for __repr__ method

* Fix bug of block string indent

* Add indent for multi-line repr test.

* Address review comments

* Update tutorial for creating nn.Module

* Fix flake8, add extra_repr of bilinear

* Refactor DropoutNd

* Change to extra_repr in some Modules

* Fix flake8

* Refactor padding modules

* Refactor pooling module

* Fix typo

* Change to extra_repr

* Fix bug for GroupNorm

* Fix bug for LayerNorm
2018-04-02 13:52:33 -04:00
Tongzhou Wang
39829c1670 Improve docs (#5999)
* Clarify det and svd doc on when backward is not stable

* Fix some links in nn.functional doc; improve upsampling doc
2018-03-26 14:09:11 -04:00
Tongzhou Wang
5d77709485 Linearly interpolating upsampling fix (#5927)
* Changes in bilinear upsampling

* Add an align_corners option to the upsampling module & functional for linearly interpolating modes (see the sketch after this list).
When align_corners=True, the old original upsampling scheme is used, which gives visually better results
but doesn't properly align input and output pixels, and thus causes the output to vary depending on the input size.
This PR adds the align_corners option and changes the default behavior to align_corners=False, with a
proper warning if the option is not specified when using nn.Upsample or nn.functional.upsample, to make users
aware of this change.
Adds tests in test_nn.py for spatial invariance when align_corners=False, and the usual module tests for
align_corners=False.

* remove redundant checks and unnecessary variables; fix the cast

* fix negative indices
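A short sketch of the behavioral difference (hedged usage example, in today's API):

```python
import torch
import torch.nn.functional as F

x = torch.arange(16.0).reshape(1, 1, 4, 4)
a = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)   # old scheme
b = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)  # new default
print(torch.equal(a, b))  # False: the two schemes sample at different positions
```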
2018-03-24 12:21:13 -04:00
Vishwak Srinivasan
76a283db40 [ready] General Documentation Improvements - 2 (#5685)
* Fix some minor errors in existing docs.

* Fix Convolution and Pooling docs in torch.nn.functional

* Cleaned up torch.nn.functional docs

* Address @SsnL 's comments

* Add multiplication sign missing in docs

* Fix more typos, and clear some warnings

* Change infinity symbol in LPPool2d

* Revert some changes in torch.nn.functional

* Few more minor changes
2018-03-13 09:47:43 -04:00
Richard Zou
4e190c2fed Fix floor latex rendering (#5682)
* Make floors larger

* Improve Latex rendering of floor

* Improve latex rendering of ceil

* Fix flake8
2018-03-09 23:53:14 -05:00
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formulae for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol changes in ge, le, ne, isnan

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Complete scanning till "Other Operations" in Tensor docs

* Add more changes
1. Modify all torch.Tensor wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add explicit clean to Makefile in docs to prevent running graph generation script
while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument, clear up prints in the build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
Tongzhou Wang
27265503ad nn.* doc update after Variable/Tensor merge (#5459)
The nn.* counterpart of #5443. Mostly removed the Variable wrapper. Also added docs for nn.RReLU.

Note that torch.randn(*, requires_grad=True) isn't documented until #5462 is done.
2018-03-01 18:11:39 -05:00
SsnL
de1f4e69dd raw text (#3327) 2017-10-28 01:24:02 +05:30
Luca Antiga
c580352aee Adding 1d upsampling (#2846) 2017-09-24 16:50:24 -04:00
yunjey
ea607afd06 Add comments in nn.Upsample (#2175) 2017-07-21 14:34:58 -04:00
Soumith Chintala
2a49353d5e minor fix for docs of Upsample 2017-06-07 11:42:52 -04:00
Luca Antiga
b9ab26765e Add 3D upsampling (nearest and trilinear) with tests 2017-06-07 11:29:27 -04:00
andrew giessel
2e7635b929 Add flexible bilinear upsampling aspect ratio redux (#1317) 2017-05-03 08:46:28 -04:00
ngimel
97a82a3018 fix formatting in upsampling docs (#1067) 2017-03-22 18:06:31 -04:00
Eli Stevens
b87c113cf4 CUDA documentation enhancement and docs versioning (#848)
* Add more detail to CUDA documentation

Also adds better cross-linking to the pages that discuss relevant topics.

* Adds recommendation to torch.save docs

* Make the version numbers for the docs dynamic

Might need tweaks for beta, 1.0, etc.
2017-02-26 08:33:26 -05:00
Soumith Chintala
d4c9a3782b billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix (#617)
2017-01-30 05:08:48 +05:30
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
f8d4f980b3 Add upsampling modules and functions 2017-01-24 17:30:50 -05:00
Adam Paszke
fb39971464 Add more modules to nn 2016-09-14 11:05:56 -07:00