Commit Graph

318 Commits

Author SHA1 Message Date
soulitzer
6b521bbf35 Prevent module full_backward_hook from erroring in double backward (#88357)
Also clarifies documentation to say "execute if and only if gradients wrt outputs are computed" (previously, "execute every time gradients wrt inputs are computed")

See https://docs.google.com/document/d/1tFZKYdsSzRBJ7Di7SWt8X8fSg-E3eiUPwomMF10UyhM/edit for more details regarding the question: 'should module full_backward_hooks be called every time the gradients wrt module inputs are computed, or should module full_backward_hooks only be called when the "backward for the module" has been computed?'

Fixes https://github.com/pytorch/pytorch/issues/88312

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88357
Approved by: https://github.com/albanD
2022-11-16 19:27:30 +00:00
Samantha Andow
87238e6491 [nn] add remove_duplicate flag to named_parameters (#759) (#88090)
Summary:
X-link: https://github.com/pytorch/torchrec/pull/759

Since the `remove_duplicate` flag was added to `named_buffers` in D39493161 (c12f829cce), this adds the same flag to `named_parameters`.

Test Plan:
python test/test_nn.py -k test_buffers_and_named_buffers

OSS Tests

Differential Revision: D40801899

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88090
Approved by: https://github.com/albanD
2022-11-09 00:09:20 +00:00
Kazuaki Ishizaki
2ddefbdc3c Fix typos used in documents under torch directory (#88300)
This PR fixes typos in comments of Python files that were found via the search box at https://pytorch.org/docs/master/search.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88300
Approved by: https://github.com/lezcano
2022-11-02 09:38:13 +00:00
Shen Li
82698b8954 Add prepend argument to nn.Module hooks (#87370)
cc @ezyang @gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87370
Approved by: https://github.com/soulitzer
2022-10-25 19:18:04 +00:00
Antonio Kim
6b59d9b566 Fix registration hooks (#87369)
There is a bug in the implementation of the registration hooks introduced in https://github.com/pytorch/pytorch/pull/86148 whereby, if the hook returns a tensor, the short-circuiting logic:
```
value = hook(self, name, value) or value
```
Raises an exception
```
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
```
The fix changes the logic to check only whether the returned value is `None` before overriding.
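The corrected dispatch can be sketched without torch (names here are illustrative, not the PR's actual helpers): an explicit `is None` check never forces the hook's return value into a bool, so a multi-element tensor passes through safely.

```python
class FakeTensor:
    """Stand-in for torch.Tensor: truthiness of a multi-element tensor is ambiguous."""
    def __bool__(self):
        raise RuntimeError(
            "Boolean value of Tensor with more than one value is ambiguous"
        )

def apply_registration_hook(hook, module, name, value):
    # Buggy version: `hook(module, name, value) or value` evaluates the
    # hook's return value as a bool, which raises for tensors like the above.
    # Fixed version: replace `value` only when the hook returns non-None;
    # `is None` is an identity check and never calls __bool__.
    out = hook(module, name, value)
    return value if out is None else out
```

With a hook that returns a `FakeTensor`, the `or`-based short circuit would raise, while the explicit `None` check passes the tensor through untouched.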

Fixes #85837

CC: @albanD @jbschlosser
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87369
Approved by: https://github.com/albanD
2022-10-21 05:12:25 +00:00
Kshiteej K
54ee95c8ec [nn] module: full_backward_pre_hook (#86700)
Fixes https://github.com/pytorch/pytorch/issues/42824

* [x] Test
* [x] Doc
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86700
Approved by: https://github.com/soulitzer
2022-10-13 17:36:39 +00:00
Antonio Kim
09a676f639 Add hooks for register_buffer/module/parameter (#86148)
As described in the issue, this PR adds hooks to be run when `register_parameter`, `register_buffer` and `register_module` are called.

Fixes #85837

cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86148
Approved by: https://github.com/albanD
2022-10-12 20:57:22 +00:00
Jerry Zhang
c12f829cce [nn] Add remove_duplicate flag to named_buffers (#674) (#85903)
Summary:
X-link: https://github.com/pytorch/torchrec/pull/674

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84984

This allows `named_buffers` to return the same buffer object multiple times under different names, which is needed by internal use cases.
ghstack-source-id: 168589597

Test Plan:
python test/test_nn.py -k test_buffers_and_named_buffers

Imported from OSS

Reviewed By: albanD

Differential Revision: D39493161

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85903
Approved by: https://github.com/albanD
2022-10-11 18:49:09 +00:00
Weiyi Zheng
b2311192e6 [NN module] speed up _load_from_state_dict (#85743)
Fixes #61398

The original implementation is very slow when `state_dict` contains many keys. This PR passes only the relevant keys down to each child module.

existing test passes: `pytest test/test_nn.py -k state_dict`
I couldn't figure out a good way to write a new test for this behavior: I had a snippet, but it would be flaky in the main CI because it is a timing-based check.
I did verify, though, that the test below took 30s to run before this PR and only 0.5s after.

```python
    def test_load_state_dict_large(self):
        # Build 4 levels of nested Sequentials with fan-out 10, giving
        # 10**4 Linear modules and 20k items in the state_dict.
        import copy
        import time
        base_module = nn.Linear(1, 1)
        model = base_module
        for level in range(4):
            model = nn.Sequential(*[copy.deepcopy(model) for _ in range(10)])
        state_dict = model.state_dict()
        self.assertEqual(len(state_dict), 20000)
        st = time.time()
        model.load_state_dict(state_dict, strict=True)
        strict_load_time = time.time() - st
        # ~0.5s after this PR (was ~30s before); generous bound for CI noise
        self.assertLess(strict_load_time, 10)
```
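The core of the speedup can be sketched as a prefix filter (a hypothetical helper; the real change lives in how `Module._load_from_state_dict` recursion hands keys to children): each child only ever sees the entries under its own prefix, so per-child work no longer scales with the full key set.

```python
def keys_for_child(state_dict, prefix):
    # Keep only entries belonging to the child at `prefix`, stripping
    # the prefix, e.g. child "0." sees {"weight": ...} for "0.weight".
    plen = len(prefix)
    return {k[plen:]: v for k, v in state_dict.items() if k.startswith(prefix)}
```

With 20k keys and 10 children per level, each child now receives roughly a tenth of its parent's keys instead of the whole dictionary.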
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85743
Approved by: https://github.com/albanD
2022-09-28 15:26:03 +00:00
joncrall
b136f3f310 More doctest refinements. (#83317)
Follow up to #82797

Now that the doctests themselves are in a better state, we should be able to enable xdoctest on the CI so they stay that way.

@ezyang @vadimkantorov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83317
Approved by: https://github.com/ezyang
2022-08-22 20:07:26 +00:00
Ian Barber
76d5699e13 Fix use-generator lint warnings in module.py (#83700)
% pylint --disable=all --enable=R1729 torch/nn/modules/module.py
Verified in pylint 2.14.5

--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83700
Approved by: https://github.com/kit1980, https://github.com/albanD
2022-08-19 02:51:44 +00:00
joncrall
4618371da5 Integrate xdoctest - Rebased (#82797)
This is a new version of #15648 based on the latest master branch.

Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.

In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed, 201 passed tests. The next commits will be to disable those tests. (unfortunately I don't have a tool that will insert the `#xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)
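A failing doctest would be marked with an inline directive like the following (schematic only; the exact doctests touched are in the PR itself, and `unavailable_input` is a made-up name standing for whatever makes the example fail):

```python
def scale(x, factor=2):
    """Multiply ``x`` by ``factor``.

    Example:
        >>> scale(3)
        6
        >>> scale(unavailable_input)  # xdoctest: +SKIP
    """
    return x * factor
```

xdoctest reads the `+SKIP` directive from the comment and leaves that single statement unexecuted while still running the rest of the docstring.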

Fixes https://github.com/pytorch/pytorch/issues/71105

@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
2022-08-12 02:08:01 +00:00
Alfredo Canziani
76953beee3 Update state_dict docs (#83104)
- Better definition of `state_dict`
- Additional shallow copy warning

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83104
Approved by: https://github.com/jbschlosser
2022-08-10 22:24:10 +00:00
ProGamerGov
357b7d589c Fix docstring inconsistencies: string -> str, boolean -> bool (#82410)
### Description

Throughout the PyTorch docs and codebase, the `string` type in docstrings is referred to by two separate names. This leads to inconsistent docs, as you can see here: https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d

This PR fixes the issue by ensuring that all mentions of the string type in docstrings use the same format that Sphinx generates hyperlinks for.

### Testing
No testing should be required for this change

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82410
Approved by: https://github.com/jbschlosser
2022-07-28 21:29:57 +00:00
otaj
db52e4b7d9 Bugfix/weakref (#80139)
Fixes #78580

I'm back! :)

cc @albanD

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80139
Approved by: https://github.com/albanD
2022-06-28 14:51:42 +00:00
PyTorch MergeBot
9db3c517de Add __all__ for torch.nn.modules, torch.distributed.elastic, torch.nn.utils submodules (#80240)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80240
Approved by: https://github.com/rohan-varma
2022-06-27 17:11:12 +00:00
Saketh Are
817eb94ff4 Speed up module constructor by avoiding module.__setattr__ (#77098)
Module's overridden `__setattr__` has special handling for parameters, submodules, and buffers, which reduces performance for its default attributes (which are none of those types). Setting them directly yields a significant improvement (~10x) in module construction speed.
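The idea can be sketched in plain Python (a hypothetical class, not the PR's code): `object.__setattr__` writes straight into the instance `__dict__`, bypassing the overridden dispatch for attributes known not to be parameters, buffers, or submodules.

```python
class Container:
    def __setattr__(self, name, value):
        # Stand-in for nn.Module.__setattr__: extra bookkeeping runs on
        # every ordinary attribute assignment.
        self.__dict__.setdefault("_assign_count", 0)
        self.__dict__["_assign_count"] += 1
        super().__setattr__(name, value)

    def __init__(self):
        # Bypass the overridden __setattr__ for known-plain default
        # attributes; object.__setattr__ skips the custom dispatch entirely.
        object.__setattr__(self, "training", True)
        object.__setattr__(self, "_buffers", {})
```

Constructing `Container()` never triggers the bookkeeping, while normal assignments after construction still go through the override.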
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77098
Approved by: https://github.com/jbschlosser
2022-06-23 17:15:45 +00:00
Peter Bell
7843a5e882 Move Tensor.grad back into C++
`Tensor.grad` was moved to Python in #30531 to add a warning. However,
that warning has since been lowered into C++ so this wrapper is no
longer necessary.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76675

Approved by: https://github.com/albanD
2022-06-10 13:44:45 +00:00
PyTorch MergeBot
d8b80edade Revert "Use weakref.proxy when saving module to internal dictionaries to not increase refcount (#76435)"
This reverts commit 1aa3cbb83b.

Reverted https://github.com/pytorch/pytorch/pull/76435 on behalf of https://github.com/jbschlosser
2022-05-17 17:51:26 +00:00
Rohan Varma
a275491c6f [Reland] load_state_dict post hook (#77392)
Reland of https://github.com/pytorch/pytorch/pull/76823 with fixes to call `__setstate__` for softmax/softmin/logsoftmax as per discussion with @albanD and @jbschlosser. Original description:

Implements `register_load_state_dict_post_hook` API as discussed in https://github.com/pytorch/pytorch/issues/75287.

Unittests cover:
- Ensuring hooks are called with the correct module
- Hook is called with `IncompatibleKeys` field
- If hook modifies this, load_state_dict returns the modified result
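A hook of this shape can be sketched without torch (using a namedtuple stand-in for the incompatible-keys object the real API passes; the hook name and prefix are illustrative): the hook mutates the key lists in place, and `load_state_dict` then returns the modified result.

```python
from collections import namedtuple

# Stand-in for the incompatible-keys result object
IncompatibleKeys = namedtuple("IncompatibleKeys", ["missing_keys", "unexpected_keys"])

def ignore_head_keys(module, incompatible_keys):
    # Hypothetical post hook: drop missing keys under the "head." prefix
    # so callers treat a partially loaded backbone as a clean load.
    incompatible_keys.missing_keys[:] = [
        k for k in incompatible_keys.missing_keys if not k.startswith("head.")
    ]

keys = IncompatibleKeys(missing_keys=["head.weight", "body.weight"], unexpected_keys=[])
ignore_head_keys(None, keys)
```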

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77392
Approved by: https://github.com/jbschlosser
2022-05-14 06:06:23 +00:00
PyTorch MergeBot
d92b0a51aa Revert "Load state dict post hook"
This reverts commit 56bed0dcfe.

Reverted https://github.com/pytorch/pytorch/pull/76823 on behalf of https://github.com/rohan-varma
2022-05-12 21:00:49 +00:00
otaj
1aa3cbb83b Use weakref.proxy when saving module to internal dictionaries to not increase refcount (#76435)
Fixes #76434

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76435
Approved by: https://github.com/jbschlosser
2022-05-11 18:40:59 +00:00
neverix
87e543da9b Add load_state_dict error message for non-dicts (#77197)
Fixes #76886
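The guard amounts to an early type check (a sketch; the real message text and placement are in the PR): failing fast with a descriptive error beats an opaque failure deep inside the load loop.

```python
from collections.abc import Mapping

def check_state_dict(state_dict):
    # Reject non-dict inputs up front with an actionable message.
    if not isinstance(state_dict, Mapping):
        raise TypeError(
            f"Expected state_dict to be dict-like, got {type(state_dict)}."
        )
```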
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77197
Approved by: https://github.com/jbschlosser
2022-05-10 22:11:51 +00:00
Rohan Varma
56bed0dcfe Load state dict post hook
Implements `register_load_state_dict_post_hook` API as discussed in https://github.com/pytorch/pytorch/issues/75287.

Unittests cover:
- Ensuring hooks are called with the correct module
- Hook is called with `IncompatibleKeys` field
- If hook modifies this, load_state_dict returns the modified result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76823
Approved by: https://github.com/albanD
2022-05-05 19:27:05 +00:00
lkct
b8776e143f Fix false DeprecationWarning in Module.state_dict
Fixes #75404

TODO:
- [x] add tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75507
Approved by: https://github.com/jbschlosser
2022-05-04 20:08:23 +00:00
Michael Suo
fb0f285638 [lint] upgrade mypy to latest version
Fixes https://github.com/pytorch/pytorch/issues/75927.

Had to fix some bugs and add some ignores.

To check if clean:
```
lintrunner --paths-cmd='git grep -Il .' --take MYPY,MYPYSTRICT
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76753

Approved by: https://github.com/malfet
2022-05-03 20:51:34 +00:00
PyTorch MergeBot
3d7428d9ac Revert "[lint] upgrade mypy to latest version"
This reverts commit 9bf18aab94.

Reverted https://github.com/pytorch/pytorch/pull/76753 on behalf of https://github.com/suo
2022-05-03 20:01:18 +00:00
Michael Suo
9bf18aab94 [lint] upgrade mypy to latest version
Fixes https://github.com/pytorch/pytorch/issues/75927.

Had to fix some bugs and add some ignores.

To check if clean:
```
lintrunner --paths-cmd='git grep -Il .' --take MYPY,MYPYSTRICT
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76753

Approved by: https://github.com/malfet
2022-05-03 19:43:28 +00:00
lkct
9fae0762b0 fix typing in Module.state_dict and load_state_dict
Fixes #72707

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73483
Approved by: https://github.com/albanD, https://github.com/jbschlosser
2022-05-02 17:27:54 +00:00
Zachary Ikpefua
43cc726c22 updated _forward_unim. to include descriptive error
Fixes #74303

Added error description for an unimplemented forward function.

_Using torch summary to test the functionality_
Before:
![image](https://user-images.githubusercontent.com/34219451/161395955-39947ea0-3664-41b6-9ed7-0af58c3c8901.png)

After:
![image](https://user-images.githubusercontent.com/34219451/161395910-71d2078d-268a-4b1f-88af-33dba6dff6a7.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75148
Approved by: https://github.com/albanD
2022-04-08 20:22:47 +00:00
Anthony Barbier
ce9e27a0fc Add new keys for Graphcore IPU (DispatchKey / Backend / DeviceType)
We need a key to register our out of tree backend: https://github.com/graphcore/poptorch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74763
Approved by: https://github.com/bdhirsh
2022-04-07 17:18:45 +00:00
Nikita Shulga
21a82fb519 Make torch.nn importable on Python-3.7.0
As `typing.OrderedDict` was only introduced in Python 3.7.2+, see
https://docs.python.org/3.10/library/typing.html#typing.OrderedDict
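One way to keep 3.7.0 working is a guarded import (a sketch; the PR's actual fix may differ): `collections.OrderedDict` is a runtime-equivalent fallback when the `typing` alias is absent.

```python
try:
    from typing import OrderedDict  # only present on Python >= 3.7.2
except ImportError:
    from collections import OrderedDict  # runtime-equivalent fallback
```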

Fixes #74087

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74211
Approved by: https://github.com/suo, https://github.com/janeyx99
2022-03-15 06:12:42 +00:00
Eli Uriegas
97ade8c64c Back out "fix: nn.Module allowing for expected Mixin MRO"
Summary:
Original commit changeset: 54741d983477

Original Phabricator Diff: D34822179

(Note: this ignores all push blocking failures!)

Test Plan: good_testplan

Reviewed By: osalpekar

Differential Revision: D34866030

fbshipit-source-id: 852182b90634873e51634228c33adbb72c5111c7
(cherry picked from commit 28f9a000c06f9c8e6477f369407036c9e54aec27)
2022-03-14 17:39:24 +00:00
MattiaSarti
1ac519e6b5 fix: nn.Module allowing for expected Mixin MRO
## Description

This pull request solves #74036

## How Functionality Changes Were Tested

Running:
```
from torch import nn

class A:
    def __init__(self):
        super().__init__()
        self.a = True

class B(A, nn.Module):
    def __init__(self):
        super().__init__()
        self.b = True

class C(nn.Module, A):
    def __init__(self):
        super().__init__()
        self.c = True

b = B()
c = C()

print(b.b)
print(b.a)

print(c.c)
print(c.a)
```

- ### Results - Before:
  ```
  >>> from torch import nn
  >>>
  >>>
  >>> class A:
  ...     def __init__(self):
  ...         super().__init__()
  ...         self.a = True
  ...
  >>>
  >>> class B(A, nn.Module):
  ...     def __init__(self):
  ...         super().__init__()
  ...         self.b = True
  ...
  >>>
  >>> class C(nn.Module, A):
  ...     def __init__(self):
  ...         super().__init__()
  ...         self.c = True
  ...
  >>>
  >>> b = B()
  >>> c = C()
  >>>
  >>> print(b.b)
  True
  >>> print(b.a)
  True
  >>>
  >>> print(c.c)
  True
  >>> print(c.a)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/pytorch/torch/nn/modules/module.py", line 1188, in __getattr__
      raise AttributeError("'{}' object has no attribute '{}'".format(
  AttributeError: 'C' object has no attribute 'a'
  ```

- ### Results - After:
  ```
  >>> from torch import nn
  >>>
  >>>
  >>> class A:
  ...     def __init__(self):
  ...         super().__init__()
  ...         self.a = True
  ...
  >>>
  >>> class B(A, nn.Module):
  ...     def __init__(self):
  ...         super().__init__()
  ...         self.b = True
  ...
  >>>
  >>> class C(nn.Module, A):
  ...     def __init__(self):
  ...         super().__init__()
  ...         self.c = True
  ...
  >>>
  >>> b = B()
  >>> c = C()
  >>>
  >>> print(b.b)
  True
  >>> print(b.a)
  True
  >>>
  >>> print(c.c)
  True
  >>> print(c.a)
  True
  ```
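The before/after difference comes down to whether the MRO chain keeps going past `nn.Module`; a plain-Python sketch of the cooperative-constructor pattern the fix relies on (class names here are illustrative):

```python
class ModuleLike:
    """Stand-in for nn.Module with a cooperative constructor."""
    def __init__(self):
        super().__init__()  # keep walking the MRO so later mixins still run
        self.training = True

class Mixin:
    def __init__(self):
        super().__init__()
        self.a = True

class C(ModuleLike, Mixin):
    def __init__(self):
        super().__init__()
        self.c = True
```

With `C`'s MRO being `C -> ModuleLike -> Mixin -> object`, the `super().__init__()` inside `ModuleLike` is what allows `Mixin.__init__` to run and set `a`; without it, `C().a` raises `AttributeError` exactly as in the "Before" transcript above.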

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74096
Approved by: https://github.com/albanD
2022-03-11 15:32:05 +00:00
Thiago Crepaldi
3c45fc8e20 Fix URL for creating github issues
Fixes minor typos in the URL that lets users create new issues on PyTorch's GitHub issues page
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73411
2022-02-25 16:19:03 +00:00
lkct
9d6639abcd Fix nn.Module.state_dict() (#72780)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/72778

TODO
- [x] Waiting for a conclusion from discussion in the issue.
- [x] Still bugs in handling misplaced args. Need a re-design to cover all corner cases.

TODO changes
- [x] Put deprecated signature to the second.
- [x] Change to kwargs, positional deprecated
- [x] `DeprecationWarning` add comment on why not use it
- [x] Remove unnecessary comments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72780

Reviewed By: george-qi

Differential Revision: D34398656

Pulled By: albanD

fbshipit-source-id: e8f2708e3dfd925ff354e098a66905f9775f4e0a
(cherry picked from commit 7f8eaf05fc48b333d22a07af57a7024b8b9ec6bf)
2022-02-23 22:32:06 +00:00
vfdev
af3ca50291 Fixed docstring typo for nn.Module.get_submodule (#73018)
Summary:
Description:
- Fixed docstring typo for nn.Module.get_submodule

otherwise output is invisible: https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.get_submodule

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73018

Reviewed By: davidberard98

Differential Revision: D34310091

Pulled By: jbschlosser

fbshipit-source-id: e35aef2b7479bdd81fb6b7ddd203bd71798769e1
(cherry picked from commit e4944e1f8e)
2022-02-17 22:40:18 +00:00
lkct
352eeb2ef9 doc fix nn.Module: docstring should come after class variable (#72912)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/72862

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72912

Reviewed By: cpuhrsch

Differential Revision: D34286017

Pulled By: jbschlosser

fbshipit-source-id: d172f7600e7f66c30187996ee42c72bf273643cc
(cherry picked from commit d9f9b5b418)
2022-02-16 23:10:38 +00:00
Brian Muse
8bf3179f6e #71946 Remove Python 3.6 references (#72211)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/71946

This commit removes some bits of code that were hard coded for Python 3.6 support from the `.circleci` and `torch` folders. It should only be merged if https://github.com/pytorch/pytorch/issues/66462 is complete.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72211

Reviewed By: dagitses, seemethere

Differential Revision: D33982604

Pulled By: musebc

fbshipit-source-id: 8f453bf9909df615addd59538adb369c65484044
(cherry picked from commit 944a9970fe)
2022-02-08 03:46:20 +00:00
Horace He
7cdbbfaee2 Revert D33716716: [pytorch][PR] Added remove_duplicate parameter to nn.Module
Test Plan: revert-hammer

Differential Revision:
D33716716 (7e8217549f)

Original commit changeset: ff1ed9980bd1

Original Phabricator Diff: D33716716 (7e8217549f)

fbshipit-source-id: 91c3d9acc5bc731da716dd0d2485431f85f861c9
(cherry picked from commit c81d193bf0)
2022-02-03 09:04:29 +00:00
Horace He
7e8217549f Added remove_duplicate parameter to nn.Module (#39)
Summary:
Pull Request resolved: https://github.com/pytorch/torchrec/pull/39

Pull Request resolved: https://github.com/facebookresearch/torchrec/pull/6

This makes it so that shared parameters get their own entry in `named_parameters`.

More broadly, this makes it so that
```
params_and_buffers = {
    **dict(mod.named_parameters(remove_duplicate=False)),
    **dict(mod.named_buffers(remove_duplicate=False)),
}
_stateless.functional_call(mod, params_and_buffers, args, kwargs)
```
is identical to calling the original module's forward pass.
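The flag's effect can be sketched with a plain generator (illustrative only; the real logic lives inside the module's member iteration): deduplication keys on object identity, so a shared parameter collapses to one entry unless `remove_duplicate=False`.

```python
def named_members(members, remove_duplicate=True):
    # members: iterable of (name, obj) pairs; the same object can appear
    # under several names when modules alias a shared parameter.
    seen = set()
    for name, obj in members:
        if remove_duplicate:
            if id(obj) in seen:
                continue
            seen.add(id(obj))
        yield name, obj
```

With deduplication on, only the first name under which a shared object appears is yielded; with it off, every aliased name gets its own entry, which is what `functional_call` needs to reproduce the original forward pass.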

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71542

Reviewed By: jbschlosser, albanD

Differential Revision: D33716716

Pulled By: Chillee

fbshipit-source-id: ff1ed9980bd1a3f7ebaf695ee5e401202b543213
(cherry picked from commit d6e3ad3cd0)
2022-02-01 18:34:58 +00:00
kshitij12345
0a2cdd18f3 nice error msg from load_state_dict for non-tensor value (#70596)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/67549

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70596

Reviewed By: anjali411

Differential Revision: D33710750

Pulled By: jbschlosser

fbshipit-source-id: 870b5fafffcd005fd4fcd62f865542739c133805
(cherry picked from commit da374fbc58)
2022-01-21 22:02:13 +00:00
Ilya Persky
0460324b9b Fix docs rendering for nn.Module.named_modules() (#70491)
Summary:
The documentation rendering for nn.Module.named_modules() is a bit broken, see the description of the last argument [here](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.named_modules).

This PR fixes that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70491

Reviewed By: mikaylagawarecki

Differential Revision: D33349882

Pulled By: albanD

fbshipit-source-id: a46327c12e8114f7ef2055a8518c4ca9d186e669
2021-12-29 10:08:53 -08:00
Mikayla Gawarecki
3fe2ff800c Module docs update (#66909)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/37824

{F671745341}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66909

Reviewed By: anjali411

Differential Revision: D31782046

Pulled By: mikaylagawarecki

fbshipit-source-id: 009d2ea3c8a51a89786ef55bb9e88dc53aa8360f
2021-10-20 08:14:36 -07:00
Jack Kelly
7191dd2613 Update Module docstring for Python 3 (#65748)
Summary:
In Python 3, we can call `super()` without any arguments.

If I understand correctly, Python 2 is no longer supported by PyTorch, so we can change the documentation to be Python-3 only :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65748

Reviewed By: saketh-are

Differential Revision: D31246055

Pulled By: albanD

fbshipit-source-id: 3980def1a556d4bdfa391ea61cb2a65efa20df79
2021-09-29 13:40:15 -07:00
Rodrigo Berriel
b80bdcc73b Add register_module alias to nn.Module (#65174)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60397. I'm not sure how aliases are supposed to be implemented, but this is the most basic/direct way, IMO. As a side-effect, this implementation results in a "duplicate" doc entry, inheriting the one from `add_module`:

![monkey-patch](https://user-images.githubusercontent.com/7027770/133693137-8408d8e7-1f4f-436b-b176-57dda9bc3a32.png)

An alternative implementation could be:

```python
def register_module(self, name: str, module: Optional['Module']) -> None:
    r"""Alias for :func:`add_module`."""
    self.add_module(name, module)
```

which results in this documentation:

![image](https://user-images.githubusercontent.com/7027770/133693249-d969a71a-be44-489d-9633-4f38b44ab887.png)

Questions:
1. Should I replicate the tests? There are two for `add_module`: [test_add_module_raises_error_if_attr_exists](873255c6d9/test/test_nn.py (L1420-L1434)) and [test_add_module](873255c6d9/test/test_nn.py (L1837-L1855)).
2. This PR only adds `register_module` to `nn.Module`. There is an `add_module` in [`_RemoteModule`](https://github.com/pytorch/pytorch/blob/master/torch/distributed/nn/api/remote_module.py#L311-L312), which raises `NotSupported`, and there is another one in [`ConcreteModuleTypeBuilder`](873255c6d9/torch/_C/__init__.pyi.in (L468)), which means something else, I think. Should I do anything about them?

cc ngimel SsnL

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65174

Reviewed By: soulitzer

Differential Revision: D31089717

Pulled By: jbschlosser

fbshipit-source-id: abd8d14a434fd8c7efa0bd8c242df56da33491e9
2021-09-22 16:37:28 -07:00
Joel Schlosser
544af391b5 Allow arbitrary objects in state_dicts (#62976)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62094

Introduces functionality for adding arbitrary objects to module state_dicts. To take advantage of this, the following functions can be defined on a module:
* `get_extra_state(self) -> dict` - Returns a dict defining any extra state this module wants to save
* `set_extra_state(self, state)` - Subsumes the given state within the module

In the details, a sub-dictionary is stored in the state_dict under the key `_extra_state` for each module that requires extra state.
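Without depending on torch, the contract looks like this (a hypothetical class; in a real `nn.Module` subclass these two methods are picked up automatically by `state_dict()` and `load_state_dict()`):

```python
class StatefulThing:
    """Sketch of a module carrying non-tensor state across save/load."""
    def __init__(self):
        self.step = 0

    def get_extra_state(self):
        # Return anything picklable; it is saved under the module's
        # extra-state key in the state_dict.
        return {"step": self.step}

    def set_extra_state(self, state):
        # Subsume the previously saved state back into the module.
        self.step = state["step"]
```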

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62976

Reviewed By: heitorschueroff

Differential Revision: D30518657

Pulled By: jbschlosser

fbshipit-source-id: 5fb35ab8e3d36f35e3e96dcd4498f8c917d1f386
2021-08-24 19:06:14 -07:00
Alban Desmaison
2d5b19f62b Update full backward hook doc with not-same-object note (#63245)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61446

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63245

Reviewed By: ejguan

Differential Revision: D30352656

Pulled By: albanD

fbshipit-source-id: 7000ecb54a80f2da968ec7600b98574b608578ae
2021-08-19 06:50:56 -07:00
Gary Miguel
9fdf7ec6a2 [docs] Update sphinx to 3.5.4 (#61601)
Summary:
Sphinx 4.x is out, but adopting it seems to require many more changes. So
instead, use the latest version of 3.x, which includes
several nice features.

* Add some noindex directives to deal with warnings that would otherwise
  be triggered by this change due to conflicts between the docstrings
  declaring a function and the autodoc extension declaring the
  same function.
* Update distributions.utils.lazy_property to make it look like a
  regular property when sphinx autodoc inspects classes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61601

Reviewed By: ejguan

Differential Revision: D29801876

Pulled By: albanD

fbshipit-source-id: 544d2434a15ceb77bff236e934dbd8e4dbd9d160
2021-07-30 06:23:10 -07:00
Pritam Damania
cac4aa71ca Provide option to pass module instance to _load_state_dict_pre_hooks. (#62070)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62070

We have a custom Tensor:
https://github.com/pytorch/pytorch/blob/master/torch/distributed/_sharded_tensor/api.py#L67,
which doesn't show up in state_dict for the module. This was resolved by
using the _register_state_dict_hook:
https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L1196
to parse and add custom tensors to state_dict.

However, the problem is during load time  _register_load_state_dict_pre_hook:
https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L1272,
does not pass in the module instance and as a result, a ShardedTensor in the
state_dict cannot be appropriately added to a module at load time.

To resolve this issue, in this PR I've enhanced this hook to support two
variations, one which passes in the module instance (for the problem described
above) and one is the previous version for BC reasons.
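Supporting both variants can be sketched as a dispatch on a marker set at registration time (names and mechanism are illustrative; the PR's actual implementation may differ):

```python
def run_pre_hooks(module, hooks, state_dict):
    for hook in hooks:
        if getattr(hook, "with_module", False):
            # new variant: module instance passed first, so the hook can
            # attach loaded ShardedTensors to the right module
            hook(module, state_dict)
        else:
            # legacy variant kept for backward compatibility
            hook(state_dict)
```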
ghstack-source-id: 134541391

Test Plan:
1) unit tests
2) waitforbuildbot

Reviewed By: jbschlosser

Differential Revision: D29867142

fbshipit-source-id: bcb136ff51eedd0b508cfb419e8b8a6b7d95539c
2021-07-28 19:22:47 -07:00
Thomas J. Fan
414537ac99 DOC Fixes link in register_module_backward_hook (#61999)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61580

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61999

Reviewed By: saketh-are

Differential Revision: D29847397

Pulled By: albanD

fbshipit-source-id: 3d9e1a5abac82d658b4f1746ace73e2fecb41725
2021-07-22 14:29:40 -07:00
Nikita Shulga
4e94e84f65 Type annotate torch.nn.Module ctor (#61334)
Summary:
Annotate generic types
Fix some type violations
Override `_modules` and `_parameters` in `Sequential`, `ModuleList`, `ModuleDict`, etc

Fixes https://github.com/pytorch/pytorch/issues/45497

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61334

Reviewed By: albanD

Differential Revision: D29579533

Pulled By: malfet

fbshipit-source-id: 5cd8ca918b260ca35cfdd873dee8851d39d17de2
2021-07-16 13:59:06 -07:00
Joel Schlosser
aa01a7a61c Fix for get_buffer(): check buffers by name instead of value (#61429)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61242

The previous code wrongly checked whether a tensor is a buffer of a module by comparing values; the fix compares names instead.
The docs need some updating as well; the current plan is to bump that to a separate PR, but I'm happy to do it here if preferred.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61429

Reviewed By: gchanan

Differential Revision: D29712341

Pulled By: jbschlosser

fbshipit-source-id: 41f29ab746505e60f13de42a9053a6770a3aac22
2021-07-15 09:55:09 -07:00
Nikita Shulga
f0316ec0b6 Revert D24068202: [pytorch][PR] Add typing return value to init in nn.Module
Test Plan: revert-hammer

Differential Revision:
D24068202 (506397a809)

Original commit changeset: 4cd9b6ca12b5

fbshipit-source-id: f45fcf7ee6ee9198ed6f3f34956ce68a64378c32
2021-07-06 22:15:31 -07:00
Alexander
506397a809 Add typing return value to init in nn.Module (#45654)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45497

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45654

Reviewed By: driazati

Differential Revision: D24068202

Pulled By: malfet

fbshipit-source-id: 4cd9b6ca12b531311302e3cdeeab39bc45d86c94
2021-07-06 17:09:30 -07:00
Hongbo Zhang
ad69e2fd11 [torch] Module fix on the support of LazyModule on bug #60132 (#60517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60517

This fixes `nn.Module` support for `LazyModuleMixin`, addressing bug #60132.
Check the link: https://github.com/pytorch/pytorch/issues/60132

We will also have to update `lazy_extension`, given its dependency on `module.py`, and update the unit test as well.

Test Plan:
Unit test passes

torchrec test passes

Reviewed By: albanD

Differential Revision: D29274068

fbshipit-source-id: 1c20f7f0556e08dc1941457ed20c290868346980
2021-06-25 16:20:19 -07:00
lezcano
4e347f1242 [docs] Fix backticks in docs (#60474)
Summary:
There is a very common error when writing docs: One forgets to write a matching `` ` ``, and something like ``:attr:`x`` is rendered in the docs. This PR fixes most (all?) of these errors (and a few others).

I found these running ``grep -r ">[^#<][^<]*\`"`` on the `docs/build/html/generated` folder. The regex finds an HTML tag that does not start with `#` (as python comments in example code may contain backticks) and that contains a backtick in the rendered HTML.

This regex has not given any false positive in the current codebase, so I am inclined to suggest that we should add this check to the CI. Would this be possible / reasonable / easy to do malfet ?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60474

Reviewed By: mrshenli

Differential Revision: D29309633

Pulled By: albanD

fbshipit-source-id: 9621e0e9f87590cea060dd084fa367442b6bd046
2021-06-24 06:27:41 -07:00
Thomas J. Fan
6d1b4642f0 DOC Describes parameters/buffers registered as None in load_state_dict (#60549)
Summary:
Related to https://github.com/pytorch/pytorch/issues/8104

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60549

Reviewed By: VitalyFedyunin

Differential Revision: D29343732

Pulled By: jbschlosser

fbshipit-source-id: ef5ba3094c8eaf2f9c8efeba6a9d9ab52ebf8b2c
2021-06-23 20:15:22 -07:00
Thomas J. Fan
7e032f18cf DOC Describes behavior for None in module.register_* (#60125)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45834

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60125

Reviewed By: zou3519

Differential Revision: D29196138

Pulled By: jbschlosser

fbshipit-source-id: af736c0d66005ec33412860f00b233a5d2922137
2021-06-17 19:18:23 -07:00
Thomas J. Fan
8af6281201 DOC Adds register_module_full_backward_hook into docs (#58954)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54443

Adds `register_module_full_backward_hook` into the index so it is rendered in the html docs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58954

Reviewed By: ngimel

Differential Revision: D28801816

Pulled By: jbschlosser

fbshipit-source-id: a2e737fe983e5d7e4e26d7639183bca34b571cb8
2021-06-01 15:47:10 -07:00
Thomas J. Fan
5f1117226f DOC Update register_buffer/parameter docstring explaining None (#59015)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/40977

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59015

Reviewed By: ngimel

Differential Revision: D28797948

Pulled By: jbschlosser

fbshipit-source-id: 3bf60af5c1cfc5f1786b4975b48f093391374503
2021-06-01 13:55:07 -07:00
Jeffrey Wan
a7a5992d7d Add no-grad inference mode note (#58513)
Summary:
Adds a note explaining the difference between several often conflated mechanisms in the autograd note
Also adds a link to this note from the docs in `grad_mode` and `nn.module`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58513

Reviewed By: gchanan

Differential Revision: D28651129

Pulled By: soulitzer

fbshipit-source-id: af9eb1749b641fc1b632815634eea36bf7979156
2021-05-25 13:06:54 -07:00
Thomas J. Fan
ee93a348de ENH Raises nicer error when calling module.train with invalid modes (#58247)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46763
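The added validation can be sketched without torch roughly like this (a minimal standalone class, not the real `nn.Module`; the exact error type and message are assumptions):

```python
class Module:
    """Minimal sketch of the stricter mode check (not the real nn.Module)."""
    def __init__(self):
        self.training = True

    def train(self, mode=True):
        # Reject non-bool values such as "yes" or 1 with a clear error
        # instead of silently treating them as truthy
        if not isinstance(mode, bool):
            raise ValueError("training mode is expected to be boolean")
        self.training = mode
        return self

    def eval(self):
        return self.train(False)
```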

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58247

Reviewed By: ejguan

Differential Revision: D28418080

Pulled By: albanD

fbshipit-source-id: fef8f4f641ef75e801ed8b8d04c4016579aea8b0
2021-05-17 05:57:18 -07:00
lezcano
452569dffb cfloat and cdouble functions (#58137)
Summary:
This adds the methods `Tensor.cfloat()` and `Tensor.cdouble()`.

I was not able to find the tests for the `.float()` function. I'd be happy to add similar tests for these functions once someone points me to them.

Fixes https://github.com/pytorch/pytorch/issues/56014

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58137

Reviewed By: ejguan

Differential Revision: D28412288

Pulled By: anjali411

fbshipit-source-id: ff3653cb3516bcb3d26a97b9ec3d314f1f42f83d
2021-05-13 21:13:37 -07:00
Joel Schlosser
f7fba854bf Implement module.to_empty() (#56610)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54600

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56610

Reviewed By: malfet

Differential Revision: D27921653

Pulled By: jbschlosser

fbshipit-source-id: 10734b3eaa5b84bb4ba6eeba1043cfc8bb570a17
2021-04-27 06:19:54 -07:00
Bharat123rox
f2acdff73d DOC: Add note to mutating methods (#56877)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56243 by adding a note to mutating functions not following the trailing `_` convention in `torch/nn/modules/module.py`

I can also raise separate PRs for other files, if needed

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56877

Reviewed By: ezyang

Differential Revision: D28008856

Pulled By: jbschlosser

fbshipit-source-id: 63bfca0df05e49fceadd3167b1427dcb5542206a
2021-04-27 06:16:56 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
Jerry Zhang
0a541e23e1 [nn] Add allow_duplicate option for named_modules (#54812)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54812

Needed for quantization since different attribute might refer to the same module instance

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27408376

fbshipit-source-id: cada85c4a1772d3dd9502c3f6f9a56d690d527e7
2021-04-16 01:26:16 -07:00
Nikita Shulga
d33829f844 Fix type annotations for state_dict() override (#55704)
Summary:
Change annotation to OrderedDict, but stringify it to stay compatible with Python-3.6

Fixes https://github.com/pytorch/pytorch/issues/55302
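A minimal sketch of the stringified-annotation trick (a toy class, with simple value types standing in for the real ones):

```python
from collections import OrderedDict

class Model:
    # Stringifying the annotation keeps it out of runtime evaluation, so it
    # still imports on Python 3.6, where `OrderedDict[str, float]` would fail
    # because collections.OrderedDict is not subscriptable there
    def state_dict(self) -> 'OrderedDict[str, float]':
        return OrderedDict(weight=1.0)
```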

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55704

Reviewed By: walterddr

Differential Revision: D27686011

Pulled By: malfet

fbshipit-source-id: 3a8dedf33f38d86767ebd4e8a1a8abfe850b375a
2021-04-09 17:48:12 -07:00
Xiao Wang
55d45458bd [cuDNN] Enable Conv3d channels_last_3d (#48430)
Summary:
This PR adds the functionality to use channels_last_3d, aka NDHWC, in Conv3d. It's only enabled when the cuDNN version is greater than or equal to 8.0.5.

Todo:

- [x] add memory_format test
- [x]  add random shapes functionality test

Close https://github.com/pytorch/pytorch/pull/52547

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48430

Reviewed By: mrshenli

Differential Revision: D27641452

Pulled By: ezyang

fbshipit-source-id: 0e98957cf30c50c3390903d307dd43bdafd28880
2021-04-09 07:56:49 -07:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Ansley Ussery
85109ce427 Support submodule manipulation in GraphModule (#52358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52358

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26759260

Pulled By: ansley

fbshipit-source-id: 25d2b9124a7d957704f1700a45dca143aaed391d
2021-03-04 14:52:35 -08:00
Horace He
a649d808e6 Added fast path in the case of no hooks (#52576)
Summary:
See the discussion here: https://github.com/pytorch/pytorch/pull/50431

~~Not completely done yet - need to figure out the backwards compatibility stuff as well as `RemovableHandle`.~~

~~Also, this concretely breaks Torchscript (which tries to script the properties), and more generally, probably requires modifying Torchscript hook support: https://github.com/pytorch/pytorch/issues/34329~~

Just kidding, I think all problems are solved :)

Another thing I could do in this PR is to simply replace all the `len(x) > 0` checks with the faster checks. That's about 1.5-2k more Python instructions and .4 - .5 microseconds slower.
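A toy sketch of the fast path (names and structure are simplified assumptions, not the real `nn.Module`):

```python
class Module:
    """Toy module showing the no-hook fast path (not the real nn.Module)."""
    def __init__(self):
        self._forward_pre_hooks = {}
        self._forward_hooks = {}
        self._backward_hooks = {}

    def forward(self, x):
        return x + 1

    def __call__(self, x):
        # Fast path: a truthiness test on the hook dicts is cheaper than
        # `len(...) > 0` and skips all hook bookkeeping in the common case
        if not (self._forward_pre_hooks or self._forward_hooks
                or self._backward_hooks):
            return self.forward(x)
        for hook in self._forward_pre_hooks.values():
            out = hook(self, x)
            if out is not None:
                x = out
        result = self.forward(x)
        for hook in self._forward_hooks.values():
            out = hook(self, x, result)
            if out is not None:
                result = out
        return result
```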

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52576

Reviewed By: ailzhang

Differential Revision: D26650352

Pulled By: Chillee

fbshipit-source-id: 0fd73e916354b9e306701a8a396c5dc051e69f0d
2021-02-24 21:48:09 -08:00
Jeff Yang
0c455332e8 docs: add link to Tensor.share_memory_ in Module.share_memory (#52561)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48228

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52561

Reviewed By: malfet

Differential Revision: D26626012

fbshipit-source-id: 7aab44c60d1bcbda68012521ec852843864abc7f
2021-02-23 20:20:50 -08:00
Akifumi Imanishi
b3fda95fe7 Add LazyBatchNormXd (#51862)
Summary:
Same diff with https://github.com/pytorch/pytorch/issues/51548 (cc. albanD)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51862

Reviewed By: izdeby

Differential Revision: D26312289

Pulled By: albanD

fbshipit-source-id: 9cdec0e0c9021c33d10d85010978c7fa5cb4dc60
2021-02-09 10:29:03 -08:00
Alban Desmaison
a930162c69 Revert D26276903: [pytorch][PR] Add LazyBatchNormXd
Test Plan: revert-hammer

Differential Revision:
D26276903 (aa1fd6b45a)

Original commit changeset: 0ac706974178

fbshipit-source-id: bfe01b01cd460f1e2845ea5ef1fc1514e6b6ba54
2021-02-05 12:37:29 -08:00
Akifumi Imanishi
aa1fd6b45a Add LazyBatchNormXd (#51548)
Summary:
This PR implements UninitializedBuffer and LazyBatchnormXd based on https://github.com/pytorch/pytorch/issues/44538. (cc. emcastillo and albanD)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51548

Reviewed By: zhangguanheng66

Differential Revision: D26276903

Pulled By: albanD

fbshipit-source-id: 0ac706974178363f8af075e59b41d5989418922f
2021-02-05 10:27:04 -08:00
chengjun
4a8ef4525e Add new backend type for Intel heterogeneous computation platform. (#49786)
Summary:
Add a new device type 'XPU' ('xpu' in lower case) to PyTorch. Changes are needed for code related to the device model and kernel dispatch, e.g. DeviceType, Backend, and DispatchKey.

https://github.com/pytorch/pytorch/issues/48246

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49786

Reviewed By: mrshenli

Differential Revision: D25893962

Pulled By: ezyang

fbshipit-source-id: 7ff0a316ee34cf0ed6fc7ead08ecdeb7df4b0052
2021-01-20 08:15:18 -08:00
jonykarki
934805bc49 cleaned up ModuleAttributeError (#50298)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49726
Just cleaned up the unnecessary `ModuleAttributeError`

BC-breaking note:
`ModuleAttributeError` was added in the previous unsuccessful [PR](https://github.com/pytorch/pytorch/pull/49879) and removed here. If a user catches `ModuleAttributeError` specifically, this will no longer work. They should catch `AttributeError` instead.
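A minimal sketch of the migration (plain Python, no torch):

```python
class Model:
    pass

m = Model()
try:
    m.weight  # attribute lookup fails
    handled = False
except AttributeError:  # was: except ModuleAttributeError
    # ModuleAttributeError subclassed AttributeError, so catching the base
    # class also worked before the removal
    handled = True
```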

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50298

Reviewed By: mrshenli

Differential Revision: D25907620

Pulled By: jbschlosser

fbshipit-source-id: cdfa6b1ea76ff080cd243287c10a9d749a3f3d0a
2021-01-14 06:58:01 -08:00
Guilherme Leobas
5f8e1a1da9 add type annotations to torch.nn.modules.module (#49045)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49044

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49045

Reviewed By: malfet

Differential Revision: D25767092

Pulled By: walterddr

fbshipit-source-id: a81ba96f3495943af7bb9ee3e5fc4c94c690c405
2021-01-11 17:01:47 -08:00
Alex Henrie
5f2ec6293d Unused variables in neural net classes and functions (#50100)
Summary:
These unused variables were identified by [pyflakes](https://pypi.org/project/pyflakes/). They can be safely removed to simplify the code and possibly improve performance.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50100

Reviewed By: ezyang

Differential Revision: D25797764

Pulled By: smessmer

fbshipit-source-id: ced341aee692f429d2dcc3a4ef5c46c8ee99cabb
2021-01-06 08:16:57 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
albanD
ccd646696b Fix Module backward hooks for all Tensor inputs/outputs (#46163)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/598

This is BC-breaking as we now explicitly don't call the hook when there are no Tensors at the top level of the output.
This feature was not working anyways as the returned grad_input/grad_output were wrong (not respecting the output structure and wrong inputs for multi-Node Module).

This is also BC-breaking as we now report the correct gradients for `nn.Module`s that contain multiple autograd `Node`s while we use to return bad results before.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46163

Reviewed By: ailzhang, mruberry

Differential Revision: D24894180

Pulled By: albanD

fbshipit-source-id: e1b5d193d2818eb2f51e2a2722c7405c8bd13c2b
2020-12-18 09:04:36 -08:00
Richard Zou
22d21414d7 Revert D24574649: [pytorch][PR] Utility that loads a DP/DDP model state dict into a non-DDP model with the same architecture.
Test Plan: revert-hammer

Differential Revision:
D24574649 (b631c872c9)

Original commit changeset: 17d29ab16ae2

fbshipit-source-id: 6766c6b21b82c9463143da0370192d9c68dbce6c
2020-11-10 06:55:45 -08:00
Pradeep Ganesan
b631c872c9 Utility that loads a DP/DDP model state dict into a non-DDP model with the same architecture. (#45643)
Summary:
Added a convenience function that allows users to load models without DP/DDP from a DP/DDP state dict.
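A rough sketch of the idea, assuming the usual `"module."` key prefix added by DP/DDP wrapping (the function name and exact behavior here are assumptions, not the utility's actual API):

```python
def strip_prefix(state_dict, prefix="module."):
    # Hypothetical sketch: DataParallel/DDP wrap the model in a `.module`
    # attribute, so every state-dict key gains a "module." prefix; removing
    # it lets a plain model with the same architecture load the weights
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

ddp_sd = {"module.linear.weight": [1.0, 2.0], "module.linear.bias": [0.0]}
plain_sd = strip_prefix(ddp_sd)
```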

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45643

Reviewed By: rohan-varma

Differential Revision: D24574649

fbshipit-source-id: 17d29ab16ae24a30890168fa84da6c63650e61e9
2020-11-09 20:49:29 -08:00
Ivan Yashchuk
6de619e4a4 Allow converting parameters of nn.Module to complex dtypes (#44788)
Summary:
This PR makes it possible to cast the parameters of nn.Module to complex dtypes.
The following code works with the proposed changes.
```python
In [1]: import torch
In [2]: lin = torch.nn.Linear(5, 1).to(torch.complex64)
In [3]: lin(torch.zeros(3, 5, dtype=torch.complex64))
Out[3]:
tensor([[-0.1739+0.j],
        [-0.1739+0.j],
        [-0.1739+0.j]], grad_fn=<AddmmBackward>)
```
Fixes https://github.com/pytorch/pytorch/issues/43477.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44788

Reviewed By: zou3519

Differential Revision: D24307225

Pulled By: anjali411

fbshipit-source-id: dacc4f5c8c9a99303f74d1f5d807cd657b3b69b5
2020-10-21 08:54:59 -07:00
Emilio Castillo
d38a71d579 torch.nn.modules.LazyModuleMixin and torch.nn.LazyLinear (Shape Inference II) (#44538)
Summary:
Retake on https://github.com/pytorch/pytorch/issues/40493 after all the feedback from albanD

This PR implements the generic Lazy mechanism and a sample `LazyLinear` layer with the `UninitializedParameter`.

There are two main differences from the previous PR:
Now `torch.nn.Module` remains untouched.
We don't require an explicit initialization or a dummy forward pass before starting training or inference of the actual module, which makes this much simpler to use from the user side.

As we discussed offline, there was the suggestion of not using a mixin, but changing the `__class__` attribute of `LazyLinear` to become `Linear` once it's completely initialized. While this can be useful, for the time being we need `LazyLinear` to be a `torch.nn.Module` subclass since there are many checks that rely on the modules being instances of `torch.nn.Module`.
This can cause problems when we create complex modules such as
```
class MyNetwork(torch.nn.Module):
    def __init__(self):
        super(MyNetwork, self).__init__()
        self.conv = torch.nn.Conv2d(20, 4, 2)
        self.linear = torch.nn.LazyLinear(10)
    def forward(self, x):
        y = self.conv(x).clamp(min=0)
        return self.linear(y)
```
Here, when the __setattr__ function is called at the time LazyLinear is registered, it won't be added to the child modules of `MyNetwork`, so we have to manually do it later, but currently there is no way to do such a thing, as we can't access the parent module from LazyLinear once it becomes the Linear module. (We can add a workaround to this if needed).
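The lazy-materialization idea can be sketched without torch roughly like this (a toy class; names and behavior are assumptions, not the real `LazyLinear`):

```python
class LazyLinearSketch:
    """Toy sketch of lazy shape inference (not the real API)."""
    def __init__(self, out_features):
        self.out_features = out_features
        self.weight = None  # stays uninitialized until the first forward

    def forward(self, x):
        if self.weight is None:
            # Infer in_features from the first batch, then materialize the
            # weight (zeros here; the real layer would initialize properly)
            in_features = len(x[0])
            self.weight = [[0.0] * in_features
                           for _ in range(self.out_features)]
        return [[sum(w * v for w, v in zip(row, sample))
                 for row in self.weight] for sample in x]

layer = LazyLinearSketch(10)
out = layer.forward([[1.0] * 5, [2.0] * 5])  # in_features inferred as 5
```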

TODO:

- Add convolutions once the design is OK
- Fix docstrings

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44538

Reviewed By: ngimel

Differential Revision: D24162854

Pulled By: albanD

fbshipit-source-id: 6d58dfe5d43bfb05b6ee506e266db3cf4b885f0c
2020-10-19 13:13:54 -07:00
Taras Galkovskyi
f11f9a8c1f [pytorch][improvement] Improve torch logging to identify problematic key (#45766)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45766

As per the subject, this makes the KeyError message more verbose.

Test Plan:
Verified that breakage can be successfully investigated with verbose error message
unit tests

Reviewed By: esqu1

Differential Revision: D24080362

fbshipit-source-id: f4e22a78809e5cff65a69780d5cbbc1e8b11b2e5
2020-10-05 14:54:52 -07:00
Michael Carilli
3e6bb5233f Reference amp tutorial (recipe) from core amp docs (#44725)
Summary:
https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html is live.  Core amp docs should reference it.

Also i fixed some typos in the `zero_grad` docs we ignored when git was behaving weirdly during ngimel 's merge of https://github.com/pytorch/pytorch/pull/44423.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44725

Reviewed By: mruberry

Differential Revision: D23723807

Pulled By: ngimel

fbshipit-source-id: ca0b76365f8ca908bd978e3b38bf81857fa6c2a3
2020-09-16 11:37:58 -07:00
taiyuanz
c515881137 Add reset_grad() function (#44423)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44423

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42754

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23010859

Pulled By: ngimel

fbshipit-source-id: 56eec43eba88b98cbf714841813977c68f983564
2020-09-09 22:05:45 -07:00
Oliver Thomas
e39b43fd76 Issue 43057 (#43063)
Summary:
A small change that adds a docstring that can be found with
`getattr(nn.Module, nn.Module.forward.__name__, None).__doc__`

Fixes https://github.com/pytorch/pytorch/issues/43057

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43063

Reviewed By: mrshenli

Differential Revision: D23161782

Pulled By: ezyang

fbshipit-source-id: 95456f858e2b6a0e41ae551ea4ec2e78dd35ee3f
2020-08-18 12:50:53 -07:00
Max Berrendorf
ddb8849ffc Fix method stub used for fixing mypy issue to work with pylint (#42356)
Summary:
Make function from method

Since _forward_unimplemented is defined within the nn.Module class,
pylint (correctly) complains about not implementing this method in subclasses.

Fixes https://github.com/pytorch/pytorch/issues/42305
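The fix can be sketched in plain Python (a simplified stand-in for `nn.Module`):

```python
def _forward_unimplemented(self, *input):
    # A module-level function rather than a method: pylint no longer flags
    # subclasses for "not implementing" it, while assigning it to `forward`
    # still gives type checkers an attribute to work with
    raise NotImplementedError

class Module:
    forward = _forward_unimplemented

class Identity(Module):
    def forward(self, x):
        return x
```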

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42356

Reviewed By: mruberry

Differential Revision: D22867255

Pulled By: ezyang

fbshipit-source-id: ccf3e45e359d927e010791fadf70b2ef231ddb0b
2020-08-05 19:57:38 -07:00
Sharvil Nanavati
832b1659e7 Fix missing attribute when loading model from older version (#42242) (#42290)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42242

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42290

Reviewed By: VitalyFedyunin

Differential Revision: D22844096

Pulled By: albanD

fbshipit-source-id: 707e552e0ed581fbe00f1527ab7426880edaed64
2020-07-31 09:03:07 -07:00
Yanli Zhao
79cfd85987 grad detach_ only when it has grad_fn in zero_grad call (#41283)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41283

In optimizer.zero_grad(), detach_ is only useful for avoiding a memory leak when the grad has a grad_fn, so this adds a check so that grad.detach_ is called only when the grad has a grad_fn.
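The guarded call can be sketched with stand-in objects (no torch; `FakeGrad` is purely illustrative):

```python
class FakeGrad:
    """Stand-in for a gradient tensor (real grads come from autograd)."""
    def __init__(self, grad_fn=None):
        self.grad_fn = grad_fn
        self.detached = False
        self.data = 1.0

    def detach_(self):
        self.detached = True
        self.grad_fn = None

    def zero_(self):
        self.data = 0.0

def zero_grad(grads):
    # Sketch of the guarded call: detach_ only matters when the grad is
    # still attached to a graph, i.e. when it has a grad_fn
    for g in grads:
        if g is None:
            continue
        if g.grad_fn is not None:
            g.detach_()
        g.zero_()

attached, plain = FakeGrad(grad_fn=object()), FakeGrad()
zero_grad([attached, plain, None])
```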
ghstack-source-id: 108702289

Test Plan: unit test

Reviewed By: mrshenli

Differential Revision: D22487315

fbshipit-source-id: 861909b15c8497f1da57f092d8963d4920c85e38
2020-07-29 11:40:13 -07:00
Michael Suo
300a3aaaad [jit] move private implementation out of jit/__init__.py (#40807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40807

We pack a lot of logic into `jit/__init__.py`, making it unclear to
developers and users which parts of our API are public vs. internal. This
is one in a series of PRs intended to pull implementation out into
separate files, and leave `__init__.py` as a place to register the
public API.

This PR moves all the tracing-related stuff out, and fixes other spots up
as necessary. Followups will move other core APIs out.

The desired end-state is that we conform to the relevant rules in [PEP 8](https://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces). In particular:
- Internal implementation goes in modules prefixed by `_`.
- `__init__.py` exposes a public API from these private modules, and nothing more.
- We set `__all__` appropriately to declare our public API.
- All use of JIT-internal functionality outside the JIT are removed (in particular, ONNX is relying on a number internal APIs). Since they will need to be imported explicitly, it will be easier to catch new uses of internal APIs in review.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22320645

Pulled By: suo

fbshipit-source-id: 0720ea9976240e09837d76695207e89afcc58270
2020-07-05 22:01:11 -07:00
Tongzhou Wang
b678666a04 Add module.training to docs (#40923)
Summary:
A lot of people ask https://discuss.pytorch.org/t/check-if-model-is-eval-or-train/9395/3
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40923

Reviewed By: pbelevich

Differential Revision: D22358799

Pulled By: zou3519

fbshipit-source-id: b5465ffedb691fb4811e097c4dbd7bbc405be09c
2020-07-02 12:36:59 -07:00
Jiayu Liu
0203d70c63 [nit] fix some typo within documentation (#40692)
Summary:
Apologize if this seems trivial, but i'd like to fix them on my way of reading some of the source code. Thanks!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40692

Differential Revision: D22284651

Pulled By: mrshenli

fbshipit-source-id: 4259d1808aa4d15a02cfd486cfb44dd75fdc58f8
2020-06-30 19:24:44 -07:00
Tongzhou Wang
ef5a314597 [typing] fix register_buffer/parameter (#40669)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40669

Differential Revision: D22286130

Pulled By: ezyang

fbshipit-source-id: c0cc173279678978726895a0830343d5234e474e
2020-06-30 10:39:32 -07:00
Emilio Castillo
5e77999ecb Add global hooks to torch.nn.Module (#38972)
Summary:
This allows registering hooks that will be executed for every module.

This idea arose in a discussion with tkerola and niboshi kindly proposed this approach.

The use case for this is to avoid boilerplate code when registering the same hook for all the modules in a complex model; the internal use case was to allow every model to accept a NumPy array in the forward pass in a simpler way. Other use cases involve general mechanisms for plotting, tracing, and debugging.

Currently, this is shared for all the modules but this can be worked out to have the hooks shared only per type of module.

If this functionality is not needed feel free to close the PR.
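The global-hook mechanism can be sketched in plain Python roughly as follows (a simplified model of the registration idea, not the actual API surface):

```python
_global_forward_hooks = {}

def register_module_forward_hook(hook):
    # Sketch: hooks kept in a module-level dict run for every instance,
    # so one registration covers all modules in a model
    handle = len(_global_forward_hooks)
    _global_forward_hooks[handle] = hook
    return handle

class Module:
    def __call__(self, x):
        result = self.forward(x)
        for hook in _global_forward_hooks.values():
            out = hook(self, x, result)
            if out is not None:
                result = out
        return result

class AddOne(Module):
    def forward(self, x):
        return x + 1

seen = []
register_module_forward_hook(lambda mod, inp, out: seen.append(type(mod).__name__))
y = AddOne()(0)
```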
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38972

Differential Revision: D22091364

Pulled By: albanD

fbshipit-source-id: 204ff5f9e119eff5bdd9140c64cb5dc467bb23a2
2020-06-17 12:20:35 -07:00
Edward Yang
eace053398 Move all torch.nn.modules type annotations inline (#38211)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38211

Just because the annotations are inline doesn't mean the files type
check; most of the newly annotated files have type errors and I
added exclusions for them in mypy.ini.  The payoff of moving
all of these modules inline is that I can delete the relevant code
generation logic for the pyi files (which had accumulated ignore
annotations that weren't actually relevant anymore).

For the most part the translation was completely mechanical, but there
were two hairy issues.  First, I needed to work around a Python 3.6 and
earlier bug where Generic has a nontrivial metaclass.  This fix is in
torch/jit/__init__.py. Second, in module.py, we need to apply the same
fix for avoiding contravariance checks that the pyi file used to have;
this is done by declaring forward as a variable (rather than a
function), which appears to be sufficient enough to get mypy to not
contravariantly check input arguments.

Because we aren't actually typechecking these modules in most
cases, it is inevitable that some of these type annotations are wrong.
I slavishly copied the old annotations from the pyi files unless there
was an obvious correction I could make.  These annotations will probably
need fixing up later.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21497397

Pulled By: ezyang

fbshipit-source-id: 2b08bacc152c48f074e7edc4ee5dce1b77d83702
2020-06-11 15:59:57 -07:00