Commit Graph

4 Commits

Author SHA1 Message Date
Horace He
0e351c7df9 Added setattr to functional_call. (#77137)
Fixes https://github.com/pytorch/pytorch/issues/77133

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77137
Approved by: https://github.com/emcastillo, https://github.com/albanD, https://github.com/jbschlosser
2022-05-17 05:40:46 +00:00
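The setattr support this commit adds can be sketched in plain Python (a hypothetical proxy, not PyTorch's actual implementation): attribute writes made during a functional call are routed into the override dict instead of mutating the real object, mirroring how reads already consult that dict first.

```python
class AttrProxy:
    """Hypothetical sketch: route attribute reads AND writes through an
    override dict, leaving the wrapped object untouched."""

    def __init__(self, wrapped, overrides):
        # Bypass our own __setattr__ while wiring up internal state.
        object.__setattr__(self, "_wrapped", wrapped)
        object.__setattr__(self, "_overrides", overrides)

    def __getattr__(self, name):
        overrides = object.__getattribute__(self, "_overrides")
        if name in overrides:
            return overrides[name]
        return getattr(object.__getattribute__(self, "_wrapped"), name)

    def __setattr__(self, name, value):
        # Writes land in the override dict, not on the wrapped object.
        object.__getattribute__(self, "_overrides")[name] = value


class Counter:
    def __init__(self):
        self.count = 0


def step(obj):
    obj.count = obj.count + 1
    return obj.count


c = Counter()
overrides = {"count": 10}
proxy = AttrProxy(c, overrides)
print(step(proxy))  # 11: read and write both hit the override dict
print(c.count)      # 0: the real object is unchanged
```

Without the `__setattr__` hook, the write in `step` would land on the proxy (or the real object) and silently diverge from the override dict.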
albanD
a6a5e6cecf move the stateless util to public API!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75834
Approved by: https://github.com/zou3519, https://github.com/jbschlosser
2022-04-21 13:42:24 +00:00
Emilio Castillo
fa38e93fe9 Add lightweight reparametrization for _stateless calls (#68969)
Summary:
https://github.com/pytorch/pytorch/issues/61447 introduced a mechanism for performing functional calls on a model using the reparametrization API. However, the overhead it added to a single call was too large.
I first tried to address this by modifying the reparametrization code to support spare tensors, but the required changes were too large: the type checking and several parts of the code expect actual `nn.Module` objects, so this option was not feasible.

The benchmark uses resnet50 and calls `functional_call` with a parameters dict covering 0, 25, 50, 75, and 100% of the model's total parameters.

Used script:
https://gist.github.com/emcastillo/f344a58638bd71d130c71c45f86f0c3a

| % of parameters passed | CPU Time (us) | GPU Time (us) |
|------------------------|---------------|---------------|
| regular call           | 5539          | 184909        |
| 0                      | 5561          | 184843        |
| 25                     | 11363         | 189236        |
| 50                     | 18716         | 195378        |
| 75                     | 22851         | 198641        |
| 100                    | 27441         | 202281        |

This PR just swaps the `__getattr__` of the submodules to look into a dict holding only the parameters when called, greatly reducing the burden of having to instantiate custom modules and calling forward to just retrieve a tensor.

The execution times now are as follows:

| % of parameters passed     | CPU Time (us) | GPU Time (us) |
|----------------------------|---------------|---------------|
| regular call               | 5939          | 187533        |
| 0                          | 5899          | 187570        |
| 25                         | 8541          | 188953        |
| 50                         | 10045         | 189826        |
| 75                         | 11049         | 190344        |
| 100                        | 11911         | 190800        |
| functorch with 100% params | 14014         | 191727        |

Now the CPU time overhead is greatly reduced, and the GPU time barely increases thanks to effective overlap of the host-side work with kernel execution.
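The `__getattr__` swap described above can be sketched in plain Python (all names here are illustrative, not PyTorch's actual internals): for the duration of the call, attribute lookups consult an override dict of parameters first, and the original `__getattr__` is restored afterwards.

```python
from contextlib import contextmanager


class Module:
    """Toy stand-in for nn.Module: parameters live in a dict and are
    resolved via __getattr__, as in PyTorch."""

    def __init__(self):
        self._parameters = {}

    def __getattr__(self, name):
        params = self.__dict__["_parameters"]
        if name in params:
            return params[name]
        raise AttributeError(name)


class Linear(Module):
    def __init__(self, weight, bias):
        super().__init__()
        self._parameters["weight"] = weight
        self._parameters["bias"] = bias

    def forward(self, x):
        return self.weight * x + self.bias


@contextmanager
def swap_getattr(cls, overrides):
    original = cls.__getattr__

    def patched(self, name):
        if name in overrides:
            return overrides[name]
        return original(self, name)

    cls.__getattr__ = patched
    try:
        yield
    finally:
        cls.__getattr__ = original  # module behaves as before on exit


def functional_call(module, overrides, x):
    with swap_getattr(type(module), overrides):
        return module.forward(x)


lin = Linear(weight=2.0, bias=0.5)
print(functional_call(lin, {"weight": 3.0}, 1.0))  # 3.5: weight overridden
print(lin.forward(1.0))                            # 2.5: original restored
```

The dict lookup replaces the per-call construction of reparametrization modules, which is where the CPU-side savings in the table come from.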

cc albanD zou3519

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68969

Reviewed By: george-qi

Differential Revision: D33836360

Pulled By: albanD

fbshipit-source-id: 532561f64b18ca14c6ae2d77dcacb339397a589d
(cherry picked from commit fd4b6bdfbf)
2022-01-28 14:38:45 +00:00
Emilio Castillo
cd813f16bf Add functional api for nn.Module (#61447)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58839

After some discussion, albanD proposed this simple design.

Let's iterate over the idea here :).

Thanks.

The main change in this PR is to apply a reparametrization that is reverted at the end of the functional call.
This leaves the original model's state unchanged. Also, in this scenario the module is created without parameters, so the forward pass will hard-error if not all parameters are specified.

``` python
import torch
import torch.nn.utils._stateless

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.l1(x)

mod = MyModule()
print('weight before', mod.l1.weight)
x = torch.rand((1, 1))
parameters = {"l1.weight": torch.nn.Parameter(torch.tensor([[1.0]])),
              "l1.bias": torch.nn.Parameter(torch.tensor([0.0]))}
res = torch.nn.utils._stateless.functional_call(mod, parameters, x)
print('Functional call input ', x, ' and result ', res)
print('weight after', mod.l1.weight)
```
Output
```
weight before Parameter containing:
tensor([[-0.4419]], requires_grad=True)

Functional call input tensor([[0.3531]]) and result tensor([[0.3531]], grad_fn=<AddmmBackward>)

weight after Parameter containing:
tensor([[-0.4419]], requires_grad=True)
```
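The reparametrize-then-revert pattern above can be sketched generically in plain Python (no PyTorch; the names are illustrative): attributes are replaced for the duration of the call and restored afterwards, so the module's state is observably unchanged, as the weight printout demonstrates.

```python
from contextlib import contextmanager


@contextmanager
def reparametrize(obj, replacements):
    """Temporarily replace attributes on obj, restoring them on exit."""
    saved = {name: getattr(obj, name) for name in replacements}
    for name, value in replacements.items():
        setattr(obj, name, value)
    try:
        yield obj
    finally:
        # Revert at the end of the functional call.
        for name, value in saved.items():
            setattr(obj, name, value)


class Scaler:
    def __init__(self, scale):
        self.scale = scale

    def forward(self, x):
        return self.scale * x


mod = Scaler(scale=2.0)
with reparametrize(mod, {"scale": 10.0}) as m:
    result = m.forward(3.0)  # 30.0: uses the replacement value
print(result, mod.scale)     # mod.scale is back to 2.0
```

The `finally` block is what guarantees the "weight before" and "weight after" outputs match even if the forward pass raises.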

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61447

Reviewed By: soulitzer

Differential Revision: D31082765

Pulled By: albanD

fbshipit-source-id: ba814d0f9162fb39c59989ca9a8efe160405ba76
2021-09-21 12:39:43 -07:00