Adds semantics for creating a buffer object analogous to creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying mechanism for registering a buffer remains the same: the `register_buffer` method has not been changed. The `persistent` parameter of the `Buffer` type indicates whether the buffer should be persistent or not. The other non-test changes get the new `Buffer` type recognized by inductor and dynamo. The remaining changes are test changes to ensure that the `Buffer` type can be used as a drop-in replacement for `register_buffer`, as it just leads to `register_buffer` being called. This new functionality still allows normal tensors to be used as buffers, so these changes are intended to be backwards compatible.
Fixes #35735
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104069
Approved by: https://github.com/mikaylagawarecki
This PR:
- Updates the docs to say it is deprecated
- Raises a UserWarning
- Changes most of the call sites inside PyTorch to use `torch.func.functional_call`, except for the test_stateless tests.
The motivation behind this is that we can now align behind a single
functional_call API in PyTorch.
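For reference, the replacement API can be used like this (a minimal sketch; the module and parameter values are illustrative):

```python
import torch
from torch.func import functional_call

lin = torch.nn.Linear(2, 2)
# Override the module's parameters for a single call without
# mutating the module itself.
params = {"weight": torch.eye(2), "bias": torch.zeros(2)}
x = torch.ones(1, 2)
out = functional_call(lin, params, (x,))
assert torch.equal(out, x)  # identity weight, zero bias
```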
Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92280
Approved by: https://github.com/albanD
This bug came up as I was adding new tests for ExpandedWeights.
If the forward pass errors while the `_reparametrize_module` context manager is active, the values from reparameterization remain on the module after the context manager exits, where the original values should be restored. This fixes that by putting a try/finally block around the forward call and the call that resets the parameters.
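The pattern can be illustrated with a small hypothetical helper (not the PR's actual code): the finally block guarantees restoration even when the body raises.

```python
from contextlib import contextmanager

@contextmanager
def swapped_attrs(obj, replacements):
    # Sketch of the fix: record the originals, swap in the
    # replacements, and restore in `finally` so an exception in the
    # body cannot leave the replacements behind.
    originals = {name: getattr(obj, name) for name in replacements}
    try:
        for name, value in replacements.items():
            setattr(obj, name, value)
        yield obj
    finally:
        for name, value in originals.items():
            setattr(obj, name, value)

class Holder:
    weight = "original"

h = Holder()
try:
    with swapped_attrs(h, {"weight": "replacement"}):
        raise RuntimeError("forward pass failed")
except RuntimeError:
    pass
assert h.weight == "original"  # restored despite the error
```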
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81262
Approved by: https://github.com/zou3519
Summary:
https://github.com/pytorch/pytorch/issues/61447 introduced a mechanism for performing functional calls in a model using the reparametrization API. However, the overhead introduced in a single call was too large.
I tried to address this by modifying the reparametrization code to support plain tensors, but the changes needed were too large: type checking and several parts of the code expect actual `nn.Module` objects, so this option was not feasible.
The benchmark instantiates a resnet50 and calls `functional_call` with a parameters dict covering 0, 25, 50, 75, and 100% of the model's total parameters.
Used script:
https://gist.github.com/emcastillo/f344a58638bd71d130c71c45f86f0c3a
| % of parameters passed | CPU Time (us) | GPU Time (us) |
|------------------------|---------------|---------------|
| regular call | 5539 | 184909 |
| 0 | 5561 | 184843 |
| 25 | 11363 | 189236 |
| 50 | 18716 | 195378 |
| 75 | 22851 | 198641 |
| 100 | 27441 | 202281 |
This PR just swaps the `__getattr__` of the submodules to look into a dict holding only the parameters when called, greatly reducing the overhead of having to instantiate custom modules and call forward just to retrieve a tensor.
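A rough sketch of the idea (a hypothetical helper, not the PR's actual code): temporarily patch `__getattr__` so parameter lookups consult an override dict before falling back to the module's own storage. Since `nn.Module` keeps parameters in `self._parameters`, plain attribute lookup misses and `__getattr__` is invoked:

```python
import torch
import torch.nn as nn

def call_with_overrides(module, overrides, x):
    # Patch the class's __getattr__ for the duration of the call.
    # Note: patching the class affects all its instances, so this is
    # only a sketch of the mechanism, not production code.
    cls = module.__class__
    orig_getattr = cls.__getattr__
    def patched(self, name):
        if name in overrides:
            return overrides[name]
        return orig_getattr(self, name)
    cls.__getattr__ = patched
    try:
        return module(x)
    finally:
        cls.__getattr__ = orig_getattr

lin = nn.Linear(2, 2)
out = call_with_overrides(
    lin, {"weight": torch.eye(2), "bias": torch.zeros(2)}, torch.ones(1, 2)
)
assert torch.equal(out, torch.ones(1, 2))
assert isinstance(lin.weight, nn.Parameter)  # original untouched
```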
The execution times now are as follows:
| % of parameters passed | CPU Time (us) | GPU Time (us) |
|------------------------|---------------|---------------|
| regular call | 5939 | 187533 |
| 0 | 5899 | 187570 |
| 25 | 8541 | 188953 |
| 50 | 10045 | 189826 |
| 75 | 11049 | 190344 |
| 100 | 11911 | 190800 |
| functorch with 100% params | 14014 | 191727 |
Now we see that the CPU time overhead is greatly reduced and the GPU time barely increases due to the effective overlap.
cc albanD zou3519
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68969
Reviewed By: george-qi
Differential Revision: D33836360
Pulled By: albanD
fbshipit-source-id: 532561f64b18ca14c6ae2d77dcacb339397a589d
(cherry picked from commit fd4b6bdfbf)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232
This change does require some context: there were several suggestions regarding what to do about this group of tests: tests that are core and crucial to all of PyTorch and are too broad to be owned by one team.
1. Let's add a "module: core" and put people behind it! This idea sounds appealing unless you are one of the people backing the label. From talking to albanD among others, putting all these core tests on the shoulders of a few people or one team isn't super fair, and I have not yet found anyone willing to take on this job.
2. Taking advantage of the fact that we already have a triaging oncall that takes turns triaging issues, we can leave these tests essentially unlabeled and allow the oncall to triage these tests. Since these tests are crucial to PyTorch, we'll add the "high priority" label to mark them different from other unowned tests (see https://github.com/pytorch/pytorch/issues/67552).
3. I _could_ still create an unbacked label "module: core" and attribute these tests there, but I don't like the idea of creating a facade that the tests are "triaged" to a label when no one is actually taking a look.
Now we could potentially break these tests down into smaller files so that each piece _could_ be owned by a team, but 1. I don't know if this is currently feasible, and 2. this approach does not prevent that from happening in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67553
Reviewed By: albanD
Differential Revision: D32025004
Pulled By: janeyx99
fbshipit-source-id: 1fb1aa4c27e305695ab6e80ae3d02f90519939c0
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58839
After discussing with albanD, he proposed this simple design.
Let's iterate over the idea here :).
Thanks.
The main thing this PR does is use a reparametrization that is reverted at the end of the functional call.
This leaves the original model with its state unchanged. Also, in the scenario where the module is created without parameters, the forward pass will hard-error if not all parameters are specified.
``` python
import torch
import torch.nn.utils._stateless

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.l1(x)

mod = MyModule()
print('weight before', mod.l1.weight)
x = torch.rand((1, 1))
parameters = {"l1.weight": torch.nn.Parameter(torch.tensor([[1.0]])),
              "l1.bias": torch.nn.Parameter(torch.tensor([0.0]))}
res = torch.nn.utils._stateless.functional_call(mod, parameters, x)
print('Functional call input ', x, ' and result ', res)
print('weight after', mod.l1.weight)
```
Output
```
weight before Parameter containing:
tensor([[-0.4419]], requires_grad=True)
Functional call input tensor([[0.3531]]) and result tensor([[0.3531]], grad_fn=<AddmmBackward>)
weight after Parameter containing:
tensor([[-0.4419]], requires_grad=True)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61447
Reviewed By: soulitzer
Differential Revision: D31082765
Pulled By: albanD
fbshipit-source-id: ba814d0f9162fb39c59989ca9a8efe160405ba76