Commit Graph

6 Commits

Xuehai Pan
2ce734cee9 [BE] enable UFMT for torch/ao/quantization/ (#128863)
Part of #123062

- #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128863
Approved by: https://github.com/ezyang
ghstack dependencies: #128861, #128862
2024-07-25 04:17:54 +00:00
Aaron Orenstein
62bcdc0ac9 Flip default value for mypy disallow_untyped_defs [4/11] (#127841)
See #127836 for details.
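As a hedged illustration of what flipping `disallow_untyped_defs` entails (the function names below are hypothetical, not from the PR): with the flag on, mypy rejects any `def` without full annotations, so each file in the stack gains explicit parameter and return types.

```python
# Illustrative only: what mypy's disallow_untyped_defs flag enforces.
# Flagged under disallow_untyped_defs: no parameter or return annotations.
def scale(x, factor):
    return x * factor

# Accepted: fully annotated signature (the kind of change this PR makes).
def scale_typed(x: float, factor: float) -> float:
    return x * factor
```

Both run identically at runtime; only the static-checking behavior differs.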

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127841
Approved by: https://github.com/oulgen
2024-06-08 18:36:48 +00:00
Richard Barnes
6370ac0251 [codemod] Replace hasattr with getattr in caffe2/torch/ao/quantization/stubs.py (#100597)
Summary:
The pattern
```
X.Y if hasattr(X, "Y") else Z
```
can be replaced with
```
getattr(X, "Y", Z)
```

The [getattr](https://www.w3schools.com/python/ref_func_getattr.asp) form is more succinct than the [hasattr](https://www.w3schools.com/python/ref_func_hasattr.asp) guard. Please use it where appropriate.
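A concrete before/after of the codemod (the `Config` class and attribute names below are invented for illustration):

```python
# Hypothetical object to demonstrate the codemod; names are illustrative.
class Config:
    timeout = 30  # note: 'retries' is intentionally absent

cfg = Config()

# Before the codemod: conditional expression guarded by hasattr (two lookups).
retries_old = cfg.retries if hasattr(cfg, "retries") else 3

# After the codemod: getattr with a default value (one lookup).
retries_new = getattr(cfg, "retries", 3)

assert retries_old == retries_new == 3
assert getattr(cfg, "timeout", 60) == 30  # existing attributes win over the default
```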

**This diff is very low risk. Green tests indicate that you can safely Accept & Ship.**

Test Plan: Sandcastle

Reviewed By: vkuzo

Differential Revision: D44886422

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100597
Approved by: https://github.com/Skylion007
2023-05-04 16:36:23 +00:00
Xuehai Pan
5b1cedacde [BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite Python built-in class `super()` calls. Only non-semantic changes are applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```
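The core rewrite can be sketched as follows (the classes below are illustrative, not taken from the PR): the Python 2 style `super(Class, self)` form becomes the zero-argument Python 3 form, with identical behavior.

```python
# Illustrative classes showing the super() rewrite this PR performs.
class Base:
    def __init__(self):
        self.ready = True

# Before: Python 2 style with explicit class and instance arguments.
class OldStyle(Base):
    def __init__(self):
        super(OldStyle, self).__init__()

# After: Python 3 zero-argument form; semantics are unchanged.
class NewStyle(Base):
    def __init__(self):
        super().__init__()
```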

Some cases that change the semantics should be kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
Vasiliy Kuznetsov
eb8d06591c quantization: fix bug in QuantWrapper with DeQuant qconfig (#73671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73671

QuantWrapper did not correctly apply qconfig to the dequant.
Therefore, if the user first applied qconfig to their module and
then wrapped it with `QuantWrapper`, the dequant would not get
swapped during the convert step.

The fix is to properly apply the qconfig to the dequant.
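A minimal sketch of the fix pattern, assuming simplified stand-in classes (this is not the actual torch source; the real `QuantWrapper` lives in `torch/ao/quantization/stubs.py`): the wrapper must copy the wrapped module's qconfig onto its dequant stub, otherwise the convert step skips the swap.

```python
# Minimal sketch of the fix pattern with stand-in classes (not torch itself).
class QuantStub:
    qconfig = None

class DeQuantStub:
    qconfig = None

class QuantWrapper:
    def __init__(self, module):
        qconfig = getattr(module, "qconfig", None)
        self.quant = QuantStub()
        self.dequant = DeQuantStub()
        self.module = module
        # The fix: propagate the wrapped module's qconfig to both stubs,
        # so the dequant is also swapped during convert.
        self.quant.qconfig = qconfig
        self.dequant.qconfig = qconfig

class MyModule:
    qconfig = "some_qconfig"  # placeholder for a real QConfig object

w = QuantWrapper(MyModule())
assert w.dequant.qconfig == "some_qconfig"
```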

Test Plan:
```
python test/test_quantization.py TestQuantizeEagerPTQStatic.test_quantwrapper_attaches_qconfig_to_dequant
```

Reviewed By: MaigoAkisame

Differential Revision: D34585260

Pulled By: vkuzo

fbshipit-source-id: 82055a9fa7fc13a714fe460deb461c2e87e76b39
(cherry picked from commit c9f392333dd1c005d893bdc2fbafe8a82b317c88)
2022-03-03 15:31:53 +00:00
Supriya Rao
9d52651d4e torch.ao migration: stubs.py phase 1 (#64861)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64861

1. move the file
   ```
   hg mv caffe2/torch/quantization/stubs.py caffe2/torch/ao/quantization/
   ```
2. create a new file in the old location and copy the imports
3. fix all call sites inside `torch`
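Step 2 leaves a backward-compatibility shim at the old path that re-exports the public names from the new `torch.ao` location, so existing imports keep working. A runnable stand-in sketch of that pattern (module names below are placeholders, not the real torch modules):

```python
# Stand-in illustration of the import-shim pattern from step 2.
import sys
import types

# Stand-in for the new location (torch/ao/quantization/stubs.py).
new_mod = types.ModuleType("ao_stubs")
new_mod.QuantStub = type("QuantStub", (), {})
sys.modules["ao_stubs"] = new_mod

# Stand-in for the shim left at the old location: import and re-export.
old_mod = types.ModuleType("old_stubs")
exec("from ao_stubs import QuantStub", old_mod.__dict__)
sys.modules["old_stubs"] = old_mod

# Call sites importing from the old path still get the same class object.
from old_stubs import QuantStub
assert QuantStub is new_mod.QuantStub
```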
ghstack-source-id: 137885365

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: jerryzh168

Differential Revision: D30879678

fbshipit-source-id: a2d24f25d01064212aca15e94e8c78240ba48953
2021-09-13 08:40:29 -07:00