Aaron Orenstein
0b2a3687b9
PEP585 update - torch/fx (#145166)
See #145101 for details.
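For reference, a generic before/after illustration of what a PEP 585 update changes (not this PR's actual diff): standard collections become subscriptable directly, so the `typing` aliases are no longer needed.
```python
# Pre-PEP 585: generic types come from typing aliases.
from typing import Dict, List

def pairs_old(xs: List[int]) -> Dict[str, int]:
    return {str(x): x for x in xs}

# Post-PEP 585 (Python 3.9+): builtins are subscriptable directly.
def pairs_new(xs: list[int]) -> dict[str, int]:
    return {str(x): x for x in xs}
```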
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145166
Approved by: https://github.com/bobrenjc93
2025-01-20 18:11:54 +00:00
Xuehai Pan
abbd71d29d
[BE][Easy] enable PYFMT for torch.fx (#138443)
Reproduce command:
```bash
# Check out the ghstack PR, restore torch/ to its pre-PR state,
# then rerun the formatter to regenerate the same changes:
ghstack checkout https://github.com/pytorch/pytorch/pull/138443
git checkout HEAD~1 torch/
lintrunner -a --take "PYFMT" --all-files
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138443
Approved by: https://github.com/ezyang
2024-10-21 19:15:49 +00:00
Tom Ritchford
c0582fd0f8
Remove unused Python variables in torch/[b-z]* (#136963)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136963
Approved by: https://github.com/ezyang
2024-10-19 16:45:22 +00:00
Aaron Orenstein
ed86ac2f25
[BE] typing for decorators - fx/_compatibility (#134054)
Summary: See #131429
Test Plan: unit tests pass
Differential Revision: D61493706
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134054
Approved by: https://github.com/oulgen
2024-08-26 04:00:27 +00:00
PyTorch MergeBot
945bf78894
Revert "[BE] typing for decorators - fx/_compatibility ( #131568 )"
This reverts commit 193f62fde9.
Reverted https://github.com/pytorch/pytorch/pull/131568 on behalf of https://github.com/clee2000 due to same as https://github.com/pytorch/pytorch/pull/131572#issuecomment-2254328359 but I clicked the wrong link by accident. This is where it actually starts ([comment](https://github.com/pytorch/pytorch/pull/131568#issuecomment-2254330781))
2024-07-28 03:43:39 +00:00
Aaron Orenstein
193f62fde9
[BE] typing for decorators - fx/_compatibility (#131568)
See #131429
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131568
Approved by: https://github.com/justinchuby, https://github.com/oulgen, https://github.com/zou3519
2024-07-25 22:24:19 +00:00
Aaron Orenstein
5a0068cc69
[BE] mypy: disallow untyped decorators (#131428)
Untyped decorators strip the types from the functions they decorate, so even when the underlying function is fully typed, callers get no benefit from its type annotations (see the sketch below).
Step 1 - Enable the error and override in all the offending files.
#131429
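A minimal sketch of the problem and the fix (assuming Python 3.10+ for `ParamSpec`; not the PR's actual code):
```python
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

# Untyped: mypy infers the decorated function as Any, so the
# wrapped signature is lost to all callers.
def untyped(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

# Typed: the decorator declares that it returns a callable with
# the same parameters and return type it received.
def typed(fn: Callable[P, R]) -> Callable[P, R]:
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return fn(*args, **kwargs)
    return wrapper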
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
Aaron Orenstein
038b927590
Flip default value for mypy disallow_untyped_defs [7/11] (#127844)
See #127836 for details.
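For context, a generic illustration (not from the PR) of what `disallow_untyped_defs` rejects once the default is flipped on:
```python
# mypy --disallow-untyped-defs flags this:
def scale(x):  # error: Function is missing a type annotation
    return x * 2

# ...and accepts the fully annotated version:
def scale_typed(x: float) -> float:
    return x * 2
```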
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127844
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843
2024-06-08 18:49:45 +00:00
anjali411
4bf076e964
Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520)
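For reference, a generic example (hypothetical names, not the PR's actual lists) of how `__all__` declares a submodule's public interface:
```python
# mypackage/fx.py -- only the names listed in __all__ are exported
# by `from mypackage.fx import *`; everything else is private.
__all__ = ["public_fn"]

def public_fn() -> str:
    return "exported"

def _private_helper() -> str:  # excluded from star-imports
    return "hidden"
```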
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80520
Approved by: https://github.com/rohan-varma
2022-07-08 14:31:24 +00:00
Drazen Borkovic
9402219a36
Move serialize_module() out of OSS graph_manipulation.py to internal (#80785)
Differential Revision: D37582495
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80785
Approved by: https://github.com/jfix71
2022-07-05 23:39:13 +00:00
Drazen Borkovic
f54098cd3e
Create JSON from new FX IR and lower to LLVM (#77765)
Summary:
Replace TensorView objects with maps for JSONing.
Lower to LLVM.
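A generic illustration of the serialization pattern (hypothetical stand-in class, not this diff's code): plain maps encode directly with `json`, while custom objects do not.
```python
import json
from dataclasses import asdict, dataclass

@dataclass
class TensorMeta:  # hypothetical stand-in for a TensorView-like object
    shape: list[int]
    dtype: str

meta = TensorMeta(shape=[2, 3], dtype="float32")

# json.dumps rejects arbitrary objects, but dicts encode directly,
# hence replacing objects with maps before JSONing.
print(json.dumps(asdict(meta)))  # {"shape": [2, 3], "dtype": "float32"}
```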
Reviewed By: jaybean-dev, jfix71
Differential Revision: D36318989
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77765
Approved by: https://github.com/jfix71, https://github.com/jamesr66a
2022-05-20 03:20:57 +00:00
Jordan Fix
0c91efb64e
[fx/graph_manipulation] Fix _update_weight_fused_dtypes (#77702)
Summary: D36335238 (18e36a6295) wasn't fully working because the previous implementation matched on op names. Instead, use the FX graph itself to find matches.
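A minimal sketch of that approach (generic toy module, not the actual fix): walk the FX graph's `get_attr` nodes to find the constants in use, rather than string-matching op names.
```python
import torch
from torch import fx

class Toy(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))

    def forward(self, x):
        return x @ self.weight

gm = fx.symbolic_trace(Toy())

# get_attr nodes point at exactly the params/buffers the graph uses,
# so matching needs no naming conventions.
for node in gm.graph.nodes:
    if node.op == "get_attr":
        print(node.target, getattr(gm, node.target).dtype)
```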
Differential Revision: D36462875
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77702
Approved by: https://github.com/jamesr66a
2022-05-19 03:28:28 +00:00
Jordan Fix
18e36a6295
[graph_manipulation] Set fused dtypes for all constant params/buffers (#77401)
Summary: We were handling constant attrs in a few different ways before, leading to confusion and missed handling for fused dtypes. This diff consolidates some of that code and fixes the current breakage.
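A generic sketch of the consolidation idea (hypothetical helper, not the diff's code): parameters and buffers are both constant attributes from the graph's point of view, so they can be walked through a single path when stamping dtypes.
```python
import torch

def constant_attrs(mod: torch.nn.Module):
    # One iteration path for both kinds of constant attrs.
    yield from mod.named_parameters()
    yield from mod.named_buffers()

for name, tensor in constant_attrs(torch.nn.BatchNorm1d(4)):
    print(name, tensor.dtype)
```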
Test Plan: CI. Recently broken tests now pass.
Differential Revision: D36335238
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77401
Approved by: https://github.com/jaybean-dev, https://github.com/jamesr66a
2022-05-17 07:42:29 +00:00
Jordan Fix
540cb5fee2
[graph_manipulation] Unpack list of outputs (#72940)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72940
As titled.
Reviewed By: jackm321
Differential Revision: D34282062
fbshipit-source-id: 743710c18e1f38286d1b91c91868bb22c760f3ca
(cherry picked from commit fd2bdd189d)
2022-02-17 06:38:52 +00:00
Huamin Li
32dd4a8639
move fx_acc out of pytorch core (#72803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72803
As titled.
Reviewed By: jfix71
Differential Revision: D34101788
fbshipit-source-id: a9fd84671929af21405c049603e9895ec68de3d8
(cherry picked from commit e98fd1c32d)
2022-02-15 16:13:43 +00:00
Jerry Zhang
3d6d4f4322
[fx2trt][quant] Add lowering support for per channel quantization in fx2trt (#64787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64787
This PR adds support for lowering per-channel quantization and dequantization operators in fx2trt. It also extends TensorMeta with extra arguments corresponding to per-channel quantized Tensors. Initially I considered adding a single qparams argument that could capture everything, but we still have some lowering support for fbgemm ops (which take scale and zero_point in the operator interface), so we can move everything to qparams once lowering support for fbgemm ops is deprecated.
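For reference, the eager-mode ops this lowering corresponds to (a generic example, not the fx2trt code): per-channel quantization carries one scale and zero_point per slice along `axis`, which is why TensorMeta needs the extra fields.
```python
import torch

x = torch.randn(3, 4)
scales = torch.tensor([0.1, 0.2, 0.3])           # one scale per channel
zero_points = torch.zeros(3, dtype=torch.int64)  # one zero_point per channel

xq = torch.quantize_per_channel(x, scales, zero_points, axis=0, dtype=torch.qint8)
print(xq.q_per_channel_scales(), xq.q_per_channel_axis())
xdq = xq.dequantize()
```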
Test Plan:
Test for per-channel weight:
```
python torch/fx/experimental/fx2trt/example/quantized_resnet_test.py
```
Update the BC compatibility test expectations for TensorMeta:
```
python test/test_fx.py TestFXAPIBackwardCompatibility.test_class_member_back_compat --accept
```
Imported from OSS
Reviewed By: jfix71, mrshenli, 842974287
Differential Revision: D30879848
fbshipit-source-id: 76c3804bb1d9343183ae53d9f02c1a3bf6c79e1c
2021-09-30 18:54:14 -07:00
James Reed
0559cb37cd
[FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081
Test Plan: Imported from OSS
Reviewed By: jbschlosser, khabinov
Differential Revision: D30967428
Pulled By: jamesr66a
fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
James Reed
cf7409e184
[FX] Move graph_manipulation and param_fetch out of experimental and into passes (#65183)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65183
ghstack-source-id: 138309655
Test Plan: waitforsadcastle
Reviewed By: protonu
Differential Revision: D31007630
fbshipit-source-id: 77d14b284737aabbe2b9e6394177a0c2e40aafba
2021-09-17 09:32:40 -07:00