pytorch/torch/ao
andrewor14 6988e40b48 [quant][fx] Lower operator.matmul in convert_fx (#113954)
Summary: We support lowering `torch.matmul` but not
`operator.matmul`. This commit adds support for the latter,
which enables lowering the shorthand `@`. This addresses
https://github.com/pytorch/pytorch/issues/111450.
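
Background on why the two spellings are equivalent (a minimal stdlib-only sketch, not the PyTorch lowering itself; the `Mat` class is hypothetical and exists only for illustration): Python's `@` operator dispatches through the `__matmul__` protocol, and `operator.matmul` invokes that same protocol, which is why an FX graph can record `a @ b` as a call to `operator.matmul`:

```python
import operator

# Hypothetical stand-in class: any object implementing __matmul__
# works with both the `@` operator and operator.matmul.
class Mat:
    def __init__(self, rows):
        self.rows = rows

    def __matmul__(self, other):
        # Plain-Python matrix multiply, no torch dependency.
        n = len(self.rows)
        m = len(other.rows[0])
        k = len(other.rows)
        return Mat([[sum(self.rows[i][t] * other.rows[t][j] for t in range(k))
                     for j in range(m)] for i in range(n)])

a = Mat([[1, 2], [3, 4]])
b = Mat([[5, 6], [7, 8]])

# The shorthand `@` and operator.matmul hit the same __matmul__ slot:
assert (a @ b).rows == operator.matmul(a, b).rows == [[19, 22], [43, 50]]
```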

Test Plan:
python test/test_quantization.py TestQuantizeFx

Reviewers: jerryzh168

Subscribers: jerryzh168, supriyar
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113954
Approved by: https://github.com/jerryzh168
2023-12-12 00:34:58 +00:00
nn Use \odot everywhere instead of mixing \odot and * for the Hadamard product (#111763) 2023-10-22 21:01:35 +00:00
ns [BE]: Apply RUF015 to torch folder (#113025) 2023-11-07 00:48:15 +00:00
pruning Update mypy to 1.7.0 (#114160) 2023-11-28 06:45:55 +00:00
quantization [quant][fx] Lower operator.matmul in convert_fx (#113954) 2023-12-12 00:34:58 +00:00
__init__.py