pytorch/test/quantization
Shangdi Yu bc5ecf83d7 [training ir migration] Fix quantization tests (#135184)
Summary:
Fixed some quantization tests for the new training IR:

Fixed the batch norm node pattern matcher. In the training IR, graphs contain an `aten.batch_norm` node instead of `aten._native_batch_norm_legit` or `aten._native_batch_norm_legit_no_training`.
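
As an illustration only (a minimal sketch, not the actual matcher code from this PR), a pattern matcher can accept all three batch norm overloads so the same check works for graphs produced by either IR. The helper name `is_batch_norm_node` and the set name `_BATCH_NORM_OPS` are assumptions for this example:

```python
import torch
import torch.fx

# Batch norm ops a matcher may encounter: the training IR emits
# aten.batch_norm, while the older IR emits the _native_batch_norm_legit
# variants (with and without training).
_BATCH_NORM_OPS = {
    torch.ops.aten.batch_norm.default,
    torch.ops.aten._native_batch_norm_legit.default,
    torch.ops.aten._native_batch_norm_legit_no_training.default,
}

def is_batch_norm_node(node: torch.fx.Node) -> bool:
    # An FX node is a batch norm call if it is a call_function node
    # whose target is one of the known batch norm overloads.
    return node.op == "call_function" and node.target in _BATCH_NORM_OPS
```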

Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//caffe2/test:quantization_pt2e
```

Reviewed By: tugsbayasgalan

Differential Revision: D62209819

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135184
Approved by: https://github.com/tugsbayasgalan
2024-09-05 21:19:28 +00:00
| Name | Last commit | Last updated |
|---|---|---|
| ao_migration | Enable UFMT on all of test/quantization/ao_migration &bc (#123994) | 2024-04-13 06:36:10 +00:00 |
| bc | Fix failures when default is flipped for weights_only (#127627) | 2024-08-16 00:22:43 +00:00 |
| core | Add new ops wrapped_linear_prepack and wrapped_quantized_linear_prepacked (#134232) | 2024-08-23 04:54:26 +00:00 |
| eager | Add None return type to init -- tests (#132352) | 2024-08-01 15:44:51 +00:00 |
| fx | Fix failures when default is flipped for weights_only (#127627) | 2024-08-16 00:22:43 +00:00 |
| jit | Add None return type to init -- tests (#132352) | 2024-08-01 15:44:51 +00:00 |
| pt2e | [training ir migration] Fix quantization tests (#135184) | 2024-09-05 21:19:28 +00:00 |
| serialized | | |
| __init__.py | | |