pytorch/test/quantization
Shangdi Yu ad75b09d89 Replace capture_pre_autograd_graph with export_for_training in torch tests (#135623)
Summary: As the title says, replace the deprecated `capture_pre_autograd_graph` API with `export_for_training` in torch tests.
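
For reference, a minimal sketch of the migration pattern this change applies across the tests; the module `M` and the example inputs are hypothetical, not taken from the test suite:

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

example_inputs = (torch.randn(2, 4),)

# Before: the private API returned a GraphModule directly.
#   from torch._export import capture_pre_autograd_graph
#   gm = capture_pre_autograd_graph(M(), example_inputs)

# After: export_for_training returns an ExportedProgram;
# calling .module() yields the GraphModule the tests operate on.
from torch.export import export_for_training
gm = export_for_training(M(), example_inputs).module()
```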

Test Plan:
```
buck2 run 'fbcode//mode/dev-nosan' fbcode//caffe2/test:test_export -- -r test_conv_dynamic
buck2 run 'fbcode//mode/dev-nosan' fbcode//caffe2/test:fx -- -r matcher
buck2 run 'fbcode//mode/dev-nosan' fbcode//caffe2/test/quantization:test_quantization -- -r x86
```

CI
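
For context, the x86 run above exercises the PT2E quantization flow, which now starts from `export_for_training`. A hedged sketch of that flow follows; the model and the single calibration pass are illustrative assumptions, not code from the test suite:

```
import torch
from torch.export import export_for_training
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.x86_inductor_quantizer import (
    X86InductorQuantizer,
    get_default_x86_inductor_quantization_config,
)

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

example_inputs = (torch.randn(2, 4),)

# Export for training, then take the GraphModule for the PT2E flow.
m = export_for_training(M(), example_inputs).module()

# Configure the x86 inductor quantizer with its default config.
quantizer = X86InductorQuantizer()
quantizer.set_global(get_default_x86_inductor_quantization_config())

# Insert observers, calibrate with representative inputs, then convert.
prepared = prepare_pt2e(m, quantizer)
prepared(*example_inputs)  # calibration pass (illustrative)
quantized = convert_pt2e(prepared)
```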

Differential Revision: D62448302

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135623
Approved by: https://github.com/tugsbayasgalan
2024-09-11 19:23:08 +00:00
| Name | Latest commit | Date |
| --- | --- | --- |
| ao_migration | Enable UFMT on all of test/quantization/ao_migration & bc (#123994) | 2024-04-13 06:36:10 +00:00 |
| bc | Fix failures when default is flipped for weights_only (#127627) | 2024-08-16 00:22:43 +00:00 |
| core | Change wrapped_linear_prepack and wrapped_quantized_linear_prepacked to private by adding _ as prefix (#135401) | 2024-09-08 04:16:24 +00:00 |
| eager | Add None return type to `__init__` -- tests (#132352) | 2024-08-01 15:44:51 +00:00 |
| fx | Fix failures when default is flipped for weights_only (#127627) | 2024-08-16 00:22:43 +00:00 |
| jit | Add None return type to `__init__` -- tests (#132352) | 2024-08-01 15:44:51 +00:00 |
| pt2e | Replace capture_pre_autograd_graph with export_for_training in torch tests (#135623) | 2024-09-11 19:23:08 +00:00 |
| serialized | | |
| __init__.py | | |