pytorch/test/quantization/eager
Anthony Barbier 954ce94950 Add __main__ guards to quantization tests (#154728)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In quantization tests:

- Add and use a common `raise_on_run_directly` helper for the case where a user directly runs a test file that is not meant to be run that way; it prints the file the user should have run instead (see the sketch after this list).
- Raise a `RuntimeError` for tests which have been disabled (not run).
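
The helper itself can be very small. Below is a minimal sketch, assuming the helper lives in a shared test utility module (e.g. `torch.testing._internal.common_utils`) and that these eager quantization tests are normally driven by `test/test_quantization.py`; the exact message wording and target path are illustrative, not taken from the PR:

```python
# Hypothetical sketch of the shared helper; the real implementation in
# PyTorch may differ in wording and location.
def raise_on_run_directly(file_to_run: str) -> None:
    """Raise when a test file that must not be run directly is run directly.

    ``file_to_run`` is the test file the user should have run instead.
    """
    raise RuntimeError(
        "This test file is not meant to be run directly; "
        f"run it via:\n\n\tpython {file_to_run}\n"
    )


# Usage in an individual test file such as test_bias_correction_eager.py:
if __name__ == "__main__":
    raise_on_run_directly("test/test_quantization.py")
```

With this guard in place, running an individual test file directly fails fast with a message pointing at the correct entry point, instead of silently running an unsupported configuration.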

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154728
Approved by: https://github.com/ezyang
2025-06-10 19:46:07 +00:00
__init__.py
test_bias_correction_eager.py Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
test_equalize_eager.py Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
test_fuse_eager.py [Easy] enable PYFMT for torch/quantization/eager (#150761) 2025-04-18 05:53:33 +00:00
test_model_numerics.py [Easy] enable PYFMT for torch/quantization/eager (#150761) 2025-04-18 05:53:33 +00:00
test_numeric_suite_eager.py Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
test_quantize_eager_ptq.py [Easy] enable PYFMT for torch/quantization/eager (#150761) 2025-04-18 05:53:33 +00:00
test_quantize_eager_qat.py [Easy] enable PYFMT for torch/quantization/eager (#150761) 2025-04-18 05:53:33 +00:00