pytorch/test/quantization/pt2e
Riley Dulin 3be150653c [torch][ao] Add customizable loss function to NodeAccuracySummary (#136282)
Summary:
Add a customizable loss function callback to NodeAccuracySummary to
allow users to pass in their own loss function.
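The callback pattern described above can be sketched as follows. This is a hypothetical illustration, not the actual torch.ao API: the class name, fields, and signatures are invented, and plain Python sequences stand in for tensors so the sketch stays dependency-free.

```python
# Hypothetical sketch of a NodeAccuracySummary-style comparison that
# accepts a user-supplied loss callback. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable, Sequence


def mse_loss(ref: Sequence[float], actual: Sequence[float]) -> float:
    # Default loss: mean squared error between reference and actual outputs.
    return sum((r - a) ** 2 for r, a in zip(ref, actual)) / len(ref)


@dataclass
class NodeAccuracySummarySketch:
    node_name: str
    # Users can inject their own metric, e.g. L1 distance or an SQNR variant.
    loss_fn: Callable[[Sequence[float], Sequence[float]], float] = mse_loss

    def compare(self, ref: Sequence[float], actual: Sequence[float]) -> float:
        if len(ref) != len(actual):
            # Surface a descriptive error instead of an opaque comparison failure.
            raise ValueError(
                f"size mismatch at node {self.node_name!r}: "
                f"{len(ref)} vs {len(actual)}"
            )
        return self.loss_fn(ref, actual)


summary = NodeAccuracySummarySketch(
    "conv1",
    loss_fn=lambda r, a: sum(abs(x - y) for x, y in zip(r, a)) / len(r),  # custom L1
)
loss = summary.compare([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])  # 1.0 under the L1 loss
```

Defaulting `loss_fn` to a sensible metric keeps existing call sites working while letting callers swap in their own comparison.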

Also, fix some type errors and propagate better exception messages when
unexpected tensor comparisons occur. Finally, enhance the robustness of
`generate_numeric_debug_handle` in the case where it is called multiple
times on the same model, by avoiding reuse of the same IDs.
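The idempotency fix described above can be sketched in miniature: skip nodes that already carry a handle and continue numbering after the largest existing ID, so a second call never hands out a duplicate. This is an illustrative sketch over plain dicts, not the actual `generate_numeric_debug_handle` implementation.

```python
# Illustrative sketch (not the real torch implementation): assign debug
# handles so that repeated calls on the same node list reuse no IDs.

def assign_debug_handles(nodes: list) -> None:
    # Collect IDs already assigned by a previous call.
    existing = [n["debug_handle"] for n in nodes if n.get("debug_handle") is not None]
    # Continue numbering strictly after the largest existing ID.
    next_id = max(existing, default=-1) + 1
    for n in nodes:
        if n.get("debug_handle") is None:
            n["debug_handle"] = next_id
            next_id += 1


nodes = [{"name": "a"}, {"name": "b"}]
assign_debug_handles(nodes)                       # assigns 0 and 1
assign_debug_handles(nodes + [{"name": "c"}])     # second call: "c" gets 2,
                                                  # "a" and "b" keep their IDs
```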

Test Plan: Added a test for this case in `test_numeric_debugger`.

Reviewed By: jerryzh168

Differential Revision: D62898297

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136282
Approved by: https://github.com/jerryzh168
2024-09-24 03:28:12 +00:00
test_duplicate_dq.py | Replace capture_pre_autograd_graph with export_for_training in torch tests (#135623) | 2024-09-11 19:23:08 +00:00
test_graph_utils.py | Add None return type to init -- tests (#132352) | 2024-08-01 15:44:51 +00:00
test_metadata_porting.py | [Dynamo] Trace torch function modes entered outside of torch.compile (#133137) | 2024-09-14 18:52:22 +00:00
test_numeric_debugger.py | [torch][ao] Add customizable loss function to NodeAccuracySummary (#136282) | 2024-09-24 03:28:12 +00:00
test_quantize_pt2e_qat.py | "Remove BLOCK_LIST" (#135729) | 2024-09-12 01:22:06 +00:00
test_quantize_pt2e.py | Fix attr check for quantization spec (#135736) | 2024-09-13 23:01:22 +00:00
test_representation.py | Add None return type to init -- tests (#132352) | 2024-08-01 15:44:51 +00:00
test_x86inductor_quantizer.py | Replace capture_pre_autograd_graph with export_for_training in torch tests (#135623) | 2024-09-11 19:23:08 +00:00
test_xnnpack_quantizer.py | [export][training ir migration] quantized_decomposed.quantize_per_tensor decomposition (#134525) | 2024-09-06 07:06:06 +00:00