pytorch/test/quantization/core/experimental
Natalia Gimelshein b8ce05456c enable cat for cuda bits types (#115044)
It was already working for CPU, so this brings parity.
It also slightly reduces the number of compiled kernels by using OpaqueType.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115044
Approved by: https://github.com/malfet
2023-12-06 00:05:18 +00:00
apot_fx_graph_mode_ptq.py
apot_fx_graph_mode_qat.py
quantization_util.py
test_bits.py enable cat for cuda bits types (#115044) 2023-12-06 00:05:18 +00:00
test_fake_quantize.py
test_float8.py Disallow fp8 type promotion (#113975) 2023-11-20 19:47:43 +00:00
test_linear.py
test_nonuniform_observer.py
test_quantized_tensor.py
test_quantizer.py