pytorch/test/quantization
Eddie Yan 54fe2d0e89 [cuDNN][quantization] skip qlinear test in cuDNN v9.1.0 (#128166)
#120006 unskipped this test only three days ago, so we don't consider it a blocker for cuDNN v9 for now.

CC @atalman

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128166
Approved by: https://github.com/atalman, https://github.com/nWEIdia
2024-06-06 21:43:29 +00:00
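
The snippet below is not the diff from #128166; it is a minimal sketch of how a quantized-linear test can be conditionally skipped on a specific cuDNN release, assuming torch.backends.cudnn.version() reports an integer such as 90100 for cuDNN 9.1.0. The test class name and test body are placeholders, not code from test/quantization.

    import unittest

    import torch

    # torch.backends.cudnn.version() returns an int (e.g. 90100), or None when
    # cuDNN is not available in this build.
    CUDNN_VERSION = torch.backends.cudnn.version()


    class TestQlinearSkipExample(unittest.TestCase):  # hypothetical test class
        @unittest.skipIf(
            CUDNN_VERSION == 90100,  # assumed encoding of cuDNN 9.1.0
            "qlinear is known to fail on cuDNN 9.1.0; see pytorch/pytorch#128166",
        )
        def test_qlinear(self):
            # Placeholder standing in for the real quantized-linear test body.
            self.assertTrue(True)


    if __name__ == "__main__":
        unittest.main()
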
ao_migration   Enable UFMT on all of test/quantization/ao_migration &bc (#123994)  2024-04-13 06:36:10 +00:00
bc             Enable UFMT on all of test/quantization/ao_migration &bc (#123994)  2024-04-13 06:36:10 +00:00
core           [cuDNN][quantization] skip qlinear test in cuDNN v9.1.0 (#128166)  2024-06-06 21:43:29 +00:00
eager          Support min/max carry over for eager mode from_float method (#127309)  2024-05-29 19:33:26 +00:00
fx             [BE]: Update ruff to v0.4.4 (#125031)  2024-05-12 20:02:37 +00:00
jit            Enable UFMT on all of test/quantization/jit &pt2e (#124010)  2024-04-14 06:07:23 +00:00
pt2e           Revert "[Quant][PT2E] enable qlinear post op fusion for dynamic quant & qat (#122667)"  2024-05-21 13:45:07 +00:00
serialized
__init__.py