pytorch/torch/ao/quantization
Latest commit: beae7725be — Revert "Tighten type hints for tensor arithmetic (#135392)" (PyTorch MergeBot, 2024-11-08 23:44:41 +00:00)

This reverts commit d378819068.

Reverted https://github.com/pytorch/pytorch/pull/135392 on behalf of https://github.com/ZainRizvi because it is breaking internally; see D65641103 for details ([comment](https://github.com/pytorch/pytorch/pull/135392#issuecomment-2465906839)).
| Name | Last commit | Date |
| --- | --- | --- |
| `backend_config/` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `experimental/` | [CODEMOD][caffe2] use npt.NDArray instead of np.ndarray in type annotations (#136288) | 2024-09-19 12:40:36 +00:00 |
| `fx/` | Add missing mappings to support torch.uint16 in quantization and export (#136547) | 2024-10-01 00:01:01 +00:00 |
| `pt2e/` | [numerical debugger] bumped up the starting handler id (#139666) | 2024-11-07 01:00:43 +00:00 |
| `quantizer/` | [AOTI] Use len(serialized_weights) when calculating consts_size (#139054) | 2024-10-31 09:54:16 +00:00 |
| `__init__.py` | [BE]: Update mypy to 1.11.2 (#133816) | 2024-09-16 19:44:11 +00:00 |
| `_correct_bias.py` | [BE]: Update mypy to 1.11.2 (#133816) | 2024-09-16 19:44:11 +00:00 |
| `_equalize.py` | Revert "Tighten type hints for tensor arithmetic (#135392)" | 2024-11-08 23:44:41 +00:00 |
| `_learnable_fake_quantize.py` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `fake_quantize.py` | Add None return type to init (#132335) | 2024-08-01 15:26:45 +00:00 |
| `fuse_modules.py` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `fuser_method_mappings.py` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `observer.py` | Add uint16 support for observer (#136238) | 2024-09-18 23:52:18 +00:00 |
| `pattern.md` | | |
| `qconfig_mapping.py` | Add None return type to init (#132335) | 2024-08-01 15:26:45 +00:00 |
| `qconfig.py` | Revise CPU vectorization ISA support API (#135075) | 2024-09-05 12:14:56 +00:00 |
| `quant_type.py` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `quantization_mappings.py` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `quantize_fx.py` | Add None return type to init (#132335) | 2024-08-01 15:26:45 +00:00 |
| `quantize_jit.py` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `quantize_pt2e.py` | Change to export_for_training in quantize_pt2e tests (#137233) | 2024-10-04 18:33:02 +00:00 |
| `quantize.py` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `stubs.py` | [BE] enable UFMT for torch/ao/quantization/ (#128863) | 2024-07-25 04:17:54 +00:00 |
| `utils.py` | torch/ao/quantization/utils.py: Moving eps to targeted device to avoid device mismatch issue (#135204) | 2024-10-15 14:58:55 +00:00 |