| Name | Last commit message | Commit date |
| --- | --- | --- |
| backend_config | [quant][executorch] Support inception_v4 in examples (#108382) | 2023-09-08 17:39:31 +00:00 |
| experimental | upgrade lintrunner to the lowest supported versions on python 3.12 (#113562) | 2023-11-15 18:12:01 +00:00 |
| fx | CPU Publish: Fix Assign device error, when module has multiple devices (#109149) (#113509) | 2023-11-14 06:15:32 +00:00 |
| pt2e | [Quant] [PT2] Enable Inplace Dropout in _move_exported_model_to_eval (#114725) | 2023-11-30 04:43:22 +00:00 |
| quantizer | [Quant] [PT2] Add Hardtanh and ReLU6 into X86InductorQuantizer Conv2d Unary Annotation (#114579) | 2023-11-28 07:18:00 +00:00 |
| __init__.py | docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) | 2023-11-15 00:59:44 +00:00 |
| _correct_bias.py | docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) | 2023-11-15 00:59:44 +00:00 |
| _equalize.py | docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) | 2023-11-15 00:59:44 +00:00 |
| _learnable_fake_quantize.py | docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) | 2023-11-15 00:59:44 +00:00 |
| fake_quantize.py | docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) | 2023-11-15 00:59:44 +00:00 |
| fuse_modules.py | docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) | 2023-11-15 00:59:44 +00:00 |
| fuser_method_mappings.py | Update mypy to 1.7.0 (#114160) | 2023-11-28 06:45:55 +00:00 |
| observer.py | [BE]: Enable ruff rule PIE800 - unnecessary nested dict expansion (#113880) | 2023-11-16 22:34:38 +00:00 |
| pattern.md | | |
| qconfig_mapping.py | [ao] fixing quantized prelu workflow (#103455) | 2023-06-23 16:45:40 +00:00 |
| qconfig.py | Back out "Enable pickling model prepared with QAT qconfig" (#110392) | 2023-10-05 14:41:00 +00:00 |
| quant_type.py | [BE] Enable ruff's UP rules and autoformat ao/ (#105430) | 2023-07-19 13:44:37 +00:00 |
| quantization_mappings.py | [BE] Enable ruff's UP rules and autoformat ao/ (#105430) | 2023-07-19 13:44:37 +00:00 |
| quantize_fx.py | [quant][pt2e] Disable remove_qconfig (#111000) | 2023-10-11 19:43:46 +00:00 |
| quantize_jit.py | Fix typos under torch/ao directory (#97679) | 2023-04-10 22:25:15 +00:00 |
| quantize_pt2e.py | [quant][pt2e] Add transform_for_annotation method in Quantizer (#113115) | 2023-11-09 20:23:29 +00:00 |
| quantize.py | [ao] Support Subclasses of FloatFunctional in eager mode prepare (#109646) | 2023-09-20 08:09:55 +00:00 |
| stubs.py | [codemod] Replace hasattr with getattr in caffe2/torch/ao/quantization/stubs.py (#100597) | 2023-05-04 16:36:23 +00:00 |
| utils.py | [BE]: Update lintrunner mypy to 1.6.0 (#111375) | 2023-10-17 01:22:06 +00:00 |