pytorch/test/quantization
Kimish Patel ffc0c46092 [Quantization] Add metadata porting for nodes added by quantization (#107107)
Summary:
This diff adds metadata to Q/DQ nodes by inferring the
quantization intent from node annotations. Annotations on a node are
the way a user specifies how a node or subgraph is supposed to be
quantized. We use that same information to copy metadata onto the Q/DQ
nodes from the appropriate source nodes.
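
To make the flow concrete, below is a minimal, hypothetical sketch (not code from this PR) of the PT2E quantization flow in which a quantizer annotates nodes and prepare/convert insert the Q/DQ nodes whose `node.meta` can then be inspected. The toy model and the printed metadata keys are illustrative assumptions.

```python
# Minimal sketch (assumptions: toy model, default XNNPACKQuantizer config) of
# the PT2E flow that produces the Q/DQ nodes whose metadata is being ported.
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        return self.linear(x)


example_inputs = (torch.randn(1, 8),)
m = capture_pre_autograd_graph(M().eval(), example_inputs)

# The quantizer records the quantization intent as annotations in node.meta;
# prepare/convert consume those annotations to insert observers and, later,
# quantize/dequantize nodes.
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
m = prepare_pt2e(m, quantizer)
m(*example_inputs)  # calibration pass
m = convert_pt2e(m)

# Inspect what metadata the inserted Q/DQ nodes carry.
for node in m.graph.nodes:
    if node.op == "call_function" and "quantize_per_tensor" in str(node.target):
        print(node.target, sorted(node.meta.keys()))
```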

Differential Revision: [D48488416](https://our.internmc.facebook.com/intern/diff/D48488416)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107107
Approved by: https://github.com/jerryzh168
ghstack dependencies: #107105, #107106, #107899, #107900
2023-09-02 06:38:14 +00:00
ao_migration ao migration: remove package test as this behavior is tested by other things (#94422) 2023-02-13 16:33:40 +00:00
bc [BE] Enable ruff's UP rules and autoformat test/ (#105434) 2023-07-19 20:36:06 +00:00
core [Quant] Add int8 linear op impl for quantization PT2E with Inductor. input is an int8 CPU tensor; weight is an int8 MkldnnCPU tensor. (#105818) 2023-08-27 08:13:12 +00:00
eager [pytorch][ao] Add torch.matmul in FloatFunctional/QFunctional (#106831) 2023-08-10 22:43:36 +00:00
fx Revert "[quant][pt2e] store scale/zero_point as tensor attributes to support serialization (#105894)" 2023-07-28 01:16:02 +00:00
jit [BE] Enable ruff's UP rules and autoformat test/ (#105434) 2023-07-19 20:36:06 +00:00
pt2e [Quantization] Add metadata porting for nodes added by quantization (#107107) 2023-09-02 06:38:14 +00:00
serialized [ao] fix incorrect integer cast on histogram observer bounds (#90355) 2022-12-12 20:30:44 +00:00
__init__.py