Commit Graph

1019 Commits

Vasiliy Kuznetsov
565cf47abf Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66380

Description:
1. creates doc pages for Eager and FX numeric suites
2. adds a link from main quantization doc to (1)
3. formats docblocks in Eager NS to render well
4. adds example code and docblocks to FX numeric suite

Test Plan:
```
cd docs
make html
python -m http.server
// renders well
```

Reviewed By: jerryzh168

Differential Revision: D31543173

Pulled By: vkuzo

fbshipit-source-id: feb291bcbe92747495f45165f738631fa5cbffbd
2021-10-11 18:47:58 -07:00
Vasiliy Kuznetsov
8b1258698e Improve quantization API docs (#66379)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66379

Description:

Creates a quantization API reference and fixes all the docblock errors.

This is #66122 to #66210 squashed together

Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, looks good
```

Reviewed By: ejguan

Differential Revision: D31543172

Pulled By: vkuzo

fbshipit-source-id: 9131363d6528337e9f100759654d3f34f02142a9
2021-10-11 18:46:11 -07:00
Eshika Shah
88ed93c2ca Fix type checking errors in torch/quantization/fx/qconfig_utils.py (#66428)
Summary:
- [x] Fix the Pyre type checking errors in `torch/quantization/fx/qconfig_utils.py`
```
torch/quantization/fx/qconfig_utils.py:241:46 Incompatible variable type [9]: prepare_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/fx/qconfig_utils.py:267:46 Incompatible variable type [9]: convert_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/fx/qconfig_utils.py:284:43 Incompatible variable type [9]: fuse_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
```
Fixes the issue: [MLH-Fellowship/pyre-check/issues/73](https://github.com/MLH-Fellowship/pyre-check/issues/73)
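The standard fix for this class of Pyre error [9] is to declare the parameter as `Optional` and normalize `None` inside the function body; a minimal sketch (the function name and body here are illustrative, not the actual PR diff):

```python
from typing import Any, Dict, Optional

# Before (Pyre error [9]): the default None contradicts the declared type.
# def prepare(prepare_custom_config_dict: Dict[str, Any] = None): ...

def prepare(prepare_custom_config_dict: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # Normalize None to an empty dict so downstream code can index safely.
    if prepare_custom_config_dict is None:
        prepare_custom_config_dict = {}
    return prepare_custom_config_dict
```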

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66428

Reviewed By: grievejia

Differential Revision: D31545215

Pulled By: 0xedward

fbshipit-source-id: 767ae7888854c2eec2ecf14855a5b011110b9271
2021-10-11 16:48:11 -07:00
Mike Ruberry
9971113340 Revert D31447612: Create a documentation page for FX graph mode quantization APIs
Test Plan: revert-hammer

Differential Revision:
D31447612 (a89ac3138e)

Original commit changeset: 07d0a6137f15

fbshipit-source-id: f2cba7d835011500580b4ab9cff72171280ee18b
2021-10-10 01:51:13 -07:00
Mike Ruberry
b85fd4c54f Revert D31447613: Create separate documentation pages for quantization observers and fake_quants
Test Plan: revert-hammer

Differential Revision:
D31447613 (f0fa3d1110)

Original commit changeset: 63b4cf518bad

fbshipit-source-id: 67de592d1e12a5149cdb22b0725caad063f94476
2021-10-10 01:51:11 -07:00
Mike Ruberry
10633460ce Revert D31447614: Create a documentation page for torch.ao.quantization.QConfig
Test Plan: revert-hammer

Differential Revision:
D31447614 (7332ed13ed)

Original commit changeset: 5d9dd2a4e864

fbshipit-source-id: 6ac15a956222ca61f7fbb75ed36bcc58b23f0f36
2021-10-10 01:51:09 -07:00
Mike Ruberry
ad0accdecd Revert D31447610: Quantization docs: add pages for Numeric Suite (Eager and FX)
Test Plan: revert-hammer

Differential Revision:
D31447610 (9539e6216b)

Original commit changeset: 441170c4a6c3

fbshipit-source-id: b49bff54405cdb8465397077e38506a36b277921
2021-10-10 01:49:19 -07:00
Vasiliy Kuznetsov
9539e6216b Quantization docs: add pages for Numeric Suite (Eager and FX) (#66222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66222

Description:
1. creates doc pages for Eager and FX numeric suites
2. adds a link from main quantization doc to (1)
3. formats docblocks in Eager NS to render well
4. adds example code and docblocks to FX numeric suite

Test Plan:
```
cd docs
make html
python -m http.server
// renders well
```

Reviewed By: jerryzh168

Differential Revision: D31447610

Pulled By: vkuzo

fbshipit-source-id: 441170c4a6c3ddea1e7c7c5cc2f1e1cd5aa65f2f
2021-10-09 06:46:06 -07:00
Vasiliy Kuznetsov
7332ed13ed Create a documentation page for torch.ao.quantization.QConfig (#66129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66129

Adds a documentation page for `torch.ao.quantization.QConfig`. It is useful
for this to have a separate page since it is shared between Eager and FX graph
mode quantization.

Also, ensures that all important functions and module attributes in this
module have docstrings, so users can discover these without reading the
source code.
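Conceptually, a QConfig is a named pair of observer factories, one for activations and one for weights, which the prepare step instantiates per module. A self-contained toy sketch (the `QConfig` namedtuple and `MinMaxObserver` below are simplified stand-ins for the real `torch.ao.quantization` classes):

```python
from collections import namedtuple

# Stand-in for torch.ao.quantization.QConfig: a pair of observer factories.
QConfig = namedtuple("QConfig", ["activation", "weight"])

class MinMaxObserver:
    """Toy observer tracking the running min/max of observed values."""
    def __init__(self):
        self.min_val, self.max_val = float("inf"), float("-inf")
    def __call__(self, x):
        self.min_val = min(self.min_val, x)
        self.max_val = max(self.max_val, x)
        return x

qconfig = QConfig(activation=MinMaxObserver, weight=MinMaxObserver)
act_obs = qconfig.activation()  # instantiated per-module, as prepare() would
for v in (0.5, -1.0, 2.0):
    act_obs(v)
```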

Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, renders correctly
```

Reviewed By: jerryzh168

Differential Revision: D31447614

Pulled By: vkuzo

fbshipit-source-id: 5d9dd2a4e8647fa17b96cefbaae5299adede619c
2021-10-09 06:45:58 -07:00
Vasiliy Kuznetsov
f0fa3d1110 Create separate documentation pages for quantization observers and fake_quants (#66125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66125

Before this PR, the documentation for observers and fake_quants was inlined in the
Eager mode quantization page.  This was hard to discover, especially
since that page is really long, and we now have FX graph mode quantization reusing
all of this code.

This PR moves observers and fake_quants into their own documentation pages. It also
adds docstrings to all user facing module attributes such as the default observers
and fake_quants, so people can discover them from documentation without having
to inspect the source code.

For now, enables autoformatting (which means all public classes, functions, members
with docstrings will get docs).  If we need to exclude something in these files from
docs in the future, we can go back to manual docs.

Test Plan:
```
cd docs
make html
python -m http.server
// inspect docs on localhost, renders correctly
```

Reviewed By: dagitses

Differential Revision: D31447613

Pulled By: vkuzo

fbshipit-source-id: 63b4cf518badfb29ede583a5c2ca823f572c8599
2021-10-09 06:45:56 -07:00
Vasiliy Kuznetsov
a89ac3138e Create a documentation page for FX graph mode quantization APIs (#66122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66122

Description:

Adds a documentation page for FX graph mode quantization APIs which
reads from the docstrings in `quantize_fx`, and links it from the main
quantization documentation page.

Also, updates the docstrings in `quantize_fx` to render well with reStructuredText.

Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, looks good
```

Reviewed By: dagitses

Differential Revision: D31447612

Pulled By: vkuzo

fbshipit-source-id: 07d0a6137f1537af82dce0a729f9617efaa714a0
2021-10-09 06:44:38 -07:00
Eshika Shah
85b562dd2b Fix type checking errors in fx/utils.py (#66311)
Summary:
- [x] Fix the Pyre type checking errors in `torch/quantization/fx/utils.py`
```
torch/quantization/fx/utils.py:490:4 Incompatible variable type [9]: target_module_type is declared to have type `Type[nn.modules.module.Module]` but is used as type `None`.
```
Fixes the issue: [MLH-Fellowship/pyre-check/issues/75](https://github.com/MLH-Fellowship/pyre-check/issues/75)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66311

Reviewed By: pradeep90

Differential Revision: D31506399

Pulled By: 0xedward

fbshipit-source-id: 3d866fba6005452378d4a2613b8689fa2d7a8b67
2021-10-08 19:14:22 -07:00
Ben Koopman
a58ff186e8 [quant][embedding qat] Add basic EmbeddingBag QAT fakeQuant workflow (#65443)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65443

Test Plan: Imported from OSS

Reviewed By: dagitses, supriyar

Differential Revision: D31456445

Pulled By: b-koopman

fbshipit-source-id: 0edda6e272d9005fce65f2ba6a5e6abc831836de
2021-10-07 20:19:29 -07:00
Supriya Rao
8a974a482c [quant] Add support for quantization of Embedding{Bag} in dynamic quant APIs (#65674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65674

Before this PR, users had to use the eager mode static quantization APIs to quantize Embedding/EmbeddingBag modules.
With this PR they can use either the static or dynamic quantization APIs for Embedding quantization.

The only qconfig supported for embedding quantization is float_qparams_weight_only_qconfig, which is currently enforced in the from_float
method of the quantized Embedding/EmbeddingBag modules.

To combine embedding quantization with Linear dynamic quantization, users can use the qconfig_dict to specify a different qconfig for each module type.

The prepare/convert APIs can still be used to quantize Embeddings, with the caveat that users need to ensure the input to Embedding ops is FP32.
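The per-module-type dispatch that a qconfig_dict provides can be sketched as follows (the classes and qconfig names below are illustrative stand-ins, not the real torch API):

```python
# Toy stand-ins for module classes.
class Embedding: pass
class Linear: pass

# Different qconfig per module type, as with the "object_type" entries
# of an FX-style qconfig_dict. Qconfig values are stand-in strings here.
qconfig_dict = {
    "object_type": [
        (Embedding, "float_qparams_weight_only_qconfig"),
        (Linear, "default_dynamic_qconfig"),
    ]
}

def qconfig_for(module):
    """Return the qconfig configured for this module's type, if any."""
    for mod_type, qconfig in qconfig_dict["object_type"]:
        if isinstance(module, mod_type):
            return qconfig
    return None
```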

Addresses Issue #65185
ghstack-source-id: 139935419

Test Plan:
python test/test_quantization.py

Imported from OSS

Reviewed By: gchanan

Differential Revision: D31211199

fbshipit-source-id: 8c747881caee5ccbf8b93c6704b08d132049dea4
2021-10-06 23:19:38 -07:00
Peter Bell
747a5782e3 [quant][fx] Don't assume bias is a keyword argument (#61647)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61647

`prepare_fx` currently assumes that bias is always a positional argument to
convolutions, and only a keyword argument to other functions. This happens to work
today due to a quirk in how `__torch_function__` is handled for python
functions but shouldn't be considered stable.

Instead, we should support `bias` for both positional and keyword forms.
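Supporting both forms amounts to looking the argument up by name in kwargs first and falling back to its positional slot; a hedged sketch using `inspect` (the helper name and toy op are illustrative, not the PR's actual code):

```python
import inspect

def get_arg(func, args, kwargs, name):
    """Fetch argument `name` whether it was passed positionally or by keyword."""
    if name in kwargs:
        return kwargs[name]
    sig = inspect.signature(func)
    params = list(sig.parameters)
    idx = params.index(name)
    if idx < len(args):
        return args[idx]
    # Not supplied at all: fall back to the declared default.
    return sig.parameters[name].default

# Toy stand-in for a functional op with an optional bias.
def conv(input, weight, bias=None):
    return bias
```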

cc jerryzh168 jianyuh raghuramank100 jamesr66a vkuzo

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D31401360

Pulled By: albanD

fbshipit-source-id: 1e2f53d80e2176b870f326dc498e251e2386136e
2021-10-06 07:25:47 -07:00
Zafar
0d020effab [quant] Fix the parts that were missing after initial migration (#66058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66058

After the initial migration from `torch.quantization` to `torch.ao.quantization`, some of the files did not change.
This happened because the migration was done in parallel, and some of the files were landed while the others were still in the original location.
This is the last fix in the AO migration phase 1, which completely enables the ao.quantization namespace.

Test Plan: `python test/test_quantization.py`

Reviewed By: vkuzo

Differential Revision: D31366066

Pulled By: z-a-f

fbshipit-source-id: bf4a74885be89d098df2d87e685795a2a64026c5
2021-10-05 11:45:37 -07:00
Supriya Rao
458a00bacb Back out "[quant] update fused_obs_fake_quant op to accept output_fake_quant argument" (#66063)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66063

Original commit changeset: bffe776216d0

Test Plan: CI

Reviewed By: vkuzo

Differential Revision: D31347042

fbshipit-source-id: f56f628dc4690187bf284a8f2fda4c6aae10c1d6
2021-10-05 11:02:54 -07:00
Zafar
c27b427cd9 [sparsity] Add m-out-of-n support in the WeightNormSparsifier (#65295)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65295

The m-out-of-n is implemented as follows:

1. Compute the blocks that need to be sparsified using the weight-norm criterion
2. Within each block below the threshold find the smallest absolute value elements
3. Zero out only the smallest values within each block

m-out-of-n describes a sparsification scheme in which, within a block of "n" elements, only "m" of them are zeroed out.
Block sparsity, with the whole block being all zeros, is the special case m == n: the whole block is reset.

This echoes the implementation described in https://github.com/pytorch/pytorch/issues/59835,
and meets the requirements of NVIDIA cuSPARSELt.
To support CUDA (2:4) sparsity, one would need to set the sparsity_level to 1.0.
That translates to every block of shape 1x4 within a tensor being sparsified with the 2-out-of-4 scheme.
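The three steps above, applied to a single block, can be sketched in pure Python (the helper name is illustrative; the real sparsifier operates on tensors and masks whole models):

```python
def m_out_of_n_mask(block, m):
    """Return a 0/1 mask zeroing the m smallest-magnitude elements of a block.

    E.g. m=2 on a length-4 block gives 2-out-of-4 sparsity; m == len(block)
    is the block-sparsity special case where the whole block is reset.
    """
    smallest = sorted(range(len(block)), key=lambda i: abs(block[i]))[:m]
    return [0 if i in smallest else 1 for i in range(len(block))]
```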

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D31186828

Pulled By: z-a-f

fbshipit-source-id: 7bd3e2707915b90f4831859781fc6e25f716c618
2021-10-01 03:19:15 -07:00
Zafar
8b1aa85388 [sparsity] Change API to take FQNs as configuration (#65296)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65296

The original API described in the https://github.com/pytorch/pytorch/issues/59835
assumed that the per-layer configuration would take a module/layer
reference. However, a more useful approach is to refer to the layers
by their fully qualified names (FQN). That allows us to store the
configuration in a file without serializing the models.

We define a layer's FQN as its "path" within a model. For example,
if one can refer to a submodule using `model.layer0.sublayerX`, the FQN
of sublayerX is `'layer0.sublayerX'`.
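Resolving an FQN back to a module is a simple attribute walk; a sketch with plain objects standing in for `nn.Module` instances (`torch.nn.Module.get_submodule` provides this for real models):

```python
def get_module_by_fqn(model, fqn):
    """Resolve a fully qualified name like 'layer0.sublayerX' to a submodule."""
    module = model
    for part in fqn.split("."):
        module = getattr(module, part)
    return module

# Minimal stand-in model tree (plain objects instead of nn.Module).
class Obj:
    pass

model = Obj()
model.layer0 = Obj()
model.layer0.sublayerX = Obj()
```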

Test Plan:
```
python test/test_ao_sparsity.py -- TestBaseSparsifier
buck test mode/opt //caffe2:test -- TestBaseSparsifier
```

Reviewed By: gchanan

Differential Revision: D31186830

Pulled By: z-a-f

fbshipit-source-id: d8d87f1c054e5c10d470e67837476a11e0a9b1d4
2021-10-01 03:17:31 -07:00
Supriya Rao
4666e3f192 [quant] update fused_obs_fake_quant op to accept output_fake_quant argument (#65621)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65621

Add a new attribute to FusedMovingAvgObsFakeQuantize that controls whether the fake quant operation should be applied at the output of a particular layer. The motivation is to give users additional control over the numerics of the fake_quant operators during training. It defaults to True (always fake quantize the output).

Note: We will still observe the tensors as before (only the fake_quant operation is controlled by this flag)

For example
```
input model
x -> fc1 -> fc2 -> non_quantizable_op -> fc3

After fake_quant
x -> fake_quant(x) -> fc1 -> fake_quant(fc1) -> fc2 -> fake_quant(fc2) -> non_quantizable_op -> fake_quant() -> fc3 -> fake_quantize(fc3)

With output_fake_quant disabled at the output of fc2 and fc3 (since their outputs are non-quantizable)
x -> fake_quant(x) -> fc1 -> fake_quant(fc1) -> fc2 -> non_quantizable_op -> fake_quant() -> fc3
```

Test Plan: ./buck-out/gen/caffe2/test/quantization_fx\#binary.par -r test_disable_output_fake_quant

Reviewed By: jerryzh168

Differential Revision: D31174526

fbshipit-source-id: bffe776216d041fb09133a6fb09bfc2c0bb46b89
2021-09-30 01:08:01 -07:00
Charles David Hernandez
6d4b93bd96 [quant] adding memoryless observers for embeddingbag QAT work (#65699)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65699

related to: https://github.com/pytorch/pytorch/pull/65443#discussion_r715132425

The QAT and PAT (pruning aware training) support for embedding bags needs a memoryless observer to work properly. This is necessitated by the changing pruned/non-pruned weights during training which can significantly change the quantization parameters.

This PR adds a memoryless flag to the simpler observer classes (not moving average since those explicitly have memory)

In addition to the above, this PR alters the reset_min_max_vals
function of MinMaxObserver so that it preserves the device of the
existing self.min_val and self.max_val; previously the reset did not
preserve the device set at initialization (via factory_kwargs)
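The device-preservation point can be illustrated with a toy observer in which (value, device) tuples stand in for tensors: the fix is that reset recreates the buffers on whatever device they already occupy, rather than on a default device.

```python
class MinMaxObserver:
    """Toy observer; (value, device) tuples stand in for real tensors."""
    def __init__(self, device="cpu"):
        self.device = device
        self.min_val = (float("inf"), device)
        self.max_val = (float("-inf"), device)

    def reset_min_max_vals(self):
        # Recreate the buffers on the device of the existing values,
        # not on an implicit default device.
        self.min_val = (float("inf"), self.min_val[1])
        self.max_val = (float("-inf"), self.max_val[1])

obs = MinMaxObserver(device="cuda:1")
obs.min_val = (0.5, "cuda:1")  # pretend some stats were recorded
obs.reset_min_max_vals()
```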

Test Plan:
python test/test_quantization.py TestObserver

(added test_memoryless_minmaxobserver, test_memoryless_per_channel_minmaxobserver, test_memoryless_histogramobserver)

Imported from OSS

Reviewed By: supriyar

Differential Revision: D31209773

fbshipit-source-id: 44a63298e44880fbd3576f49ac568e781f3fd79a
2021-09-30 00:55:32 -07:00
Zafar Takhirov
c7ef620a14 [quant] Add imports to the torch/ao/quantization/__init__.py (#64911)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64911

The import statements that involve the `quantize.py` were not added to the module level __init__ file. Those imports are necessary to mimic the behavior of the old import locations. Otherwise, the user would need to change their import statements to `from torch.ao.quantization.quantize import quantize` (instead of `from torch.ao.quantization import quantize`).
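The shim pattern described here, with the old module path re-exporting names from the new location, can be sketched with synthetic modules (the module names below are toy stand-ins, not the real torch packages):

```python
import sys
import types

# Build a toy "new location" module holding the real implementation...
new_mod = types.ModuleType("ao_quantization_quantize")
new_mod.quantize = lambda model: f"quantized({model})"
sys.modules["ao_quantization_quantize"] = new_mod

# ...and a shim at the "old location" that re-exports it, so existing
# `from old_quantization import quantize` statements keep working.
old_mod = types.ModuleType("old_quantization")
old_mod.quantize = new_mod.quantize
sys.modules["old_quantization"] = old_mod

from old_quantization import quantize
```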

Another change in this diff is that we don't use `__all__` anymore. The all dunder was never used in quantization anyway, and just creates a potential bug when using `from ... import *`.
ghstack-source-id: 139342483

Test Plan: `buck test mode/dev //caffe2/test:quantization`

Reviewed By: vkuzo

Differential Revision: D30897663

fbshipit-source-id: a7b4919a191755e3ba690a79ce3362889f416689
2021-09-29 19:08:45 -07:00
Zafar
609384c056 [sparsity][doc] Docstring for WeightNormSparsifier (#65294)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65294

This adds the docstring documentation to the WeightNormSparsifier and adds the typehints for the constructor args.
Note, this does not require testing as only the doc is changed.

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D31186827

Pulled By: z-a-f

fbshipit-source-id: c5010c9bba25b074c4cc6c88f251474b758f950d
2021-09-28 14:14:51 -07:00
Zafar
92ee5cc2e2 [sparsity] Fix for accumulation bug in WeightNormSparsifier (#65293)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65293

This fixes a bug in the WeightNormSparsifier, where the mask is being multiplied by the newly computed mask.
Because the mask elements are binary 0/1, this accumulates the mask over every iteration, eventually collapsing the mask to zero.
This bug accidentally bled through from old versions.
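The bug is easy to reproduce with binary lists standing in for mask tensors: multiplying the old mask by each new one accumulates zeros across iterations, while simple assignment does not.

```python
def update_mask_buggy(mask, new_mask):
    # Bug: multiplying binary masks accumulates zeros across iterations;
    # once an element is zeroed it can never come back.
    return [m * n for m, n in zip(mask, new_mask)]

def update_mask_fixed(mask, new_mask):
    # Fix: replace the mask with the newly computed one each step.
    return list(new_mask)

steps = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1]]

buggy = [1, 1, 1, 1]
for s in steps:
    buggy = update_mask_buggy(buggy, s)

fixed = [1, 1, 1, 1]
for s in steps:
    fixed = update_mask_fixed(fixed, s)
```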

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D31186829

Pulled By: z-a-f

fbshipit-source-id: 3f5b2c833148ab0bd8084e7410ce398f1252e65e
2021-09-28 14:14:49 -07:00
Zafar
a90912ecc5 [sparsity] Remove the pack_param from the sparsifier state_dict (#65292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65292

That was the original design, which we decided to simplify by removing the packing in the sparsifier.
The state of the sparsifier is saved directly, and the old behavior accidentally bled through to the current version.
This change removes the `_pack_params` method, and changes the state_dict to include the state directly.
We don't have to change the load_state_dict, as it will work with either the old or the new format.

The main reason for this PR is the simplification. The original design didn't achieve anything useful by packing the sparsification parameters.

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D31186826

Pulled By: z-a-f

fbshipit-source-id: 4ad72a7e669f048d2f2d269269ee11b63fa169db
2021-09-28 14:12:52 -07:00
Jerry Zhang
b77c979102 [quant][fx][graphmode] Make FixedQParam ops work for dtypes other than quint8 (#65484)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65484

This PR makes sure we only use FixedQParamFakeQuantize for the quint8 dtype and allows users
to use other dtypes for ops like sigmoid. This is useful for producing reference patterns for
these ops that can be used in other backends such as TensorRT

Test Plan:
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D31120377

fbshipit-source-id: 3b529d588e2b6ff0377a89c181f6237f8f0cc2f5
2021-09-23 18:29:56 -07:00
Supriya Rao
767a104698 [quant] change observer FQNs generated in prepare step (#65420)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65420

Context: In some FB use cases we have a need to map observer stats from a train model checkpoint to an inference model. We observed that some buffer names are different because the intermediate activation tensors
are generated differently across the train and inference models. More details in https://fb.quip.com/PtGcAR0S5CQP

Currently, for each observer (activation_post_process), the FQN of the module inserted is determined based on the FQN of the input tensor it is observing.

In this change we change the observer FQN to include the FQN of the op/module it is observing rather than tensor/intermediate op names along with the “input”/“output” detail.

Before
```
def forward(self, x):
    x_activation_post_process_0 = self.x_activation_post_process_0(x);  x = None
    mods1_w = self.mods1.w
    mods1_w_activation_post_process_0 = self.mods1_w_activation_post_process_0(mods1_w);  mods1_w = None
    mods1_b = self.mods1.b
    linear = torch.nn.functional.linear(x_activation_post_process_0, mods1_w_activation_post_process_0, bias = mods1_b);  x_activation_post_process_0 = mods1_w_activation_post_process_0 = mods1_b = None
    linear_activation_post_process_0 = self.linear_activation_post_process_0(linear);  linear = None
    return linear_activation_post_process_0
```

After
```
def forward(self, x):
    mods1_input_activation_post_process_0 = self.mods1_input_activation_post_process_0(x);  x = None
    mods1_w = self.mods1.w
    mods1_w_activation_post_process_0 = self.mods1_w_activation_post_process_0(mods1_w);  mods1_w = None
    mods1_b = self.mods1.b
    linear = torch.nn.functional.linear(mods1_input_activation_post_process_0, mods1_w_activation_post_process_0, bias = mods1_b);  mods1_input_activation_post_process_0 = mods1_w_activation_post_process_0 = mods1_b = None
    mods1_output_activation_post_process_0 = self.mods1_output_activation_post_process_0(linear);  linear = None
    return mods1_output_activation_post_process_0
```

Test Plan:
python test/test_quantization.py test_observer_fqn

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D31088652

fbshipit-source-id: 2f1526f578a13000b34cfd30d11f16f402fd3447
2021-09-23 09:08:10 -07:00
Jerry Zhang
508845f2b5 [quant] AO migration of the torch/quantization/quantize_fx.py and torch/quantization/fx/* (#65033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65033

1. Move the file:
```
hg mv caffe2/torch/quantization/fx caffe2/torch/ao/quantization/fx
hg mv caffe2/torch/quantization/quantize_fx.py caffe2/torch/ao/quantization/quantize_fx.py
```
2. Create new files
```
touch caffe2/torch/quantization/quantize_fx.py
touch caffe2/torch/quantization/fx/__init__.py
```
3. import things in the new files
4. add tests to test/quantization/ao_migration/test_quantization_fx.py,
since we have some fx imports in quantize_fx and fx/*.py

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: vkuzo, z-a-f

Differential Revision: D30949749

fbshipit-source-id: 9e5d4d039c8a0a0820bc9040e224f0d2c26886d3
2021-09-22 09:29:15 -07:00
Yuan Shangguan (June)
3f5f721ab3 Pass through allow-list from prepare_qat into propagate_qconfig_ to allow custom mapping and custom QAT module (#65119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65119

Pytorch Quantization: allow prepare_qat to include custom modules by passing an allow_list into prepare_qat.

When implementing a custom module and custom mapping for Quantization Aware Training (QAT), we need to add the custom module to the mappings and to the allow_list during prepare_qat. The allow_list needs to be surfaced to propagate_qconfig_.

Test Plan: relying on general unit test

Reviewed By: supriyar

Differential Revision: D30982060

fbshipit-source-id: 1114115b6a3b853238d33d72b5cbaafc60f463e0
2021-09-21 17:15:25 -07:00
Zafar Takhirov
02dec91212 [quant] AO migration of the torch/quantization/utils.py (phase 1) (#64919)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64919

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly. This migrates the quantization utilities.
ghstack-source-id: 138303325

Test Plan: `buck test mode/dev //caffe2/test:quantization`

Reviewed By: jerryzh168

Differential Revision: D30899082

fbshipit-source-id: 85eb38c419e417147e71758b682cd095308dd0c9
2021-09-16 21:30:18 -07:00
Charles David Hernandez
8a094e3270 [quant]ao migration for quantization mappings and fuser method mappings hg mv (#64985)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64985

moving quantization_mappings.py and fuser_method_mappings.py to the ao folder while retaining backwards compatibility

also added a dict test

ghstack-source-id: 138215312

Test Plan:
buck test mode/dev //caffe2/test:quantization

https://www.internalfb.com/intern/testinfra/testrun/7036874471986444

buck test mode/dev //caffe2/test:quantization -- TestAOMigrationQuantization

https://www.internalfb.com/intern/testinfra/testrun/5348024625792701

Reviewed By: z-a-f

Differential Revision: D30982551

fbshipit-source-id: 00f53bd44009d6012a7de852000aad6885131edb
2021-09-16 12:59:20 -07:00
Charles David Hernandez
f309f8fbd4 [quant] ao migration of observer and qconfig (#64982)
Summary:
(Had to recreate this diff so it wasn't dependent on the stack)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64982

migration of qconfig.py and observer.py to torch/ao/quantization using new test format
ghstack-source-id: 138215256

Test Plan:
buck test mode/opt //caffe2/test:quantization

https://www.internalfb.com/intern/testinfra/testconsole/testrun/8444249354294701/

buck test mode/dev //caffe2/test:quantization -- TestAOMigrationQuantization

https://www.internalfb.com/intern/testinfra/testrun/3940649742829796

Reviewed By: z-a-f

Differential Revision: D30982534

fbshipit-source-id: 48d08969b1984311ceb036eac0877c811cd6add9
2021-09-16 10:33:16 -07:00
Zafar Takhirov
e0ecd09011 [quant] AO migration of the _correct_bias.py, _equalize.py, and _learnable_fake_quantize.py (#64917)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64917

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates from torch.quantization to torch.ao.quantization the following files:
- `_correct_bias.py`
- `_equalize.py`
- `_learnable_fake_quantize.py`

**Note:** These file are migrated completely without any warning. The old location is thus silently deprecated.

Test Plan: `buck test mode/dev //caffe2/test:quantization -- TestBiasCorrection`

Reviewed By: vkuzo

Differential Revision: D30898565

fbshipit-source-id: 1d39be2539dd1adfcb42e16bdcc0daf5c8316bbd
2021-09-15 18:15:39 -07:00
Zafar Takhirov
c151d62f45 [quant] AO migration of the quant_types.py (phase 1) (#64916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64916

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates the quant_type.py from torch.quantization to torch.ao.quantization.
At this point both locations will be supported. Eventually the torch.quantization will be deprecated.

Test Plan: `buck test mode/dev //caffe2/test:quantization -- TestAOMigrationQuantization`

Reviewed By: vkuzo

Differential Revision: D30898422

fbshipit-source-id: 3e6126b49f0565a4136d6928cea9eb25368927ff
2021-09-15 17:30:00 -07:00
Zafar Takhirov
a42996f16e [quant] AO migration of the fuse_modules.py (phase 1) (#64913)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64913

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates the fuse_module.py from torch.quantization to torch.ao.quantization.
At this point both locations will be supported. Eventually the torch.quantization will be deprecated.

Test Plan: `buck test mode/dev //caffe2/test:quantization`

Reviewed By: vkuzo

Differential Revision: D30882819

fbshipit-source-id: 1926ad6aa49136aceb5b625dcef4bfde3a2860d4
2021-09-15 17:28:47 -07:00
Vasiliy Kuznetsov
6101cbcedb torch.ao migration: fake_quantize.py, phase 1 (#64814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64814

1. move the file
```
hg mv caffe2/torch/quantization/fake_quantize.py caffe2/torch/ao/quantization/
```

2. create a new file in the old location and copy the imports
3. fix all callsites inside `torch`

Test Plan:
```
buck test mode/dev //caffe2/test:quantization
```

Reviewed By: z-a-f

Differential Revision: D30866792

fbshipit-source-id: 7a221cb46c0ab01f1c5de9be061f09ecc83ce23e
2021-09-13 15:22:28 -07:00
Supriya Rao
3d976d9ceb torch.ao migration: quantize_jit.py phase1 (#64860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64860

ghstack-source-id: 137885395

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: jerryzh168

Differential Revision: D30880574

fbshipit-source-id: 9629027dd3b00bb8d45633e1564fc03a866f8c31
2021-09-13 08:41:48 -07:00
Supriya Rao
9d52651d4e torch.ao migration: stubs.py phase 1 (#64861)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64861

1. move the file
```
hg mv caffe2/torch/quantization/stubs.py caffe2/torch/ao/quantization/
```

2. create a new file in the old location and copy the imports
3. fix all call sites inside `torch`
ghstack-source-id: 137885365

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: jerryzh168

Differential Revision: D30879678

fbshipit-source-id: a2d24f25d01064212aca15e94e8c78240ba48953
2021-09-13 08:40:29 -07:00
Vasiliy Kuznetsov
1577c106dc torch.ao migration: numeric suite, eager and fx (#64817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64817

This migrates `torch.quantization._numeric_suite` to `torch.ao.ns._numeric_suite`, and `torch.quantization._numeric_suite_fx` to `torch.ao.ns._numeric_suite_fx`.

1. move the files
```
HG: move eager mode
hg mv caffe2/torch/quantization/_numeric_suite.py caffe2/torch/ao/ns/
HG: move fx
hg mv caffe2/torch/quantization/_numeric_suite_fx.py caffe2/torch/ao/ns/
hg mv caffe2/torch/quantization/ns/* caffe2/torch/ao/ns/fx/
```

2. create new versions of `_numeric_suite.py` and `_numeric_suite_fx.py` with
imports

3. update all FB callsites

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: z-a-f

Differential Revision: D30867538

fbshipit-source-id: 120ee830434ca490c1183a187a518eebcbbaf22c
2021-09-12 12:00:45 -07:00
Zafar Takhirov
9cc44aad21 [quant] AO migration of the quantize.py (resubmission) (#64445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64445

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates the quantize.py from torch.quantization to torch.ao.quantization.
At this point both locations will be supported. Eventually the torch.quantization will be deprecated.

Test Plan: `buck test mode/dev //caffe2/test:quantization`

Reviewed By: HDCharles

Differential Revision: D30734870

fbshipit-source-id: dc204f3cc46bff2cc81c95159eab9d333b43bb4b
2021-09-08 04:58:47 -07:00
Zafar Takhirov
046ed57a4d Revert D30055886: [quant] AO migration of the quantize.py
Test Plan: revert-hammer

Differential Revision:
D30055886 (44e3ed88c9)

Original commit changeset: 8ef7470f9fa6

fbshipit-source-id: c5bd3ead43a2d44b9e56872ec5bd7a195bdac725
2021-09-02 16:59:59 -07:00
Zafar Takhirov
44e3ed88c9 [quant] AO migration of the quantize.py (#64086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64086

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.

This migrates the `quantize.py` from torch.quantization to `torch.ao.quantization`.

At this point both locations will be supported. Eventually the torch.quantization will be deprecated.

Test Plan: `buck test mode/opt //caffe2/test:quantization`

Reviewed By: jerryzh168, raghuramank100

Differential Revision: D30055886

fbshipit-source-id: 8ef7470f9fa640c0042bef5bb843e7a05ecd0b9f
2021-08-29 20:30:01 -07:00
Karen Zhou
6257f5b168 [pruner] add README to repo (#64099)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64099

adding readme to pruner in OSS
ghstack-source-id: 136867516

Test Plan: should not affect behavior

Reviewed By: z-a-f

Differential Revision: D30608045

fbshipit-source-id: 3e9899a853395b2e91e8a69a5d2ca5f3c2acc646
2021-08-27 11:52:59 -07:00
Karen Zhou
eebac46282 [pruner] add getter for pruned outputs in base pruner (#63520)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63520

Rather than having to call `module.parametrizations.weight[0].pruned_outputs` each time we need to access the set of pruned indices, we add a getter `get_module_pruned_outputs` which takes the module as an argument and returns the set.

This is used for testing.
ghstack-source-id: 136561130

Test Plan:
` buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1N4gK

Reviewed By: z-a-f

Differential Revision: D30374558

fbshipit-source-id: e38dfee0879cadde52b942e899a3d8d7151ee493
2021-08-25 09:57:29 -07:00
Karen Zhou
83b132b112 [pruner] add support for pruning BatchNorm2d (#63519)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63519

If the pruner should be pruning biases along with weights, then if the model has BatchNorm2d following pruned Conv2d layers, then the corresponding channels of the BatchNorm must also be pruned.

Specifically, they need to be zeroed out rather than fully removed, since in eager mode the dimensions between layers need to be preserved.

To do this, we add a pruning parametrization called `ZeroesParametrization` which zeroes out pruned channels, rather than removing them.

The user must provide, in the config, a tuple of the Conv2d and BatchNorm layers that go together. The `prepare` method will add the tuple to the `module_groups`; then it will add a `PruningParametrization` to the Conv2d layer and a `ZeroesParametrization` to the BatchNorm, and set their pruned sets to be the same set. That way, during `step`, both masks are updated with the same pruned indices.
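
A minimal sketch of such a zeroing parametrization (the class name comes from the description above, but the exact implementation here is an illustrative assumption, not the code from this PR):

```python
import torch
from torch import nn

class ZeroesParametrization(nn.Module):
    """Zero out pruned output channels instead of removing them,
    so tensor dimensions between layers are preserved."""
    def __init__(self):
        super().__init__()
        self.pruned_outputs = set()

    def forward(self, x):
        mask = torch.ones(x.shape[0], dtype=x.dtype)
        mask[list(self.pruned_outputs)] = 0
        # broadcast the per-channel mask over any remaining dims
        return x * mask.reshape(-1, *([1] * (x.dim() - 1)))

p = ZeroesParametrization()
p.pruned_outputs.add(1)
w = p(torch.ones(4, 3))   # e.g. a weight with 4 output channels
print(w.shape)            # torch.Size([4, 3]); row 1 is all zeros
```

Because the shape is unchanged, a BatchNorm parametrized this way still lines up with the following layer in eager mode.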

ghstack-source-id: 136562278

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1N1P6

Reviewed By: z-a-f

Differential Revision: D30349855

fbshipit-source-id: 3199d3688d5a70963f9b32d7a8fdac3962ae6a65
2021-08-25 09:56:19 -07:00
Karen Zhou
1256dcd509 [pruner] modify base pruner to prune bias by default (#63202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63202

By default, the pruner will also prune biases, such that the whole output channel is removed. The user can manually set `also_prune_bias` to False when calling `prepare` if they don't want the bias to be pruned.
ghstack-source-id: 136466671

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1MV32

modify `fusion_tests` according to API change
`buck test mode/opt //scripts/kazhou:fusion_tests`

https://pxl.cl/1NbKz

Reviewed By: z-a-f

Differential Revision: D30294494

fbshipit-source-id: c84655648bee0035559195ca855b98fb7edaa134
2021-08-24 10:25:45 -07:00
Karen Zhou
16ba20507a [pruner] amend base pruner API to match base sparsifier (#63178)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63178

Update base pruner API to match base sparsifier API as defined in D28970960 / PR58955

Changes include:
- `enable_mask_update = True` in `__init__`
- `prepare` takes model and config instead of constructor
- convert functionality renamed to `squash_mask`; calling `convert` now raises an error
- `activation_handles` and `bias_handles` initialized in `_prepare` instead of the constructor
ghstack-source-id: 136467595

Test Plan:
Function names updated according to the changes

`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1MTgH

TODO will need to modify `fbcode/scripts/kazhou/fusion_tests.py` to use new API

Reviewed By: z-a-f

Differential Revision: D30287179

fbshipit-source-id: d4727bea1873b500f2d4bb784db26d532bf26cce
2021-08-24 10:25:43 -07:00
Karen Zhou
5dee15401c [pruner] refactor ActivationReconstruction forward hooks (#63158)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63158

Combined functionality for `ActivationReconstruction` for both Linear and Conv2d in one class. The only difference between the old classes was the size and indexing of the reconstructed tensor -- that logic can be generalized by iterating over the size of `output`.
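
A hedged sketch of what such a shared reconstruction hook might look like (class and attribute names are assumptions; only the dimension-agnostic indexing idea is taken from the description above):

```python
import torch

class ActivationReconstruction:
    """Forward hook that re-inserts zero channels for pruned outputs;
    works for Linear (N, C) and Conv2d (N, C, H, W) alike."""
    def __init__(self, parametrization):
        self.param = parametrization

    def __call__(self, module, inputs, output):
        valid = sorted(self.param.original_outputs - self.param.pruned_outputs)
        # only the channel count differs between layer types; every
        # other dim is copied from the pruned output's own size
        sizes = list(output.shape)
        sizes[1] = len(self.param.original_outputs)
        full = torch.zeros(sizes, dtype=output.dtype)
        full[:, valid] = output
        return full

class _P:  # stand-in for the pruning parametrization state
    original_outputs = {0, 1, 2}
    pruned_outputs = {1}

hook = ActivationReconstruction(_P())
y = hook(None, None, torch.ones(2, 2, 5))   # a pruned conv-style output
print(y.shape)   # torch.Size([2, 3, 5]); channel 1 is zeros
```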
ghstack-source-id: 136467465

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1MSSv

Reviewed By: raghuramank100

Differential Revision: D30282765

fbshipit-source-id: 08a1e4e0650511019fff85cf52b41dd818b0c7f8
2021-08-24 10:24:29 -07:00
Karen Zhou
d45291613c [pruner] generalize bias hook for conv2d (#62430)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62430

The bias hook is a forward hook that is part of the pruning parametrization; it is attached after the activation reconstruction forward hook, so adding the bias occurs after zeros are reinserted to the pruned activation.

This diff/PR amends the bias hook to work for Conv2d layers, in addition to Linear layers. The reshaping of the ._bias parameter ensures that it is added to the right dimension of the output.
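
Roughly, the reshaping the hook relies on could look like this (a hypothetical sketch; the real hook reads the `._bias` stashed during `prepare`):

```python
import torch
from torch import nn

def bias_hook(module, inputs, output):
    bias = module._bias
    if output.dim() > 2:
        # Conv2d: reshape (C,) -> (1, C, 1, 1) so the bias lands on the
        # channel dimension; a Linear output (N, C) broadcasts as-is
        bias = bias.reshape(1, -1, *([1] * (output.dim() - 2)))
    return output + bias

conv = nn.Conv2d(3, 4, kernel_size=3, bias=False)
conv._bias = torch.ones(4)                # stand-in for the stashed bias
conv.register_forward_hook(bias_hook)
out = conv(torch.zeros(1, 3, 8, 8))       # conv of zeros, plus bias
print(out.shape)                          # torch.Size([1, 4, 6, 6])
```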
ghstack-source-id: 135097700

Test Plan:
Added tests for `Conv2dB()`, a model with Conv2d layers that have `bias=True`.

`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1MfgL

Reviewed By: jerryzh168

Differential Revision: D29979571

fbshipit-source-id: c1a7e9fabc8b3c9d0050bd6b6c6a631ddfdf2a68
2021-08-05 09:27:17 -07:00
Karen Zhou
3687bbb1ed [pruner] add Conv2d support (#61778)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61778

Adding Conv2d as supported modules for the pruner. Previously the pruner only supported Linear layers. This addition includes:
- adding a Conv2d activation reconstruction forward hook to match Conv2d weight shapes
- in `prepare`, checking the type of the module and using the corresponding activation forward hook
ghstack-source-id: 134143557

Test Plan:
Added conv2d tests
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1LLf3

Reviewed By: jerryzh168

Differential Revision: D29719045

fbshipit-source-id: 6a9f91b96992c552fff32f0e5a6e22f16eb7077b
2021-07-22 23:00:31 -07:00
Karen Zhou
9b3cbeaf7d [pruner] fix activation handles logic (#61592)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61592

Add activation handles for each layer (stored in a list), so they can each be removed.

We don't remove them in the `convert` in eager mode because we aren't modifying output/input layer dimensions. We will need this in Fx mode though.
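
The bookkeeping is essentially the standard hook-handle pattern (a sketch, assuming one reconstruction hook per parametrized layer):

```python
from torch import nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
activation_handles = []
for layer in model:
    # identity hook as a stand-in for the reconstruction hook
    h = layer.register_forward_hook(lambda mod, inp, out: out)
    activation_handles.append(h)

# later (e.g. an Fx-mode convert) can detach every hook by its handle
for h in activation_handles:
    h.remove()
print(all(len(layer._forward_hooks) == 0 for layer in model))  # True
```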
ghstack-source-id: 133497376

Test Plan:
Added some tests to make sure `model(x)` runs without error.

`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1LBf4

Reviewed By: z-a-f

Differential Revision: D29682789

fbshipit-source-id: 9185702736e5f7f4320754ffef441610738ac154
2021-07-14 11:07:23 -07:00
Karen Zhou
962c9fbf85 [pruner] add handles for hooks (#61425)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61425

Adding handle for activation reconstruction and bias forward hooks so they can be removed later
ghstack-source-id: 133244536

Test Plan:
This change should not affect behavior yet, but to double check:

`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1LpM9

Reviewed By: z-a-f

Differential Revision: D29619720

fbshipit-source-id: c7428d2d0325cd11ce7919e0b67321e8cc196041
2021-07-09 11:28:35 -07:00
Karen Zhou
21ad978d4f [sparsity] rename sparsity_pattern to sparse_block_shape (#59898)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59898

In `weight_norm_sparsifier`, the name of the argument `sparsity_pattern` is not intuitive for an argument describing the shape of the sparse block. It has been changed to `sparse_block_shape`.

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestWeightNormSparsifier`
https://pxl.cl/1LhRM

Reviewed By: z-a-f

Differential Revision: D29077045

fbshipit-source-id: 0cf9c5387d41ca8e839ee050d71f4fe477374143
2021-07-07 15:27:16 -07:00
Zafar
05c1e5b655 [sparsity] Lambda Scheduler (#59771)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59771

Implements a specific sparsity scheduler that uses user-provided lambdas to change the sparsity levels.
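
By analogy with `torch.optim.lr_scheduler.LambdaLR`, the idea can be sketched in plain Python (the class name and attributes here are assumptions, not the PR's exact API):

```python
class LambdaSL:
    """Scale each group's base sparsity level by sl_lambda(epoch)."""
    def __init__(self, base_levels, sl_lambda):
        self.base_levels = list(base_levels)
        self.sl_lambda = sl_lambda
        self.last_epoch = -1

    def step(self):
        self.last_epoch += 1
        return [base * self.sl_lambda(self.last_epoch)
                for base in self.base_levels]

# ramp sparsity from 0 toward the 0.8 target over the first epochs
sched = LambdaSL([0.8], sl_lambda=lambda epoch: min(1.0, epoch / 4))
levels = [sched.step() for _ in range(3)]
print(levels)  # [[0.0], [0.2], [0.4]]
```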

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D29070604

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: c7ccbe63fe4cd6a0c3563541b7fcf93a99d0e62f
2021-07-02 21:39:38 -07:00
Zafar
37ebf2e3cd [sparsity] Base sparsity level scheduler class (#59770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59770

Implements the base scheduler class for changing the sparsity levels in the sparsifier.

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D29070603

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 0b160e4eb0a2a303d2d19e6a3beb4784002b2cb7
2021-07-02 21:38:24 -07:00
Zafar
d42f1751d4 [sparsity] WeightNormSparsifier (#58955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58955

Implements the weight norm sparsifier.
This type of sparsifier computes the norms of the weights, sorts them, and zeroes out the target fraction of them.

The main implemented method is `update_mask`, which holds the main logic of changing the masks.
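
In the spirit of that description, and ignoring the block-shape handling, `update_mask` reduces to something like this simplified sketch (not the actual implementation):

```python
import torch

def update_mask(weight, sparsity_level):
    """Zero the `sparsity_level` fraction of entries with smallest |w|."""
    scores = weight.abs().flatten()
    k = int(round(sparsity_level * scores.numel()))
    mask = torch.ones_like(scores)
    if k > 0:
        _, idx = torch.topk(scores, k, largest=False)
        mask[idx] = 0
    return mask.reshape(weight.shape)

mask = update_mask(torch.randn(8, 8), sparsity_level=0.5)
print(mask.sum())  # tensor(32.): half of the 64 entries survive
```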

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D28970960

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 8f2a4360ad877f430cdc1065c6777106938b58d5
2021-07-02 17:35:27 -07:00
Zafar
7ab2729481 [sparsity][refactor] Import factoring out (#58707)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58707

Minor refactor that changes the format of the import.
This is done to avoid accidental circular dependencies.

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D28970961

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: c312742f5e218c435a1a643532f5842116bfcfff
2021-07-02 16:32:39 -07:00
Zafar
973e9266ff [sparsity] Sparsifier class (#58704)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58704

Implements the base sparsifier class based on the #59835 RFC document.

This PR implements the base class for the sparsification. Specifically, the prepare method is implemented.

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D28970958

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 0ef98a445c0a0aca22ce5708e34a9f94606d0e2b
2021-07-02 16:31:21 -07:00
Zafar
80cab10534 [sparsity] Sparsity parametrization (#58705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58705

The basic demo for this particular implementation can be found here:
https://gist.github.com/z-a-f/1d06ae8d5a509d3c9c1596dcb924afe0

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D28970959

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 2a0bea1e0a81816690e05f83051d607c90925d32
2021-07-02 11:12:31 -07:00
Zafar
5d34b7955b [sparsity][refactor] Changing linear row/col control (#60850)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/60850

Test Plan:
```
python test/test_ao_sparsity.py
```

Differential Revision: D29465900

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 412f50da857f377898fea79d378ae54a049b81fe
2021-07-02 11:12:30 -07:00
Karen Zhou
ca2702a776 [pruner] Make bias hook stateless (#61077)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61077

Removing the `BiasHook` class, using a function instead.
ghstack-source-id: 132899223

Test Plan:
` buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1L7Tg

Reviewed By: z-a-f

Differential Revision: D29504119

fbshipit-source-id: 6dd9689d18b17ac64e8a461f466e2c9018bc530b
2021-07-01 14:59:00 -07:00
Karen Zhou
0a7875231b [pruner] Add bias support (#60970)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60970

Support adding bias in eager mode
ghstack-source-id: 132695883

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1L3K3

Reviewed By: z-a-f

Differential Revision: D29441499

fbshipit-source-id: 47e0fff5b3014612bd021e145160ea54e2645e24
2021-07-01 14:57:09 -07:00
Karen Zhou
007ba37c9a [pruning] Speedup activation reconstruction (#60683)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60683

Vectorized reconstruction without for loops

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1KSJQ

Reviewed By: z-a-f

Differential Revision: D29370805

fbshipit-source-id: 75402437654a0b6f6391c8590bbe3f6fe3f43d8f
2021-06-28 12:58:21 -07:00
Karen Zhou
8d4a6ef962 [pruning] Activation reconstruction (#60292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60292

Added activation reconstruction in the `reconstruct` method

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1KLl1

Reviewed By: z-a-f

Differential Revision: D29236569

fbshipit-source-id: 1ad085f4143eb9fa3efca51e00d810e0fdb7e9b1
2021-06-28 12:58:18 -07:00
Karen Zhou
71b83c27e2 [pruning] Move pruning directory into experimental folder (#60395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60395

Experimental folder so other developers know this is work in progress

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1KGJD

Reviewed By: z-a-f

Differential Revision: D29272319

fbshipit-source-id: 93eeeceba0376753efc9a5bb69a155278ceb2fca
2021-06-22 11:08:48 -07:00
Karen Zhou
f75ea51e67 [pruning] Move pruning files to their own directory (#60293)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60293

Move pruning files to their own directory

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`
https://pxl.cl/1KCfz

Reviewed By: z-a-f

Differential Revision: D29238159

fbshipit-source-id: 0173a278b39ff5ee4cbd54f333f558b6fe412be5
2021-06-22 11:08:47 -07:00
Karen Zhou
b25db5251a [pruning] Base pruner class (#60278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60278

Implemented `PruningParametrization`, which removes pruned rows, and `BasePruner`, which is the base class for structured pruning.
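
As a rough sketch of the row-removal half (attribute names assumed from the surrounding PRs in this stack, not taken verbatim from this one):

```python
import torch
from torch import nn

class PruningParametrization(nn.Module):
    """Return the weight with pruned output rows removed."""
    def __init__(self, original_outputs):
        super().__init__()
        self.original_outputs = set(range(original_outputs))
        self.pruned_outputs = set()

    def forward(self, x):
        valid = sorted(self.original_outputs - self.pruned_outputs)
        return x[valid]

weight = torch.randn(3, 4)        # e.g. an nn.Linear(4, 3) weight
param = PruningParametrization(original_outputs=3)
param.pruned_outputs.add(1)
print(param(weight).shape)        # torch.Size([2, 4]): row 1 dropped
```

Note that, unlike the `ZeroesParametrization` used for BatchNorm later in this stack, this one actually shrinks the tensor.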

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1KC2n

Reviewed By: z-a-f

Differential Revision: D29208349

fbshipit-source-id: f34e8e258bf13fa80292c2bd64d56f5ad1e72b6a
2021-06-22 11:07:31 -07:00
Zafar
b0fd3ca542 [sparse] Add the AO namespace to torch (#58703)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58703

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D28970962

Pulled By: z-a-f

fbshipit-source-id: 0d0f62111a0883af4143a933292dfaaf8fae220d
2021-06-09 19:47:21 -07:00
Zafar Takhirov
375687839e [sparsity] Moving the sparsity python files to OSS (#56617)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56617

This migrates the sparsity code to open source.

Test Plan: `buck test mode/opt //caffe2/test:ao`

Reviewed By: raghuramank100

Differential Revision: D27812207

fbshipit-source-id: cc87d9d2b486269901a4ad9b483615741a1cd712
2021-04-22 14:07:31 -07:00