Commit Graph

35 Commits

Catherine Lee
4908fb53c3 [testing] Add test owner labels for some ao sparse tests (#163203)
I am trying to give some test files better owner labels than `module: unknown`. I am not sure about them, but they seem pretty reasonable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163203
Approved by: https://github.com/jcaip
2025-09-18 16:08:13 +00:00
Anthony Barbier
b1b8e57cda Add __main__ guards to ao tests (#154612)
This is the first PR of a series in an attempt to get the content of #134592 merged as smaller PRs (Given that the original one was closed due to a lack of reviewers).

This specific PR contains:
- Add and use a common `raise_on_run_directly` method for when a user directly runs a test file that should not be run this way; it prints the file the user should have run instead.
- Update ao tests.
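A minimal sketch of what such a helper might look like (the helper name comes from the PR description, but the body here is an assumption, not the actual implementation):

```python
def raise_on_run_directly(run_file):
    # Hypothetical sketch: abort with a pointer to the file that
    # should have been run instead of this one.
    raise RuntimeError(
        "This test file is not meant to be run directly. "
        f"Run it via: python {run_file}"
    )

# A test file would call it from its __main__ guard, e.g.:
#     if __name__ == "__main__":
#         raise_on_run_directly("test/test_ao_sparsity.py")
```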

There will be follow-up PRs to update the other test suites, but I don't have permissions to create branches directly on pytorch/pytorch, so I can't create a stack and will therefore have to create them one at a time.

Cc @jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154612
Approved by: https://github.com/jcaip
2025-06-10 18:33:09 +00:00
Xuehai Pan
ba48cf6535 [BE][Easy][6/19] enforce style for empty lines in import segments in test/ (#129757)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129757
Approved by: https://github.com/ezyang
2024-07-17 06:42:37 +00:00
Xuehai Pan
26f4f10ac8 [5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)
The `usort` config in `pyproject.toml` has no effect due to a typo. Fixing the typo makes `usort` do more and generates the changes in this PR. Except for `pyproject.toml`, all changes are generated by `lintrunner -a --take UFMT --all-files`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127126
Approved by: https://github.com/kit1980
2024-05-27 14:49:57 +00:00
PyTorch MergeBot
55c0ab2887 Revert "[5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)"
This reverts commit 7763c83af6.

Reverted https://github.com/pytorch/pytorch/pull/127126 on behalf of https://github.com/XuehaiPan due to Broken CI ([comment](https://github.com/pytorch/pytorch/pull/127126#issuecomment-2133044286))
2024-05-27 09:22:08 +00:00
Xuehai Pan
7763c83af6 [5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)
The `usort` config in `pyproject.toml` has no effect due to a typo. Fixing the typo makes `usort` do more and generates the changes in this PR. Except for `pyproject.toml`, all changes are generated by `lintrunner -a --take UFMT --all-files`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127126
Approved by: https://github.com/kit1980
ghstack dependencies: #127122, #127123, #127124, #127125
2024-05-27 04:22:18 +00:00
Yuanhao Ji
b3504af56e Enable UFMT on test/scripts and some files (#124137)
Part of: #123062

Ran lintrunner on:

- `test/scripts`
- `test/simulate_nccl_errors.py`
- `test/test_ao_sparsity.py`
- `test/test_autocast.py`
- `test/test_binary_ufuncs.py`
- `test/test_bundled_images.py`
- `test/test_bundled_inputs.py`
- `test/test_comparison_utils.py`
- `test/test_compile_benchmark_util.py`
- `test/test_complex.py`
- `test/test_cpp_api_parity.py`
- `test/test_cpp_extensions_aot.py`
- `test/test_cpp_extensions_jit.py`
- `test/test_cpp_extensions_open_device_registration.py`

Detail:

```bash
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124137
Approved by: https://github.com/soulitzer
2024-04-19 22:01:27 +00:00
LINGAO XIAO
e7b2430818 add pruning method: Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (#95689)
- Add `class FPGMStructured`
- Add `function FPGM_structured()`
- Add `function _validate_distance_type()`
- Add `function _compute_distance()`

Implement method mentioned in issue #39765

---
FPGMSparsifier is implemented with the new PyTorch pruning API `torch.ao.pruning`.
It is a structured pruning method, added under `torch.ao.pruning._experimental`. Test cases are added in `test_structured_sparsifier.py`.
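The core idea of FPGM is to rank filters by their summed distance to all other filters: those closest to the geometric median are the most redundant and are pruned first. A rough NumPy sketch under that idea (function name and shapes are illustrative, not the PR's API):

```python
import numpy as np

def fpgm_prune_indices(filters, n_prune):
    # filters: (num_filters, filter_numel) flattened conv filters.
    # Pairwise Euclidean distances between all filters.
    dists = np.linalg.norm(filters[:, None, :] - filters[None, :, :], axis=-1)
    # Filters with the smallest total distance to the others sit
    # nearest the geometric median and are pruned first.
    scores = dists.sum(axis=1)
    return np.argsort(scores)[:n_prune]
```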
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95689
Approved by: https://github.com/jcaip
2023-08-02 16:24:42 +00:00
Justin Chu
73e1455327 [BE] Enable ruff's UP rules and autoformat test/ (#105434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
Sudarshan Raghunathan
e45fa1a581 Back out "[core][pruning][be] rename BaseSparsifier to BasePruner (#98747)" (#99171)
Summary: Back out D44856390 since renaming the type breaks backwards compatibility of existing models used in integration tests and likely in prod as well.

Test Plan:
buck2 run //aiplatform/modelstore/model_generation/integration_tests:cogwheel_igr_tab_offline_and_recurring_model_generation_v1_api_test-launcher -- --build-fbpkg --run-disabled --run-harness-in-tupperware

Now fails with an OOM: https://www.internalfb.com/servicelab/experiment/100000000259121/trial/100000000331723/run

It was failing with an import error without this revert.

Differential Revision: D44991351

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99171
Approved by: https://github.com/izaitsevfb, https://github.com/osalpekar
2023-04-15 00:37:45 +00:00
Jesse Cai
4584851da5 [core][pruning][be] rename BaseSparsifier to BasePruner (#98747)
Summary:

att

Test Plan:
`python test/test_ao_sparsity.py -- TestBasePruner`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98747
Approved by: https://github.com/jerryzh168
2023-04-10 21:25:19 +00:00
Jesse Cai
32e9b29ce9 [pruning][core][feature] Add in SaliencyPruner to pruner._experimental (#91814)
Summary:

This PR adds SaliencyPruner, an implementation of L1-norm pruning for structured pruning, as well as additional tests for it.
The README.md references this file, but I forgot to add it earlier when writing the tutorial.
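L1-norm saliency pruning of output channels can be sketched as follows (names and the row-wise convention are illustrative, not the actual SaliencyPruner API):

```python
import numpy as np

def saliency_mask(weight, fraction):
    # Rank output channels (rows) by their L1 norm and mask out
    # the lowest-saliency fraction of them.
    norms = np.abs(weight).sum(axis=1)
    k = int(fraction * len(norms))
    mask = np.ones(len(norms), dtype=bool)
    mask[np.argsort(norms)[:k]] = False
    return mask
```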

Test Plan:
```
python test/test_ao_sparsity.py -- TestSaliencyPruner
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91814
Approved by: https://github.com/jerryzh168
2023-01-10 04:04:55 +00:00
Jesse Cai
9a1c6fd506 [pruning][core][feature] Align BaseStructuredPruner with existing pruning flow (#88436)
Summary:

This PR aligns the "eager" mode of the structured pruning flow with the existing unstructured pruning flow.

The base pruner has been renamed from `BasePruner` to `BaseStructuredPruner` and moved:
`torch/ao/pruning/_experimental/pruner/base_pruner.py -> torch/ao/pruning/_experimental/pruner/base_structured_pruner.py`

Support for pruning batchnorm modules in the config has been removed, so the structured pruning code can now use more of the `BaseSparsifier` logic and we don't need to override as many functions.

Since we aim to support only a single flow, we have updated only ZeroesParametrizations (FakeStructuredSparsity) and BiasHook.
The parametrizations have also been rewritten to use a bool mask tensor to keep track of pruned rows, instead of the sets used before.
This better aligns structured and unstructured sparsity.
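A bool row mask is a compact way to track pruned output channels; applying the parametrization zeroes whole rows in one broadcast. A minimal sketch of the idea (not the actual FakeStructuredSparsity implementation):

```python
import numpy as np

class RowMaskSketch:
    # A bool vector marks which output rows survive; multiplying by
    # the broadcast mask zeroes the pruned rows of the weight.
    def __init__(self, n_rows):
        self.mask = np.ones(n_rows, dtype=bool)

    def __call__(self, weight):
        return weight * self.mask[:, None]
```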

The BaseStructuredSparsifier tests have also been updated to reflect the above changes. I also removed `squash_mask` tests because they were breaking CI and `squash_mask` is no longer used.

We will migrate the structured pruning code out of this folder in a later PR.

Test Plan:
```
python test/test_ao_sparsity.py -- TestBaseStructuredPruner
```

Reviewers:
z-a-f vkuzo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88436
Approved by: https://github.com/vkuzo
2022-12-03 00:53:53 +00:00
Zafar
adb12438c1 [AO] Cubic sparsity level scheduler (#85232)
The scheduler updates the levels of sparsity based on https://arxiv.org/abs/1710.01878.

## Implementation

The update rule is defined as:

$$
\begin{aligned}
s_t &= s_f + (s_i - s_f)\left( 1 - \frac{t - t_0}{n\Delta t} \right)^3 \\
\text{for } t &\in \left\{ t_0,\ t_0 + \Delta t,\ \dots,\ t_0 + n\Delta t \right\}
\end{aligned}
$$

There is a minor difference compared to the original paper: by providing the `initially_zero` argument, one can set the level of sparsity before step $t_0$. If `False`, the sparsity level before $t_0$ is $s_i$; otherwise it is 0.
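The update rule above translates directly into code; a small sketch using the same symbols (not the actual scheduler API):

```python
def cubic_sparsity_level(t, s_i, s_f, t0, dt, n, initially_zero=False):
    # Before the first update step t0, hold either 0 or the initial level s_i.
    if t < t0:
        return 0.0 if initially_zero else s_i
    # After the last update step t0 + n*dt, hold the final level s_f.
    if t > t0 + n * dt:
        return s_f
    # Cubic interpolation between s_i and s_f.
    return s_f + (s_i - s_f) * (1 - (t - t0) / (n * dt)) ** 3
```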

## Tests

```
python test/test_ao_sparsity.py -- TestCubicScheduler
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85232
Approved by: https://github.com/junesg, https://github.com/jerryzh168
2022-10-04 22:44:15 +00:00
macandro96
03abcf2317 [ao][sparsity] Data Sparsity with Post Training Quantization (#82759)
Implementation of `post_training_sparse_quantize`, which takes in a model
and applies sparsification and quantization only to `embeddings` & `embeddingbags`.
The quantization step can happen before or after sparsification, depending on the `sparsify_first` argument.
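Schematically, the `sparsify_first` flag only controls the ordering of the two transforms (a hypothetical sketch, not the function's real signature):

```python
def sparse_quantize_sketch(model, sparsify, quantize, sparsify_first=True):
    # Apply the two transforms in the order requested by the flag.
    steps = (sparsify, quantize) if sparsify_first else (quantize, sparsify)
    for step in steps:
        model = step(model)
    return model
```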

Test Plan:
```python test/test_ao_sparsity.py TestQuantizationUtils```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82759
Approved by: https://github.com/z-a-f
2022-08-10 16:51:35 +00:00
albanD
2255911f8a Make M1 tests green (#82213)
This skips all the failing tests and adds a new master job to test on M1.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82213
Approved by: https://github.com/seemethere, https://github.com/soulitzer, https://github.com/malfet
2022-08-05 16:12:08 +00:00
HDCharles
8533951f09 [ao][sparsity][fx] make quant prepare -> sparse prepare compose (#81992)
Summary: sparse_prepare automatically composes with quantized prepare
even in cases with fusion. However, the convert step needed to be updated to handle parametrized
modules.

Test Plan: python test/test_ao_sparsity.py TestFxComposability

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81992
Approved by: https://github.com/jerryzh168
2022-07-27 17:14:13 +00:00
macandro96
f87d8c2f62 [ao][sparsity] Basic implementation of activation sparsifier (#80886)
The Activation sparsifier class aims to sparsify/prune activations in a neural
network. The idea is to attach the sparsifier to a layer (or layers); it then
zeroes out the activations based on the mask_fn (or sparsification function)
provided by the user.
The mask_fn is applied once all the inputs are aggregated and reduced, i.e.
mask = mask_fn(reduce_fn(aggregate_fn(activations)))

Note::
    The sparsification mask is computed on the input **before it goes through the attached layer**.
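The mask pipeline can be sketched with plain functions (the hook machinery of the real sparsifier is omitted; names mirror the formula above):

```python
def compute_activation_mask(activations, aggregate_fn, reduce_fn, mask_fn):
    # Aggregate the collected activations pairwise, then reduce and mask:
    #     mask = mask_fn(reduce_fn(aggregate_fn(activations)))
    agg = activations[0]
    for act in activations[1:]:
        agg = aggregate_fn(agg, act)
    return mask_fn(reduce_fn(agg))
```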

Test Plan:
```python test/test_ao_sparsity.py TestActivationSparsifier```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80886
Approved by: https://github.com/HDCharles
2022-07-22 21:43:33 +00:00
HDCharles
fa6b6842e1 [ao][sparsity] removing leading '.' from fqn in utils (#79774)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #79774
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79774
Approved by: https://github.com/z-a-f
2022-06-30 00:00:56 +00:00
macandro96
70b7bca423 [ao][sparsity] Base scheduler class for Data Schedulers (#79817)
The BaseDataScheduler is the abstract scheduler class specifically for the
BaseDataSparsifier class. This class controls a specific hyperparameter of
the sparsifier class and varies it across the training process (or across time).

Args:
    data_sparsifier (instance of BaseDataSparsifier)
        Implemented class data sparsifier class wherein the update_mask is implemented
    schedule_param (str)
        A specific hyperparameter of the passed sparsifier that needs to be scheduled/varied
    last_epoch (int, default=-1)
    This is passed specifically when training needs to be resumed from a
    particular point.
    verbose (bool, default=False)
        Verbosity of the BaseDataScheduler

The *get_schedule_param()* function needs to be implemented by the user.
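A minimal illustrative subclass (hypothetical, not part of the PR) that halves the scheduled hyperparameter every few epochs:

```python
class HalvingScheduleSketch:
    # Decays the scheduled hyperparameter (e.g. sparsity_level)
    # by `gamma` every `step_size` epochs.
    def __init__(self, start, gamma=0.5, step_size=2):
        self.start, self.gamma, self.step_size = start, gamma, step_size

    def get_schedule_param(self, epoch):
        return self.start * self.gamma ** (epoch // self.step_size)
```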

Test Plan:
```python test/test_ao_sparsity.py TestBaseDataScheduler```

Differential Revision: [D37358608](https://our.internmc.facebook.com/intern/diff/D37358608)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79817
Approved by: https://github.com/jerryzh168, https://github.com/z-a-f
2022-06-24 16:51:52 +00:00
macandro96
419b82e0fa [ao][sparsity] L1 norm based block data sparsifier
L1-Norm Sparsifier
This sparsifier computes the *L1-norm* of every sparse block and zeroes out the
ones with the lowest norm. The level of sparsity defines how many of the
blocks are removed.
This sparsifier is controlled by three variables:
1. `sparsity_level` defines the number of *sparse blocks* that are zeroed-out
2. `sparse_block_shape` defines the shape of the sparse blocks. Note that
    the sparse blocks originate at the zero-index of the tensor.
3. `zeros_per_block` is the number of zeros that we are expecting in each
    sparse block. By default we assume that all elements within a block are
    zeroed-out. However, setting this variable sets the target number of
    zeros per block. The zeros within each block are chosen as the *smallest
    absolute values*.
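Under the three variables above, the simplest case (zeroing out whole blocks, i.e. `zeros_per_block` defaulting to the full block) can be sketched as follows (illustrative, not the actual sparsifier):

```python
import numpy as np

def l1_block_sparsify(weight, sparsity_level, sparse_block_shape):
    bh, bw = sparse_block_shape
    h, w = weight.shape
    mask = np.ones_like(weight)
    norms, coords = [], []
    # Blocks originate at the zero-index of the tensor.
    for i in range(0, h, bh):
        for j in range(0, w, bw):
            norms.append(np.abs(weight[i:i + bh, j:j + bw]).sum())
            coords.append((i, j))
    # Zero out the fraction of blocks with the lowest L1 norm.
    k = int(round(sparsity_level * len(norms)))
    for idx in np.argsort(norms)[:k]:
        i, j = coords[idx]
        mask[i:i + bh, j:j + bw] = 0
    return weight * mask
```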

Test Plan:
```python test/test_ao_sparsity.py TestNormDataSparsifiers```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79534

Approved by: https://github.com/z-a-f
2022-06-16 17:43:22 +00:00
macandro96
15828bcfd7 [ao][sparsity] Base class for Data Sparsifier
Base Data Sparsifier class for all data sparsifiers.
The abstract class accepts raw torch tensors / embeddings / embedding bags (refer to SUPPORTED_TYPES above)
to prepare for sparsification.
In this case, the mask (and parametrizations) are owned by the class and not by the user.
Specifically, the container object inside the class maintains the masks and parametrizations of the input data.

Test Plan:
```python test/test_ao_sparsity.py TestBaseDataSparsifier```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79251

Approved by: https://github.com/z-a-f, https://github.com/HDCharles
2022-06-16 17:31:22 +00:00
macandro96
a3468b7d4a [ao][sparsity] Added the Nearly Diagonal Sparsifier
This sparsifier creates a nearly diagonal mask to be applied to the weight matrix.
    A nearly diagonal matrix is a matrix that contains non-zero elements near the diagonal and zeros elsewhere.
    Examples of nearly diagonal matrices with degree (or nearliness) 3 and 5 follow, respectively.
    1 1 0 0       1 1 1 0
    1 1 1 0       1 1 1 1
    0 1 1 1       1 1 1 1
    0 0 1 1       0 1 1 1
    Note that a nearly diagonal matrix with degree 1 is just a matrix with only the main diagonal populated.

    This sparsifier is controlled by one variable:
    1. `nearliness` defines the number of non-zero diagonal lines that are closest to the main diagonal.
        Currently, only odd numbers are supported.

    Note:
        This can be accelerated (vectorized) once the Spdiagonal feature (PR: #78439) or the banded-matrix
        feature lands: https://stackoverflow.com/questions/52463972/generating-banded-matrices-using-numpy
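Generating the mask reduces to a threshold on index distances from the main diagonal; a small sketch (not the vectorized version mentioned in the note above):

```python
import numpy as np

def nearly_diagonal_mask(n, nearliness):
    # Keep entries within (nearliness - 1) / 2 of the main diagonal.
    assert nearliness % 2 == 1, "only odd nearliness is supported"
    rows, cols = np.indices((n, n))
    return (np.abs(rows - cols) <= nearliness // 2).astype(int)
```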

Test Plan:

```
python test/test_ao_sparsity.py TestNearlyDiagonalSparsifier
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78448

Approved by: https://github.com/z-a-f, https://github.com/HDCharles
2022-06-04 04:30:32 +00:00
Charles David Hernandez
02e30a09f7 [ao][sparsity] make sparsity and PTQ compose (#74845)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74845

This PR adds support for the quantization flow to detect
parametrized modules and match them using their original module types.
This mainly involved using the new `type_before_parametrizations` function rather than
`type` to check for module matching.

Test Plan:
python test/test_ao_sparsity.py TestComposability

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D35240274

fbshipit-source-id: 7294d89c9c2e069e51d8b9bafa45c15f92bed124
(cherry picked from commit ed5cdb7b636c42e040d1b4a67b6b94604d06e1ff)
2022-04-05 03:35:41 +00:00
Zafar
d176c82bd5 [sparsity] Fix and enable the pruning tests (#66411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66411

The original tests were disabled and had some bugs. This fixes those unit tests.

Test Plan: Imported from OSS

Reviewed By: HDCharles

Differential Revision: D31590678

Pulled By: z-a-f

fbshipit-source-id: ddbed34cc01d5f15580cb8f0033416f2f9780068
2021-11-22 15:28:12 -08:00
Jane Xu
a4a6d056e6 Add ownership to more edge tests (#67859)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/66232

This should be the last immediate task. I anticipate test ownership will change over time, but this is the last big thing to close it out.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67859

Reviewed By: soulitzer

Differential Revision: D32210534

Pulled By: janeyx99

fbshipit-source-id: 7fd835d87d9d35d49ec49de1fcfa29b085133e99
2021-11-05 11:01:16 -07:00
Jane Xu
6259601c8a Set test owners for tests with unknown owners (#67552)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67552

Reviewed By: jbschlosser

Differential Revision: D32028248

Pulled By: janeyx99

fbshipit-source-id: a006f7026288b7126dba58b31cac28e10ce0fed6
2021-10-29 12:42:01 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Zafar
05c1e5b655 [sparsity] Lambda Scheduler (#59771)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59771

Implements a specific sparsity scheduler that uses user-provided lambdas to change the levels.

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D29070604

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: c7ccbe63fe4cd6a0c3563541b7fcf93a99d0e62f
2021-07-02 21:39:38 -07:00
Zafar
37ebf2e3cd [sparsity] Base sparsity level scheduler class (#59770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59770

Implements the base scheduler class for changing the sparsity levels in the sparsifier.

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D29070603

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 0b160e4eb0a2a303d2d19e6a3beb4784002b2cb7
2021-07-02 21:38:24 -07:00
Zafar
d42f1751d4 [sparsity] WeightNormSparsifier (#58955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58955

Implements the weight norm sparsifier.
This type of sparsifier computes the norms of the weights, sorts them, and zeroes out the target fraction of them.

The main implemented method is `update_mask`, which holds the main logic of changing the masks.
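An `update_mask` of this kind reduces to a sort on element magnitudes; a sketch of the unstructured case (illustrative names, not the sparsifier's actual API):

```python
import numpy as np

def weight_norm_mask(weight, sparsity_level):
    # Sort elements by absolute value and zero out the smallest
    # `sparsity_level` fraction of them.
    flat = np.abs(weight).ravel()
    k = int(sparsity_level * flat.size)
    mask = np.ones(flat.size)
    mask[np.argsort(flat)[:k]] = 0
    return mask.reshape(weight.shape)
```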

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D28970960

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 8f2a4360ad877f430cdc1065c6777106938b58d5
2021-07-02 17:35:27 -07:00
Zafar
973e9266ff [sparsity] Sparsifier class (#58704)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58704

Implements the base sparsifier class based on the #59835 RFC document.

This PR implements the base class for sparsification. Specifically, the `prepare` method is implemented.

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D28970958

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 0ef98a445c0a0aca22ce5708e34a9f94606d0e2b
2021-07-02 16:31:21 -07:00
Zafar
80cab10534 [sparsity] Sparsity parametrization (#58705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58705

The basic demo for this particular implementation can be found here:
https://gist.github.com/z-a-f/1d06ae8d5a509d3c9c1596dcb924afe0

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D28970959

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: 2a0bea1e0a81816690e05f83051d607c90925d32
2021-07-02 11:12:31 -07:00
Zafar Takhirov
dc1f60a9a2 [sparsity][refactor] Restructure the tests folders (#60032)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60032

There will be more sparse tests coming. This PR creates a separate folder for the sparse tests

Test Plan: `python test/test_ao.py`

Reviewed By: raghuramank100

Differential Revision: D29139265

fbshipit-source-id: d0db915f00e6bc8d89a5651f08f72e362a912a6b
2021-06-15 13:37:19 -07:00