Commit Graph

16 Commits

Author SHA1 Message Date
Edward Z. Yang
dd3a77bc96 Apply UFMT to all files in benchmarks/ (#105928)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105928
Approved by: https://github.com/albanD
2023-07-26 01:18:48 +00:00
Sam Estep
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- Because the code after a colonless `noqa` is ignored entirely, changing it to any other error code still suppresses every warning on that line.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.
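As a minimal sketch of what such a check can look like (illustrative only; the lint this PR actually adds is a `git grep` pattern, and the helper name below is hypothetical), a regex with a negative lookahead distinguishes a bare `noqa` from a properly qualified `noqa: CODE`:

```python
import re

# A bare "# noqa", or "# noqa E501" without the required colon, suppresses
# everything; only "# noqa: E501" is properly qualified.
UNQUALIFIED_NOQA = re.compile(r"#\s*noqa(?!:)", re.IGNORECASE)

def find_unqualified_noqa(lines):
    """Return (line_number, text) pairs for lines with an unqualified noqa."""
    return [(i, line) for i, line in enumerate(lines, start=1)
            if UNQUALIFIED_NOQA.search(line)]

lines = [
    "x = 1  # noqa",        # unqualified: ignores every error code
    "y = 2  # noqa E501",   # missing colon: the code after it is ignored
    "z = 3  # noqa: E501",  # qualified: suppresses only E501
]
print(find_unqualified_noqa(lines))
```

The lookahead `(?!:)` is what lets the colonless `# noqa E501` case through as a violation while accepting `# noqa: E501`.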

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Mingzhe Li
5b15f32697 rename benchmark_all_other_test (#30048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30048

as title

(Note: this ignores all push blocking failures!)

Test Plan:
```
buck run mode/opt caffe2/benchmarks/operator_benchmark:benchmark_all_other_test
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M64_N64_K64_cpu
# Input: M: 64, N: 64, K: 64, device: cpu
Forward Execution Time (us) : 142.032
...
```

Reviewed By: hl475

Differential Revision: D18580754

fbshipit-source-id: 125482d2987cbdb1d019ccedf56a9da5a7cebaba
2019-11-18 21:39:31 -08:00
Mingzhe Li
189b24ebe9 reorganize test binaries of op bench (#30023)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30023

This diff doesn't change how users run the benchmarks. But under the hood, we group all the tests into three groups: unary test, quantized test, and the rest ops (we name it others here).
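A minimal sketch of that three-way grouping (the module names and prefix rules here are made up for illustration; the real split lives in the benchmark's build targets):

```python
def group_benchmark_modules(modules):
    """Split benchmark test modules into unary, quantized, and others."""
    groups = {"unary": [], "quantized": [], "others": []}
    for mod in modules:
        if mod.startswith("unary"):
            groups["unary"].append(mod)
        elif mod.startswith("q"):   # quantized op tests use a 'q' prefix
            groups["quantized"].append(mod)
        else:
            groups["others"].append(mod)
    return groups

mods = ["unary_test", "qpool_test", "qadd_test", "add_test", "matmul_test"]
print(group_benchmark_modules(mods))
```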

Test Plan:
```
buck run //caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --iterations 1
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: abs
# Mode: Eager
# Name: abs_M512_N512_cpu
# Input: M: 512, N: 512, device: cpu
Forward Execution Time (us) : 17914.301
...
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M64_N64_K64_cpu_bwd2
# Input: M: 64, N: 64, K: 64, device: cpu
Backward Execution Time (us) : 66525.855
...
# Benchmarking PyTorch: mul
# Mode: Eager
# Name: mul_N2_dtypetorch.qint32_contigTrue
# Input: N: 2, dtype: torch.qint32, contig: True
Forward Execution Time (us) : 290.555
...
```

Reviewed By: hl475

Differential Revision: D18574719

fbshipit-source-id: f7ff1d952031129adde51ebf002e4891bd484680
2019-11-18 12:21:26 -08:00
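The `# Name:` lines in the test-plan output above pack the operator and its config into one underscore-separated string (e.g. `add_M64_N64_K64_cpu_bwd2`). A hypothetical helper, for illustration only (real names such as `fill_` contain underscores, so the harness's own naming is more involved):

```python
def parse_bench_name(name):
    """Split a benchmark name like 'add_M64_N64_K64_cpu_bwd2' into the
    operator and its config tokens. Assumes the op name itself has no
    underscores, which is not true of every op (e.g. 'fill_')."""
    op, *tokens = name.split("_")
    return op, tokens

print(parse_bench_name("add_M64_N64_K64_cpu_bwd2"))
# → ('add', ['M64', 'N64', 'K64', 'cpu', 'bwd2'])
```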
Mingzhe Li
5da2bf945e add embeddingbag to benchmark_all_test (#29830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29830

as title

Test Plan: na

Reviewed By: hl475

Differential Revision: D18506023

fbshipit-source-id: 15693894c0aa736ab3e818bc740099f0d629cb84
2019-11-14 20:13:57 -08:00
Mingzhe Li
00c224f0f2 move quantized tests from benchmark_all_test to benchmark_all_quantized_test (#29590)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29590

as title

Test Plan:
```
buck run //caffe2/benchmarks/operator_benchmark:benchmark_all_test  -- --iteration 1
Parsing buck files: finished in 1.0 sec
Creating action graph: finished in 43.0 sec
Building: finished in 16.0 sec (100%) 10053/10053 jobs, 1 updated
  Total time: 01:00.0 min
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M64_N64_K64_cpu
# Input: M: 64, N: 64, K: 64, device: cpu
Forward Execution Time (us) : 45419.667
...

buck run //caffe2/benchmarks/operator_benchmark:benchmark_all_quantized_test
Parsing buck files: finished in 1.0 sec
Building: finished in 6.0 sec (100%) 10053/10053 jobs, 1 updated
  Total time: 7.0 sec
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: QReLU
# Mode: Eager
# Name: QReLU_dims(1,)_permute_dimsFalse_inplaceFalse_dtypetorch.quint8
# Input: dims: (1,), permute_dims: False, inplace: False, dtype: torch.quint8
Forward Execution Time (us) : 137.685
...
```

Reviewed By: hl475

Differential Revision: D18436727

fbshipit-source-id: 317ec0e4bd2a6e33c9a60830f01ed805ae412449
2019-11-11 14:59:29 -08:00
Zafar Takhirov
a47fe40729 qpool benchmarking
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29250

Test Plan: Imported from OSS

Differential Revision: D18339142

Pulled By: z-a-f

fbshipit-source-id: 1d2a3dda15ab300ffa63719158a4788b7fb17df5
2019-11-09 17:52:31 -08:00
Zafar Takhirov
fb2eb01955 qadd benchmarking
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29420

Test Plan: Imported from OSS

Differential Revision: D18383402

Pulled By: z-a-f

fbshipit-source-id: 8ea2f689b7df676ffb8adef0cbb058a7a2123938
2019-11-09 14:20:28 -08:00
Zafar Takhirov
d545e4f155 qrelu benchmarking
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29174

Test Plan: Imported from OSS

Differential Revision: D18319345

Pulled By: z-a-f

fbshipit-source-id: b64f0131296771ed201d85664930cceb7be185bd
2019-11-05 17:20:40 -08:00
Mingzhe Li
fcd6a8252c add shapes for fill benchmark (#28966)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28966

as title

Test Plan:
```
buck run mode/opt //caffe2/benchmarks/operator_benchmark/pt:fill_test
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: fill_
# Mode: Eager
# Name: fill__N1024_cpu_dtypetorch.int32
# Input: N: 1024, device: cpu, dtype: torch.int32
Forward Execution Time (us) : 2.008
```

Reviewed By: hl475

Differential Revision: D18241521

fbshipit-source-id: 6eb6e1ab7e8a2f461c6fc537f5bb971d12f594c3
2019-10-31 13:28:49 -07:00
Mingzhe Li
9034762a7d add more operators to benchmark_all_test (#28968)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28968

Add fill and as_strided operators.

Test Plan:
```
buck run mode/opt //caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --list_ops
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# List of Operators to run:
# round_
# exponential_
# QLinear
...
```

Reviewed By: hl475

Differential Revision: D18241522

fbshipit-source-id: aade1d68a68a660d19d8dfd980eb4d5d0891488b
2019-10-31 13:28:39 -07:00
Rohan Varma
4b77cae360 Add qconv_test to benchmarking tests (#24913)
Summary:
Adds the tests defined in `qconv_test.py` to `benchmark_all_test.py` so that they are run by `benchmark_all_test`.

The next diff will create another `ai_benchmark_test` specifying the qconv operations similar to D16768680. Since AI-PEP integrates with benchmark_all_tests, this should add these qconv benchmarks to AI-PEP.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24913

Test Plan:
`buck run mode/opt caffe2/benchmarks/operator_benchmark:benchmark_all_test` (runs only tests whose `tag` is `short`)

`buck run mode/opt caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --tag_filter resnext101_32x4d` (runs tests whose `tag` is `resnext101_32x4d`).

This runs the tests for all the imported modules in `benchmark_all_test.py` (i.e. add_test, batchnorm_test, qconv_test, etc)

```
buck run mode/opt caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --operators QConv2d,QLinear
```
tests the QConv and QLinear operators
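A minimal sketch of that tag/operator selection (the helper name and data shape are hypothetical; the real `operator_benchmark` harness does its own argument parsing):

```python
def select_benchmarks(benchmarks, tag_filter="short", operators=None):
    """Filter (op_name, tag) pairs the way --tag_filter and --operators
    are described above: keep matching tags, and, if an operator list is
    given, only the named ops."""
    wanted_ops = set(operators.split(",")) if operators else None
    return [(op, tag) for op, tag in benchmarks
            if tag == tag_filter and (wanted_ops is None or op in wanted_ops)]

benches = [("QConv2d", "short"), ("QConv2d", "resnext101_32x4d"),
           ("QLinear", "short"), ("add", "short")]
print(select_benchmarks(benches, operators="QConv2d,QLinear"))
# → [('QConv2d', 'short'), ('QLinear', 'short')]
```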

Relevant output for `qconv_test.py` (for short tag):

```
# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC64_OC128_H56_W56_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 64, OC: 128, H: 56, W: 56, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 957.848

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC256_OC256_H56_W56_G32_kernel3_stride1_pad1
# Input: N: 1, IC: 256, OC: 256, H: 56, W: 56, G: 32, kernel: 3, stride: 1, pad: 1
Forward Execution Time (us) : 3638.806

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC256_OC256_H56_W56_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 256, OC: 256, H: 56, W: 56, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 3870.311

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC512_OC512_H56_W56_G32_kernel3_stride2_pad1
# Input: N: 1, IC: 512, OC: 512, H: 56, W: 56, G: 32, kernel: 3, stride: 2, pad: 1
Forward Execution Time (us) : 10052.192
```

For resnext tag:

```
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : resnext101_32x4d

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC512_OC512_H14_W14_G32_kernel3_stride1_pad1
# Input: N: 1, IC: 512, OC: 512, H: 14, W: 14, G: 32, kernel: 3, stride: 1, pad: 1
Forward Execution Time (us) : 543.171

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC512_OC1024_H28_W28_G1_kernel1_stride2_pad0
# Input: N: 1, IC: 512, OC: 1024, H: 28, W: 28, G: 1, kernel: 1, stride: 2, pad: 0
Forward Execution Time (us) : 1914.301

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC512_OC256_H28_W28_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 512, OC: 256, H: 28, W: 28, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 1809.069

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC512_OC512_H28_W28_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 512, OC: 512, H: 28, W: 28, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 3100.579

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC512_OC512_H28_W28_G32_kernel3_stride2_pad1
# Input: N: 1, IC: 512, OC: 512, H: 28, W: 28, G: 32, kernel: 3, stride: 2, pad: 1
Forward Execution Time (us) : 2247.540

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC64_OC128_H56_W56_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 64, OC: 128, H: 56, W: 56, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 1001.731

# Benchmarking PyTorch: QConv2d
# Mode: Eager
# Name: QConv2d_N1_IC64_OC256_H56_W56_G1_kernel1_stride1_pad0
# Input: N: 1, IC: 64, OC: 256, H: 56, W: 56, G: 1, kernel: 1, stride: 1, pad: 0
Forward Execution Time (us) : 1571.620
```

Differential Revision: D16908445

Pulled By: rohan-varma

fbshipit-source-id: b711bc3591ce5dcd3ab2521134cff2b12188e3ac
2019-08-22 11:28:49 -07:00
Mingzhe Li
7499fe72e9 remove c2 tests from benchmark_all_test (#23437)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23437

as title

Reviewed By: hl475

Differential Revision: D16519770

fbshipit-source-id: 63fc269e18c264d399e25f44b03f81fc3ae01113
2019-07-26 11:12:53 -07:00
Mingzhe Li
7eb0319339 add new tests to benchmark_all_test (#22787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22787

as title

Reviewed By: hl475

Differential Revision: D16219329

fbshipit-source-id: 097ee73e7644d5ca482ad044d0fd2c3e7dc2c10b
2019-07-11 22:50:55 -07:00
Mingzhe Li
a5cf6d5100 reorganize op bench directory (#21543)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21543

No code change in this diff.

Reviewed By: hl475

Differential Revision: D15721419

fbshipit-source-id: 06212cc882f5297064153417dc4d80bce9ec2667
2019-06-07 16:06:51 -07:00