Xuehai Pan
42015db6a9
[BE] fix typos in benchmarks/ (#156077)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156077
Approved by: https://github.com/Skylion007, https://github.com/malfet
ghstack dependencies: #156069
2025-06-17 13:12:18 +00:00
Anthony Shoumikhin
e2f9759bd0
Fix broken URLs (#152237)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152237
Approved by: https://github.com/huydhn, https://github.com/malfet
2025-04-27 09:56:42 +00:00
cyy
2fd75667b4
[Caffe2] Remove Caffe2 scripts and benchmarks (#126747)
...
Due to removal of Caffe2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126747
Approved by: https://github.com/ezyang, https://github.com/malfet
2024-06-05 23:46:31 +00:00
Aaron Gokaslan
29cc293725
[BE]: FURB142 - Remove set mutations. Use set update (#124551)
...
Uses set mutation methods (update, difference_update, etc.) instead of manually reimplementing them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124551
Approved by: https://github.com/ezyang
2024-04-21 14:12:33 +00:00
Aaron Gokaslan
bd10fea79a
[BE]: Enable F821 and fix bugs (#116579)
...
Fixes #112371
I tried to fix as many of the bugs as I could; for a few I could not figure out the proper fix, so I left them with noqa comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116579
Approved by: https://github.com/ezyang
2024-01-01 08:40:46 +00:00
baocheny
e01e00fba8
fix code spelling (#116530)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116530
Approved by: https://github.com/albanD
2023-12-29 12:58:38 +00:00
FFFrog
9a1cdcb8a0
Format: fixing multiple string concatenations in a single line (#106013)
...
Fixing multiple string concatenations in a single line.
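For illustration, a minimal example (not from the PR) of the pattern this change fixes: two adjacent string literals on one line are an implicit concatenation, and the fix merges them into a single literal.

```python
# Implicit single-line concatenation of adjacent literals (the flagged style).
flagged = "Hello, " "world"
# The fix: the same value written as one literal.
fixed = "Hello, world"
print(flagged == fixed)  # True
```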
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106013
Approved by: https://github.com/albanD
2023-07-26 18:39:18 +00:00
Edward Z. Yang
dd3a77bc96
Apply UFMT to all files in benchmarks/ (#105928)
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105928
Approved by: https://github.com/albanD
2023-07-26 01:18:48 +00:00
Justin Chu
5ef023b05a
[BE] Enable ruff's UP rules and autoformat benchmarks/ (#105429)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105429
Approved by: https://github.com/malfet
2023-07-19 04:46:37 +00:00
Aaron Gokaslan
8fce9a09cd
[BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
...
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR does only two things: removes the need to inherit from object and removes unused `__future__` imports.
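As a hedged illustration (the class name `Linear` is invented, not taken from the PR), the two changes look like this:

```python
# Before (Python 2 compatible):
#     from __future__ import absolute_import, division, print_function
#     class Linear(object):
#         pass

# After (Python 3 only): the __future__ imports are redundant and classes
# inherit from object implicitly.
class Linear:
    pass

# The class is still a subclass of object.
print(issubclass(Linear, object))  # True
```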
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
Xiang Gao
20ac736200
Remove py2-compatible future imports (#44735)
...
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735
Reviewed By: mruberry
Differential Revision: D23731306
Pulled By: ezyang
fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Mingzhe Li
4ddf27ba48
[op-bench] check device attribute in user inputs
...
Summary: The device attribute in the op benchmark can only be 'cpu' or 'cuda', so this diff adds a check.
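A minimal sketch of the kind of validation described, with assumed names (`check_device` is not necessarily the actual function in the benchmark code):

```python
ALLOWED_DEVICES = {"cpu", "cuda"}

def check_device(device: str) -> str:
    """Reject any device attribute other than 'cpu' or 'cuda'."""
    if device not in ALLOWED_DEVICES:
        raise ValueError(
            f"Invalid device {device!r}; expected one of {sorted(ALLOWED_DEVICES)}"
        )
    return device
```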
Test Plan: buck run caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --warmup_iterations 1 --iterations 1
Reviewed By: ngimel
Differential Revision: D22538252
fbshipit-source-id: 3e5af72221fc056b8d867321ad22e35a2557b8c3
2020-07-14 17:17:59 -07:00
Mingzhe Li
9cb8fb61c2
update operator_range description in op bench (#30170)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30170
as title
Test Plan:
```
buck-out/opt/gen/caffe2/benchmarks/operator_benchmark/benchmark_all_other_test.par --tag_filter all --iterations 1 --operator_range ef
...
ValueError: The correct format for operator_range is <start>-<end>, or <point>, <start>-<end>
buck-out/opt/gen/caffe2/benchmarks/operator_benchmark/benchmark_all_other_test.par --tag_filter all --iterations 1 --operator_range a-b
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : all
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N32_K256_cpu
# Input: M: 8, N: 32, K: 256, device: cpu
Forward Execution Time (us) : 60.551
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N32_K256_cuda
# Input: M: 8, N: 32, K: 256, device: cuda
Forward Execution Time (us) : 67.716
...
buck-out/opt/gen/caffe2/benchmarks/operator_benchmark/benchmark_all_other_test.par --tag_filter all --iterations 1 --operator_range b,d-f
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : all
# Benchmarking PyTorch: batchnorm
# Mode: Eager
# Name: batchnorm_M1_N256_K3136_cpu
# Input: M: 1, N: 256, K: 3136, device: cpu
Forward Execution Time (us) : 296.004
...
Reviewed By: hl475
Differential Revision: D18619975
fbshipit-source-id: 08f27ee2aeda47be431385f4b20ef7fbeb797516
2019-11-20 12:07:14 -08:00
Mingzhe Li
2b1466e665
allow operator_range to take multiple ranges (#30124)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30124
as title
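A hedged sketch of how a multi-range filter such as `b,d-f` could be parsed; `parse_operator_range` and `matches` are assumed names for illustration, not the actual benchmark code:

```python
def parse_operator_range(spec):
    """Parse comma-separated tokens that are either a single character
    ('b') or a hyphen-delimited start-end pair ('d-f')."""
    ranges = []
    for token in spec.split(","):
        if "-" in token:
            start, end = token.split("-")
        else:
            start = end = token
        ranges.append((start, end))
    return ranges

def matches(op_name, spec):
    """An operator is selected if its first character falls in any range."""
    return any(s <= op_name[0].lower() <= e for s, e in parse_operator_range(spec))

print(matches("batchnorm", "b,d-f"))  # True
print(matches("add", "b,d-f"))        # False
```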
Test Plan:
```
buck run mode/opt //caffe2/benchmarks/operator_benchmark:benchmark_all_other_test -- --tag_filter all --iterations 1 --device cuda --operator_range a,b-c
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : all
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N32_K256_cuda
# Input: M: 8, N: 32, K: 256, device: cuda
Forward Execution Time (us) : 71.683
# Benchmarking PyTorch: batchnorm
# Mode: Eager
# Name: batchnorm_M1_N256_K3136_cuda
# Input: M: 1, N: 256, K: 3136, device: cuda
Forward Execution Time (us) : 118.840
# Benchmarking PyTorch: batchnorm
# Mode: Eager
# Name: batchnorm_M1_N8192_K1_cuda
# Input: M: 1, N: 8192, K: 1, device: cuda
Forward Execution Time (us) : 134.274
# Benchmarking PyTorch: cat
# Mode: Eager
# Name: cat_M128_N128_K1_dim1_cuda
# Input: M: 128, N: 128, K: 1, dim: 1, device: cuda
Forward Execution Time (us) : 109.172
...
Reviewed By: hl475
Differential Revision: D18605640
fbshipit-source-id: 4ae9b91a50c4cdf1b161b6c5c58f365ba514050c
2019-11-19 16:15:46 -08:00
Mingzhe Li
23991e89cc
change operator_range to work with lower and upper in op bench (#30096)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30096
as title
Test Plan:
```
buck run mode/opt caffe2/benchmarks/operator_benchmark:benchmark_all_quantized_test -- --iterations 1 --operator_range a-a
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_N2_dtypetorch.quint8_contigTrue
# Input: N: 2, dtype: torch.quint8, contig: True
Forward Execution Time (us) : 22.251
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_N2_dtypetorch.qint8_contigTrue
# Input: N: 2, dtype: torch.qint8, contig: True
Forward Execution Time (us) : 17.247
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_N2_dtypetorch.qint32_contigTrue
# Input: N: 2, dtype: torch.qint32, contig: True
Forward Execution Time (us) : 29.653
...
Reviewed By: hl475
Differential Revision: D18596447
fbshipit-source-id: eac8d9d90db244aa9799293c22bb0d30cf3edf58
2019-11-19 11:01:02 -08:00
Mingzhe Li
8b9bac1fad
add operator-range argument to the op bench (#30051)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30051
This argument takes hyphen-delimited start and end characters to filter operators. If the first character of an operator's name is in the start-end range, it is tested; otherwise it is skipped.
(Note: this ignores all push blocking failures!)
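The filtering described above could be sketched as follows; `in_operator_range` is an assumed name, not the actual implementation (the `None` case mirrors the `--operator_range None` run in the test plan):

```python
def in_operator_range(op_name: str, operator_range) -> bool:
    """Return True if op_name's first character falls in a hyphen-delimited
    start-end range such as 'b-c', or if no range was given."""
    if operator_range is None or operator_range == "None":
        return True  # no filter: test every operator
    start, end = operator_range.split("-")
    return start <= op_name[0].lower() <= end

print(in_operator_range("ceil", "b-c"))  # True
print(in_operator_range("abs", "b-c"))   # False
```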
Test Plan:
```
buck run mode/opt caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --iterations 1 --operator_range b-c
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: ceil
# Mode: Eager
# Name: ceil_M512_N512_cpu
# Input: M: 512, N: 512, device: cpu
Forward Execution Time (us) : 110.720
# Benchmarking PyTorch: ceil_
# Mode: Eager
# Name: ceil__M512_N512_cpu
# Input: M: 512, N: 512, device: cpu
Forward Execution Time (us) : 51.128
...
buck run mode/opt caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --iterations 1 --operator_range None
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: abs
# Mode: Eager
# Name: abs_M512_N512_cpu
# Input: M: 512, N: 512, device: cpu
Forward Execution Time (us) : 107.113
# Benchmarking PyTorch: abs_
# Mode: Eager
# Name: abs__M512_N512_cpu
# Input: M: 512, N: 512, device: cpu
Forward Execution Time (us) : 54.259
...
Reviewed By: hl475
Differential Revision: D18581910
fbshipit-source-id: b1a1a7ba76f4d6a61c8a1659f15e9c66097654d4
2019-11-18 20:34:43 -08:00
Mingzhe Li
7374dd0d52
remove SkipInputShape flag (#29615)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29615
Remove that flag as it's not needed any more.
Test Plan: na
Reviewed By: hl475
Differential Revision: D18440271
fbshipit-source-id: 41b0659c72ef746a1cc268174fd1e7dc2beb1ae2
2019-11-11 16:56:40 -08:00
Mingzhe Li
f63cbf3ae2
change op benchmark forward_only flag (#28967)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28967
Change the forward_only flag to take True or False so it can be integrated with PEP.
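Boolean CLI flags like `--forward_only True/False` need an explicit converter, since argparse's `type=bool` treats any non-empty string (including "False") as truthy. A hedged sketch assuming a helper named `str2bool`, which is not necessarily what the diff uses:

```python
import argparse

def str2bool(value):
    """Convert 'True'/'False'-style strings to real booleans."""
    if value.lower() in ("true", "1", "yes"):
        return True
    if value.lower() in ("false", "0", "no"):
        return False
    raise argparse.ArgumentTypeError(f"Boolean value expected, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--forward_only", type=str2bool, default=False)
args = parser.parse_args(["--forward_only", "True"])
print(args.forward_only)  # True
```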
Test Plan:
```
[mingzhe0908@devgpu203.prn2 ~/fbsource/fbcode] ~/fbsource/fbcode/buck-out/opt/gen/caffe2/benchmarks/operator_benchmark/pt/add_test.par --forward_only True --iterations 1
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M64_N64_K64_cpu
# Input: M: 64, N: 64, K: 64, device: cpu
Forward Execution Time (us) : 152.489
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M64_N64_K128_cpu
# Input: M: 64, N: 64, K: 128, device: cpu
Forward Execution Time (us) : 236.608
[mingzhe0908@devgpu203.prn2 ~/fbsource/fbcode] ~/fbsource/fbcode/buck-out/opt/gen/caffe2/benchmarks/operator_benchmark/pt/add_test.par --forward_only False --iterations 1
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M64_N64_K64_cpu
# Input: M: 64, N: 64, K: 64, device: cpu
Forward Execution Time (us) : 147.174
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M64_N64_K128_cpu
# Input: M: 64, N: 64, K: 128, device: cpu
Forward Execution Time (us) : 253.437
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M64_N64_K64_cpu_bwdall
# Input: M: 64, N: 64, K: 64, device: cpu
Backward Execution Time (us) : 1044.082
Reviewed By: hl475
Differential Revision: D18247416
fbshipit-source-id: 1c6cff1ac98233d4f0ca298e0cb4a0d3466e5834
2019-10-31 13:28:58 -07:00
Mingzhe Li
ab15584dce
add random sample function to generate a list of inputs (#23174)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23174
This diff introduces a new function that randomly generates inputs based on their weights.
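The commit does not show the implementation; a minimal sketch of weighted random input generation using only the standard library:

```python
import random

# Each (M, N, K) shape is drawn with probability proportional to its weight.
random.seed(0)  # deterministic for the example
shapes = [(1, 5, 7), (1, 6, 8), (2, 6, 7)]
weights = [0.5, 0.3, 0.2]
samples = random.choices(shapes, weights=weights, k=3)
print(samples)
```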
Test Plan:
buck run mode/dev-nosan //caffe2/benchmarks/operator_benchmark/common/tests:random_sample_test -- --iterations 3
```
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M1_N5_K7
# Input: M: 1, N: 5, K: 7
Forward Execution Time (us) : 82.923
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M1_N6_K8
# Input: M: 1, N: 6, K: 8
Forward Execution Time (us) : 79.535
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M2_N6_K7
# Input: M: 2, N: 6, K: 7
Forward Execution Time (us) : 83.471
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M1_N4_K7
# Input: M: 1, N: 4, K: 7
Forward Execution Time (us) : 84.410
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M1_N6_K7
# Input: M: 1, N: 6, K: 7
Forward Execution Time (us) : 82.399
```
Reviewed By: zheng-xq
Differential Revision: D15791723
fbshipit-source-id: 730e34d455e962ddf594a491d7c81c3f99fafa86
2019-10-09 11:24:14 -07:00
Mingzhe Li
a750a1a2b4
modify config_list to support cross product of attributes (#23399)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23399
This diff enables the config_list function to support a cross product of inputs beyond the shapes.
The following is an example using the updated interface. The same input shapes can run on different devices and dtypes.
```
add_short_configs = op_bench.config_list(
    attr_names=['M', 'N', 'K'],
    attrs=[
        [8, 16, 32],
        [16, 16, 64],
        [64, 64, 128],
    ],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dtype': [torch.float, torch.float64],
    },
    tags=['short'],
)
```
Test Plan:
buck run mode/dev-nosan caffe2/benchmarks/operator_benchmark/common/tests:pt_configs_list_test -- --iterations 3
```
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecpu_dtypetorch.float32
# Input: M: 8, N: 16, K: 32, device: cpu, dtype: torch.float32
Forward Execution Time (us) : 164.489
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecpu_dtypetorch.float64
# Input: M: 8, N: 16, K: 32, device: cpu, dtype: torch.float64
Forward Execution Time (us) : 158.677
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecuda_dtypetorch.float32
# Input: M: 8, N: 16, K: 32, device: cuda, dtype: torch.float32
Forward Execution Time (us) : 103.866
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecuda_dtypetorch.float64
# Input: M: 8, N: 16, K: 32, device: cuda, dtype: torch.float64
Forward Execution Time (us) : 106.027
# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M16_N16_K64_devicecpu_dtypetorch.float32
# Input: M: 16, N: 16, K: 64, device: cpu, dtype: torch.float32
Forward Execution Time (us) : 451.016
...
```
buck test caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test
```
Building: finished in 2.4 sec (100%) 6882/6882 jobs, 2 updated
Total time: 2.8 sec
Trace available for this run at /tmp/testpilot.20190730-160519.3952794.log
TestPilot test runner for Facebook. See https://fburl.com/testpilot for details.
Testpilot build revision 203f0104fbfcec4128be2c482c64736309ae39c9 fbpkg a4b2a9897a0c45069bd07d83e5981052 at Sun Jul 28 01:22:13 2019 by twsvcscm from /data/fbprojects/packages/testinfra.testpilot/667/t.par
Discovering tests
Running 3 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/5910974514382830
✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_config_list_impl (operator_benchmark_test.TestConsumeOp) 0.011 1/3 (passed)
✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_list_of_ops (operator_benchmark_test.TestConsumeOp) 19.920 2/3 (passed)
✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_single_op (operator_benchmark_test.TestConsumeOp) 23.418 3/3 (passed)
✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - main 0.000 (passed)
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/5910974514382830
Summary (total time 29.90s):
PASS: 4
FAIL: 0
SKIP: 0
FATAL: 0
TIMEOUT: 0
OMIT: 0
```
Reviewed By: zheng-xq
Differential Revision: D16501272
fbshipit-source-id: d92b5cf50b0f37d5b3a79d423acb521366b4e8db
2019-10-09 11:24:06 -07:00
Mingzhe Li
f0ebf769de
allow accepting empty input to the benchmark (#23462)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23462
as title
Reviewed By: hl475
Differential Revision: D16527176
fbshipit-source-id: 7a8ff4f3c6122ae7b3205e0b446fec06fd95eedc
2019-07-26 17:30:42 -07:00
Mingzhe Li
828c08b4c7
allow passing a list of operators to benchmark (#23442)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23442
Rename the argument from `operator` to `operators`, which can take a list of operators to test.
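A hedged sketch (assumed helper name, not the actual code) of how a comma-separated `--operators` value could be turned into a list:

```python
def parse_operators(spec):
    """Split a comma-separated --operators value into a list of names;
    None means no filter (run everything)."""
    if spec is None:
        return None
    return [name.strip() for name in spec.split(",") if name.strip()]

print(parse_operators("add,matmul"))  # ['add', 'matmul']
```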
Reviewed By: hl475
Differential Revision: D16520779
fbshipit-source-id: 94284a87c64471793e319f5bd3143f89b9a192bb
2019-07-26 12:20:36 -07:00
Mingzhe Li
3516f3c235
handle exit from init method (#21211)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21211
There are cases where the `init` method used to create inputs can exit with an error. When this happens, that specific input should be skipped.
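A minimal sketch of the skip-on-failure behavior described, with assumed names (`build_inputs` is illustrative, not the actual code):

```python
def build_inputs(configs, init):
    """Call init(**config) for each config; configs whose init raises
    are skipped rather than aborting the whole run."""
    inputs = []
    for config in configs:
        try:
            inputs.append(init(**config))
        except Exception as exc:  # init exited with an error -> skip this input
            print(f"Skipping input {config}: {exc}")
    return inputs
```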
Reviewed By: zheng-xq
Differential Revision: D15466410
fbshipit-source-id: 55e86764b2ec56f7730349ff1df6e50efc0239d7
2019-07-25 21:41:06 -07:00
Mingzhe Li
2b2fe525b9
introduce a new interface to add a list of operators (#21209)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21209
This diff introduces a new interface to add a list of operators. Here are the steps to add ops using this interface:
- create an op_list:
```
unary_ops_list = op_bench.op_list(
    attr_names=["op_name", "op_function"],
    attrs=[
        ["abs", torch.abs],
        ["abs_", torch.abs_],
    ],
)
```
- create a benchmark class:
```
class UnaryOpBenchmark(op_bench.TorchBenchmarkBase):
    def init(self, M, N, op_function):
        self.input_one = torch.rand(M, N)
        self.op_func = op_function

    def forward(self):
        return self.op_func(self.input_one)
```
- register those ops:
```
op_bench.generate_pt_tests_from_list(unary_ops_list, unary_ops_configs, UnaryOpBenchmark)
```
Reviewed By: zheng-xq
Differential Revision: D15514188
fbshipit-source-id: f09b359cab8175eeb8d51b3ad7bbbcfbc9f6430f
2019-07-09 16:41:29 -07:00
Mingzhe Li
325ec2327f
create tensor based on provided datatype (#22468)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22468
as title
Reviewed By: ajauhri
Differential Revision: D15744503
fbshipit-source-id: 050b32dd7f135512385fc04f098c376c664211a9
2019-07-03 17:08:23 -07:00
Mingzhe Li
a4f281446b
introduce flags to set omp and mkl threads (#21472)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21472
as title
Reviewed By: hl475
Differential Revision: D15695846
fbshipit-source-id: 44437f6b94a9c583275fcc711bb6ccf2b04f90fc
2019-06-26 09:33:05 -07:00
Mingzhe Li
31089b02ce
introduce a new interface to add op [core changes] (#21147)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21147
This diff introduces a new interface to add PT/C2 operators to the benchmark suite.
The following steps are needed to add a new operator:
1. Specify the input shapes and args for an operator in configs.
2. Create a PT/C2 benchmark class that includes `init` (create tensors), `forward` (specify the operator to be tested), and `backward` (gradient of an op) methods.
3. Call generate_pt_test/generate_c2_test to create test cases based on the configs.
Reviewed By: zheng-xq
Differential Revision: D15250380
fbshipit-source-id: 1025a7cf60d2427baa0f3f716455946d3d3e6a27
2019-05-31 09:21:04 -07:00
Ilia Cherniavskii
19e6886576
Intra-op parallel microbenchmarks for PT (#19997)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19997
ghimport-source-id: 420d4a68a1ef879beee2734adba8abb575e0b0ab
Differential Revision: D15231375
Pulled By: ilia-cher
fbshipit-source-id: ce7248ea2ebb54d25c9d831c6e3f23f3534557dd
2019-05-06 20:21:45 -07:00
Ilia Cherniavskii
8c97f0b19e
Initialize Caffe2 only when running Caffe2 benchmarks (#19980)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19980
ghimport-source-id: ca31ca25b88a1c6219e4a32483f70738a8fdbf88
Differential Revision: D15229797
Pulled By: ilia-cher
fbshipit-source-id: 0b23dbdba0c0f60932a75d8b1900c54285f5a8e4
2019-05-06 19:17:23 -07:00
Mingzhe Li
08f5c05d60
make separate operators as independent binaries (#19450)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19450
We want to make each operator benchmark a separate binary. Previously the benchmarks were run by collecting all operators into a single binary, which is unnecessary when we want to filter for a specific operator. This diff resolves that issue.
Reviewed By: ilia-cher
Differential Revision: D14808159
fbshipit-source-id: 43cd25b219c6e358d0cd2a61463b34596bf3bfac
2019-04-18 20:00:47 -07:00
Mingzhe Li
45d5b6be48
Enhance front-end to add op (#19433)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19433
For the operator benchmark project, we need to cover a lot of operators, so the interface for adding operators needs to be very clean and simple. This diff implements a new interface to add an op.
Here is the logic for adding a new operator to the benchmark:
```
long_config = {}
short_config = {}
map_func
add_test(
    [long_config, short_config],
    map_func,
    [caffe2 op],
    [pt op],
)
```
Reviewed By: zheng-xq
Differential Revision: D14791191
fbshipit-source-id: ac6738507cf1b9d6013dc8e546a2022a9b177f05
2019-04-18 17:07:02 -07:00
Mingzhe Li
5f5a2aaab9
Operator-level performance microbenchmarks (#18740)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18740
Test utilities for writing Caffe2/PyTorch performance microbenchmarks. Brief description of the file structure:
* benchmark_core.py: core utilities for running microbenchmark tests
* benchmark_caffe2.py: Caffe2-specific benchmark utilities
* benchmark_pytorch.py: PyTorch-specific benchmark utilities
* benchmark_runner.py: main entry point. Currently it can run the microbenchmark tests in stand-alone mode. The next step is to integrate this with AI-PEP.
The utilities are located at https://github.com/pytorch/pytorch/tree/master/test to have access to the Python frontends of both Caffe2 and PyTorch.
Includes two operator microbenchmarks, supporting both Caffe2 and PyTorch:
* MatMul
* Add
Reference: PyTorch benchmarks: https://github.com/pytorch/benchmark/tree/master/timing/python. In this work we start with the two example binary operators MatMul and Add, but eventually we should also cover unary operators as in the PyTorch benchmark repo.
Reviewed By: zheng-xq
Differential Revision: D13887111
fbshipit-source-id: b7a56b95448c9ec3e674b0de0ffb96af4439bfce
2019-04-02 17:06:19 -07:00