modify config_list to support cross product of attributes (#23399)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23399

This diff enables the config_list function to support a cross product of additional attributes besides the input shapes.

The following is an example using the updated interface. The same input shapes can now run across different devices and dtypes.
```
add_short_configs = op_bench.config_list(
    attr_names=['M', 'N', 'K'],
    attrs=[
        [8, 16, 32],
        [16, 16, 64],
        [64, 64, 128],
    ],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dtype': [torch.float, torch.float64],
    },
    tags=['short'],
)
```
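To make the expansion concrete: the three shapes above are each paired with every device/dtype combination, yielding 3 × 2 × 2 = 12 benchmark configs. A minimal standalone sketch of that expansion (not the library code; dtypes shown as strings so the snippet has no torch dependency):

```python
# Illustration only: how 3 shapes combine with 2 devices and 2 dtypes
# into 3 * 2 * 2 = 12 benchmark configs.
import itertools

shapes = [[8, 16, 32], [16, 16, 64], [64, 64, 128]]
cross = {'device': ['cpu', 'cuda'],
         'dtype': ['torch.float32', 'torch.float64']}

configs = [
    shape + list(combo)
    for shape in shapes
    for combo in itertools.product(*cross.values())
]
print(len(configs))   # 12: every shape runs on every device/dtype pair
print(configs[0])     # [8, 16, 32, 'cpu', 'torch.float32']
```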

Test Plan:
buck run mode/dev-nosan caffe2/benchmarks/operator_benchmark/common/tests:pt_configs_list_test -- --iterations 3

```
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecpu_dtypetorch.float32
# Input: M: 8, N: 16, K: 32, device: cpu, dtype: torch.float32
Forward Execution Time (us) : 164.489

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecpu_dtypetorch.float64
# Input: M: 8, N: 16, K: 32, device: cpu, dtype: torch.float64
Forward Execution Time (us) : 158.677

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecuda_dtypetorch.float32
# Input: M: 8, N: 16, K: 32, device: cuda, dtype: torch.float32
Forward Execution Time (us) : 103.866

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecuda_dtypetorch.float64
# Input: M: 8, N: 16, K: 32, device: cuda, dtype: torch.float64
Forward Execution Time (us) : 106.027

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M16_N16_K64_devicecpu_dtypetorch.float32
# Input: M: 16, N: 16, K: 64, device: cpu, dtype: torch.float32
Forward Execution Time (us) : 451.016
...
```

buck test caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test

```
Building: finished in 2.4 sec (100%) 6882/6882 jobs, 2 updated
  Total time: 2.8 sec
Trace available for this run at /tmp/testpilot.20190730-160519.3952794.log
TestPilot test runner for Facebook. See https://fburl.com/testpilot for details.
Testpilot build revision 203f0104fbfcec4128be2c482c64736309ae39c9 fbpkg a4b2a9897a0c45069bd07d83e5981052 at Sun Jul 28 01:22:13 2019 by twsvcscm from /data/fbprojects/packages/testinfra.testpilot/667/t.par
Discovering tests
Running 3 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/5910974514382830
      ✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_config_list_impl (operator_benchmark_test.TestConsumeOp) 0.011 1/3 (passed)
      ✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_list_of_ops (operator_benchmark_test.TestConsumeOp) 19.920 2/3 (passed)
      ✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_single_op (operator_benchmark_test.TestConsumeOp) 23.418 3/3 (passed)
      ✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - main 0.000 (passed)
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/5910974514382830
Summary (total time 29.90s):
  PASS: 4
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

Reviewed By: zheng-xq

Differential Revision: D16501272

fbshipit-source-id: d92b5cf50b0f37d5b3a79d423acb521366b4e8db
This commit is contained in:
commit a750a1a2b4 (parent b9b9fd4fad)
Mingzhe Li, 2019-10-09 11:21:51 -07:00, committed by Facebook Github Bot
2 changed files with 81 additions and 15 deletions

@@ -109,29 +109,55 @@ def cross_product_configs(**configs):
```
 def config_list(**configs):
-    """
-    Take specific inputs from users
-    For example, given
+    """Generate configs based on the list of input shapes.
+    This function takes input shapes specified in a list from the user. Besides
+    that, all other parameters will be cross producted first, and each of the
+    generated lists will be merged with the input shapes list.
+
+    Reserved Args:
+        attr_names(reserved): a list of names for input shapes.
+        attrs(reserved): a list of values for each input shape.
+        cross_product_configs: a dictionary of attributes which will be
+            cross producted with the input shapes.
+        tags(reserved): a tag used to filter inputs.
+
+    Here is an example:
     attrs = [
         [1, 2],
         [4, 5],
-    ]
-    attr_names = ["M", "N"]
-    we will generate (({'M': 1}, {'N': 2}),
-                      ({'M': 4}, {'N': 5}))
+    ],
+    attr_names = ['M', 'N'],
+    cross_product_configs={
+        'device': ['cpu', 'cuda'],
+    },
+    we will generate [[{'M': 1}, {'N': 2}, {'device': 'cpu'}],
+                      [{'M': 1}, {'N': 2}, {'device': 'cuda'}],
+                      [{'M': 4}, {'N': 5}, {'device': 'cpu'}],
+                      [{'M': 4}, {'N': 5}, {'device': 'cuda'}]]
     """
     generated_configs = []
-    if "attrs" not in configs:
+    reserved_names = ['attrs', 'attr_names', 'tags']
+    if any(attr not in configs for attr in reserved_names):
         raise ValueError("Missing attrs in configs")
-    for inputs in configs["attrs"]:
-        tmp_result = [{configs["attr_names"][i]: input_value}
+    cross_configs = None
+    if 'cross_product_configs' in configs:
+        cross_configs = cross_product_configs(**configs['cross_product_configs'])
+    for inputs in configs['attrs']:
+        tmp_result = [{configs['attr_names'][i]: input_value}
                       for i, input_value in enumerate(inputs)]
         # TODO(mingzhe0908):
-        # If multiple "tags" were provided, do they get concatenated?
-        # If a config has both ["short", "medium"], should it match
-        # both the "short" and "medium" tag-filters?
-        tmp_result.append({"tags": '_'.join(configs["tags"])})
-        generated_configs.append(tmp_result)
+        # If multiple 'tags' were provided, do they get concatenated?
+        # If a config has both ['short', 'medium'], should it match
+        # both the 'short' and 'medium' tag-filters?
+        tmp_result.append({'tags': '_'.join(configs['tags'])})
+        if cross_configs:
+            generated_configs += [tmp_result + list(config)
+                                  for config in cross_configs]
+        else:
+            generated_configs.append(tmp_result)
     return generated_configs
```
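The merge performed by the new branch above can be demonstrated with a self-contained sketch: each shape row becomes a list of single-key dicts (plus a tags dict), and then gets one copy per cross-product combination. `config_list_sketch` is an illustrative stand-in, not the library function:

```python
# Standalone sketch of the merge logic: each input-shape row is duplicated
# once per cross-product combination and the attribute dicts are appended.
import itertools

def config_list_sketch(attr_names, attrs, cross_product_configs, tags):
    # Expand the cross-product attributes into lists of single-key dicts,
    # e.g. {'device': ['cpu', 'cuda']} -> [[{'device': 'cpu'}], [{'device': 'cuda'}]]
    cross = [
        [{k: v} for k, v in zip(cross_product_configs, combo)]
        for combo in itertools.product(*cross_product_configs.values())
    ]
    generated = []
    for inputs in attrs:
        row = [{name: value} for name, value in zip(attr_names, inputs)]
        row.append({'tags': '_'.join(tags)})
        generated += [row + c for c in cross]
    return generated

configs = config_list_sketch(
    attr_names=['M', 'N'],
    attrs=[[1, 2], [4, 5]],
    cross_product_configs={'device': ['cpu', 'cuda']},
    tags=['short'],
)
print(len(configs))  # 4: two shape rows x two devices
print(configs[0])    # [{'M': 1}, {'N': 2}, {'tags': 'short'}, {'device': 'cpu'}]
```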


@@ -0,0 +1,40 @@
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import operator_benchmark as op_bench
import torch

"""Microbenchmarks for element-wise Add operator. Supports both Caffe2/PyTorch."""

add_short_configs = op_bench.config_list(
    attr_names=['M', 'N', 'K'],
    attrs=[
        [8, 16, 32],
        [16, 16, 64],
        [64, 64, 128],
    ],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dtype': [torch.float, torch.float64],
    },
    tags=['short'],
)


class AddBenchmark(op_bench.TorchBenchmarkBase):
    def init(self, M, N, K, device, dtype):
        self.input_one = torch.rand(M, N, K, device=device, dtype=dtype, requires_grad=True)
        self.input_two = torch.rand(M, N, K, device=device, dtype=dtype)
        self.set_module_name('add')

    def forward(self):
        return torch.add(self.input_one, self.input_two)


op_bench.generate_pt_test(add_short_configs, AddBenchmark)


if __name__ == "__main__":
    op_bench.benchmark_runner.main()
```