Commit Graph

13 Commits

Author SHA1 Message Date
Mingzhe Li
a750a1a2b4 modify config_list to support cross product of attributes (#23399)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23399

This diff enables the config_list function to support a cross product of input attributes in addition to the shapes.

The following example uses the updated interface: the same input shapes can run on different devices and dtypes.
```
add_short_configs = op_bench.config_list(
    attr_names=['M', 'N', 'K'],
    attrs=[
        [8, 16, 32],
        [16, 16, 64],
        [64, 64, 128],
    ],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dtype': [torch.float, torch.float64],
    },
    tags=['short'],
)
```
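The cross-product expansion above can be sketched in plain Python (a hypothetical standalone illustration, not the actual op_bench internals; dtypes are represented as strings here to keep the sketch torch-free):

```python
import itertools

def expand_configs(attr_names, attrs, cross_product_configs):
    """Expand each shape row by the cross product of the extra attributes."""
    cross_keys = list(cross_product_configs)
    configs = []
    for row in attrs:
        base = dict(zip(attr_names, row))
        for combo in itertools.product(*(cross_product_configs[k] for k in cross_keys)):
            config = dict(base)
            config.update(zip(cross_keys, combo))
            configs.append(config)
    return configs

configs = expand_configs(
    attr_names=['M', 'N', 'K'],
    attrs=[[8, 16, 32], [16, 16, 64], [64, 64, 128]],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dtype': ['torch.float32', 'torch.float64'],
    },
)
# 3 shape rows x 2 devices x 2 dtypes = 12 configs
```

This matches the benchmark output in the test plan below, where each (M, N, K) row appears once per device/dtype combination.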

Test Plan:
buck run mode/dev-nosan caffe2/benchmarks/operator_benchmark/common/tests:pt_configs_list_test -- --iterations 3

```
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecpu_dtypetorch.float32
# Input: M: 8, N: 16, K: 32, device: cpu, dtype: torch.float32
Forward Execution Time (us) : 164.489

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecpu_dtypetorch.float64
# Input: M: 8, N: 16, K: 32, device: cpu, dtype: torch.float64
Forward Execution Time (us) : 158.677

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecuda_dtypetorch.float32
# Input: M: 8, N: 16, K: 32, device: cuda, dtype: torch.float32
Forward Execution Time (us) : 103.866

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M8_N16_K32_devicecuda_dtypetorch.float64
# Input: M: 8, N: 16, K: 32, device: cuda, dtype: torch.float64
Forward Execution Time (us) : 106.027

# Benchmarking PyTorch: add
# Mode: Eager
# Name: add_M16_N16_K64_devicecpu_dtypetorch.float32
# Input: M: 16, N: 16, K: 64, device: cpu, dtype: torch.float32
Forward Execution Time (us) : 451.016
...
```

buck test caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test

```
Building: finished in 2.4 sec (100%) 6882/6882 jobs, 2 updated
  Total time: 2.8 sec
Trace available for this run at /tmp/testpilot.20190730-160519.3952794.log
TestPilot test runner for Facebook. See https://fburl.com/testpilot for details.
Testpilot build revision 203f0104fbfcec4128be2c482c64736309ae39c9 fbpkg a4b2a9897a0c45069bd07d83e5981052 at Sun Jul 28 01:22:13 2019 by twsvcscm from /data/fbprojects/packages/testinfra.testpilot/667/t.par
Discovering tests
Running 3 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/5910974514382830
      ✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_config_list_impl (operator_benchmark_test.TestConsumeOp) 0.011 1/3 (passed)
      ✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_list_of_ops (operator_benchmark_test.TestConsumeOp) 19.920 2/3 (passed)
      ✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - test_single_op (operator_benchmark_test.TestConsumeOp) 23.418 3/3 (passed)
      ✓ caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test - main 0.000 (passed)
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/5910974514382830
Summary (total time 29.90s):
  PASS: 4
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

Reviewed By: zheng-xq

Differential Revision: D16501272

fbshipit-source-id: d92b5cf50b0f37d5b3a79d423acb521366b4e8db
2019-10-09 11:24:06 -07:00
Mingzhe Li
f0ebf769de allow accepting empty input to the benchmark (#23462)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23462

as title

Reviewed By: hl475

Differential Revision: D16527176

fbshipit-source-id: 7a8ff4f3c6122ae7b3205e0b446fec06fd95eedc
2019-07-26 17:30:42 -07:00
Mingzhe Li
828c08b4c7 allow passing a list of operators to benchmark (#23442)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23442

Rename the argument `operator` to `operators`, which can take a list of operators to test.
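A minimal sketch of how such an `--operators` flag might accept a comma-separated list (hypothetical argparse wiring, not the benchmark's actual parser):

```python
import argparse

def parse_operators(argv):
    # Split a comma-separated value into a list, e.g. "add,matmul" -> ["add", "matmul"]
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--operators',
        type=lambda s: s.split(','),
        default=None,
        help='Comma-separated list of operators to benchmark, e.g. add,matmul',
    )
    return parser.parse_args(argv).operators

ops = parse_operators(['--operators', 'add,matmul'])
# → ['add', 'matmul']
```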

Reviewed By: hl475

Differential Revision: D16520779

fbshipit-source-id: 94284a87c64471793e319f5bd3143f89b9a192bb
2019-07-26 12:20:36 -07:00
Mingzhe Li
3516f3c235 handle exit from init method (#21211)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21211

There are cases where the `init` method used to create inputs can exit with an error. When this happens, that specific input should be skipped.
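The skip-on-failure behavior can be sketched as follows (hypothetical names and control flow; the real benchmark's implementation differs):

```python
def generate_inputs(configs, init):
    """Build an input from each config, skipping configs whose init fails."""
    inputs, skipped = [], []
    for config in configs:
        try:
            inputs.append(init(config))
        except (Exception, SystemExit):
            # init exited with an error for this config; skip it
            skipped.append(config)
    return inputs, skipped

def init(config):
    # Toy init: a negative size is an invalid input
    if config['size'] < 0:
        raise ValueError('invalid size')
    return [0.0] * config['size']

inputs, skipped = generate_inputs([{'size': 4}, {'size': -1}], init)
# one valid input built, one config skipped
```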

Reviewed By: zheng-xq

Differential Revision: D15466410

fbshipit-source-id: 55e86764b2ec56f7730349ff1df6e50efc0239d7
2019-07-25 21:41:06 -07:00
Mingzhe Li
2b2fe525b9 introduce a new interface to add a list of operators (#21209)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21209

This diff introduces a new interface to add a list of operators. Here are the steps to add ops using this interface:

- create an op_list:
```
unary_ops_list = op_bench.op_list(
    attr_names=["op_name", "op_function"],
    attrs=[
        ["abs", torch.abs],
        ["abs_", torch.abs_],
    ],
)
```
- create a bench class:
```
class UnaryOpBenchmark(op_bench.TorchBenchmarkBase):
    def init(self, M, N, op_function):
        self.input_one = torch.rand(M, N)
        self.op_func = op_function

    def forward(self):
        return self.op_func(self.input_one)
```
- register those ops:
```
op_bench.generate_pt_tests_from_list(unary_ops_list, unary_ops_configs, UnaryOpBenchmark)
```
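The list-based registration above can be sketched in plain Python, using built-in callables in place of torch ops (hypothetical helper and class names; the real generate_pt_tests_from_list does more than this):

```python
def generate_tests_from_list(op_list, configs, benchmark_class):
    """Create one benchmark instance per (op, config) pair."""
    tests = []
    for op_name, op_function in op_list:
        for config in configs:
            bench = benchmark_class()
            bench.init(op_function=op_function, **config)
            tests.append((op_name, config, bench))
    return tests

class UnaryOpBenchmark:
    def init(self, n, op_function):
        # Toy stand-in for torch.rand(M, N): a list of negative numbers
        self.input_one = list(range(-n, 0))
        self.op_func = op_function

    def forward(self):
        return [self.op_func(x) for x in self.input_one]

tests = generate_tests_from_list(
    op_list=[("abs", abs), ("neg", lambda x: -x)],
    configs=[{"n": 3}],
    benchmark_class=UnaryOpBenchmark,
)
# 2 ops x 1 config = 2 tests
```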

Reviewed By: zheng-xq

Differential Revision: D15514188

fbshipit-source-id: f09b359cab8175eeb8d51b3ad7bbbcfbc9f6430f
2019-07-09 16:41:29 -07:00
Mingzhe Li
325ec2327f create tensor based on provided datatype (#22468)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22468

as title

Reviewed By: ajauhri

Differential Revision: D15744503

fbshipit-source-id: 050b32dd7f135512385fc04f098c376c664211a9
2019-07-03 17:08:23 -07:00
Mingzhe Li
a4f281446b introduce flags to set omp and mkl threads (#21472)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21472

as title
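Such flags can be sketched as argparse options that export the corresponding environment variables before the math libraries spin up their thread pools (hypothetical flag names and wiring; the benchmark's actual implementation may differ):

```python
import argparse
import os

def apply_thread_flags(argv):
    """Parse thread-count flags and export them as env vars.

    Must run before OpenMP/MKL initialize, or the settings are ignored.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument('--omp_num_threads', type=int, default=None)
    parser.add_argument('--mkl_num_threads', type=int, default=None)
    args = parser.parse_args(argv)
    if args.omp_num_threads is not None:
        os.environ['OMP_NUM_THREADS'] = str(args.omp_num_threads)
    if args.mkl_num_threads is not None:
        os.environ['MKL_NUM_THREADS'] = str(args.mkl_num_threads)
    return args

apply_thread_flags(['--omp_num_threads', '1', '--mkl_num_threads', '1'])
```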

Reviewed By: hl475

Differential Revision: D15695846

fbshipit-source-id: 44437f6b94a9c583275fcc711bb6ccf2b04f90fc
2019-06-26 09:33:05 -07:00
Mingzhe Li
31089b02ce introduce a new interface to add op [core changes] (#21147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21147

This diff introduces a new interface to add PT/C2 operators to the benchmark suite.

The following steps are needed to add a new operator:
1. Specify the input shapes and args for an operator in configs
2. Create a PT/C2 benchmark class that includes `init` (create tensors), `forward` (run the operator under test), and `backward` (compute the operator's gradient) methods
3. Call generate_pt_test/generate_c2_test to create test cases based on the configs

Reviewed By: zheng-xq

Differential Revision: D15250380

fbshipit-source-id: 1025a7cf60d2427baa0f3f716455946d3d3e6a27
2019-05-31 09:21:04 -07:00
Ilia Cherniavskii
19e6886576 Intra-op parallel microbenchmarks for PT (#19997)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19997
ghimport-source-id: 420d4a68a1ef879beee2734adba8abb575e0b0ab

Differential Revision: D15231375

Pulled By: ilia-cher

fbshipit-source-id: ce7248ea2ebb54d25c9d831c6e3f23f3534557dd
2019-05-06 20:21:45 -07:00
Ilia Cherniavskii
8c97f0b19e Initialize Caffe2 only when running Caffe2 benchmarks (#19980)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19980
ghimport-source-id: ca31ca25b88a1c6219e4a32483f70738a8fdbf88

Differential Revision: D15229797

Pulled By: ilia-cher

fbshipit-source-id: 0b23dbdba0c0f60932a75d8b1900c54285f5a8e4
2019-05-06 19:17:23 -07:00
Mingzhe Li
08f5c05d60 make separate operators as independent binaries (#19450)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19450

We want to make each operator benchmark a separate binary. The previous way to run the benchmark was to collect all operators into a single binary, which is unnecessary when we want to filter for a specific operator. This diff resolves that issue.

Reviewed By: ilia-cher

Differential Revision: D14808159

fbshipit-source-id: 43cd25b219c6e358d0cd2a61463b34596bf3bfac
2019-04-18 20:00:47 -07:00
Mingzhe Li
45d5b6be48 Enhance front-end to add op (#19433)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19433

For the operator benchmark project, we need to cover many operators, so the interface for adding operators needs to be clean and simple. This diff implements a new interface for adding an op.

Here is the logic to add new operator to the benchmark:
```
long_config = {}
short_config = {}

map_func

add_test(
  [long_config, short_config],
  map_func,
  [caffe2 op],
  [pt op],
)
```

Reviewed By: zheng-xq

Differential Revision: D14791191

fbshipit-source-id: ac6738507cf1b9d6013dc8e546a2022a9b177f05
2019-04-18 17:07:02 -07:00
Mingzhe Li
5f5a2aaab9 Operator-level performance microbenchmarks (#18740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18740

Test utilities for writing Caffe2/PyTorch performance microbenchmarks. Brief description of the file structure

* benchmark_core.py : core utilities for running microbenchmark tests
* benchmark_caffe2.py : Caffe2-specific benchmark utilities
* benchmark_pytorch.py : PyTorch-specific benchmark utilities
* benchmark_runner.py : Main function. Currently it can run the microbenchmark tests in a stand-alone mode. The next step is to have this integrate with AI-PEP.

The utilities are located at https://github.com/pytorch/pytorch/tree/master/test to have access to both the Caffe2 and PyTorch Python frontends.

Includes two operator microbenchmarks, supporting both Caffe2 and PyTorch:
* MatMul
* Add

Reference: PyTorch benchmarks: https://github.com/pytorch/benchmark/tree/master/timing/python. In this work, we start with two example binary operators, MatMul and Add, but eventually we should also cover unary operators as in the PyTorch benchmark repo.

Reviewed By: zheng-xq

Differential Revision: D13887111

fbshipit-source-id: b7a56b95448c9ec3e674b0de0ffb96af4439bfce
2019-04-02 17:06:19 -07:00