Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 00:21:07 +01:00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56455

CPU convolution performance is pretty important for inference, so tracking performance for CNNs often boils down to finding shapes that have either regressed or need optimization. This diff adds a benchmark harness that makes it easy to add new sets of convolution parameters to benchmark. I've started with an exhaustive list of layers from MobileNetV3, ResNet-18, and ResNet-50, which are fairly popular torchvision models. More to come if these prove useful.

I've also added four backend configurations:
- native: uses at::conv2d, which applies its own backend selection heuristics
- mkldnn_none: uses mkldnn but applies no prepacking; uses the NCHW default
- mkldnn_weight: prepacks weights in an mkldnn-friendly format
- mkldnn_input: also prepacks the inputs in NCHW16c

ghstack-source-id: 127027784

Test Plan: Ran this on my Skylake Xeon

Reviewed By: ngimel

Differential Revision: D27876139

fbshipit-source-id: 950e1dfa09a33cc3acc7efd579f56df8453af1f2
3 lines · 123 B · CMake
add_executable(convolution_bench convolution.cpp)
target_link_libraries(convolution_bench PRIVATE torch_library benchmark)
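The `torch_library` and `benchmark` targets above are defined by the surrounding PyTorch build tree. If someone wanted to build a similar benchmark standalone, a minimal sketch (assuming a libtorch distribution and a Google Benchmark install are discoverable via `CMAKE_PREFIX_PATH`) might look like:

```cmake
cmake_minimum_required(VERSION 3.18)
project(convolution_bench CXX)

# Assumption: CMAKE_PREFIX_PATH points at libtorch and a Google Benchmark
# install; both packages provide standard CMake config files.
find_package(Torch REQUIRED)
find_package(benchmark REQUIRED)

add_executable(convolution_bench convolution.cpp)
target_link_libraries(convolution_bench PRIVATE "${TORCH_LIBRARIES}" benchmark::benchmark)
```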