# PyTorch Benchmarks

*NOTE: This folder is currently a work in progress.*
This folder contains scripts that produce reproducible timings of various PyTorch features.
It also provides mechanisms to compare PyTorch with other frameworks.
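
To illustrate the kind of measurement these scripts produce, here is a minimal sketch of a reproducible timing loop. This is not one of the scripts in this folder; the `bench_matmul` name and the matmul workload are illustrative assumptions, and each suite here has its own harness.

```python
# Minimal sketch of a reproducible timing measurement (illustrative only).
# Assumes PyTorch is installed; uses CUDA when available.
import timeit
import torch

def bench_matmul(n=1024, iters=100):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)

    def step():
        torch.matmul(a, b)
        if device == "cuda":
            # CUDA kernels launch asynchronously; synchronize so the
            # wall-clock time reflects actual execution time.
            torch.cuda.synchronize()

    step()  # warm-up iteration to exclude one-time allocation costs
    elapsed = timeit.timeit(step, number=iters)
    print(f"{device} matmul {n}x{n}: {elapsed / iters * 1e3:.3f} ms/iter")

bench_matmul()
```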

## Setup environment
Make sure you're on a machine with CUDA, torchvision, and PyTorch installed. Install them in the following order:
```bash
# Install torchvision. It comes with the pytorch stable release binary.
conda install pytorch torchvision -c pytorch

# Install the latest pytorch master from source.
# It should supersede the installation from the release binary.
cd $PYTORCH_HOME
python setup.py build develop

# Check the pytorch installation version.
python -c "import torch; print(torch.__version__)"
```

## Benchmark List
Please refer to each subfolder (`fastrnns`, `framework_overhead_benchmark`, `operator_benchmark`) for the details of its benchmark suite.