pytorch/benchmarks/overrides_benchmark
Hameer Abbasi 3d46e02ea1 Add __torch_function__ for methods (#37091)
Summary:
According to pytorch/rfcs#3

From the goals in the RFC:

1. Support subclassing `torch.Tensor` in Python (done here)
2. Preserve `torch.Tensor` subclasses when calling `torch` functions on them (done here)
3. Use the PyTorch API with `torch.Tensor`-like objects that are _not_ `torch.Tensor`
   subclasses (done in https://github.com/pytorch/pytorch/issues/30730)
4. Preserve `torch.Tensor` subclasses when calling `torch.Tensor` methods. (done here)
5. Propagate subclass instances correctly, including through operators and
   views/slices/indexing. (done here)
6. Preserve subclass attributes when using methods or views/slices/indexing. (done here)
7. A way to insert code that operates on both functions and methods uniformly
   (so we can write a single function that overrides all operators). (done here)
8. The ability to give external libraries a way to also define
   functions/methods that follow the `__torch_function__` protocol. (will be addressed in a separate PR)

This PR makes the following changes:

1. Adds the `self` argument to the arg parser.
2. Dispatches on `self` as well if `self` is not `nullptr`.
3. Adds a `torch._C.DisableTorchFunction` context manager to disable `__torch_function__`.
4. Adds `torch::torch_function_enabled()` (C++) and `torch._C._torch_function_enabled()` (Python) to check the state of `__torch_function__`.
5. Dispatches all `torch._C.TensorBase` and `torch.Tensor` methods via `__torch_function__`.

TODO:

- [x] Sequence Methods
- [x] Docs
- [x] Tests

Closes https://github.com/pytorch/pytorch/issues/28361

Benchmarks in https://github.com/pytorch/pytorch/pull/37091#issuecomment-633657778

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37091

Reviewed By: ngimel

Differential Revision: D22765678

Pulled By: ezyang

fbshipit-source-id: 53f8aa17ddb8b1108c0997f6a7aa13cb5be73de0
2020-08-05 20:44:13 -07:00
Files in this directory:

- `bench.py`
- `common.py`
- `pyspybench.py`
- `README.md`

# `__torch_function__` micro-benchmarks

This benchmark suite provides a systematic way to measure the performance overhead of `__torch_function__` dispatch.

## Getting started

### Initial Setup

Install `py-spy` by doing:

```bash
pip install py-spy
```

Note that more extensive documentation on using `py-spy` is available in CONTRIBUTING.md.

### Running the benchmark

Run one of the following commands in the terminal, with the working directory being `${PYTORCH_CLONE_DIR}/benchmarks/overrides_benchmark`:

```bash
# Benchmark all the cases
python bench.py

# Flame graph pertaining to each case
py-spy record -o tensor.svg --native -- python pyspybench.py Tensor
py-spy record -o subtensor.svg --native -- python pyspybench.py SubTensor
py-spy record -o overridden.svg --native -- python pyspybench.py WithTorchFunction
py-spy record -o suboverridden.svg --native -- python pyspybench.py SubWithTorchFunction
```

Here is a brief overview of what the results should look like, if run correctly:

- Overhead for `torch` functions when run on `torch.Tensor` objects is on the order of 2 μs.
- `__torch_function__` should add zero overhead for `torch.Tensor` inputs, a small overhead for subclasses of `torch.Tensor`, and a couple of microseconds for Tensor-likes with `__torch_function__`.
- Changing the dispatching mechanism may result in changes that are on the order of 100 ns, which are hard to detect due to noise, but important.
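To get a rough feel for per-call differences of this magnitude, a plain `timeit` comparison works; the snippet below is a pure-Python illustration of the measurement idea, not a replacement for `bench.py`:

```python
import timeit

# Illustrative only: compare a bare call with one that pays a per-call
# protocol check, the kind of ~100 ns-scale difference described above.

def plain(a, b):
    return a + b

def with_check(a, b):
    # Simulate the extra attribute lookup a dispatch check performs.
    handler = getattr(type(a), "__torch_function__", None)
    if handler is not None:
        return handler(plain, (type(a),), (a, b), {})
    return a + b

n = 200_000
t_plain = timeit.timeit(lambda: plain(1, 2), number=n) / n
t_check = timeit.timeit(lambda: with_check(1, 2), number=n) / n
print(f"plain: {t_plain * 1e9:.0f} ns/call, with check: {t_check * 1e9:.0f} ns/call")
```

Differences at this scale are noisy, which is why the suite averages over many iterations and why flame graphs from `py-spy` are used to localize regressions.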

## Reporting benchmark results

When modifying any of the machinery around `__torch_function__`, run the benchmark for both the feature branch and the point where it diverges from master. For each of these commits:

- Run `bench.py` and include its output in your report.
- For each case where `bench.py` shows a regression, run the `py-spy` commands described above, prefixing the output SVG filename (the argument to the `-o` switch) with `base-` or `branch-` depending on the commit you are benchmarking.
- Open each SVG in a browser, take a screenshot, and include it in your report. Also attach a ZIP file containing all of the SVGs produced.