
AUTOGRADPP

This is an experimental C++ frontend to PyTorch's C++ backend. Use at your own risk.

How to build:

git submodule update --init --recursive

cd pytorch
# On Linux:
python setup.py build
# On macOS (you may need to prefix with `MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++` when using Anaconda)
LDSHARED="cc -dynamiclib -undefined dynamic_lookup" python setup.py build

cd ..; mkdir -p build; cd build
cmake .. -DPYTHON_EXECUTABLE:FILEPATH=$(which python)  # helpful if you use Anaconda
make -j

Stuff

  • Check out the MNIST example, which tries to replicate PyTorch's MNIST model and training loop (a hedged training-loop sketch also follows the model example below)
  • The principled way to write a model is probably something like:
TORCH_AUTOGRAD_CONTAINER_CLASS(MyModel) {
  // A 2D convolution, followed by global average pooling, followed by a linear layer.
 public:
  void initialize_containers() override {
    myConv_ = add(Conv2d(1, 50, 3, 3).stride(2).make(), "conv");
    myLinear_ = add(Linear(50, 1).make(), "linear");
  }
  std::vector<Tensor> forward(std::vector<Tensor> x) override {
    // forward() takes and returns a vector of tensors; take the conv's single output.
    auto v = myConv_->forward(x)[0];
    v = v.mean(-1).mean(-1);  // pool over both spatial dimensions
    return myLinear_->forward({v});
  }
 private:
  Container myLinear_;
  Container myConv_;
};
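
For a sense of how such a model would be trained, here is a minimal, hypothetical training-loop sketch. The construction idiom (MyModel().make()), the SGD optimizer interface, and the free backward() function are assumptions inferred from the container pattern above, not verified API; check the MNIST example for the real thing.

// Hypothetical training-loop sketch. The names SGD, zero_grad, backward,
// and step are assumptions, not verified against the actual API.
auto model = MyModel().make();                       // assumed construction idiom
auto optimizer = SGD(model, /*lr=*/0.1).make();      // assumed optimizer builder

for (auto& batch : batches) {                        // `batches` is a stand-in for your data source
  optimizer->zero_grad();
  auto output = model->forward({batch.input})[0];
  auto loss = (output - batch.target).pow(2).mean(); // simple MSE-style loss
  backward(loss);                                    // assumed free function that runs autograd
  optimizer->step();
}

The same pattern should apply to the other implemented optimizers (Adagrad, RMSprop, and Adam).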

Some things are not implemented:

  • Optimizers other than SGD, Adagrad, RMSprop, and Adam
  • Bidirectional, batch-first, and PackedSequence support for LSTMs
  • Sparse tensors (they might work, but are largely untested)

Otherwise, most other things work, but expect breaking API changes.