
# Caffe2 & TensorRT integration

[Jenkins Build Status badge]

This directory contains the code implementing the `TensorRTOp` Caffe2 operator, as well as a Caffe2 model converter that uses ONNX as an intermediate format. To enable this functionality in your PyTorch build, set

`USE_TENSORRT=1 ... python setup.py ...`

or, if you use CMake directly, pass

`-DUSE_TENSORRT=ON`
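
After rebuilding, one quick sanity check is to look for the TensorRT operator in the Caffe2 operator registry from Python. The snippet below is a minimal sketch; it assumes the operator is registered under the type name `TensorRT` (see `tensorrt_op_trt.cc` in your checkout to confirm).

```python
# Minimal sanity-check sketch: the op type name "TensorRT" is an assumption;
# confirm it against tensorrt_op_trt.cc in your checkout.
from caffe2.python import core

if core.IsOperator("TensorRT"):
    print("TensorRTOp is available in this Caffe2 build.")
else:
    print("TensorRTOp not found; rebuild with USE_TENSORRT enabled.")
```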

For further information, please explore the `caffe2/python/trt/test_trt.py` test, which demonstrates all supported use cases.
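
For orientation, the sketch below mirrors the rough flow exercised in `test_trt.py`: build (or load) a Caffe2 predict net, provide static input shapes, and let the transformer replace supported subgraphs with a `TensorRTOp`. The `transform_caffe2_net` helper from `caffe2.python.trt.transform` and its keyword arguments are assumptions here; defer to `test_trt.py` for the authoritative API.

```python
# Hedged sketch of the Caffe2 -> TensorRT transform flow. The helper name,
# signature, and keyword arguments are assumptions; see test_trt.py.
import numpy as np
from caffe2.python import core, workspace
from caffe2.python.trt.transform import transform_caffe2_net  # assumed import

# Build a tiny self-contained predict net (in practice this is loaded from a
# serialized NetDef after running its init net).
pred_net = core.Net("toy_predict")
data, w, b = pred_net.AddExternalInput("data", "w", "b")
fc = pred_net.FC([data, w, b], "fc_out")
out = pred_net.Relu(fc, "out")
pred_net.AddExternalOutput(out)

# Weights and an example input must already live in the workspace.
workspace.FeedBlob("w", np.random.randn(8, 16).astype(np.float32))
workspace.FeedBlob("b", np.random.randn(8).astype(np.float32))
workspace.FeedBlob("data", np.random.randn(4, 16).astype(np.float32))

# TensorRT needs static input shapes up front. Supported subgraphs are
# replaced with a single TensorRTOp; unsupported ops stay as Caffe2 ops.
trt_net = transform_caffe2_net(
    pred_net.Proto(),
    {"data": (4, 16)},   # external input name -> shape
    max_batch_size=4,    # assumed keyword, as used in test_trt.py
)

# The transformed net runs like any other Caffe2 NetDef.
workspace.CreateNet(trt_net)
workspace.RunNet(trt_net.name)
```

In practice you would load a real predict net and run its init net instead of the toy graph above; the test file covers that end-to-end path.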

## Questions and Feedback

Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.