Caffe2 & TensorRT integration
This directory contains the code implementing the TensorRTOp Caffe2 operator, as well as a Caffe2 model converter that uses ONNX as an intermediate format.
To enable this functionality in your PyTorch build, set
USE_TENSORRT=1 ... python setup.py ...
or, if you invoke CMake directly, pass
-DUSE_TENSORRT=ON
For further information, see the caffe2/python/trt/test_trt.py test, which demonstrates all supported use cases.
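For orientation, the sketch below shows how such a conversion is typically driven from Python. The helper transform_caffe2_net, its keyword arguments, the input blob name, and the model path are assumptions based on the caffe2.python.trt.transform module exercised by that test; treat test_trt.py as the authoritative reference.

```python
# Minimal sketch (assumed helper and arguments; blob name, shape, and path
# are placeholders): lower the TensorRT-compatible portion of a Caffe2
# predict net into a TensorRTOp.
import numpy as np

from caffe2.proto import caffe2_pb2
from caffe2.python import workspace
from caffe2.python.trt.transform import transform_caffe2_net

# Load the predict net of an already-initialized model; the weights are
# expected to be present in the current workspace.
pred_net = caffe2_pb2.NetDef()
with open("predict_net.pb", "rb") as f:
    pred_net.ParseFromString(f.read())

# The shape map lets the transformer run shape inference before it builds
# the TensorRT engine; "data" and its shape are hypothetical.
trt_net = transform_caffe2_net(
    pred_net,
    {"data": (1, 3, 224, 224)},
    max_batch_size=1,
)

# The transformed NetDef runs like any other Caffe2 net; the supported
# subgraph has been replaced by a TensorRTOp that owns the built engine.
workspace.FeedBlob("data", np.random.randn(1, 3, 224, 224).astype(np.float32))
workspace.CreateNet(trt_net)
workspace.RunNet(trt_net.name)
```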
Questions and Feedback
Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.