Summary:
Reason for this change:
(1) Setting/getting the default GPU id doesn't seem to be used at all.
(2) It is confusing compared to options such as CUDA_VISIBLE_DEVICES.
(3) When cuda_gpu_id=-1 is set in the CUDAContext argument, it used to fall
back to the default GPU id; instead we should use the current GPU, so that the
caller can control device placement (see the sketch below).
One use case is TensorRT: if we have a custom callback layer, it becomes easier
for TRT, or whatever the caller is, to set the device the op runs on.
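The intended behavior, roughly (a minimal sketch, not the actual CUDAContext
code; ResolveGpuId is a hypothetical helper):

    #include <cuda_runtime.h>

    // If an explicit gpu id is given, honor it; if cuda_gpu_id == -1, use
    // whatever device the caller has made current via cudaSetDevice().
    inline int ResolveGpuId(int cuda_gpu_id) {
      if (cuda_gpu_id >= 0) {
        return cuda_gpu_id;
      }
      int current = 0;
      cudaGetDevice(&current);
      return current;
    }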
Reviewed By: dzhulgakov
Differential Revision: D6740357
fbshipit-source-id: 2ea710e434b10220d5a198e31c93847304636863
Summary: Add support for DLPack tensors to the Python op (see the sketch below).
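For context, DLPack is a small C struct-based exchange format that lets tensors
be passed across frameworks without copying. A purely illustrative sketch of
wrapping an existing GPU buffer: the struct definitions are inlined only to keep
the sketch self-contained and mirror the dlpack.h of that era (real code should
include dlpack.h directly), and WrapFloatBuffer is a hypothetical helper.

    #include <cstdint>

    typedef enum { kDLCPU = 1, kDLGPU = 2 } DLDeviceType;
    typedef struct { DLDeviceType device_type; int device_id; } DLContext;
    typedef struct { uint8_t code; uint8_t bits; uint16_t lanes; } DLDataType;
    typedef struct {
      void* data;
      DLContext ctx;
      int ndim;
      DLDataType dtype;
      int64_t* shape;
      int64_t* strides;    // nullptr means compact row-major
      uint64_t byte_offset;
    } DLTensor;

    // Describe an existing float32 GPU buffer so another framework can
    // consume it zero-copy; DLManagedTensor/deleter handling is omitted.
    DLTensor WrapFloatBuffer(float* data, int64_t* shape, int ndim, int gpu_id) {
      DLTensor t{};
      t.data = data;
      t.ctx = {kDLGPU, gpu_id};
      t.ndim = ndim;
      t.dtype = {2 /* kDLFloat */, 32, 1};
      t.shape = shape;
      t.strides = nullptr;
      t.byte_offset = 0;
      return t;
    }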
Reviewed By: Yangqing
Differential Revision: D6577702
fbshipit-source-id: e14ef213fcdb2930ffe164667971a92aa8db503c
Summary:
Add cuDNN v6 support, including tests for dilated convolution.
Add a check to ensure that the cuDNN version Caffe2 was compiled against is
compatible with the version loaded at runtime (see the sketch below).
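The check is along these lines: compare CUDNN_VERSION, the compile-time macro
from cudnn.h, against cudnnGetVersion(), the version of the library actually
loaded at runtime. This is a rough sketch, not the code that landed; the helper
name, error message, and abort policy are illustrative.

    #include <cudnn.h>
    #include <cstdio>
    #include <cstdlib>

    void CheckCudnnVersions() {
      // Version cuDNN reports at runtime vs. the headers we compiled against.
      size_t runtime_version = cudnnGetVersion();
      if (runtime_version < CUDNN_VERSION) {
        std::fprintf(stderr,
                     "Caffe2 was compiled against cuDNN %d but is running with "
                     "cuDNN %zu; please update the cuDNN library.\n",
                     CUDNN_VERSION, runtime_version);
        std::abort();
      }
    }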
Closes https://github.com/caffe2/caffe2/pull/85
Reviewed By: bwasti
Differential Revision: D4387690
Pulled By: Yangqing
fbshipit-source-id: 312960134398dd4afe6ee0c01cdc160046c904e8