pytorch/c10/cuda
Nikita Shulga b5702e2350 Fix out-of-bounds access for caching allocator calls (#46439)
Summary:
In `assertValidDevice()`, compare the device index against the size of `caching_allocator.device_allocator` rather than against `device_no`.

Fixes a potential crash when the caching allocator is accessed before it has been initialized, for example via:
`python -c "import torch;print(torch.cuda.memory_stats(0))"`

Fixes https://github.com/pytorch/pytorch/issues/46437

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46439

Reviewed By: ngimel

Differential Revision: D24350717

Pulled By: malfet

fbshipit-source-id: 714e6e74f7c2367a9830b0292478270192f07a7f
2020-10-16 08:24:46 -07:00

c10/cuda is a core library with CUDA functionality. It is distinguished from c10 in that it links against the CUDA library, but like c10 it doesn't contain any kernels, and consists solely of core functionality that is generally useful when writing CUDA code; for example, C++ wrappers for the CUDA C API.

Important notes for developers. If you want to add files or functionality to this folder, TAKE NOTE. The code in this folder is very special: on our AMD GPU build, we transpile it into c10/hip to provide a ROCm environment. Thus, if you write:

```cpp
// c10/cuda/CUDAFoo.h
namespace c10 { namespace cuda {

void my_func();

}}
```

this will get transpiled into:

```cpp
// c10/hip/HIPFoo.h
namespace c10 { namespace hip {

void my_func();

}}
```

Thus, if you add new functionality to c10, you must also update `C10_MAPPINGS` in torch/utils/hipify/cuda_to_hip_mappings.py so that occurrences of `cuda::my_func` are transpiled to `hip::my_func`. (At the moment, we do NOT have a catch-all `cuda::` to `hip::` namespace conversion, as not all `cuda` namespaces are converted to `hip::`, even though c10's are.)

Transpilation inside this folder is controlled by CAFFE2_SPECIFIC_MAPPINGS (oddly enough). C10_MAPPINGS apply to ALL source files.

If you add a new directory to this folder, you MUST update both c10/cuda/CMakeLists.txt and c10/hip/CMakeLists.txt.