pytorch/torch/backends
Xiao Wang e856a4d66b Add an env var to skip cudnn version compatibility check (#89184)
Skip the check by setting `PYTORCH_SKIP_CUDNN_COMPATIBILITY_CHECK=1`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89184
Approved by: https://github.com/ngimel
2022-11-17 20:10:52 +00:00
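The commit above documents opting out of the cuDNN version compatibility check via an environment variable. A minimal sketch of how that variable would typically be set before launching a PyTorch process (the variable name is from the commit; the surrounding commands are illustrative):

```shell
# Set the opt-out flag documented in PR #89184 so that a PyTorch build
# containing this change skips its cuDNN version compatibility check.
export PYTORCH_SKIP_CUDNN_COMPATIBILITY_CHECK=1

# Confirm the variable is exported and visible to child processes
# (e.g. a subsequent `python` invocation that imports torch).
env | grep PYTORCH_SKIP_CUDNN_COMPATIBILITY_CHECK
```

Note the flag only bypasses the version check; it does not change which cuDNN library PyTorch loads.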
_coreml CoreML .mlmodel export support (#84784) 2022-09-17 02:06:43 +00:00
_nnapi
cuda Create native function for determining which implementation of SDP to call (#89029) 2022-11-16 03:07:54 +00:00
cudnn Add an env var to skip cudnn version compatibility check (#89184) 2022-11-17 20:10:52 +00:00
mkl [RFC] enable oneMKL & oneDNN on-demand verbose functionality (#63212) 2022-07-27 23:29:35 +00:00
mkldnn [RFC] enable oneMKL & oneDNN on-demand verbose functionality (#63212) 2022-07-27 23:29:35 +00:00
mps
openmp
opt_einsum [einsum] Fix opt_einsum defaults to be more reasonable (#86985) 2022-10-15 06:23:50 +00:00
quantized [Quant] Add unified x86 quant backend (#84329) 2022-09-29 00:44:40 +00:00
xeon Fix typos in messages under torch (#89049) 2022-11-17 04:18:14 +00:00
xnnpack
__init__.py