pytorch/torch/backends
Dmytro Dzhulgakov 764bf826e3 Remove fbgemm_is_cpu_supported in favor of torch.backends.quantized.supported_qengines (#26840)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26840

Cleans up the top-level namespace; also makes cosmetic changes to torch.backends.quantized.
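
A minimal sketch of the replacement API, assuming a post-#26840 build of PyTorch; the exact engine names ('fbgemm', 'qnnpack') depend on how the binary was built:

    import torch

    # Query the quantization engines this build supports, instead of the
    # removed top-level fbgemm_is_cpu_supported() helper.
    engines = torch.backends.quantized.supported_qengines
    print(engines)  # e.g. ['none', 'fbgemm'] on an x86 build with FBGEMM

    # Select an engine for quantized operators if it is available.
    if 'fbgemm' in engines:
        torch.backends.quantized.engine = 'fbgemm'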

Test Plan: Imported from OSS

Differential Revision: D17604403

Pulled By: dzhulgakov

fbshipit-source-id: c55af277ea7319d962a82a6120f65ccd47a60abc
2019-09-27 13:45:15 -07:00
cuda Add device-specific cuFFT plan caches (#19300) 2019-04-18 06:39:35 -07:00
cudnn Add torch.backends.mkldnn.enabled flag (#25459) 2019-09-11 12:09:40 -07:00
mkl [fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855) 2018-03-19 15:43:14 -04:00
mkldnn Add torch.backends.mkldnn.enabled flag (#25459) 2019-09-11 12:09:40 -07:00
openmp Add torch.backends.openmp.is_available(); fix some cmake messages (#16425) 2019-01-31 16:15:46 -08:00
quantized Remove fbgemm_is_cpu_supported in favor of torch.backends.quantized.supported_qengines (#26840) 2019-09-27 13:45:15 -07:00
__init__.py Add torch.backends.mkldnn.enabled flag (#25459) 2019-09-11 12:09:40 -07:00
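
For reference, a minimal sketch that exercises the flags and availability checks named in the listing above; attribute names follow the commit subjects, and availability and defaults vary by build:

    import torch

    # Availability checks exposed by the backend submodules.
    print("MKL:       ", torch.backends.mkl.is_available())
    print("MKL-DNN:   ", torch.backends.mkldnn.is_available())
    print("OpenMP:    ", torch.backends.openmp.is_available())
    print("cuDNN:     ", torch.backends.cudnn.is_available())
    print("Quantized: ", torch.backends.quantized.supported_qengines)

    # torch.backends.mkldnn.enabled (#25459) toggles the MKL-DNN backend globally.
    torch.backends.mkldnn.enabled = False
    torch.backends.mkldnn.enabled = True

    # Device-specific cuFFT plan caches (#19300): one cache per CUDA device,
    # indexable by device ordinal.
    if torch.cuda.is_available():
        torch.backends.cuda.cufft_plan_cache[0].max_size = 32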