pytorch/docs/source/notes
eqy 388b75edec [CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441)
A test for `cublasGemmEx` has been added; the best way to exercise the other APIs still needs to be figured out...

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144441
Approved by: https://github.com/Chillee
2025-01-11 15:30:38 +00:00
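
As context for what the fp16 accumulate option changes, below is a minimal sketch of how such a knob might be toggled from Python. It assumes the option is surfaced as `torch.backends.cuda.matmul.allow_fp16_accumulation`; that attribute name is an assumption inferred from the PR title, not something confirmed by this listing, so check cuda.rst for the documented flag.

```python
# Hedged sketch: assumes the cuBLAS/cuBLASLt fp16-accumulate option from
# #144441 is exposed as torch.backends.cuda.matmul.allow_fp16_accumulation
# (the exact attribute name is an assumption; see cuda.rst for the
# documented flag).
import torch

if torch.cuda.is_available():
    # Allow fp16 GEMMs to accumulate in fp16 instead of fp32:
    # faster on some GPUs, but with reduced numerical accuracy.
    torch.backends.cuda.matmul.allow_fp16_accumulation = True

    a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
    b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
    c = a @ b  # GEMM now permitted to accumulate in fp16
```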
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| amp_examples.rst | Update document for autocast on CPU (#135299) | 2024-09-13 09:11:47 +00:00 |
| autograd.rst | | |
| broadcasting.rst | | |
| cpu_threading_runtimes.svg | | |
| cpu_threading_torchscript_inference.rst | | |
| cpu_threading_torchscript_inference.svg | | |
| cuda.rst | [CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441) | 2025-01-11 15:30:38 +00:00 |
| custom_operators.rst | Redirect the custom ops landing page :D (#139634) | 2024-11-04 22:25:15 +00:00 |
| ddp.rst | | |
| extending.func.rst | | |
| extending.rst | [doc] fix grammar in "Extending Torch" (#140209) | 2024-11-13 05:34:43 +00:00 |
| faq.rst | | |
| fsdp.rst | | |
| get_start_xpu.rst | update get start xpu (#137479) | 2024-10-16 17:36:29 +00:00 |
| gradcheck.rst | | |
| hip.rst | [ROCm] set hipblas workspace (#138791) | 2024-10-29 01:37:55 +00:00 |
| large_scale_deployments.rst | | |
| modules.rst | Fix to modules.rst: indent line with activation functions (#139667) | 2024-11-08 01:12:52 +00:00 |
| mps.rst | | |
| multiprocessing.rst | | |
| numerical_accuracy.rst | Add option to configure reduced precision math backend for SDPA (#135964) | 2024-09-24 07:11:38 +00:00 |
| randomness.rst | Fix typo in Reproducibility docs (#141341) | 2024-11-26 16:53:26 +00:00 |
| serialization.rst | Add config.save.use_pinned_memory_for_d2h to serialization config (#143342) | 2024-12-20 21:01:18 +00:00 |
| windows.rst | | |