# pytorch/docs/source/notes

Latest commit `f4d8bc46c7` by Yanbing Jiang: Enable TF32 as fp32 internal precision for matmul/linear/conv (#157520)
### Description

This PR enables TF32 as the fp32 internal precision for matmul/linear/conv in the `mkldnn` backend. Since the fp32 precision API was refined in https://github.com/pytorch/pytorch/pull/125888, it can easily be extended to support TF32 for the `mkldnn` backend.

```python
torch.backends.mkldnn.matmul.fp32_precision = "tf32"
torch.backends.mkldnn.conv.fp32_precision = "tf32"
```
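For illustration, a minimal sketch of how these knobs might be used on CPU. Only the `matmul` and `conv` settings shown above are used; the `"ieee"` reset value is an assumption based on the refined fp32 precision API (#125888), not something spelled out in this PR:

```python
import torch

# Sketch: save current settings, opt the mkldnn (oneDNN) CPU backend into TF32
# for fp32 matmul/conv internals, run a computation, then restore the settings.
prev_matmul = torch.backends.mkldnn.matmul.fp32_precision
prev_conv = torch.backends.mkldnn.conv.fp32_precision

torch.backends.mkldnn.matmul.fp32_precision = "tf32"
torch.backends.mkldnn.conv.fp32_precision = "tf32"

x = torch.randn(256, 256)  # fp32 CPU tensors
w = torch.randn(256, 256)
y = x @ w                  # may use TF32 internally on supporting hardware

# Restore the previous settings.
torch.backends.mkldnn.matmul.fp32_precision = prev_matmul
torch.backends.mkldnn.conv.fp32_precision = prev_conv
```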

The related kernel and unit-test updates are included. The test wrapper `bf32_on_and_off` is renamed to `reduced_f32_on_and_off`; it runs each test three times: once with reduced_f32 OFF and twice with reduced_f32 ON (`bf32 ON` and `tf32 ON`). A rough sketch of the idea follows.
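This is not the actual test-suite code, only an illustration of the pattern such a wrapper could follow; the helper name `set_mkldnn_fp32_precision` and the exact mode strings are assumptions:

```python
import functools
import torch

def set_mkldnn_fp32_precision(mode):
    # Hypothetical helper: apply one fp32 precision mode to the mkldnn knobs.
    torch.backends.mkldnn.matmul.fp32_precision = mode
    torch.backends.mkldnn.conv.fp32_precision = mode

def reduced_f32_on_and_off(test_fn):
    # Sketch: run the test once with reduced_f32 OFF ("ieee") and twice with
    # it ON ("bf16" and "tf32"), restoring the default after each run.
    @functools.wraps(test_fn)
    def wrapper(*args, **kwargs):
        for mode in ("ieee", "bf16", "tf32"):
            set_mkldnn_fp32_precision(mode)
            try:
                test_fn(*args, **kwargs)
            finally:
                set_mkldnn_fp32_precision("ieee")
    return wrapper
```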

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157520
Approved by: https://github.com/mingfeima, https://github.com/jansel
Committed: 2025-07-17 08:57:34 +00:00
### Files

| File | Last commit | Last updated |
|---|---|---|
| amp_examples.rst | | |
| autograd.rst | [doc] Add documentation for division by zero behavior in autograd (#155987) | 2025-06-16 19:02:12 +00:00 |
| broadcasting.rst | | |
| cpu_threading_torchscript_inference.rst | [3/n] Remove references to TorchScript in PyTorch docs (#158315) | 2025-07-15 21:14:18 +00:00 |
| cuda.rst | Update warning of TF32 (#158209) | 2025-07-16 01:28:50 +00:00 |
| custom_operators.rst | | |
| ddp.rst | | |
| extending.func.rst | | |
| extending.rst | [autograd][docs] Add more details on why save_for_backward is important in extending autograd note (#153005) | 2025-05-09 16:36:57 +00:00 |
| faq.rst | | |
| fsdp.rst | Fix some incorrect reST markups in the document (#154831) | 2025-06-07 19:09:46 +00:00 |
| get_start_xpu.rst | [BE] fix typos in docs/ (#156080) | 2025-06-21 02:47:32 +00:00 |
| gradcheck.rst | [BE] fix typos in docs/ (#156080) | 2025-06-21 02:47:32 +00:00 |
| hip.rst | Fix broken URLs (#152237) | 2025-04-27 09:56:42 +00:00 |
| large_scale_deployments.rst | [3/n] Remove references to TorchScript in PyTorch docs (#158315) | 2025-07-15 21:14:18 +00:00 |
| libtorch_stable_abi.md | Address richard's comments on libtorch_stable_abi note (#156324) | 2025-06-27 19:19:12 +00:00 |
| mkldnn.rst | Enable TF32 as fp32 internal precision for matmul/linear/conv (#157520) | 2025-07-17 08:57:34 +00:00 |
| modules.rst | Fix to modules.rst: indent line with activation functions (#139667) | 2024-11-08 01:12:52 +00:00 |
| mps.rst | | |
| multiprocessing.rst | [BE] fix typos in docs/ (#156080) | 2025-06-21 02:47:32 +00:00 |
| numerical_accuracy.rst | Update warning of TF32 (#158209) | 2025-07-16 01:28:50 +00:00 |
| out.rst | add Out Notes (#151306) | 2025-04-24 20:25:09 +00:00 |
| randomness.rst | Fix typo in Reproducibility docs (#141341) | 2024-11-26 16:53:26 +00:00 |
| serialization.rst | Delete sections referencing torchscript in serialization docs (#156648) | 2025-06-25 23:41:24 +00:00 |
| windows.rst | Removing conda references from PyTorch Docs (#152702) | 2025-05-20 20:33:28 +00:00 |