This PR provides an initial cutlass implementation of the grouped gemm API as described in this [document](https://docs.google.com/document/d/1985La6wUUVH1AGBkNhaGKUXzx-9ybtbUp567-vYVOM4/edit?tab=t.0#heading=h.g8lzbjnyzzx9). Any combination of 2d and 3d inputs is supported; a 2d input is jagged, with the offsets of the jagged dimension given by the device tensor `offs`. Only H100 is supported, and only fp8_e4m3 inputs with bf16 output and rowwise scaling. All dimensions of each individual gemm have to be a multiple of 16; that's a cutlass limitation. I'll need to add those checks, and for dynamic dimensions the checks will unfortunately have to be a device assert.

I had to copy-paste cutlass's `Sm90RowBroadcast` and `Sm90ColBroadcast` structs with minor changes to enable scales given as pointer arrays; ideally those should be part of cutlass itself. I copied the schedules from the similar grouped gemm in FBGEMM, but there's a lot of room to improve perf, especially for `fast_accum=False`. Next steps would be perf tuning and increasing coverage to B100; I don't know how the cutlass grouped gemm example handles blockwise scaling on B100.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148531
Approved by: https://github.com/drisspg
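
The description is terse about how the new API is driven from Python, so here is a minimal sketch of a call site. The operator name `torch._scaled_grouped_mm`, the argument names, and the expected memory layout of the 3d operand are assumptions made for illustration (the linked design doc and the PR itself are authoritative); only the constraints stated above are taken from the description: fp8_e4m3 inputs, bf16 output, rowwise scales, per-group dims that are multiples of 16, and a jagged 2d operand split by a device `offs` tensor.

```python
# Hedged sketch of a grouped fp8 GEMM call (2d jagged A x 3d B).
# The function name torch._scaled_grouped_mm and the exact signature /
# layout requirements are assumptions for illustration only.
import torch

device = "cuda"  # this path targets H100 only

num_groups = 3
# Cumulative row offsets of the jagged A operand: groups of 16, 32 and 48 rows,
# each a multiple of 16 as required by the cutlass kernel.
offs = torch.tensor([16, 48, 96], dtype=torch.int32, device=device)
total_m, k, n = 96, 64, 32

# A is 2d and jagged along M; B holds one matrix per group.
a = torch.randn(total_m, k, device=device).to(torch.float8_e4m3fn)
b = torch.randn(num_groups, n, k, device=device).to(torch.float8_e4m3fn)
b = b.transpose(-2, -1)  # (num_groups, k, n) view; the kernel may require B column-major

# Rowwise scales (placeholders here): one per row of A, one per output column per group.
scale_a = torch.ones(total_m, device=device, dtype=torch.float32)
scale_b = torch.ones(num_groups, n, device=device, dtype=torch.float32)

out = torch._scaled_grouped_mm(   # assumed operator name
    a, b,
    scale_a=scale_a, scale_b=scale_b,
    offs=offs,
    out_dtype=torch.bfloat16,     # only bf16 output is supported by this path
    use_fast_accum=True,          # fast_accum=False is noted as slower above
)
print(out.shape)  # expected (total_m, n), rows grouped according to offs
```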