mirror of
https://github.com/zebrajr/pytorch.git
synced 2025-12-07 00:21:07 +01:00
Proposal of two float8 variants - e5m2 and e4m3 - based on https://arxiv.org/pdf/2209.05433.pdf

Hide all Float8 operator implementations behind the `#if !defined(C10_MOBILE)` guard to keep the Android build size almost unchanged.

TODO:
- Refactor duplicated code
- Clean up unbalanced pragma pop in dtype utils
- Add native implementation on the CUDA side

Co-authored-by: Nikita Shulga <nshulga@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104242
Approved by: https://github.com/albanD
15 lines
317 B
C++
#include <c10/util/Float8_e5m2.h>

#include <iostream>
#include <type_traits>

namespace c10 {

static_assert(
    std::is_standard_layout<Float8_e5m2>::value,
    "c10::Float8_e5m2 must be standard layout.");

std::ostream& operator<<(std::ostream& out, const Float8_e5m2& value) {
  out << (float)value;
  return out;
}

} // namespace c10