pytorch/c10/util/Float8_e5m2.cpp
Amadeusz Skrzypczak b64bd4a5dd Add torch.float8_e5m2 and torch.float8_e4m3 data types (#104242)
Proposes two float8 variants, e5m2 and e4m3, based on "FP8 Formats for Deep Learning" (https://arxiv.org/pdf/2209.05433.pdf)
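
As a quick illustration (a hedged sketch, not part of this commit), the following assumes c10::Float8_e5m2 exposes a constructor from float and a conversion back to float, which the stream operator in this file relies on:

#include <c10/util/Float8_e5m2.h>
#include <iostream>

int main() {
  // e5m2: 1 sign bit, 5 exponent bits, 2 mantissa bits. It keeps the
  // exponent range of IEEE half precision but with far less precision.
  c10::Float8_e5m2 exact(1.25f);  // 1.25 = 1.01b * 2^0, exactly representable
  c10::Float8_e5m2 rounded(1.2f); // with 2 mantissa bits, 1.2 must round
  std::cout << static_cast<float>(exact) << '\n';   // prints 1.25
  std::cout << static_cast<float>(rounded) << '\n'; // expected to print 1.25
                                                    // under round-to-nearest
  return 0;
}

e4m3 makes the opposite trade-off (4 exponent bits, 3 mantissa bits): less dynamic range in exchange for one extra bit of precision.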

Hide all Float8 operator implementations behind an `#if !defined(C10_MOBILE)` guard to keep the Android build size almost unchanged.
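
The guard follows the usual conditional-compilation pattern; as a schematic sketch (the real operator set lives in the Float8 headers, so the name and signature here are illustrative), arithmetic widens to float and narrows back:

#if !defined(C10_MOBILE)
// Illustrative only: widen both operands to float, add, narrow the result.
inline Float8_e5m2 operator+(const Float8_e5m2& a, const Float8_e5m2& b) {
  return Float8_e5m2(static_cast<float>(a) + static_cast<float>(b));
}
#endif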

TODO:
 - Refactor duplicated code
 - Clean up the unbalanced pragma pop in the dtype utils
 - Add a native implementation on the CUDA side

Co-authored-by: Nikita Shulga <nshulga@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104242
Approved by: https://github.com/albanD
2023-07-20 16:09:11 +00:00

#include <c10/util/Float8_e5m2.h>

#include <iostream>
#include <type_traits>

namespace c10 {

// Float8_e5m2 must be standard layout so it can be safely bit-copied and
// stored in raw tensor buffers.
static_assert(
    std::is_standard_layout<Float8_e5m2>::value,
    "c10::Float8_e5m2 must be standard layout.");

// Print by converting to float; widening from e5m2 to float is exact.
std::ostream& operator<<(std::ostream& out, const Float8_e5m2& value) {
  out << static_cast<float>(value);
  return out;
}

} // namespace c10
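
With the operator<< above in place, callers can stream values directly rather than casting first; a minimal usage sketch (hypothetical driver, not part of the file):

#include <c10/util/Float8_e5m2.h>
#include <iostream>

int main() {
  c10::Float8_e5m2 v(3.0f); // 3.0 = 1.1b * 2^1, exactly representable in e5m2
  std::cout << v << '\n';   // routes through c10::operator<<, prints 3
  return 0;
}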