This diff adds APFloat support for a semantics that matches the TF32 data type
used by some accelerators (most notably GPUs from both NVIDIA and AMD).
For more information on the TF32 data type, see https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/.
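TF32 keeps float's 8-bit exponent range but narrows the significand to 11 bits of precision. As with half, that precision count includes the implicit integer bit, so only 10 significand bits are actually stored. Since the new semantics is just another fltSemantics entry, it can be exercised through the usual APFloat conversion APIs. Below is a minimal sketch, assuming the accessor added here is named APFloat::FloatTF32(), mirroring the naming of the existing formats:

```cpp
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  // Assumed accessor name for the new semantics: 8 exponent bits
  // (float's range) and 11 bits of precision (half's precision).
  const fltSemantics &TF32 = APFloat::FloatTF32();

  // Converting a float to TF32 rounds away the low 13 significand bits.
  bool LosesInfo = false;
  APFloat Val(0.1f);
  Val.convert(TF32, APFloat::rmNearestTiesToEven, &LosesInfo);

  SmallString<16> Str;
  Val.toString(Str);
  outs() << "0.1f as TF32 = " << Str
         << " (lost precision: " << (LosesInfo ? "yes" : "no") << ")\n";
  return 0;
}
```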
Some intrinsics that support the TF32 data type were added in https://reviews.llvm.org/D122044.
For prior discussion of adding commonly used semantics to APFloat, see the
similar efforts for 8-bit formats at https://reviews.llvm.org/D146441, as well as
https://discourse.llvm.org/t/rfc-adding-the-amd-graphcore-maybe-others-float8-formats-to-apfloat/67969.
A subsequent diff will extend MLIR to use this data type. (Those changes are
not part of this diff to simplify the review process.)