Differential D127010
[mlir][sparse] Add F16 and BF16. Closed, Public. Authored by bixia on Jun 3 2022, 3:38 PM.
Details
Summary: This is the first PR to add F16 and BF16 support to the sparse codegen. There are still problems in supporting these two data types; for example, BF16 is not fully working yet. Add test cases.
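As a rough sketch of what 16-bit float support in the runtime involves (the type and member names below are hypothetical and are not taken from the Float16bits.h in this diff): a BF16 value is simply the upper 16 bits of an IEEE-754 binary32, so it can be stored as a raw 16-bit pattern and widened to float for arithmetic.

    // Hypothetical sketch only; names do not come from Float16bits.h in this diff.
    #include <cstdint>
    #include <cstring>

    struct ToyBF16 {
      uint16_t bits; // raw bfloat16 bit pattern

      // Widen to float by placing the 16 stored bits in the upper half of a
      // 32-bit IEEE-754 pattern; the dropped mantissa bits become zero.
      float toFloat() const {
        uint32_t widened = static_cast<uint32_t>(bits) << 16;
        float result;
        std::memcpy(&result, &widened, sizeof(result)); // safe type punning
        return result;
      }
    };

F16 (IEEE half precision) uses a narrower exponent than float, so its conversion requires genuine exponent and mantissa rewriting rather than the simple shift shown here.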
Diff Detail
Event Timeline
bixia marked 8 inline comments as done.
Comment Actions: Address review comments.
Comment Actions: I think something went wrong. In one revision you had addressed comments, but in a later upload some of these were undone again?
Comment Actions: Restore changes that addressed the first round of comments.
Comment Actions: One last nit. Also, wait a bit before submitting to see if Wren has more feedback on how you addressed the comment she made earlier.
This revision is now accepted and ready to land. Jun 7 2022, 2:42 PM
Closed by commit rGea8ed5cbcfac: [mlir][sparse] Add F16 and BF16. (authored by bixia). Jun 8 2022, 9:51 AM
This revision was automatically updated to reflect the committed changes.
Comment Actions: Sorry I missed the comments; I was out yesterday for a doctor's appointment. I'll make a follow-up CL with the changes I had in mind. While working on that, I recalled that we might not be able to use some of my suggestions, since they are C++11 features whereas this file must be C++98 compliant. Nevertheless, once I finish the CL I'll post it, just to help clarify what I meant.
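To illustrate the C++98 constraint mentioned above (the review does not say which C++11 features were meant, so the snippet below is a hypothetical example rather than the actual follow-up CL): a C++11 static_assert, for instance, has to be expressed through a C++98-compatible trick such as a typedef whose array size becomes invalid when the condition fails.

    // Hypothetical illustration of working around a missing C++11 feature;
    // not taken from the actual follow-up change.
    #if __cplusplus >= 201103L
    // C++11 and later: concise compile-time check.
    static_assert(sizeof(unsigned short) == 2, "expected 16-bit storage");
    #else
    // C++98 fallback: the array size evaluates to -1 when the condition is
    // false, which makes the typedef ill-formed and fails the build.
    typedef char expected_16bit_storage[(sizeof(unsigned short) == 2) ? 1 : -1];
    #endif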
Revision Contents
Diff 435225
mlir/include/mlir/ExecutionEngine/Float16bits.h
mlir/include/mlir/ExecutionEngine/SparseTensorUtils.h
mlir/lib/Dialect/SparseTensor/Transforms/CodegenUtils.cpp
mlir/lib/ExecutionEngine/CMakeLists.txt
mlir/lib/ExecutionEngine/Float16bits.cpp
mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
mlir/test/Dialect/SparseTensor/conversion_sparse2dense.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum_f16.mlir
utils/bazel/llvm-project-overlay/mlir/BUILD.bazel
Inline comment (nit): typically, more of the empty white space is filled with the ------- line here.