Only the column-major layout was supported so far. This patch adds row-major support as well.
Note that we probably also want very efficient SIMD implementations for the
various target platforms.
Event Timeline
Thank you very much for the patch! A few suggestions inline.
llvm/lib/Transforms/Scalar/LowerMatrixIntrinsics.cpp

Line 1300: I think it might be possible to share most of the transpose implementation between the row-major and column-major cases, since in both we iterate over the indices of the second dimension while extracting from the vectors of the first dimension. Instead of using ArgShape.NumRows, we could use the second dimension (InputMatrix.isColumnMajor() ? ArgShape.NumRows : ArgShape.NumColumns); instead of iterating over columns/rows, we could iterate over the vectors of the leading dimension (InputMatrix.vectors()); and instead of ArgShape.NumColumns/NumRows, we could use ArgShape.getNumVectors().

llvm/test/Transforms/LowerMatrixIntrinsics/pr46085-row.ll

Line 1 (On Diff #266691): For multiply, we have row-major and column-major test variants in multiply-i32-row-major.ll / multiply-i32.ll and so on. For consistency, it might be slightly preferable to extend the existing transpose-*.ll tests (if the cases in the new test are not already covered) and name the row-major equivalents transpose-*-row-major.ll.
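To illustrate the suggestion on line 1300, here is a rough C++ sketch of what a layout-agnostic transpose lowering could look like. It is only a sketch under the comment's assumptions, not the pass's actual code: InputMatrix.vectors(), isColumnMajor(), ArgShape.NumRows/NumColumns, and ArgShape.getNumVectors() are the names used in the comment, while MatrixTy, ShapeInfo, addVector(), and getElementType() are assumed stand-ins for the pass-internal helpers.

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Sketch only: MatrixTy and ShapeInfo stand in for the pass-internal types.
MatrixTy lowerTransposeSketch(const MatrixTy &InputMatrix, ShapeInfo ArgShape,
                              IRBuilder<> &Builder) {
  MatrixTy Result;

  // The number of result vectors equals the length of each input vector:
  // NumRows for column-major input, NumColumns for row-major input.
  unsigned NewNumVecs =
      InputMatrix.isColumnMajor() ? ArgShape.NumRows : ArgShape.NumColumns;
  // Each result vector gets one element per vector of the input's
  // leading dimension.
  unsigned NewNumElts = ArgShape.getNumVectors();

  for (unsigned I = 0; I < NewNumVecs; ++I) {
    // Start from an undef vector and fill it with the I-th element of every
    // input vector; row and column indices swap in the process.
    Value *ResultVector = UndefValue::get(
        FixedVectorType::get(InputMatrix.getElementType(), NewNumElts));
    for (auto V : enumerate(InputMatrix.vectors())) {
      Value *Elt = Builder.CreateExtractElement(V.value(), I);
      ResultVector = Builder.CreateInsertElement(ResultVector, Elt, V.index());
    }
    Result.addVector(ResultVector);
  }
  return Result;
}
```

The point of the sketch is that the extract/insert loop itself is identical for both layouts; only the two dimension queries at the top depend on whether the input is row- or column-major.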