This patch fixes a bug in the way we compute the vector type for vector
transfer writes when the value to store needs to be transposed.
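For context, here is a minimal illustrative example (not a test case from this patch) of the situation in question: a vector.transfer_write whose permutation map transposes the value before it is stored, so the vector type being written has the permuted shape of the source vector.

```mlir
// Minimal sketch, not from the patch: the permutation map transposes the
// 4x8 vector so that it is stored into an 8x4 memref.
func.func @write_transposed(%vec: vector<4x8xf32>, %dest: memref<8x4xf32>) {
  %c0 = arith.constant 0 : index
  vector.transfer_write %vec, %dest[%c0, %c0]
      {in_bounds = [true, true],
       permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
      : vector<4x8xf32>, memref<8x4xf32>
  return
}
```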
Event Timeline
mlir/lib/IR/AffineMap.cpp, line 570:

This looks a bit weird and very specific from an API perspective; I wouldn't know when to use it in cases other than your very specific use case. How about something more reusable, like the following (very rough sketch, alternatives possible):

```cpp
AffineMap AffineMap::getFilteredMultiDimIdentityMap(
    MLIRContext *ctx, int64_t numDims,
    llvm::function_ref<bool(AffineDimExpr)> keepDim) {
  auto map = getMultiDimIdentityMap(numDims, ctx);
  // Apply keepDim to each dim to collect `projectedDims`, the result
  // positions to drop, then project them out.
  return map.dropResults(projectedDims);
}

// Call site: keep only the dims that some result of `map` depends on.
auto filtered = AffineMap::getFilteredMultiDimIdentityMap(
    ctx, map.getNumDims(), [&](AffineDimExpr d) {
      return llvm::any_of(map.getResults(), [&](AffineExpr e) {
        return e.isFunctionOfDim(d.getPosition());
      });
    });
```

The double lambda nesting may be unappealing, sorry it's Sat night... :p but you get the gist.
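To make the intent of the proposed helper concrete (hypothetical input, not part of the patch): starting from the 3-D identity map and keeping only d0 and d2 would produce

```mlir
// Hypothetical result of filtering the identity map
// (d0, d1, d2) -> (d0, d1, d2) so that only d0 and d2 are kept.
affine_map<(d0, d1, d2) -> (d0, d2)>
```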
mlir/test/Dialect/Linalg/vectorization.mlir, line 1779:

Note: we should have an op able to simply target a particular op to vectorize; this would greatly increase applicability and avoid chasing parent ops and applying many patterns on the fly. I am open to renaming the existing op too, but that would require more changes than we should spend effort on right now and is a good intro cleanup task.
Thanks for the quick review!
mlir/lib/IR/AffineMap.cpp, line 570:

Thanks! Makes sense!
mlir/test/Dialect/Linalg/vectorization.mlir, line 1779:

I think @awarzynski has been looking a bit more into the vectorization ops in the transform dialect. Maybe he can help with this, although I understand this is low priority.
mlir/test/Dialect/Linalg/vectorization.mlir, line 1779:

I might have a few spare cycles for this. I assume that you are thinking of something similar to transform.structured.masked_vectorize:

```mlir
transform.sequence failures(propagate) {
^bb1(%arg1: !transform.any_op):
  %0 = transform.structured.match ops{["linalg.generic"]} in %arg1
      : (!transform.any_op) -> !transform.any_op
  transform.structured.masked_vectorize %0 vector_sizes [4] : !transform.any_op
}
```
Regarding the AffineMap helper discussed above: feel free to introduce other helpers if they are generally reusable and not one-off.