Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
Thanks for working on this, @hanchung! Just a few minor comments inline.
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp

| Line | Comment |
|---|---|
| 1326 | Could you add a test to exercise this? (A possible test input is sketched after this table.) |
| 1455 | [nit] I appreciate that this is unrelated to this change, but "Input vector sizes" is a bit misleading IMHO. "Requested vector sizes" or "Target vector sizes" would be more descriptive. IIUC, inputVectorSizes is an input to vectorize, i.e. the vectorizer. It's just too easy to confuse this with "input to the PadOp that's being analyzed here". Perhaps that's just me. |
| 1460–1461 | This isn't required for this change, is it? I don't mind, but if that's the case then could you add a note in the commit msg? Also, I'd move this to the very top of this method (it's one of the key assumptions, isn't it?). |
| 1468 | [nit] Same note about "input" as above - perhaps "target"? |
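A test for this path would presumably need a tensor.pad whose sizes are only known at runtime. A minimal sketch of such an input is below; the function name, element type, and dynamic high-padding arguments are purely illustrative and not taken from the patch.

```mlir
// Illustrative test input: both the source sizes and the high padding are
// dynamic, so vectorization must take the masked path with user-provided
// vector sizes.
func.func @pad_dynamic(%src: tensor<?x?xf32>, %h0: index, %h1: index,
                       %pad_value: f32) -> tensor<?x?xf32> {
  %padded = tensor.pad %src low[0, 0] high[%h0, %h1] {
  ^bb0(%i: index, %j: index):
    tensor.yield %pad_value : f32
  } : tensor<?x?xf32> to tensor<?x?xf32>
  return %padded : tensor<?x?xf32>
}
```

A test would then drive this through masked vectorization with fixed requested vector sizes (e.g. [8, 16]); the exact way the sizes are passed depends on the test harness used.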
Awesome! Just a few comments. LGTM overall.
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp

| Line | Comment |
|---|---|
| 1326 | Adding some comments here would also help understanding. I understand that this condition is only met for dynamic cases because the padOp semantics prevent the use of any other types that mismatch, right? |
| 1454–1473 | I think all this code is shared with vectorize. Is it possible to refactor it into a utility function? |
Cool, thanks for extending this to also support this case.
LGTM once comments are addressed.
address comments
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp

| Line | Comment |
|---|---|
| 1326 | Because the pad op is not a DPS operation, we don't have a destination tensor to do the transfer_write into. We should just create the empty op with the given low, high, and source shape (see the IR sketch after this table). |
| 1455 | I agree with the point; it sounds really reasonable to me. There are many other places in this file using the "input vector" terminology, and I'd like to keep the consistency. This should be done in a separate patch, IMO. |
| 1460–1461 | I think this is required for all masking vectorization? My understanding is that we need the static values to create the mask. This is the same check as in the generic op version. I refactored the common logic into a util, please take a look. |
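To make the explanation for line 1326 concrete, here is a rough, hand-written sketch of the kind of IR this lowering produces for a 2-D pad with requested vector sizes [8, 16]. All SSA names, the result-size handling, and the masking/in_bounds details of the write are assumptions for illustration, not output copied from the vectorizer.

```mlir
// Illustrative only: %r0/%r1 stand for the pad result's dynamic sizes and
// vector sizes [8, 16] stand in for the user-requested sizes.
func.func @vectorized_pad(%src: tensor<?x?xf32>, %r0: index, %r1: index,
                          %pad_value: f32) -> tensor<?x?xf32> {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  // The read from the pad source is masked by the source's runtime sizes;
  // out-of-bounds lanes take the transfer_read padding value.
  %d0 = tensor.dim %src, %c0 : tensor<?x?xf32>
  %d1 = tensor.dim %src, %c1 : tensor<?x?xf32>
  %mask = vector.create_mask %d0, %d1 : vector<8x16xi1>
  %read = vector.mask %mask {
    vector.transfer_read %src[%c0, %c0], %pad_value
        : tensor<?x?xf32>, vector<8x16xf32>
  } : vector<8x16xi1> -> vector<8x16xf32>
  // tensor.pad is not a DPS op, so a fresh tensor.empty of the result shape
  // acts as the destination of the write; the write may itself need masking
  // when the result sizes are dynamic.
  %init = tensor.empty(%r0, %r1) : tensor<?x?xf32>
  %res = vector.transfer_write %read, %init[%c0, %c0]
      : vector<8x16xf32>, tensor<?x?xf32>
  return %res : tensor<?x?xf32>
}
```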
Thank you!
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp

| Line | Comment |
|---|---|
| 1460–1461 | Yes, that was my point :) Your patch is about "dynamic pad op masking vectorization" and this change applies to everything. Like I said, I don't mind :) |
LGTM, thanks! Please wait for @awarzynski's approval before landing.
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp

| Line | Comment |
|---|---|
| 1363 | doc |