We are already doing this for depthwise convolution and pooling.
This helps to preserve the promotion semantics from Linalg op
definitions to lower layers.
Along the way, this also fixes a type mismatch issue in the existing
promotion implementation.
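To illustrate what operand promotion means here (a hedged sketch; the value names and vector shapes are illustrative, not taken from the patch): when a convolution's input element type is narrower than its accumulator type, vectorization inserts explicit extension ops so the multiply-accumulate happens uniformly in the accumulator type, e.g. for i8 inputs accumulated in i32:

```mlir
// Illustrative IR: both operands are extended to the accumulator type
// before the multiply-accumulate, rather than relying on implicit
// mixed-type arithmetic.
%lhs_ext = arith.extsi %lhs : vector<4xi8> to vector<4xi32>
%rhs_ext = arith.extsi %rhs : vector<4xi8> to vector<4xi32>
%mul     = arith.muli %lhs_ext, %rhs_ext : vector<4xi32>
%acc     = arith.addi %init, %mul : vector<4xi32>
```

For float inputs the analogous promotion would use arith.extf instead of arith.extsi.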
Differential D148471
[mlir][linalg] Promote operands for convolution vectorization
Closed, Public. Authored by antiagainst on Apr 16 2023, 8:55 AM.
Event Timeline
Reviewer comment: LGTM
This revision is now accepted and ready to land. Apr 17 2023, 7:09 AM
Closed by commit rG7517e246aca2: [mlir][linalg] Promote operands for convolution vectorization (authored by antiagainst). Apr 17 2023, 4:37 PM
This revision was automatically updated to reflect the committed changes. antiagainst marked 2 inline comments as done.
Revision Contents
Diff 514453:
- mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
- mlir/test/Dialect/Linalg/vectorize-convolution.mlir
nit: This function doesn't seem to require integer or float element types until the assertion at the very bottom -- should we assert that the element type is an int or float before we query the bit width? As is, I think this would assert somewhere inside getIntOrFloatBitWidth.
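The guard the comment asks for can be sketched with MLIR's Type API (a sketch only, not the actual patch; the helper name getElementBitWidth is hypothetical, but isIntOrFloat() and getIntOrFloatBitWidth() are the standard mlir::Type methods, and the latter asserts internally on other types):

```
// Hypothetical helper: check the element type up front so the failure
// surfaces here with a clear message, not inside getIntOrFloatBitWidth.
static unsigned getElementBitWidth(mlir::Type elementType) {
  assert(elementType.isIntOrFloat() &&
         "expected integer or float element type");
  return elementType.getIntOrFloatBitWidth();
}
```

This moves the diagnostic to the caller's entry point, which is the usual MLIR convention for precondition checks.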