Added dimensions can be both static and dynamic. Mapped dimensions should be the same in the input and the init.
Diff Detail
Repository: rG LLVM Github Monorepo
Event Timeline
What was the rationale behind this check in the first place?
Also, looking at the test you removed, the attribute looks unnatural to me:
```mlir
%bcast = linalg.broadcast
    ins(%input : tensor<4x16xf32>)
    outs(%init : tensor<4x?x16xf32>)
    dimensions = [0, 2]
func.return %bcast : tensor<4x?x16xf32>
```
I would have expected to have
```mlir
%bcast = linalg.broadcast
    ins(%input : tensor<4x16xf32>)
    outs(%init : tensor<4x?x16xf32>)
    dimensions = [1]
func.return %bcast : tensor<4x?x16xf32>
```
I.e., dimension 1 of the result is the one added by the broadcast,
as opposed to dimensions 0 and 2 of the output, which were already present in the input.
Thoughts?
So, you would prefer an attribute that specifies which dimensions are added, instead of which output dimensions the input dimensions map to? These two representations are indeed equivalent and can be converted back and forth.
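For concreteness, here is a minimal sketch of that conversion, assuming the mapped-dimensions list is sorted and in range; the helper names are illustrative, not any existing MLIR API:

```cpp
#include <cstdint>
#include <vector>

// Convert the "mapped dimensions" encoding (for each input dim, the output
// dim it maps to, e.g. [0, 2]) into the "added dimensions" encoding (the
// output dims the broadcast introduces, e.g. [1]). The added dims are
// exactly the complement of the mapped dims within the output rank.
std::vector<int64_t> mappedToAdded(const std::vector<int64_t> &mapped,
                                   int64_t outputRank) {
  std::vector<bool> isMapped(outputRank, false);
  for (int64_t dim : mapped)
    isMapped[dim] = true;
  std::vector<int64_t> added;
  for (int64_t dim = 0; dim < outputRank; ++dim)
    if (!isMapped[dim])
      added.push_back(dim);
  return added;
}

// The reverse direction is the same complement operation: the mapped dims
// are all output dims that were not added.
std::vector<int64_t> addedToMapped(const std::vector<int64_t> &added,
                                   int64_t outputRank) {
  return mappedToAdded(added, outputRank);
}
```

E.g., with an output rank of 3, `mappedToAdded({0, 2}, 3)` yields `{1}`, matching the two spellings in the examples above.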
I just noted that it felt a bit unnatural to me, and I am wondering whether there is another interpretation than the one I applied, i.e. "broadcast %input into a new tensor along result dimension 1"?
I guess we could use an explanation in plain words as a judge of whether the syntax is straightforward or a bit convoluted?
I think it's just two different representations. We borrowed the current one from mHLO, but the bcast there can also do transpose, reshape and, possibly, a double espresso shot. With no transpose present in the semantics of the op, we can actually switch from `dimensions = [0, 2]` to `dimensions = [1]`. That will make it more obvious how many and which dimensions are added. The current representation makes it clear how the input dims are mapped to the output dims, but that is less important, I think.
Anyway, I suggest we do that in a follow-up patch and submit this one for now.
Yes, the mhlo "heritage" is visible in the semantics. Great, let's do that as a follow-up.
Thanks!