The padded sizes should be derived from the destination tensor, not the source tensor. There can be more than one incomplete tile in the padding domain.
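For illustration, a minimal sketch (hypothetical shapes, not taken from this patch) where both tiled dimensions end with an incomplete tile, so the padded source sizes are read off the destination shape (2*8 and 8*2) rather than the source sizes (13 and 15):

```mlir
// Hypothetical shapes chosen for illustration only.
func.func @pack_example(%src: tensor<13x15xf32>,
                        %dest: tensor<2x8x8x2xf32>) -> tensor<2x8x8x2xf32> {
  %pad = arith.constant 0.0 : f32
  // Both tiled dims have an incomplete tile: the lowering pads the source to
  // 16x16, i.e. destination outer size * inner tile (2*8 and 8*2).
  %pack = tensor.pack %src padding_value(%pad : f32)
      inner_dims_pos = [0, 1] inner_tiles = [8, 2]
      into %dest : tensor<13x15xf32> -> tensor<2x8x8x2xf32>
  return %pack : tensor<2x8x8x2xf32>
}
```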
Repository: rG LLVM Github Monorepo
Event Timeline
mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp:551
Tangentially related: the documentation for the pack op says that if the padding value is absent and the tile doesn't perfectly divide the dimension, then it is UB. Do we need to update the op semantics, or does this need to be a failure? (https://mlir.llvm.org/docs/Dialects/TensorOps/#tensorpack-mlirtensorpackop)
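For context, a hypothetical snippet (shapes assumed here, not from this patch) of the situation the pack documentation describes: no padding_value and a tile size that may not divide the dynamic dimension at runtime:

```mlir
// Hypothetical: no padding_value is supplied. If the dynamic size of %src is
// not a multiple of the inner tile (8) at runtime, the op documentation says
// the behavior is undefined.
func.func @pack_no_pad(%src: tensor<?xf32>,
                       %dest: tensor<?x8xf32>) -> tensor<?x8xf32> {
  %pack = tensor.pack %src inner_dims_pos = [0] inner_tiles = [8]
      into %dest : tensor<?xf32> -> tensor<?x8xf32>
  return %pack : tensor<?x8xf32>
}
```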
rebase + address comments
mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp:551
It will raise an error if the padding value is absent and the tile doesn't perfectly divide the dimension; see https://github.com/llvm/llvm-project/blob/afe78db7701d15f1d69b3fa50ee05fd42a6297cd/mlir/test/Dialect/Tensor/invalid.mlir#L542-L546. For dynamic cases, it is hard to raise that signal, so I think the pattern should model the UB by padding with zeros when doing the lowering. A padding value is needed for the pad op even if the pad is a no-op (i.e., all the values in lows and highs are zeros). I think the pattern can decide how to handle UB itself, so we don't have to update the op semantics. The failure is raised if the op has enough static information, so in this context we don't need to update the doc. What do you think?
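A sketch (hypothetical shapes) of the no-op pad mentioned above, where all low/high amounts are zero but tensor.pad still requires a yielded padding value:

```mlir
// Hypothetical no-op pad: all low/high amounts are zero, yet the region must
// still yield a padding value.
func.func @noop_pad(%src: tensor<16x16xf32>) -> tensor<16x16xf32> {
  %cst = arith.constant 0.0 : f32
  %padded = tensor.pad %src low[0, 0] high[0, 0] {
  ^bb0(%i: index, %j: index):
    tensor.yield %cst : f32
  } : tensor<16x16xf32> to tensor<16x16xf32>
  return %padded : tensor<16x16xf32>
}
```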
mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp:551
Ah, I forgot that the verifier will handle the static case, and for the dynamic case UB makes sense.
nit: AffineMap::get(/*dimCount=*/2, /*symbolCount=*/1, d0 * s0 - d1);