When padding Linalg input operands, we assumed that the producers are always
tensor.extract_slice ops. However, this does not hold with fusion. For example,
with a linalg.matmul + linalg.generic chain, we may tile the reduction
dimension first and then pad all the Linalg inputs; in that case the result of
the scf.for becomes one of the linalg.generic inputs. To handle this case, we
have to propagate the shape information through the scf.for op, i.e., recover
it from the corresponding iter_args.
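For illustration only (this is a minimal sketch, not the exact code in this patch): one way to trace a padded operand back through scf.for results to the value passed into iter_args. The helper name traceThroughForOps and the use of getIterOperands are assumptions for the example; it also assumes the shape of an iteration argument does not change across iterations, which holds for the Linalg tiling patterns discussed here.

  // Sketch only: walk from an operand's value back through scf.for results
  // to the corresponding init operand feeding the iter_args.
  static Value traceThroughForOps(Value value) {
    while (auto forOp = value.getDefiningOp<scf::ForOp>()) {
      auto result = value.cast<OpResult>();
      // Map the loop result to the init operand with the same position.
      value = forOp.getIterOperands()[result.getResultNumber()];
    }
    return value; // Expected to come from a tensor.extract_slice op.
  }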
Details
- Reviewers
mravishankar nicolasvasilache gysit
Diff Detail
- Repository
- rG LLVM Github Monorepo
Event Timeline
Thanks for identifying the issue! This patch unfortunately reminds me of a dim op canonicalization pattern discussion in the past. The question there was whether a dynamic tensor passed as an iteration argument to a for loop has the same shape in every loop iteration. The answer was unfortunately no in general (for example, a loop may yield a tensor whose dynamic size grows in every iteration), which means that with your patch we may compute a wrong shape in the general case. However, in the context of Linalg the shape of an iteration argument never changes, so it may be OK to assume that the iteration argument and the result have the same shape here. @nicolasvasilache what is your opinion here?
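If the general case is still a concern, one conservative option (a sketch, not part of the patch under review) would be to only look through an scf.for result when its type has a fully static shape, since a static shape cannot differ between iterations:

  // Hypothetical guard, not from this patch: only trace a value through an
  // scf.for result when the tensor shape is fully static, because a static
  // shape is guaranteed to be identical in every loop iteration.
  static bool canLookThroughForResult(OpResult result) {
    auto tensorType = result.getType().dyn_cast<RankedTensorType>();
    return tensorType && tensorType.hasStaticShape();
  }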
mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
Line 184: There is a helper function to get the op operand directly, which should work similar to the following code:

  // Look through scf.for results: map each result back to the op operand
  // that feeds the corresponding iter_arg.
  while (auto forOp = opOperand->get().getDefiningOp<scf::ForOp>()) {
    OpResult result = opOperand->get().cast<OpResult>();
    opOperand = &forOp.getOpOperandForResult(result);
  }
We might want https://reviews.llvm.org/D119390 instead, because the scf.for iter_arg shapes might change at runtime.