Fold dim ops of scf.for results to dim ops of the respective iter args if the loop is shape preserving.
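To illustrate the fold, here is a hypothetical before/after sketch (not IR taken from the patch; constants and the loop body are elided, and all SSA names are made up):

```mlir
// Before: %d queries a result of a shape-preserving scf.for.
%r = scf.for %iv = %lb to %ub step %c1 iter_args(%a = %t) -> (tensor<?xf32>) {
  ...
  scf.yield %next : tensor<?xf32>  // %next has the same shape as %a
}
%d = tensor.dim %r, %c0 : tensor<?xf32>

// After: since the loop preserves the shape of its iter arg, the dim
// can be taken from the corresponding init operand %t instead.
%d = tensor.dim %t, %c0 : tensor<?xf32>
```

This also makes %d independent of the loop, so the loop itself may become dead if %r has no other uses.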
Diff Detail
- Repository
- rG LLVM Github Monorepo
Event Timeline
This is adding a specialized pattern that implements the shape reification interface plus a rewrite based on it. My first hunch was that this should then also be in a pass and not a canonicalization. However, I think control flow ops are different from compute ops in that regard, so I came to the conclusion that it makes sense to have this as a pattern.
The other comment would be that an interface might be helpful to model the shape preserving nature of operations and not hard-wire it to the current two ops. This CL did not introduce that part, so it should not be blocked on it.
Just noting this here to document that thought.
Note: This is not actually a canonicalization pattern, but part of for-loop-canonicalization, which is a pass that contains "cross-dialect boundary" canonicalizations.
> The other comment would be that an interface might be helpful to model the shape preserving nature of operations and not hard-wire it to the current two ops. This CL did not introduce that part, so it should not be blocked on it.
That's a good idea. Then this could be a canonicalization of tensor.dim/memref.dim. Same for D109431, which I just sent out.
Still not entirely sure how the interface would work, but maybe it could provide a function like:
/// Return the index of the operand whose (dynamic) type is guaranteed to
/// match the `resultIdx`-th result's type. Return None if no such operand
/// exists.
Optional<unsigned> getTiedInput(unsigned resultIdx);
Or:
bool haveSameTypes(unsigned operandIdx, unsigned resultIdx);
Then it would be up to ops like scf.for to implement this interface. Ops such as tensor.insert_slice would also implement this interface, so the implementation of scf.for could query those without having a dependency on the tensor dialect.
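As a sketch of that layering (hypothetical IR, not from the patch): scf.for could decide that it is shape preserving by asking the op producing each yielded value whether that result is tied to the corresponding iter arg, without naming tensor.insert_slice anywhere in the SCF dialect.

```mlir
// scf.for queries the interface on the yielded value's defining op:
// tensor.insert_slice's result has the same shape as its dest operand
// (%a), which is the iter arg, so the loop preserves the shape of %t.
%r = scf.for %iv = %lb to %ub step %c1 iter_args(%a = %t) -> (tensor<?xf32>) {
  %next = tensor.insert_slice %s into %a[0] [5] [1]
      : tensor<5xf32> into tensor<?xf32>
  scf.yield %next : tensor<?xf32>
}
```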
I like that variant, because the tiedInput concept is very Linalg-specific. I would just say that they have the same shape, to cover the cases where the element types are different. And SameOperandsAndResultType and friends could provide a default implementation for this, too.
@mravishankar wdyt? I think this is far enough away from the other shape interfaces we have.
> bool haveSameTypes(unsigned operandIdx, unsigned resultIdx);
>
> I like that variant, because the tiedInput concept is very Linalg-specific. I would just say that they have the same shape, to cover the cases where the element types are different. And SameOperandsAndResultType and friends could provide a default implementation for this, too.
If it's just the naming... One benefit of the getTiedInput variant (though it may have a different name) would be an easier implementation of isShapePreserving for scf.for loops. With haveSameTypes, I would have to iterate over all operands of an op to see which operand has the same type. (There may even be multiple, so maybe the return value should be a SmallVector<unsigned>.)
Anyway, I'm landing this revision as is for the moment. I'll start a thread on Discourse or one of our chat groups later to discuss this further. And I'll change this implementation later if people are OK with adding a new interface.