Differential D103132
[mlir][linalg] Derive padded subtensor shape from linalg op outputs
Authored by springerm on May 25 2021, 6:28 PM.

Details
Add a canonicalization pattern that rewrites memref::DimOps accordingly.
Depends On D103226

Diff Detail
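The rewrite under discussion can be sketched in IR roughly as follows. This is a hand-written illustration, not taken from the actual diff: the op spellings follow MLIR of that era (dim on tensors was `memref.dim` at the time), and `%A`, `%B`, `%m`, `%n` are hypothetical values.

```mlir
// Before: the shape of a linalg op result is queried through the result.
%init = linalg.init_tensor [%m, %n] : tensor<?x?xf32>
%res = linalg.matmul ins(%A, %B : tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>
%c0 = constant 0 : index
%d0 = memref.dim %res, %c0 : tensor<?x?xf32>

// After: the dim is derived from the tied output operand instead, which
// trivially has the same shape as the corresponding result.
%d0 = memref.dim %init, %c0 : tensor<?x?xf32>
```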
Comment:
Let's add a proper folding hook and use createAndFold please; this has multiple benefits over all clients using internal op-specific knowledge.

Comment:
The folder would have to be implemented as part of DimOp::fold in MemRefOps.cpp. It would have to check whether the DimOp operand is defined by a LinalgOp and, if so, replace the DimOp with a new DimOp on the LinalgOp's respective output. This means putting linalg-specific code into the memref dialect; a canonicalization pattern may be more suitable.

Comment:
A folder would be better because createOrFold cleans a lot of things up by construction, without relying on calling a canonicalization pass in weird places. Still, I wonder if this is the justification we've been wanting to get the TiedOperandInterface in place and refactor using this: @silvas @mehdi_amini for an extra set of eyes.

Comment:
Submitting this commit as is, but something like TiedOperandInterface would in fact be useful here. Will revisit this if we end up adding something like it.

Comment:
DimOp should already have a canonicalizer that uses InferShapedTypeOpInterface to rewrite dim(linalgop(x)) to some_expr_of(dim(x)). Do you have a test case where that doesn't happen? One difference is that the current canonicalization computes the output shape in terms of the ins, whereas what you are proposing here computes it in terms of the outs (which is trivial). This came up in the past: @mravishankar argued for the "shape in terms of ins" approach, whereas I argued for the "shape in terms of outs" approach. I'll let you experts discuss this matter.
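The "ins" vs. "outs" distinction in the comment above can be illustrated with a hypothetical matmul (a sketch under assumed era-appropriate syntax, not code from the diff):

```mlir
%res = linalg.matmul ins(%A, %B : tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(%C : tensor<?x?xf32>) -> tensor<?x?xf32>
%c0 = constant 0 : index

// "Shape in terms of ins": dim 0 of the result equals dim 0 of input %A,
// a fact derived from the matmul indexing maps via InferShapedTypeOpInterface.
%d0_ins = memref.dim %A, %c0 : tensor<?x?xf32>

// "Shape in terms of outs": dim 0 of the result equals dim 0 of the tied
// output operand %C -- no indexing-map reasoning needed.
%d0_outs = memref.dim %C, %c0 : tensor<?x?xf32>
```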
I'm generally against TiedOperandInterface (at least the way IREE's is defined) because it has very vague semantics: it conflates a number of things -- the definition literally says something like "depending on the context it is interpreted as either A, B, or C", which doesn't have the precision needed to guide transformations safely. If the information needed is that the shape of an output is the same as the shape of an input, then InferShapedTypeOpInterface is already sufficient to express that.

Comment:
You are right, the existing canonicalizations + CSE are good enough. This commit is not needed, but I didn't realize that until after I wrote the pattern ;)
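Why the existing canonicalizations plus CSE suffice can be sketched as follows (a hypothetical example, assuming the dims of the inputs were already materialized to build the init tensor):

```mlir
// %dim was computed before the matmul in order to size the init tensor ...
%dim = memref.dim %A, %c0 : tensor<?x?xf32>
%init = linalg.init_tensor [%dim, %n] : tensor<?x?xf32>
%res = linalg.matmul ins(%A, %B : tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>

// ... so once the InferShapedTypeOpInterface canonicalization rewrites
// dim(%res) into dim(%A), CSE merges it with the existing %dim and no
// dim-of-linalg-op is left behind.
%d0 = memref.dim %res, %c0 : tensor<?x?xf32>
```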