Fusion of linalg.generic with
tensor.expand_shape/tensor.collapse_shape is currently handled by
expanding the dimensionality of the linalg.generic operation. This helps
fuse elementwise operations better, since they are fused at the highest
dimensionality while keeping all the indexing maps involved projected
permutations. The intent of these patterns is to push the reshapes to
the boundaries of the function.
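For illustration, here is a minimal sketch of the expansion-based
behavior (function names, shapes, and the reassociation are invented for
this example, and the exact tensor.expand_shape syntax may vary across
MLIR versions): the elementwise linalg.generic is rewritten to operate
at the expanded rank, and the reshape moves onto its operands, i.e.
towards the function boundary.

```mlir
// Invented example; shapes and names are illustrative only.
#id2 = affine_map<(d0, d1) -> (d0, d1)>
#id3 = affine_map<(d0, d1, d2) -> (d0, d1, d2)>

// Before: an elementwise generic whose result feeds a tensor.expand_shape.
func.func @before(%a: tensor<16x4xf32>, %init: tensor<16x4xf32>)
    -> tensor<2x8x4xf32> {
  %0 = linalg.generic
         {indexing_maps = [#id2, #id2],
          iterator_types = ["parallel", "parallel"]}
         ins(%a : tensor<16x4xf32>) outs(%init : tensor<16x4xf32>) {
  ^bb0(%in: f32, %out: f32):
    %s = arith.addf %in, %in : f32
    linalg.yield %s : f32
  } -> tensor<16x4xf32>
  %1 = tensor.expand_shape %0 [[0, 1], [2]]
         : tensor<16x4xf32> into tensor<2x8x4xf32>
  return %1 : tensor<2x8x4xf32>
}

// After expansion-based fusion: the generic runs at rank 3 and the
// reshapes have been pushed up onto its operands.
func.func @after(%a: tensor<16x4xf32>, %init: tensor<16x4xf32>)
    -> tensor<2x8x4xf32> {
  %ae = tensor.expand_shape %a [[0, 1], [2]]
          : tensor<16x4xf32> into tensor<2x8x4xf32>
  %ie = tensor.expand_shape %init [[0, 1], [2]]
          : tensor<16x4xf32> into tensor<2x8x4xf32>
  %0 = linalg.generic
         {indexing_maps = [#id3, #id3],
          iterator_types = ["parallel", "parallel", "parallel"]}
         ins(%ae : tensor<2x8x4xf32>) outs(%ie : tensor<2x8x4xf32>) {
  ^bb0(%in: f32, %out: f32):
    %s = arith.addf %in, %in : f32
    linalg.yield %s : f32
  } -> tensor<2x8x4xf32>
  return %0 : tensor<2x8x4xf32>
}
```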
The presence of named ops (or other ops across which the reshape
cannot be propagated) prevents the reshapes from propagating all the
way to the edges of the function. At this stage, the converse patterns,
which fold the reshapes with generic ops by collapsing the dimensions
of the generic op, can push the reshape towards the edges. In
particular, this helps the case where a reshape sits between a named op
and a generic op:
linalg.named_op -> tensor.expand_shape -> linalg.generic
Pushing the reshape down will help fusion of linalg.named_op ->
linalg.generic using tile + fuse transformations.
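As a sketch of what the collapsing-based folding aims for (shapes and
names invented; linalg.matmul stands in for an arbitrary named op), the
generic is collapsed back to the rank of the named op result so the two
ops become adjacent, and the expand_shape is re-created below the
generic:

```mlir
// Invented example; shapes and names are illustrative only.
#id2 = affine_map<(d0, d1) -> (d0, d1)>
#id3 = affine_map<(d0, d1, d2) -> (d0, d1, d2)>

// Before: a reshape separates the named op from the elementwise generic.
func.func @before(%lhs: tensor<16x8xf32>, %rhs: tensor<8x4xf32>,
                  %acc: tensor<16x4xf32>, %init: tensor<2x8x4xf32>)
    -> tensor<2x8x4xf32> {
  %mm = linalg.matmul ins(%lhs, %rhs : tensor<16x8xf32>, tensor<8x4xf32>)
                      outs(%acc : tensor<16x4xf32>) -> tensor<16x4xf32>
  %e = tensor.expand_shape %mm [[0, 1], [2]]
         : tensor<16x4xf32> into tensor<2x8x4xf32>
  %r = linalg.generic
         {indexing_maps = [#id3, #id3],
          iterator_types = ["parallel", "parallel", "parallel"]}
         ins(%e : tensor<2x8x4xf32>) outs(%init : tensor<2x8x4xf32>) {
  ^bb0(%in: f32, %out: f32):
    %s = arith.mulf %in, %in : f32
    linalg.yield %s : f32
  } -> tensor<2x8x4xf32>
  return %r : tensor<2x8x4xf32>
}

// After folding by collapsing: the generic is collapsed back to rank 2
// so it consumes the matmul result directly, and the expand_shape is
// re-created below it.
func.func @after(%lhs: tensor<16x8xf32>, %rhs: tensor<8x4xf32>,
                 %acc: tensor<16x4xf32>, %init: tensor<16x4xf32>)
    -> tensor<2x8x4xf32> {
  %mm = linalg.matmul ins(%lhs, %rhs : tensor<16x8xf32>, tensor<8x4xf32>)
                      outs(%acc : tensor<16x4xf32>) -> tensor<16x4xf32>
  %r = linalg.generic
         {indexing_maps = [#id2, #id2],
          iterator_types = ["parallel", "parallel"]}
         ins(%mm : tensor<16x4xf32>) outs(%init : tensor<16x4xf32>) {
  ^bb0(%in: f32, %out: f32):
    %s = arith.mulf %in, %in : f32
    linalg.yield %s : f32
  } -> tensor<16x4xf32>
  %e = tensor.expand_shape %r [[0, 1], [2]]
         : tensor<16x4xf32> into tensor<2x8x4xf32>
  return %e : tensor<2x8x4xf32>
}
```

The expand_shape created below the generic is a new reshape; whether
creating it is worthwhile is left to the `controlFn` mentioned below.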
This pattern is intended to replace the following patterns:
- FoldReshapeByLinearization : These patterns create indexing maps that
  are not projected permutations, which hampers subsequent
  transformations (see the map sketch after this list). They are only
  useful for folding unit dimensions.
- PushReshapeByExpansion : This pattern has the same functionality but
  has some restrictions:
  a) It tries to avoid creating new reshapes, which limits its
     applicability. The pattern added here can achieve the same
     functionality through use of the `controlFn`, which gives clients
     of the pattern the freedom to make this decision.
  b) It does not work for ops with indexing semantics.
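For concreteness, a small invented illustration of the
projected-permutation distinction referenced above:

```mlir
// A projected permutation: each result is a distinct loop dimension,
// with no arithmetic combining dimensions. This is what the
// collapsing-based pattern preserves.
#proj = affine_map<(d0, d1, d2) -> (d0, d2)>
// A linearized access map of the kind produced by
// folding-by-linearization; it is not a projected permutation.
#lin = affine_map<(d0, d1) -> (d0 * 4 + d1)>
```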
These patterns will be deprecated in a future patch.
nit: I think forwarding the constructor arguments should work here:
  domainReassociation.emplace_back({dim});