Canonicalization and folding patterns in StandardOps may interfere with the needs
of Linalg. This revision introduces specific foldings for dynamic memrefs that can
be proven to be static.
Very concretely:
The folding determines whether a memref_cast can be folded away into its parent Linalg op, rewriting:
```mlir
%1 = memref_cast %0 : memref<8x16xf32> to memref<?x?xf32>
%2 = linalg.slice %1 ... : memref<?x?xf32> ...

// or

%1 = memref_cast %0 : memref<8x16xf32, affine_map<(i, j)->(16 * i + j)>>
       to memref<?x?xf32>
linalg.generic(%1 ...) : memref<?x?xf32> ...
```
into
```mlir
%2 = linalg.slice %0 ... : memref<8x16xf32> ...

// or

linalg.generic(%0 ...) : memref<8x16xf32, affine_map<(i, j)->(16 * i + j)>>
```
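Conceptually, the fold walks the Linalg op's operands and, whenever an operand is produced by a memref_cast whose source type carries strictly more static information than its result type, substitutes the cast's source directly. The C++ below is a minimal sketch of such a fold hook; the helper names (`foldMemRefCast`, `isMoreStatic`) are illustrative assumptions, not necessarily this revision's actual code.

```cpp
// Illustrative sketch: fold memref_cast ops that only erase static
// information into their consuming Linalg op.
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/IR/Operation.h"
#include "llvm/ADT/STLExtras.h"

using namespace mlir;

// Returns true if `source` is at least as static as `result`, i.e. every
// static dimension of the result matches the corresponding source dimension.
// (A real implementation must also verify layout-map compatibility.)
static bool isMoreStatic(MemRefType source, MemRefType result) {
  if (source.getRank() != result.getRank())
    return false;
  for (auto dims : llvm::zip(source.getShape(), result.getShape())) {
    int64_t srcDim = std::get<0>(dims), resDim = std::get<1>(dims);
    if (resDim != ShapedType::kDynamicSize && resDim != srcDim)
      return false;
  }
  return true;
}

// Hypothetical fold hook shared by Linalg ops: replace each memref_cast
// operand by the cast's source when doing so can only add static information.
static LogicalResult foldMemRefCast(Operation *op) {
  bool folded = false;
  for (OpOperand &operand : op->getOpOperands()) {
    auto castOp = operand.get().getDefiningOp<MemRefCastOp>();
    if (!castOp)
      continue;
    auto sourceType = castOp.getOperand().getType().dyn_cast<MemRefType>();
    auto resultType = castOp.getType().dyn_cast<MemRefType>();
    if (sourceType && resultType && isMoreStatic(sourceType, resultType)) {
      operand.set(castOp.getOperand()); // Fold the cast away.
      folded = true;
    }
  }
  // Report success only if at least one operand changed, so the folder
  // does not loop indefinitely.
  return success(folded);
}
```

Keeping this fold Linalg-side (rather than as a StandardOps canonicalization) is what lets Linalg recover static shapes without interfering with the generic memref_cast patterns.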