
[mlir] translate memref.reshape with static shapes but dynamic dims

Authored by ashay-github on May 28 2022, 5:41 PM.



Prior to this patch, the lowering of memref.reshape operations to the
LLVM dialect failed if the shape argument had a static shape with
dynamic dimensions. This patch adds support so that when the shape
argument contains dynamic values, the lowering probes each such dimension
at runtime to set the corresponding size in the MemRefDescriptor. The
patch also computes the strides of dynamic dimensions by deriving them
from the sizes of the inner dimensions.
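The stride derivation described above can be sketched as a running product over the sizes, iterating from the innermost dimension outward. This is an illustrative helper, not code from the patch; in the actual lowering the dynamic sizes are probed at runtime rather than known up front:

```cpp
#include <cstdint>
#include <vector>

// Sketch: for an identity (row-major, contiguous) layout, the stride of
// each dimension is the product of the sizes of all inner dimensions.
// Iterating in reverse lets each outer stride be derived from the
// current stride and size, mirroring the patch's reverse iteration.
std::vector<int64_t> computeStrides(const std::vector<int64_t> &sizes) {
  std::vector<int64_t> strides(sizes.size());
  int64_t running = 1; // innermost stride is 1 for an identity layout
  for (int i = static_cast<int>(sizes.size()) - 1; i >= 0; --i) {
    strides[i] = running;
    running *= sizes[i]; // next (outer) stride = current stride * size
  }
  return strides;
}
```

For a 2x3x4 memref this yields strides {12, 4, 1}.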

Event Timeline

ashay-github created this revision.May 28 2022, 5:41 PM
ashay-github requested review of this revision.May 28 2022, 5:41 PM
ashay-github added inline comments.May 28 2022, 5:46 PM

Since the dimensions are now iterated in reverse order, this patch only _reorders_ statements (instead of adding new statements) in the existing test.

ftynse requested changes to this revision.May 30 2022, 9:05 AM
ftynse added inline comments.

If the last (i.e., the first in iteration order) stride is dynamic, strideValue will be set to nullptr, and it will be used below to create IR. This looks broken.


Consider using a regular if conditional instead; it will likely be shorter and more readable.


Please avoid calling code that might create IR from within rewriter.create arguments. This may lead to the IR being emitted in unspecified and thus untestable order (I know that it will not in this specific case, but it is easier to enforce the rule of never doing this rather than to debug potential future changes in function calls).
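The hazard ftynse describes can be illustrated with plain C++ rather than the MLIR API (the names below are hypothetical): the order in which function arguments are evaluated is unspecified, so two side-effecting calls nested in an argument list may run in either order, just like IR-creating calls nested inside rewriter.create arguments:

```cpp
#include <string>
#include <vector>

// Records the order in which side effects happen, standing in for the
// order in which ops would be created in the IR.
std::vector<std::string> opLog;

int emit(const std::string &name) {
  opLog.push_back(name); // side effect, like creating an op
  return 0;
}

int combine(int a, int b) { return a + b; }

void unpredictable() {
  // The two emit() calls may run in either order -- unspecified in C++.
  combine(emit("size"), emit("stride"));
}

void deterministic() {
  // Hoisting into named locals pins the order: "size", then "stride".
  int size = emit("size");
  int stride = emit("stride");
  combine(size, stride);
}
```

Hoisting the calls into named locals before the rewriter.create call keeps the emission order deterministic and therefore testable.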

This revision now requires changes to proceed.May 30 2022, 9:05 AM

Now using a regular if instead of a ternary expression for better readability

ashay-github marked 2 inline comments as done and an inline comment as not done.May 30 2022, 5:57 PM
ashay-github added inline comments.

I thought the same, but (if I am correct) the ViewOpLowering::getStride() method at line 1911 in the same file also seems to assume that the innermost dimension's stride will not be dynamic.

If the innermost dimension's stride is indeed dynamic, is it safe to say that it will be 1?


Makes sense, fixed.


Thanks, I hadn't considered the potential problems. I've fixed it in the most recent version.

ftynse added inline comments.May 31 2022, 2:46 AM

I suppose you mean this (line 1811, not 1911). It has enough checks to never attempt constructing the IR from a null Value, the code here does not. The lowering for ViewOp also leverages the semantic restriction of the op itself that requires the target memref to be contiguous, i.e. to have the innermost stride of 1. Here, there is actually a stronger restriction -- the target memref must have the identity layout, which subsumes innermost stride 1 and the following strides being products of sizes.


Nit: I understand the desire to indicate default-initialization with braces, but we don't usually do this here.


Nit: you shouldn't need .getResult(); single-result ops are implicitly convertible to Value.

ashay-github marked 4 inline comments as done.

Ensure that a null stride value is never used


Thanks for the explanation. I had missed the fact that reshape ops have to have identity layout maps for both source and destination memref types.

I've placed an assertion at the top of the loop, in addition to ensuring that the IR isn't constructed with a null stride value. Please let me know if I didn't get that right. Thanks!

ftynse accepted this revision.Jun 2 2022, 3:07 AM
ftynse added inline comments.

Nit: put the comment inside the assert - assert(condition && "message");
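For reference, the suggested idiom works because a string literal is always truthy, so &&-ing it into the condition leaves the check unchanged while making the message appear in the assertion-failure output. A minimal sketch with a hypothetical helper:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the assert(condition && "message") idiom: the literal does
// not change the condition, but it is printed if the assertion fires.
// `checkedStride` is a hypothetical example, not code from the patch.
int64_t checkedStride(int64_t stride) {
  assert(stride != 0 && "stride must be nonzero");
  return stride;
}
```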

This revision is now accepted and ready to land.Jun 2 2022, 3:07 AM

Place comment in the assert message