
[mlir][shape] Split out structural type conversions for shape dialect.

Authored by silvas on Oct 20 2020, 4:17 PM.



A "structural" type conversion is one where the underlying ops are
completely agnostic to the actual types involved and simply need to update
their types. An example of this is shape.assuming -- the shape.assuming op
and the corresponding shape.assuming_yield op need their types updated
according to the TypeConverter, but otherwise don't care which type
conversions are happening.
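As a sketch (the exact types and ops here are illustrative, not taken from the patch's tests), a structural conversion of shape.assuming under a tensor-to-memref TypeConverter only rewrites the op's result type and the yield's operand type; the op's structure is untouched:

```mlir
// Before: shape.assuming yields a tensor.
%result = shape.assuming %witness -> tensor<2xf32> {
  %val = "test.source"() : () -> tensor<2xf32>
  shape.assuming_yield %val : tensor<2xf32>
}

// After a tensor-to-memref type conversion: only the types change.
%result = shape.assuming %witness -> memref<2xf32> {
  %val = "test.source"() : () -> memref<2xf32>
  shape.assuming_yield %val : memref<2xf32>
}
```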

Also, the previous conversion code would not correctly materialize
conversions for the shape.assuming_yield op. This should have caused a
verification failure, but shape.assuming's verifier wasn't calling
RegionBranchOpInterface::verifyTypes (which can't be called
automatically as part of the trait verification and has to be invoked
manually). This patch also adds that verification.
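For illustration (the exact diagnostic wording is assumed, not quoted from the patch), the added verifyTypes call rejects IR where the yielded type no longer matches the parent op's result type -- exactly the situation the broken materialization used to leave behind silently:

```mlir
// With verifyTypes now called from shape.assuming's verifier, a
// mismatch between the yield operand and the op's result type fails
// verification instead of slipping through:
%result = shape.assuming %witness -> f32 {
  %val = "test.source"() : () -> i32
  shape.assuming_yield %val : i32  // error: yield type does not match result type
}
```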

Diff Detail

Event Timeline

silvas created this revision.Oct 20 2020, 4:17 PM
silvas requested review of this revision.Oct 20 2020, 4:17 PM
jpienaar added inline comments.Oct 20 2020, 4:30 PM

according ?


This sounds weird: they do it following X, but they don't care what they do.


nit: Shape

silvas updated this revision to Diff 299517.Oct 20 2020, 4:57 PM
silvas marked 3 inline comments as done.

address feedback


Good catch! Updated the wording.

tpopp accepted this revision.Oct 21 2020, 7:01 AM

Thanks for the explanation regarding the AssumingYieldOp also. I found it strange that it was working before and had just assumed that ops didn't need to be updated if only their inputs changed.


If you have a strong desire to update this op in place like you did with IfOp, I wouldn't object :)

Not necessary at all, but I might follow up and do that if you don't because it looks nicer and avoids sharp edges in some DialectConversion infrastructure.

herhut accepted this revision.Oct 21 2020, 7:56 AM
herhut added inline comments.

Which patterns do you expect to end up in this file? Like shape.shape_of tensor to memref conversion? Or would that also be structural?

silvas added inline comments.Oct 21 2020, 10:46 AM

shape.shape_of would not be structural.

Structural refers to ops whose operands/results behave analogously to successor operands / block arguments -- one Value transfers to another Value as a pure identity, so no special knowledge of the type is needed (much as any type can be passed through to a successor block argument). That is, a structural type conversion is defined as being correct for any type conversion.

So shape.shape_of would not fit that category, since it does care about the type of its operands (such a conversion would not work, for example, for a tensor<2xi32> -> i64 conversion, as that would result in an invalid shape.shape_of op).
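Concretely (a deliberately nonsensical conversion, chosen only to illustrate the point), if a TypeConverter mapped tensor<2xi32> to i64, blindly rewriting shape.shape_of's types the way a structural pattern does would produce IR the verifier rejects, because the op requires a shaped operand:

```mlir
// Valid: shape.shape_of takes a shaped operand.
%shape = shape.shape_of %t : tensor<2xi32> -> tensor<2xindex>

// Invalid after a hypothetical tensor<2xi32> -> i64 conversion:
// i64 is not a shaped type, so the op no longer verifies.
%bad = shape.shape_of %s : i64 -> tensor<2xindex>  // verifier error
```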

One observation on shape.shape_of is that its result type is independent of whether its argument is a tensor or a memref. Conversion patterns are mostly about converting results -- so as part of any bufferization flow, tensors will be converted to memrefs, and the shape.shape_of operand will naturally shift from tensor to memref (when its def gets converted, or in the "composable passes" case, when the materialization back to tensor is elided as part of a final "full conversion" that eliminates the materializations). The shape.shape_of will remain valid when that happens, and no conversion pattern is even needed.

One exception to the "conversion patterns are mostly about converting results" is e.g. "yield" patterns that are just there to trigger materializations to the target type (e.g. memref in tensor->memref conversion) so that they are consistent with the enclosing op.
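A small sketch of the point above (illustrative types; "test.source" stands in for whatever defines the value): because shape.shape_of's result type doesn't mention tensor vs. memref, the op stays valid once bufferization converts its operand's def, with no pattern of its own:

```mlir
// Before bufferization: operand is a tensor.
%t = "test.source"() : () -> tensor<?xf32>
%s = shape.shape_of %t : tensor<?xf32> -> tensor<?xindex>

// After the def is bufferized: operand is now a memref, and
// shape.shape_of is still valid -- the result type is unchanged.
%m = "test.source"() : () -> memref<?xf32>
%s2 = shape.shape_of %m : memref<?xf32> -> tensor<?xindex>
```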


I'm still working through the best way to do this. I think that updateRootInPlace is the best option, but a bug in the current state of the dialect conversion infra keeps it from working as intended.

The approach as written here is actually the "best" given the current state of the conversion infrastructure. The test case would fail with updateRootInPlace, due to the tensor_cast on the shape.assuming result not getting materialized.

A workaround for that bug (which unfortunately introduces another issue), which I used in D89757 was to do "op.clone()" and then replace the op with a munged clone (rather than munging in place). However, if I did that here, the test case would fail, because the cloning process clones the "test.source" in the body, which then the conversion infra thinks it needs to legalize (as if you had rewriter.create'd it), even though we are doing a partial conversion. The approach here dodges that second issue because inlineRegionBefore doesn't create new ops, so the test.source just naturally moves into the new shape.assuming and the conversion infra doesn't feel compelled to legalize it. There might be a conversion infrastructure bug here or it might be working as intended.

I'll be following up with River to get the conversion infra fixed and update these tests to a single idiomatic way of doing it (which I think will be the updateRootInPlace approach).

This revision was not accepted when it landed; it landed in state Needs Review.Oct 21 2020, 11:59 AM
This revision was landed with ongoing or failed builds.
This revision was automatically updated to reflect the committed changes.