
[mlir][Standard] Add canonicalizer for dynamic_tensor_from_elements

Authored by herhut on Fri, Sep 11, 9:53 AM.



This adds canonicalizers (sketched below) for

  • extracting an element from a dynamic_tensor_from_elements
  • propagating constant operands to the type of dynamic_tensor_from_elements
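
A rough sketch of both rewrites, in the textual IR of the std dialect at the time of this revision; operand names and element computations are invented for illustration:

  // (1) Extraction is folded by inlining the body at the extraction site,
  // with the block arguments replaced by the extraction indices.
  %tensor = dynamic_tensor_from_elements %size {
  ^bb0(%i : index):
    %elem = index_cast %i : index to i32
    yield %elem : i32
  } : tensor<?xi32>
  %e = extract_element %tensor[%idx] : tensor<?xi32>

  // becomes, roughly:
  %e = index_cast %idx : index to i32

  // (2) Constant operands are folded into the result type; a tensor_cast
  // preserves the original type for existing users.
  %c5 = constant 5 : index
  %t = dynamic_tensor_from_elements %c5 {
  ^bb0(%i : index):
    yield %i : index
  } : tensor<?xindex>

  // becomes, roughly:
  %s = dynamic_tensor_from_elements {
  ^bb0(%i : index):
    yield %i : index
  } : tensor<5xindex>
  %t = tensor_cast %s : tensor<5xindex> to tensor<?xindex>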


Event Timeline

herhut created this revision.Fri, Sep 11, 9:53 AM
Herald added a project: Restricted Project.Fri, Sep 11, 9:53 AM
herhut requested review of this revision.Fri, Sep 11, 9:53 AM
mehdi_amini accepted this revision.Sat, Sep 12, 10:55 AM
mehdi_amini added inline comments.

"resulting" does not read well to me here?


can you specify the type here?

auto tensorFromElements = extract.aggregate().getDefiningOp<DynamicTensorFromElementsOp>();

Is it always more canonical to do this?

This revision is now accepted and ready to land.Sat, Sep 12, 10:55 AM
bondhugula added inline comments.

Triple comment please.


Triple /// comments please.



If the element is extracted multiple times from the same tensor, you can end up with duplicate computations here.
I think this one should not be a general canonicalization pattern.


can we have another test for rank > 1?

herhut updated this revision to Diff 291516.Mon, Sep 14, 2:01 AM
herhut marked 5 inline comments as done.


frgossen accepted this revision.Mon, Sep 14, 2:22 AM
herhut updated this revision to Diff 291523.Mon, Sep 14, 2:55 AM
herhut marked 2 inline comments as done.

More comments

herhut added inline comments.Mon, Sep 14, 2:57 AM

Yes, I think this is always a better form. It might give you some code growth if you have many uses (as in, many extract_element operations), but you avoid an intermediate data structure, which generally propagates knowledge and makes other patterns easier to write.

I could guard this on the number of use sites being small to avoid excessive code growth, but I think those would be pathological cases. Also, dynamic_tensor_from_elements is not really a "compute op" like a Linalg generic; it is meant to create small intermediate tensors where pattern composition requires this, like in the shape dialect. I would not write the same pattern for Linalg, for instance.
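
For illustration (an invented sketch, not code from this revision): with two extractions, each site gets its own inlined copy of the body, which is the duplication discussed above:

  %t = dynamic_tensor_from_elements %n {
  ^bb0(%i : index):
    %e = index_cast %i : index to i32
    yield %e : i32
  } : tensor<?xi32>
  %a = extract_element %t[%p] : tensor<?xi32>
  %b = extract_element %t[%q] : tensor<?xi32>

  // after canonicalization the body is replicated per use,
  // but the intermediate tensor disappears:
  %a = index_cast %p : index to i32
  %b = index_cast %q : index to i32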

I have added a check that there are no side effects to make sure that replicating the computation is OK. If you insist, I can also add a check that the tensor has a single use. However, then we might get odd knowledge propagation in cases where one first runs CSE to combine, for example, shape_of computations, versus doing that later.
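
As a hypothetical illustration of that side-effect check (the callee name is invented), a body containing an effectful call must not be replicated across extraction sites, so such a case is left untouched:

  func @log_index(index) -> index  // assumed to have side effects

  %t = dynamic_tensor_from_elements %n {
  ^bb0(%i : index):
    %v = call @log_index(%i) : (index) -> index
    yield %v : index
  } : tensor<?xindex>
  // inlining this body at several extract_element sites would duplicate
  // the call, so the pattern does not fire here.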