This is an archive of the discontinued LLVM Phabricator instance.

[mlir][tosa] Change tosa.reshape lowering to use the tensor.reshape op.
ClosedPublic

Authored by NatashaKnk on Apr 27 2022, 5:18 PM.

Details

Summary

Simplified the reshape lowering. Adjusted the depthwise convolution and reduce lowerings to use expand/collapse instead.
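
For reference, a minimal sketch of the simplified lowering idea, assuming 2022-era MLIR APIs (the builder calls and the static-shape restriction are approximations, not the actual patch): materialize the target shape as a constant 1-D index tensor and emit a single tensor.reshape instead of hand-built expand/collapse chains.

```cpp
#include "mlir/Dialect/Arithmetic/IR/Arithmetic.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Sketch: lower tosa.reshape to a single tensor.reshape.
struct ReshapeToTensorReshape : public OpConversionPattern<tosa::ReshapeOp> {
  using OpConversionPattern::OpConversionPattern;

  LogicalResult
  matchAndRewrite(tosa::ReshapeOp op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    auto resultTy = op.getType().dyn_cast<RankedTensorType>();
    if (!resultTy || !resultTy.hasStaticShape())
      return rewriter.notifyMatchFailure(op, "expected static result shape");

    // tensor.reshape takes the new shape as a 1-D tensor operand.
    auto shapeTy = RankedTensorType::get({resultTy.getRank()},
                                         rewriter.getIndexType());
    Value shape = rewriter.create<arith::ConstantOp>(
        op.getLoc(), DenseIntElementsAttr::get(shapeTy, resultTy.getShape()));
    rewriter.replaceOpWithNewOp<tensor::ReshapeOp>(op, resultTy,
                                                   adaptor.input1(), shape);
    return success();
  }
};
```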

Diff Detail

Event Timeline

NatashaKnk created this revision.Apr 27 2022, 5:18 PM
Herald added a project: Restricted Project.
NatashaKnk requested review of this revision.Apr 27 2022, 5:18 PM

Quite a lot of red :-)

mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
897

Does i == r here?

925

Should we check if ranked first before querying rank?
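
For illustration, the guard being asked about could look like this hypothetical helper (a sketch, not code from the patch):

```cpp
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Value.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

// Query the rank only after confirming the value has a ranked tensor
// type; report failure otherwise so the pattern can bail out cleanly.
static FailureOr<int64_t> getRankIfRanked(Value value) {
  auto tensorTy = value.getType().dyn_cast<RankedTensorType>();
  if (!tensorTy)
    return failure(); // Unranked tensor (or non-tensor) type.
  return tensorTy.getRank();
}
```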

947

adaptor.input1()? (The named accessor isn't much more descriptive)

mlir/lib/Conversion/TosaToLinalg/TosaToLinalgNamed.cpp
157

And here i is outputRank?

347

Same question re checking rank. If you move this closer to its use, it would come after the shape checks (though it seems resultTy is not checked for being static; it's assumed to be if the input types are).

NatashaKnk marked 2 inline comments as done.

Small clarity fixes

mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
897

If I'm understanding the question correctly, this seems like a clarity issue. If so, r might not be the clearest name either, so I changed it (let me know if I'm misunderstanding the intent of the comment :)

925

This seems like a larger problem in our lowering process (i.e., it applies to more than this op). Does it seem reasonable for us to simply not handle unranked tensors, even in cases like tosa.reshape where the spec doesn't explicitly require the tensor to be ranked?

mlir/lib/Conversion/TosaToLinalg/TosaToLinalgNamed.cpp
347

In addition to the question above, is there a shape check I'm missing on the input? The weight and bias are checked, but the input itself should probably also be checked for being ranked, correct? (We should theoretically never see an unranked input per the TOSA spec, but as we've seen, that isn't super reliable yet :)

rsuderman accepted this revision.Apr 29 2022, 11:37 AM
This revision is now accepted and ready to land.Apr 29 2022, 11:37 AM
jpienaar accepted this revision.Apr 29 2022, 11:57 AM

Nice, thanks!

mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
897

Yes, and the other part was about reducing variable scope. Thanks

925

Yes, a top-level check that causes this pass not to run on unranked ops could be an option and would be uniform. And we should definitely document this expectation on the pass. If we expect these to ever be interleaved and only allow partial lowering (e.g., lowering only ranked ops), then we have to do this check in every pattern; otherwise we can do it once at the pass level.
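
A sketch of what such a pass-level check could look like (hypothetical; it assumes a walk over the whole pass root, called from runOnOperation() before applying the patterns, with signalPassFailure() on failure):

```cpp
#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Pass/Pass.h"
#include "llvm/ADT/STLExtras.h"

using namespace mlir;

// Fail up front if any TOSA op has an unranked tensor operand, so that
// every conversion pattern may assume ranked types.
static LogicalResult verifyAllTosaOpsRanked(Operation *root) {
  WalkResult result = root->walk([](Operation *op) {
    if (!isa_and_nonnull<tosa::TosaDialect>(op->getDialect()))
      return WalkResult::advance();
    bool allRanked = llvm::all_of(op->getOperandTypes(), [](Type type) {
      return !type.isa<UnrankedTensorType>();
    });
    return allRanked ? WalkResult::advance() : WalkResult::interrupt();
  });
  return failure(result.wasInterrupted());
}
```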

mlir/lib/Conversion/TosaToLinalg/TosaToLinalgNamed.cpp
347

Yes, I think that is missing too. This fits in with the above: we could require that guarantee wherever the pass is used and check/assert at the pass level, etc. Not a blocker here; my point was more that there is a large distance between definition and use, and we normally try to keep that small.

Seeing a failure when this is cherry-picked into IREE head. Do we know whether TensorReshape has a lowering path for dynamic shapes?

Their example contains dynamic shapes, so my assumption was that it's a valid use case. The issue might come from having more than one dynamic dimension, but I'm not entirely sure how to track that down.

jpienaar closed this revision.Jul 15 2022, 7:54 AM

Closing as we won't be pursuing this path.