Simplified reshape lowering. Adjusted the depthwise convolution and reduce lowerings to use expand/collapse instead.
Quite a lot of red :-)
Does i == r here?
Should we check if ranked first before querying rank?
adaptor.input1()? (The named accessor isn't much more descriptive)
And here i is outputRank?
Same question re checking rank. If you move this closer to its use, it would be after the shape check (though it seems resultTy is not checked for being static; it's assumed to be static if the input types are).
Small clarity fixes
If I'm understanding the question correctly, this seems like a clarity issue. If so, r might not be the clearest name either, so I changed it (lmk if I'm misunderstanding the intention of the comment :)
This seems like a larger problem in our lowering process (i.e. it applies to more than this op). Does it seem reasonable for us to simply not handle unranked tensors, even in cases like tosa.reshape where the spec doesn't explicitly require the tensor to be ranked?
In addition to the question above, is there a shape check I'm missing on the input? The weight and bias are checked, but the input itself should probably also be checked for being ranked, correct? (We should *theoretically* never run into an unranked input per the tosa spec, but as we've seen, that's not super reliable yet :)
Yes, and the other part was about reducing variable scope. Thanks
Yes, having a top-level check that makes this pass bail on unranked ops could be an option, and would be uniform. And we should definitely document this expectation on the pass. If we expect these to ever be interleaved and only allow partial lowering (e.g., only lowering ranked ops), then we have to do this check in every pattern; otherwise we can do it once at the pass level.
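To illustrate the pass-level option: the precondition could be a single walk that bails before any pattern runs. A minimal sketch using toy stand-in types (plain C++, not MLIR's actual API; all names here are hypothetical):

```cpp
#include <cassert>
#include <optional>
#include <vector>

// Toy stand-in for a tensor type: an unranked tensor has no shape at all.
struct TensorType {
  std::optional<std::vector<long>> shape;
  bool isRanked() const { return shape.has_value(); }
  long getRank() const {
    // Querying rank is only legal on ranked tensors.
    assert(isRanked() && "querying rank of an unranked tensor");
    return static_cast<long>(shape->size());
  }
};

// Toy stand-in for an op: just its operand types.
struct Op {
  std::vector<TensorType> operandTypes;
};

// Pass-level precondition: refuse to run if any operand anywhere is
// unranked, so the individual patterns can assume ranked inputs.
bool allOperandsRanked(const std::vector<Op> &ops) {
  for (const Op &op : ops)
    for (const TensorType &t : op.operandTypes)
      if (!t.isRanked())
        return false;
  return true;
}
```

With a check like this at pass entry, the individual patterns could assert rather than each re-checking rankedness.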
Yes, I think that check is missing too. This fits in with the above: we could require it wherever the pass is used and check/assert at the pass level, etc. Not a blocker here; my point was more that there is a large distance between definition and use, and we normally try to keep that small.
Their example contains dynamic shapes, so my assumption was that it's a valid use case. The issue might come from having more than one dynamic dimension, but I'm not entirely sure how to track that down.
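One concrete way the multiple-dynamic-dimension case bites: when expanding, a single unknown output size can be inferred from the element count, but two unknowns in the same expansion are underdetermined. A toy sketch of that inference (plain C++, hypothetical names, not the actual lowering code):

```cpp
#include <optional>
#include <vector>

constexpr long kDynamic = -1; // stand-in for a dynamic dimension marker

// Try to infer the single unknown size in `target`, given that the total
// number of elements is `numElements`. Returns nullopt when more than one
// dimension is dynamic: the split is ambiguous.
std::optional<std::vector<long>> inferShape(std::vector<long> target,
                                            long numElements) {
  long product = 1;
  int dynCount = 0, dynIndex = -1;
  for (int i = 0; i < static_cast<int>(target.size()); ++i) {
    if (target[i] == kDynamic) {
      ++dynCount;
      dynIndex = i;
    } else {
      product *= target[i];
    }
  }
  if (dynCount > 1)
    return std::nullopt; // e.g. (?, ?) from 12 elements: ambiguous
  if (dynCount == 1) {
    if (product == 0 || numElements % product != 0)
      return std::nullopt; // known sizes don't divide the element count
    target[dynIndex] = numElements / product;
  } else if (product != numElements) {
    return std::nullopt; // fully static shapes disagree
  }
  return target;
}
```

So a shape like (?, 4) from 12 elements resolves to (3, 4), but (?, ?) cannot be resolved, which may be where the reported failure comes from.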