Convenience op that allows for simple expression of the common crossing of the value/shape divide.
Details
- Reviewers: yaochengji
- Commits: rG2f025e0e78fd: [mlir][shape] Add dim op

Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
Could you add a test in ops.mlir for this new op?
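A test along these lines in ops.mlir might look like the following (a sketch only; the exact assembly syntax and result type depend on the final op definition in the patch):

```mlir
// Hypothetical round-trip test for the new shape.dim op; names and
// syntax are illustrative, not taken from the committed patch.
func @shape_dim(%arg : tensor<?x4xf32>) -> !shape.size {
  %idx = constant 1 : index
  %d = shape.dim %arg, %idx : tensor<?x4xf32>, index -> !shape.size
  return %d : !shape.size
}
```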
mlir/lib/Dialect/Shape/IR/Shape.cpp

- Line 1072: Do we need to consider more complicated situations here, where dim is not a constant-like op but could be folded into one?
- Line 1113: It should be `dim >= st.getRank()`. Furthermore, how do you plan to support negative dim indices?
Fix bound calculation
mlir/lib/Dialect/Shape/IR/Shape.cpp

- Line 1072: Yes, but not here. That would be done as part of analysis (probably the sparse dataflow analysis one; we are going to rebase the TOSA shape inference onto that framework), where one would do potentially expensive propagations and partial evaluations to get dims along with an inference context. (When we combine the analyses we can take advantage of SCCP's analysis state too and feed back into it, which will be nice.) Here it is just for the cheap/trivial case.
- Line 1113: The negative dim index convention is not used anywhere dim is used as an operand/runtime value. I'm guessing you are thinking of Python's convention with array slicing wrt usage, similar to split_at. It is used there as it's a fixed attribute, so one can't do index calculation; if dim were an attribute here, we'd do the same, but as an operand we keep it consistent with the rest of the system. (Which reminds me: we haven't added sub yet, as we didn't need it / used the arith one only as part of the lowered form.)
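The "cheap/trivial case" folding described above can be sketched in plain C++, outside the MLIR API (names like `foldDim` and `kDynamic` are illustrative stand-ins, e.g. for `ShapedType::kDynamic`, not the patch's actual code):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Sentinel for a dynamic (statically unknown) extent, standing in for
// ShapedType::kDynamic.
constexpr int64_t kDynamic = -1;

// Fold dim to a constant only when both the index and the selected
// extent are statically known; everything else is left to the more
// expensive dataflow analysis the review mentions.
std::optional<int64_t> foldDim(const std::vector<int64_t> &shape,
                               std::optional<int64_t> constantIndex) {
  if (!constantIndex)
    return std::nullopt; // index not a constant: nothing to fold here
  int64_t dim = *constantIndex;
  if (dim < 0 || dim >= static_cast<int64_t>(shape.size()))
    return std::nullopt; // out of bounds: leave for the verifier
  if (shape[dim] == kDynamic)
    return std::nullopt; // extent not statically known
  return shape[dim];
}
```

For example, folding succeeds for a static extent (`foldDim({2, kDynamic, 4}, 0)` yields 2) but bails out on a dynamic extent or a non-constant index.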
mlir/lib/Dialect/Shape/IR/Shape.cpp

- Line 1113: I guess we still need to check whether dim < 0 here, since neither ShapeSizeType nor IndexType limits the value to be non-negative.
mlir/lib/Dialect/Shape/IR/Shape.cpp

- Line 1113: Good point, will update soon. (And sorry, for some reason I didn't get a notification of your comment; addressing now.)
mlir/lib/Dialect/Shape/IR/Shape.cpp

- Line 1113: Tensor::DimOp also doesn't check this; that seems like a bug.