num_elements derives the number of elements from a given shape. The remaining operations help with conversion between the shape dialect and standard dialect types.
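As a rough illustration of what the patch adds, here is a hypothetical usage sketch; the op and type syntax follows the names discussed in this review and may differ from what finally landed:

```mlir
// Illustrative only: derive the element count of a shape and convert
// it to a standard `index` value for use outside the shape dialect.
func @element_count(%shape : !shape.shape) -> index {
  %num = shape.num_elements %shape
  %idx = shape.size_to_index %num
  return %idx : index
}
```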
Diff Detail
Repository: rG LLVM Github Monorepo

Event Timeline
mlir/lib/IR/Builders.cpp:57 ↗ (On Diff #264224)
can you split this cleanup of the core infra into a different patch?
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:175
These should be marked NoSideEffect and have folders and InferTypeOpInterface. See https://reviews.llvm.org/D79833 for an example of another op.
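A minimal ODS sketch of what this suggestion amounts to; the op name, types, and traits here are illustrative and not the exact definitions from this patch:

```tablegen
// Hypothetical ODS definition. NoSideEffect lets the op be CSE'd and
// dead-code-eliminated; hasFolder generates the fold() hook declaration;
// InferTypeOpInterface lets builders omit the (already known) result type.
def Shape_SizeToIndexOp : Shape_Op<"size_to_index",
    [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
  let arguments = (ins Shape_SizeType:$arg);
  let results = (outs Index:$result);
  let hasFolder = 1;
}
```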
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:175
I am not sure from_index is enough. In the IR, I first read it as if it creates a 1D shape with the provided number of elements. Maybe call this from_index_rank? Is there some special reason why we have shape.size instead of shape.rank?
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:175
Maybe index_to_size and size_to_index would better convey what this does. We could also have a single operation size_cast, in the spirit of the index_cast operation in standard, that allows both conversions. That would be more uniform and, if we ever get a type trait that allows using index_cast directly, it would be easier to rewrite.
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:175
@silvas NoSideEffect makes perfect sense, and I will also add the folders where possible. @pifon2a As far as I understand it, shape.size is used to describe not only the rank of a tensor but other sizes as well.
mlir/lib/IR/Builders.cpp:57 ↗ (On Diff #264224)
It's now in https://reviews.llvm.org/D80111
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:175
I would prefer to have two ops rather than a SizeCast that does everything. That way you can just do isa<SizeToIndexOp> and know what you have. Otherwise, you always have to do isa<SizeCastOp> && op.getResult().getType().isa<SizeType>() or other such nonsense. Personally, I think shape.size_from_index/shape.size_to_index reads best.

InferTypeOpInterface avoids needing to redundantly write the result type in the builder calls, since for these ops the result type is already known. (MLIR infra should be smarter about this and do it automatically, but currently that's not the case.)
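The matching-ergonomics argument can be sketched in C++; the op class names here are hypothetical, taken from this discussion rather than from the final patch:

```cpp
// Sketch only: with two dedicated ops, one isa<>/dyn_cast<> check tells
// you both the operation and the conversion direction.
if (auto toIndex = dyn_cast<SizeToIndexOp>(op)) {
  // Known statically: this converts !shape.size -> index.
}

// With a single SizeCastOp, every match must additionally inspect the
// result type to recover the direction of the conversion.
if (auto cast = dyn_cast<SizeCastOp>(op)) {
  if (cast.getResult().getType().isa<SizeType>()) {
    // This is the index -> !shape.size direction.
  }
}
```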
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:175
size_from_index / size_to_index works for me, too.
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:214
Corrected shape.type to shape.shape and fixed line breaks because the containing line was too long.
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:156
Could we keep these sorted?

mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:175
It is used to generate custom build and verify methods for ops (and pattern rewrites). In this case yes, but it is not yet plumbed through (there is a pending item in the shape inference doc about this, but it hasn't been implemented yet; the goal is to have all the shape constraint solving in one place so it can be reused). In general, these have been type constraints (e.g., Tensor<IntegerType> could be a case where one doesn't have an exact shape/type). We should make it easier/better, as Sean mentioned.
mlir/lib/Dialect/Shape/IR/Shape.cpp:312
It would seem that, for these, if (Attribute arg = operands[0]) return arg; is simpler.
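The suggested one-liner refers to the constant-folding hook; a hedged sketch of the full pattern follows, with the op name illustrative rather than taken verbatim from the patch:

```cpp
// Illustrative fold hook for a conversion op in this family.
// If the operand already folded to a constant attribute, forward it
// unchanged; otherwise report that no fold applies by returning {}.
OpFoldResult SizeToIndexOp::fold(ArrayRef<Attribute> operands) {
  if (Attribute arg = operands[0])
    return arg;
  return {};
}
```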
For future reference, this patch should probably be multiple separate patches:
- moving ConcatOp
- adding size_to_index and size_from_index
- adding num_elements
- tidying up docs of Shape_JoinOp, Shape_ReduceOp, and Shape_ConstSizeOp
- the change to StandardTypes.h
That helps keep each review focused, which results in much better review feedback. It also helps people retroactively working through the code history to understand the development process.
I would appreciate it if you did that for this patch already; I think there's more feedback to be gotten that is being masked by the mix of stuff happening in this patch. At the very least, the commit description needs to be updated to reflect the fact that it is doing all of this.
mlir/test/Dialect/Shape/canonicalize.mlir:119
For this specific test, you might want to specifically check that the value is returned directly (otherwise the test also passes if no transformation is made).

mlir/test/Dialect/Shape/canonicalize.mlir:128
You don't seem to use these captured FileCheck variables.
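A sketch of what "check that the value is returned directly" might look like with FileCheck; the function, op spellings, and capture names here are illustrative, not copied from the patch's test file:

```mlir
// CHECK-LABEL: func @fold_roundtrip
// CHECK-SAME: (%[[ARG:.*]]: !shape.size)
// Checking that the argument itself is returned proves the round-trip
// was folded away; a bare `CHECK: return` would pass even without it.
// CHECK-NEXT: return %[[ARG]] : !shape.size
func @fold_roundtrip(%arg : !shape.size) -> !shape.size {
  %idx = shape.size_to_index %arg
  %size = shape.index_to_size %idx
  return %size : !shape.size
}
```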
As suggested by @silvas, this revision is now split up into multiple ones:
- [MLIR] Cosmetic change: https://reviews.llvm.org/D80275
- [MLIR] Move ConcatOp to its lexicographic position: https://reviews.llvm.org/D80277
- [MLIR] Tidy up documentation for Shape_JoinOp, Shape_ReduceOp, and Shape_ConstSizeOp: https://reviews.llvm.org/D80278
- [MLIR] Add index_to_size and size_to_index to the shape dialect: https://reviews.llvm.org/D80280?vs=on&id=265167
- [MLIR] Add num_elements to the shape dialect: https://reviews.llvm.org/D80281
- [MLIR] Fix operand type in from_extent_tensor in the shape dialect: https://reviews.llvm.org/D80283