Add shape.shape_eq operation to the shape dialect.
The operation allows testing two shapes for equality.
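A minimal usage sketch of the proposed op in MLIR assembly; the exact textual form and the i1 result type follow the review discussion below and are assumptions, not necessarily the final spelling:

```mlir
// Hypothetical sketch: compare two shapes, yielding an i1.
func @shapes_equal(%lhs : !shape.shape, %rhs : !shape.shape) -> i1 {
  %eq = shape.shape_eq %lhs, %rhs : !shape.shape, !shape.shape
  return %eq : i1
}
```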
Repository: rG LLVM Github Monorepo
Event Timeline
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
- Line 141: Should it be SameOperandsAndResultType?
- Line 146: What about equality of errors?
- Line 152: Having two separate ops almost seems preferable (and is easier to isa/dyn_cast as well): shape.shape_eq %lhs, %rhs vs. shape.eq %lhs, %rhs : !shape.shape. There is more redundancy in the latter; see the side-by-side sketch below.
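For reference, the two spellings side by side as they would appear in the textual IR; both forms here are sketches taken from this discussion, not landed syntax:

```mlir
// Dedicated op: the name alone says it compares shapes.
%0 = shape.shape_eq %lhs, %rhs
// Overloaded op (hypothetical): the operand type must be repeated in the
// textual form, which is the redundancy noted above.
%1 = shape.eq %lhs, %rhs : !shape.shape
```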
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
- Line 152: Also, it is much easier to builder.create with two separate ops, since then the return type can be inferred: builder.create<shape::ShapeEqOp>(loc, lhs, rhs) vs. builder.create<shape::EqOp>(loc, builder.getType<shape::ShapeType>(), lhs, rhs).
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
- Line 141: D'oh, sorry, it's an i1 result! My bad! How is the error case going to be propagated then?
(We could also use shape.eq as the op for shapes, which is a bit shorter than shape.shape_eq; I personally like being a bit more explicit, but it's up to you what feels best naming-wise.)
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
- Line 152: D'oh, ignore that last comment. It returns i1.
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
- Line 145: This entire "unknown" concept is a bit of a mess. I think we are trying to model two different concerns here. One is the representation of partially constant values for constant folding and analysis; that is where "unknown" comes in. The other is the runtime behavior, which has no concept of "unknown". Maybe we should separate the two issues: describe only the runtime semantics of the operations in the docs, and add a separate description of the special encoding we use for constant folding. That would fix these issues. @jpienaar WDYT? "If any of the inputs is an error, then the result is undefined" works for me.
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
- Line 145: You are correct, we do. I think there is more to it than that: there are operations that express constraints and operations that perform computations. For the former this is useful; for the latter it is not, and expressing that consistently is probably less confusing. The operation being added here operates on shapes but expresses no constraint, and as such it does not need to operate in the constraint domain or make sense where errors or partially known values occur; e.g., this op should be operating on tensor<?xindex>. To express a constraint you'd have shape.join %0, %1 feeding into an assuming op (currently those only take a witness, but the valid bit of a shape serves the purpose of a witness, so operationally they make sense there to me); see the sketch below. If this operation only makes sense where no errors are possible, then it should only operate there, and so take tensor<?xindex> instead.
- Line 146: As mentioned in the dialect doc, "all functions that return a shape and take shapes as input, return the invalid shape if one of its operands is an invalid shape". This op cannot be executed in any situation where one of the shapes can be an error; it only makes sense post-verification/inside an assuming op. As such, I don't know if this op provides anything that an equality check on tensor<?xindex> doesn't.
- Line 152: The return type can be inferred in either case, as long as one has the requirement that the operand and result types match. The textual form is meant for debugging only; there is no more or less redundancy in the actual IR. This is still an IR, not a programming language.
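A rough sketch of the constraint-vs-computation split described in the line-145 comment, using the existing shape.cstr_eq and shape.assuming ops as the witness mechanism; the function, value names, and exact textual forms are illustrative assumptions, not code from this patch:

```mlir
func @eq_under_assumption(%a : !shape.shape, %b : !shape.shape,
                          %ta : tensor<?xindex>, %tb : tensor<?xindex>) -> i1 {
  // Constraint domain: record that the two shapes must agree; this yields
  // a witness rather than performing any computation.
  %w = shape.cstr_eq %a, %b
  // Computation domain: inside the assuming region no error is possible,
  // so (per the comment above) equality could operate directly on
  // error-free extent tensors such as tensor<?xindex>.
  %r = shape.assuming %w -> i1 {
    %eq = shape.shape_eq %ta, %tb : tensor<?xindex>, tensor<?xindex>
    shape.assuming_yield %eq : i1
  }
  return %r : i1
}
```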
mlir/include/mlir/Dialect/Shape/IR/ShapeBase.td
- Line 112 ↗ (On Diff #277430): Let's wait and discuss this more before we add support for both types in the same op. I don't think we have consensus on the direction for this.