LGTM modulo incorporating our decision from the error-case discussion on shape_eq.
Fri, Jun 26
LGTM modulo getting feedback from others on how to define the error cases.
It would be good to update https://mlir.llvm.org/docs/Dialects/LLVM/
Thu, Jun 25
LGTM, modulo splitting into two ops and a nit.
(We could also use shape.eq as the op for shapes, which is a bit shorter than shape.shape_eq; I personally like being a bit more explicit, but it's up to you what feels best naming-wise.)
LGTM modulo splitting into two separate ops.
Mon, Jun 22
Fri, Jun 19
Thu, Jun 18
Great! Thanks for these nicely factored patches!
LGTM modulo renaming the op (discussed in the dependent patch).
Wed, Jun 17
Tue, Jun 16
As per today's discussion, this pattern is going to "fight" with what we are doing in IREE (and also npcomp). Can we reserve this pattern for the shape-to-std-with-descriptors lowering?
Fri, Jun 12
Thu, Jun 11
Can you add documentation for what happens when the dimension index is out of bounds? Are we allowed to assume that doesn't happen?
Wed, Jun 10
What is the invariant established by this pass that other passes can rely on? It seems that this pass is needed for correctness in many cases, not just as an optimization, so it needs a well-specified contract that later passes can rely on.
Thanks! For some reason I was assuming we already had this, so seeing it turned on is great!
Tue, Jun 9
Jun 6 2020
Jun 5 2020
mlir/Dialect/Vector/CMakeLists.txt doesn't seem to need this?
@tpopp, can you PTAL at this? Surely we aren't the first to use DRR inside MLIR core, and I didn't find evidence that this CMakeLists.txt annotation was needed for them. Wild guess, but perhaps we could put ShapeCanonicalization.td in lib/?
Jun 4 2020
LGTM with one requested modification and a nit.
Looks great! Thanks!
Looks great! (modulo Jacques' comments)
May 27 2020
Nice! I'm really looking forward to this line of work unifying the adaptors with the main flow :)
May 26 2020
Awesome! Thanks so much for doing this :)
I feel like there must be some documentation somewhere that should be updated to describe this behavior?
May 21 2020
Update commit message.
May 20 2020
Looks like this has been sharded into a couple of other patches. Resigning from this master one, as I reviewed the others.
LGTM, but let's look into using a fold for this (the good thing is that all the test cases will carry over :) )
Awesome, thanks :)
See my comment in the other patch about using this in ShapeDialect::materializeConstant
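(For context, here is a minimal sketch of what that hook could look like; the builder calls and attribute types are illustrative guesses, not the actual ShapeDialect implementation:)

```cpp
// Sketch: dialect-level hook that lets the folding infrastructure
// materialize folded attributes back into constant ops.
Operation *ShapeDialect::materializeConstant(OpBuilder &builder,
                                             Attribute value, Type type,
                                             Location loc) {
  // Constant shapes are materialized as shape.const_shape ops.
  if (type.isa<ShapeType>())
    return builder.create<ConstShapeOp>(loc, type,
                                        value.cast<DenseIntElementsAttr>());
  // Constant sizes are materialized as shape.const_size ops.
  if (type.isa<SizeType>())
    return builder.create<ConstSizeOp>(loc, type, value.cast<IntegerAttr>());
  // Unknown type: decline to materialize.
  return nullptr;
}
```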
Thanks, this is a great cleanup! I happened to add this before tensor<index> was allowed, so I used i32, but that no longer makes sense.
LGTM, but please look into making these folds instead of canonicalization patterns. That should also make the code simpler, since the folding infrastructure will hand you the folded attributes pre-canned, without needing as many getDefiningOp<ConstShapeOp>() lookups.
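(As a rough illustration of the suggested approach: a fold hook receives the constant operand attributes directly from the infrastructure. The op, attribute kinds, and builder helper below are assumptions for the sketch, not the actual shape-dialect code:)

```cpp
// Sketch: folding a binary shape op when both operands are constant.
// The folding infrastructure supplies each operand's constant attribute
// (or null), so no getDefiningOp<ConstShapeOp>() lookups are needed.
OpFoldResult ConcatOp::fold(ArrayRef<Attribute> operands) {
  auto lhs = operands[0].dyn_cast_or_null<DenseIntElementsAttr>();
  auto rhs = operands[1].dyn_cast_or_null<DenseIntElementsAttr>();
  if (!lhs || !rhs)
    return nullptr; // Not both constant; nothing to fold.

  // Concatenate the two constant extent lists into one attribute.
  SmallVector<int64_t, 6> extents;
  extents.append(lhs.getValues<int64_t>().begin(),
                 lhs.getValues<int64_t>().end());
  extents.append(rhs.getValues<int64_t>().begin(),
                 rhs.getValues<int64_t>().end());
  Builder builder(getContext());
  return builder.getIndexTensorAttr(extents);
}
```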
May 19 2020
For future reference, this patch should probably be multiple separate patches:
- moving ConcatOp
- adding size_to_index and size_from_index
- adding num_elements
- tidying up the docs of Shape_JoinOp, Shape_ReduceOp, and Shape_ConstSizeOp
- the change to StandardTypes.h
May 18 2020
May 15 2020
May 13 2020
May 12 2020
May 11 2020
May 9 2020
Make canonicalizer a bit easier to read.