Diff Detail

- Repository: rG LLVM Github Monorepo

Event Timeline
Thanks for removing this huge foot gun!
@Mogball would you have a suggestion on the best approach for the apparent printing/parsing regression here?
The tricky bit is that this touches the generic parser/printer.
It is possible we have to make tosa tests use proper custom syntax.
The fact that the magic number goes from -1 to -9223372036854775808 is an annoying detail that should not distract from the much bigger footgun (e.g. we've seen people actually perform arithmetic that "just works" with -1...).
| mlir/python/mlir/dialects/_linalg_ops_ext.py | | |
|---|---|---|
| 42 ↗ | (On Diff #463527) | Let's avoid replacing a magic constant by another one; can we expose `ShapedType::kDynamicSize` to the C API and Python? |
| mlir/test/Conversion/TosaToLinalg/tosa-to-linalg.mlir | | |
|---|---|---|
| 562 | Could we improve printing/parsing everywhere the magic constant shifts? | |
| mlir/test/python/dialects/linalg/ops.py | | |
|---|---|---|
| 26–27 | Let's avoid replacing a magic constant by another one; can we expose `ShapedType::kDynamicSize` to the C API and Python? (here and below) | |
@kiranchandramohan this PR breaks flang. Could it be that there is a use of kDynamicSize instead of -1 somewhere?
I don't think there is a direct use of kDynamicSize in the code. ShapedType and RankedTensorType are used in a few places. I see that a lot of tests are failing at different places and due to different reasons including crashes.
CC: @jeanPerier, @clementval
| mlir/python/mlir/dialects/_linalg_ops_ext.py | | |
|---|---|---|
| 42 ↗ | (On Diff #463527) | FYI, they are already available in both C and Python. |
@clementval @kiranchandramohan @jeanPerier This PR will land at the end of the week. Could you please make sure that

```cpp
static constexpr Extent getUnknownExtent() { return -1; }
```

is updated?
I do not know how to compile/test flang correctly myself. The fix should be something like https://reviews.llvm.org/D135675; maybe it just works out of the box.
@khasanovaa Can you rebase the patch so we can check if D135675 fixed the pre commit checks?
@clementval @kiranchandramohan @jeanPerier, it looks like my patch did not actually solve all problems. Could you please take a look? It is quite an important PR.
With the following changes, I see that all tests pass, but I am not sure whether the replacements have been made in all the necessary places.
```diff
diff --git a/flang/include/flang/Optimizer/Dialect/FIRTypes.td b/flang/include/flang/Optimizer/Dialect/FIRTypes.td
index f179071f1943..0d06e1d118ea 100644
--- a/flang/include/flang/Optimizer/Dialect/FIRTypes.td
+++ b/flang/include/flang/Optimizer/Dialect/FIRTypes.td
@@ -128,7 +128,7 @@ def fir_CharacterType : FIR_Type<"Character", "char"> {
   static constexpr LenType singleton() { return 1; }

   /// Character has a LEN value which is not a compile-time known constant.
-  static constexpr LenType unknownLen() { return -1; }
+  static constexpr LenType unknownLen() { return mlir::ShapedType::kDynamicSize; }

   /// Character LEN is a runtime value.
   bool hasDynamicLen() { return getLen() == unknownLen(); }

diff --git a/flang/lib/Optimizer/Transforms/AffinePromotion.cpp b/flang/lib/Optimizer/Transforms/AffinePromotion.cpp
index ae152cf3a524..92f38d73790e 100644
--- a/flang/lib/Optimizer/Transforms/AffinePromotion.cpp
+++ b/flang/lib/Optimizer/Transforms/AffinePromotion.cpp
@@ -410,7 +410,7 @@ createAffineOps(mlir::Value arrayRef, mlir::PatternRewriter &rewriter) {
   auto affineApply = rewriter.create<mlir::AffineApplyOp>(acoOp.getLoc(),
                                                           affineMap, indexArgs);
   auto arrayElementType = coordinateArrayElement(acoOp);
-  auto newType = mlir::MemRefType::get({-1}, arrayElementType);
+  auto newType = mlir::MemRefType::get({mlir::ShapedType::kDynamicSize}, arrayElementType);
   auto arrayConvert = rewriter.create<fir::ConvertOp>(acoOp.getLoc(), newType,
                                                       acoOp.getMemref());
   return std::make_pair(affineApply, arrayConvert);
```