Previously, mlir-opt crashed on the following example:
mlir-opt --test-transform-dialect-interpreter Test.mlir
Test.mlir:
func.func @reduction_bug(%arg0: tensor<32x32xi32>, %arg1: tensor<32xi32>, %out: tensor<32xi32>) -> tensor<32xi32> {
  %red = linalg.generic {indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d2)>,
                                          affine_map<(d0, d1, d2) -> (d1)>,
                                          affine_map<(d0, d1, d2) -> (d0)>],
                         iterator_types = ["parallel", "parallel", "reduction"]}
    ins(%arg0, %arg1 : tensor<32x32xi32>, tensor<32xi32>)
    outs(%out : tensor<32xi32>) {
  ^bb0(%a: i32, %b: i32, %c: i32):
    %r2 = arith.addi %c, %a : i32
    linalg.yield %r2 : i32
  } -> tensor<32xi32>
  return %red : tensor<32xi32>
}

transform.sequence failures(propagate) {
^bb0(%arg1: !pdl.operation):
  %0 = transform.structured.match ops{["linalg.generic"]} in %arg1
  %1, %2, %3 = transform.structured.tile_reduction_using_scf %0 { tile_sizes = [0, 0, 8] }
}
The crash happened because the result tensor's rank was assumed to be exactly one less than the number of loops, which does not hold here: the result has rank 1 while the op has 3 loops. This differential makes the transform fail gracefully instead of crashing.
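In effect, the fix turns that assumption into a precondition check. A minimal C++ sketch of such a guard is shown below; the function name, exact message, and placement are illustrative and not the actual upstream change:

#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "mlir/IR/PatternMatch.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

// Hypothetical precondition check: reject ops whose result rank is not
// exactly numLoops - 1, reporting a match failure instead of asserting.
static LogicalResult checkReductionTilingPrecondition(RewriterBase &b,
                                                      linalg::LinalgOp op) {
  auto resultType = op->getResult(0).getType().dyn_cast<RankedTensorType>();
  if (!resultType ||
      resultType.getRank() != static_cast<int64_t>(op.getNumLoops()) - 1)
    return b.notifyMatchFailure(
        op, "expected the result rank to be exactly one less than the "
            "number of loops");
  return success();
}

With a guard of this shape, the example above produces a transform failure diagnostic rather than a crash.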