Differential D141413
[mlir][linalg] Fix incorrect reduction detection in Vectorizer
Authored by dcaballe on Jan 10 2023, 11:03 AM

Details

When detecting reductions, make sure the block argument is from the linalg generic op.

Co-authored-by: Andrzej Warzynski <andrzej.warzynski@arm.com>
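As a rough illustration of what the check amounts to (a minimal sketch with made-up names, not code taken from this diff), the idea is to accept a value as a reduction block argument only when its owning block is the body of the linalg op being vectorized:

```cpp
// Minimal sketch, not the actual patch: a value only qualifies as a
// reduction-carrying block argument if it is a BlockArgument owned by the
// body block of the linalg op being vectorized. A plain isa<BlockArgument>
// test would also accept arguments of enclosing regions, e.g. an scf.for
// induction variable.
#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "mlir/IR/Value.h"

// Hypothetical helper; the name and signature are illustrative only.
static bool isBlockArgOfLinalgOp(mlir::Value value,
                                 mlir::linalg::LinalgOp linalgOp) {
  auto blockArg = value.dyn_cast<mlir::BlockArgument>();
  if (!blockArg)
    return false;
  // The argument's owner block must live directly inside the linalg op.
  return blockArg.getOwner()->getParentOp() == linalgOp.getOperation();
}
```

In the repro further down, %arg0 is a block argument of the enclosing scf.for body, so a check along these lines rejects it, while the generic's own %out argument passes.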
Event Timeline

Thanks for the fix, @dcaballe! Here's a small repro. The key part is that %arg0 is defined outside of linalg.generic. Feel free to re-use this.

```mlir
func.func @pipeline_dispatch_1_generic_1080x1920(%input: tensor<120x64xf32>) -> tensor<120x64xf32> {
  %c0 = arith.constant 0 : index
  %c4 = arith.constant 4 : index
  %c64 = arith.constant 64 : index
  %cst_6 = arith.constant 4.000000e+00 : f32
  %1 = scf.for %arg0 = %c0 to %c64 step %c4 iter_args(%arg1 = %input) -> (tensor<120x64xf32>) {
    %extracted_slice = tensor.extract_slice %arg1[%c0, %arg0] [1, 4] [1, 1] : tensor<120x64xf32> to tensor<1x4xf32>
    %10 = linalg.fill {__internal_linalg_transform__ = "1"} ins(%cst_6 : f32) outs(%extracted_slice : tensor<1x4xf32>) -> tensor<1x4xf32>
    %11 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} outs(%10 : tensor<1x4xf32>) {
    ^bb0(%out: f32):
      %12 = linalg.index 0 : index
      %13 = arith.addi %arg0, %12 : index
      %18 = arith.index_cast %13 : index to i32
      %20 = arith.uitofp %18 : i32 to f32
      %67 = arith.mulf %out, %20 : f32
      linalg.yield %67 : f32
    } -> tensor<1x4xf32>
    %inserted_slice = tensor.insert_slice %11 into %arg1[%c0, %arg0] [1, 4] [1, 1] : tensor<1x4xf32> into tensor<120x64xf32>
    scf.yield %inserted_slice : tensor<120x64xf32>
  }
  return %1 : tensor<120x64xf32>
}
```

Could you add a note that this fixes https://github.com/iree-org/iree/issues/11779? Just to make cross-referencing easier. Ta!
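To make the failure mode concrete, here is a hypothetical C++ walk-through of the repro above (illustrative only, not code from the patch): it lists every BlockArgument used inside the linalg.generic body and reports whether the generic itself owns it.

```cpp
#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "llvm/Support/raw_ostream.h"

// Hypothetical diagnostic helper; the name is illustrative only.
static void reportBlockArgOwnership(mlir::linalg::GenericOp genericOp) {
  mlir::Block &body = genericOp->getRegion(0).front();
  for (mlir::Operation &op : body) {
    for (mlir::Value operand : op.getOperands()) {
      auto blockArg = operand.dyn_cast<mlir::BlockArgument>();
      if (!blockArg)
        continue; // results of other ops are not at issue here
      bool ownedByGeneric =
          blockArg.getOwner()->getParentOp() == genericOp.getOperation();
      // For the repro: %out (the ^bb0 argument) -> true, while %arg0 (the
      // scf.for induction variable) -> false and must not be treated as a
      // reduction argument.
      llvm::outs() << operand << " owned by generic: " << ownedByGeneric
                   << "\n";
    }
  }
}
```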
We have already been doing that :) (check git log). I think that referring to IREE GitHub issues is fine - that's just reflective of how strongly the two communities and projects overlap (MLIR and IREE). In fact, I should've probably raised my issue on https://github.com/llvm/llvm-project instead - my bad! In any case, I am happy with whatever you decide here.