Masking was already supported for linalg.index and n-D extract, but it was
disabled while waiting for some n-D extract vectorization patches to land.
This patch enables masking for both and adds a couple of tests.
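For illustration, this is roughly the shape of payload and transform-dialect driver involved (a minimal sketch modeled on the existing tests in this thread, not the exact tests added here; the function name is made up):

func.func @masked_vectorize_linalg_index(%arg0: tensor<3xindex>) -> tensor<3xindex> {
  %0 = linalg.generic {
    indexing_maps = [affine_map<(d0) -> (d0)>],
    iterator_types = ["parallel"]
  } outs(%arg0 : tensor<3xindex>) {
  ^bb0(%arg1: index):
    // The masking support enabled by this patch covers linalg.index ops
    // inside the vectorized body.
    %1 = linalg.index 0 : index
    linalg.yield %1 : index
  } -> tensor<3xindex>
  return %0 : tensor<3xindex>
}

transform.sequence failures(propagate) {
^bb1(%arg1: !pdl.operation):
  %0 = transform.structured.match ops{["linalg.generic"]} in %arg1 : (!pdl.operation) -> !pdl.operation
  // Vector size 4 > tensor size 3, so the generated reads/writes are masked.
  transform.structured.masked_vectorize %0 vector_sizes [4]
}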
Event Timeline
Thanks, this makes sense! I think you are missing a test with dynamic shapes, though. For example (feel free to re-use):
#map1 = affine_map<(d0, d1) -> (d0, d1)>

func.func @masked_vectorize_nd_tensor_extract_with_affine_apply_gather_dyn_shape(%arg0: tensor<?x?xf32>, %arg1: tensor<?x?xf32>) -> tensor<?x?xf32> {
  %c0 = arith.constant 1 : index
  %c1 = arith.constant 2 : index
  %2 = linalg.generic {
    indexing_maps = [#map1],
    iterator_types = ["parallel", "parallel"]
  } outs(%arg1 : tensor<?x?xf32>) {
  ^bb0(%arg3: f32):
    %21 = linalg.index 1 : index
    %3 = affine.apply affine_map<(d0, d1) -> (d0 + d1)>(%21, %c0)
    %7 = tensor.extract %arg0[%3, %c1] : tensor<?x?xf32>
    linalg.yield %7 : f32
  } -> tensor<?x?xf32>
  return %2 : tensor<?x?xf32>
}

// CHECK-LABEL: func.func @masked_vectorize_nd_tensor_extract_with_affine_apply_gather(
// CHECK-SAME:    %[[VAL_0:.*]]: tensor<80x16xf32>,
// CHECK-SAME:    %[[VAL_1:.*]]: index,
// CHECK-SAME:    %[[VAL_2:.*]]: tensor<1x3xf32>) -> tensor<1x3xf32> {
// CHECK:         %[[VAL_3:.*]] = arith.constant 16 : index
// CHECK:         %[[VAL_4:.*]] = arith.constant 0 : index
// CHECK:         %[[VAL_5:.*]] = arith.constant 0.000000e+00 : f32
// CHECK:         %[[VAL_6:.*]] = arith.constant 1 : index
// CHECK:         %[[VAL_7:.*]] = arith.constant 3 : index
// CHECK:         %[[VAL_8:.*]] = vector.create_mask %[[VAL_6]], %[[VAL_7]] : vector<1x4xi1>
// CHECK:         %[[VAL_9:.*]] = vector.mask %[[VAL_8]] { vector.transfer_read %[[VAL_2]]{{\[}}%[[VAL_4]], %[[VAL_4]]], %[[VAL_5]] {in_bounds = [true, true]} : tensor<1x3xf32>, vector<1x4xf32> } : vector<1x4xi1> -> vector<1x4xf32>
// CHECK:         %[[VAL_10:.*]] = arith.constant dense<[0, 1, 2, 3]> : vector<4xindex>
// CHECK:         %[[VAL_11:.*]] = vector.broadcast %[[VAL_1]] : index to vector<4xindex>
// CHECK:         %[[VAL_12:.*]] = arith.addi %[[VAL_10]], %[[VAL_11]] : vector<4xindex>
// CHECK:         %[[VAL_13:.*]] = arith.constant dense<true> : vector<1x4xi1>
// CHECK:         %[[VAL_14:.*]] = arith.constant dense<0.000000e+00> : vector<1x4xf32>
// CHECK:         %[[VAL_15:.*]] = arith.constant 0 : index
// CHECK:         %[[VAL_16:.*]] = vector.broadcast %[[VAL_12]] : vector<4xindex> to vector<1x4xindex>
// CHECK:         %[[VAL_17:.*]] = arith.constant 1 : index
// CHECK:         %[[VAL_18:.*]] = tensor.dim %[[VAL_0]], %[[VAL_17]] : tensor<80x16xf32>
// CHECK:         %[[VAL_19:.*]] = vector.broadcast %[[VAL_18]] : index to vector<1x4xindex>
// CHECK:         %[[VAL_20:.*]] = arith.muli %[[VAL_16]], %[[VAL_19]] : vector<1x4xindex>
// CHECK:         %[[VAL_21:.*]] = arith.constant dense<16> : vector<1x4xindex>
// CHECK:         %[[VAL_22:.*]] = arith.addi %[[VAL_21]], %[[VAL_20]] : vector<1x4xindex>
// CHECK:         %[[VAL_23:.*]] = vector.mask %[[VAL_8]] { vector.gather %[[VAL_0]]{{\[}}%[[VAL_15]], %[[VAL_15]]] {{\[}}%[[VAL_22]]], %[[VAL_13]], %[[VAL_14]] : tensor<80x16xf32>, vector<1x4xindex>, vector<1x4xi1>, vector<1x4xf32> into vector<1x4xf32> } : vector<1x4xi1> -> vector<1x4xf32>
// CHECK:         %[[VAL_24:.*]] = arith.constant 0 : index
// CHECK:         %[[VAL_25:.*]] = vector.mask %[[VAL_8]] { vector.transfer_write %[[VAL_23]], %[[VAL_2]]{{\[}}%[[VAL_24]], %[[VAL_24]]] {in_bounds = [true, true]} : vector<1x4xf32>, tensor<1x3xf32> } : vector<1x4xi1> -> tensor<1x3xf32>
// CHECK:         return %[[VAL_25]] : tensor<1x3xf32>
// CHECK:       }

transform.sequence failures(propagate) {
^bb1(%arg1: !pdl.operation):
  %0 = transform.structured.match ops{["linalg.generic"]} in %arg1 : (!pdl.operation) -> !pdl.operation
  transform.structured.masked_vectorize %0 vector_sizes [3, 3] { vectorize_nd_extract }
}
LGTM otherwise (I will be away for ~1 week, so please just go ahead and merge once a test with dynamic shapes is added).
Btw, the number of test cases in "vectorization.mlir" is growing fast. I'm tempted to extract the "tensor.extract" tests into a dedicated file. Or perhaps there's a better way to split it?
Added 2 more tests for dynamic shapes.
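For reference, a dynamic-shape payload for the linalg.index side could look along these lines (a sketch only; the committed tests may differ, and the function name and vector sizes are illustrative):

func.func @masked_vectorize_linalg_index_dyn_shape(%arg0: tensor<?x?xindex>) -> tensor<?x?xindex> {
  %0 = linalg.generic {
    indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>],
    iterator_types = ["parallel", "parallel"]
  } outs(%arg0 : tensor<?x?xindex>) {
  ^bb0(%arg1: index):
    %1 = linalg.index 1 : index
    linalg.yield %1 : index
  } -> tensor<?x?xindex>
  return %0 : tensor<?x?xindex>
}

transform.sequence failures(propagate) {
^bb1(%arg1: !pdl.operation):
  %0 = transform.structured.match ops{["linalg.generic"]} in %arg1 : (!pdl.operation) -> !pdl.operation
  // With dynamic shapes, the masks are built from tensor.dim values at runtime.
  transform.structured.masked_vectorize %0 vector_sizes [2, 4]
}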
> Btw, the number of test cases in "vectorization.mlir" is growing fast. I'm tempted to extract the "tensor.extract" tests into a dedicated file. Or perhaps there's a better way to split it?
I was thinking about moving all the masking tests to a separate file as well... I don't think there is a perfect way to split them. No strong opinion from my side!