This is an archive of the discontinued LLVM Phabricator instance.

[MLIR] Update Vector To LLVM conversion to be aware of assume_alignment
ClosedPublic

Authored by stephenneuendorffer on Apr 13 2021, 9:57 PM.

Details

Summary

vector.transfer_read and vector.transfer_write operations are converted
to llvm intrinsics with specific alignment information; however, there
doesn't seem to be a way in llvm to take the information from llvm.assume
intrinsics and update this alignment information. In any event, due to
the structure of the llvm.assume intrinsic, applying this information at
the llvm level is more cumbersome. Instead, let's generate the masked
vector load and store intrinsics with the right alignment information
from MLIR in the first place. Since we're bothering to do this, let's
just emit the proper alignment for load, store, scatter, and gather ops
too.

Diff Detail

Event Timeline

stephenneuendorffer requested review of this revision.Apr 13 2021, 9:57 PM
Herald added a project: Restricted Project. · Apr 13 2021, 9:57 PM
mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
124

Can we fold this into a:

LogicalResult getMemRefAlignment(LLVMTypeConverter &typeConverter,
                                 Value memRef, unsigned &align) {
  MemRefType memrefType = memRef.getType().cast<MemRefType>();
  
  // old code

  // your code
}

?

131

align = std::max(align, op.alignment()) (with a cast if needed) ?

aartbik added inline comments.Apr 14 2021, 11:53 AM
mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
124

+1, though probably with a name that reflects that:

getMemRefAlignmentAndUpdate()

or something like that
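
To make the folding and naming suggestion concrete, here is a rough sketch of what such a combined helper could look like. This is not the code that actually landed: the default alignment is simplified to the element size, and the memref::AssumeAlignmentOp accessor name and attribute type are approximated (hence the explicit cast mentioned above).

static LogicalResult getMemRefAlignment(LLVMTypeConverter &typeConverter,
                                        Value memRef, unsigned &align) {
  auto memRefType = memRef.getType().dyn_cast<MemRefType>();
  if (!memRefType)
    return failure();

  // Default (pre-patch) behavior, simplified here to the element size; the
  // real code would derive it from typeConverter's data layout.
  (void)typeConverter;
  align = memRefType.getElementTypeBitWidth() / 8;

  // New behavior: any memref.assume_alignment user of this value lets us
  // promise a larger alignment on the generated load/store intrinsic.
  for (Operation *user : memRef.getUsers())
    if (auto assumeOp = dyn_cast<memref::AssumeAlignmentOp>(user))
      align = std::max(align, static_cast<unsigned>(assumeOp.alignment()));
  return success();
}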

stephenneuendorffer edited the summary of this revision. (Show Details)
stephenneuendorffer marked 3 inline comments as done.May 19 2021, 12:20 AM

In incorporating your feedback, I decided to emit the proper alignment for other loads and stores too. Thoughts?

Looks good!

Now I'm wondering whether we should go for the LICM instead?

mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
124
If no assumed alignment

->

Return the minimal alignment value that satisfies all the AssumeAlignment uses of `value`.
If no such uses exist, return 0.
131

Thanks for re-spelling.
Now that I see it in this form, should this be licm instead of max ?

And by licm I mean lcm :p ..

stephenneuendorffer marked 3 inline comments as done.May 19 2021, 10:36 AM
stephenneuendorffer added inline comments.
mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
131

Probably.. I guess I was implicitly assuming everything was a power of 2.

stephenneuendorffer marked an inline comment as done.
mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
131

Sure enough: 'error: 'memref.assume_alignment' op alignment must be power of 2'
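
For what it's worth, since memref.assume_alignment only accepts power-of-two values, combining assumed alignments with max and with lcm gives the same result, so either spelling works here. A tiny standalone check of that property (plain C++17, independent of the MLIR code):

#include <algorithm>
#include <cassert>
#include <numeric>

int main() {
  // For two powers of two, one always divides the other,
  // so lcm(a, b) == max(a, b).
  for (unsigned a = 1; a <= 4096; a *= 2)
    for (unsigned b = 1; b <= 4096; b *= 2)
      assert(std::lcm(a, b) == std::max(a, b));
  return 0;
}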

This revision was not accepted when it landed; it landed in state Needs Review.May 19 2021, 10:51 AM
This revision was landed with ongoing or failed builds.
This revision was automatically updated to reflect the committed changes.
ThomasRaoux added inline comments.
mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
1555

How can we know that this store is aligned on 32B without knowing anything about %i and %j?
I'm running into a bug because the memref alignment is propagated to the associated load/store without considering the indices used for indexing.
I don't believe we can infer any kind of alignment without having alignment information about the indices.
@stephenneuendorffer, could you clarify how you expect this to work?

mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
1555

This one is clearly not aligned on 32B but it should be aligned on:

gcd(annotation + offset_alignment, most_minor_dim_alignment) == gcd(32 + 0 * 4, 100 * 4) == 16B

There does indeed seem to be a small bug somewhere, but I would prefer we fix it rather than flatly revert.

mehdi_amini added inline comments.Dec 1 2021, 12:09 AM
mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
1555

Why? Revert is conservative and cheap: it is trivial to reapply the patch with the fix without pressure.

nicolasvasilache added a comment.EditedDec 1 2021, 1:22 AM

Fair enough, my bias was against losing the contribution; didn't think about it your way, thanks!

ThomasRaoux added inline comments.Dec 1 2021, 6:53 AM
mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
1555

Yes, Stephen reverted it as a quick solution since this was causing a miscompile on the IREE side.
@nicolasvasilache, I don't understand how we can prove that this is aligned on 16B. If %j = 1 and %i = 0, for instance, we would be accessing 4B past the base, and since the base is 32B aligned we could only guarantee 4B alignment.
What am I missing?

mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
1555

You're right, I was very focused on the memref type itself and did not look deeper at the indexing.

What I meant to say is that the alignment at the start of each row of the memref is already degraded with respect to the base pointer.

More precisely:

@%memref[0, 0] is aligned at 32B
@%memref[1, 0] is aligned at 16B but not 32B
@%memref[2, 0] is aligned at 32B
...

the indexing within the row can indeed land on anything that is merely 4B-aligned.

But even worse, even @%memref[1, 32] is aligned at 16B and not 32B.
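
A small standalone computation of the bounds discussed here, assuming (as the 100 * 4 earlier suggests) rows of 100 f32 elements and a base pointer that is only known to be 32B aligned; the helper name is purely illustrative (plain C++17):

#include <cstdio>
#include <numeric>

// The best alignment provable for an access at byte offset
// i * rowBytes + j * elemBytes from a baseAlign-aligned pointer, with i and j
// unknown, is the gcd of all three quantities.
static unsigned provableAlignment(unsigned baseAlign, unsigned rowBytes,
                                  unsigned elemBytes) {
  return std::gcd(std::gcd(baseAlign, rowBytes), elemBytes);
}

int main() {
  const unsigned baseAlign = 32;      // from memref.assume_alignment
  const unsigned rowBytes = 100 * 4;  // 100 f32 elements per row
  // Row starts: base + i * 400 is 32B-aligned for even i but only 16B-aligned
  // for odd i, so an arbitrary row start can only be promised gcd(32, 400).
  std::printf("row start: %uB\n", std::gcd(baseAlign, rowBytes));   // 16
  // An arbitrary element: with unknown %i and %j, nothing beyond the 4B
  // element size can be promised, matching the observation above.
  std::printf("any element: %uB\n",
              provableAlignment(baseAlign, rowBytes, 4u));          // 4
  return 0;
}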

This is one of the important things to grasp when talking about memref and alignment: a memref is not a pointer.
An n-D memref is closer in principle to an:

(n-1)-D array of pointers to 1-D rows

(even if those pointers are never materialized)

Please ignore my earlier comment on the revert and sorry about the noise 😛