This is an archive of the discontinued LLVM Phabricator instance.

[SVE][CodeGen] Legalisation of masked loads and stores
ClosedPublic

Authored by kmclaughlin on Jul 3 2020, 10:05 AM.

Details

Summary

This patch modifies IncrementMemoryAddress to scale the increment
by vscale when calculating the new address if the data type is a scalable vector.

It also adds TableGen patterns that match an extract_subvector
of a legal predicate type with zip1/zip2 instructions.

Diff Detail

Event Timeline

kmclaughlin created this revision.Jul 3 2020, 10:05 AM

The patch overall looks good to me - just a question about the assert!

llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
7156

Do we know if this is something we catch earlier, and hence should never get here? I just wonder whether this assert really indicates that something went wrong in the code, or whether we've simply hit a case we don't support yet. If it's just unsupported, instead of asserting we could do:

if (DataVT.isScalableVector())
  report_fatal_error("Cannot currently handle compressed memory with scalable vectors");
sdesmalen added inline comments.Jul 7 2020, 1:59 AM
llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
7177

Given that the type of VScale will be AddrVT, it's clearer to use AddrVT.getSizeInBits().getFixedSize(), which avoids the types being different.

7178

Should this be using DataVT.getStoreSize() instead of DataVT.getSizeInBits()?

Changes to IncrementMemoryAddress:

  • Changed the assert added for scalable vectors to a report_fatal_error
  • Replaced Addr.getValueSizeInBits().getFixedSize() with AddrVT.getSizeInBits().getFixedSize()
  • Use DataVT.getStoreSize() instead of DataVT.getSizeInBits()
kmclaughlin marked 3 inline comments as done.Jul 7 2020, 11:10 AM
kmclaughlin added inline comments.
llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
7156

I think this is something we just don't support yet, so I've changed it to a report_fatal_error as suggested.

efriedma added inline comments.Jul 7 2020, 11:53 AM
llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
7156

This is part of the support for llvm.masked.expandload/llvm.masked.compressstore. There isn't a native instruction for that in SVE, but it's still a reasonable operation with scalable vectors.

llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
1112

Do we need to support extracting, for example, an nxv2i1 from an nxv16i1?

This revision is now accepted and ready to land.Jul 10 2020, 12:25 AM
kmclaughlin added inline comments.Jul 10 2020, 9:42 AM
llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
1112

We may need to support extracting an nxv2i1 from an nxv16i1, etc., at some point, though I don't believe there are any code paths that would require this just now. At least for the purposes of this patch, I think we just need the patterns where the index is either 0 or half the number of elements.

efriedma accepted this revision.Jul 10 2020, 2:23 PM

LGTM

llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
1112

We do have a DAGCombine for EXTRACT_SUBVECTOR of an EXTRACT_SUBVECTOR; it isn't triggering here for some reason. I guess that's okay.

Thanks for reviewing this patch, @efriedma & @david-arm

llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
1112

I think the reason the DAGCombine doesn't trigger here is that it checks that the operand of the extract has only one use, which isn't the case in this patch, as the original predicate will have multiple uses.

This revision was automatically updated to reflect the committed changes.