This is an archive of the discontinued LLVM Phabricator instance.

[RISCV] Introduce the RISCVMatInt::generateInstSeq helper
Closed, Public

Authored by asb on Oct 6 2018, 3:25 AM.

Details

Summary

Logic to load 32-bit and 64-bit immediates is currently present in
RISCVAsmParser::emitLoadImm in order to support the li pseudoinstruction. With
the introduction of RV64 codegen, there is greater benefit in sharing
immediate materialisation logic between the MC layer and codegen. The
generateInstSeq helper allows this by producing a vector of simple structs
representing the chosen instructions. This can then be consumed in the MC
layer to produce MCInsts, or at instruction selection time to produce the
appropriate SelectionDAG nodes. Sharing this logic means that both the li
pseudoinstruction and codegen can benefit from future optimisations, and
that this logic can be used for materialising constants during RV64 codegen.

This patch does contain one behaviour change: addi is now produced on RV64
when no lui is necessary to materialise the constant. In that case, addiw
would take x0 as its source register and so is semantically identical to addi.
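
To make the shape of the interface concrete, here is a minimal standalone sketch of the struct-plus-helper design the summary describes. It is not the patch's code: the names Inst, InstSeq and generateInstSeq mirror the description, but the simplified opcode enum, the use of std::vector rather than SmallVector, and the restriction to the isInt<32> path are assumptions made purely for illustration (the real helper also decomposes full 64-bit values using further SLLI/ADDI steps).

#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

enum Opcode { LUI, ADDI, ADDIW, SLLI }; // stand-ins for RISCV::LUI etc.

struct Inst {
  Opcode Opc;
  int64_t Imm;
  Inst(Opcode Opc, int64_t Imm) : Opc(Opc), Imm(Imm) {}
};
using InstSeq = std::vector<Inst>;

// Sign-extend the low Bits bits of Val.
static int64_t signExtend(uint64_t Val, unsigned Bits) {
  return (int64_t)(Val << (64 - Bits)) >> (64 - Bits);
}

void generateInstSeq(int64_t Val, bool IsRV64, InstSeq &Res) {
  assert(Val >= INT32_MIN && Val <= INT32_MAX && "sketch only handles isInt<32>");
  // Hi20 is adjusted so that LUI Hi20 plus the sign-extended Lo12 reconstructs Val.
  int64_t Hi20 = ((Val + 0x800) >> 12) & 0xFFFFF;
  int64_t Lo12 = signExtend((uint64_t)Val, 12);

  if (Hi20)
    Res.push_back(Inst(LUI, Hi20));
  if (Lo12 || Hi20 == 0) {
    // ADDIW is only needed after a LUI on RV64. When the source is x0 (no
    // LUI), plain ADDI is emitted on RV32 and RV64 alike, which is the
    // behaviour change described above.
    Opcode AddiOpc = (IsRV64 && Hi20) ? ADDIW : ADDI;
    Res.push_back(Inst(AddiOpc, Lo12));
  }
}

int main() {
  InstSeq Seq;
  generateInstSeq(0x12345678, /*IsRV64=*/true, Seq);
  // Expected with this sketch: { LUI, 0x12345 } then { ADDIW, 0x678 }.
  for (const Inst &I : Seq)
    std::printf("opc=%d imm=%lld\n", I.Opc, (long long)I.Imm);
}

The li expansion in the MC layer can then walk the returned sequence and build one MCInst per entry, while instruction selection can build the corresponding SelectionDAG nodes.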

Diff Detail

Repository
rL LLVM

Event Timeline

asb created this revision. Oct 6 2018, 3:25 AM

The helper makes this clearer to follow. Thanks!

I am just not sure why you made the change to prefer addi over addiw for 64-bit? Is addi faster than addiw?

asb added a comment. Oct 8 2018, 7:06 PM

The helper makes this clearer to follow. Thanks!

I am just not sure why you made the change to prefer addi over addiw for 64-bit? Is addi faster than addiw?

No real reason to prefer one or the other when they're semantically identical. It's mostly to be closer to gas/gcc which uses addi at least for the isInt<12> case.

psnobl added a subscriber: psnobl. Oct 9 2018, 12:44 AM

It is a bit off topic for this patch, but considering the range between best and worst case materialisation, is it worth adding an implementation of TargetTransformInfo::getIntImmCost for RISC-V? Or maybe adding a TargetTransformInfo implementation would be better as one larger patch, once codegen behaviour is relatively stable.

asb added a comment. Oct 9 2018, 3:57 AM

I'm trying to keep things separate where possible, but yes, we will want getIntImmCost, and as you point out, it's likely to be more useful for RV64. Ideally it would recognise that compressible code sequences are cheaper than non-compressible ones.
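
To make the idea concrete, here is a rough sketch reusing the standalone types from the sketch in the summary above. It is purely hypothetical: no RISC-V TargetTransformInfo implementation or getIntImmCost override exists in this patch, and a real hook would go through the TTI interface and return TargetCostConstants values rather than a raw instruction count.

// Hypothetical sketch: estimate the materialisation cost of an immediate as
// the length of the sequence generateInstSeq would emit. A refinement could
// charge less for sequences that remain compressible.
int getIntImmCostSketch(int64_t Imm, bool IsRV64) {
  if (Imm == 0)
    return 0; // x0 already holds zero, so materialising it is free.
  InstSeq Seq;
  generateInstSeq(Imm, IsRV64, Seq);
  return (int)Seq.size();
}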

sabuasal added a comment. Edited Oct 10 2018, 3:29 PM
In D52961#1258464, @asb wrote:

The helper makes this clearer to follow. Thanks!

I am just not sure why you made the change to prefer addi over addiw for 64-bit? Is addi faster than addiw?

No real reason to prefer one or the other when they're semantically identical. It's mostly to be closer to gas/gcc which uses addi at least for the isInt<12> case.

I see, that makes sense. I just checked gcc and it seems to favor addi for that case.

asb added a comment. Oct 11 2018, 5:52 AM

The other motivation is that many of our codegen tests are going to include both RV32 and RV64 check lines. It's distracting to see addiw for materialising constants when it's not necessary, and it represents an unnecessary diff vs the equivalent RV32 code.

This patch still applies cleanly against HEAD.

@niosHD, did you have any thoughts about this approach?

The patch looks good to me, thank you for working on it, Alex!

I like using ADDI instead of ADDIW when possible to get more consistent codegen output. However, the comment should be updated to reflect this change.

lib/Target/RISCV/Utils/RISCVMatInt.cpp
26 ↗(On Diff #168566)

The comment should be updated to reflect that we now always emit ADDI in the first two cases.

asb updated this revision to Diff 169524. Oct 12 2018, 5:10 PM
asb marked an inline comment as done.

Fixed out of date comment (thanks @niosHD).

asb added a comment. Oct 15 2018, 3:37 AM
In D52961#1258464, @asb wrote:

No real reason to prefer one or the other when they're semantically identical. It's mostly to be closer to gas/gcc which uses addi at least for the isInt<12> case.

I see, that makes sense. I just checked gcc and it seems to favor addi for that case.

Does this look good to you then?

sabuasal accepted this revision. Oct 15 2018, 11:57 AM
sabuasal edited reviewers, added: sabuasal; removed: sameer.abuasal.
This revision is now accepted and ready to land. Oct 15 2018, 11:58 AM
In D52961#1265204, @asb wrote:
In D52961#1258464, @asb wrote:

No real reason to prefer one or the other when they're semantically identical. It's mostly to be closer to gas/gcc which uses addi at least for the isInt<12> case.

I see, that makes sense. I just checked gcc and it seems to favor addi for that case.

Does this look good to you then?

Yes, no more comments on my side.

lewis-revill added inline comments. Oct 18 2018, 9:09 AM
lib/Target/RISCV/Utils/RISCVMatInt.cpp
66 ↗(On Diff #169524)

I noticed MathExtras.h has a function isMask_64, which essentially tells you whether a value is a non-zero run of 1s from the least significant bit up to some high bit. From this it's quite simple to implement the ADDI -1 + SRLI sequence described in D52962. Inserting the following code at this point would replace a sequence of at least 3 instructions, since Val does not fit in 32 bits here and Lo12 would be all ones:

// Find the special case of the value being a mask containing all ones from
// bit position 0 up to bit position N. In other words, a right shift of the
// constant -1. Uses only two instructions, ADDI+SRLI.
if (isMask_64((uint64_t)Val)) {
  Res.push_back(Inst(RISCV::ADDI, -1));

  // Val consists of findLastSet(Val) + 1 contiguous ones, so shift the
  // all-ones value right by the remaining leading-zero bit count.
  int ShiftAmount = 64 - (findLastSet((uint64_t)Val) + 1);
  Res.push_back(Inst(RISCV::SRLI, ShiftAmount));

  return;
}
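
As a worked example of this suggestion: for Val = 0x00000000ffffffff (32 contiguous ones), findLastSet returns 31, so the shift amount is 32 and the emitted sequence would be addi rd, x0, -1 followed by srli rd, rd, 32, two instructions in place of the three or more otherwise needed.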
asb added inline comments. Oct 18 2018, 9:51 AM
lib/Target/RISCV/Utils/RISCVMatInt.cpp
66 ↗(On Diff #169524)

I agree it's easy to add, but I wanted to perform this refactoring as a no-op with improvements going in further patches.

Plus handling this case is one of the tasks I walk people through in my RISC-V LLVM Coding Lab this afternoon :)

This revision was automatically updated to reflect the committed changes.