This is an archive of the discontinued LLVM Phabricator instance.

[AArch64] Enable runtime unrolling for in-order scheduling models
ClosedPublic

Authored by NickGuy on Mar 4 2021, 6:40 AM.

Details

Summary

This change also affects the "default" AArch64 compilation options: if
-mcpu is omitted, an in-order scheduling model is used.

Diff Detail

Event Timeline

NickGuy created this revision. Mar 4 2021, 6:40 AM
NickGuy requested review of this revision. Mar 4 2021, 6:40 AM
SjoerdMeijer added inline comments. Mar 4 2021, 6:50 AM
llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
980

Looks sensible to me that we do this (first) for smaller in-order cores.
From a quick look, quite a few other targets that implement this hook have:

// Scan the loop: don't unroll loops with calls as this could prevent
// inlining.

Do we need that too? Have you benchmarked this, and can we try this too?
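
For reference, the scan those targets perform looks roughly like this inside getUnrollingPreferences (a sketch modelled on the ARM implementation, not any target's exact code; L is the Loop* passed to the hook):

    // Scan the loop: don't unroll loops containing calls, as this could
    // prevent inlining. Intrinsics that aren't lowered to real calls are
    // fine to keep unrolling over.
    for (BasicBlock *BB : L->getBlocks()) {
      for (Instruction &I : *BB) {
        if (isa<CallInst>(I) || isa<InvokeInst>(I)) {
          if (const Function *F = cast<CallBase>(I).getCalledFunction())
            if (!isLoweredToCall(F))
              continue; // e.g. an intrinsic with no real call lowering.
          return; // Leave UP untouched; no runtime unrolling here.
        }
      }
    }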

Runtime unrolling is always going to be a bit of a heuristic, unfortunately. We don't usually know at compile time what the trip count will be, so we don't know whether we will end up making things worse.

If the loop has already been vectorized (and possibly interleaved), it will already have been somewhat unrolled. We may want to be more careful with cases like that and not runtime unroll a loop that has already been vectorized. We do the same thing for MVE, where it is more important due to the low register count and tail-predicated loops. But the same reasoning may well apply here too, where unrolling further just makes the loop body too large to be useful.
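
The MVE check referred to above boils down to reading the metadata the vectorizer leaves behind; a minimal sketch using LLVM's getBooleanLoopAttribute helper (the actual ARM code differs in its details):

    // If the loop carries llvm.loop.isvectorized, the vectorizer has
    // already widened (and possibly interleaved) it, so don't enable
    // further runtime unrolling on top.
    if (getBooleanLoopAttribute(L, "llvm.loop.isvectorized"))
      return;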

NickGuy updated this revision to Diff 328978. Mar 8 2021, 5:11 AM

I've done some more benchmarks with the suggestions provided:

Do we need that too? Have you benchmarked this, and can we try this too?

In the benchmark I ran with this change, I saw a slight regression (~0.01%), which, given the gains from unrolling, is negligible.

If the loop has already been vectorized (and possibly interleaved), it will already have been somewhat unrolled. We may want to be more careful with cases like that and not runtime unroll a loop that has already been vectorized. We do the same thing for MVE, where it is more important due to the low register count and tail-predicated loops. But the same reasoning may well apply here too, where unrolling further just makes the loop body too large to be useful.

With this change added onto the previous, I saw an improvement of ~0.4% over the unrestricted runtime unrolling.

This will still unroll the remainders of vectorized loops, which will be quite common but generally unhelpful to unroll. Can we try to prevent unrolling there too?

llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
1008

Should we be enabling partial unrolling too? If not, why not?

NickGuy updated this revision to Diff 333009. Mar 24 2021, 8:53 AM

This will still unroll the remainders of vectorized loops, which will be quite common but generally unhelpful to unroll. Can we try to prevent unrolling there too?

It seems to be difficult to identify the remainder loop; checking the llvm.loop.isvectorized attribute (as in ARMTargetTransformInfo) negated all of the gains and caused a slight regression compared to not having this change at all. I've instead included a check for the llvm.loop.unroll.disable attribute, which was present on the remainder loop IR. This check makes no difference to the benchmark numbers. (Though, thinking about it now, that might be because it is handled elsewhere, making this check redundant.)
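
A minimal sketch of the attribute check described here (the diff itself may be structured differently):

    // Remainder loops produced by the vectorizer carry
    // llvm.loop.unroll.disable, so bail out when it is present. As noted
    // above, the unroller already honours this metadata itself, which may
    // make this check redundant.
    if (getBooleanLoopAttribute(L, "llvm.loop.unroll.disable"))
      return;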

Should we be enabling partial unrolling too? If not, why not?

Enabling partial unrolling made no difference in the benchmark results, so I haven't enabled it here.

It seems to be difficult to identify the remainder loop; checking the llvm.loop.isvectorized attribute (as in ARMTargetTransformInfo) negated all of the gains and caused a slight regression compared to not having this change at all. I've instead included a check for the llvm.loop.unroll.disable attribute, which was present on the remainder loop IR. This check makes no difference to the benchmark numbers. (Though, thinking about it now, that might be because it is handled elsewhere, making this check redundant.)

Yeah, it doesn't seem worthwhile to check for something that the loop unroller will already be checking for. The vectorizer will unroll ("interleave") at the same time as vectorizing, and we can already produce loops that are very wide compared to the input, sometimes to the point that they are never executed and we are only running the remainders. Unrolling on top of that will not be beneficial in a lot of cases, but that will depend heavily on the trip counts of the loops.
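
As a hypothetical illustration of how wide such loops get: a loop vectorized with VF=4 and interleaved with IC=4 consumes 16 elements per vector iteration:

    unsigned I = 0;
    for (; I + 16 <= N; I += 16) {
      // ... four interleaved copies of the 4-wide vector body ...
    }
    // Scalar remainder: for trip counts below 16 this is the only loop
    // that executes, and runtime-unrolling it further only grows the code.
    for (; I < N; ++I) {
      // ... scalar body ...
    }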

The very quick benchmarks I ran didn't show this to be great because of all that extra unrolling, mostly of remainder loops by the look of it. I'm sure you have been running more benchmarks, and perhaps the ones here are not very representative of general A64 code.

Enabling partial unrolling made no difference in the benchmark results, so I haven't enabled it here.

Partial unrolling applies when the trip count is known at compile time and we unroll by a factor of it. I would be very surprised if it didn't come up in any of your testing.
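
A hypothetical example: with a compile-time trip count of 100, partially unrolling by a factor of 4 rewrites the loop as follows (Body stands in for the original loop body; since 4 divides 100 evenly, no remainder loop is needed):

    for (int I = 0; I < 100; I += 4) {
      Body(I);
      Body(I + 1);
      Body(I + 2);
      Body(I + 3);
    }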

NickGuy updated this revision to Diff 338482. Apr 19 2021, 5:11 AM

The very quick benchmarks I ran didn't show this to be great because of all that extra unrolling, mostly of remainder loops by the look of it. I'm sure you have been running more benchmarks, and perhaps the ones here are not very representative of general A64 code.

I've removed the check for the loop attribute altogether, as it seemed to do more harm than good in the majority of benchmarks I ran, and I've added some further tuning to get some extra performance. The options specified in this patch were the best all-round of those I tested, giving up to a 10% improvement in some benchmark suites.

Running the LLVM test suite with this change gave anywhere between a 0.35% and a 2.1% improvement, depending on the specific hardware it was tested on. Interestingly, the 2.1% gain was on an out-of-order core, indicating that these changes could be beneficial there too. However, I don't have any other numbers to hand to back that claim up, so I'll keep this patch scoped to in-order cores only.

Okay, that's a good result and this all looks sensible: you've run different benchmarks on different targets and observed the different performance uplifts. Looks like that is the best we can do.

Just one remaining question from my side. Can you remind me about the impact of this? I.e., if -mcpu is omitted, we default to generic, which is classified as, or is using, an in-order schedmodel description on Android? Was that it? It would be great to at least mention this in the description / commit message, or as a comment too. You could also consider leaving a TODO above the !isOutOfOrder check noting that this might be beneficial for bigger cores too.

NickGuy updated this revision to Diff 338524. Apr 19 2021, 8:01 AM
NickGuy edited the summary of this revision.

Can you remind me about the impact of this? I.e., if -mcpu is omitted, we default to generic, which is classified as, or is using, an in-order schedmodel description on Android?

Yep, that's right. Though I don't think it was specific to Android; it applies to AArch64 in general.

fhahn added a subscriber: fhahn. Apr 19 2021, 8:08 AM

Can you remind me about the impact of this? I.e., if -mcpu is omitted, we default to generic, which is classified as, or is using, an in-order schedmodel description on Android?

Yep, that's right. Though I don't think it was specific to Android; it applies to AArch64 in general.

So does this mean the new behavior will be the default if no CPU is specified? I'm not sure if we are ready for that yet, unless we are confident that the current heuristics work well for out-of-order cores too. (Last time I benchmarked this for out-of-order cores there were a few notable regressions, but it's been a few months since then)

llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
989

This comment seems out of date?

llvm/test/Transforms/PhaseOrdering/AArch64/hoisting-required-for-vectorization.ll
107 (On Diff #338524)

why do we need to disable unrolling here?

So does this mean the new behavior will be the default if no CPU is specified? I'm not sure if we are ready for that yet, unless we are confident that the current heuristics work well for out-of-order cores too. (Last time I benchmarked this for out-of-order cores there were a few notable regressions, but it's been a few months since then)

Do you remember where those regressions were, and how important these cases are? I can look into them with these changes to see if they help or hinder.

llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
989

Doesn't seem like it is to me. If a function call is found, it bails without enabling runtime unrolling. This check is also performed by many other implementations of this hook.

llvm/test/Transforms/PhaseOrdering/AArch64/hoisting-required-for-vectorization.ll
107 (On Diff #338524)

We don't need to disable it, but this test was ballooning in size when unrolling (as expected), while not testing unrolling itself. Without knowing exactly what it's testing for, I didn't want to change it.

fhahn added inline comments. Apr 19 2021, 9:19 AM
llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
989

If a function call is found,

That's correct, but you are also bailing out on other cases, like instructions with vector types, right?

llvm/test/Transforms/PhaseOrdering/AArch64/hoisting-required-for-vectorization.ll
107 (On Diff #338524)

I am not sure that's expected. It doesn't look like the loop in the function has a runtime trip count, unless I am missing something?

Can you remind me about the impact of this? I.e., if -mcpu is omitted, we default to generic, which is classified as, or is using, an in-order schedmodel description on Android?

Yep, that's right. Though I don't think it was specific to Android; it applies to AArch64 in general.

So does this mean the new behavior will be the default if no CPU is specified? I'm not sure if we are ready for that yet, unless we are confident that the current heuristics work well for out-of-order cores too. (Last time I benchmarked this for out-of-order cores there were a few notable regressions, but it's been a few months since then)

We probably have two options here:

  • Florian is right that we may need some more data on how much the bigger cores like this, although it seems you already have some;
  • Alternatively, or as a first step, we could introduce a target feature for this and set it for our smaller in-order cores, for which you have just determined this is a good thing.

I was under the impression that without -mcpu it defaulted to the cortex-a53 schedule. It looks like it's no-schedule though, which still counts as an in-order core, as it has no MicroOpBufferSize. Can we check that ST->getSchedModel().ProcID != 0, which will be the "NoSchedModel"?
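
A sketch of the suggested guard (MCSchedModel::isOutOfOrder() is defined as MicroOpBufferSize > 1, so a target with no scheduling model also reads as in-order):

    // Only treat the target as a known in-order core if it has a real
    // scheduling model (ProcID 0 is the generic NoSchedModel) and that
    // model is not out-of-order.
    const MCSchedModel &SchedModel = ST->getSchedModel();
    if (SchedModel.ProcID != 0 && !SchedModel.isOutOfOrder()) {
      UP.Runtime = true;
      // ... further unrolling preferences ...
    }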

llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
1007

"Force" doesn't sound like the right wording to me. It is enabling runtime/partial unrolling, which the loop unroller may use or not.

1010

Out-of-order cores will already enable runtime unrolling based on the MicroOpBufferSize; it will just have a reduced threshold. I don't think the TODO is useful to add.

llvm/test/Transforms/PhaseOrdering/AArch64/hoisting-required-for-vectorization.ll
107 (On Diff #338524)

It can still be partially unrolled.

I was under the impression that without -mcpu it defaulted to the cortex-a53 schedule. It looks like it's no-schedule though, which still counts as an in-order core, as it has no MicroOpBufferSize. Can we check that ST->getSchedModel().ProcID != 0, which will be the "NoSchedModel"?

Agreed with this. I.e., this is my current impression too. But we do need to get to the bottom of this to be sure. I seem to remember that this is the case for Android. So, could this be set in the driver?

NickGuy updated this revision to Diff 339229. Apr 21 2021, 7:26 AM
NickGuy marked 4 inline comments as done.
NickGuy retitled this revision from [AArch64] Force runtime unrolling for in-order scheduling models to [AArch64] Enable runtime unrolling for in-order scheduling models.

I was under the impression that without -mcpu it defaulted to the cortex-a53 schedule. It looks like it's no-schedule though, which still counts as an in-order core, as it has no MicroOpBufferSize. Can we check that ST->getSchedModel().ProcID != 0, which will be the "NoSchedModel"?

Instead of checking the ProcID (-mcpu=carmel returns a ProcID of 0), I've added a check against the processor family (through ST->getProcFamily()).
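
A sketch of the resulting guard (the committed code may differ in its details): getProcFamily() returns AArch64Subtarget::Others when no specific CPU is selected, so the default build is left alone, and the check no longer relies on ProcID, which is 0 for CPUs like carmel that lack a dedicated scheduling model:

    if (ST->getProcFamily() != AArch64Subtarget::Others &&
        !ST->getSchedModel().isOutOfOrder()) {
      UP.Runtime = true;
      UP.Partial = true;
      UP.UnrollRemainder = true;
      UP.DefaultUnrollRuntimeCount = 4; // Runtime unroll factor to try.
    }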

llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
989

Right, I misunderstood. I've updated it to mention that we also bail out on vector loops.

1007

Reworded both the commit message and this comment.

SjoerdMeijer added inline comments. Apr 21 2021, 7:54 AM
llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
1012

Great. Can we add some tests for this?

NickGuy updated this revision to Diff 339974. Apr 23 2021, 3:59 AM
NickGuy marked an inline comment as done.
SjoerdMeijer accepted this revision. Apr 23 2021, 5:12 AM

Nice one, looks good to me. Please wait a few days and commit this early next week, to give others an opportunity to comment.

This revision is now accepted and ready to land. Apr 23 2021, 5:12 AM
dmgreen accepted this revision. Apr 25 2021, 12:18 AM

Yeah, thanks. You didn't report them in very much detail here, but this seemed to be a general improvement across a wide range of benchmarks. Not universal, but a decent improvement. LGTM.