
[GVN] Introduce loop load PRE
Changes PlannedPublic

Authored by mkazantsev on Mon, Apr 5, 10:57 PM.



This patch allows PRE of the following type of loads:

  preheader:
    br label %loop

  loop:
    br i1 ..., label %merge, label %clobber

  clobber:
    call foo() // Clobbers %p
    br label %merge

  merge:
    %x = load %p
    br i1 ..., label %loop, label %exit

into:

  preheader:
    %x0 = load %p
    br label %loop

  loop:
    %x.pre = phi(%x0, %x2)
    br i1 ..., label %merge, label %clobber

  clobber:
    call foo() // Clobbers %p
    %x1 = load %p
    br label %merge

  merge:
    %x2 = phi(%x.pre, %x1)
    br i1 ..., label %loop, label %exit

So instead of loading from %p on every iteration, we load only when the actual clobber happens.
The typical pattern this is trying to address is: a hot loop, with all code inlined and provably
free of side effects, and some side-effecting calls on a cold path.

The worst-case overhead is when the clobber block is taken on every iteration: we then execute
one extra load overall (the one in the preheader). This only matters if the loop has very few
iterations. If the clobber block is skipped at least once, the transform is neutral or profitable.

This opens up several prospective improvements:

  • We can sometimes be smarter in loop-exiting blocks by splitting critical edges;
  • If we have block frequency info, we can handle multiple clobbers. The only obstacle now is that we don't know if their sum is colder than the header.

Diff Detail

Unit Tests: Failed

30 ms · x64 debian > LLVM.Transforms/GVN/PRE::pre-aliasning-path.ll
Script: -- : 'RUN: at line 2'; /mnt/disks/ssd0/agent/llvm-project/build/bin/opt -basic-aa -enable-load-pre -enable-pre -gvn -S < /mnt/disks/ssd0/agent/llvm-project/llvm/test/Transforms/GVN/PRE/pre-aliasning-path.ll | /mnt/disks/ssd0/agent/llvm-project/build/bin/FileCheck /mnt/disks/ssd0/agent/llvm-project/llvm/test/Transforms/GVN/PRE/pre-aliasning-path.ll
50 ms · x64 debian > LLVM.Transforms/GVN/PRE::pre-loop-load.ll
Script: -- : 'RUN: at line 2'; /mnt/disks/ssd0/agent/llvm-project/build/bin/opt -basic-aa -enable-load-pre -enable-pre -gvn -S < /mnt/disks/ssd0/agent/llvm-project/llvm/test/Transforms/GVN/PRE/pre-loop-load.ll | /mnt/disks/ssd0/agent/llvm-project/build/bin/FileCheck /mnt/disks/ssd0/agent/llvm-project/llvm/test/Transforms/GVN/PRE/pre-loop-load.ll

Event Timeline

mkazantsev created this revision.Mon, Apr 5, 10:57 PM
mkazantsev requested review of this revision.Mon, Apr 5, 10:57 PM
Herald added a project: Restricted Project.Mon, Apr 5, 10:57 PM
mkazantsev edited the summary of this revision. (Show Details)Mon, Apr 5, 10:57 PM
lkail added a subscriber: lkail.Mon, Apr 5, 11:50 PM
reames added a comment (edited).Thu, Apr 8, 12:42 PM

Just for context, I'd explored a very similar transform before. One really key difference between the prior attempt and this one is that previously I hadn't explicitly handled loops and instead tried to match this from the original IR. I don't think loop info was available at the time. Though, looking at the current code, it looks like LoopInfo is optional for the pass even now.

One other key detail that has changed is that we now support speculation and don't necessarily have to prove anticipation (which the earlier change struggled with).

In general, I think the new approach is much more likely to be successful than the original since we're solving a subset of the problems.

Code-structure-wise, I want to suggest we don't try to shove this new transformation into the existing performLoadPRE codepath. Several of the concerns of that code (e.g. address translation) don't apply to the loop case, and you have at least one bug (whether we need to check speculation safety) because of trying to reuse the code.

I'd suggest instead that you split out the last third or so of that function into a helper which blindly performs the insertion and replacement, and then implement a second performLoopLoadPRE entry point which checks the appropriate legality for the new transform.

I also seriously question whether this is worth doing in old-GVN at all. The only infrastructure you actually need for this is memory aliasing and speculation safety. I'd seriously suggest writing a standalone pass which uses MemorySSA and ValueTracking, and maybe reuses the extracted helper function mentioned above.

mkazantsev updated this revision to Diff 336335.Fri, Apr 9, 1:07 AM
mkazantsev retitled this revision from [GVN] Introduce loop load PRE to [GVN] Introduce loop load PRE (WIP).

I'm pretty sure the availability problem you are referencing does not exist. See the last 2 tests with guards.

As for the refactoring, I'm going to do it. Putting WIP in the patch title for now.

mkazantsev retitled this revision from [GVN] Introduce loop load PRE (WIP) to [GVN] Introduce loop load PRE.Fri, Apr 9, 1:30 AM

Looking more into the code, I don't think that loop PRE needs any other legality checks than what we have now.

mkazantsev updated this revision to Diff 336440.Fri, Apr 9, 6:51 AM

Split the code out into a separate method. Haven't figured out yet how to make it a separate pass with MemorySSA, but I think having it in GVN won't harm.

reames requested changes to this revision.Tue, Apr 13, 12:11 PM

Comments inline include one serious correctness issue.

This is much cleaner than the original patch. I was initially hesitant to take this at all - as opposed to using MemorySSA or NewGVN - but with the new structure this looks a lot less invasive.


Extend this comment to emphasize that this means we have proven the load must execute if the loop is entered, and is thus safe to hoist to the end of the preheader without introducing a new fault. Similarly, the in-loop clobber must be dominated by the original load and is thus fault safe.

Er, hm, there's an issue here I just realized. This isn't sound. The counter example here is when the clobber is a call to free and the loop actually runs one iteration.

You need to prove that LI is safe to execute in both locations. You have multiple options in terms of reasoning; I'll let you decide which you want to explore: speculation safety, must-execute, or unfreeable allocations. The last (using allocas as an example in the tests) might be the easiest.


Tweak this comment a bit to emphasize that this ensures the new load executes at most as often as the original, and likely less often.


I don't understand this restriction. Why is a switch not allowed?


I don't think this loop does what you want, except maybe by accident. You allowed blocks outside the loop; as a result, you can end up with a bunch of available addresses and a bunch of loads before the preheader. These will likely be DCEd later, since the preheader load will be the one actually used by SSA construction.

I strongly suspect you want exactly two available load locations: preheader, and your one in-loop clobber block.

This revision now requires changes to proceed.Tue, Apr 13, 12:11 PM
mkazantsev added inline comments.Tue, Apr 13, 8:17 PM

Free on the last iteration (the loop may have multiple though) is a nasty case indeed...


This was originally protection against invokes. Switches are allowed; will fix.


Yes, this check was lost during the refactoring. I'm pretty sure that eliminatePartiallyRedundantLoad will deal with it correctly, but it's at least not obvious. Thanks for catching.

Addressed comments, fixed bug with

I'm planning to add support for pointers based on D99135, maybe as a follow-up or on top of this.

mkazantsev planned changes to this revision.Thu, Apr 15, 11:50 PM
reames requested changes to this revision.Fri, Apr 16, 1:41 PM

Looks close to ready for an LGTM if you're willing to split patch as suggested.


continue the comment with something like:
"because we need a place to insert a copy of the load".

p.s. I'm fine with this in the initial patch, but you really should be using an alias check here, as the trailing invoke might not alias the memory being PREed. Would make a good follow-up patch.


You can generalize the first check as !LoadPtr->canBeFreed()


This last check is incorrect. Counter example:

  for (i = 0; i < 1; i++) {
    v = o.f
    if (c) {
      // this is loop block
      atomic store g = o;
      while (/* wait for other thread to free */) {}
    }
  }

Can I ask you to pull this into a separate patch? (e.g. handle only the first two cases in this patch, and come back to the third in a follow on.)


Style: hasFnAttribute(AttributeKind::NoFree) handles both of these cases. (Or will once D100226 lands).