This is an archive of the discontinued LLVM Phabricator instance.

Add Load Combine Pass
ClosedPublic

Authored by Bigcheese on Apr 30 2014, 7:52 PM.

Details

Summary

This combines adjacent loads if no writes to memory or atomic ops occur between them.

This was motivated by us not generating a bswap for:

static inline uint64_t LoadU64_x8( const uint8_t* pData )
{
    return
        (((uint64_t)pData[0]) << 56) |
        (((uint64_t)pData[1]) << 48) |
        (((uint64_t)pData[2]) << 40) |
        (((uint64_t)pData[3]) << 32) |
        (((uint64_t)pData[4]) << 24) |
        (((uint64_t)pData[5]) << 16) |
        (((uint64_t)pData[6]) << 8) |
        ((uint64_t)pData[7]);
}
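
For reference, the shape we would like the optimizer to reach here is a single (possibly unaligned) 8-byte load followed by a byte swap. A minimal hand-written sketch of that form, assuming a little-endian target and the GCC/Clang __builtin_bswap64 builtin (illustrative only, not part of the patch):

#include <cstdint>
#include <cstring>

static inline uint64_t LoadU64_x8_combined(const uint8_t *pData) {
  uint64_t v;
  std::memcpy(&v, pData, sizeof(v)); // single 8-byte load, alignment 1
  return __builtin_bswap64(v);       // pData[0] ends up in the top byte, as above
}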

Diff Detail

Repository
rL LLVM

Event Timeline

Bigcheese updated this revision to Diff 9001.Apr 30 2014, 7:52 PM
Bigcheese retitled this revision from to [InstCombine] Combine adjacent i8 loads..
Bigcheese updated this object.
Bigcheese edited the test plan for this revision. (Show Details)
Bigcheese added reviewers: chandlerc, atrick.
Bigcheese added a subscriber: Unknown Object (MLST).May 1 2014, 3:43 PM

Adding llvm-commits to CC.

chandlerc edited edge metadata.May 1 2014, 3:52 PM

I think I would prefer to implement this as a standalone pass until we have
some clear phase ordering need for it in a utility like instcombine.

My suggestion: a pass that runs right after GVN.

I would also use the domtree rather than the basic block structure.

I think it really needs to handle arbitrary sizes, and both loads and
stores. You don't want to scan multiple times for this.

But after thinking all of this, it strikes me that the work done here is
*very* similar to the work done by the SLP vectorizer for loads and stores.
I think there is at least the possibility that this should be done by the
SLP vectorizer, or that some of the logic should be shared. I'd appreciate
Arnold's, Raul's, Nadav's, and Hal's thoughts on this idea.

I agree this combining can in theory be done as part of SLP vectorization,
targeting wider scalar registers as a limited form of vector hardware,
supporting wider loads/stores and some bitwise operations. That would
likely catch more cases than what can be done with a small standalone pass.

Raúl E Silvera | SWE | rsilvera@google.com | 408-789-2846

atrick edited edge metadata.May 1 2014, 10:08 PM

I agree with Chandler that this should only be done once, late in the pipeline (post GVN). I am also concerned that if this runs before SLP vectorizer it will interfere with it. I'd like to get Arnold's comments on this.

Load combining should probably be done in a single pass over the block. First collect all the offsets, then sort, then look for pairs. See the LoadClustering stuff in MachineScheduler. Your RangeLimit skirts around this problem, but I don't think the arbitrary threshold is necessary.
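
A rough sketch of that single-pass scheme, with made-up helper types (illustrative only, not the patch's code): walk the block once collecting each load's byte offset from a common base, sort by offset, then pair up entries that are adjacent in memory.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

struct LoadEntry {
  void *Inst;      // the load instruction (opaque placeholder here)
  int64_t Offset;  // byte offset from a shared base pointer
  uint64_t Size;   // store size of the loaded type, in bytes
};

// Return index pairs (i, i+1) whose loads are adjacent in memory.
static std::vector<std::pair<size_t, size_t>>
findAdjacentLoads(std::vector<LoadEntry> &Loads) {
  std::sort(Loads.begin(), Loads.end(),
            [](const LoadEntry &A, const LoadEntry &B) {
              return A.Offset < B.Offset;
            });
  std::vector<std::pair<size_t, size_t>> Pairs;
  for (size_t I = 0; I + 1 < Loads.size(); ++I)
    if (Loads[I].Offset + (int64_t)Loads[I].Size == Loads[I + 1].Offset)
      Pairs.push_back(std::make_pair(I, I + 1));
  return Pairs;
}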

Doing this per basic block is ok. Although there's no reason you can't do it almost as easily on an extended basic block (side exits ok, no merges). Chandler said do it on the domtree, but handling CFG merges would be too complicated and expensive.

Did you forget to check for Invokes?

Conceptually this is doing SLP vectorization, but it doesn't fit with our SLP algorithm, which first finds a vectorizable use-def tree. Sticking it in GVN is another option, but again I'm concerned about it running before SLP. Maybe it can run in the SLP pass after the main algorithm.

In my first comments, I didn't touch on the motivation for this. Can an argument be made that this load combining is generally profitable, or are you just trying to match BSWAP?

atrick:

Load combining should probably be done in a single pass over the block. First collect all the offsets, then sort, then look for pairs. See the LoadClustering stuff in MachineScheduler. Your RangeLimit skirts around this problem, but I don't think the arbitrary threshold is necessary.

Doing this per basic block is ok. Although there's no reason you can't do it almost as easily on an extended basic block (side exits ok, no merges). Chandler said do it on the domtree, but handling CFG merges would be too complicated and expensive.

I agree this should be a basic block pass. The problem with multiple basic blocks is that you cannot introduce a load on a path where it would not otherwise have been executed; that could introduce race conditions. So if all paths already have the load, sure, but I'm not sure how expensive that is to calculate. I also believe we would have already hoisted the load into the parent BB.
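
A small illustration of that hazard (hypothetical code, not from the patch): if only one path dereferences p[1], hoisting a combined 2-byte load above the branch would read p[1] on a path that never touched it, which can race with a concurrent writer (or fault if p[1] is not dereferenceable).

#include <cstdint>

uint16_t loadMaybeTwoBytes(const uint8_t *p, bool both) {
  uint16_t v = p[0];                           // always loaded
  if (both)
    v = (uint16_t)(v | ((uint16_t)p[1] << 8)); // p[1] loaded only on this path
  return v;                                    // combining p[0..1] here is unsafe
}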

Did you forget to check for Invokes?

Invokes are terminator instructions. There won't be any loads in the same basic block after one.

Conceptually this is doing SLP vectorization, but it doesn't fit with our SLP algorithm, which first finds a vectorizable use-def tree. Sticking it in GVN is another option, but again I'm concerned about it running before SLP. Maybe it can run in the SLP pass after the main algorithm.

In this case wouldn't it be just as easy to make it a separate pass that runs after SLP?

Nadav:

Michael described the motivation for his patch by showing the LoadU64_x8 function and how it loads multiple scalars from memory just to combine them into a single word. If this function represents the kind of function that he wants to handle, then I think that we should consider implementing this in SelectionDAG. We already merge consecutive loads and stores in SelectionDAG.

Problems like this should be local to one basic block, so the fact that we are working on a single basic block should not be a problem. At the SelectionDAG level we should have enough information to make informed decisions about the profitability of merging loads/stores. I can’t imagine propagating all of the relevant information into IR land.

The SLP vectorizer is designed to create vectors, and teaching the SLP vectorizer to handle non-vector types is not a good idea. Also, generating vectors for code that should obviously stay scalar is not ideal.

Michael, was I correct about the kind of problems that you are trying to solve? Have you considered SelectionDAG? I think MergeConsecutiveStores is a good place to look.

My specific reason for looking into this was the stated problem, but I believe there are lots of other cases that can benefit from this. The main reason I didn't want to do it in SDAG is because we don't do bswap recognition in SDAG, and I'd rather not have multiple implementations of it. Another issue is the inliner, bswap is generally a great thing to inline, but if we only do this in SDAG we may choose not to.

There are four parts to the problem of widening chains when viewed from the SLP vectorizer's perspective (staying in vector types):

  • Recognizing adjacent memory operations: this is obviously similar
  • Widening operations: we would widen only the load in this example
  • Building chains: there is no real chain of widened operations here: only the load is widened. One could imagine examples where we perform operations on the loaded i8 type before we build the i64.
  • Finding starting points of chains: this would be to recognize reduction into the wider type in this example.

If we were to model the example given in the patch in the SLP vectorizer, I think we would (I changed the example to i32 so there is less to type) recognize that

(I64 OR (I64 << (SEXT (I32 ...) to I64), 0)
        (I64 << (SEXT (I32 ...) to I64), 32))

is a reduction into an I64 value which we can model as <2 x i32>:

(I64 CAST (<2 x i32> SHUFFLE (...)))

And then start a chain from the <2 x i32> (...) root which in this case would only be a <2 x i32> load (or in the real example a <8 x i8> load).

Do we expect that there would be longer chains that would benefit from widening and that would start off such a pattern? I.e., do we expect to be able to do some isomorphic operations in the smaller type (i8, or i32 in my example) before the reduction? If the answer is yes, I think it makes sense to think about doing this in the SLP vectorizer.

I am not sure that CodeGen deals well with such contortions, though: (I64 CAST (<8 x i8> SHUFFLE (<8 x i8> LOAD))) => (I64 (BSWAP (I64 LOAD)))? That could be fixed. What does our cost model say about such operations?

Teaching the SLP vectorizer to widen scalar types is a whole different complexity beast (I am not sure we want to model lanes of smaller types in a large type without using vectors).

I don't think the above transformation (building a value of a bigger type from a smaller type) is going to interfere with regular SLP vectorization because we start bottom-up (sink to source) and we don't have patterns that would start at an "or reduction" (bigger than 2 operations). If we implement a second transformation that starts from loads and widens operations top-down, then, I agree with Andy, we would have to be careful about phase ordering.

If, however, all we want to catch is bswap, then this feels like a DAG combine to me (with the gotcha of losing analysis information during lowering mentioned below). But it seems to me there is potential to catch longer chains leading to the loads.

Doing this at the IR level has the benefit that our memory analysis (BasicAA) is better in the current framework. Inlining can cause us to lose information about aliasing (lost noalias parameters, we should really fix this :), Hal had a patch but I digress ...).

hfinkel added a subscriber: hfinkel.May 2 2014, 9:47 AM

I'll point out that on PPC we have byte-swapped loads, and we currently handle this in CodeGen using a target-specific DAG combine. We recognize a LOAD+BSWAP and BSWAP+STORE pair and produce a target-specific node for the desired instruction. This does not, however, handle the case where the loads are combinable. However, we already have an optimization in DAGCombine that is supposed to combine consecutive loads, DAGCombiner::CombineConsecutiveLoads (maybe it requires AA to be active in DAGCombine to work optimally, but I normally turn that on for PPC, and I think that it should be safe in general). Perhaps it just needs to be called in the right place to work for this input?

Doing this at the IR level has the benefit that our memory analysis (BasicAA) is better in the current framework. Inlining can cause us to lose information about aliasing (lost noalias parameters, we should really fix this :), Hal had a patch but I digress ...).

Yes, I'll be getting back to this quite soon.

Bigcheese updated this revision to Diff 9145.May 6 2014, 8:53 PM
Bigcheese retitled this revision from [InstCombine] Combine adjacent i8 loads. to Add Load Combine Pass.
Bigcheese updated this object.
Bigcheese edited edge metadata.

Moved this out to a separate pass that runs after the SLP vectorizer. Still need to do a test-suite run.

This combines some loads that probably shouldn't be combined. I believe the proper fix here is to split (trunc (shr (load ...), <multiple of 8>), n) in SDAG for targets where the hardware will do load combining. This still exposes the optimization opportunity without regressing anything.

Bigcheese updated this object.May 7 2014, 1:41 AM

This pass will create unaligned loads. You'll need to use TTI (enhanced if necessary) to check whether or not the target supports unaligned loads of the required type, by calling TLI->allowsUnalignedMemoryAccesses through TTI and making sure such an access is considered "fast". Otherwise, the code generator will end up breaking the load apart again and will likely generate worse code overall.
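
A sketch of that gating check, assuming the 2014-era TargetLowering hook named above (allowsUnalignedMemoryAccesses with a bool *Fast out-parameter) and getValueType(Type*); the exact signatures, and how the pass would reach this through TTI, may differ:

#include "llvm/IR/Type.h"
#include "llvm/Target/TargetLowering.h"

static bool wideUnalignedLoadIsFast(const llvm::TargetLowering &TLI,
                                    llvm::Type *WideTy) {
  bool Fast = false;
  llvm::EVT VT = TLI.getValueType(WideTy);
  if (!TLI.allowsUnalignedMemoryAccesses(VT, /*AddrSpace=*/0, &Fast))
    return false; // codegen would just split the combined load again
  return Fast;    // only combine when the unaligned access is actually fast
}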

lib/Transforms/IPO/PassManagerBuilder.cpp
223 ↗(On Diff #9145)

Please move this to after BBVectorize as well.

lib/Transforms/Scalar/LoadCombine.cpp
138 ↗(On Diff #9145)

I suspect that this should really be DL->getTypeStoreSize(L.Load->getType()).

(and the same for all of the places that you call getPrimitiveSizeInBits -- which, for one thing, won't work for pointer types).
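
For illustration of the difference (plain DataLayout/Type APIs, nothing patch-specific): getPrimitiveSizeInBits() returns 0 for pointer types, while DataLayout::getTypeStoreSize() reports the number of bytes a load or store of the type actually touches, pointers included.

#include <cstdint>
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"

static uint64_t loadedBytes(const llvm::DataLayout &DL, llvm::Type *Ty) {
  return DL.getTypeStoreSize(Ty); // e.g. 8 for i64 and for a 64-bit pointer
}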

143 ↗(On Diff #9145)

This seems too strict. You actually only care that it is not greater, right?

165 ↗(On Diff #9145)

If no one else has a better suggestion, I'd move this into IRBuilder.

test/Transforms/LoadCombine/load-combine.ll
46 ↗(On Diff #9145)

Please add:
CHECK-LABEL: @LoadU64_x64_0

(and similarly to the other tests)

47 ↗(On Diff #9145)

Please also check the alignment on the load.

atrick added a comment.May 7 2014, 9:07 AM

It looks like you first sort the loads, then use load[0] as the insertion point. How does this work if the loads are striding backward through memory with uses in between?

Since you are effectively hoisting loads, I think you should check mayThrow(), not just mayWriteMemory(). In the future, we will want to support read-only calls that mayThrow. (This would allow redundant load elimination across the calls--important for runtime safety checks.)
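
A sketch of that barrier test (hypothetical helper, not the patch's code; the Instruction API spells the write check mayWriteToMemory()): because the combined load is effectively hoisted to the position of the first load in the run, the scan has to stop at anything that may write memory or may throw.

#include "llvm/IR/Instruction.h"

static bool isLoadCombineBarrier(const llvm::Instruction &I) {
  return I.mayWriteToMemory() || I.mayThrow();
}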

Test cases tend to be more effective with CHECK-LABEL on the name of each subtest.

Otherwise LGTM after addressing Hal's comments. I'm not sure what Hal meant by checking alignment. The new load inherits the alignment of the first aggregated load. We'll end up with an unaligned load as far as the compiler can tell, which is often not optimal. I'm not sure whether combining, then splitting in SelectionDAG will produce worse code or not without trying it. If the mechanism exists to split unaligned loads, can we force that on for x86 and see if we generate worse code in our load combining test cases?

----- Original Message -----

From: "Andrew Trick" <atrick@apple.com>
To: bigcheesegs@gmail.com, atrick@apple.com, chandlerc@gmail.com
Cc: hfinkel@anl.gov, nrotem@apple.com, aschwaighofer@apple.com, rsilvera@google.com, llvm-commits@cs.uiuc.edu
Sent: Wednesday, May 7, 2014 11:07:24 AM
Subject: Re: [PATCH] Add Load Combine Pass

It looks like you first sort the loads, then use load[0] as the
insertion point. How does this work if the loads are striding
backward through memory with uses in between?

Since you are effectively hoisting loads, I think you should check
mayThrow(), not just mayWriteMemory(). In the future, we will want
to support read-only calls that mayThrow. (This would allow
redundant load elimination across the calls--important for runtime
safety checks.)

Test cases tend to be more effective with CHECK-LABEL on the name of
each subtest.

Otherwise LGTM after addressing Hal's comments. I'm not sure what Hal
meant by checking alignment.

Two things:

  1. In the regression test, the CHECK line should check the alignment of the generated load.
  2. I had proposed that we only perform the transformation at all when TLI->allowsUnalignedMemoryAccesses returns true and sets *fast to true. When this function returns false, then SDAG will split the load (so Andy's experiment is possible). Also, I agree with Andy that if benchmark data shows the code quality from combining+SDAG-splitting to be better than that from doing nothing, then by all means do the former. Without benchmark data (from multiple platforms, preferably), showing that to be the case, I'd prefer the conservative approach I detailed.

    -Hal

The new load inherits the alignment of
the first aggregated load. We'll end up with an unaligned load as
far as the compiler can tell, which is often not optimal. I'm not
sure whether combining, then splitting in SelectionDAG will produce
worse code or not without trying it. If the mechanism exists to
split unaligned loads, can we force that on for x86 and see if we
generate worse code in our load combining test cases?

http://reviews.llvm.org/D3580

atrick:

It looks like you first sort the loads, then use load[0] as the insertion point. How does this work if the loads are striding backward through memory with uses in between?

Oops. That is a problem. I did a backward test, but it didn't have a use of the value in-between the loads. I suppose I could also store the original insertion order...

Since you are effectively hoisting loads, I think you should check mayThrow(), not just mayWriteMemory(). In the future, we will want to support read-only calls that mayThrow. (This would allow redundant load elimination across the calls--important for runtime safety checks.)

I thought only terminating instructions can throw?

As for targets that have to split unaligned loads: I tested the original case on ARM, and LLVM actually generates slightly better code when SDAG splits the combined load than if we just leave it alone. I'll check other targets, but I believe that we do a pretty good job splicing. The only bad cases I ran into were where the combined load isn't actually used combined, it's just directly resplit via shr/trunc. Here I believe we will need to teach SDAG to split these loads back up, but that should be easy (and should be done before this lands).

----- Original Message -----

From: "Michael Spencer" <bigcheesegs@gmail.com>
To: bigcheesegs@gmail.com, atrick@apple.com, chandlerc@gmail.com
Cc: hfinkel@anl.gov, nrotem@apple.com, aschwaighofer@apple.com, rsilvera@google.com, llvm-commits@cs.uiuc.edu
Sent: Wednesday, May 7, 2014 2:19:00 PM
Subject: Re: [PATCH] Add Load Combine Pass

atrick:

It looks like you first sort the loads, then use load[0] as the
insertion point. How does this work if the loads are striding
backward through memory with uses in between?

Oops. That is a problem. I did a backward test, but it didn't have a
use of the value in-between the loads. I suppose I could also store
the original insertion order...

Since you are effectively hoisting loads, I think you should check
mayThrow(), not just mayWriteMemory(). In the future, we will want
to support read-only calls that mayThrow. (This would allow
redundant load elimination across the calls--important for runtime
safety checks.)

I thought only terminating instructions can throw?

*sigh* -- no.

As for targets that have to split unaligned loads. I tested the
original case on ARM and llvm actually generates slightly better
code when SDAG splits the combined load than if we just leave it
alone. I'll check other targets, but I believe that we do a pretty
good job splicing. The only bad cases I ran into were where the
combined load isn't actually used combined, it's just directly
resplit via shr/trunc. Here I believe we will need to teach SDAG to
split these loads back up, but that should be easy (but should be
done before this lands).

Okay.

-Hal

http://reviews.llvm.org/D3580

Bigcheese updated this revision to Diff 9724.EditedMay 22 2014, 5:13 PM
Bigcheese updated this object.

I ran test-suite with this pass and got some interesting results. Most tests have no changes, but 4 tests showed significant change.

+ is faster, - is slower.

nts.MultiSource/Benchmarks/MiBench/security-sha/security-sha   +44%
nts.MultiSource/Benchmarks/Prolangs-C/agrep/agrep              +12%
nts.MultiSource/Benchmarks/SciMark2-C/scimark2                 -21%
nts.MultiSource/Benchmarks/tramp3d-v4/tramp3d-v4               -39% (this drops to -20% if we fix the load splicing cost model in DAGCombine)

Full data (collected across 10 runs each): https://docs.google.com/spreadsheets/d/13_4ZUBQQYXhexrMzVJ5VICNuLzUhovU8lOl0Rr6wG10/edit?usp=sharing

I'd like to commit this disabled by default while the regressions are fixed in tree.

atrick accepted this revision.May 23 2014, 6:37 PM
atrick edited edge metadata.

LGTM

This revision is now accepted and ready to land.May 23 2014, 6:37 PM
Bigcheese closed this revision.May 28 2014, 7:02 PM
Bigcheese updated this revision to Diff 9901.

Closed by commit rL209791 (authored by mspencer).