
Enable LoopLoadElimination by default
ClosedPublic

Authored by anemet on Jan 18 2016, 12:15 PM.

Details

Summary

If you're interested in benchmarking this yourself, you can currently
enable the pass with -mllvm -enable-loop-load-elim. Please let me know
if you see any regressions.

I re-benchmarked this and results are similar to original results in
D13259:

On ARM64:

SingleSource/Benchmarks/Polybench/linear-algebra/solvers/dynprog -59.27%
SingleSource/Benchmarks/Polybench/stencils/adi                   -19.78%

On x86:

SingleSource/Benchmarks/Polybench/linear-algebra/solvers/dynprog  -27.14%

And of course the original ~20% gain on SPECint_2006/456.hmmer with Loop
Distribution.

In terms of compile time, there is a ~5% increase on both
SingleSource/Benchmarks/Misc/oourafft and
SingleSource/Benchmarks/Linpack/linpack-pc. These are both very tiny
loop-intensive programs where SCEV computations dominate compile time.

The reason that time spent in SCEV increases has to do with the design
of the old pass manager. If a transform pass does not preserve an
analysis we *invalidate* the analysis even if there was *no*
modification made by the transform pass.

This means that currently we don't take advantage of LLE and LV sharing
the same analysis (LAA) and unfortunately we recompute LAA *and* SCEV
for LLE.

(There should be a way to work around this limitation in the case of
SCEV and LAA since both compute things on demand and internally cache
their result. Thus we could pretend that transform passes preserve
these analyses and manually invalidate them upon actual modification.
On the other hand, the new pass manager is supposed to solve this, so I
am not sure if this is worthwhile.)
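The invalidation behavior described above can be modeled with a few lines of Python. This is a toy sketch of the concept only, not LLVM's actual API; the `Analysis` class and its methods are hypothetical:

```python
# Toy model of the legacy pass manager's invalidation rule: a cached
# analysis is thrown away after any transform pass that does not declare
# it preserved, even when the transform changed nothing, so a later
# consumer pays for a full recomputation.

class Analysis:
    def __init__(self):
        self.valid = False
        self.recomputations = 0

    def get(self):
        # Compute on demand; hit the cache if still valid.
        if not self.valid:
            self.recomputations += 1
            self.valid = True

    def invalidate(self):
        self.valid = False

laa = Analysis()                # stands in for LAA (and its SCEV work)
laa.get()                       # computed for the loop vectorizer (LV)
laa.invalidate()                # LV ran; no IR change, invalidated anyway
laa.get()                       # LLE must recompute LAA *and* SCEV
assert laa.recomputations == 2  # two computations instead of one
```

With the new pass manager, the `invalidate()` call would be conditional on the transform reporting that it actually modified the IR, so the second `get()` would hit the cache.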

Diff Detail

Repository
rL LLVM

Event Timeline

anemet updated this revision to Diff 45201.Jan 18 2016, 12:15 PM
anemet retitled this revision from to Enable LoopLoadElimination by default.
anemet updated this object.
anemet added reviewers: dberlin, hfinkel.
anemet added a subscriber: llvm-commits.
reames added a subscriber: reames.Jan 19 2016, 11:47 AM

Just to make sure I understand, LLE is effectively a specialized form of
PRE targeted at loops, but with the additional flexibility to introduce
runtime checks to resolve aliasing queries? Is that a correct rephrasing?

Have you examined how much of the benefit comes from the runtime
checks? I'd be very tempted to introduce an aggressive and a
non-aggressive form which differ only in whether they insert runtime
checks. I'm concerned by the potential code size and compile time
impact of aggressive code duplication.

Has this been stress tested by doing (for instance) a clang self host?
Sorry if this has been addressed earlier, I don't see it mentioned
specifically here and haven't read back through the history.

Just to make sure I understand, LLE is effectively a specialized form of
PRE targeted at loops, but with the additional flexibility to introduce
runtime checks to resolve aliasing queries? Is that a correct rephrasing?

That is correct. The idea is also to allow PRE between non-adjacent iterations, so another way to look at the problem this pass solves is loop-aware scalar replacement (with versioning).
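To make the transform concrete, here is a hedged Python sketch (the loop and data are made up for illustration, not taken from the patch): a load that reads the previous iteration's store is replaced by a scalar carried across iterations.

```python
# Before: each iteration loads A[i - 1], the value the previous
# iteration just stored (a cross-iteration store-to-load dependence).
def original(A, B):
    for i in range(1, len(A)):
        A[i] = A[i - 1] + B[i]
    return A

# After: the cross-iteration load is forwarded through the scalar
# `carried`, eliminating the reload ("loop-aware scalar replacement").
def after_lle(A, B):
    carried = A[0]
    for i in range(1, len(A)):
        carried = carried + B[i]
        A[i] = carried
    return A

B = [0, 2, 3, 4]
assert original([1, 0, 0, 0], B) == after_lle([1, 0, 0, 0], B)
```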

Have you examined how much of the benefit comes from the runtime
checks?

I would expect most of the cases to require run-time checks right now because Load-PRE currently performs a similar optimization without checks during the canonicalization phase, so it steals most of those opportunities. Long term, the hope is that loop Load-PRE will be removed in favor of this pass, at which point those cases will fall on this pass too.

For the record, I did look at the benchmarks mentioned in the description and out of those hmmer and adi require memchecks while dynprog does not.
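The versioning with run-time memchecks can be sketched roughly as follows. This is a Python model under a simplified disjointness check; the `fast_path`/`slow_path` names and the identity test standing in for an address-range overlap check are illustrative only:

```python
# Original loop, kept as the fallback version: reloads A[i - 1] each
# iteration, which is always correct even if A and B alias.
def slow_path(A, B):
    for i in range(1, len(A)):
        A[i] = A[i - 1] + B[i]

# Optimized version: the cross-iteration load becomes a carried scalar.
def fast_path(A, B):
    carried = A[0]
    for i in range(1, len(A)):
        carried += B[i]
        A[i] = carried

def run(A, B):
    # Run-time memcheck: forwarding is only safe if A and B are
    # disjoint; object identity stands in for the real range check.
    if A is not B:
        fast_path(A, B)
        return "fast"
    slow_path(A, B)
    return "slow"

x = [1, 0, 0]
assert run(x, [0, 2, 3]) == "fast" and x == [1, 3, 6]
y = [1, 1, 1]
assert run(y, y) == "slow"   # possible aliasing: take the fallback
```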

I'd be very tempted to introduce an aggressive and a
non-aggressive form which differ only in whether they insert runtime
checks. I'm concerned by the potential code size and compile time
impact of aggressive code duplication.

That's reasonable. Are you thinking of disabling it if SizeLevel is set (i.e. -Os or -Oz), or something different?

Has this been stress tested by doing (for instance) a clang self host?
Sorry if this has been addressed earlier, I don't see it mentioned
specifically here and haven't read back through the history.

Yes, I've done a self-host but I never mentioned it.

Thanks for your feedback,
Adam

dberlin edited edge metadata.Jan 22 2016, 10:58 AM

FWIW: I agree this should be on by default, and I will dance with joy when we can remove the current inter-iteration LoadPRE (in favor of a more standard PRE that handles both loads and scalars).

That said, you mention " Long term, the hope is that loop Load-PRE will be removed in favor of this pass at which point those cases will fall on this pass too."

Out of curiosity, have you tried the pass with load pre disabled?
:)

Out of curiosity, have you tried the pass with load pre disabled?

Thanks! You're making me curious too, I'll do a run. Obviously we only handle full redundancy and no ld->ld, but it should be interesting to see how far we are.

If you only handle full redundancy, you'll have to explicitly comment out
the code in processNonLocalLoad: it does not consider full-availability
PHI insertion to be load PRE.

(GVN.cpp:1761)

It seems to me that partial redundancy means slightly different things between GVN Load-PRE and LLE.

In LLE, a load is partially redundant if the forwarding store does not dominate all the loop latches. To make it fully redundant, we would have to add loads on the unavailable paths.

I think that in GVN, the loopy case is always considered a partial redundancy case because you'd have to insert a load in the preheader. Let me know if I am getting this wrong.

So it seems to me that the LLE partial redundancy case is equivalent to the case in GVN Load-PRE when we need *more than one* load inserted.

Does this make sense?
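The LLE partial-redundancy case can be sketched in Python (a hypothetical example, not from the patch: the store is guarded, so it does not dominate the latch, and making the load fully redundant requires a load inserted on the other path):

```python
# Before: the store to A[i + 1] is conditional, so the load A[i] in the
# next iteration is only *partially* redundant with it.
def original(A, cond):
    total = 0
    for i in range(len(A) - 1):
        total += A[i]              # may read last iteration's store
        if cond[i]:
            A[i + 1] = total       # guarded store: doesn't dominate latch
    return total

# After PRE: forward on the store path, and insert a load on the path
# where the store did not execute, so `carried` always equals A[i + 1].
def after_pre(A, cond):
    carried = A[0]                 # initial load hoisted to the preheader
    total = 0
    for i in range(len(A) - 1):
        total += carried
        if cond[i]:
            A[i + 1] = total
            carried = total        # forwarded value, no reload
        else:
            carried = A[i + 1]     # inserted load on the unavailable path
    return total

c = [True, False, True]
assert original([1, 2, 3, 4], c) == after_pre([1, 2, 3, 4], c)
```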

anemet added a comment.Feb 4 2016, 5:23 PM

I'd be very tempted to introduce an aggressive and a
non-aggressive form which differ only in whether they insert runtime
checks. I'm concerned by the potential code size and compile time
impact of aggressive code duplication.

That's reasonable. Are you thinking of disabling it if SizeLevel is set (i.e. -Os or -Oz), or something different?

This was committed in rL259861.

Is this OK to go in now?

Thanks.

Thanks for doing all this work!

I think this is fine to go in at this point, as long as you are willing to monitor it closely and be ready to disable it if serious regressions are discovered that can't be addressed quickly.

(It seems like the time regression is one that we expect to be addressed by the new pass manager, and we understand how to mitigate it if need be)

Thanks for doing all this work!

I think this is fine to go in at this point, as long as you are willing to monitor it closely and be ready to disable it if serious regressions are discovered that can't be addressed quickly.

Absolutely and thanks for the review!

I also plan to keep looking at the interaction with GVN Load-PRE, add support for the load-to-load forwarding and see if GVN can be simplified in turn.

This revision was automatically updated to reflect the committed changes.