This is an archive of the discontinued LLVM Phabricator instance.

[LoopUnroll] Handle certain PHIs in full unrolling properly.
ClosedPublic

Authored by ebevhan on Aug 16 2019, 1:50 AM.

Details

Summary

When reconstructing the CFG of the loop after unrolling,
LoopUnroll could in some cases remove the phi operands of
loop-carried values instead of preserving them, resulting
in undef phi values after loop unrolling.

When doing this reconstruction, avoid removing incoming
phi values for phis in the successor blocks if the successor
is the block we are jumping to anyway.
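
As a rough illustration of the idea (a simplified sketch only, not the code from this patch; the helper name redirectToSketch and its signature are made up for this example), the redirection step should drop incoming phi values only in the successors it no longer branches to:

// Simplified sketch of the behaviour described in the summary, written
// against the LLVM C++ API; the function name and parameters are
// illustrative only and do not appear in the patch.
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Redirect BB's terminator to branch unconditionally to Dest. Incoming phi
// values are removed only in the successors we no longer reach; the phi
// operands in Dest itself are loop-carried values and must be preserved.
static void redirectToSketch(BasicBlock *BB, BasicBlock *Dest) {
  for (BasicBlock *Succ : successors(BB)) {
    if (Succ == Dest)
      continue; // keep the incoming values for the block we still jump to
    Succ->removePredecessor(BB);
  }
  BB->getTerminator()->eraseFromParent();
  BranchInst::Create(Dest, BB);
}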

Diff Detail

Repository
rL LLVM

Event Timeline

ebevhan created this revision.Aug 16 2019, 1:50 AM
Herald added a project: Restricted Project.Aug 16 2019, 1:50 AM
lebedev.ri added inline comments.
test/Transforms/LoopUnroll/full-unroll-miscompile.ll
8 ↗(On Diff #215540)

Passing-by remark: it's way too fragile to test exact value names; you may want to test more of the output (capturing the value names elsewhere), if not all of it.

ebevhan marked an inline comment as done.Aug 16 2019, 4:13 AM
ebevhan added inline comments.
test/Transforms/LoopUnroll/full-unroll-miscompile.ll
8 ↗(On Diff #215540)

I suppose I can generate the checks with the Python script.

ebevhan updated this revision to Diff 215561.Aug 16 2019, 4:18 AM
lebedev.ri added inline comments.Aug 16 2019, 5:23 AM
test/Transforms/LoopUnroll/full-unroll-miscompile.ll
8 ↗(On Diff #215540)

In this case I wasn't insisting on that - it gets pretty unreadable for large functions too.
But if it is autogenerated, you might as well precommit the original test.

<.. just passing by ..>

bjope added a subscriber: bjope.Aug 16 2019, 10:56 AM

Thanks for looking into this!

lib/Transforms/Utils/LoopUnroll.cpp
726 ↗(On Diff #215561)

Currently we have a problem when unrolling loops which exit via the header. We should only preserve the incoming value from BB for the block in the current loop we jump to. For loops where we exit via the latch, this is the current header of the loop. For loops exiting via the header, this should be the successor of the header in the loop.

I think it would be slightly better to just change the name of CurrentHeader and then pass HeaderSucc[I] at line 797, to avoid unnecessary checks and keep the code slightly simpler. Maybe also add a comment in setDest to better explain what is going on.
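
Something along these lines, purely as an illustration (HeaderSucc is mentioned above; the other names, the helper blockToPreserve, and its signature are guesses for this sketch, not the actual patch code):

// Illustrative sketch only: select the in-loop block whose incoming phi
// values must be preserved for unrolled iteration It. Latch-exiting loops
// keep jumping to the corresponding header; header-exiting loops jump to
// the header's successor inside the loop instead.
#include "llvm/ADT/ArrayRef.h"
#include "llvm/IR/BasicBlock.h"

using namespace llvm;

static BasicBlock *blockToPreserve(bool LatchIsExiting,
                                   ArrayRef<BasicBlock *> Headers,
                                   ArrayRef<BasicBlock *> HeaderSucc,
                                   unsigned It) {
  return LatchIsExiting ? Headers[It] : HeaderSucc[It];
}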

test/Transforms/LoopUnroll/full-unroll-miscompile.ll
38 ↗(On Diff #215561)

Can you strip away the unnecessary stores/loads? They do not impact unrolling.

59 ↗(On Diff #215561)

I think the test would be slightly better if we make %tmp2 loop-dependent.

67 ↗(On Diff #215561)

I think the attributes are not necessary.

ebevhan marked 2 inline comments as done.Aug 19 2019, 2:05 AM
ebevhan added inline comments.
lib/Transforms/Utils/LoopUnroll.cpp
726 ↗(On Diff #215561)

I see, thanks for the further insight! I'll make these changes.

test/Transforms/LoopUnroll/full-unroll-miscompile.ll
38 ↗(On Diff #215561)

What should I replace the sdiv with in that case? Just a single load?

fhahn added a comment.Aug 19 2019, 5:50 AM

It would be great if you could add another function to the test, where we do not fully unroll the loop, i.e. by making sure it has a high trip count. For that, the minsize/optsize attributes need to be removed. Also, could you change the test file name to something a bit more descriptive, like unroll-header-exiting-with-phis.ll?

test/Transforms/LoopUnroll/full-unroll-miscompile.ll
38 ↗(On Diff #215561)

It's only used in %.lcssa10; you can replace it with an arbitrary constant there.

Instead of defining %tmp2 outside the loop, it would be good to have a loop-dependent load or something, e.g. (simply pass an i16* %A as an argument to the function):

%ptr = getelementptr inbounds i16, i16* %A, i64 %i.0
%tmp2 = load i16, i16* %ptr, align 1
47 ↗(On Diff #215561)

Another nit: we could make %i.0 start at 0 and adjust the check in %cmp accordingly. Maybe also change it so we have more than 1 iteration to unroll, e.g. 2 or 3 iterations.

ebevhan updated this revision to Diff 216077.Aug 20 2019, 2:12 AM
fhahn accepted this revision.Aug 20 2019, 6:03 AM
fhahn added a reviewer: efriedma.

LGTM, thanks! I think it would be good to also pick this for the 9.0 release.

This revision is now accepted and ready to land.Aug 20 2019, 6:03 AM

Yes, makes sense for 9.0

fhahn added a comment.Aug 23 2019, 8:31 AM

@ebevhan it would be great if you could land this soon, so we can get it into the 9.0 release branch

@fhahn I don't have commit access, so if someone could land it for me, that would be great!

bjope added a comment.Aug 26 2019, 2:11 AM

> @fhahn I don't have commit access, so if someone could land it for me, that would be great!

I'll land this on trunk.
Any extra action needed from my side to get it into 9.0?

This revision was automatically updated to reflect the committed changes.
fhahn added a comment.Aug 26 2019, 8:48 AM

> I'll land this on trunk.
> Any extra action needed from my side to get it into 9.0?

Thanks! Once the release branch is created, you have to file a bug to get a commit merged into the release branch, which I did for this patch: https://bugs.llvm.org/show_bug.cgi?id=43118

bjope added a comment.Aug 26 2019, 9:04 AM

> Thanks! Once the release branch is created, you have to file a bug to get a commit merged into the release branch, which I did for this patch: https://bugs.llvm.org/show_bug.cgi?id=43118

Great. I'll use that bug report as a template the next time :-)