This is an archive of the discontinued LLVM Phabricator instance.

[SimplifyCFG] Fix for non-determinism in codegen
ClosedPublic

Authored by mgrang on Nov 15 2016, 6:17 PM.

Diff Detail

Repository
rL LLVM

Event Timeline

mgrang updated this revision to Diff 78120.Nov 15 2016, 6:17 PM
mgrang retitled this revision from to [SimplifyCFG] Fix for non-determinism in codegen.
mgrang updated this object.
mgrang added reviewers: chenli, majnemer.
mgrang set the repository for this revision to rL LLVM.
mgrang added a subscriber: llvm-commits.

This patch fixes non-determinism caused by iterating a SmallSet, which was uncovered by the experimental "reverse iteration order" patch: https://reviews.llvm.org/D26718

The following unit test fails because of the undefined order of iteration.
LLVM :: Transforms/SimplifyCFG/bug-25299.ll

This is fixed here by changing the SmallSet to a SmallSetVector so that the iteration order is well defined.
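For context, below is a minimal standalone sketch (not the actual SimplifyCFG change) of why the container choice matters; it assumes the LLVM ADT headers are available and uses int pointers as stand-ins for BasicBlocks:

```
// Minimal sketch, not the patch itself: SmallPtrSet (which SmallSet uses
// for pointer keys) iterates in an order that depends on pointer values,
// while SmallSetVector iterates in insertion order.
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  int A = 0, B = 0, C = 0; // stand-ins for BasicBlock objects

  // Walk order depends on the pointer values, so it can differ from run
  // to run and between hosts; anything built from this loop inherits
  // that instability.
  llvm::SmallPtrSet<int *, 4> Unordered;
  Unordered.insert(&B);
  Unordered.insert(&A);
  Unordered.insert(&C);
  for (int *P : Unordered)
    llvm::outs() << "unordered: " << P << "\n";

  // Still deduplicates, but always iterates B, A, C -- the insertion
  // order -- so the result is the same on every run.
  llvm::SmallSetVector<int *, 4> Ordered;
  Ordered.insert(&B);
  Ordered.insert(&A);
  Ordered.insert(&C);
  for (int *P : Ordered)
    llvm::outs() << "ordered:   " << P << "\n";
  return 0;
}
```

The trade-off is a small amount of extra memory, since SmallSetVector keeps a side vector to remember insertion order.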

Ping for reviews please.

davide added a subscriber: davide.Apr 17 2017, 11:16 AM

Having fixed the very same problem in LCSSA yesterday, I have some sympathy for this.
I think the fix is fine, but I'd appreciate it if you could take some measurements and make sure this doesn't impact compile time.

Testcase?

It's very hard to craft testcases for these iteration-order issues. They generally show up in very lovely situations, e.g. stage-3 builds, and they break the bot anyway.
So, if it's easy to find a testcase, that works for me; otherwise I wouldn't worry too much.

davide requested changes to this revision.Apr 17 2017, 11:19 AM
This revision now requires changes to proceed.Apr 17 2017, 11:19 AM

@arsenm @davide Yes, it's difficult to make a test case for this. I uncovered this issue by turning ON reverse iteration of SmallPtrSet and finding that the following unit test failed:

test/Transforms/SimplifyCFG/bug-25299.ll

It failed because the test expects:
continue: ; preds = %entry, %continue

But due to reverse iteration we get:
continue: ; preds = %continue, %entry
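For illustration only, here is a hypothetical standalone sketch (plain C++, not the SimplifyCFG or D26718 code; the helper name is made up) of how walking the same predecessors in reverse flips the printed preds list:

```
// Hypothetical sketch: code that emits predecessors in whatever order it
// walks them prints a different "; preds = ..." comment once the walk
// order is reversed.
#include <iostream>
#include <string>
#include <vector>

static std::string buildPredsComment(const std::vector<std::string> &Preds) {
  std::string S = "; preds =";
  for (size_t I = 0; I != Preds.size(); ++I)
    S += (I ? ", %" : " %") + Preds[I];
  return S;
}

int main() {
  std::vector<std::string> Preds = {"entry", "continue"};

  // Forward walk: the order the test expects.
  std::cout << buildPredsComment(Preds) << "\n";

  // Reverse walk, which is what reverse iteration of the set effectively
  // produces here.
  std::cout << buildPredsComment({Preds.rbegin(), Preds.rend()}) << "\n";
  return 0;
}
```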

So, isn't the failing of this test case in itself sufficient to demonstrate the validity/correctness of this patch?

As far as compile time is concerned, let me try building some tests and measuring the effect.

mgrang added a comment (edited).Apr 17 2017, 1:17 PM

I built the LLVM unit tests and the test suite and see no measurable difference in compile time with and without my patch.

For ninja check-all:

Expected Passes    : 21902
Expected Failures  : 68
Unsupported Tests  : 9820
Unexpected Failures: 40

For ninja check-llvm-unit:

Expected Passes    : 2286
Unexpected Failures: 34
Compile times         | without my patch | with my patch
ninja check-all       | real 1m29.448s   | real 1m28.608s
ninja check-llvm-unit | real 0m4.553s    | real 0m4.540s

I am also trying to build LNT tests locally and will post the results when I have them.

Thanks!

mgrang added a comment (edited).Apr 19 2017, 12:37 PM

I built the LNT test-suite with and without my patch, with -flto turned on, and here are the compile-time results:

Run           | Duration
With my patch | 5:03:00
Baseline      | 5:05:26

I tried uploading my report.json to http://llvm.org/perf/submitRun but it always errors out:
"must provide input file or data"

Can someone please tell me how I can upload my test-suite json report files to llvm.org?

I am also attaching the reports in json and csv format here.

--Mandeep

davide accepted this revision.Apr 23 2017, 11:01 PM
This revision is now accepted and ready to land.Apr 23 2017, 11:01 PM
This revision was automatically updated to reflect the committed changes.