Wed, Sep 11
Sat, Sep 7
LGTM to fix folks broken by this.
Tue, Sep 3
This also explains to me why it would OOM. I suspect we're building a massive list, and at best we get lucky and the entries get de-duped later. At worst, this is actually fixing a serious functionality problem. We had the same thing w/ normal ASan before too. Since this pass doesn't need to be a function pass anyway, this seems totally fine. Thanks for tracking it down.
This idea isn't fundamentally flawed; it's a good idea and something we've discussed many times.
Wed, Aug 28
FWIW, LGTM. Thanks for all the work figuring out the right way to manage this at each step, and seeing it through, I think this is really great.
Tue, Aug 27
Tue, Aug 20
Thanks to Joerg for some useful discussion on IRC -- there was a concern I hadn't thought about that is exactly right: we want this pass both to minimally disrupt things and to be reasonably self-contained.
Some nits below. LGTM with those fixed!
Aug 16 2019
This seems .... somewhat unfortunate. Adding Chris Bieneman to see if this is really the right thing to do or there are any other alternatives.
Aug 14 2019
As we discussed in person, we should refactor this so that when we enable MemorySSA we actually check that the loop passes in question manage to preserve it.
Also LGTM for cherrypick to the release branch.
Aug 13 2019
This approach is broken for another reason, which also motivated the LoopSink approach David mentioned.
This too LGTM, and again, thanks for driving this all the way through.
We should specifically call this out in release notes as well (before we forget) as a bunch of downstream people will discover it in LLVM 10.
Aug 12 2019
Aug 5 2019
One high level point that is at least worth clarifying, and maybe others will want to suggest a different approach:
Aug 3 2019
Generally, I do like the approach. Two high level comments:
Aug 1 2019
LGTM! Thanks for fixing this nasty bug (and sorry I wrote it).
LGTM with two nits addressed, thanks!
Jul 31 2019
Jul 11 2019
Just to make sure we're on the same page (and sorry I didn't jump in sooner)...
Sorry for the delay here.
LGTM, thanks so much for sticking through this, I know it was ... nontrivial!
Jul 2 2019
Pointing out the (serious) bug in this change below.
Jul 1 2019
The used thing still seems like there is an underlying bug here. See below.
Since this review is ongoing, please revert the original patch to unblock people. There is no harm in reverting and landing with the fix once it is ready. =]
Jun 28 2019
Ok, now I've made a full pass through this. Mostly I think the first thing to do is tighten up the design around the core run function template and how reducers work with it. Documenting that design in detail will be really helpful I think.
(Also, really sorry, I mashed the send button too soon; I'm still going through the code. Feel free to start on anything I posted, but sorry for the very random subset of comments.)
I'd suggest enhancing the main description to include an overview of the code structure and organization to help reviewers follow the implementation design here. Think of it like a mini design doc for the *implementation* itself.
Jun 24 2019
Personally, I'd suggest the name llvm-reduce for the tool.
Jun 21 2019
FWIW, I think we can wait for a bug to be filed or a report to come in with some functional test case before we change any behavior here.
Jun 20 2019
Eh, this seems close enough now. I'd like a better approach for the x86 builtins, but no idea what it will end up being.
See inline comment, but I think we should just drop the testing of the function attribute bit here rather than adjusting the pipeline.
(Will likely need more eyes than just mine -- RPATH is mostly a mystery to me...)
Jun 19 2019
just a minor comment on one of these...
LGTM, again, really nice. Tiny tweak below.
Jun 18 2019
FWIW, this LGTM. We may want to tweak the behavior around flattening, but that can happen as a follow-up, and this seems to get us to a better state.
OMG, I'm so sorry, I had no idea that the tests would explode like that... Yeah, I don't think that's useful....
Jun 12 2019
LGTM. Bit annoying that we need to do this at O0, but fine...
Code change LGTM, but again, let's update the test to check both ways.
I understand the change to explicitly say -O2. I also understand the change to add an explicit -fno-experimental-new-pass-manager to a RUN line when we have another RUN line that explicitly uses -fexperimental-new-pass-manager.
Let's update the test to explicitly run w/ both PMs to make sure this keeps working. LGTM with that change.
Code change LGTM. Can you update at least one of the tests to explicitly run both PMs so that we'll notice if this breaks in some weird way? Feel free to submit with that change.
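For concreteness, a sketch of what such a dual-PM test update could look like; everything here except the pass-manager toggles (the file name, the -O2 level, the check prefix) is illustrative, not taken from the patch:

```
// Hypothetical lit RUN lines for a Clang IR generation test, one per
// pass manager, both feeding the same FileCheck patterns.
// RUN: %clang_cc1 -O2 -fno-experimental-new-pass-manager -emit-llvm -o - %s | FileCheck %s
// RUN: %clang_cc1 -O2 -fexperimental-new-pass-manager -emit-llvm -o - %s | FileCheck %s
```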
This really confused me. We shouldn't be seeing this kind of difference in the new PM.
This was a lot easier for me to understand too, thanks. Somewhat minor code changes below.
Jun 10 2019
I think this is somewhat the wrong approach.
I think this ultimately needs to be split up into smaller patches. A bunch of these things can be landed independently. Here is my first cut at things to split out, each one into its own patch.
I would just change this to have the module pass loop over the functions -- that seems like it'll be much cleaner.
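In case it helps, a pseudocode sketch (LLVM-flavored C++, not a compilable patch) of that shape; `SanitizerModulePass` and `instrumentFunction` are hypothetical stand-ins for the pass under review:

```cpp
// Pseudocode: fold the per-function work into a module pass that iterates
// over the functions itself, so module-level state is set up once.
struct SanitizerModulePass : PassInfoMixin<SanitizerModulePass> {
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &) {
    bool Changed = false;
    for (Function &F : M)
      if (!F.isDeclaration())
        Changed |= instrumentFunction(F); // hypothetical per-function helper
    return Changed ? PreservedAnalyses::none() : PreservedAnalyses::all();
  }
};
```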
Jun 8 2019
Any chance you can add a test that shows one of the problems where verifying MemorySSA fails because we get a "preserved" copy after a loop pass that doesn't actually preserve it?
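One hedged way such a test could be written (the exact flag spellings are my guess, not taken from the patch):

```
; Hypothetical lit RUN line: run a loop pass with MemorySSA enabled and the
; verifier on, so a stale "preserved" MemorySSA trips the verification.
; RUN: opt -passes='licm' -enable-mssa-loop-dependency -verify-memoryssa -S %s
```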
Jun 6 2019
Joerg points out in IRC that we need control flow pruning to be symmetric w/ true and false.
(For posterity, this also should be in the implementation of SimplifyInstruction itself, not in the pass. The pass only ever calls that.)
I think this isn't quite right (if I've understood what it is doing correctly).
May 22 2019
Sorry I've been a bit slow to respond here...
May 18 2019
May 17 2019
Ok, I'm fine with this going in temporarily while we try to flip this flag across the board (or figure out a heuristic that lets us get the best of both worlds). So, LGTM, but yeah, let's try to remove this too.
Eh, I'd like long term to have a way of testing this field in LLVM, but I think this is fine for now to unblock the Clang changes and other changes.
We should get folks to figure out why this regresses them, but making the two PMs behave the same seems fine for now.
One edge case left I think, but otherwise this LGTM. Happy for you to submit with the suggested change and a test case.
May 15 2019
I think you mentioned that there is already a test case that checks a global initialized with blockaddress? If I've mis-remembered, we should definitely add one and check that it *doesn't* inline (today). We can update it if/when we add support. If it is possible to add one that directly works w/ callbr, that'd be great.
LGTM as long as Eli is ok with the testing arriving after the fact.
May 10 2019
SLH bits LGTM.
May 7 2019
(Sorry for still more comments)
The file name of the test seems odd? How about vectorize-loops.c? I'd also make it a C test and put it in test/CodeGen instead of a C++ test.
May 6 2019
(To be explicit, this LGTM, just letting George mark it as accepted when he's happy too.)
This seems... *really* unlikely to be an important tuning flag for library clients. It seems likely added to allow some limited experimentation. Maybe we can just not add it to the new PM?
Same comment as on other Clang patch -- let's update an IR generation test to reflect this?
Hmm, how should we test that this works?
Can you update some Clang IR generation test that uses these flags to run w/ the new PM? It should fail without this and pass with this.
Eric was hoping for some API improvements here as we add more tuning options, but I don't think these need to be sequenced.
May 4 2019
One question about a test change and a minor nit pick on comments...
I'm trusting George w/ the MemorySSAUpdater review.
Apr 29 2019
LGTM, very nice! Thanks for all the work tracking down this subtle case!
LGTM, maybe add a comment here to archive some of the reasoning? (i.e., that when unrolling is disabled we disable both literal unrolling and interleaving, for historical reasons)
Apr 25 2019
You might also update one of the instr prof tests to have two RUN lines, one for each pass manager. Feel free to do that (or not) and submit.
Apr 24 2019
I like the patch, LGTM.
Apr 23 2019
How does this impact use of clang -fno-unroll-loops and clang -fno-loop-vectorize? I'm betting that today -fno-unroll-loops controls the DisableUnrollLoops value, and so people who disable both unrolling and vectorization will be surprised when interleaving now happens.
Oh, you can add a test for this by using the new PM and an invalidate<aa> pass or invalidate<domtree>.
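For example (a sketch; `some-pass` stands in for whatever pass is actually under test):

```
; Hypothetical new-PM RUN line: force-invalidate AA between passes so the
; recomputation path gets exercised.
; RUN: opt -passes='require<aa>,invalidate<aa>,some-pass' -S %s | FileCheck %s
```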