This is an archive of the discontinued LLVM Phabricator instance.

[analyzer] Bump default performance thresholds?
ClosedPublic

Authored by NoQ on Jun 16 2017, 6:47 AM.

Details

Summary

Now that we have faster CPUs and more RAM, should we skew the balance towards finding more bugs?

We could probably make a few rounds of such changes, watching for delayed feedback from users who rely on the default settings and aren't following Phabricator, and rolling back if we degrade dramatically on specific smaller projects.

As a first step, I've recently tested the following changes to the default analyzer options:

  • max-nodes: 150000 -> 225000 (+50%) - the limit on the size of the exploded graph.
  • max-inlinable-size: 50 -> 100 (+100%) - the limit on the number of CFG blocks in inlined functions.

In total, this gives a 10% performance degradation and finds 5% more bugs on a large-ish codebase. The max-inlinable-size change skews the analyzer towards IPA-based bugs (roughly 5% of reports added and 5% lost compared to before), and also slightly improves the overall number of bugs found; the max-nodes increase brings back some of the reports that would otherwise be lost.
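
To make the IPA point concrete, here is a toy sketch (not taken from the tested codebases) of the kind of report that hinges on inlining: the division by zero in divide() is only diagnosed when the analyzer inlines pick_divisor() and models the path on which it returns 0. In real code the callee would be a much larger function hovering around the old 50-block limit; once it exceeds max-inlinable-size, the call is evaluated conservatively and the report is lost.

  // Toy illustration: core.DivideZero fires in divide() only if pick_divisor()
  // gets inlined by the analyzer; imagine the callee being 50-100 CFG blocks
  // long, so that max-inlinable-size decides whether inlining happens at all.
  static int pick_divisor(int flag) {
    if (flag)
      return 4;
    return 0;                       // feasible path that returns 0
  }

  int divide(int x, int flag) {
    return x / pick_divisor(flag);  // reported only when the callee is inlined
  }

(For experiments like this, the thresholds can also be overridden per run rather than by patching the defaults, e.g. by passing -analyzer-config max-inlinable-size=100 to the clang -cc1 invocation.)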

More generally, it would also be good to make the analyzer behave in a more transparent manner with respect to why it does or doesn't cover certain paths, inline certain functions, etc. Currently this is governed by a mess of unobvious heuristics, and if lifting some of them makes the behavior less surprising, that would be an additional benefit of this work as well.

Diff Detail

Event Timeline

NoQ created this revision. Jun 16 2017, 6:47 AM
a.sidorin edited edge metadata. Jun 16 2017, 10:51 AM

Hi Artem,

Could you tell us which code bases you used to collect your statistics? I'll try to check the patch on our code bases. I think we should be careful about changing the default settings. Maybe we should introduce another UMK_* for deeper analysis instead?
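
(For context: UMK_* refers to the UserModeKind setting in AnalyzerOptions, which already distinguishes a shallow and a deep analysis mode and derives several thresholds from it. A rough sketch of what a separate level might look like is below; the existing values are reproduced approximately, and the third one is purely hypothetical.)

  // Existing values (approximate; see AnalyzerOptions.h):
  enum UserModeKind {
    UMK_NotSet  = 0,
    UMK_Shallow = 1,   // selected with -analyzer-config mode=shallow
    UMK_Deep    = 2    // selected with -analyzer-config mode=deep (the default)
  };
  // A hypothetical extra level could leave today's defaults untouched and gate
  // the larger max-nodes / max-inlinable-size values behind an opt-in mode:
  //   UMK_Exhaustive = 3   // e.g. -analyzer-config mode=exhaustive (made up)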

zaks.anna edited edge metadata. Jun 16 2017, 12:08 PM

Maybe we should introduce another UMK_* for deeper analysis instead?

The difference here is not substantial enough to warrant a new level. The general motivation for bumping these numbers is that we set these thresholds many years ago and the hardware has improved quite a bit since then.

zaks.anna accepted this revision. Jun 16 2017, 12:13 PM

Once Artem gives more details about the codebase he tested on, I think it would be fine to commit this. We can revert or address any issues later if @a.sidorin or anyone else runs into problems while testing further on their codebases. @a.sidorin, would this work for you?

This revision is now accepted and ready to land. Jun 16 2017, 12:13 PM
NoQ added a comment. Jun 16 2017, 1:28 PM

This was a mixture of internal Apple projects (user apps, drivers, daemons, whatever) with a relatively balanced selection of languages and levels of analyzer adoption. They amounted to ~16 hours of analysis CPU time (i.e. 4 hours per run on a quad-core machine). I've also run it on LLVM separately and had similar observations. Feedback from other codebases is totally welcome!

Maybe we should introduce another UMK_* for deeper analysis instead?

The difference here is not substantial enough to warrant a new level. The general motivation for bumping these numbers is that we set these thresholds many years ago and the hardware has improved quite a bit since then.

Yeah, the point was mostly about default settings, for people who don't bother to tweak them, and adding more options essentially defeats the purpose.

xazax.hun edited edge metadata. Jun 19 2017, 9:57 AM

While I have no objections, I am wondering whether this is the best way to spend the performance budget. In particular, there are patches to generate more symbolic expressions, which could also degrade performance (but fix some FIXMEs along the way).

a.sidorin accepted this revision. Jun 19 2017, 10:17 AM

Ok, I hope this will work. Anyway, we can always revert this patch in case of any problems.

NoQ added a comment. Jun 20 2017, 12:34 AM

Gabor makes such a good point. Maybe we should commit the zombie symbols patch as well (:

In particular, there are patches to generate more symbolic expressions, which could also degrade performance (but fix some FIXMEs along the way).

The patch you are talking about might be promising, but it needs much more investigation and tuning of performance vs. issues found. I do not think we should block this patch on that investigation.

This revision was automatically updated to reflect the committed changes.