Now that we have faster CPUs and more RAM, should we skew the performance/coverage balance towards finding more bugs?
We could probably make a few rounds of such changes, observing any delayed feedback from users who use default settings and aren't watching Phabricator, and rolling back if we degrade dramatically on specific smaller projects.
As a first step, I've recently tested the following changes to the default -analyzer-config values (an example invocation for trying them out follows the list):
- max-nodes: 150000 -> 225000 (+50%) - the limit on the size of the exploded graph.
- max-inlinable-size: 50 -> 100 (+100%) - the limit on the number of CFG blocks a function may have and still be considered for inlining.
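For anyone who'd like to experiment with these values on their own codebase before any defaults change, they can already be overridden through -analyzer-config; something like the following should work (a sketch, not the eventual defaults patch, and exact option spelling may vary slightly between clang versions):

  # Single-file run: forward the analyzer config through -Xclang.
  clang --analyze \
    -Xclang -analyzer-config \
    -Xclang max-nodes=225000,max-inlinable-size=100 \
    file.c

  # Whole-project run: scan-build accepts the same key=value pairs.
  scan-build -analyzer-config max-nodes=225000,max-inlinable-size=100 make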
In total, this results in roughly a 10% performance degradation and finds 5% more bugs on a large-ish codebase. The max-inlinable-size change skews the analyzer towards more IPA-based bugs than before (roughly 5% of reports added and 5% lost) and also slightly improves the overall number of bugs found; the max-nodes increase brings back some of the findings lost this way.
Generally, it would also be good to make the analyzer work in a more obvious manner in terms of why it does or doesn't cover certain paths, inline certain functions, etc. - currently this is a mess of unobvious heuristics, and if we could make the behavior more obvious by lifting some of these heuristics, that would be an additional benefit of this work as well.
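As a small step in that direction, sufficiently recent clangs can already dump the full list of these knobs and their current defaults (assuming -analyzer-config-help is available in the build at hand):

  clang -cc1 -analyzer-config-help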