This depends on http://reviews.llvm.org/D20930
Rehashing the ExplodedNode table is very expensive: the hashing itself is costly, and walking the hash table during a rehash is highly cache unfriendly. Instead, we guess at the eventual size by using the maximum number of steps allowed, which generally avoids a rehash. A rehash can still occur if the backlog of work added to the worklist significantly exceeds the number of work items we actually process, but even in that scenario this change is a win, since we still perform fewer rehashes than we would have prior to this change.
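As a minimal standalone sketch of the idea (using std::unordered_set rather than the analyzer's FoldingSet-based node table, and a made-up step budget): reserving capacity for the known upper bound up front means the table never has to grow and rehash while nodes are inserted.

    // Illustration only, not the analyzer code itself: when an upper bound on
    // the number of insertions is known ahead of time, reserving that capacity
    // avoids repeated rehashing as the container grows. The patch applies the
    // analogous idea to the ExplodedNode table, using the maximum number of
    // analysis steps as the size guess.
    #include <cstdio>
    #include <unordered_set>

    int main() {
      const unsigned MaxSteps = 150000;  // stand-in for the analyzer's step budget
      std::unordered_set<unsigned> Nodes;
      Nodes.reserve(MaxSteps);           // allocate buckets once, up front

      for (unsigned I = 0; I != MaxSteps; ++I)
        Nodes.insert(I);                 // no rehash occurs below MaxSteps elements

      std::printf("size: %zu, buckets: %zu\n", Nodes.size(), Nodes.bucket_count());
      return 0;
    }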
For small workloads, this will increase the memory used; for large workloads, it will somewhat reduce it. Speed is significantly improved: a large .C file took 3m53.812s to analyze prior to this change and now takes 3m38.976s, a ~6% improvement.