D103314 introduced symbol simplification when a new constant constraint is
added. Currently, we simplify existing equivalence classes by iterating over
all of their members and trying to simplify each member symbol with
simplifySVal.
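
A minimal sketch of this existing mechanism (the exact signatures and the
mergeClasses helper are illustrative, not the real RangeConstraintManager
code):

  // Sketch: walk the members of one equivalence class and try to replace
  // each member with a simpler symbol. `mergeClasses` is hypothetical.
  static ProgramStateRef
  simplifyClassOnce(SValBuilder &SVB, ProgramStateRef State,
                    ArrayRef<SymbolRef> ClassMembers) {
    for (SymbolRef MemberSym : ClassMembers) {
      // simplifySVal substitutes the constants already known in `State`,
      // e.g. it reduces `c + b` to `c` once `b == 0` is known.
      SVal Simplified = SVB.simplifySVal(State, SVB.makeSymbolVal(MemberSym));
      if (SymbolRef SimplifiedSym = Simplified.getAsSymbol())
        if (SimplifiedSym != MemberSym)
          State = mergeClasses(State, MemberSym, SimplifiedSym);
    }
    return State;
  }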
At the end of such a simplification round we may end up introducing a
new constant constraint. Example:
  if (a + b + c != d)
    return;
  if (c + b != 0)
    return;
  // Simplification starts here.
  if (b != 0)
    return;
The c == 0 constraint is the result of the first simplification iteration.
However, we could do another round of simplification to reach the conclusion
that a == d. Generally, we could keep doing new iterations until we reach a
fixpoint.
We can reach the fixpoint by recursively calling State->assume on the
newly simplified symbol. By calling State->assume we re-ignite the
whole assume machinery (including, e.g., adjustment handling).
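
A minimal sketch of what re-igniting the assume machinery means, assuming
for simplicity that the constraint on the simplified symbol is a single
concrete value (the patch handles full ranges; the helper name is
illustrative):

  static ProgramStateRef reAssumeAsEqual(ProgramStateRef State,
                                         SValBuilder &SVB,
                                         SymbolRef SimplifiedSym,
                                         const llvm::APSInt &Constant) {
    // Build `SimplifiedSym == Constant` and feed it back into State->assume.
    // This runs the whole assume machinery (adjustment handling included)
    // and may introduce yet another constant constraint, which triggers the
    // next simplification round; a null result means the state is infeasible.
    SVal Cond = SVB.evalBinOp(State, BO_EQ, SVB.makeSymbolVal(SimplifiedSym),
                              SVB.makeIntVal(Constant),
                              SVB.getConditionType());
    if (auto DefinedCond = Cond.getAs<DefinedOrUnknownSVal>())
      return State->assume(*DefinedCond, /*Assumption=*/true);
    return State;
  }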
Why should we do this? By reaching a fixpoint in simplification we become
capable of discovering infeasible states at the moment the first constant
constraint is introduced.
Let's modify the previous example just a bit, and consider what happens without
the fixpoint iteration.
  if (a + b + c != d)
    return;
  if (c + b != 0)
    return;
  // Adding a new constraint.
  if (a == d)
    return;
  // This brings in a contradiction.
  if (b != 0)
    return;
  clang_analyzer_warnIfReached(); // This produces a warning.
  // The path is already infeasible...
  if (c == 0) // ...but we realize that only when we evaluate `c == 0`.
    return;
What happens currently, without the fixpoint iteration? As the inline comments
suggest, we are doomed to realize that we are on an infeasible path only after
we have already stepped onto it. With the fixpoint iteration we can detect that
before taking the step. With the fixpoint iteration,
clang_analyzer_warnIfReached does not warn in the above example, because we
realize the contradiction already when the b == 0 constraint is added. The
engine and the checkers rely on the invariant that at least one of
assume(Cond) and assume(!Cond) is feasible. This invariant is in fact checked
by the so-called expensive checks (LLVM_ENABLE_EXPENSIVE_CHECKS). The
StdLibraryFunctionsChecker is notably one of the checkers that has a very
similar assertion.
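
The invariant in question, sketched below (the assertion is a simplified
stand-in for what the expensive checks and the checker verify):

  static void assertSplitIsFeasible(ProgramStateRef State,
                                    DefinedOrUnknownSVal Cond) {
    ProgramStateRef StTrue, StFalse;
    std::tie(StTrue, StFalse) = State->assume(Cond);
    // With the fixpoint iteration, an infeasible parent state is caught when
    // the constant constraint is introduced, so at least one branch survives.
    assert((StTrue || StFalse) &&
           "Both branches are infeasible; the parent state should have been "
           "recognized as infeasible earlier.");
    (void)StTrue;
    (void)StFalse;
  }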
(Actually, recognizing an infeasible parent state could happen even before
simplification was introduced: because of the imperfection of the solver,
adding a new constraint can reveal that the parent state had already been
infeasible.)
Before this patch, we simply added the simplified symbol to the equivalence
class. In this patch, after we have added the simplified symbol, we remove the
old (more complex) symbol from the members of the equivalence class
(ClassMembers). Removing the old symbol is beneficial because during the next
iteration of the simplification we don't have to consider the old symbol again.
Contrary to how we handle ClassMembers, we don't remove the old Sym->Class
relation from the ClassMap. This is important for two reasons: (1) the
constraints of the old symbol can still be found via the equivalence class it
used to be a member of; (2) we spare one removal and thus one additional tree
in the ClassMap forest.
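
An illustrative sketch of this bookkeeping (the container types, the helper
functions and the EquivalenceClassState wrapper are hypothetical; the real
code uses the analyzer's immutable maps):

  // Hypothetical aggregate holding the two containers mentioned above.
  struct EquivalenceClassState {
    ClassMembersTy ClassMembers; // Class -> set of member symbols
    ClassMapTy ClassMap;         // Sym -> Class
  };

  static void replaceMember(EquivalenceClassState &S, EquivalenceClass Class,
                            SymbolRef OldSym, SymbolRef SimplifiedSym) {
    // ClassMembers: drop the old, complex symbol and add the simplified one,
    // so the next simplification iteration does not revisit OldSym.
    S.ClassMembers = removeMember(S.ClassMembers, Class, OldSym);
    S.ClassMembers = addMember(S.ClassMembers, Class, SimplifiedSym);

    // ClassMap: add SimplifiedSym -> Class, but intentionally keep the
    // OldSym -> Class entry:
    //  (1) the constraints of OldSym stay reachable via its former class;
    //  (2) we spare one removal, i.e. one extra tree in the ClassMap forest.
    S.ClassMap = addMapping(S.ClassMap, SimplifiedSym, Class);
  }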
Performance and complexity: Let us assume that in a State we have N non-trivial
equivalence classes and that all constraints and disequality info is related to
non-trivial classes. In the worst case, we can simplify only one symbol of one
class in each iteration. The number of symbols in one class cannot grow because
we replace the old symbol with the simplified one. Also, the number of
equivalence classes can only decrease, because the algorithm optionally does a
merge operation. In this case we need N iterations to reach the fixpoint. Thus,
the number of steps needed in the worst case is proportional to N*N. Empirical
results (attached) show only a barely noticeable run-time and peak-memory
discrepancy compared to the baseline. In my opinion, these differences could be
the result of measurement error.
This worst-case scenario also covers the cases where trivial classes in the
constraints and in the disequality map are transformed into a State that
contains only non-trivial classes, because the algorithm performs merge
operations: a merge of two trivial classes results in one non-trivial class.
IMO we should have an llvm::Statistic here, tracking the maximum iteration
count needed to reach the fixpoint and the average iteration count.
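
A possible sketch with llvm::Statistic counters (the names and the DEBUG_TYPE
are illustrative; the average can be derived from the total and the run count
in the printed statistics):

  #include "llvm/ADT/Statistic.h"

  #define DEBUG_TYPE "constraint-manager"

  STATISTIC(NumSimplificationRuns,
            "The # of times the simplification fixpoint iteration was started");
  STATISTIC(NumSimplificationIterations,
            "The total # of iterations needed to reach the fixpoints");
  STATISTIC(MaxSimplificationIterations,
            "The maximum # of iterations needed to reach a fixpoint");

  // Inside the fixpoint loop (sketch):
  //   unsigned Iterations = 0;
  //   do {
  //     ++Iterations;
  //     /* simplify, re-assume, check whether anything changed */
  //   } while (StateChanged);
  //   ++NumSimplificationRuns;
  //   NumSimplificationIterations += Iterations;
  //   MaxSimplificationIterations.updateMax(Iterations);
  //   // Average = NumSimplificationIterations / NumSimplificationRuns.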