This patch greatly improves LVI's ability to prove inductive invariants in loops.

There are two parts to the change:

- Teach the getValueAt interface to use the underlying LVI framework. This should have been a simple change that strengthened our ability to prove predicates, but it ran into the second issue.
- Depending on query order, LVI would sometimes fail to establish facts about loop phis and then poison all future queries. By adding the check in (1), this became far more common. This turns out to be the more interesting and much more subtle part.

The root problem resulting in (2) was that when directly querying the PHI node in a loop, we were applying edge constraints only after visiting every other node. We'd push the PHI onto the stack, push each operand in turn, and eventually hit the PHI again. At that point, we'd stop the search (required to prevent infinite recursion), and use Overdefined for the value of the PHI. We'd then unwind, applying each set of constraints in turn, and then finally apply the edge constraints of the PHI as the very last step.

This often worked out fine, but in some cases we ran into a precision problem. (Consider the test case @test2 in induction.ll.) If we have two constant ranges with no precise unique intersection (e.g. <0, 400> and <1, INT_MIN+1>), we'd end up with an imprecise range for the PHI. Since we cache this value, it would end up poisoning all later queries, even those which would have succeeded if rerun in full.
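The intersection imprecision can be seen with a toy model of single-interval ranges; this is a hedged sketch, not LLVM's actual ConstantRange API (the helpers and the 8-bit width are invented for illustration):

```python
# Toy model of single-interval ranges over i8 (0..255).

def range_set(lo, hi, bits=8):
    """Concrete values of the half-open interval [lo, hi), wrapping mod 2**bits."""
    n = 2 ** bits
    lo, hi = lo % n, hi % n
    if lo < hi:
        return set(range(lo, hi))
    return set(range(lo, n)) | set(range(0, hi))

def is_single_interval(values, bits=8):
    """Can `values` be represented exactly as one (possibly wrapped) interval?"""
    n = 2 ** bits
    return any(range_set(lo, hi, bits) == values
               for lo in range(n) for hi in range(n))

a = range_set(0, 100)    # non-wrapped range [0, 100)
b = range_set(90, 10)    # wrapped range [90, 10): values >= 90 or < 10

exact = a & b            # {0..9} | {90..99}: two disjoint pieces

# No single interval matches `exact`, so a single-range representation
# must over-approximate -- and once that imprecise answer is cached,
# every later query that depends on it is poisoned.
assert not is_single_interval(exact)
```

The key point is that the precise intersection is two disjoint pieces, and a representation limited to one interval has to pick an over-approximation.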

The root issue is that we end up trying to analyse an arithmetic operation ("add i32 %iv, 1") with insufficient information available to prove no overflow, even when the loop itself statically guaranteed it! This would end up causing the intersection imprecision mentioned above.
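To see why the missing fact matters, here's a hedged sketch (the helper and the concrete bound 400 are invented for illustration): with a back-edge guard like `%iv < 400` we know `%iv` is in [0, 400), so the increment provably cannot wrap; with an overdefined `%iv`, it can.

```python
# Signed-i32 range arithmetic sketch (not LVI's actual code).
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def add_const(lo, hi, k):
    """Add k to the signed half-open range [lo, hi); report possible overflow."""
    overflows = (hi - 1) + k > INT_MAX or lo + k < INT_MIN
    return lo + k, hi + k, overflows

# Edge constraint available: "add i32 %iv, 1" provably does not overflow.
assert add_const(0, 400, 1) == (1, 401, False)

# Overdefined operand (full range): the increment may wrap past INT_MAX.
assert add_const(INT_MIN, INT_MAX + 1, 1)[2]
```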

The fix for this is to essentially visit the first value twice. By ensuring that the PHI node is the *last* thing pushed, when we start unwinding we get to take advantage of its edge constraints and prove a stronger constraint over all the other values (specifically, the increment).
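The effect of the push ordering can be sketched with a toy solver; the function names are invented, and the model assumes a loop of the shape `%iv = phi [0, entry], [%inc, backedge]; %inc = add i32 %iv, 1` with the back edge guarded by `%iv < 400`:

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1
FULL = (INT_MIN, INT_MAX + 1)          # "Overdefined" as a half-open range

def edge_constrain(r):
    """Apply the back-edge guard iv < 400."""
    lo, hi = r
    return max(lo, 0), min(hi, 400)

def add_one(r):
    """Range of iv + 1, or FULL when the add might overflow."""
    lo, hi = r
    return FULL if (hi - 1) + 1 > INT_MAX else (lo + 1, hi + 1)

def solve_inc(phi_pushed_last):
    """Range cached for %inc under each push order."""
    iv = FULL                          # hit the PHI a second time: Overdefined
    if phi_pushed_last:                # the fix: the PHI's edge constraints
        iv = edge_constrain(iv)        # are the first thing applied on unwind
    return add_one(iv)                 # this result is what gets cached

assert solve_inc(True) == (1, 401)     # precise: no overflow possible
assert solve_inc(False) == FULL        # imprecise, and poisons later queries
```

With the PHI pushed last, the edge constraint narrows the operand before the increment is evaluated, so the cached range for the increment stays precise.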

Worth noting is that this appears to be the same solution previously chosen for getEdgeValue, even though the code doesn't explicitly say so.

This brings up a general point: maybe we should be handling the recursion limit differently. While we do need to prevent *infinite* recursion, this patch only fixes part of the issue. If you start with another node in the cycle, an unfortunate choice of starting point can lead to the same bad result. It might be worth considering (in future work!) always allowing the recursion to visit PHI nodes twice (which would require visiting some other nodes twice as well). This would be far more stable.

p.s. In case anyone is curious, the alternate approach (to this part of the patch only) would be to directly query non-recursive facts after deciding not to push operands. I tried that, and it ended up being far more complicated. In particular, it required a change in how we prevent infinite recursion, which turned into a real mess. I think this approach is much cleaner.