Currently, an undef value is reduced to 0 when it is added to a set of potential values.
This patch introduces a flag for undef values instead. With it we can, for example, merge the two states {undef} and {1} into {1}, because the undef can be reduced to 1.
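To make the merge rule concrete, here is a minimal, self-contained sketch of one way the merge could behave; PotentialValuesSketch and its members are hypothetical names for illustration, not the actual Attributor interface:

```cpp
#include <cassert>
#include <set>

// Hypothetical stand-in for a potential-values state: undef is tracked as a
// separate flag instead of being folded to 0 on insertion.
struct PotentialValuesSketch {
  std::set<int> Set;             // concrete potential values
  bool UndefIsContained = false; // the new flag

  void unionWithUndef() { UndefIsContained = true; }

  void unionWith(const PotentialValuesSketch &R) {
    Set.insert(R.Set.begin(), R.Set.end());
    UndefIsContained |= R.UndefIsContained;
    // Once a concrete value is known, undef can be reduced to it, so the
    // flag adds nothing: {undef} merged with {1} becomes {1}.
    if (!Set.empty())
      UndefIsContained = false;
  }
};

int main() {
  PotentialValuesSketch A, B;
  A.unionWithUndef(); // A = {undef}
  B.Set.insert(1);    // B = {1}
  A.unionWith(B);
  assert(A.Set.count(1) && !A.UndefIsContained); // merged state is {1}
}
```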
Cool. I think the code is fine, one minor simplification to consider below. We need tests though.
(Inline comment on llvm/include/llvm/Transforms/IPO/Attributor.h, lines 3462–3464.)
It seems difficult for me to reflect this change in the IR of tests because we have not implemented any updateImpl for AAPotentialValues, and nothing makes use of it yet.
Should I add tests and land this after implementing them?
I would prefer to land this with a test. If we need another patch first, so be it. However, I think AACallSiteReturnedFromReturned and AAReturnedFromReturnedValues could save us here.
What happens if you have
    static int foo(int c) {
      int x;
      if (c == -2) return -2;
      if (c == 2) return 2;
      return x;
    }

    int bar(int c) {
      return foo(c) != 0;
    }
With the patch I'd expect bar to just return 1. Is this the case?
Thank you for giving an example. We can calculate that the returned potential values are {2,-2} without another patch.
But I think we need to implement AAPotentialValuesFloating to calculate that the result of the comparison in bar is 1.
Above all, AAPotentialValues is not identified at all unless we connect it with AAValueSimplify.
Therefore, I will upload another patch to do that soon.
I am not sure if the following scenario is possible, but if it is, it might be problematic when undef is unified with multiple values. E.g., say we found the following potential values: 0, 1, undef. If we now assume this is 0, 1, we might simplify `and %x, 3` to `%x`. But that is wrong if %x is undef. Note that simplifications that result in a constant should be fine either way. As I said, I am not sure if the potential values could be used like that (possibly in the future).
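A sketch of the guard this concern implies, assuming a flag-plus-set state like the one above; mayReplaceWithValue and mayFoldToConstant are hypothetical helpers, not part of the patch:

```cpp
#include <cassert>
#include <set>

// With potential values {0, 1}, rewriting `and %x, 3` to `%x` is sound, but
// if undef is also contained, %x may really be, say, 4, while `and undef, 3`
// can never exceed 3. Hence: allow a value-for-value rewrite only when undef
// is not contained and at least one concrete value is known.
bool mayReplaceWithValue(const std::set<int> &PotentialValues,
                         bool UndefIsContained) {
  return !UndefIsContained && !PotentialValues.empty();
}

// Folding to the single constant C is fine either way: undef may simply be
// assumed to equal C, so the flag does not block this fold.
bool mayFoldToConstant(const std::set<int> &PotentialValues,
                       bool /*UndefIsContained*/, int C) {
  return PotentialValues.size() == 1 && *PotentialValues.begin() == C;
}

int main() {
  std::set<int> Vals = {0, 1};
  assert(mayReplaceWithValue(Vals, /*UndefIsContained=*/false));
  assert(!mayReplaceWithValue(Vals, /*UndefIsContained=*/true));
  std::set<int> One = {5};
  assert(mayFoldToConstant(One, /*UndefIsContained=*/true, 5));
}
```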
It doesn't seem to be a problem now. Two results from the IRC discussion:
- We need a test for the above situation, to make sure we don't regress it.
- We should make sure we merge the possible values with undef properly, e.g., if you intersect the two sets of possible values {0,1,undef} and {0,7}, then we should get {0,7} (see the sketch after this list).
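A minimal sketch of that intersection rule, with hypothetical names (this is not the existing PotentialValueState::intersectWith mentioned below):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <set>

struct PVS { // hypothetical flag-plus-set state
  std::set<int> Set;
  bool Undef = false;
};

PVS intersectWith(const PVS &A, const PVS &B) {
  PVS R;
  // Values present in both concrete sets always survive.
  std::set_intersection(A.Set.begin(), A.Set.end(), B.Set.begin(),
                        B.Set.end(), std::inserter(R.Set, R.Set.begin()));
  // undef on one side can be reduced to any concrete value of the other.
  if (A.Undef)
    R.Set.insert(B.Set.begin(), B.Set.end());
  if (B.Undef)
    R.Set.insert(A.Set.begin(), A.Set.end());
  R.Undef = A.Undef && B.Undef; // undef survives only if both sides have it
  return R;
}

int main() {
  PVS A, B;
  A.Set = {0, 1};
  A.Undef = true; // A = {0, 1, undef}
  B.Set = {0, 7}; // B = {0, 7}
  PVS R = intersectWith(A, B);
  assert(R.Set == std::set<int>({0, 7}) && !R.Undef); // R = {0, 7}
}
```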
I'm not sure we have to take care of intersection. Currently, there is PotentialValueState::intersectWith, but it is not used anywhere. I cannot come up with a case where we need it, and I think it should be deleted.
What happens if you have something like
    int foo(int c0, int c1, int c2, int c3) {
      int undef;
      int x0 = c0 ? 0 : 1;
      int x1 = c1 ? x0 : undef;
      int y2 = c2 ? 0 : 7;
      int z3 = c3 ? x1 : y2;
      return z3 < 7;
    }

    int bar(int c0, int c1) {
      int undef;
      int x0 = c0 ? 0 : 1;
      int x1 = c1 ? x0 : undef;
      return x1 == 7;
    }

    int bar(int c0, int c1) {
      int undef;
      int x0 = c0 ? 0 : undef;
      int x1 = c1 ? x0 : 1;
      return x1 == 7;
    }
I'm sorry, I don't understand what you expect. The following is my understanding. Could you point out where I'm wrong?
- I don't understand the essential difference between the two bar functions. I think the potential values of x1 are {0,1}, and the returned value can be simplified to false in both cases.
- As for foo, we cannot simplify the returned value because the potential value set of z3 is {0,1,7}. I'm not sure how we can take advantage of undef in these examples.
LGTM.
I think you are correct. The examples do not show what I want (we should add them anyway).
- fix initialization at callsite argument position (value-simplify.ll is affected)
- add tests