With floating point we potentially get different results on different CPUs. This patch helps to avoid that.
The change is a follow-up to https://reviews.llvm.org/D29862.
It should be NFC (at least on all CPUs I have in my pool).
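As a hedged, standalone illustration of the hazard being discussed (this is not code from the patch): the same double-precision expression can produce different results depending on whether a target keeps intermediates in 80-bit x87 registers or rounds every operation to a 64-bit double.

```cpp
// Hypothetical illustration, not from the patch: with x87 80-bit
// intermediates the +1.0 below survives, while with strict 64-bit
// doubles (e.g. SSE2) it is rounded away.
#include <cstdio>

int main() {
  volatile double Big = 1e16; // volatile defeats constant folding
  volatile double One = 1.0;
  double R = (Big + One) - Big;
  printf("%g\n", R); // 1 with extended precision, 0 with plain doubles
  return 0;
}
```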
lib/Transforms/Scalar/LoopStrengthReduce.cpp

| Line | Comment |
|---|---|
| 1069 | Could you add a comment explaining how the optimization works? |
| 1082 | Add an assert that D != 0. |
| 1087 | I am confused about the meaning of this operator. |
lib/Transforms/Scalar/LoopStrengthReduce.cpp

| Line | Reply |
|---|---|
| 1069 | I suppose Numerator and Denominator will fit in 31 bits in most cases. However, if one of them exceeds 31 bits, then operator+ could overflow (see the sketch after this table). |
| 1082 | OK. |
| 1087 | Well, this is specific to our case: the probability of not selecting (PoNS) for reg R. |
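To make the 31-bit concern above concrete, here is a hedged reconstruction of the kind of fraction addition being reviewed; the class name and layout are assumptions, not the patch's actual code.

```cpp
// A hedged reconstruction (the real class in the patch may differ) of the
// overflow concern at line 1069: fraction addition multiplies numerators
// and denominators, so 64-bit intermediates are only safe while the
// operands stay within 31 bits.
#include <cstdint>

struct Fraction {
  uint64_t Numerator;
  uint64_t Denominator; // assumed non-zero, per the line 1082 comment

  Fraction operator+(const Fraction &RHS) const {
    // a/b + c/d = (a*d + c*b) / (b*d). With all four values below 2^31,
    // each product is below 2^62 and their sum below 2^63, so uint64_t
    // cannot wrap; past 31 bits the multiplications may overflow silently.
    return {Numerator * RHS.Denominator + RHS.Numerator * Denominator,
            Denominator * RHS.Denominator};
  }
};
```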
What's the rationale for using rationals here instead of a (simpler) fixed-point representation, if we want to get rid of float?
> What's the rationale for using rationals here instead of a (simpler) fixed-point representation, if we want to get rid of float?
No specific reason; actually, it is a kind of fixed point. I've looked into the LLVM support classes and have not found an appropriate one.
PING.
Should I rewrite this using fixed point?
Or is the newly implemented fraction class acceptable?
Drive-by comment: how about putting FixedPoint64 in ADT and adding one or two unit tests?
The class supports only unsigned values and is missing some general operators. However, I'm OK with adding it to ADT (and maybe supporting more later).
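For context, a minimal sketch of what such a class might look like, assuming an unsigned Q32.32 layout (the name FixedPoint64 is from this thread; everything else is an assumption, not the patch's actual implementation):

```cpp
// A minimal sketch of an unsigned Q32.32 fixed-point type; layout and
// operators are assumptions for illustration only.
#include <cstdint>

class FixedPoint64 {
  static constexpr unsigned FracBits = 32;
  uint64_t V; // value scaled by 2^32

  explicit FixedPoint64(uint64_t Raw) : V(Raw) {}

public:
  static FixedPoint64 get(uint32_t Int) {
    return FixedPoint64(uint64_t(Int) << FracBits);
  }

  FixedPoint64 operator+(FixedPoint64 RHS) const {
    return FixedPoint64(V + RHS.V); // integer overflow ignored in this sketch
  }

  FixedPoint64 operator*(FixedPoint64 RHS) const {
    // Keep the full 128-bit product before dropping the extra scale factor;
    // unsigned __int128 is a GCC/Clang extension.
    return FixedPoint64(uint64_t(((unsigned __int128)V * RHS.V) >> FracBits));
  }

  bool operator<(FixedPoint64 RHS) const { return V < RHS.V; }
  uint64_t raw() const { return V; }
};
```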
As for tests, this change should not change anything; the behavior should be the same (floats are replaced with fixed point to avoid potentially different results on different CPUs).
I think that's contradictory -- you're saying that this change must be NFC; but it will avoid potentially different results on different CPUs?
In any case, by "test" I meant testing the FixedPoint64 class itself.
> I think that's contradictory -- you're saying that this change must be NFC; but it will avoid potentially different results on different CPUs?
I don't have a case in mind. However, I agreed with Quentin that floats "can introduce subtle difference from one target to other". That comes from D29826.
> In any case, by "test" I meant testing the FixedPoint64 class itself.
OK, I'll add such tests.
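For illustration, a hedged sketch of what such a test could look like, assuming the class lands in llvm/ADT with the interface sketched earlier in this thread (the header path and API are assumptions):

```cpp
// Hypothetical unit test for the FixedPoint64 sketch above, following
// LLVM's usual googletest conventions.
#include "llvm/ADT/FixedPoint64.h" // hypothetical header
#include "gtest/gtest.h"

TEST(FixedPoint64Test, BasicArithmetic) {
  FixedPoint64 Two = FixedPoint64::get(2);
  FixedPoint64 Three = FixedPoint64::get(3);
  EXPECT_EQ((Two + Three).raw(), FixedPoint64::get(5).raw());
  EXPECT_EQ((Two * Three).raw(), FixedPoint64::get(6).raw());
  EXPECT_TRUE(Two < Three);
}
```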
@gottesmm can you take a look at this? You're more familiar with the APFloat API than I am.
I think it would help if the motivation were spelled out in the differential's description.
- Why is this wanted?
- What does this help with?
- Does this change affect anything?
- Can the change be tested?
- etc.
> Why is this wanted?

It was requested by Quentin in https://reviews.llvm.org/D29862.
> What does this help with?
It potentially helps to avoid different results on different CPUs due to the use of floating-point arithmetic.
> Does this change affect anything?
No. It is NFC on all CPUs I have in my pool.
> Can the change be tested?
I don't see how.
> etc.
> Could you add a comment explaining how the optimization works?
The reduction does not look right to me, but my math classes are way behind me :P