In the past we sort of pretended that float might be implementable as a non-IEEE type, but that was never realistically going to work: exotic FP types would need to be added to the IR first. Turning these integer bit operations into FP operations enables FP tracking optimizations.
LGTM
(For future reference, most of the discussion happened on https://reviews.llvm.org/D151934 .)
I'm having some problems understanding how this interacts with -fdenormal-fp-math. Our target uses preserve-sign, and the fabs is lowered into a floating-point instruction that flushes the result to zero.
So a test case doing

```
float input = -0x1.fffffcp-127 + 0.0f;

int main() {
  union { float f; ui32_t i; } rep;
  rep.f = input;
  const ui32_t aInt = rep.i;
  const ui32_t aAbs = aInt & 0x7ffffffful;
  if (aAbs == 0) {
    printf("masked %f is zero\n", input);
  } else {
    printf("masked %f is not zero\n", input);
  }
  ...
}
```
used to result in "masked -0.000000 is not zero" being printed (since the store to rep.f is storing a denormal number to memory).
But with this patch the compiler introduces a fabs operation, and then our backend picks a floating-point instruction that flushes the result to zero. So with this patch it starts printing "masked -0.000000 is zero" instead.
So is this transform bad when denormal-fp-math isn't set to ieee?
Or is it something in the frontend (or the test program) that is bad here? What should happen when storing that denormal literal to memory? Should that involve a flush to zero as well?
Your fabs is incorrectly implemented, then. fabs is defined as a non-canonicalizing bit operation; you can only use that instruction in a context where the result is consumed by a canonicalizing operation.