The __div*c3 functions overflow for large operands, e.g. (DBL_MAX + DBL_MAX*i) / (0.5*DBL_MAX + 0.5*DBL_MAX*i) for __divdc3, even though the true quotient is finite.
This patch attempts to fix them.
We don't have a clear definition of the required semantics for these builtins, but should FMA formation be allowed here?
As written, we're at the mercy of the fp-contract flags used to build compiler-rt, which means the exact cancellation the tests rely on will break if the default ever becomes "on". I would suggest either relaxing the tests or adding an explicit #pragma STDC FP_CONTRACT OFF in these functions.
The latter is the simpler option, and doesn't have any real downside; we can discuss relaxing it later if we want.
It would be good to build up these tests a bit more. In particular, for double we should include the test cases from Baudin & Smith's "A Robust Complex Division in Scilab" (https://arxiv.org/abs/1210.4539); there are only 10, but they cover most of the hardest-to-get-right finite-result cases for complex division.
Baudin & Smith aim for a division with a *componentwise* error bound. That's a non-goal for compiler-rt, so we shouldn't expect to match their results exactly, but we should be able to get each case with small relative error.
This rescaling strategy is still prone to overflow. Consider a = b = DBL_MAX, c = d = 1. The rescaled a*c + b*d will still overflow, because max(|c|, |d|) is 1, but the result should be finite.
In general, I don't think you can implement a rescaled division like this without considering the scaling of both (a, b) and (c, d); it doesn't suffice to use the scaling of (c, d) alone.
On targets where we have hardware double-precision, it would be a huge win for float to simply promote to double and skip rescaling entirely. We still need the inf/nan checks, but for finite results this always gives the right answer.
This can be a follow-on patch, however.