
[builtins] Fix overflow issue for complex division with big numbers
Needs Revision · Public

Authored by zsrkmyn on Jun 9 2019, 9:58 AM.



The __div*c3 functions overflow with large inputs, e.g. (DBL_MAX + DBL_MAX*i) / (0.5*DBL_MAX + 0.5*DBL_MAX*i) for __divdc3.
This patch fixes them.
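For illustration, a sketch of the failure mode using the textbook formula (this is an illustrative helper, not compiler-rt's actual code): the naive quotient (a+bi)/(c+di) forms a*c + b*d and c*c + d*d directly, so DBL_MAX-scale inputs push the intermediates to infinity and the division yields NaN, even though the true result of the example above is exactly 2 + 0i.

```c
#include <float.h>
#include <math.h>

/* Naive complex division (a+bi)/(c+di), textbook formula.
   For DBL_MAX-scale inputs, every product below overflows:
   denom becomes +inf, a*c + b*d becomes +inf, and
   inf / inf as well as (inf - inf) / inf produce NaN. */
static void naive_div(double a, double b, double c, double d,
                      double *re, double *im) {
  double denom = c * c + d * d;      /* overflows to +inf     */
  *re = (a * c + b * d) / denom;     /* inf / inf  -> NaN     */
  *im = (b * c - a * d) / denom;     /* (inf-inf)/inf -> NaN  */
}
```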


Event Timeline

zsrkmyn created this revision. · Jun 9 2019, 9:58 AM
Herald added projects: Restricted Project, Restricted Project.
Herald added subscribers: Restricted Project, llvm-commits.
MaskRay added a subscriber: MaskRay. · Jul 9 2019, 7:16 PM
fzou1 added a subscriber: fzou1. · Sep 19 2019, 10:33 PM

@scanon, is this something you can help with?

scanon requested changes to this revision. · Nov 20 2019, 6:11 AM
scanon added inline comments.

We don't have a clear definition of the required semantics for these builtins, but should FMA formation be allowed here?

As written, it looks like we're at the mercy of the fp-contract flags passed while building compiler-rt, which means the tests, which assume exact cancellation, will fail if the default ever becomes "on". I would suggest either relaxing the tests or adding an explicit #pragma STDC FP_CONTRACT OFF in these functions.

The latter is the simpler option, and doesn't have any real downside; we can discuss relaxing it later if we want.
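As a sketch of the second option (a hypothetical helper, not the patch's code), the pragma goes at the top of the function body; per C11 it then applies to the whole enclosing compound statement, forbidding FMA formation so the two products are rounded individually and the subtraction cancels exactly regardless of the -ffp-contract setting used to build compiler-rt:

```c
/* Hypothetical helper computing the imaginary-part numerator
   b*c - a*d of a complex quotient. With contraction disabled,
   b*c and a*d each round once before the subtraction, so
   b == a and c == d gives exactly 0.0; with an FMA, the
   unrounded product can break that exact cancellation. */
static double cross_diff(double a, double b, double c, double d) {
#pragma STDC FP_CONTRACT OFF
  return b * c - a * d;
}
```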


It would be good to build up these tests a bit more. In particular, for double we should include the test cases from Baudin & Smith's "A Robust Complex Division in Scilab"; there are only 10, but they cover most of the hardest-to-get-right finite-result cases for complex division.

They are trying to implement a division with a *componentwise* error bound. That's a non-goal for compiler-rt, so we shouldn't expect to get them exactly right, but we should be able to get them with small relative error.

This revision now requires changes to proceed. · Nov 20 2019, 6:11 AM
scanon added inline comments. · Nov 20 2019, 6:39 AM

This rescaling strategy is still prone to overflow. Consider a = b = DBL_MAX, c = d = 1. The rescaled a*c + b*d will still overflow, because max(c,d) is 1, but the result should be finite.

In general, I don't think that you can implement a rescaled division like this without considering the scaling of both (a,b) and (c,d); it doesn't suffice to only use the scaling of (c,d) alone.


On targets where we have hardware double-precision, it would be a huge win for float to simply promote to double and skip rescaling entirely. We still need the inf/nan checks, but for finite results this always gives the right answer.

This can be a follow-on patch, however.