This is an archive of the discontinued LLVM Phabricator instance.

[builtins] Divide shouldn't underflow if rounded result would be normal.
Closed, Public

Authored by efriedma on Mar 6 2019, 7:01 PM.

Details

Summary

We were treating certain edge-case results that are actually normal as denormal, and flushing them to zero; we shouldn't do that. I'm not sure this is the cleanest way to implement this edge case, but I wanted to avoid adding any code on the common path.

(This doesn't touch the behavior for results that are actually denormal; they're still flushed to zero.)
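For illustration, here is a minimal standalone reproduction of the kind of case being fixed, built around the (DBL_MIN * 2) / 2 example from the discussion below (a sketch for illustration, not the patch's actual test):

  #include <float.h>
  #include <stdio.h>

  int main(void) {
    /* DBL_MIN (2^-1022) is the smallest normal double, so the exact
     * quotient (DBL_MIN * 2) / 2 is DBL_MIN itself and must stay
     * normal; flushing it to zero misclassifies a normal result. */
    volatile double x = DBL_MIN * 2.0; /* 2^-1021, comfortably normal */
    volatile double q = x / 2.0;       /* exactly DBL_MIN = 2^-1022 */
    printf("%a\n", q);                 /* expect 0x1p-1022, not 0x0p+0 */
    return q == DBL_MIN ? 0 : 1;
  }

(The volatile keeps the division from being constant-folded, so a soft-float target actually exercises the builtins' divide.)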


Event Timeline

efriedma created this revision. Mar 6 2019, 7:01 PM
compnerd accepted this revision. Mar 11 2019, 10:34 AM

It would be nice if @scanon would also take a look, but this seems like a good thing to fix.

This revision is now accepted and ready to land. Mar 11 2019, 10:34 AM
scanon requested changes to this revision. Mar 11 2019, 4:34 PM

In the parlance of IEEE 754, there are two ways to "detect tininess": "before rounding" and "after rounding". The standard doesn't define how to flush subnormal results, but in practice most HW flushes results that are "tiny". The existing code flushes as though tininess is detected before rounding. This proposed update flushes as though tininess were detected after rounding.

Of the mainstream platforms supported by LLVM, only x86 detects tininess after rounding, and these functions should approximately never be used on x86, because hardware floating-point is always available there. So I would lean towards keeping the existing behavior, so that, e.g., soft-float and hard-float ARM behave the same way.
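For concreteness, the window where the two conventions disagree for IEEE double sits just below the smallest normal; the sketch below only prints the boundary values (illustrative, not part of the patch):

  #include <float.h>
  #include <math.h>
  #include <stdio.h>

  int main(void) {
    /* An exact result below 2^-1022 but within half an ulp of it is
     * "tiny before rounding", yet round-to-nearest-even takes it up
     * to DBL_MIN = 2^-1022, so it is not "tiny after rounding"; the
     * two conventions disagree exactly on that window. */
    double smallest_normal = DBL_MIN;                   /* 2^-1022 */
    double largest_subnormal = nextafter(DBL_MIN, 0.0); /* 2^-1022 - 2^-1074 */
    printf("smallest normal:   %a\n", smallest_normal);
    printf("largest subnormal: %a\n", largest_subnormal);
    return 0;
  }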

This revision now requires changes to proceed. Mar 11 2019, 4:34 PM

efriedma added a comment.

Okay, that makes sense in general.

Do you have any suggestions for fixing the algorithm so it doesn't consider the result "tiny" when it actually isn't?

scanon added a comment.

These results *are* tiny in the before-rounding sense.

efriedma added a comment.

Rereading my question, it isn't really clear; I'll try to explain in more detail.

The current algorithm for computing the "unrounded" result does not attempt to ensure that it is actually between the two nearest floating-point values. Instead, it only tries to produce a value such that a round-to-even step will result in the correct answer. This means that the "writtenExponent < 1" check does not reliably compute whether a value is tiny: there are cases where the check fails, but the mathematical result of the division is actually in the normal range. The most obvious example of this is my testcase: DBL_MIN is normal, so (DBL_MIN * 2) / 2 should also be normal.

Given the intermediate results the code currently computes, is there some reasonable way to detect this case?
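For what it's worth, one shape such a detection could take: if the pre-rounding quotient looks denormal but adding the round bit carries out of the stored significand field, the rounded result has regained the implicit leading 1 and is normal. The sketch below is a toy with hypothetical names, not the builtins' actual code:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SIGNIFICAND_BITS 52 /* double's stored significand width */
  #define SIGNIFICAND_MASK ((UINT64_C(1) << SIGNIFICAND_BITS) - 1)

  /* Returns true if rounding carries out of the significand field,
   * i.e. the rounded result is normal even though the unrounded one
   * looked tiny. */
  static bool rounds_up_to_normal(uint64_t quotientSignificand, bool roundBit) {
    uint64_t rounded = (quotientSignificand & SIGNIFICAND_MASK) + roundBit;
    return (rounded & ~SIGNIFICAND_MASK) != 0;
  }

  int main(void) {
    printf("%d\n", rounds_up_to_normal(SIGNIFICAND_MASK, true));  /* 1 */
    printf("%d\n", rounds_up_to_normal(SIGNIFICAND_MASK, false)); /* 0 */
    return 0;
  }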

scanon accepted this revision. Mar 11 2019, 5:17 PM

Ah, now I see what you're talking about. And in fact, because of the way divide works out, there's a little gap of results that aren't even possible to achieve just below each binade boundary, so the code you have here will work out fine. We *should* add a comment to clarify this somewhat, but I'm happy to do that in a separate commit. LGTM.
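The gap claim is easy to sanity-check exhaustively at a toy precision; the same counting argument applies to double's 53-bit significands (an assumed sketch, not part of the patch):

  #include <stdint.h>
  #include <stdio.h>

  /* For p-bit significands ma < mb in [2^(p-1), 2^p), the exact ratio
   * ma/mb can never lie in the half-ulp window (1 - 2^-(p+1), 1): that
   * would require 2^(p+1) * (mb - ma) < mb, which is impossible since
   * mb - ma >= 1 and mb < 2^p. Verified exhaustively here for p = 10. */
  int main(void) {
    const int p = 10;
    const int64_t lo = INT64_C(1) << (p - 1), hi = INT64_C(1) << p;
    const int64_t scale = INT64_C(1) << (p + 1);
    long hits = 0;
    for (int64_t ma = lo; ma < hi; ++ma)
      for (int64_t mb = ma + 1; mb < hi; ++mb)
        if (scale * (mb - ma) < mb) /* ma/mb > 1 - 2^-(p+1) */
          ++hits;
    printf("quotients in the gap below a binade boundary: %ld\n", hits); /* 0 */
    return 0;
  }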

This revision is now accepted and ready to land. Mar 11 2019, 5:17 PM
This revision was automatically updated to reflect the committed changes.