- User Since: Jun 10 2014, 9:32 AM (240 w, 5 d)
Fri, Jan 11
Dec 17 2018
Can you also add a check for .double infinity? It looks like that's likely missing too.
Dec 7 2018
Oct 9 2018
Are these documented anywhere? I haven't seen them documented in any of the patches so far. What do they return for NaN inputs?
Oct 3 2018
Sep 24 2018
I suspect that we'd rather use ilogb in the long term, but as a like-for-like replacement this looks OK.
Jul 19 2018
Jul 18 2018
May 22 2018
IIRC the optimization of divide-by-power-of-two --> multiply-by-inverse does not occur at -O0, so it would be better to multiply by 2^(-fbits) instead.
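A sketch of the suggestion, using a hypothetical fraction width of 8 bits (`fixed_to_float` is an illustrative name, not the function under review):

```c
/* Dividing by (1 << fbits) is exact, but emits an actual divide at -O0.
   Multiplying by the exactly representable power of two 2^-fbits gives
   bit-identical results with a multiply instead.  Here fbits == 8. */
static float fixed_to_float(int v) {
    return (float)v * 0x1p-8f;   /* same value as v / 256.0f */
}
```

Powers of two are exact in binary floating point, so this transformation loses no precision, unlike multiply-by-inverse for a general divisor.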
May 10 2018
May 9 2018
One more question: the caller is responsible for closing the file when they're done, right?
Two quick notes as someone who has never used these functions:
- Name is really the *path* of the file to open, right?
- What happens to ResultFD if the function fails?
May 8 2018
May 4 2018
Apr 26 2018
I like Chandler's wording. Something like:
Apr 25 2018
@gottesmm can you take a look at this? You're more familiar with the APFloat API than I am.
Apr 20 2018
Tangential question: Do we have an intrinsic floating -> integer conversion with defined semantics for out-of-range values?
Apr 6 2018
I also wonder whether requiring fast-math to allow tree reductions is overkill. Tree reductions can be implemented reasonably efficiently on many architectures, while linearly ordered reductions seem to me to be more of a niche.
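To illustrate why reassociating a reduction formally needs relaxed FP semantics even when the tree order is numerically reasonable, here's a small C sketch (the names and the demo values are illustrative):

```c
/* Linear and tree summation of the same four values can round
   differently; neither answer is "wrong", but they are not equal. */
static const double demo[4] = {1.0, 0x1p-53, 0x1p-53, 0x1p-53};

static double linear_sum(const double *v) {
    return ((v[0] + v[1]) + v[2]) + v[3];   /* strict left-to-right */
}
static double tree_sum(const double *v) {
    return (v[0] + v[1]) + (v[2] + v[3]);   /* pairwise */
}
```

In the linear order, each tiny addend is absorbed by 1.0 (round-to-nearest-even), while the tree order first accumulates the small terms so their sum survives.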
Mar 26 2018
Two questions, to which I do not know the answer:
This is the class of optimizations that I would call "formally allowed by the standard, but extremely likely to break things and surprise people." Which isn't to say that we shouldn't do it, just ... be prepared.
Mar 20 2018
I'm OK with this.
Mar 16 2018
Mar 5 2018
IIRC Intrinsic::sqrt is undef for negative inputs (unlike the sqrt libcall), so we don't need FMF.noNaNs to license this transformation.
Mar 2 2018
Works for me.
Mar 1 2018
We should be able to go farther and just do a fptrunc if SrcSize > DstSize.
Feb 7 2018
Feb 6 2018
Underflow or overflow doesn't change sign, so 0 < C < inf && X >= 0 --> C/X >= 0.
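A quick C sketch checking that claim at the boundary cases (`quotient_nonneg` is an illustrative name):

```c
#include <math.h>

/* For finite C with 0 < C and X >= 0, C/X never produces a negative:
   X == 0 gives +inf, overflow gives +inf, and underflow gives +0,
   all of which have a clear sign bit. */
static int quotient_nonneg(double c, double x) {
    return !signbit(c / x);
}
```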
Jan 25 2018
(After checking the architecture manual)
My recollection is that on 386 and later unnormals with a zero significand are treated as an invalid operand; should these print as 0 or as NaN?
Dec 4 2017
Yup, there's likely something more general that we could match, but it's also worth taking this as is.
Nov 27 2017
IEEE 754 rules are that everything canonicalizes except bitwise operations (copy, abs, negate, copysign) and decimal re-encoding operations (which you don't care about).
Oct 30 2017
mod is not bound to the IEEE 754 remainder operation; it binds to the C fmod operation. You're looking for the remainder operation.
Sep 13 2017
Sep 12 2017
Sep 6 2017
LGTM. Sorry that I didn't see this earlier.
Aug 21 2017
... except, please add another test-case where the other component is not an integer as well.
This looks fine.
Aug 4 2017
Mar 31 2017
(x-y) == 0 --> x == y does not require nsz (zeros of any sign compare equal), nor does it require nnan (if x or y is NaN, both comparisons are false). It *does* require ninf (because inf-inf = NaN), and it also requires that subnormals are not flushed by the CPU. There's been some churn around flushing recently; Hal may have thoughts (+Hal).
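The infinity and NaN cases can be checked concretely in C (illustrative helper names; the subnormal-flushing caveat can't be demonstrated portably):

```c
#include <math.h>

/* Left- and right-hand sides of the proposed transformation. */
static int sub_is_zero(double x, double y) { return (x - y) == 0.0; }
static int are_equal(double x, double y)   { return x == y; }
```

For x == y == +inf the two sides disagree (inf - inf is NaN), which is exactly why ninf is needed; for NaN inputs both sides are false, so nnan is not.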
This addresses my concerns. Gentle ping to the other reviewers.
Mar 21 2017
The assumption is that an integer add is more canonical and/or cheaper than an FP add. Is that universally true?
Mar 8 2017
What's the rationale for using rationals here instead of a (simpler) fixed-point representation, if we want to get rid of float?
Feb 7 2017
Great, thanks Simon! This looks fine numerically. @gottesmm, can you double-check for style, etc?
Feb 1 2017
Numerically this is sound, and something that I've wanted to fix for a while but haven't had time to deal with. Thanks for doing it! Please add some test cases that would detect the old error (@gottesmm can likely point you in the right direction).
Jan 19 2017
These tests should either be exact, or should have a tolerance that's mathematically sound. +/-1ulp is not sound; the allowed error should be proportional to the magnitude of the larger of the real and imaginary components of the result -- e.g. if one component is very small compared to the other, the smaller one may have *no* "correct" bits and still be acceptable.
Jan 6 2017
OK, I'm satisfied with that.
Jan 4 2017
Dec 23 2016
I don't particularly care about the shift, since the code is completely equivalent either way, though it would be nice to clean up. However, please replace "mantissa" with "significand" before committing.
Dec 22 2016
Please s/mantissa/significand/. I know that "mantissa" is used in a number of places in llvm sources, but it's incorrect terminology. Significand is the IEEE-754 nomenclature, which any new work should follow.
Sep 26 2016
The tests in question are broken. They blindly apply a relative error tolerance that is not motivated by sound numerical analysis, despite the existence of well-known and well-founded error bounds for these problems, in every basic numerical analysis textbook.
Sep 22 2016
Sep 12 2016
Abe, I had a patch to do the same thing last year (that we ended up backing out because it exposed a bug which has since been fixed). Your approach looks a bit different from the one that I took originally, and you seem to be missing some of the arm64 test changes that I had to put in:
Jul 15 2016
I am not the right person to review the C++ template details, but everything else seems OK to me.
Jul 1 2016
I agree with Marshall. Aside from the points he raises, this approach looks fine.
May 26 2016
Numerically, I'm fine with this. Someone else can address the approach and style, etc.
May 23 2016
May 17 2016
-ffp-contract=on obeys the semantics of C's FP_CONTRACT pragma. In particular, it will not fuse:
float m = x*y; float a = m + z;
Whereas you probably want that to fuse for your purposes. -ffp-contract=fast seems more in line with your needs.
May 12 2016
Apr 8 2016
The input code is denorm preserving on architectures that flush denorms, but the output may not be.
There are a few niche floating-point formats that use a representation of the form [ exponent | 2s complement significand ], so the signbit ends up sitting in the middle of the number. Does LLVM claim to support arbitrary oddball formats for float/double? Ideally we would specify that LLVM assumes IEEE formats, even if we don't require fully conformant operations in hardware; that would give us some flexibility.
Apr 7 2016
Breaking isnan, isinf, etc. is a non-starter. I know it's appealing from a consistent formal-model viewpoint, but in practice it breaks a lot of code (this is why we have the call fallbacks for iOS/OSX).
Mar 22 2016
OK. Let's take this in as is, then, and someone can look at normalize( ) when they have time.