Feb 4 2023
I wrote some code to verify that u64 to f64 conversion using FILD+FADD with 53-bit precision is accurate. I tested it with 10^12 cases: u64 to f32 conversion failed frequently, but u64 to f64 conversion did not fail.
Feb 2 2023
Could you update the diff? The current diff is outdated and cannot be applied to the main branch automatically.
Jan 24 2023
Jan 23 2023
Jan 20 2023
I tested the patch and it seems to have fixed the issue.
Jan 18 2023
Could someone commit this change?
Jan 17 2023
@craig.topper, could you please commit this change? I don't have permission to commit.
Jan 16 2023
Jan 15 2023
Jan 14 2023
I'm not sure why the result depends on the x87 control word, but there is a possibility that this issue is caused by the rounding mode. The conversion from f64 to f32 is typically done with round-to-nearest, and the value being rounded can differ between rounding directly from the u64 and rounding from the intermediate u64-to-f64 result. We should round from the u64 value directly, not from the u64-to-f64 result.
The tests keep failing, but the failures are all RISC-V related, so the diff probably isn't the cause.
You're right, I should have checked more thoroughly before requesting a review.
I've checked, and MSVC's implementation dynamically tests whether AVX-512 is available and uses vcvtuqq2pd if it is. Therefore, the result depends on whether the system supports AVX-512.
The main reason I posted this review is that the conversion results from u64 and i64 to f32 did not match, which was causing problems.
Jan 10 2023
Change to the original version of the diff mentioned by @lebedev.ri.
Jan 9 2023
The test keeps failing, but Diff 486869 has already built and tested successfully. The only thing that has changed is the comments.
Fix the comments
If SSE2 is available, we only need to use library calls to convert between 64-bit integers and floating point. However, if only x87 is available, precision issues are unavoidable. What I meant was to just use the x87 implementation and print a warning in this case.
Jan 6 2023
On 32-bit Windows, the calling convention uses x87, so avoiding x87 is almost impossible. So rather than avoiding x87, I think it's better to add an option that injects code, like ICC's option. To make this option available to all LLVM-based compilers, the code must be injected at the LLVM IR level, during parsing or generation of the IR. However, this requires refactoring a lot of code.
Add the pc80 option. There may be other suitable candidate names for this option, but I used the name from Intel ICC. The current implementation only applies to conversions from 64-bit integers to floating point.
Use library calls only when SSE is enabled. If SSE is disabled and x87 is enabled, the x87 implementation is used regardless of precision.
What should we do if the user changes the default precision?