This is an archive of the discontinued LLVM Phabricator instance.

[test-suite] [SingleSource] Add aarch64_neon_intrinsics reference output
Abandoned · Public

Authored by peterwaller-arm on May 18 2021, 7:10 AM.

Details

Summary

As far as I can tell, this reference output has been missing for a long time, which leads to a failure like this:

"/home/petwal01/results/tools/fpcmp " simple aarch64_neon_intrinsics
******************** TEST (simple) 'aarch64_neon_intrinsics' FAILED! ********************
Execution Context Diff:
/home/petwal01/results/tools/fpcmp: FP Comparison failed, not a numeric difference between 'e' and 'S'
******************** TEST (simple) 'aarch64_neon_intrinsics' ****************************
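
For context, here is a minimal sketch of the comparison that appears to produce this message, assuming fpcmp's usual two-file invocation; the file names and stand-in output below are illustrative, not taken from the failing run:

# Hypothetical reproduction of the mismatch (names and contents are illustrative).
echo "exit 0" > expected.out              # the default reference output
echo "Some test output..." > actual.out   # stand-in for what the test really prints
fpcmp expected.out actual.out
# fpcmp stops at the first non-numeric mismatch, presumably the 'e' of
# "exit 0" against an 'S' at the corresponding position in the real output.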

Event Timeline

peterwaller-arm requested review of this revision. May 18 2021, 7:10 AM
peterwaller-arm edited the summary of this revision. May 18 2021, 7:11 AM
peterwaller-arm added a reviewer: kristof.beyls.

It was disabled here: https://github.com/llvm/llvm-test-suite/commit/87d67af9d8565d068b6706c081b7ae07addcb882

Is the underlying issue definitely fixed?

Thanks for raising that. I do not have any knowledge about the underlying issue at this time.

However, removing the reference output does not appear to inhibit the test, at least in my environment; it only causes lnt to fall back to the default output, which reads 'exit 0', hence the cryptic test failure message.

@aemerson, are you able to comment: does the issue persist?

If it does persist, is there another way to inhibit or fix this exec test?

https://bugs.llvm.org/show_bug.cgi?id=41260

@DavidSpickett any update?

Until the bug is fixed, this test cannot be re-enabled, as a flaky test is worse than no test at all.

Thanks for chiming in. What I see is that the test is not currently disabled; instead it falls back to comparing against a default reference output and then fails:

https://github.com/llvm/llvm-test-suite/blob/3af2314126514c028cc39cd56510cd8badaba4e9/Makefile.programs#L871-L877

Is that consistent with what you see?
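
For readers without the Makefile to hand, here is a rough shell-level sketch of the behaviour those lines appear to implement; the variable and file names are mine, not the Makefile's:

# Rough sketch of the harness fallback (variable and file names are illustrative).
ref="$SRC_DIR/aarch64_neon_intrinsics.reference_output"
if [ ! -f "$ref" ]; then
  # No checked-in reference output: generate a default one that just
  # reads "exit 0", which fpcmp then diffs against the test's real
  # output, producing the failure shown in the summary.
  ref="$OBJ_DIR/default.reference_output"
  echo "exit 0" > "$ref"
fi
fpcmp "$ref" "$actual_output"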

I don't propose to re-enable the test until it is fixed -- at the moment the test is actively failing for my colleagues and me when running 'lnt runtest nt --test-suite llvm-test-suite'. Is there some other way to inhibit the test?

If so, I propose to do that.

I am also concerned that there is a window of time during which *all* of these generated 'basic correctness tests' are disabled. It seems to me those tests could usefully catch bugs. Can we do better by inhibiting only the failing ones?

@DavidSpickett any update?

I don't plan to work on that issue. I've updated the bug with the info I found but it's not conclusive and that area of clang is new to me.

This doesn’t seem to happen on Darwin, so it’s a surprise that it’s only being noticed now. Either way, it makes sense to disable it in a better way.

Sure, we can do better than inhibiting all of them. I did report this to Arm on multiple occasions, and as maintainers of the ACLE implementation I think Arm’s proprietary toolchain team should take responsibility for fixing the issue.

David, could you please route that bug to whoever is now responsible for leading that group?

peterwaller-arm planned changes to this revision. May 20 2021, 9:29 AM

I'll revisit this next week to find a way of inhibiting the test for now, or abandon it.

peterwaller-arm abandoned this revision. May 24 2021, 6:36 AM

Abandoning this for now. If the test fails in this manner, the failure is at least noticeable.

David, could you please route that bug to whoever is now responsible for leading that group?

Done.

I know very little about GlobalISel, so for the benefit of the team could you add a comment on https://bugs.llvm.org/show_bug.cgi?id=41260 explaining how the underlying issue affects what you are doing?