This is an archive of the discontinued LLVM Phabricator instance.

[scudo] Fix implicitly narrow casting (NFC)
ClosedPublic

Authored by Chia-hungDuan on Oct 13 2022, 11:42 PM.

Details

Summary

Explicitly cast the int literal to u16 to resolve the narrowing-conversion warning.

Diff Detail

Event Timeline

Chia-hungDuan created this revision. Oct 13 2022, 11:42 PM
Herald added a project: Restricted Project. Oct 13 2022, 11:42 PM
Herald added a subscriber: Enna1.
Chia-hungDuan requested review of this revision. Oct 13 2022, 11:42 PM
Herald added a project: Restricted Project. Oct 13 2022, 11:42 PM
Herald added a subscriber: Restricted Project.

Oddly, this change does not seem to fix the error:

/home/dyung/src/upstream/llvm_clean_git/compiler-rt/lib/scudo/standalone/local_cache.h:154:41: error: conversion from ‘int’ to ‘scudo::u16’ {aka ‘short unsigned int’} may change value [-Werror=conversion]
  154 |       P->MaxCount = static_cast<u16>(2) * TransferBatch::getMaxCached(Size);
      |                     ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Fix arithmetic type conversion

It's the arithmetic type conversion: the u16 operands may be promoted to int. I added the cast in both places. @dyung, can you help verify whether this works?
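To illustrate the promotion issue, here is a minimal standalone sketch; the u16 alias and getMaxCachedStub below are simplified stand-ins, not the actual scudo code:

#include <cstdint>

using u16 = std::uint16_t;

// Hypothetical stand-in for TransferBatch::getMaxCached(Size).
static u16 getMaxCachedStub() { return 100; }

int main() {
  u16 MaxCount;
  // Both operands are promoted to int before the multiplication, so the
  // product has type int and assigning it to u16 triggers -Wconversion,
  // even though one operand was already cast:
  //   MaxCount = static_cast<u16>(2) * getMaxCachedStub();
  // Casting the whole expression back to u16 avoids the warning:
  MaxCount = static_cast<u16>(2 * getMaxCachedStub());
  return MaxCount == 200 ? 0 : 1;
}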

Sure, testing it now.

I can confirm that this seems to fix the problem for these two locations. Can you also fix the other two instances of this error that I mentioned in https://reviews.llvm.org/D133145#3857847?

Sure. Sorry I missed those two. Let me do it!

Fix more missing casts

@dyung, I hope this will fix all the casting issues. Let me know if there's any issue left. Sorry for causing the problem.

dyung accepted this revision. Oct 14 2022, 11:47 AM

I can now successfully build on my machine with these changes, so LGTM. Thanks for fixing this!

This revision is now accepted and ready to land. Oct 14 2022, 11:47 AM
This revision was automatically updated to reflect the committed changes.
kamaub added a subscriber: kamaub. Oct 14 2022, 1:05 PM

Please hold off on committing any more patches until the test breakages identified by my colleague in https://reviews.llvm.org/D134226#3858268 are resolved.
Several scudo tests are intermittently failing on multiple PowerPC bots: https://lab.llvm.org/buildbot/#/builders?tags=%2Bppc
I am able to reproduce the failure on a PPC machine with a stress test:

 ~/llvm/mono/bot_reprod $ cat ~/stress-scudo.sh
RC=0
while [[ $RC == 0 ]]; do
  ninja check-runtimes >> stress-scudo.log
  RC=$?
done

which fails on the 1st, 2nd, or 3rd loop (hence the flaky behaviour on the bots).
I was able to confirm that, by applying the reverts below, the flaky behaviour of ./ScudoUnitTest-powerpc64le-Test/163/220 and ./ScudoUnitTest-powerpc64le-Test/ScudoPrimaryTestMemoryGroup_TestConfig3/MemoryGroup disappears:

~/llvm/mono/llvm-project $ git revert fd7c7ad4fe0138314b922ea0db1691d5a679cc75
[main 77190d04d010] Revert "[scudo] Fix implicitly narrow casting (NFC)"
 3 files changed, 5 insertions(+), 10 deletions(-)
~/llvm/mono/llvm-project $ git revert 9c26f51f5e178ac0fda98419e3a61d205d3b58b1
[main d982b881645e] Revert "[scudo] Support partial page releasing"
 4 files changed, 19 insertions(+), 91 deletions(-)
~/llvm/mono/llvm-project $ git revert cf9d7f55d3bec7640fa8b2f8ec1d9c1268233caa
[main 556f28478a6c] Revert "[scudo] Manage free blocks in BatchGroup."
 8 files changed, 67 insertions(+), 635 deletions(-)

The flaky tests still fail after reverting 9c26f51f5e178ac0fda98419e3a61d205d3b58b1, so cf9d7f55d3bec7640fa8b2f8ec1d9c1268233caa is likely the culprit, but I will have to revert all three patches since they depend on each other in a chain: I must revert fd7c7ad4fe0138314b922ea0db1691d5a679cc75 in order to revert 9c26f51f5e178ac0fda98419e3a61d205d3b58b1, which in turn must be reverted in order to revert cf9d7f55d3bec7640fa8b2f8ec1d9c1268233caa.

I can create the reverts of 9c26f51f5e178ac0fda98419e3a61d205d3b58b1 and cf9d7f55d3bec7640fa8b2f8ec1d9c1268233caa. May I have 2-3 more hours to try to fix the issue before we revert them?

@kamaub, I think some crashes may have happened. Could you share the backtrace? Or where can I get that information from the buildbot?

Sorry, I didn’t see your message in time and I reverted. Please let me know if we can help by providing access to a machine where you can test any potential fixes so that you can re-commit.

kamaub added a comment (edited). Oct 14 2022, 1:19 PM

Here is an example log: https://lab.llvm.org/buildbot/#/builders/230/builds/4037/steps/7/logs/stdio
I will try to get a backtrace for you.

That's OK. I'll prepare another CL to fix the narrowing-cast issue in https://reviews.llvm.org/D133145. Yes, can you help me get access to a PPC machine?

Given that the three changes were reverted, I created another CL (https://reviews.llvm.org/D135985) to address the narrowing casts in https://reviews.llvm.org/D133145.
@dyung, sorry, I may need your help verifying this on your machine again.

amyk added a subscriber: amyk. Oct 14 2022, 2:00 PM

Hi @Chia-hungDuan! I have sent you an email to facilitate the process of gaining access to a PPC machine.