
Fix off-by-one error in size of the required shadow memory passed to `MapDynamicShadow`.
Accepted · Public

Authored by delcypher on Aug 5 2020, 5:06 PM.

Details

Summary

The VM region is [kLowMemBegin, kHighMemEnd] (note the inclusive
bounds). Thus the size of the region is

kHighMemEnd - kLowMemBegin + 1

Note that kLowMemBegin is assumed to be 0, so the size passed to
MemToShadowSize() should be kHighMemEnd + 1, not kHighMemEnd.

The overall effect of this bug is that we were requesting shadow memory 1 byte
smaller than required. This is due to the way kHighMemEnd is aligned: adding +1
changes bits that aren't removed by doing >> SHADOW_SCALE.

This latent bug was likely hidden because the shadow memory size is
always page aligned due to being allocated by mmap.

Note this bug was present before the refactor introduced by
5d2be1a18845. However, the refactor preserved it.

rdar://problem/66600450

Diff Detail

Event Timeline

delcypher created this revision. Aug 5 2020, 5:06 PM
Herald added a project: Restricted Project. Aug 5 2020, 5:06 PM
Herald added a subscriber: Restricted Project.
delcypher requested review of this revision. Aug 5 2020, 5:06 PM

@tejohnson Looks like this bug is present on Linux too

uptr FindDynamicShadowStart() {
  uptr shadow_size_bytes = MemToShadowSize(kHighMemEnd);
#if ASAN_PREMAP_SHADOW          
  if (!PremapShadowFailed())
    return FindPremappedShadowStart(shadow_size_bytes);
#endif 
yln accepted this revision. Aug 6 2020, 8:56 AM

This latent bug was likely hidden because the shadow memory size is
always page aligned due to being allocated by mmap.

Is this bug still latent, or does it manifest now? Can you describe what happens when it is triggered?

This revision is now accepted and ready to land. Aug 6 2020, 8:56 AM

@tejohnson Looks like this bug is present on Linux too

uptr FindDynamicShadowStart() {
  uptr shadow_size_bytes = MemToShadowSize(kHighMemEnd);
#if ASAN_PREMAP_SHADOW          
  if (!PremapShadowFailed())
    return FindPremappedShadowStart(shadow_size_bytes);
#endif 

Thanks. Added @vitalybuka as reviewer since he may have a better understanding of the implications; I only just started looking at sanitizer code recently. Similar question to @yln: does this manifest now given the page size alignment? Could you fix the Linux code as well?

In D85378#2200000, @yln wrote:

This latent bug was likely hidden because the shadow memory size is
always page aligned due to being allocated by mmap.

Is this bug still latent, or does it manifest now? Can you describe what happens when it is triggered?

@yln

It manifests on Darwin (iOS) if you manually examine the size passed to MapDynamicShadow(). If you stick a Report("shadow_size_bytes: %p\n", (void *)shadow_size_bytes); in the implementation of MapDynamicShadow() and then run a test binary, then...

Before this patch

==282==shadow_size_bytes: 0x0001f7ffffff

Notice how this size is not aligned to the iOS page size (16 KiB).

With this patch:

==289==shadow_size_bytes: 0x0001f8000000

Notice how the size is now aligned to a 16 KiB page boundary.

On iOS I don't think this ever caused problems because the MapDynamicShadow() implementation finds a region whose size is guaranteed to be page-aligned.

However I don't think it makes sense to be passing a non-page-aligned size to MapDynamicShadow(). Maybe I should add a CHECK() for it too so that this doesn't get broken in the future?

@tejohnson Looks like this bug is present on Linux too

uptr FindDynamicShadowStart() {
  uptr shadow_size_bytes = MemToShadowSize(kHighMemEnd);
#if ASAN_PREMAP_SHADOW          
  if (!PremapShadowFailed())
    return FindPremappedShadowStart(shadow_size_bytes);
#endif 

Thanks. Added @vitalybuka as reviewer since he may have a better understanding of the implications; I only just started looking at sanitizer code recently. Similar question to @yln: does this manifest now given the page size alignment? Could you fix the Linux code as well?

I'll update the patch to try to fix this in other places where this bug exists.

@vitalybuka Do I need to worry about overflow here? I.e., if kHighMemEnd is ULONG_MAX on some platform, kHighMemEnd + 1 would overflow and we'd therefore ask for a shadow size of 0.

I could do MemToShadowSize(kHighMemEnd) + 1 instead, which gives the same result in the non-overflow case because of how kHighMemEnd is set (see InitializeHighMemEnd()), and it would avoid the overflow case. It's harder to understand, though, because the reader needs to know about the properties of the lower bits of kHighMemEnd.

yln added a comment. Aug 6 2020, 2:55 PM

After looking at this again: would changing MemToShadowSize() to round up to the closest multiple of page_size work?
That would cover all platforms (since it's a shared function in asan_mapping.h).

In D85378#2201220, @yln wrote:

After looking at this again: would changing MemToShadowSize() to round up to the closest multiple of page_size work?
That would cover all platforms (since it's a shared function in asan_mapping.h).

Only if we rename the function.
As is, I'd expect MemToShadowSize() to return the minimal shadow size needed to cover its argument, i.e., rounded up to the shadow granularity.