This is an archive of the discontinued LLVM Phabricator instance.

[LSan][AArch64] Speed-up leak and address sanitizers on AArch64 for 48-bit VMA
Needs RevisionPublic

Authored by sebpop on Apr 3 2019, 7:58 PM.

Details

Summary

This patch fixes https://github.com/google/sanitizers/issues/703
On a 48-bit VMA AArch64 machine, the time spent in LSan and ASan was reduced from 2.5s to 0.01s when running:

clang -fsanitize=leak compiler-rt/test/lsan/TestCases/sanity_check_pure_c.c && time ./a.out
clang -fsanitize=address compiler-rt/test/lsan/TestCases/sanity_check_pure_c.c && time ./a.out

With this patch, LSan and ASan create both the 32-bit and 64-bit allocators and select between the two at run time, based on a global variable that is initialized at init time to record whether the 64-bit allocator can be used in the current virtual address space.

Diff Detail

Repository
rL LLVM

Event Timeline

sebpop created this revision.Apr 3 2019, 7:58 PM
Herald added a project: Restricted Project. · View Herald TranscriptApr 3 2019, 7:58 PM
sebpop updated this revision to Diff 193646.Apr 3 2019, 8:08 PM

Rebased patch on today's trunk.

junbuml added a subscriber: junbuml.Apr 4 2019, 9:01 AM

I think the changes are minimal and make sense given the #if defined() blocks for other arches. I'll defer to @kcc on approval though.

brzycki added inline comments.Apr 4 2019, 2:04 PM
compiler-rt/lib/lsan/lsan_allocator.cc
117 ↗(On Diff #193646)

Nit: indent level of p = ... is different than line 133 below.

163 ↗(On Diff #193646)

spacing again.

kcc added a reviewer: eugenis.Apr 5 2019, 4:51 PM

Please don't use this many #ifdefs.
It should not need more than one ifdef for this patch.
Split the logic into separate files when needed.

sebpop added a comment.Apr 5 2019, 5:10 PM

Ok, I will prepare an updated patch.
Thanks Brian and Kostya for your reviews.

sebpop updated this revision to Diff 194200.Apr 8 2019, 1:14 PM

Address review comments from Kostya: move AArch64 lsan allocator to a separate file to avoid #ifdefs.

kcc added a comment.Apr 8 2019, 1:19 PM

Hm... But this is so much code duplication... Can we have a few #ifdefs but also not too much duplication?

Also, this is changing only the standalone lsan, not lsan used as part of asan. Right?
standalone lsan is not widely used, AFAICT.

sebpop added a comment.Apr 8 2019, 1:58 PM

Also, this is changing only the standalone lsan, not lsan used as part of asan. Right?
standalone lsan is not widely used, AFAICT.

Correct, asan is still very slow with this patch applied.
I was trying to get the patch into good shape to be accepted for lsan
before moving on to apply the same solution to asan.

Can we have a few #ifdefs but also not too much duplication?

Avoiding code duplication was the intent of the first version of the patch,
and I agree with you that the ifdefs with else clauses are ugly...
Do you have a suggestion?

I also tried to create a pointer that we would switch between the two allocators (32/64).
The problem is naming a type that could point to both allocators:

using Allocator = CombinedAllocator<
  PrimeAlloc, AllocatorCache, SecondAlloc, LocalAddressSpaceView>;

using Allocator64 = CombinedAllocator<
  PrimeAlloc64, AllocatorCache64, SecondAlloc, LocalAddressSpaceView>;

Allocator a32;
Allocator64 a64;
Allocator32or64 *a;

if (47_bit_VMA)
  a = &a64;
else
  a = &a32;

In that case the rest of the code remains the same, only replacing a.op() with a->op().

sebpop updated this revision to Diff 194937.Apr 12 2019, 12:54 PM
sebpop retitled this revision from [LSan][AArch64] Speed-up leak-sanitizer on AArch64 for 47-bit VMA to [LSan][AArch64] Speed-up leak and address sanitizers on AArch64 for 47-bit VMA .

This patch reduces the number of #ifdefs as suggested by Kostya and speeds up both the leak and address sanitizers on AArch64.
It passes check-all on x86_64-linux; aarch64-linux is still under test.
Worked with Brian Rzycki @brzycki.

Hi @kcc. @sebpop and I decided on this approach because we were unable to create a base allocator class: we ran into pointer-coercion issues with the 32-bit or 64-bit pointers returned from some of the methods, which we could only mask with (void *) casts at all the call sites. Even then, every routine would require a runtime check for 32 vs. 64 on AArch64. This latest patch reduces the number of ifdefs and keeps the check at compile time on every non-AArch64 platform.

Sorry for the delay.
Vitaly, can you give a recommendation on how to avoid #ifdefs and too much code duplication in this case?

I propose to finalize one patch for either lsan or asan and then replicate it for another.

compiler-rt/lib/asan/asan_allocator.cc
750 ↗(On Diff #194937)

allocated =

compiler-rt/lib/lsan/lsan_allocator.cc
40 ↗(On Diff #194937)

maybe

#if defined(SANITIZER_SELECT_ALLOCATOR_AT_RUNTIME)
bool useAllocator1 = false;

struct AllocatorCache {
  Type1 cache1;
  Type2 cache2;

  void Fn1() {
    if (useAllocator1)
      cache1.Fn1();
    else
      cache2.Fn1();
  }
};

struct Allocator {
  Type1 allocator1;
  Type2 allocator2;
  
  void Fn1() {
    if (useAllocator1)
      allocator1.Fn1();
    else
      allocator2.Fn1();
  }
};

#else

using Allocator = CombinedAllocator<
   PrimeAlloc, AllocatorCache, SecondAlloc, LocalAddressSpaceView>;
using AllocatorCache = AllocatorASVT<LocalAddressSpaceView>;

#endif

It would be nice to be able to enable SANITIZER_SELECT_ALLOCATOR_AT_RUNTIME
even on platforms where it's not needed, just to check the build.

I've started some refactoring there. So maybe it would be easier after that.

I've started some refactoring there. So maybe it would be easier after that.

Hello @vitalybuka, I plan on looking at your suggestions this week. Please let me know when you think the refactored code is in a state you prefer, and I will start working from that as the new base.

Done.
Maybe we can use something like D61401.

Hello @vitalybuka , thank you for the example in D61401. Do you have a suggestion what file and namespace the DoubleAllocator template class definition should reside? The most recent diff for D60243 was authored April 3 and no longer cleanly applies (the file lsan_allocator.h has changed considerably since then). This is complicated by @sebpop changing jobs and requiring legal approval before he can help work on this again. In order to make progress I'm essentially re-writing this patch based on the new Allocator layout in lsan_allocator.h.

Probably namespace __sanitizer, as you will need it for both asan and lsan.
I have no idea about the name; DoubleAllocator was just an example and does not sound very helpful.
Maybe DoubleAllocator -> SizeClassAllocatorPair in sanitizer_allocator_primary_pair.h.

Also, instead of bool use1 it may be better to use virtual methods, but I suspect that will be more complicated. I am fine either way.
From D60243 you probably need only the part which decides which allocator to use, and maybe make sure that we have appropriate AP32/AP64 for the new allocator.

brzycki added a comment.EditedMay 9 2019, 8:55 AM

Hi @vitalybuka, I am attempting to use D61401 as you suggested but have run into an issue regarding use1 being used inside static member functions. This error persists regardless of whether I make it a member variable or a global variable:

/work/b.rzycki/upstream/llvm-project/llvm/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator_primary_pair.h:76:9: error: invalid use of member 'use1' in static member function
    if (use1)
        ^~~~
/work/b.rzycki/upstream/llvm-project/llvm/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator_primary_pair.h:82:9: error: invalid use of member 'use1' in static member function
    if (use1)
        ^~~~

I don't quite understand why use1 fails for me in a header but not in your test case embedded in .cc code. Here's the code LLVM is unhappy about:

static bool use1 = false;

...

  static bool CanAllocate(uptr size, uptr alignment) {
    if (use1)
      return A1::CanAllocate(size, alignment);
    return A2::CanAllocate(size, alignment);
  }

  static uptr ClassID(uptr size) {
    if (use1)
      return A1::ClassID(size);
    return A2::ClassID(size);
  }

EDIT: Please disregard. For some reason I am no longer seeing the error. I consider it user error. :)

Hello @vitalybuka , I have uploaded a WIP diff in D61766 and I appreciate any insight you can give.

First, this approach still requires ifdefs for __aarch64__. The problem is that I cannot use SizeClassAllocatorPair on arches where there is only one allocator. This unfortunately defeats the request to build this code even on platforms that do not need to select an allocator at runtime, and goes against the original intent of what you and @kcc asked for after the initial patch review.

Second, there are compile-time errors caused by replacing SizeClassAllocatorXX with the new SizeClassAllocatorPair class on AArch64. It's not a 1:1 drop-in replacement, and in some cases I don't know what the correct answer is. For example, here's an LLVM error when building lsan_allocator.cc:

In file included from /home/cc/bmr/llvm-project/llvm/projects/compiler-rt/lib/lsan/lsan_allocator.cc:14:
In file included from /home/cc/bmr/llvm-project/llvm/projects/compiler-rt/lib/lsan/lsan_allocator.h:17:
In file included from /home/cc/bmr/llvm-project/llvm/projects/compiler-rt/lib/lsan/../sanitizer_common/sanitizer_allocator.h:77:
/home/cc/bmr/llvm-project/llvm/projects/compiler-rt/lib/lsan/../sanitizer_common/sanitizer_allocator_combined.h:28:53: error: no type named 'MapUnmapCallback' in '__sanitizer::SizeClassAllocatorPair<__lsan::AP64<__sanitizer::LocalAddressSpaceView>, __lsan::AP32<__sanitizer::LocalAddressSpaceView> >'
      LargeMmapAllocator<typename PrimaryAllocator::MapUnmapCallback,
                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
/home/cc/bmr/llvm-project/llvm/projects/compiler-rt/lib/lsan/lsan_allocator.h:122:24: note: in instantiation of template class '__sanitizer::CombinedAllocator<__sanitizer::SizeClassAllocatorPair<__lsan::AP64<__sanitizer::LocalAddressSpaceView>, __lsan::AP32<__sanitizer::LocalAddressSpaceView> >, __sanitizer::LargeMmapAllocatorPtrArrayDynamic>' requested here
using AllocatorCache = Allocator::AllocatorCache;
                       ^

In SizeClassAllocatorXX the MapUnmapCallback is passed in as a typedef inside one of the two APXX structs. The SizeClassAllocatorPair class cannot select between the MapUnmapCallback types of A1 and A2 until the runtime memory check determines the size of the VMA. But by then it's too late: we need to instantiate a CombinedAllocator class in order to create AllocatorCache in lsan_allocator.h, so we are back to this needing to be a compile-time property of another dependent class. I could always select A1's typedef of MapUnmapCallback, similar to what you do with PrimaryAllocator::MapUnmapCallback, but I am uncertain whether this is correct.

Please let me know if I misunderstood your testcase in D61401 or if I'm inserting the templated class in the wrong location.

sebpop updated this revision to Diff 198980.May 9 2019, 10:39 PM
sebpop edited the summary of this revision. (Show Details)

I have verified that the updated patch compiles and that it reduces the execution time of a leak-sanitized trivial example.

Thanks @vitalybuka for your guidance on how to avoid the #ifdefs.
Thanks @brzycki for your help on this patch.
I got the ok to update the patch.

If we are happy with this fix for LSan, I will send another patch to fix ASan.

@sebpop welcome back! I'm glad you received clearance to work on this. :) My comments are inline.

compiler-rt/lib/lsan/lsan_allocator.cc
38 ↗(On Diff #198980)

Needs to be under #if defined(__aarch64__), or UseAllocator32 needs to be moved outside of the AArch64 define in lsan_allocator.h.

46 ↗(On Diff #198980)

Needs to be under #if defined(__aarch64__), or UseAllocator32 needs to be moved outside of the AArch64 define in lsan_allocator.h.

compiler-rt/lib/lsan/lsan_allocator.h
137 ↗(On Diff #198980)

Is this correct? This is part of what I was commenting on in my previous patch. We're always using the 32-bit versions of MapUnmapCallback and AddressSpaceView even if we switch at run-time to the 64-bit allocator. @vitalybuka would know better than I if this is guaranteed to be the same across 32/64 on the same arch or not.

brzycki added inline comments.May 10 2019, 8:09 AM
compiler-rt/lib/lsan/lsan_allocator.cc
38 ↗(On Diff #198980)

Nevermind, I now see what you're doing. It's fine as it is.

46 ↗(On Diff #198980)

Nevermind, I now see what you're doing. It's fine as it is.

sebpop marked 3 inline comments as done.May 10 2019, 8:24 AM
sebpop added inline comments.
compiler-rt/lib/lsan/lsan_allocator.h
75 ↗(On Diff #198980)

Both AP64 and AP32 typedef MapUnmapCallback to be the same type: NoOpMapUnmapCallback.

137 ↗(On Diff #198980)

See my comment above for MapUnmapCallback and the comment below for AddressSpaceView.

215 ↗(On Diff #198980)

Both Allocator32ASVT and Allocator64ASVT are instantiated with the same AddressSpaceView, so
Allocator32::AddressSpaceView == Allocator64::AddressSpaceView.

brzycki added inline comments.May 10 2019, 8:42 AM
compiler-rt/lib/lsan/lsan_allocator.cc
43 ↗(On Diff #198980)

Compiling on x86_64 causes a shift-count-overflow warning to be emitted:

/tmp/tmp.rRhin33V2X/src/llvm/projects/compiler-rt/lib/lsan/lsan_allocator.cc:43:42: warning: shift count >= width of type [-Wshift-count-overflow]
  if (GetMaxVirtualAddress() < (((uptr)1 << 48) - 1))
                                         ^  ~~
1 warning generated.

Changing the comparison to the following line removes the warning:

if (GetMaxVirtualAddress() < (uptr)0xffffffffffffUL)

It's a bit less readable, but it doesn't have the warning or the potential to overflow.

sebpop updated this revision to Diff 199085.May 10 2019, 2:26 PM
sebpop retitled this revision from [LSan][AArch64] Speed-up leak and address sanitizers on AArch64 for 47-bit VMA to [LSan][AArch64] Speed-up leak and address sanitizers on AArch64 for 48-bit VMA .
sebpop edited the summary of this revision. (Show Details)

Updated patch fixes ASan.

sebpop updated this revision to Diff 199111.May 10 2019, 5:16 PM

Fix the x86_64 overflow warning with 1ULL << 48.

LGTM. The number of if defined() and duplicated code regions is minimal. I'm curious to know what @vitalybuka and @kcc think of this iteration of the patch.

Oh, I didn't realize that this is ready for review. I'll take a look.

vitalybuka added inline comments.May 13 2019, 11:00 AM
compiler-rt/lib/asan/asan_allocator.h
189 ↗(On Diff #199111)

I see 3 copies of this template.
We need to extract that into separate header in sanitizer_common/

In shared extracted version I'd like to avoid using 32/64 in naming, just something generic first/second 1/2 ... etc.

200 ↗(On Diff #199111)

We need to make UseAllocator32 a static member of DoubleAllocator, so that different instantiations of the template each have their own copy of this variable.

compiler-rt/lib/sanitizer_common/tests/sanitizer_allocator_test.cc
289 ↗(On Diff #199111)

There are other tests which use SizeClassAllocator32/SizeClassAllocator64 directly or indirectly;
they should all work with DoubleAllocator.

Could you please define Allocator32or64Compact at the top of the file and extend the other tests?

using Allocator32or64Compact = DoubleAllocator<Allocator32Compact, Allocator64Compact>;

...

TEST(SanitizerCommon, SizeClassAllocator32or64Compact) {
  Allocator32or64Compact::use1 = false;
  TestSizeClassAllocator<Allocator32or64Compact>();
  Allocator32or64Compact::use1 = true;
  TestSizeClassAllocator<Allocator32or64Compact>();
}
802 ↗(On Diff #199111)

We want to test the shared template, not the one we define here.

sebpop updated this revision to Diff 200171.May 19 2019, 1:17 AM

Addressed comments from @vitalybuka: factored the three copies of the template into one shared header and added more tests.
ninja check-all passes with no new failures on an AArch64 Graviton A1 instance.

vitalybuka added inline comments.May 21 2019, 1:28 PM
compiler-rt/lib/asan/asan_allocator.cc
36 ↗(On Diff #200171)

same as lsan and preinit_arrays

122 ↗(On Diff #200171)

get_allocator().getKMaxSize() ?
same for the rest

compiler-rt/lib/asan/asan_allocator.h
163 ↗(On Diff #200171)

Historically it's Google style and we start function names with capitals.

169 ↗(On Diff #200171)

can you make it get_allocator().getKMaxSize() ?
it would be nice to move them into a separate patch.

compiler-rt/lib/lsan/lsan_allocator.cc
32 ↗(On Diff #200171)

It's too late if we use SANITIZER_CAN_USE_PREINIT_ARRAY.
Please move this into DoubleAllocator::Init.

compiler-rt/lib/sanitizer_common/sanitizer_doubleallocator.h
3 ↗(On Diff #200171)

Please update the first line and rename the file to sanitizer_double_allocator.h
we usually split words with _

18 ↗(On Diff #200171)

Sorry for the naming in my sample patch;
historically we use Google style here:
a1, a2, UseAllocator1 -> first_, second_, use_first_

I am open to better recommendations.

22 ↗(On Diff #200171)

Is it possible to make it non-static?

sebpop marked 9 inline comments as done.Jun 11 2019, 1:13 PM

The updated patch I will post addresses all the review comments.

sebpop updated this revision to Diff 204145.Jun 11 2019, 1:15 PM

The updated patch passes make check-lsan check-asan and is still under test for check-all on aarch64-linux.

sebpop updated this revision to Diff 204177.Jun 11 2019, 3:08 PM

For some reason asan/tests/asan_noinst_test.cc is not compiled by make check-asan, and that exposed a compile error that was not triggered by the other tests:

sanitizer_double_allocator.h:30:11: error: use of non-static data member 'use_first_' of 'DoubleAllocator' from nested type 'DoubleAllocatorCache'
      if (use_first_)
          ^~~~~~~~~~

The updated patch fixes this by accessing the non-static field of the enclosing class through a this pointer to one of the instances:

-      if (use_first_)
+      if (this->use_first_)

The updated patch passes check-all with no new fails on aarch64-linux.

Ping. The latest version of the patch addresses all the comments from the reviews.
OK to commit?

Almost LGTM

compiler-rt/lib/asan/asan_allocator.h
164 ↗(On Diff #204177)

Max(SizeClassMap32::kNumClasses, SizeClassMap64::kNumClasses) ?

compiler-rt/lib/asan/asan_stats.cc
35 ↗(On Diff #204177)

I think we should print only the relevant part of the array:

static void PrintMallocStatsArray(const char *prefix,
                                  uptr *array, uptr size) {
  Printf("%s", prefix);
  for (uptr i = 0; i < size; i++) {
...

PrintMallocStatsArray("  mallocs by size class: ", malloced_by_size, get_allocator().KMaxSize());
compiler-rt/lib/sanitizer_common/sanitizer_double_allocator.h
1 ↗(On Diff #204177)

any idea for better name?

17 ↗(On Diff #204177)

template <class Allocator1, class Allocator2>
class DoubleAllocator {
  Allocator1 a1_;
  Allocator2 a2_;

sebpop marked 6 inline comments as done.Aug 20 2019, 11:52 AM
sebpop added inline comments.
compiler-rt/lib/asan/asan_allocator.h
164 ↗(On Diff #204177)

Calling Max() instead of the cond_expr fails in several places where kMaxNumberOfSizeClasses is used. It seems that in those places the language allows a cond_expr but not a function call:

/home/ubuntu/s/llvm-project/llvm/projects/compiler-rt/lib/asan/asan_stats.h:41:48: error: array bound is not an integer constant before ‘]’ token
   uptr malloced_by_size[kMaxNumberOfSizeClasses];
                                                ^
compiler-rt/lib/asan/asan_stats.cc
35 ↗(On Diff #204177)

get_allocator() is static to asan_allocator.cc; I exported it to make it available to asan_stats.cc.

compiler-rt/lib/sanitizer_common/sanitizer_double_allocator.h
1 ↗(On Diff #204177)

What about sanitizer_runtime_select_allocator.h?

sebpop updated this revision to Diff 216203.Aug 20 2019, 11:54 AM
sebpop marked 2 inline comments as done.
sebpop updated this revision to Diff 216211.Aug 20 2019, 12:17 PM

Updated patch to current llvm trunk.

vitalybuka accepted this revision.Aug 20 2019, 12:48 PM

Thanks! LGTM

This revision is now accepted and ready to land.Aug 20 2019, 12:48 PM
This revision was automatically updated to reflect the committed changes.
vitalybuka reopened this revision.Aug 20 2019, 10:05 PM

check-sanitizer does not work
reverted with r369495

This revision is now accepted and ready to land.Aug 20 2019, 10:05 PM
vitalybuka requested changes to this revision.Aug 20 2019, 10:14 PM

I've started to fix this but realized that it's more than a quick fix

diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_allocator_combined.h b/compiler-rt/lib/sanitizer_common/sanitizer_allocator_combined.h
index c11d1f83fb54..47b4aba488bd 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_allocator_combined.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_allocator_combined.h
@@ -159,7 +159,7 @@ class CombinedAllocator {
   void TestOnlyUnmap() { primary_.TestOnlyUnmap(); }
 
   void InitCache(AllocatorCache *cache) {
-    cache->Init(&stats_);
+    cache->Init(&primary_, &stats_);
   }
 
   void DestroyCache(AllocatorCache *cache) {
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_allocator_local_cache.h b/compiler-rt/lib/sanitizer_common/sanitizer_allocator_local_cache.h
index 108dfc231a22..d63ef6aa443f 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_allocator_local_cache.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_allocator_local_cache.h
@@ -18,7 +18,7 @@ template <class SizeClassAllocator>
 struct SizeClassAllocator64LocalCache {
   typedef SizeClassAllocator Allocator;
 
-  void Init(AllocatorGlobalStats *s) {
+  void Init(SizeClassAllocator *allocator, AllocatorGlobalStats *s) {
     stats_.Init();
     if (s)
       s->Register(&stats_);
@@ -122,7 +122,7 @@ struct SizeClassAllocator32LocalCache {
   typedef SizeClassAllocator Allocator;
   typedef typename Allocator::TransferBatch TransferBatch;
 
-  void Init(AllocatorGlobalStats *s) {
+  void Init(SizeClassAllocator *allocator, AllocatorGlobalStats *s) {
     stats_.Init();
     if (s)
       s->Register(&stats_);
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_runtime_select_allocator.h b/compiler-rt/lib/sanitizer_common/sanitizer_runtime_select_allocator.h
index 3b9e35445981..d8538f88428e 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_runtime_select_allocator.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_runtime_select_allocator.h
@@ -26,11 +26,11 @@ class RuntimeSelectAllocator {
     typename Allocator2::AllocatorCache a2;
 
    public:
-    void Init(AllocatorGlobalStats *s) {
-      if (this->use_first_allocator)
-        a1.Init(s);
+    void Init(RuntimeSelectAllocator *allocator, AllocatorGlobalStats *s) {
+      if (allocator->use_first_allocator)
+        a1.Init(&allocator->a1, s);
       else
-        a2.Init(s);
+        a2.Init(&allocator->a2, s);
     }
     void *Allocate(RuntimeSelectAllocator *allocator, uptr class_id) {
       if (allocator->use_first_allocator)
@@ -86,6 +86,18 @@ class RuntimeSelectAllocator {
     return Allocator2::ClassID(size);
   }
 
+  uptr LargestClassID() {
+    if (use_first_allocator)
+      return Allocator1::SizeClassMapT::kLargestClassID;
+    return Allocator2::SizeClassMapT::kLargestClassID;
+  }
+
+  uptr ClassSize(uptr id) {
+    if (use_first_allocator)
+      return Allocator1::SizeClassMapT::Size(id);
+    return Allocator2::SizeClassMapT::Size(id);
+  }
+
   uptr KNumClasses() {
     if (use_first_allocator)
       return Allocator1::KNumClasses();
@@ -110,6 +122,12 @@ class RuntimeSelectAllocator {
     return a2.GetMetaData(p);
   }
 
+  uptr TotalMemoryUsed() {
+    if (use_first_allocator)
+      return a1.TotalMemoryUsed();
+    return a2.TotalMemoryUsed();
+  }
+
   uptr GetSizeClass(const void *p) {
     if (use_first_allocator)
       return a1.GetSizeClass(p);
diff --git a/compiler-rt/lib/sanitizer_common/tests/sanitizer_allocator_test.cpp b/compiler-rt/lib/sanitizer_common/tests/sanitizer_allocator_test.cpp
index dc26a0a445f0..e70449e07802 100644
--- a/compiler-rt/lib/sanitizer_common/tests/sanitizer_allocator_test.cpp
+++ b/compiler-rt/lib/sanitizer_common/tests/sanitizer_allocator_test.cpp
@@ -160,8 +160,19 @@ using Allocator32CompactASVT =
     SizeClassAllocator32<AP32Compact<AddressSpaceView>>;
 using Allocator32Compact = Allocator32CompactASVT<LocalAddressSpaceView>;
 
+#if SANITIZER_CAN_USE_ALLOCATOR64
 using Allocator32or64Compact =
     RuntimeSelectAllocator<Allocator32Compact, Allocator64Compact>;
+class Allocator32or64CompactUse1 : public Allocator32or64Compact {
+ public:
+  Allocator32or64CompactUse1() { use_first_allocator = true; }
+};
+
+class Allocator32or64CompactUse2 : public Allocator32or64Compact {
+ public:
+  Allocator32or64CompactUse2() { use_first_allocator = false; }
+};
+#endif
 
 template <class SizeClassMap>
 void TestSizeClassMap() {
@@ -196,7 +207,7 @@ void TestSizeClassAllocator() {
   a->Init(kReleaseToOSIntervalNever);
   typename Allocator::AllocatorCache cache;
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(a, 0);
 
   static const uptr sizes[] = {
     1, 16,  30, 40, 100, 1000, 10000,
@@ -215,7 +226,7 @@ void TestSizeClassAllocator() {
       uptr n_iter = std::max((uptr)6, 4000000 / size);
       // fprintf(stderr, "size: %ld iter: %ld\n", size, n_iter);
       for (uptr i = 0; i < n_iter; i++) {
-        uptr class_id0 = Allocator::SizeClassMapT::ClassID(size);
+        uptr class_id0 = a->ClassID(size);
         char *x = (char*)cache.Allocate(a, class_id0);
         x[0] = 0;
         x[size - 1] = 0;
@@ -228,7 +239,7 @@ void TestSizeClassAllocator() {
         CHECK(a->PointerIsMine(x + size / 2));
         CHECK_GE(a->GetActuallyAllocatedSize(x), size);
         uptr class_id = a->GetSizeClass(x);
-        CHECK_EQ(class_id, Allocator::SizeClassMapT::ClassID(size));
+        CHECK_EQ(class_id, a->ClassID(size));
         uptr *metadata = reinterpret_cast<uptr*>(a->GetMetaData(x));
         metadata[0] = reinterpret_cast<uptr>(x) + 1;
         metadata[1] = 0xABCD;
@@ -278,10 +289,8 @@ TEST(SanitizerCommon, SizeClassAllocator64Compact) {
 }
 
 TEST(SanitizerCommon, SizeClassAllocator32or64Compact) {
-  Allocator32or64Compact::UseAllocator1 = false;
-  TestSizeClassAllocator<Allocator32or64Compact>();
-  Allocator32or64Compact::UseAllocator1 = true;
-  TestSizeClassAllocator<Allocator32or64Compact>();
+  TestSizeClassAllocator<Allocator32or64CompactUse1>();
+  TestSizeClassAllocator<Allocator32or64CompactUse2>();
 }
 
 TEST(SanitizerCommon, SizeClassAllocator64Dense) {
@@ -327,13 +336,13 @@ void SizeClassAllocatorMetadataStress() {
   a->Init(kReleaseToOSIntervalNever);
   typename Allocator::AllocatorCache cache;
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(a, 0);
 
   const uptr kNumAllocs = 1 << 13;
   void *allocated[kNumAllocs];
   void *meta[kNumAllocs];
   for (uptr i = 0; i < kNumAllocs; i++) {
-    void *x = cache.Allocate(a, 1 + i % (Allocator::kNumClasses - 1));
+    void *x = cache.Allocate(a, 1 + i % (a->KNumClasses() - 1));
     allocated[i] = x;
     meta[i] = a->GetMetaData(x);
   }
@@ -344,7 +353,7 @@ void SizeClassAllocatorMetadataStress() {
     EXPECT_EQ(m, meta[idx]);
   }
   for (uptr i = 0; i < kNumAllocs; i++) {
-    cache.Deallocate(a, 1 + i % (Allocator::kNumClasses - 1), allocated[i]);
+    cache.Deallocate(a, 1 + i % (a->KNumClasses() - 1), allocated[i]);
   }
 
   a->TestOnlyUnmap();
@@ -368,10 +377,8 @@ TEST(SanitizerCommon, SizeClassAllocator64CompactMetadataStress) {
   SizeClassAllocatorMetadataStress<Allocator64Compact>();
 }
 TEST(SanitizerCommon, SizeClassAllocator32or64CompactMetadataStress) {
-  Allocator32or64Compact::UseAllocator1 = false;
-  SizeClassAllocatorMetadataStress<Allocator32or64Compact>();
-  Allocator32or64Compact::UseAllocator1 = true;
-  SizeClassAllocatorMetadataStress<Allocator32or64Compact>();
+  SizeClassAllocatorMetadataStress<Allocator32or64CompactUse1>();
+  SizeClassAllocatorMetadataStress<Allocator32or64CompactUse2>();
 }
 #endif
 
@@ -387,10 +394,10 @@ void SizeClassAllocatorGetBlockBeginStress(u64 TotalSize) {
   a->Init(kReleaseToOSIntervalNever);
   typename Allocator::AllocatorCache cache;
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(a, 0);
 
-  uptr max_size_class = Allocator::SizeClassMapT::kLargestClassID;
-  uptr size = Allocator::SizeClassMapT::Size(max_size_class);
+  uptr max_size_class = a->LargestClassID();
+  uptr size = a->ClassSize(max_size_class);
   // Make sure we correctly compute GetBlockBegin() w/o overflow.
   for (size_t i = 0; i <= TotalSize / size; i++) {
     void *x = cache.Allocate(a, max_size_class);
@@ -421,10 +428,8 @@ TEST(SanitizerCommon, SizeClassAllocator64CompactGetBlockBegin) {
   SizeClassAllocatorGetBlockBeginStress<Allocator64Compact>(1ULL << 33);
 }
 TEST(SanitizerCommon, SizeClassAllocator32or64CompactGetBlockBegin) {
-  Allocator32or64Compact::UseAllocator1 = false;
-  SizeClassAllocatorGetBlockBeginStress<Allocator32or64Compact>(1ULL << 33);
-  Allocator32or64Compact::UseAllocator1 = true;
-  SizeClassAllocatorGetBlockBeginStress<Allocator32or64Compact>(1ULL << 33);
+  SizeClassAllocatorGetBlockBeginStress<Allocator32or64CompactUse1>(1ULL << 33);
+  SizeClassAllocatorGetBlockBeginStress<Allocator32or64CompactUse2>(1ULL << 33);
 }
 #endif
 TEST(SanitizerCommon, SizeClassAllocator64VeryCompactGetBlockBegin) {
@@ -470,7 +475,7 @@ TEST(SanitizerCommon, SizeClassAllocator64MapUnmapCallback) {
   EXPECT_EQ(TestMapUnmapCallback::map_count, 1);  // Allocator state.
   typename Allocator64WithCallBack::AllocatorCache cache;
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(a, 0);
   AllocatorStats stats;
   stats.Init();
   const size_t kNumChunks = 128;
@@ -506,7 +511,7 @@ TEST(SanitizerCommon, SizeClassAllocator32MapUnmapCallback) {
   EXPECT_EQ(TestMapUnmapCallback::map_count, 0);
   Allocator32WithCallBack::AllocatorCache cache;
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(a, 0);
   AllocatorStats stats;
   stats.Init();
   a->AllocateBatch(&stats, &cache, 32);
@@ -540,7 +545,7 @@ TEST(SanitizerCommon, SizeClassAllocator64Overflow) {
   a.Init(kReleaseToOSIntervalNever);
   Allocator64::AllocatorCache cache;
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(&a, 0);
   AllocatorStats stats;
   stats.Init();
 
@@ -717,10 +722,8 @@ TEST(SanitizerCommon, CombinedAllocator64Compact) {
   TestCombinedAllocator<Allocator64Compact>();
 }
 TEST(SanitizerCommon, CombinedRuntimeSelectAllocator) {
-  Allocator32or64Compact::UseAllocator1 = false;
-  TestCombinedAllocator<Allocator32or64Compact>();
-  Allocator32or64Compact::UseAllocator1 = true;
-  TestCombinedAllocator<Allocator32or64Compact>();
+  TestCombinedAllocator<Allocator32or64CompactUse1>();
+  TestCombinedAllocator<Allocator32or64CompactUse2>();
 }
 #endif
 
@@ -741,7 +744,7 @@ void TestSizeClassAllocatorLocalCache() {
 
   a->Init(kReleaseToOSIntervalNever);
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(a, 0);
 
   const uptr kNumAllocs = 10000;
   const int kNumIter = 100;
@@ -784,10 +787,8 @@ TEST(SanitizerCommon, SizeClassAllocator64CompactLocalCache) {
   TestSizeClassAllocatorLocalCache<Allocator64Compact>();
 }
 TEST(SanitizerCommon, SizeClassAllocator32or64CompactLocalCache) {
-  Allocator32or64Compact::UseAllocator1 = false;
-  TestSizeClassAllocatorLocalCache<Allocator32or64Compact>();
-  Allocator32or64Compact::UseAllocator1 = true;
-  TestSizeClassAllocatorLocalCache<Allocator32or64Compact>();
+  TestSizeClassAllocatorLocalCache<Allocator32or64CompactUse1>();
+  TestSizeClassAllocatorLocalCache<Allocator32or64CompactUse2>();
 }
 #endif
 TEST(SanitizerCommon, SizeClassAllocator64VeryCompactLocalCache) {
@@ -922,7 +923,7 @@ void TestSizeClassAllocatorIteration() {
   a->Init(kReleaseToOSIntervalNever);
   typename Allocator::AllocatorCache cache;
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(a, 0);
 
   static const uptr sizes[] = {1, 16, 30, 40, 100, 1000, 10000,
     50000, 60000, 100000, 120000, 300000, 500000, 1000000, 2000000};
@@ -1065,7 +1066,7 @@ TEST(SanitizerCommon, SizeClassAllocator64PopulateFreeListOOM) {
   a->Init(kReleaseToOSIntervalNever);
   SpecialAllocator64::AllocatorCache cache;
   memset(&cache, 0, sizeof(cache));
-  cache.Init(0);
+  cache.Init(a, 0);
 
   // ...one man is on a mission to overflow a region with a series of
   // successive allocations.
@@ -1368,10 +1369,8 @@ TEST(SanitizerCommon, SizeClassAllocator64CompactReleaseFreeMemoryToOS) {
   TestReleaseFreeMemoryToOS<Allocator64Compact>();
 }
 TEST(SanitizerCommon, SizeClassAllocator32or64CompactReleaseFreeMemoryToOS) {
-  Allocator32or64Compact::UseAllocator1 = false;
-  TestReleaseFreeMemoryToOS<Allocator32or64Compact>();
-  Allocator32or64Compact::UseAllocator1 = true;
-  TestReleaseFreeMemoryToOS<Allocator32or64Compact>();
+  TestReleaseFreeMemoryToOS<Allocator32or64CompactUse1>();
+  TestReleaseFreeMemoryToOS<Allocator32or64CompactUse2>();
 }
 
 TEST(SanitizerCommon, SizeClassAllocator64VeryCompactReleaseFreeMemoryToOS) {
diff --git a/compiler-rt/lib/scudo/scudo_allocator_combined.h b/compiler-rt/lib/scudo/scudo_allocator_combined.h
index d61cc9ec1a52..ec36ae3b318f 100644
--- a/compiler-rt/lib/scudo/scudo_allocator_combined.h
+++ b/compiler-rt/lib/scudo/scudo_allocator_combined.h
@@ -50,7 +50,7 @@ class CombinedAllocator {
   }
 
   void initCache(AllocatorCache *Cache) {
-    Cache->Init(&Stats);
+    Cache->Init(&Primary, &Stats);
   }
 
   void destroyCache(AllocatorCache *Cache) {
compiler-rt/lib/lsan/lsan_allocator.h
47 ↗(On Diff #216211)

This patch is larger than expected.
Could you please move the asan/lsan-specific parts into separate patches?

compiler-rt/lib/sanitizer_common/sanitizer_runtime_select_allocator.h
67 ↗(On Diff #216211)

We need to remove GetMaxVirtualAddress from this template;
RuntimeSelectAllocator should have no address-space-specific logic.

This revision now requires changes to proceed.Aug 20 2019, 10:14 PM