[tsan] Disable randomized address space on linux aarch64.
ClosedPublic

Authored by yabinc on Mar 9 2016, 11:12 AM.

Details

Summary

After the kernel patch https://lkml.org/lkml/2015/12/21/340 was introduced,
the random gap between stack and heap on 39-bit aarch64 increased from 128M
to 36G, and it is almost impossible to cover such a big range. So I think we
need to disable randomized virtual address space on aarch64 linux.
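
For readers of this archive, here is a minimal sketch of the approach discussed below; the function name and the use of execv on /proc/self/exe are illustrative assumptions, not the exact code that landed in tsan_platform_linux.cc. The idea: query the current personality, and if ADDR_NO_RANDOMIZE is not yet set, set it and re-exec so the process restarts with randomization disabled.

// Sketch only: illustrates the disable-ASLR-and-re-exec idea discussed in
// this review; it is not the exact runtime code.
#include <sys/personality.h>
#include <unistd.h>
#include <cstdio>

static void MaybeDisableRandomizationAndReExec(char **argv) {
  int old_personality = personality(0xffffffff);  // query current flags
  if (old_personality == -1)
    return;  // cannot query; continue and let later mapping checks complain
  if (old_personality & ADDR_NO_RANDOMIZE)
    return;  // randomization is already disabled; nothing to do
  fprintf(stderr, "WARNING: re-execing with randomization disabled\n");
  if (personality(old_personality | ADDR_NO_RANDOMIZE) == -1)
    return;  // could not change the personality; keep running with ASLR on
  execv("/proc/self/exe", argv);  // the flag persists across execv
}

The fact that the personality flag persists across execv is what, later in the thread, rules out an infinite re-exec loop.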

Diff Detail

Event Timeline

yabinc updated this revision to Diff 50165.Mar 9 2016, 11:12 AM
yabinc retitled this revision from to [tsan] Disable randomized address space on linux..
yabinc updated this object.
rengolin edited edge metadata.Mar 10 2016, 2:20 AM

Is this Android-specific?

zatrazz added inline comments.Mar 10 2016, 2:30 AM
lib/tsan/rtl/tsan_platform_linux.cc
296

Is this correct? This test is checking whether the system is *not* using ASLR, not the contrary. I think the correct test should be

(old_personality & ADDR_NO_RANDOMIZE) == ADDR_NO_RANDOMIZE
298

TSan does work on aarch64 with ASLR, at least with kernel 3.19. I have not tested with a newer kernel, but I noticed it now has some VMA changes. In which environment are you seeing this failing?

dvyukov edited edge metadata.Mar 10 2016, 8:29 AM

FWIW, we could also check whether modules are loaded at suitable addresses, and disable randomization/re-exec only if it's really necessary. I don't know why re-exec is a thing to avoid, though, so maybe it's good as is.
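
A hedged sketch of such a module-address check, assuming dl_iterate_phdr is acceptable in this context and using a made-up threshold constant; the real runtime would use its own memory-mapping helpers and the mapping constants from tsan_platform.h.

// Sketch: report whether any loaded module sits below the address range that
// a given tsan mapping assumes. The threshold below is illustrative only.
#include <link.h>
#include <cstddef>
#include <cstdint>

static const uintptr_t kAssumedModuleBeg = 0x7e0000000000ull;  // placeholder

static int CheckOneModule(struct dl_phdr_info *info, size_t, void *data) {
  bool *ok = static_cast<bool *>(data);
  if (info->dlpi_addr != 0 && info->dlpi_addr < kAssumedModuleBeg)
    *ok = false;  // a module is mapped below the expected range
  return 0;       // keep iterating over the remaining modules
}

static bool ModulesAtSuitableAddresses() {
  bool ok = true;
  dl_iterate_phdr(CheckOneModule, &ok);
  return ok;
}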

yabinc updated this revision to Diff 50351.Mar 10 2016, 2:09 PM
yabinc edited edge metadata.

Check if real heap space matches expectation before reexec.

yabinc updated this revision to Diff 50352.Mar 10 2016, 2:13 PM

Limit columns to < 80.

yabinc retitled this revision from [tsan] Disable randomized address space on linux. to [tsan] Disable randomized address space on linux when necessary..Mar 10 2016, 2:20 PM
yabinc updated this object.

Not Android-specific; I have updated the summary.
Done; the real heap space is now checked before re-exec.

lib/tsan/rtl/tsan_platform_linux.cc
298

This is correct. I want to set ADDR_NO_RANDOMIZE instead of removing it. Even though it works with kernel 3.19, it doesn't work with the latest kernel. And https://lkml.org/lkml/2015/12/21/340 has been cherry-picked back to Android kernels < 3.19. I mainly test on an N5X Android device, which uses kernel 3.10.
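
Put differently, here is a tiny illustration of the intended direction of the test (the helper name is invented; this is not the patch code): the re-exec path should fire only while ADDR_NO_RANDOMIZE is still absent, which is the opposite of the condition quoted above.

#include <sys/personality.h>

// Hypothetical helper: true when ASLR is still enabled, i.e. when the patch
// wants to set ADDR_NO_RANDOMIZE and re-exec.
static bool NeedToDisableRandomization() {
  int old_personality = personality(0xffffffff);
  return old_personality != -1 &&
         (old_personality & ADDR_NO_RANDOMIZE) == 0;
}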

dvyukov added inline comments.Mar 11 2016, 2:22 AM
lib/tsan/rtl/tsan_platform_linux.cc
267

Please call it something other than "Heap". "Heap" commonly means "malloc heap" throughout the runtime, as in HeapMemEnd below.
I think we are mainly looking for mmapped modules here, so "ModulesEnd" would be fine; "MmapEnd" would be fine as well.

272

Please explain why we do this.

297

I am trying to convince myself that it won't break any Linux setups. We can have a PIE or non-PIE binary. COMPAT mapping (setarch -L) is not supported now on x86 (modules mapped at 0x2a). Also, disabled randomization is _not_ supported on x86 (modules mapped at 0x55).
Do you have a good explanation as to why it will all continue to work?
I guess we always have at least one dynamic library (static libc linking is not supported), so on x86 glibc should be mapped at 0x7f in the supported configuration (randomization + no COMPAT mapping). So GetHeapEnd should return 0x7f, which is larger than HeapMemEnd.

But I think it will make the failure mode for at least x86+COMPAT mapping much worse (infinite exec recursion instead of a readable message). Potentially there are other cases on other platforms (e.g. power/mips) where that will happen as well. Or maybe they legitimately have modules below the heap?

I think it makes sense to tread more carefully and enable this just for aarch64, with an explanation of why we are doing it. Other platforms will be able to enable it later by simply altering the ifdef condition, if necessary.

Or am I missing something?

yabinc added inline comments.Mar 11 2016, 2:16 PM
lib/tsan/rtl/tsan_platform_linux.cc
267

I am confused about what kHeapMem in tsan_platform.h is for. In my current understanding, if the malloc heap is allocated using brk(), the space is just above bss; if it is allocated using mmap(), the space should be no different from other mmapped areas, like the ones on my x86_64 Linux PC:
006f0000-006f9000 rw-p 000f0000 fc:00 1838842 /bin/bash
006f9000-006ff000 rw-p 00000000 00:00 0
00f8e000-01401000 rw-p 00000000 00:00 0 [heap]

What confuses me even more is that there is a gap between kHeapMemEnd and kHiAppMemBeg on x86_64. How is that decided?

For the code here, I am looking for mmapped modules, but I think that is probably what kHeapMem in tsan_platform.h represents; that's why I compare GetHeapEnd() < HeapMemEnd() below.

297

Frankly, I don't understand what COMPAT mapping is, and running setarch -L fails on my x86_64 PC. I'd like to learn more if you can give me some context or instructions on how to test it.
As far as I can see, on x86_64 the mmap base is always in the HiAppMem range, and thus larger than HeapMemEnd.
For x86+COMPAT mapping, how do I experiment with that? setarch -L doesn't seem to work.
I don't think there will be an infinite re-exec loop, because if CHECK(personality()) fails, it will exit. To avoid breaking existing behavior, I removed the CHECK() and don't re-exec if setting the personality fails. I am not sure whether that is the right thing to do.

I just realized that the GetHeapEnd() < HeapMemEnd() check doesn't work on 39-bit aarch64, because there is a hole between kHeapMemEnd and kHiAppMemBeg whose shadow memory space is taken by kMidAppMem. That means even if GetHeapEnd() is bigger than HeapMemEnd(), it can be something like 0x7d20000000 and occupy the same shadow memory as kMidAppMem. Currently I think we have two choices: one is making it Android-specific and re-execing without checking the real mmap space; the other is keeping the check and changing Mapping39 of aarch64 (but I am not confident about that, as I don't know what kHeapMem stands for).

dvyukov added inline comments.Mar 15 2016, 2:55 AM
lib/tsan/rtl/tsan_platform_linux.cc
267

Tsan substitutes malloc, so the libc [heap] mapping is pretty much irrelevant. Tsan's malloc manually mmaps the heap at kHeapMem-kHeapMemEnd.

What confuses me even more is that there is a gap between kHeapMemEnd and kHiAppMemBeg on x86_64. How is that decided?

These holes should be mprotected to prevent the app from mapping anything there. See CheckAndProtect in tsan_platform_posix.cc.

For the code here, I am looking for mmapped modules, but I think that is probably what kHeapMem in tsan_platform.h represents; that's why I compare GetHeapEnd() < HeapMemEnd() below.

No, kHeapMem is the tsan mapping for the heap. The mmap for the heap uses a fixed address, so kHeapMem is arbitrary and not related to any other mappings.
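
A rough sketch of the two points above, reserving the heap at a fixed range and protecting a hole between app ranges. The addresses are placeholders rather than the real constants from tsan_platform.h, and the real logic lives in the runtime's allocator setup and in CheckAndProtect.

// Sketch: reserve the tsan heap at a fixed range and protect a hole so the
// app cannot map anything there. All addresses below are placeholders.
#include <sys/mman.h>
#include <cstdint>

static const uintptr_t kHeapBeg = 0x7d0000000000ull;  // placeholder
static const uintptr_t kHeapEnd = 0x7e0000000000ull;  // placeholder
static const uintptr_t kHoleBeg = 0x7e0000000000ull;  // placeholder
static const uintptr_t kHoleEnd = 0x7e8000000000ull;  // placeholder

static bool ReserveFixedRanges() {
  // The heap is mmapped at a fixed address, so it is unrelated to brk() or
  // to where the dynamic loader happens to place modules.
  void *heap = mmap(reinterpret_cast<void *>(kHeapBeg), kHeapEnd - kHeapBeg,
                    PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_NORESERVE,
                    -1, 0);
  if (heap == MAP_FAILED) return false;
  // The hole between app ranges is mapped PROT_NONE, so the kernel will not
  // place new app mappings there and any stray access faults.
  void *hole = mmap(reinterpret_cast<void *>(kHoleBeg), kHoleEnd - kHoleBeg,
                    PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_NORESERVE,
                    -1, 0);
  return hole != MAP_FAILED;
}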

297

I don't think there will be an infinite re-exec loop, because if CHECK(personality()) fails, it will exit.

I mean the following situation.
Randomization is already disabled, but we still have a bad mapping. We try to disable randomization and it succeeds (as randomization is already disabled). Then we re-exec, but nothing changes; we still have the same bad mapping. And here we get into an infinite loop.

It would probably be safer to never try to re-exec more than once. If the first re-exec does not help, we can just continue, and CheckAndProtect will later catch and properly report the bad mapping.

Re COMPAT mapping: I can't find any definitive docs. It probably requires some kernel CONFIG option. What it does is cause modules to be mapped in the 0x2a range on x86_64 Linux. It is not supported now, but we must not go into an infinite re-exec loop if the user enables it.
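
One way to sketch that at-most-one-re-exec guard, using an invented environment variable as the marker; the patch as landed instead relies on the personality flag persisting across execv, as shown further down in the thread.

// Sketch: re-exec at most once; if the mapping is still bad afterwards, fall
// through so CheckAndProtect can report it. "TSAN_REEXEC_DONE" is made up.
#include <cstdlib>
#include <unistd.h>

static void ReExecOnceIfMappingIsBad(bool mapping_is_bad, char **argv) {
  if (!mapping_is_bad) return;
  if (getenv("TSAN_REEXEC_DONE") != nullptr)
    return;  // already re-execed once; give up and let later checks report
  setenv("TSAN_REEXEC_DONE", "1", /*overwrite=*/1);
  execv("/proc/self/exe", argv);  // execv passes the updated environ along
  // If execv itself fails we simply continue running in this process.
}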

yabinc updated this revision to Diff 50842.Mar 16 2016, 11:36 AM

Disable randomized address space only for aarch64.

yabinc retitled this revision from [tsan] Disable randomized address space on linux when necessary. to [tsan] Disable randomized address space on linux aarch64..Mar 16 2016, 11:37 AM
yabinc updated this object.
yabinc added inline comments.Mar 16 2016, 11:46 AM
lib/tsan/rtl/tsan_platform_linux.cc
297

Thanks for the explanation of kHeapMem; that makes a lot of sense. And now I believe it is wrong to compare ModuleEnd() with HeapEnd().
For COMPAT mode, I ran the following commands with this patch applied on x86_64:
$./thread_sanitize
old_personality = 0, ADDR_NO_RANDOMIZE = 40000

16085==WARNING: Program is run with randomized virtual address space, and uses mmap base out of that assumed by ThreadSanitizer.

16085==Re-execing with fixed virtual address space.

old_personality = 40000, ADDR_NO_RANDOMIZE = 40000

$setarch i686 -L ./thread_sanitize
old_personality = 200008, ADDR_NO_RANDOMIZE = 40000

16186==WARNING: Program is run with randomized virtual address space, and uses mmap base out of that assumed by ThreadSanitizer.

16186==Re-execing with fixed virtual address space.

old_personality = 240008, ADDR_NO_RANDOMIZE = 40000
FATAL: ThreadSanitizer: unexpected memory mapping 0x2aaaaaaab000-0x2aaaaaace000

So it won't cause an infinite loop, because personality() persists across execv(), so the code won't try to set it again if it has already been set. But I think it is fine to limit this patch to aarch64 if other architectures don't need it.

dvyukov accepted this revision.Mar 19 2016, 8:13 AM
dvyukov edited edge metadata.

LGTM with a nit.

lib/tsan/rtl/tsan_platform_linux.cc
296

Please leave a short comment here for future generations explaining why we are doing this, along with the name/hash of the kernel commit.

This revision is now accepted and ready to land.Mar 19 2016, 8:13 AM
yabinc updated this revision to Diff 51188.Mar 21 2016, 10:49 AM
yabinc edited edge metadata.

Add comment.

yabinc updated this object.Mar 21 2016, 10:50 AM
rengolin added inline comments.Mar 21 2016, 10:51 AM
lib/tsan/rtl/tsan_platform_linux.cc
298

Just making sure I got it right: this should work on all kernels, right?

yabinc added inline comments.Mar 21 2016, 6:15 PM
lib/tsan/rtl/tsan_platform_linux.cc
298

Yes. personality(ADDR_NO_RANDOMIZE) predates the Linux kernel's git history, and the option is not architecture-specific.

rengolin accepted this revision.Mar 22 2016, 2:26 AM
rengolin edited edge metadata.

LGTM too, thanks!

yabinc closed this revision.Mar 22 2016, 10:21 AM