... so that we can find intra-granule buffer overflows.
The default is still to always align left.
It remains to be seen whether we can enable this mode at scale.
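For illustration, here is a minimal sketch of the pointer math this mode implies, assuming hwasan's 16-byte granules; the helper name is made up and this is not the allocator's actual code:

```cpp
#include <cstddef>
#include <cstdint>

constexpr size_t kGranuleSize = 16;  // hwasan tags memory in 16-byte granules

// Left alignment (the default): the user pointer starts at the granule
// start, so an overflow of up to (aligned_size - size) bytes stays inside
// the last granule, keeps the allocation's tag, and goes undetected.
// Right alignment: shift the pointer so the allocation *ends* exactly on a
// granule boundary; overflowing even by one byte then lands in the next
// granule, which carries a different tag, and hwasan reports it.
inline uintptr_t RightAlignedPtr(uintptr_t chunk_start, size_t size) {
  size_t aligned_size = (size + kGranuleSize - 1) & ~(kGranuleSize - 1);
  return chunk_start + aligned_size - size;
}
```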
| lib/hwasan/hwasan_allocator.cc | |
|---|---|
| 168 | Yeah, at least for calls like posix_memalign. This mode will break some things anyway, e.g. `struct S { int count; char name[0]; };` allocated with `malloc(sizeof(S) + count)`: the size will not be a multiple of the alignment, so `count` will be misaligned. That's why this mode cannot be on by default. Let's hope it is not a common case. |
| 171 | Use named constants, like `kHandleSignalYes`. |
| 172 | This is not very random: the entire region will have the same bit, so long-running programs are likely to be stuck in either direction. Use `HwasanThread::random_buffer_` instead? |
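To make the misalignment from the comment on line 168 concrete, a hypothetical worked example (the granule boundary address is assumed):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

struct S {
  int count;
  char name[0];  // flexible array member (GNU extension)
};

int main() {
  // malloc(sizeof(S) + count) with count = 3 asks for 7 bytes; under
  // right alignment the returned pointer ends up 7 bytes before a granule
  // boundary, which is not even 4-byte aligned, so S::count is misaligned.
  size_t size = sizeof(S) + 3;       // 7 on typical targets
  uintptr_t granule_end = 0x1010;    // assumed 16-byte boundary
  uintptr_t p = granule_end - size;  // 0x1009
  printf("p %% alignof(int) = %zu\n", (size_t)(p % alignof(int)));  // prints 1
}
```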
| lib/hwasan/hwasan_allocator.cc | |
|---|---|
| 168 | Why would not respecting the alignment argument help? That's the alignment that we ensure in normal mode. |
- Use the pointer tag's bit as a source of randomness (see the sketch after this list).
- Split the test lines for right and left alignment.
- Use a named constant (kRightAlignAlways) instead of just '2' when checking the malloc_align_right flag.
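A minimal sketch of the tag-bit idea (illustrative names, not the actual allocator code):

```cpp
#include <cstdint>

// In the "sometimes right-align" mode every allocation already receives a
// (normally random) pointer tag; reusing one of its bits gives a cheap
// per-allocation coin flip with no extra RNG call. Caveat from the review:
// if random tags are disabled via a flag, this bit stops being random.
inline bool ShouldAlignRight(uint8_t tag) {
  return tag & 1;
}
```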
| lib/hwasan/hwasan_allocator.cc | |
|---|---|
| 168 | I've changed the code so that it respects alignment > 16. This mode is unlikely to be enabled by default in its current form. |
| 171 | Done. |
| 172 | Changed to use a bit of the tag, which is random (unless, of course, we disable random tags via a flag). |

| test/hwasan/TestCases/heap-buffer-overflow.c | |
|---|---|
| 30–31 | Done. |
| 36–37 | Done. |
| 42 | Done. |
BTW, Electric Fence has EF_ALIGNMENT=1 (to set the alignment to 1) and people have been using it for decades,
so this approach is not entirely unheard of.
| lib/hwasan/hwasan_allocator.cc | |
|---|---|
| 168 | Strictly speaking, this can be handled in a standard-conforming way for C++; see the disassembly below. |

```
int *volatile p1 = new int;
  400614: bf 04 00 00 00    mov    $0x4,%edi
  400619: be 04 00 00 00    mov    $0x4,%esi
  40061e: e8 ed fe ff ff    callq  400510 <operator new(unsigned long, std::align_val_t)@plt>
  400623: 48 89 44 24 08    mov    %rax,0x8(%rsp)
short *volatile p2 = new short[2];
  400628: bf 04 00 00 00    mov    $0x4,%edi
  40062d: be 02 00 00 00    mov    $0x2,%esi
  400632: e8 c9 fe ff ff    callq  400500 <operator new[](unsigned long, std::align_val_t)@plt>
  400637: 48 89 44 24 10    mov    %rax,0x10(%rsp)
```
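For reference, a small standalone C++17 example that produces analogous aligned-new calls (the alignment value is just for illustration):

```cpp
#include <new>

struct alignas(32) OverAligned {
  char data[32];
};

int main() {
  // alignof(OverAligned) exceeds __STDCPP_DEFAULT_NEW_ALIGNMENT__ on
  // typical targets, so C++17 lowers this to
  // operator new(sizeof(OverAligned), std::align_val_t(32)).
  OverAligned *volatile p = new OverAligned;
  delete p;
}
```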
Atomic operations on aarch64 require alignment, so this approach will break whenever the code does something like:

```
struct Foo {
  AtomicWord a;  // used by atomic instructions
  char vla[0];
};
```
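As an illustration of why this faults (this is an assumed scenario, not code from the patch), a load like the one below compiles to acquire/exclusive instructions that require natural alignment on aarch64:

```cpp
#include <atomic>
#include <cstdint>

// If p came from a right-aligned allocation whose total size is not a
// multiple of 8, p itself is not 8-byte aligned; on aarch64 the ldar/ldxr
// instructions behind this load take an alignment fault, while ordinary
// misaligned loads and stores would have succeeded.
int64_t LoadWord(std::atomic<int64_t> *p) {
  return p->load(std::memory_order_acquire);
}
```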
All other loads/stores seem to work (see "B2.5.2 Alignment of data accesses" in https://static.docs.arm.com/ddi0487/da/DDI0487D_a_armv8_arm.pdf).
So far I've found one such place: the std::string implementation in libstdc++, which uses a reference-counted string.
I've also tried several benchmarks from fuzzer-test-suite with hwasan and right-alignment.
Those that don't use std::string just work, so I think we still have a chance to get away with this on aarch64,
although we may require some code modifications to avoid the situation above.
One more case: https://github.com/google/re2/blob/master/re2/dfa.cc#L758
Here, again, we have an 8-byte object accessed atomically,
and the size of the allocation is not necessarily divisible by 8 (68 in my case):
```
const int kStateCacheOverhead = 40;
int nnext = prog_->bytemap_range() + 1;  // + 1 for kByteEndText slot
int mem = sizeof(State) + nnext*sizeof(std::atomic<State*>) +
          ninst*sizeof(int);
if (mem_budget_ < mem + kStateCacheOverhead) {
  mem_budget_ = -1;
  return NULL;
}
mem_budget_ -= mem + kStateCacheOverhead;
```
This particular case is easy to fix by aligning mem up to a multiple of sizeof(void*).
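A sketch of that fix (a hypothetical edit, not the actual re2 change):

```cpp
#include <cstddef>

// Round mem up to a multiple of sizeof(void*) so that, even if the
// allocator right-aligns the block, the std::atomic<State*> slots that
// follow State remain pointer-aligned.
inline size_t RoundUpToPointerSize(size_t mem) {
  return (mem + sizeof(void*) - 1) & ~(sizeof(void*) - 1);
}
```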
Added two more modes for malloc_align_right to allow 8-byte alignment
(randomly or always).
Extended comments, added one more test.
PTAL
CHECK that the flag has a valid value
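A minimal sketch of such a check (all constant names besides kRightAlignAlways are assumptions about how the new modes might be named):

```cpp
// Illustrative only: reject unknown malloc_align_right values at startup
// instead of silently misbehaving later.
static void CheckMallocAlignRightFlag(int v) {
  if (v != 0 /* align left, the default */ && v != kRightAlignSometimes &&
      v != kRightAlignAlways && v != kRightAlign8Sometimes &&
      v != kRightAlign8Always) {
    Report("ERROR: unsupported value of malloc_align_right flag: %d\n", v);
    Die();
  }
}
```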