- User Since
- Dec 6 2012, 2:31 AM (296 w, 6 d)
Mon, Aug 13
Thu, Jul 26
Sure. Merged as 338023. Thanks.
Wed, Jul 25
Tue, Jul 24
Fri, Jul 20
Submitted in 337531
Jul 16 2018
Jun 22 2018
Jun 14 2018
Jun 13 2018
Jun 12 2018
Jun 9 2018
And we did not even get to Windows, Mac, Fuchsia, and Akaros.
Well, on Linux you can check process info in /proc/<pid>/status, I believe, but that might be overkill.
True. What I tried to say in the description is that ideally the CPU load ought to be checked first, but that's a lot of code :-)
I understand your points, but I usually see that jobs are often bound to cpu0, while that CPU can potentially be pretty busy.
What's the gain? Performance improvement? What are the numbers for benchmarking?
Jun 7 2018
So your current version is actually the best one.
Thanks for bearing with me.
Which versions exactly did you test? Current vs removing precedence and trylocking all caches? If yes, then it's unclear which factor affects performance, since we change two things at the same time. So I would also try:
1. Remove precedence, but scan only 4 random caches, then lock the current cache.
2. Remove precedence, but scan only 4 random caches, then lock a random cache.
3. Keep precedence, but scan all caches.
I would not expect 3 to be faster, but who knows.
Looking at your results, I think it mostly needs to be tested on t-test1, because it should not affect Android with 2/4 caches.
Jun 3 2018
May 26 2018
May 16 2018
I see. Let's wait for C++14.
Making it static const makes the compiler emit __cxa_guard_acquire/__cxa_guard_release, which we don't want, in order to stay free of libc++ dependencies.
- const uptr BatchClassId = SizeClassMap::ClassID(sizeof(QuarantineBatch));
May 13 2018
May 11 2018
Interesting. Who calls these functions? Why? Can we change them to call normal functions instead?
May 9 2018
May 7 2018
There are several reasons:
- The address range is still ignored after unload. So if anything else is loaded at that range, it will be falsely ignored, and this can lead to both false positives and false negatives.
- The list of ignored libraries is append-only, so it's not possible to remove from it.
- If a library is unloaded, it can be loaded again later. This leads to unbounded growth of the list of ignored libraries. The list is fixed-size, so soon we will crash with an obscure CHECK failure.
Apr 30 2018
Apr 27 2018
Apr 23 2018
I'm not too worried about the first variation. It's very unlikely that user code will try to mmap(MAP_FIXED)
Apr 20 2018
Then I don't understand what happens here anymore. Go ahead and drop it.
Tests are not linked with interceptors?
I am not following. Isn't it a test for prctl interceptor? Why are we removing useful tests?
Perhaps it's not the most critical functionality. And perhaps SanitizerSetThreadName needs to be moved to the test itself. But still it's a test.
Apr 19 2018
Let's concentrate on correctness for now.
Shadow memory is not guaranteed to be zero for new memory. At least munmap does not clear it, and probably some other things don't either. Perhaps if you use munmap in the test, it will be easier to catch this (e.g. mmap a large chunk of memory, write to it, munmap, create a thread).
Apr 16 2018
Two things that I don't like here:
- This imposes the cost of zeroing up to 32MB (standard 8MB stack x 4x shadow) per thread creation/destruction for all OSes. Some programs create threads like crazy.
- I don't think this fixes the actual root cause, only makes it even harder to localize. Note that cur_thread_finalize already clears the shadow slot, so if pthread reuses stack/tls wholesale, then the slot should be zero already. However, tsan does not generally keep shadow clear (e.g. munmap does not clear shadow too, and most likely a bunch of other things). So if the slot reuses memory from a previous mmap, it will crash the same way.
I wonder if moving the slot to _meta_ shadow is the right thing to do. We actually clear meta shadow on unmap. I don't see where we clear the stack, but we should, otherwise we can leak lots of sync objects on the stack.
Apr 13 2018
Looks good to me.
Do you have commit access, or do you want me to commit it?
Much nicer now!
I think this is ready for commit after fixing these few nits.
Apr 12 2018
Apr 11 2018
Apr 5 2018
No, it explicitly checks alignment to decide whether to make a libcall itself.
I've just checked the LangRef and realised this is false. LLVM does make such loads & stores UB, though I still think the Clang use is valid (if unfortunate).
Atomics accessed via C11 _Atomic and C++11 std::atomic will be suitably aligned, but there's a reasonable amount of legacy code that uses the GCC builtins on non-atomic types (of unknown alignment) and this is what Clang uses to implement those accesses when they come up.
How can atomic objects end up being misaligned? Isn't it UB already?
Apr 3 2018
Mar 21 2018
This broke the Darwin build. I've fixed it up in:
Both external symbolization tests still pass, so I hope it works for your use case.
Submitted in rev 328079.
Looks good. Committing.
Mar 16 2018
Mar 15 2018
It actually fails on my laptop in the same way now. But works with this patch.
Mar 7 2018
Feb 14 2018
Feb 7 2018
Is there an upstream Go bug for race on powerpc? Please CC me on it.
Jan 29 2018
Jan 22 2018
Committed in r323140.
Dec 13 2017
LGTM with a nit
Dec 5 2017
Dec 4 2017
Dec 1 2017
Nov 30 2017
Nov 29 2017
Nov 28 2017
LGTM with a nit
Where do you want to use this?
I see only 2 uses of atomic_sint32_t, at least one of which should actually use atomic_uint32_t.