This is an archive of the discontinued LLVM Phabricator instance.

[tsan] Add support for GCD IO channels on Darwin
ClosedPublic

Authored by kubamracek on Jun 30 2016, 4:25 AM.

Details

Summary

This patch adds interceptors for the dispatch_io_*, dispatch_read, and dispatch_write functions, which avoids false positives when using GCD IO. It also adds several test cases.
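
(For illustration, a minimal sketch of the kind of GCD IO code these interceptors cover; the queue name and file path are made up, and the API calls are the standard libdispatch ones:)

  #include <dispatch/dispatch.h>
  #include <fcntl.h>

  int main() {
    dispatch_queue_t q = dispatch_queue_create("example.io", DISPATCH_QUEUE_SERIAL);
    int fd = open("/tmp/example.txt", O_RDONLY);
    dispatch_read(fd, 4096, q, ^(dispatch_data_t data, int error) {
      // Without the interceptors, TSan cannot see the happens-before edge
      // between the submitting thread and this handler, so accesses to
      // shared state from here were reported as races.
    });
    dispatch_main();  // never returns; keeps the process alive for the handler
  }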

Event Timeline

kubamracek updated this revision to Diff 62349.Jun 30 2016, 4:25 AM
kubamracek retitled this revision from to [tsan] Add support for GCD IO channels on Darwin.
kubamracek updated this object.
kubamracek added reviewers: dvyukov, kcc, glider.
kubamracek added a project: Restricted Project.
dvyukov added inline comments.Jun 30 2016, 5:37 AM
lib/tsan/rtl/tsan_libdispatch_mac.cc
491

Please outline in a comment here what operations we want to synchronize with what.
It's difficult to reconstruct the intention from the acquires and releases on queues and channels scattered across all these functions.

499

dispatch_callback_wrap will Acquire on the context address, which is at best wasteful.
What do we want to synchronize with what? Why do we acquire/release on the queue address? Do we want to synchronize all reads and writes? If we want to synchronize submission with callback execution, then the existing dispatch_callback_wrap machinery will do what we need in a more precise way (using the context address).
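
(For context, a simplified sketch of the context-address machinery referred to here; this is not the actual tsan source, and Release/Acquire stand in for the runtime calls:)

  typedef unsigned long uptr;

  struct tsan_block_context_t {
    void *orig_context;
    void (*orig_work)(void *);
  };

  // At submission time the interceptor releases on the heap-allocated
  // context and submits the wrapper instead of the user callback:
  //   Release(thr, pc, (uptr)new_context);
  //   REAL(dispatch_async_f)(queue, new_context, dispatch_callback_wrap);

  // The wrapper acquires on the same address before running the user
  // callback, so submission happens-before execution:
  static void dispatch_callback_wrap(void *context) {
    tsan_block_context_t *ctx = (tsan_block_context_t *)context;
    // Acquire(thr, pc, (uptr)ctx);
    ctx->orig_work(ctx->orig_context);
  }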

527

Same here.

558

Documentation says:

The barrier operation applies to the channel’s file descriptor and not to a specific channel. In other words, if multiple channels are associated with the same file descriptor, a barrier operation scheduled on any of the channels acts as a barrier across all of the channels. All previously scheduled operations on any of those channels must complete before the barrier block is executed.

But here you synchronize on the channel address, so I guess we will have some false positives.

Do you release on the channel in read/write to synchronize with barriers? If so, it seems to be exactly the kind of reader/writer synchronization we added for barriers recently. Can we reuse that machinery? It captures precisely who needs to synchronize with whom.
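
(To make the documented semantics concrete, a minimal sketch using the standard dispatch_io API; the file path is made up:)

  #include <dispatch/dispatch.h>
  #include <fcntl.h>

  int main() {
    dispatch_queue_t q = dispatch_queue_create("example.io", NULL);
    int fd = open("/tmp/example.txt", O_RDWR);
    dispatch_io_t ch1 = dispatch_io_create(DISPATCH_IO_RANDOM, fd, q, ^(int error) {});
    dispatch_io_t ch2 = dispatch_io_create(DISPATCH_IO_RANDOM, fd, q, ^(int error) {});
    // Per the documentation quoted above, a barrier on ch1 also waits for
    // operations previously scheduled on ch2, because both channels share
    // the same file descriptor. Synchronizing only on the channel address
    // misses the ch2 -> barrier edge.
    dispatch_io_barrier(ch1, ^{
      // All previously scheduled operations on ch1 AND ch2 have completed.
    });
    dispatch_main();
  }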

573

When is the cleanup handler executed? The documentation is quite vague on this. Is it executed when the channel is closed? If so, should we use a more specific address to not interfere with all other operations (e.g. chan+8)?

617

Where is the pairing acquire?

kubamracek added inline comments.Jun 30 2016, 6:05 AM
lib/tsan/rtl/tsan_libdispatch_mac.cc
499

Yes, I want to synchronize submission with callback execution, plus the usual “within-one-queue” rules. I wanted to avoid malloc’ing the context, but I guess it’s okay here.

558

The io_barrier case is a bit trickier: it's not submitted to the same queue as the read/write callbacks, so users shouldn't be using it to synchronize with their own code. So I basically only care about synchronizing submission and execution.

573

The cleanup handler is *submitted* (executed asynchronously) when the channel is closed explicitly (dispatch_io_close) or implicitly by releasing all references to the channel (dispatch_release or removing the reference in ARC). The second case isn’t handled by this patch, because I didn’t want to get into the complexity of intercepting dispatch_release.
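
(A minimal sketch of the explicit-close path that the patch handles, per the comment above; the file path is made up:)

  #include <dispatch/dispatch.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main() {
    dispatch_queue_t q = dispatch_queue_create("example.io", NULL);
    int fd = open("/tmp/example.txt", O_RDONLY);
    dispatch_io_t ch = dispatch_io_create(DISPATCH_IO_STREAM, fd, q, ^(int error) {
      // Submitted asynchronously once the channel is closed.
      close(fd);
    });
    dispatch_io_close(ch, 0);  // explicit close: the cleanup handler gets scheduled
    dispatch_main();
  }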

617

The Acquire is in the cleanup handler in dispatch_io_create_*.

kubamracek added inline comments.Jun 30 2016, 6:24 AM
lib/tsan/rtl/tsan_libdispatch_mac.cc
558

Ah, so the issue is that we don’t have a dispatch_queue_t parameter here, so we can’t simply use the regular dispatch_callback_wrap...

dvyukov added inline comments.Jun 30 2016, 6:40 AM
lib/tsan/rtl/tsan_libdispatch_mac.cc
558

dispatch_callback_wrap uses the dispatch_queue_t only as a synchronization address and to determine whether the queue is serial.

What about adding is_serial to tsan_block_context_t and then having two AllocContext variants: one that takes a dispatch_queue_t and uses IsQueueSerial(q); the second just takes an uptr addr and a bool is_serial from the caller?
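
(A hypothetical standalone sketch of that proposal; the real AllocContext also takes ThreadState/pc and uses the tsan allocator, all elided here, and IsQueueSerial is stubbed out:)

  #include <dispatch/dispatch.h>
  #include <cstdint>

  typedef uintptr_t uptr;  // stand-in for tsan's integer-pointer type

  struct tsan_block_context_t {
    uptr sync_addr;   // address used for the Acquire/Release pair
    bool is_serial;   // drives the "within-one-queue" ordering rules
    void *orig_context;
    dispatch_function_t orig_work;
  };

  static bool IsQueueSerial(dispatch_queue_t q) { return true; }  // stub

  // Variant 1: as today, derive both fields from the queue.
  static tsan_block_context_t *AllocContext(dispatch_queue_t q, void *orig_context,
                                            dispatch_function_t orig_work) {
    tsan_block_context_t *c = new tsan_block_context_t;
    c->sync_addr = (uptr)q;
    c->is_serial = IsQueueSerial(q);
    c->orig_context = orig_context;
    c->orig_work = orig_work;
    return c;
  }

  // Variant 2: for call sites without a dispatch_queue_t (e.g. the IO
  // handlers), take the synchronization address and serialness explicitly.
  static tsan_block_context_t *AllocContext(uptr addr, bool is_serial,
                                            void *orig_context,
                                            dispatch_function_t orig_work) {
    tsan_block_context_t *c = new tsan_block_context_t;
    c->sync_addr = addr;
    c->is_serial = is_serial;
    c->orig_context = orig_context;
    c->orig_work = orig_work;
    return c;
  }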

kubamracek updated this revision to Diff 63065.Jul 7 2016, 6:39 AM

Updating the patch per review comments. Adding more test cases.

dvyukov added inline comments.Jul 8 2016, 9:29 AM
lib/tsan/rtl/tsan_libdispatch_mac.cc
512

Here is something I am missing.
Is new_context captured by reference or by value? The Internet says that __block makes it captured by reference. But then doesn't new_h refer to a dangling stack location? new_context is stack-allocated in this function, but new_h will be executed some time later.
And if it is captured by value, then synchronizing on the address &new_context is pointless -- the closure will have a different address.
What am I missing?

652

Also here: is it captured by value or by reference?
Or are such variables somehow magically promoted to the heap?

kubamracek added inline comments.Jul 8 2016, 2:10 PM
lib/tsan/rtl/tsan_libdispatch_mac.cc
512

__block turns the variable into a reference-counted object that is captured by reference. It’s actually allocated as part of the block. When the block is promoted to a heap object (via Block_copy), the variable is promoted as well. Note that &new_context is different before the Block_copy and after it (before it points to the stack, after it points to the heap).

Does this make sense? You can take a look at this via clang -rewrite-objc file.m -o -, which rewrites all the Obj-C and block constructs into plain C/C++ code. But the full mechanism is pretty complex and the generated code is hard to read.
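
(A minimal standalone demonstration of this behavior: the address of a __block variable changes once the block is copied to the heap. Build with clang++ -fblocks on Darwin:)

  #include <Block.h>
  #include <cstdio>

  int main() {
    __block int v = 0;
    void (^blk)(void) = ^{ v++; };
    std::printf("before copy: %p\n", (void *)&v);  // points into the stack
    void (^heap_blk)(void) = Block_copy(blk);
    std::printf("after copy:  %p\n", (void *)&v);  // now points into the heap copy
    heap_blk();
    Block_release(heap_blk);
  }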

dvyukov accepted this revision.Jul 11 2016, 1:42 AM
dvyukov edited edge metadata.
dvyukov added inline comments.
lib/tsan/rtl/tsan_libdispatch_mac.cc
512

Okay

514

I wonder if it is possible to make dispatch_callback_wrap forward any number of arguments with some varargs template magic. Then we could write just:
REAL(dispatch_read)(fd, length, q, dispatch_callback_wrap<dispatch_data_t, int>);
Up to you. No need to do it in this change.
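
(A standalone sketch of that idea; the names and types here are illustrative, not tsan's. One variadic trampoline, instantiated per handler signature, performs the synchronization step and forwards all arguments:)

  #include <cstdio>

  struct block_context_t {
    void *orig_handler;  // type-erased original callback
    void *orig_context;
  };

  template <typename... Args>
  static void callback_wrap(void *context, Args... args) {
    block_context_t *ctx = (block_context_t *)context;
    // Acquire(thr, pc, (uptr)ctx) would go here in the real runtime.
    ((void (*)(void *, Args...))ctx->orig_handler)(ctx->orig_context, args...);
  }

  static void on_read(void *user, long length, int error) {
    std::printf("user=%p length=%ld error=%d\n", user, length, error);
  }

  int main() {
    block_context_t ctx = {(void *)&on_read, nullptr};
    callback_wrap<long, int>(&ctx, 4096L, 0);  // one instantiation per signature
  }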

This revision is now accepted and ready to land.Jul 11 2016, 1:42 AM

Thanks for the review!

This revision was automatically updated to reflect the committed changes.