This patch adds release and acquire semantics for dispatch groups, plus a test case.
Diff Detail
- Repository: rL LLVM
Event Timeline
LGTM
It seems there is some potential for misuse of the group API. For example, it is possible to "enter" the group *inside* the async piece of work, instead of before spawning that async piece of work (a quite common bug with Go's WaitGroup). Also, since ref-counting is involved, the API can be subject to racy use-after-frees (e.g. with dispatch_group_notify instead of dispatch_group_async).
I don't know how common such bugs are in real life. But then again, we won't know until we implement detection for them. You can look at https://github.com/golang/go/blob/master/src/sync/waitgroup.go for some ideas (search for "race").
lib/tsan/rtl/tsan_libdispatch_mac.cc:193 ↗ (On Diff #41403)

Why don't we need Block_copy here, but need it below? Is the block automatically copied when it's captured by another block?
lib/tsan/rtl/tsan_libdispatch_mac.cc:193 ↗ (On Diff #41403)

Technically, we don't need Block_copy for correctness, because the function that will store a reference to the block for later (dispatch_async in this case) will do it anyway. The reason for a Block_copy here is to get rid of a very common false positive: Block_copy calls malloc, and the malloc'd region is then accessed from another thread. Since that copy happens *inside* dispatch_async, it's already after we've Release()'d. So I'm just forcing the allocation to happen early (before Release()). I'm not calling Block_copy explicitly here, because WRAP(dispatch_async) will call it anyway. Below, we're calling REAL(...), so there we do need to Block_copy.