This is an archive of the discontinued LLVM Phabricator instance.

[libc] Fix mtx_unlock to handle multiple waiters on a single mutex.
Needs Review · Public

Authored by sivachandra on May 21 2020, 12:18 PM.

Details

Reviewers
phosek

Event Timeline

sivachandra created this revision. May 21 2020, 12:18 PM

But now you have a "thundering herd" problem?

Yes, I am aware of this. Sorry for not being more descriptive about what I have in mind. My goal was to implement a self-contained mutex, that is, a mutex which can be used with any threading library. Most other libc implementations use a higher-level futex locking facility (FUTEX_LOCK_PI and friends). We can take the same route here, but it has some shortcomings:

  1. The "fastest" implementation will tie us to a specific threading library (which for us will/should be LLVM libc's threading library) as it will use thread-local storage.
  2. The slowest implementation can avoid TLS, but will require one additional syscall in each of the lock and unlock functions (see the sketch after this list).
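
To make that tradeoff concrete, here is a rough, hypothetical sketch of the FUTEX_LOCK_PI route (not the code in this patch), assuming Linux and the plain syscall() wrapper. The futex word must hold the owner's TID, which is why the "fast" variant wants the TID cached in TLS and the "slow" variant pays an extra gettid() syscall in both lock and unlock:

  #include <atomic>
  #include <cstdint>
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  // 0 => unlocked; otherwise the owner's TID (the kernel may also set FUTEX_WAITERS).
  static std::atomic<uint32_t> futex_word(0);

  static void pi_lock() {
    uint32_t tid = static_cast<uint32_t>(syscall(SYS_gettid)); // TLS would avoid this syscall.
    uint32_t expected = 0;
    // Fast path: acquire an uncontended lock entirely in user space.
    if (futex_word.compare_exchange_strong(expected, tid))
      return;
    // Slow path: the kernel queues us and handles priority inheritance.
    syscall(SYS_futex, &futex_word, FUTEX_LOCK_PI, 0, nullptr, nullptr, 0);
  }

  static void pi_unlock() {
    uint32_t tid = static_cast<uint32_t>(syscall(SYS_gettid)); // Again avoidable with TLS.
    uint32_t expected = tid;
    // Fast path: no waiters recorded, just clear the owner TID.
    if (futex_word.compare_exchange_strong(expected, 0))
      return;
    // Slow path: the kernel hands the lock to exactly one waiter.
    syscall(SYS_futex, &futex_word, FUTEX_UNLOCK_PI, 0, nullptr, nullptr, 0);
  }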

We have two alternatives:

  1. This patch. But, obviously, it leads to the thundering herd problem.
  2. Use a counter to keep track of the current set of waiters. This counter has to be an atomic object, so it will not match the performance of #1 above (sketched after this list).
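
For comparison, here is a rough sketch of alternative #2 (again hypothetical, not this patch): an atomic waiter counter lets mtx_unlock wake a single waiter, and only when someone is actually blocked, instead of waking everyone as in alternative #1. The names and layout are illustrative only:

  #include <atomic>
  #include <cstdint>
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static std::atomic<uint32_t> lock_word(0);    // 0 => unlocked, 1 => locked.
  static std::atomic<uint32_t> waiter_count(0); // Threads currently blocked in FUTEX_WAIT.

  static void counted_lock() {
    uint32_t expected = 0;
    while (!lock_word.compare_exchange_strong(expected, 1)) {
      waiter_count.fetch_add(1);
      // Sleep only while the lock is still held; the kernel rechecks the value for us.
      syscall(SYS_futex, &lock_word, FUTEX_WAIT, 1, nullptr, nullptr, 0);
      waiter_count.fetch_sub(1);
      expected = 0;
    }
  }

  static void counted_unlock() {
    lock_word.store(0);
    // The extra atomic load is the cost relative to alternative #1, which would
    // unconditionally do FUTEX_WAKE with INT_MAX and wake the whole herd.
    if (waiter_count.load() > 0)
      syscall(SYS_futex, &lock_word, FUTEX_WAKE, 1, nullptr, nullptr, 0);
  }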

I am now beginning to feel that going the route of #1 from the first list is probably the right approach. FWIW, the infrastructure we build is required anyway to implement other pieces like recursive locks.
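
As a rough illustration of that last point (hypothetical names, not part of this patch): a recursive lock needs to know the owning thread's TID anyway, which is exactly the per-thread state the TLS-based infrastructure would provide:

  #include <atomic>
  #include <cstdint>
  #include <sys/syscall.h>
  #include <unistd.h>

  struct RecursiveMutex {
    std::atomic<uint32_t> owner_tid{0}; // 0 => unlocked.
    uint32_t depth = 0;

    void lock() {
      uint32_t tid = static_cast<uint32_t>(syscall(SYS_gettid)); // TLS would cache this.
      if (owner_tid.load() == tid) { // Re-acquisition by the current owner.
        ++depth;
        return;
      }
      uint32_t expected = 0;
      while (!owner_tid.compare_exchange_weak(expected, tid))
        expected = 0; // A real implementation would futex-wait here instead of spinning.
      depth = 1;
    }

    void unlock() {
      if (--depth == 0)
        owner_tid.store(0); // A real implementation would also wake a waiter here.
    }
  };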

Just a note: I will update this patch shortly with a better solution.