
[Symbol] Improve TypeSystemMap mutex safety
AbandonedPublic

Authored by xiaobai on Jul 19 2019, 3:29 PM.

Details

Summary

Relying on m_clear_in_progress to guard modifications to m_map is
dangerous. It's entirely possible for one thread to invoke
TypeSystemMap::Clear(), lock the mutex, and copy the map for finalization
while another thread tries to add a new type system. Because of that
unsynchronized access, the newly added type system might never be
finalized.

I propose we remove the m_clear_in_progress variable entirely and rely
on the mutex.
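To illustrate the proposed approach, here is a minimal sketch, assuming simplified stand-in types (TypeSystemMapSketch, TypeSystem, and Finalize are illustrative names, not LLDB's actual API): both insertion and Clear() synchronize on the same mutex, with no clear-in-progress flag, and Clear() moves the entries out under the lock before finalizing them:

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Illustrative stand-in for LLDB's TypeSystem.
struct TypeSystem {
  bool finalized = false;
  void Finalize() { finalized = true; }
};

class TypeSystemMapSketch {
public:
  void AddToMap(const std::string &lang, std::shared_ptr<TypeSystem> ts) {
    std::lock_guard<std::mutex> guard(m_mutex); // serializes with Clear()
    m_map[lang] = std::move(ts);
  }

  void Clear() {
    // Move the contents out while holding the lock, then finalize with
    // the lock released so teardown can't re-enter and block on m_mutex.
    std::map<std::string, std::shared_ptr<TypeSystem>> local;
    {
      std::lock_guard<std::mutex> guard(m_mutex);
      local.swap(m_map);
    }
    for (auto &pair : local)
      pair.second->Finalize();
  }

  size_t size() {
    std::lock_guard<std::mutex> guard(m_mutex);
    return m_map.size();
  }

private:
  std::mutex m_mutex;
  std::map<std::string, std::shared_ptr<TypeSystem>> m_map;
};
```

Because every mutation path takes m_mutex, a concurrent AddToMap either lands before the swap (and gets finalized) or after it (and stays in the fresh map), with no window for a lost, unfinalized type system.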

Event Timeline

xiaobai created this revision. Jul 19 2019, 3:29 PM
xiaobai abandoned this revision. Jul 19 2019, 4:03 PM

Actually from the looks of it, I completely misunderstood what's going on here. It looks like AddToMap should only be called by things that hold the mutex, meaning that this change isn't actually necessary. I do think that makes this code kind of frustrating to understand though. Closing.

Maybe make AddToMap take a lock_guard parameter, as recommended here:

https://stackoverflow.com/questions/28825749/ensuring-that-current-thread-holds-a-lock-on-a-c11-mutex

?
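A sketch of that pattern, under the assumption of a simplified registry class (Registry, Add, and AddLocked are hypothetical names, not LLDB's): the private helper takes an unused std::lock_guard reference, so it cannot be called without a live guard on the mutex in scope:

```cpp
#include <map>
#include <mutex>
#include <string>

class Registry {
public:
  void Add(const std::string &key, int value) {
    std::lock_guard<std::mutex> guard(m_mutex);
    AddLocked(guard, key, value);
  }

  int Get(const std::string &key) {
    std::lock_guard<std::mutex> guard(m_mutex);
    return m_map.at(key);
  }

private:
  // The lock_guard parameter is never read; it exists to document, and
  // enforce at the type level, that the caller already holds m_mutex.
  void AddLocked(std::lock_guard<std::mutex> &, const std::string &key,
                 int value) {
    m_map[key] = value;
  }

  std::mutex m_mutex;
  std::map<std::string, int> m_map;
};
```

It is not airtight (a caller could construct a guard on the wrong mutex), but it makes "must be called with the lock held" part of the signature rather than a comment.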

I fear this will deadlock again now. The original reason for m_clear_in_progress was:

r260624 | jingham | 2016-02-11 16:03:19 -0800 (Thu, 11 Feb 2016) | 14 lines

When calling TypeSystemMap::Clear, objects being destroyed in the process of
clearing the map ended up calling back into the TypeSystemMap to do lookups.
Not a good idea, and in this case it would cause a deadlock.

You would only see this when replacing the target contents after an exec,
and only if you had stopped before the exec, evaluated an expression,
then continued on to the point where you did the exec.

Fixed this by making sure TypeSystemMap::Clear tears down the TypeSystems
in the map before clearing the map. I also added an expression before the
exec to TestExec.py so that we'll catch this issue if it crops up again
in the future.

rdar://problem/24554920
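The deadlock the commit describes can be sketched as follows, assuming a toy map whose elements run a callback on teardown (CallbackMap and its members are illustrative names). If Clear() still held m_mutex while the callback re-entered Lookup(), a non-recursive std::mutex would deadlock; moving the entries out first means the lock is already released when teardown runs:

```cpp
#include <functional>
#include <map>
#include <mutex>
#include <string>

class CallbackMap {
public:
  using Callback = std::function<void(CallbackMap &)>;

  void Add(const std::string &key, Callback on_destroy) {
    std::lock_guard<std::mutex> guard(m_mutex);
    m_map[key] = std::move(on_destroy);
  }

  bool Lookup(const std::string &key) {
    // Would deadlock if a Clear() in progress still held m_mutex.
    std::lock_guard<std::mutex> guard(m_mutex);
    return m_map.count(key) != 0;
  }

  void Clear() {
    std::map<std::string, Callback> local;
    {
      std::lock_guard<std::mutex> guard(m_mutex);
      local.swap(m_map);
    } // lock released before any teardown callback fires
    for (auto &pair : local)
      pair.second(*this); // safe: Lookup() can take m_mutex again
  }

private:
  std::mutex m_mutex;
  std::map<std::string, Callback> m_map;
};
```

This is why simply holding the mutex across the whole of Clear() is not an option here: teardown of a TypeSystem may call back into the map, so the map must be drained before those destructor-time lookups happen.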