
[lldb-vscode] Read requests asynchronously
Abandoned · Public

Authored by wallace on Apr 23 2021, 1:36 PM.

Details

Reviewers
clayborg
kusmour
Summary

Currently, lldb-vscode reads and processes requests one at a time, which
doesn't match how interactive the UI is. What happens when LLDB is working
hard parsing symbols while the user is hovering over variables, pressing
resume multiple times, expanding entries in the variables view, etc.? In that
case LLDB will eventually process all of these requests one by one, even
though the user has already pressed resume, which means most of the responses
will be discarded by the UI. This imposes a big burden on LLDB.

A way to fix this is to parallelize reading and processing requests, so that
while one request is being processed, the reader can filter out pending
requests that have become useless.

This diff only contains the concurrent queue used for this. If this code makes
sense, I'll proceed to add some filtering logic for pending requests.
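
As a rough illustration, here is a minimal sketch of the kind of thread-safe
queue this describes, built on a mutex and a condition variable. The Request
struct and the RequestQueue name and members are illustrative stand-ins, not
the actual code in this diff.

  #include <condition_variable>
  #include <deque>
  #include <mutex>
  #include <string>
  #include <utility>

  // Stand-in for a parsed DAP packet; the real code keeps JSON objects.
  struct Request {
    std::string command; // e.g. "next", "threads", "scopes"
    std::string payload; // remaining JSON body, elided here
  };

  class RequestQueue {
  public:
    // Producer side: the reader thread calls this after parsing a packet.
    void Push(Request request) {
      {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_packets.push_back(std::move(request));
      }
      m_condition.notify_one();
    }

    // Consumer side: blocks until a packet is available, then removes and
    // returns it.
    Request Pop() {
      std::unique_lock<std::mutex> lock(m_mutex);
      m_condition.wait(lock, [this] { return !m_packets.empty(); });
      Request request = std::move(m_packets.front());
      m_packets.pop_front();
      return request;
    }

  private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    std::deque<Request> m_packets;
  };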

Diff Detail

Event Timeline

wallace requested review of this revision.Apr 23 2021, 1:36 PM
wallace created this revision.
Herald added a project: Restricted Project. Apr 23 2021, 1:36 PM

This will be tricky to do right, and I really don't know how much extra performance we will get out of this given all of the possible issues we can introduce by multi-threading things. That being said, I am happy to help see if this will actually improve performance.

The first issue with adding threading to packet handling is that it introduces a large overhead when processing packets. Having a read thread reading packets and using a mutex + condition variable slowed down lldb-server's packet handling way too much. I know that the VS Code DAP packets aren't sent in the same volume, so it may not make a difference, but we should make sure this doesn't slow things down.

The second issue is we need to understand the packets and do something intelligent with them based on what they are. For example, if we get a packet sequence like:

  • step
  • threads
  • scopes
  • get locals from scopes
  • step
  • threads
  • scopes
  • get locals from scopes

We will need to know that we must execute the "step" requests, and as soon as another "step" is received, we could just return an error to the "threads", "scopes", and "get locals" requests in between, so that if we haven't processed the first "step" yet, the queue simplifies down to the following (a sketch of this filtering appears after the list):

  • step
  • step
  • threads
  • scopes
  • get locals from scopes
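
A rough sketch of what that collapsing could look like, built on top of the
RequestQueue sketched earlier (and assuming a PushFiltered member is declared
on it). The DAP command strings and the SendSupersededError helper are
illustrative assumptions, not code from this diff.

  #include <algorithm>

  // Queries about the current stop that become meaningless once we know the
  // process is about to run again.
  bool IsStaleOnceResumed(const std::string &command) {
    return command == "threads" || command == "scopes" ||
           command == "variables" || command == "stackTrace";
  }

  // Hypothetical helper: answer a pending request with an error response
  // instead of ever executing it.
  void SendSupersededError(const Request &pending) {
    (void)pending; // a real implementation would write a DAP error response
  }

  // Like Push(), but when a process-control request arrives, fail the pending
  // state queries that it makes meaningless before enqueueing it.
  void RequestQueue::PushFiltered(Request request) {
    std::lock_guard<std::mutex> lock(m_mutex);
    if (request.command == "next" || request.command == "stepIn" ||
        request.command == "continue") {
      // Move the stale queries to the back, answer them with errors, and
      // drop them from the queue.
      auto keep_end = std::stable_partition(
          m_packets.begin(), m_packets.end(), [](const Request &pending) {
            return !IsStaleOnceResumed(pending.command);
          });
      std::for_each(keep_end, m_packets.end(), SendSupersededError);
      m_packets.erase(keep_end, m_packets.end());
    }
    m_packets.push_back(std::move(request));
    m_condition.notify_one();
  }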

The other tricky issue is that everything that controls the process, or requires the process to stay stopped while the request is handled (like "threads", "scopes", and "get locals"), needs to happen on the same thread anyway. But that is almost all of the packets to begin with, so we really won't be able to multi-thread the handling of packets. Think of what would happen if you got the following packets:

  • step
  • continue
  • kill

We can't grab each of these and run each one on a separate thread, because what if we run the "kill" first, then the "continue", then the "step"? So the packets we receive can't just be run on different threads without risking things not making sense and messing up the debug session.

That leads me to what we can do: receive all packets on a read thread, handle them one by one on the main thread, and possibly return errors to the ones that don't need to be done because we know there is another process-control packet after them. But I am not even sure a debug adapter should be doing this, as I would hope the IDE would have a small timeout before requesting the locals after, say, stepping or resuming...
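
For reference, the two-thread arrangement described above could look roughly
like this, reusing the RequestQueue sketch from the summary. ReadAndParsePacket
and HandleRequest are hypothetical placeholders for the existing packet I/O and
dispatch code, not real lldb-vscode functions.

  #include <thread>

  Request ReadAndParsePacket() {
    // Placeholder: the real code reads a Content-Length framed JSON packet
    // from stdin and parses it.
    return Request{"threads", "{}"};
  }

  void HandleRequest(const Request &request) {
    // Placeholder: the real code dispatches to the per-command handlers.
    (void)request;
  }

  int main() {
    RequestQueue queue;

    // Reader thread: only reads, parses, and (optionally) filters packets.
    std::thread reader([&queue] {
      while (true)
        queue.Push(ReadAndParsePacket());
    });

    // Main thread: everything that touches the process stays here, so
    // packets are still handled strictly in order.
    while (true)
      HandleRequest(queue.Pop());

    reader.join(); // unreachable in this sketch; real code needs a shutdown path
  }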

lldb/tools/lldb-vscode/lldb-vscode.cpp
3196

Rename to packets?

3198

Rename to "m_condition"? The name m_can_consume seems like a bool.

wallace abandoned this revision.Apr 28 2021, 3:29 PM

I'd rather focus on parallelizing symbol preloading.