This is an archive of the discontinued LLVM Phabricator instance.

[clangd] ignore parallelism level for quick tasks
ClosedPublic

Authored by qchateau on Jan 17 2021, 9:59 AM.

Details

Summary

This allows quick tasks that have no dependencies and need low latency to start running as soon as possible, instead of waiting behind slower work. This is mostly useful for code formatting.


This fixes something that's been annoying me:

  • Open your IDE workspace and its 20 open files
  • Clangd spends 5 minutes parsing it all
  • In the meantime you start to work
  • Save a file, trigger format-on-save, which hangs because clangd is busy
  • You're stuck waiting until clangd is done parsing your files before the formatting and save can take place

Diff Detail

Event Timeline

qchateau created this revision. Jan 17 2021, 9:59 AM
qchateau requested review of this revision. Jan 17 2021, 9:59 AM
nridge added a subscriber: nridge. Jan 17 2021, 2:39 PM
sammccall accepted this revision. Jan 25 2021, 11:10 AM

Save a file, trigger format-on-save, which hangs because clangd is busy

OK, that's really terrible :-( In some ways this is LSP's fault: the whole design is built around async, unreliable language servers, and then they put in a couple of features that really have to be sync.
But those features solve real problems...
We're really lucky here that formatting doesn't need an AST or preamble; there's no deep reason it had to work out that way.

I was pretty skeptical of the implementation at first, but working through the alternatives brought me around.


(Rest is my notes, feel free to ignore)

  1. The technique in the patch: a separate pool for cheap tasks with short deadlines (see the sketch after this list). This gets slow if the tasks don't turn out to be cheap.
  2. Or we could take them off the semaphore altogether. This spawns lots of threads if the tasks don't turn out to be cheap.
  3. Another option is to lean on the fact that the task in question here isn't terribly important - we'd rather give up than queue. We could do this by using try_lock instead of lock on the semaphore. This degrades functionality if we get overlapping requests.
  4. We could record priorities and run the highest-priority task next.
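
To make option 1 concrete, here is a minimal C++ sketch - purely illustrative, not the code in this patch: a dedicated semaphore, separate from the main worker pool's, caps the number of concurrent quick tasks. The names QuickTaskLimit, QuickSem and runQuickTask are hypothetical, and C++20's std::counting_semaphore stands in for whatever semaphore the real code uses.

    #include <functional>
    #include <semaphore>
    #include <thread>
    #include <utility>

    // Hypothetical cap on concurrent quick tasks; the review discussion
    // below suggests something like 4 or AsyncThreadsCount rather than 1.
    constexpr int QuickTaskLimit = 4;
    std::counting_semaphore<QuickTaskLimit> QuickSem(QuickTaskLimit);

    void runQuickTask(std::function<void()> Action) {
      std::thread([Action = std::move(Action)] {
        // Blocks only when QuickTaskLimit quick tasks are already running,
        // i.e. it only "gets slow if the tasks don't turn out to be cheap".
        QuickSem.acquire();
        Action();
        QuickSem.release();
      }).detach();
    }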

4 is a non-starter without pre-emption: we have to wait for a task to finish, and probably all running tasks are slow.

3 trades away reliability for latency guarantees. I don't like it because it may do so even below the threshold where latency matters.

We could combine 1+3: a separate fast-tasks semaphore, but try_lock. This will still format under load, but will refuse to ever form a queue.
I guess the most relevant scenario is some large save-all operation. Still, we don't have any evidence this is going to be slow enough that we should choose to be unreliable.
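
And a sketch of the 1+3 combination just described, reusing the hypothetical QuickSem from the sketch above: try_acquire never queues, so when all quick-task slots are busy the request is simply refused and the caller degrades (e.g. skips format-on-save) instead of waiting.

    // Returns false instead of queueing when all quick-task slots are busy.
    bool tryRunQuickTask(std::function<void()> Action) {
      if (!QuickSem.try_acquire())
        return false; // give up rather than queue (option 3's trade-off)
      std::thread([Action = std::move(Action)] {
        Action();
        QuickSem.release();
      }).detach();
      return true;
    }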

1 vs 2 is partly a question of how you want to fail: queueing seems just as good as leaking threads to me.
But it also determines whether we get any parallelism in a large save-all scenario. Getting none seems a little silly to me - so maybe we should increase the semaphore size.

clang-tools-extra/clangd/TUScheduler.cpp
1268

A limit of 1 means that if we get *any* concurrent requests, they'll execute serially instead of in parallel.

I get the desire to limit this, and the idea that the tasks should be fast, but this seems like it'll become the bottleneck if the user hits "save all" after a rename touches 20 files in their editor.

We might as well set it to 4, or AsyncThreadsCount, or something?
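
(For a rough sense of scale, with purely hypothetical numbers: if "save all" issues 20 format requests of ~50 ms each, a limit of 1 serializes them into roughly a second of latency for the last file, while a limit of 4 finishes in about 250 ms.)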

1345

nit: I'd slightly prefer runQuick, as it groups alphabetically with the other methods.

And that way "quick" is jammed ambiguously between the "run" and the "action", and applies to both :-)

This revision is now accepted and ready to land. Jan 25 2021, 11:10 AM
qchateau updated this revision to Diff 319083. Jan 25 2021, 11:58 AM
  • [clangd] fix nits

I applied your suggestions, as I agree with both. I chose to use Opts.AsyncThreadsCount instead of a hard-coded constant; this way the "formatting speed" grows as the user allows more parallelism, which IMO makes sense.

I agree the implementation has drawbacks, but this is the never-ending problem of non-preemptible tasks. You can land this if it looks good to you.

Commit email: quentin.chateau@gmail.com

This revision was automatically updated to reflect the committed changes.