This is (probably) not required by LSP, but at least one buggy client wants it.
It also simplifies some tests - changed a few completion tests to use -pretty.
Diff Detail
- Repository
- rL LLVM
Event Timeline
Did sorting get hardcoded? libclang makes it optional (clang_sortCodeCompletionResults). If it's hardcoded, it becomes a performance problem, since some clients may wish to order the results themselves with private heuristics. Personally, I'm not in favor of hardcoding it.
Is the performance regression here measurable? If clients are re-sorting themselves, then I think this can only add a small constant factor, and I suspect it's dwarfed by serialization/parsing.
You can gather performance traces with clangd -trace ~/trace.json and view the RPCs in Chrome's chrome://tracing UI. If you have cases where there's a large difference before/after this patch, I'd definitely like to fix that.
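For reference, the trace file uses the Chrome trace event format, so it can also be inspected offline rather than only in the browser UI. A minimal sketch, assuming the standard format (the `summarize_trace` helper is hypothetical, and only counts "X" complete events, whose durations are in microseconds):

```python
import json
from collections import defaultdict

def summarize_trace(path):
    """Sum the total duration of each named event in a Chrome-trace-format file."""
    with open(path) as f:
        data = json.load(f)
    # The file may be a bare event list or wrapped in {"traceEvents": [...]}.
    events = data["traceEvents"] if isinstance(data, dict) else data
    totals = defaultdict(int)
    for ev in events:
        # "X" (complete) events carry their duration, in microseconds, in "dur".
        if ev.get("ph") == "X" and "dur" in ev:
            totals[ev.get("name", "?")] += ev["dur"]
    return dict(totals)
```

Sorting the resulting totals gives a quick picture of where time goes per RPC.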
I'm still in the process of building a general JSON-RPC/LSP client and haven't started consuming clangd yet. But I do have previous experience with libclang, and I don't know whether clangd can return completion results at global scope (not only on member access), like libclang does. This is my main concern, because global-scope completion is one of the most useful use cases, even more so in languages like C, but the results can be large, on the order of 43k results simply from including a header like windows.h. Since I haven't started consuming clangd, I don't know whether it even supports returning all the results from libclang so the client can do its own filtering and sorting.
If you have concerns about clangd's responsiveness, I recommend testing with VSCode, which has a high-quality LSP client. The clangd plugin can be installed from the extension browser.
But I do have previous experience with libclang, and I don't know whether clangd can return completion results at global scope (not only on member access), like libclang does.
Yes, this works today. In my usage on the LLVM codebase the server side is fast (~0.5 ms), but VSCode takes ~200 ms to process the results, and other clients (e.g. an experimental YCMd integration) are much slower. Based on this, I'm not concerned about the server-side sorting overhead.
This is my main concern, because global-scope completion is one of the most useful use cases, even more so in languages like C, but the results can be large, on the order of 43k results simply from including a header like windows.h. Since I haven't started consuming clangd, I don't know whether it even supports returning all the results from libclang so the client can do its own filtering and sorting.
This will work, but it's not really possible to make it fast. Fully populating, serializing and deserializing 43k results so the client can see them is a lot of work.
The only way to make this fast is to score and filter results before serializing them. D39852 goes down this path. (It can be disabled if you're set on filtering client-side.)
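The general shape of that approach can be sketched in Python. Everything here is an illustrative assumption, not D39852's actual logic: the `completion_reply` helper, the field names, and the crude prefix filter standing in for real fuzzy scoring.

```python
import json

def completion_reply(candidates, query, limit=100):
    """Filter and score candidates server-side, then serialize only the
    top `limit` items instead of the full result set."""
    # Keep only candidates matching what the user has typed so far.
    # (A crude prefix filter; a real implementation would fuzzy-match.)
    matching = [c for c in candidates
                if c["label"].lower().startswith(query.lower())]
    # Best score first; ties broken alphabetically for stable output.
    matching.sort(key=lambda c: (-c["score"], c["label"]))
    # Only the truncated list is ever serialized and sent over the wire.
    return json.dumps(matching[:limit])
```

The key point is that the expensive steps, serialization and client-side parsing, only ever see `limit` items, no matter how many candidates the index produced.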
OK. I do agree that serialization, etc., is a stumbling block for delivering large result sets quickly. I just wanted to warn about this because I've seen the ycmd devs go through it, where sorting was one of the bottlenecks: doing a std::sort over 43k results instead of simply partial-sorting them over a window of 30, which is orders of magnitude faster.
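The partial-sort trick above can be sketched in Python, where `heapq.nsmallest` does the windowed selection (the `top_n` helper and the scores are illustrative):

```python
import heapq
import random

def top_n(results, n=30):
    """Select the n best-ranked results without fully sorting the list.

    Equivalent to sorted(results)[:n], but roughly O(len * log n)
    instead of O(len * log len): the other ~42,970 entries are never ordered.
    """
    return heapq.nsmallest(n, results)  # lowest value = best rank here

random.seed(0)
scores = [random.random() for _ in range(43_000)]  # stand-in for 43k completions
best = top_n(scores)
assert best == sorted(scores)[:30]
```

C++ offers the same shape directly via std::partial_sort or std::nth_element.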
If LSP completion ends up only being feasible with server-side filtering, with its own hard-coded sorting/filtering/ranking mechanisms, that's really sad, because it removes the flexibility of consumers wishing to apply their own (custom/configurable/language-uniform) sorting mechanisms.