Introduces a helper to collect memory usage metrics from multiple components.
Adds measurements for the dynamic index, the AST cache, and preambles.
Any idea how expensive this is? I suppose TUScheduler::update() is rare enough that it's not a big deal?
I ask because of a bad experience with estimating memory usage inside the critical path on another project ;-)
Maybe we can add a span around this, so we can see in traces whether it's expensive? Or maybe that will turn out to be more expensive than the memory estimation itself?
Isn't FD->Worker->update() above async (with the correct options)? If so, this is still reading the old data, right?
Would it make sense to also collect the number of cached preambles and ASTs at this point? Exported under a different metric, but recorded at the same time, so that the data can be joined together?
Actually, it can potentially be triggered after every keystroke, so it might make sense to move this off the main thread completely. The runWithPreamble action might be a better place to record; it still runs quite often (every signature help and code completion request goes through it), but on a separate thread.
I would expect "that will turn out to be more expensive than memory estimation" to be the case; I'll do some checks, though.
Most of the calculation happening in there is just addition of some members. The source manager worries me a little, though, as it seems to iterate over all the buffers.
Yes, that's true. I was aiming for this to be an estimate rather than an accurate view.
Yeah, that definitely makes sense, but I would rather do that in a separate patch.
nit: This is the same name as getUsedBytes(Key K). Maybe rename to getTotalUsedBytes()?
+1 to moving the call to this into runWithPreamble or something like that. If a request gets cancelled, for example, there's no need to update the memory usage.
I think this deserves a comment.
So technically this is incorrect.
- Rename the overload
- Add comments around possible caveats that might result in inaccuracies.
- Move the metric recording itself into another thread.
- Keep the calculations in the main thread, as they seemed to have <1ms latency even with huge preambles/ASTs.
Just for the record, as I think Kadir and Adam are both aware...
We discussed making this a bit richer and less reliant on static state.
We'd build a tree-shaped profile by passing a tree-builder recursively into various components.
Then the metrics would be exported at the top level, but we'd also want to expose them via debugging actions. (The tree edges from e.g. the dynamic index to individual files would probably be optional: collapsed for metrics but present for debugging.)
LMK if I have this wrong and you want to move forward with a simpler approach.