The test checks that the context in which we run addDocument can be used to
approximate the lifetime of processing the file for diagnostics.
Details
- Reviewers: sammccall
Diff Detail
- Repository: rCTE Clang Tools Extra
- Build Status: Buildable 24414, Build 24413: arc lint + arc unit
Event Timeline
clangd/TUScheduler.h, line 77 (On Diff #171981):
To avoid boilerplate this could be changed to `using DiagnosticsResult = Optional<vector<Diag>>;`. Would still keep it a named typedef to allow documenting what the None value means.
Discussed offline a bit. We should be judicious with intrusive changes to support out-of-tree clients, and this interface seems a bit messy.
There are alternatives with advantages:
1. Less intrusive: clients can create a context before calling addDocument and observe its death to know that there will be no more notifications. This is a bit subtle; we can add a test.
2. Avoid the special case: we could store diagnostics so we can always emit them (move them out of ParsedAST). I actually think they belong in ParsedAST, and this is a bit wasteful, so I'm not sure about this.
3. Address the use case directly: the client wants to know what's going on with the file. We could define a per-file status structure and send the client a stream of updates (via LSP: a notification, like diagnostics themselves). This could be used for "is building", "has a fallback CompileCommand", etc., and for debug info: size in index, AST size, preamble size...
My preference would be pushing clients towards option 1 for now and getting to option 3 eventually.
- Instead of changing the interface, added a test showing we can rely on Context to give us what we need.
unittests/clangd/ClangdTests.cpp, line 1037:
This tests a lot of things, and after 10 minutes I can't understand exactly what.
AIUI the thing we really want to verify is that if TUScheduler::update decides not to update, then context death is a timely signal that it did so. (Doing this test at the ClangdServer level makes it more black-box, but I don't think that justifies the complexity.)
So isn't it enough to schedule 3 updates, make sure the middle one does no work, and verify its context is destroyed in between the other two? e.g.

```cpp
atomic<int> Counter(0);
TUScheduler S;
Notification Start;
S.update("a.cc", inputs1, WantDiagnostics::Yes, [&] {
  // ensure we finish scheduling before everything finishes, else
  // we're not really testing anything
  Start.wait();
  EXPECT_EQ(1, ++Counter);
});
{
  WithContextValue WatchDestructor(llvm::make_scope_exit([&] {
    EXPECT_EQ(2, ++Counter);
  }));
  S.update("a.cc", inputs1, WantDiagnostics::Yes, [&] {
    FAIL() << "Expect no diagnostics, inputs unchanged";
  });
}
S.update("a.cc", inputs2, WantDiagnostics::Yes, [&] {
  EXPECT_EQ(3, ++Counter);
});
EXPECT_EQ(0, Counter);
Start.notify();
S.blockUntilIdle();
EXPECT_EQ(3, Counter);
```