During active replay, the ::Initialize call is replayed like any other SB API
call and its return value is ignored. Since we can't intercept it, we
terminate here before the uninitialized debugger inevitably crashes.
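As a rough illustration of that control flow (not the actual patch; `InitializeDebugger` is a hypothetical stand-in for the real initialization entry point):

```cpp
#include "llvm/Support/Error.h"

// Hypothetical stand-in for the real initialization entry point.
llvm::Error InitializeDebugger();

void ActiveReplay() {
  // The replayed ::Initialize result can't be intercepted, so check it
  // here and terminate before the uninitialized debugger crashes later.
  if (llvm::Error err = InitializeDebugger())
    llvm::report_fatal_error(std::move(err)); // prints the error, then aborts
}
```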
Diff Detail
- Repository: rG LLVM Github Monorepo
Event Timeline
I wonder if there's a way to add some consistency checks into the (active) replay machinery. Like, maybe we could, for each function that returns an SBError, record a flag saying whether that error was in a success state or not. Then if this flag differs during replay, we know that we have started to diverge and can stop replaying (or at least give a very loud warning about it).
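A minimal sketch of that check, assuming hypothetical `Recorder`/`Deserializer` types for the reproducer machinery (only `SBError::Success()` is real API here):

```cpp
#include "lldb/API/SBError.h"
#include "llvm/Support/ErrorHandling.h"

struct Recorder { void Record(bool value); };                   // hypothetical
struct Deserializer { template <typename T> T Deserialize(); }; // hypothetical

// Capture: remember whether the returned SBError was a success.
void RecordSBErrorResult(Recorder &recorder, const lldb::SBError &result) {
  recorder.Record(result.Success());
}

// Replay: compare the live result against the recorded flag and stop
// replaying (loudly) on the first divergence.
void CheckSBErrorResult(Deserializer &replay, const lldb::SBError &result) {
  if (replay.Deserialize<bool>() != result.Success())
    llvm::report_fatal_error(
        "active replay diverged: SBError success state differs");
}
```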
lldb/source/API/SystemInitializerFull.cpp:45
Print the error that has happened? maybe via report_fatal_error?
On the one hand I like the idea of detecting divergence early, but on the other hand I don't like special-casing SBError. There are a bunch of functions that return a boolean, and some that got error handling after the fact through an overload with an in-out parameter. A more generic approach could be a way to checkpoint the "object registry" and give (some of) the SB classes a private method that returns a "hash" (whatever that means for the reproducers: for an SBError that could be whether it's a success, while for a target or a process it could be whether it's connected) and then have the reproducers periodically compare the state of the registry against the one in the reproducer.
I like that idea.
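A rough sketch of that registry-checkpoint idea; every name here is hypothetical, and the per-class "hash" is whatever coarse state makes sense for each SB class:

```cpp
#include "llvm/Support/ErrorHandling.h"
#include <vector>

// Hypothetical: each SB class exposes a private, coarse state hash to the
// reproducer machinery. For an SBError that's the success bit; for a
// process it could be whether it is connected.
class SBErrorLike {
public:
  bool Success() const;

private:
  friend class RegistryCheckpoint;
  unsigned ReproducerHash() const { return Success() ? 1u : 0u; }
};

class RegistryCheckpoint {
public:
  // Periodically compare the live registry's hashes against the ones
  // recorded in the reproducer.
  static void Verify(const std::vector<unsigned> &recorded,
                     const std::vector<unsigned> &live) {
    if (recorded != live)
      llvm::report_fatal_error(
          "reproducer replay diverged from the recorded session");
  }
};
```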
lldb/source/API/SystemInitializerFull.cpp:45
Btw, there's a new report_fatal_error overload taking an llvm::Error -- I meant to use that one :)
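For reference, that overload consumes the llvm::Error and renders its message itself, so the call site simplifies to roughly:

```cpp
#include "llvm/Support/Error.h"

void Fail(llvm::Error err) {
  // Without the overload, the error has to be rendered manually:
  //   llvm::report_fatal_error(llvm::toString(std::move(err)).c_str());
  // With the llvm::Error-taking overload it's just:
  llvm::report_fatal_error(std::move(err));
}
```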
lldb/source/API/SystemInitializerFull.cpp:45
> Print the error that has happened? maybe via report_fatal_error?