The debugger itself doesn't get stale between script prompt usages.
Setting this to None also breaks users of the scripting API that used
this property in the background.
Your argument is correct because the interactive script interpreter always belongs to a single debugger. That being said, I don't like this for a few reasons:
- LLDB supports multiple debuggers, and the relationship between the script interpreter and the debugger is an implementation detail. This shouldn't be something the user has to think or know about.
- The convenience variables were introduced with a very specific goal, and empirically only a small subset of users actually understand their purpose and limitations. It's already relatively complex, and adding an exception for the debugger only makes things harder to explain and understand.
- There's always a way to get the debugger in a non-interactive context (e.g. __lldb_init_module or from the current frame).
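To make the last point concrete, here is a minimal sketch of the two non-interactive ways to reach the debugger mentioned above. It assumes the file is loaded with `command script import`; the command function name and its registration are illustrative, but the SB API calls are the standard ones.

    import lldb

    _debugger = None

    def __lldb_init_module(debugger, internal_dict):
        # Capture the debugger passed in at import time instead of
        # relying on the lldb.debugger convenience variable.
        global _debugger
        _debugger = debugger

    def my_command(debugger, command, exe_ctx, result, internal_dict):
        # Alternatively, walk up from the current frame in the
        # execution context the command was invoked with.
        frame = exe_ctx.GetFrame()
        dbg = frame.GetThread().GetProcess().GetTarget().GetDebugger()
        result.AppendMessage("debugger id: %d" % dbg.GetID())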
FWIW this has always been LLDB's behavior. You're probably seeing it now because the convenience variables are initialized to None (instead of default constructed), which results in a Python exception when you try to call a method on them. We got some reports of this internally, and in most cases the convenience variables were misused and a potential bug got fixed. There were a few innocent uses, such as in the crashlog.py script, which I've since fixed.
I'm not sure that even this is true (for Python, anyway). IIUC all SBDebuggers share the same Python interpreter (because Python does not allow the creation of completely independent Python contexts). We do some trickery to make it appear as if these were independent, but I am not sure the trickery extends to the lldb module...
It depends on your definition of Python interpreter. The Debugger owns the ScriptInterpreterSP which is what would decide which debugger to put in the convenience variable, but you're right that they all share the same "python instance" under the hood.
What are the boundaries of the stable API, then? This was a public API that was removed, and it broke a plugin I used for vim. The author used threading.Thread(someFunc, (debugger,)) to listen on a socket for fetch requests from lldb outside of the prompt. Not the most beautiful of implementations, but it worked for years on top of a promised public stable API.
As far as I know, the only other ways to do this would be to use the listener/event forwarding mechanism Greg used to set up the curses based GUI in Debugger::EnableForwardEvents or to write an entire new frontend analogous to the lldb tool itself. The latter implementation is a couple orders of magnitude more work to implement for a simple plugin author like this. The former isn't exposed in the SBAPI.
Maybe the SBAPI needs access to the Debugger::m_forward_listener_sp for externalized interfaces (and maybe other usages)? Something like:
    class SBForwardEventListener:
        # called for process events
        def HandleProcessEvent(self, event: lldb.SBEvent): ...
        # called for thread events
        def HandleThreadEvent(self, event: lldb.SBEvent): ...
        # called for breakpoint events
        def HandleBreakpointEvent(self, event: lldb.SBEvent): ...
so that the developer can invoke
    def __lldb_init_module(...):
        debugger.RegisterForwardEventListener(MyListener())
and be informed whenever the curses GUI would be.
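Until something like the proposed SBForwardEventListener exists, a rough approximation is possible with the public SBListener API. This is a sketch under assumptions, not a drop-in replacement for the internal forwarding mechanism: it only subscribes to process state changes, and it assumes a target/process already exists when the thread polls. The debugger is passed by ID rather than by value, which sidesteps the thread-safety issue discussed below.

    import threading

    import lldb

    def listen(debugger_id: int):
        debugger = lldb.SBDebugger.FindDebuggerWithID(debugger_id)
        listener = lldb.SBListener("plugin-listener")
        process = debugger.GetSelectedTarget().GetProcess()
        # Subscribe to process state-change events only; other event
        # bits (stdout, profile data, etc.) could be OR'd in here.
        process.GetBroadcaster().AddListener(
            listener, lldb.SBProcess.eBroadcastBitStateChanged)
        event = lldb.SBEvent()
        while True:
            # Poll with a one-second timeout so the thread can be
            # torn down cleanly if needed.
            if listener.WaitForEvent(1, event):
                state = lldb.SBProcess.GetStateFromEvent(event)
                print("process state changed: %d" % state)

    def __lldb_init_module(debugger: lldb.SBDebugger, internal_dict: dict):
        threading.Thread(
            target=listen, args=(debugger.GetID(),), daemon=True).start()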
The debugger variable is still present in the lldb namespace, it's None instead of a default constructed debugger, so I would argue that this is not an API breaking change. It has always been documented that those variables are only valid in the interactive script interpreter (https://lldb.llvm.org/use/python-reference.html#embedded-python-interpreter) so it's also not really a policy change. I totally understand how annoying it is that this breaks things (I had to fix incorrect uses myself, such as in the crashlog/symbolication script) but I think it's worth it because it prevents accidental misuse.
I'm not sure I understand how this is a problem for the event listener. Why can't you save the debugger in __lldb_init_module and start the event listening thread from there?
I guess if the intention is to keep the debugger instance around, it should work, but at the moment it segfaults pretty quickly with Xcode's lldb using:
    import threading
    import time

    import lldb

    def background(debugger: lldb.SBDebugger):
        while True:
            time.sleep(1)
            print(debugger)

    def __lldb_init_module(debugger: lldb.SBDebugger, internal_dict: dict):
        threading.Thread(target=background, args=(debugger,)).start()
    $ lldb a.out
    (lldb) command script import test.py
    (lldb) // do a few things
    0   lldb                                         0x00000001058d8575 llvm::sys::PrintStackTrace(llvm::raw_ostream&) + 37
    1   lldb                                         0x00000001058d7b55 llvm::sys::RunSignalHandlers() + 85
    2   lldb                                         0x00000001058d8dd6 SignalHandler(int) + 262
    3   libsystem_platform.dylib                     0x00007fff20394d7d _sigtramp + 29
    4   libsystem_platform.dylib                     0x000000010762d680 _sigtramp + 18446603344394422560
    5   liblldbPluginScriptInterpreterPython3.dylib  0x00000001059f1802 _wrap_SBDebugger___str__(_object*, _object*) + 130
    6   Python3                                      0x0000000105bf6453 cfunction_call_varargs + 323
    7   Python3                                      0x0000000105bf5dd6 _PyObject_MakeTpCall + 374
    8   Python3                                      0x0000000105cd613c call_function + 652
    9   Python3                                      0x0000000105cd26d6 _PyEval_EvalFrameDefault + 29782
    10  Python3                                      0x0000000105bf678d function_code_fastcall + 237
    11  Python3                                      0x0000000105bf7192 _PyObject_FastCall_Prepend + 178
    12  Python3                                      0x0000000105c4e527 slot_tp_str + 183
    13  Python3                                      0x0000000105c37a62 PyObject_Str + 146
    14  Python3                                      0x0000000105c09b95 PyFile_WriteObject + 149
    15  Python3                                      0x0000000105cc96f2 builtin_print + 450
    16  Python3                                      0x0000000105c34a82 cfunction_vectorcall_FASTCALL_KEYWORDS + 130
    17  Python3                                      0x0000000105cd6012 call_function + 354
    18  Python3                                      0x0000000105cd278a _PyEval_EvalFrameDefault + 29962
    19  Python3                                      0x0000000105bf678d function_code_fastcall + 237
    20  Python3                                      0x0000000105bf6124 PyVectorcall_Call + 100
    21  Python3                                      0x0000000105cd2a34 _PyEval_EvalFrameDefault + 30644
    22  Python3                                      0x0000000105bf678d function_code_fastcall + 237
    23  Python3                                      0x0000000105cd6012 call_function + 354
    24  Python3                                      0x0000000105cd26b9 _PyEval_EvalFrameDefault + 29753
    25  Python3                                      0x0000000105bf678d function_code_fastcall + 237
    26  Python3                                      0x0000000105cd6012 call_function + 354
    27  Python3                                      0x0000000105cd26b9 _PyEval_EvalFrameDefault + 29753
    28  Python3                                      0x0000000105bf678d function_code_fastcall + 237
    29  Python3                                      0x0000000105bf91e2 method_vectorcall + 322
    30  Python3                                      0x0000000105bf6124 PyVectorcall_Call + 100
    31  Python3                                      0x0000000105d7800a t_bootstrap + 74
    32  Python3                                      0x0000000105d29829 pythread_wrapper + 25
    33  libsystem_pthread.dylib                      0x00007fff20350950 _pthread_start + 224
    34  libsystem_pthread.dylib                      0x00007fff2034c47b thread_start + 15
    fish: 'lldb' terminated by signal SIGSEGV (Address boundary error)
Hmm, the debugger somehow gets borked when it's passed into the background function. If I call GetID() on it, it returns a totally bogus value. I need to read up on how these objects are passed to new threads. The following script, which passes the debugger's ID instead of the object itself, works as expected:
    import threading
    import time

    import lldb

    def background(debugger_id: int):
        debugger = lldb.SBDebugger.FindDebuggerWithID(debugger_id)
        while True:
            time.sleep(1)
            print(debugger)

    def __lldb_init_module(debugger: lldb.SBDebugger, internal_dict: dict):
        threading.Thread(target=background, args=(debugger.GetID(),)).start()