We load Python modules in two main ways: explicitly via "command script import", and automatically when we read in a dSYM and find a scripting resource in it. The latter case currently has a flaw: when the scripting resource from the dSYM is loaded, the target it is being loaded for is still in the process of being created, so the module being imported has no way of knowing what that target is. Even if the debugger had selected this new target before loading the scripting resource, the module would still be relying on the "currently selected target", which is racy. It's better to pass the target explicitly.
This patch adds detection of an optional entry point:
__lldb_init_module_with_target
If that function is present, LLDB will run it, passing in the target, before running __lldb_init_module.
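A minimal sketch of the dispatch this describes. Only the two entry-point names come from the patch; the helper name run_scripting_resource and its exact arguments are assumptions for illustration:

```python
# Hypothetical sketch of how the loader might dispatch to the two
# entry points when sourcing a dSYM scripting resource. Only
# __lldb_init_module_with_target and __lldb_init_module are from
# the patch; run_scripting_resource and its signature are made up.
def run_scripting_resource(module, debugger, target, internal_dict):
    with_target = getattr(module, "__lldb_init_module_with_target", None)
    if target is not None and callable(with_target):
        # Pass the target explicitly rather than relying on the
        # racy "currently selected target".
        with_target(debugger, target, internal_dict)
    # The regular init still runs afterwards, in both cases.
    plain = getattr(module, "__lldb_init_module", None)
    if callable(plain):
        plain(debugger, internal_dict)
```

Note that with this shape the target-aware routine runs first, so it can prime state that the regular init then consumes.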
I considered making an overload of __lldb_init_module that also takes the target, but overloading by argument count tends to be confusing. Separate entry points also allow more convenient "dual use" modules: when the module is being sourced for a target, __lldb_init_module_with_target can prime state for the regular init, and when it isn't, only the regular init runs.
I'm surprised we might call both. The way I expected this to work is that if __lldb_init_module_with_target is defined, that's what we use if we have a target, and otherwise we fall back to __lldb_init_module, assuming it's defined.
What's the benefit of calling both? Do you expect the implementations to be different? Or do you think it's more likely that the implementations will be similar, just with one having access to the target?