At the moment the IRInterpreter stops interpreting an expression after a hardcoded limit of 4096 instructions. When that limit is reached, interpretation stops and the process is left in whatever state it was in at that point.
This patch replaces the instruction limit with a timeout and uses the user-specified expression timeout as its value.
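As a rough sketch of the idea (not the actual IRInterpreter code; the names and the std::chrono-based timeout type here are illustrative assumptions), the core of the change is to compute a deadline up front and check it in the interpretation loop instead of counting instructions:

  #include <chrono>
  #include <optional>
  #include <vector>

  using Clock = std::chrono::steady_clock;

  // Hypothetical placeholder for one IR instruction.
  struct Instruction {};

  // Returns false if the deadline expired before all instructions were handled.
  bool InterpretWithDeadline(const std::vector<Instruction> &insts,
                             std::optional<std::chrono::microseconds> timeout) {
    // Compute an absolute deadline once, instead of maintaining an
    // instruction counter with a hardcoded limit.
    std::optional<Clock::time_point> deadline;
    if (timeout)
      deadline =
          Clock::now() + std::chrono::duration_cast<Clock::duration>(*timeout);

    for (const Instruction &inst : insts) {
      if (deadline && Clock::now() > *deadline)
        return false; // Timed out: stop and let the caller report an error.
      (void)inst;     // ... interpret the instruction here ...
    }
    return true;
  }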
The main motivation is to allow users on targets where we can't use the JIT to run more complicated expressions if they really want to (which they can do now by just increasing the timeout).
The time-based approach also seems much more meaningful than the arbitrary (and very low) instruction limit. 4096 instructions can be interpreted in a few microseconds on some setups but might take much longer if we have a slow connection to the target. I don't think any user actually cares about how many instructions are executed, only about how long they are willing to wait for a result.
One problem with allowing the user to change the IRInterpreter timeout is that there is currently no way to interrupt the interpreter. As a follow-up we should check whether we can nicely hook up the driver's SIGINT handler to the IRInterpreter's work loop, but until that is done one could easily get LLDB stuck in the interpreter by specifying no timeout and running an expression that loops forever. To prevent users from accidentally locking up their debugging session, this patch does not allow passing 'no timeout' to the IRInterpreter and instead falls back to a 30-second limit when no timeout was set. Users can still manually specify a longer timeout if they really need it.
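A minimal sketch of that fallback, assuming a std::chrono-based timeout (the function and constant names are illustrative, not the actual LLDB API):

  #include <chrono>
  #include <optional>

  std::chrono::microseconds
  GetInterpreterTimeout(std::optional<std::chrono::microseconds> user_timeout) {
    // "No timeout" falls back to a safe default so an infinite loop in the
    // expression cannot hang the debug session.
    constexpr auto kDefaultInterpreterTimeout = std::chrono::seconds(30);
    if (!user_timeout)
      return kDefaultInterpreterTimeout;
    return *user_timeout; // Users can still ask for a longer (finite) timeout.
  }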
Nit: should we check for timeout->count() > 0?
The Timeout docs don't really specify what a value < 0 means, but AFAICT nothing technically prevents one from being passed in.
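Something like the following (an illustrative helper, not existing LLDB code) would cover both the unset and the non-positive case:

  #include <chrono>
  #include <optional>

  // Treat a zero or negative timeout the same as an unset one, since the
  // docs don't define what a negative duration means.
  bool IsUsableTimeout(const std::optional<std::chrono::microseconds> &timeout) {
    return timeout && timeout->count() > 0;
  }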