Nearly all of our tab completion tests are powered by lldbtest's complete_from_to, which is expected to gather the tab completions LLDB provides and then check the result. While debugging a flaky failure in TestCompletion in b51321ccc894, I noticed that the semantics of complete_from_to are rather forgiving.
Just to recapitulate how LLDB completions work: LLDB's HandleCompletion function computes a list of strings that are all completions for the current command token (not the whole command line). The first string has a magic meaning as it's the common prefix of all provided completions that LLDB for some reason tries to compute for the client.
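The common-prefix convention can be sketched as follows (the values are made up for illustration, and `os.path.commonprefix` stands in for whatever LLDB computes internally; this is not real LLDB output):

```python
import os

# Completion candidates for the current command token
# (illustrative values only).
matches = ["breakpoint set", "breakpoint list"]

# The magic first element: the common prefix of all completions,
# which LLDB precomputes for the client. os.path.commonprefix does a
# plain character-wise comparison, so it works on any strings.
result = [os.path.commonprefix(matches)] + matches

assert result[0] == "breakpoint "   # the magic common-prefix element
assert result[1:] == matches        # the actual completion candidates
```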
complete_from_to has some interesting approaches to testing HandleCompletion:
- It runs HandleCompletion, looks at the output, and then decides based on the output whether the caller wants to check the common prefix (the first magic element) or the rest of the list. This led to the obscure bug in b51321ccc894. It seems really odd that a test function guesses from the result what the test most likely wants to check.
- The way one checks that a command has no completions is to call complete_from_to("cmd", "cmd"). This check is rather forgiving: it apparently can't fail (at least there are several checks that should fail but are magically passing in our test suite).
- It actually just does a substring search over the whole command line, including the user input, when checking results. One test didn't even specify a valid LLDB command but still passed because the function found the "completion" in the user input.
- It contains a bunch of code for regex-based result matching, but nobody actually uses it. It just means we carry a bunch of workaround code to escape regexes or turn the feature off.
- The error messages on failure are really not useful and look like this: 'process attach -p 1' does not match expected result, got 'process attach -p 1' (that's a real error message).
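The substring pitfall from the list above can be reproduced in isolation (hypothetical values, not the actual lldbtest code):

```python
# Hypothetical reconstruction of the old check: search for the
# "expected" completion anywhere in the full command line.
command_line = "process attach -p 1"   # includes the user's own input
expected = "process attach"
actual_completions = []                # LLDB returned no completions

# Old-style substring check: passes even though there were no
# completions, because the expected text occurs in the input itself.
old_check_passes = expected in command_line
assert old_check_passes

# Checking equality against the real completion list fails as it should.
new_check_passes = expected in actual_completions
assert not new_check_passes
```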
Given that the problem here is just matching a list of strings, I think we can really simplify this whole setup by just using normal asserts.
This patch replaces complete_from_to with three small test functions that just gather the completion strings and then use the normal assert* functions from Python's unittest. The test functions are for an exhaustive check of completions, a subset of completions (for tests where the possible list of completions can change), and an explicit check that there are no completions.
- This means that we now get the really helpful error messages from the assert* functions (which even include diffs of what strings/characters are different).
- I completely removed the regex testing as no one is actually using it at the moment.
- I removed the whole substring-searching approach; we now just use string equality to prevent bogus passes.
- Removed the requirement to specify the whole command that should be completed. The function also no longer searches the user input for 'expected' completions. Both just made most calls to complete_from_to really verbose and led to bogus passes.
- Removed my own test functions that I wrote for the expression completion test. Apparently I already ran into this issue back then.
- Added FIXMEs to the checks that were incorrectly passing before.
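A rough sketch of what the three replacement helpers could look like (the names, the get_completions stub, and the hardcoded data are all assumptions for illustration, not the actual patch):

```python
import unittest

class CompletionTestHelpers(unittest.TestCase):
    """Sketch of the three replacement helpers built on unittest's
    assert* functions, which produce helpful diffs on failure."""

    def get_completions(self, command):
        # Stand-in for invoking LLDB's HandleCompletion and dropping
        # the magic common-prefix element; hardcoded for this demo.
        fake = {"breakpoint ": ["breakpoint set", "breakpoint list"]}
        return fake.get(command, [])

    def assert_completions_equal(self, command, expected):
        # Exhaustive check: the completion list must match exactly.
        self.assertEqual(sorted(self.get_completions(command)),
                         sorted(expected))

    def assert_completions_contain(self, command, expected_subset):
        # Subset check, for completions that may change over time.
        completions = self.get_completions(command)
        for item in expected_subset:
            self.assertIn(item, completions)

    def assert_no_completions(self, command):
        # Explicit check that a command has no completions at all.
        self.assertEqual(self.get_completions(command), [])
```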