A number of test cases were failing on big-endian systems simply because of
byte-order assumptions in the tests themselves, not because of any
underlying bug in LLDB.
These two test cases:
  tools/lldb-server/lldbgdbserverutils.py
  python_api/process/TestProcessAPI.py
actually check for big-endian target byte order, but contain Python errors
in the corresponding code paths.
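For reference, a minimal sketch of the kind of byte-order check these code
paths perform against the SB API (the helper name is made up for
illustration):

  import lldb

  def target_is_big_endian(target):
      # SBTarget.GetByteOrder() reports the target's byte order; compare it
      # against lldb.eByteOrderBig (little-endian targets report
      # lldb.eByteOrderLittle).
      return target.GetByteOrder() == lldb.eByteOrderBig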
These test cases:
  functionalities/data-formatter/data-formatter-python-synth/TestDataFormatterPythonSynth.py
  functionalities/data-formatter/data-formatter-smart-array/TestDataFormatterSmartArray.py
  functionalities/data-formatter/synthcapping/TestSyntheticCapping.py
  lang/cpp/frame-var-anon-unions/TestFrameVariableAnonymousUnions.py
  python_api/sbdata/TestSBData.py (first change)
could be fixed to check for big-endian target byte order and update the
expected result strings accordingly. For the two synthetic tests, I've
also updated the source to make sure the fake_a value is always nonzero
on both big- and little-endian platforms.
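The pattern looks roughly like the following lldbsuite-style sketch; the
class name, command, and expected strings are invented for the example:

  import lldb
  from lldbsuite.test.lldbtest import TestBase

  class EndianAwareFormatterCase(TestBase):
      # Hypothetical test case; the real tests apply the same branching with
      # their own commands and expected formatter output.
      mydir = TestBase.compute_mydir(__file__)

      def check_formatted_output(self):
          target = self.dbg.GetSelectedTarget()
          if target.GetByteOrder() == lldb.eByteOrderBig:
              substrs = ['fake_a = 3']  # made-up big-endian rendering
          else:
              substrs = ['fake_a = 5']  # made-up little-endian rendering
          self.expect('frame variable', substrs=substrs)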
These test cases:
  python_api/sbdata/TestSBData.py (second change)
  functionalities/memory/cache/TestMemoryCache.py
simply accessed memory with the wrong size, which goes unnoticed on
little-endian systems but fails on big-endian ones.
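A rough sketch of the fix, reading a value with the variable's actual byte
size instead of a hard-coded one (the helper is hypothetical and assumes an
integer-sized variable):

  import lldb

  def read_scalar_from_memory(process, frame, name):
      # Reading too many bytes happens to give the right answer on
      # little-endian targets, because the low-order bytes come first in
      # memory, but yields a shifted value on big-endian targets.
      var = frame.FindVariable(name)
      error = lldb.SBError()
      value = process.ReadUnsignedFromMemory(var.GetLoadAddress(),
                                             var.GetByteSize(),
                                             error)
      if not error.Success():
          raise RuntimeError(error.GetCString())
      return value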
Zero seems like a bad alternate value, as it could cover up a real failure.
Can we improve this test so that we are testing for actual values? Or can we
vary the test by endianness and check for 16777216 on little-endian and some
other nonzero value on big-endian? We don't want zero to be a valid result
for the little-endian tests.
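One possible shape for that suggestion, with the little-endian value taken
from the comment above and the big-endian value left as a nonzero
placeholder that would have to be derived from the bytes the test actually
writes:

  import lldb

  def expected_value_for_byte_order(target):
      # Expect a concrete, nonzero value for each byte order so that a
      # bogus zero read can never make the test pass.
      if target.GetByteOrder() == lldb.eByteOrderLittle:
          return 16777216  # 0x01000000, the value suggested for little-endian
      return 1             # placeholder nonzero big-endian expectation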