Index: docs/CMake.rst
===================================================================
--- docs/CMake.rst
+++ docs/CMake.rst
@@ -602,8 +602,8 @@
 For more information about some of the advanced build configurations
 supported via Cache files see :doc:`AdvancedBuilds`.
 
-Executing the test suite
-========================
+Executing the tests
+===================
 
 Testing is performed when the *check-all* target is built. For instance, if
 you are using Makefiles, execute this command in the root of your build
 directory:
Index: docs/SourceLevelDebugging.rst
===================================================================
--- docs/SourceLevelDebugging.rst
+++ docs/SourceLevelDebugging.rst
@@ -115,8 +115,8 @@
 and call functions which were optimized out of the program, or inlined
 away completely.
 
-The :ref:`LLVM test suite ` provides a framework to test
-optimizer's handling of debugging information. It can be run like this:
+The :doc:`LLVM test suite <TestSuiteGuide>` provides a framework to test the
+optimizer's handling of debugging information. It can be run like this:
 
 .. code-block:: bash
Index: docs/TestSuiteGuide.rst
===================================================================
--- /dev/null
+++ docs/TestSuiteGuide.rst
@@ -0,0 +1,407 @@
+================
+test-suite Guide
+================
+
+.. contents::
+   :local:
+
+
+Quickstart
+==========
+
+To run the test suite, you need the following:
+
+#. The lit test runner is required to run the tests. We recommend installing
+   it via pip from an LLVM checkout:
+
+   .. code-block:: bash
+
+       % svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
+       % sudo pip install llvm/utils/lit
+       # You can also install the tool manually if lit is not available:
+       # cd llvm/utils/lit ; sudo python setup.py install
+
+       % lit --version
+       lit 0.5.0dev
+
+#. Check out the ``test-suite`` module with:
+
+   .. code-block:: bash
+
+       % svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
+
+#. Create a build directory and use CMake to configure the suite. The
+   ``CMAKE_C_COMPILER`` option can be used to test a custom clang build.
+   The cache files provide typical build configurations:
+
+   .. code-block:: bash
+
+       % mkdir test-suite-build
+       % cd test-suite-build
+       % cmake -DCMAKE_C_COMPILER=/path/to/clang \
+               -C../test-suite/cmake/caches/O3.cmake \
+               ../test-suite
+
+#. Build the benchmarks:
+
+   .. code-block:: bash
+
+       % make
+       Scanning dependencies of target timeit-target
+       [ 0%] Building C object tools/CMakeFiles/timeit-target.dir/timeit.c.o
+       [ 0%] Linking C executable timeit-target
+       [ 0%] Built target timeit-target
+       Scanning dependencies of target fpcmp-host
+       [ 0%] [TEST_SUITE_HOST_CC] Building host executable fpcmp
+       [ 0%] Built target fpcmp-host
+       Scanning dependencies of target timeit-host
+       [ 0%] [TEST_SUITE_HOST_CC] Building host executable timeit
+       [ 0%] Built target timeit-host
+
+#. Run the tests with lit:
+
+   .. code-block:: bash
+
+       % lit -v -j 1 -o results.json .
+       -- Testing: 474 tests, 1 threads --
+       PASS: test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test (1 of 474)
+       ********** TEST 'test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test' RESULTS **********
+       compile_time: 0.2192
+       exec_time: 0.0462
+       hash: "59620e187c6ac38b36382685ccd2b63b"
+       size: 83348
+       **********
+       PASS: test-suite :: MultiSource/Applications/ALAC/encode/alacconvert-encode.test (2 of 474)
+
+#. Show and compare result files (optional):
+
+   .. code-block:: bash
+
+       % sudo pip install pandas
+       # Show a single result file:
+       % test-suite/utils/compare.py results.json
+       # Compare two result files:
+       % test-suite/utils/compare.py results_a.json results_b.json
+
+
+Structure
+=========
+
+The ``test-suite`` module contains a number of programs that can be compiled
+and executed. The programs come with reference outputs so that their
+correctness can be checked. The suite also comes with tools to measure aspects
+like benchmark runtime, compilation time, or code size.
+
+The suite is divided into several directories:
+
+- ``SingleSource/``
+
+  Contains test programs that are only a single source file in size. A
+  subdirectory may contain several such programs.
+
+- ``MultiSource/``
+
+  Contains subdirectories which contain entire programs with multiple source
+  files. Large benchmarks and whole applications go here.
+
+- ``MicroBenchmarks/``
+
+  Programs using the `google-benchmark <https://github.com/google/benchmark>`_
+  library. The programs define functions that are run multiple times until the
+  measurement results are statistically significant.
+
+- ``External/``
+
+  Contains descriptions and test data for code that cannot be directly
+  distributed with the test-suite. The most prominent members of this
+  directory are the SPEC CPU benchmark suites. The user can supply the sources
+  via other means.
+
+- ``Bitcode/``
+
+  These tests are mostly written in LLVM bitcode.
+
+- ``CTMark/``
+
+  Contains symbolic links to other benchmarks forming a representative sample
+  for compilation performance measurements.
+
+Benchmarks
+----------
+
+Every program can work as a correctness test. However, some of the programs
+are unsuitable for performance measurements. Enabling the
+``TEST_SUITE_BENCHMARKING_ONLY`` option disables them.
+
+
+Configuration
+=============
+
+The test-suite comes with a number of configuration options to customize
+building and running the benchmarks. CMake can print a list of available
+options:
+
+.. code-block:: bash
+
+   % cd test-suite-build
+   # Print basic options:
+   % cmake -LH
+   # Print all options:
+   % cmake -LAH
+
+Common Configuration Options
+----------------------------
+
+- ``CMAKE_C_FLAGS``
+
+  Specify extra flags to be passed to C compiler invocations. Currently the
+  test-suite passes these flags to the C++ compiler and linker invocations
+  too. See `<https://cmake.org/cmake/help/latest/variable/CMAKE_C_FLAGS.html>`_
+
+- ``CMAKE_C_COMPILER``
+
+  Select the C compiler executable to be used. Note that the C++ compiler is
+  inferred automatically, i.e. when specifying ``path/to/clang`` CMake will
+  automatically use ``path/to/clang++`` as the C++ compiler. See
+  `<https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html>`_
+
+- ``CMAKE_BUILD_TYPE``
+
+  Select a build type like ``OPTIMIZE`` or ``DEBUG``, which selects a set of
+  predefined compiler flags. These flags are applied regardless of the
+  ``CMAKE_C_FLAGS`` option and may be changed by modifying
+  ``CMAKE_C_FLAGS_OPTIMIZE`` etc. See
+  `<https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html>`_
+
+- ``TEST_SUITE_RUN_UNDER``
+
+  Prefix test invocations with the given tool. This is typically used to run
+  cross-compiled tests within a simulator tool.
+
+- ``TEST_SUITE_BENCHMARKING_ONLY``
+
+  Disable tests that are unsuitable for performance measurements. The disabled
+  tests either run for a very short time or are dominated by I/O performance,
+  making them unsuitable as compiler performance tests.
+
+- ``TEST_SUITE_SUBDIRS``
+
+  Semicolon-separated list of directories to include. This can be used to only
+  build parts of the test-suite or to include external suites.
+
+  Note that using this option does not work reliably with deeper
+  subdirectories, as it skips intermediate ``CMakeLists.txt`` files which may
+  be required.
+
+- ``TEST_SUITE_COLLECT_STATS``
+
+  Collect internal LLVM statistics: appends ``-save-stats=obj`` to the
+  compiler command lines and makes the lit runner collect and merge the
+  statistics files.
+
+- ``TEST_SUITE_RUN_BENCHMARKS``
+
+  If this is set to ``OFF``, lit will not actually run the tests but just
+  collect build statistics like compile time and code size.
+
+- ``TEST_SUITE_USE_PERF``
+
+  Use the ``perf`` tool for time measurement instead of the ``timeit`` tool
+  that comes with the suite. ``perf`` is usually available on Linux systems.
+
+- ``TEST_SUITE_SPEC2000_ROOT``, ``TEST_SUITE_SPEC2006_ROOT``, ``TEST_SUITE_SPEC2017_ROOT``, ...
+
+  Specify installation directories of external benchmark suites. You can find
+  more information about expected versions or usage in the README files in the
+  ``External`` directory (such as ``External/SPEC/README``).
+
+Common CMake Flags
+------------------
+
+- ``-GNinja``
+
+  Generate build files for the ninja build tool.
+
+- ``-C test-suite/cmake/caches/cachefile.cmake``
+
+  Use a CMake cache. The test-suite comes with several cache files that group
+  a set of configuration options to provide a convenient way to enable common
+  or tricky build configurations.
+
+
+Displaying and Analyzing Results
+================================
+
+The test-suite comes with a script, ``compare.py``, to display and compare
+result files. Invoke ``lit`` with the ``-o filename.json`` flag to produce a
+result file. Example usage:
+
+- Basic usage:
+
+  .. code-block:: bash
+
+      % test-suite/utils/compare.py baseline.json
+      Warning: 'test-suite :: External/SPEC/CINT2006/403.gcc/403.gcc.test' has No metrics!
+      Tests: 508
+      Metric: exec_time
+
+      Program                           baseline
+
+      INT2006/456.hmmer/456.hmmer       1222.90
+      INT2006/464.h264ref/464.h264ref    928.70
+      ...
+             baseline
+      count  506.000000
+      mean   20.563098
+      std    111.423325
+      min    0.003400
+      25%    0.011200
+      50%    0.339450
+      75%    4.067200
+      max    1222.896800
+
+- Show compile_time or text segment size metrics:
+
+  .. code-block:: bash
+
+      % test-suite/utils/compare.py -m compile_time baseline.json
+      % test-suite/utils/compare.py -m size.__text baseline.json
+
+- Compare two result files and filter short-running tests:
+
+  .. code-block:: bash
+
+      % test-suite/utils/compare.py --filter-short baseline.json experiment.json
+      ...
+      Program                                      baseline  experiment  diff
+
+      SingleSour.../Benchmarks/Linpack/linpack-pc  5.16      4.30        -16.5%
+      MultiSourc...erolling-dbl/LoopRerolling-dbl  7.01      7.86         12.2%
+      SingleSour...UnitTests/Vectorizer/gcc-loops  3.89      3.54         -9.0%
+      ...
+
+- Merge multiple baseline and experiment result files by taking the minimum
+  runtime for each test:
+
+  .. code-block:: bash
+
+      % test-suite/utils/compare.py base0.json base1.json base2.json vs exp0.json exp1.json exp2.json
+
+
+External Suites
+===============
+
+External suites such as SPEC can be enabled by either:
+
+- Placing (or linking) them into the ``test-suite/test-suite-externals/xxx``
+  directory (example: ``test-suite/test-suite-externals/speccpu2000``), or
+- Using a configuration option such as
+  ``-D TEST_SUITE_SPEC2000_ROOT=path/to/speccpu2000``.
+
+You can find further information, such as the expected versions, in the
+respective README files, for example ``test-suite/External/SPEC/README``.
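+
+As a minimal sketch of the two approaches above for SPEC CPU2000 (the
+``/path/to/speccpu2000`` installation path is only a placeholder for wherever
+your licensed SPEC installation lives; both commands are run next to your
+``test-suite`` checkout and build directory):
+
+.. code-block:: bash
+
+   # Either link the installation into the externals directory ...
+   % ln -s /path/to/speccpu2000 test-suite/test-suite-externals/speccpu2000
+   # ... or point the corresponding configuration option at it:
+   % cmake -D TEST_SUITE_SPEC2000_ROOT=/path/to/speccpu2000 ../test-suite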
+
+For the SPEC benchmarks you can switch between the ``test``, ``train`` and
+``ref`` input datasets via the ``TEST_SUITE_RUN_TYPE`` configuration option.
+The ``train`` dataset is used by default.
+
+
+Custom Suites
+=============
+
+You can build custom suites using the test-suite infrastructure. They must
+have a ``CMakeLists.txt`` file in their top-level directory and will be picked
+up automatically if placed into a subdirectory of the test-suite or when
+setting the ``TEST_SUITE_SUBDIRS`` variable:
+
+  .. code-block:: bash
+
+      % cmake -DTEST_SUITE_SUBDIRS=path/to/my/benchmark-suite ../test-suite
+
+
+Profile Guided Optimization
+===========================
+
+Profile-guided optimization requires compiling and running the suite twice:
+first, the benchmarks are compiled with profile-generation instrumentation
+enabled and run on training data. After that run, the ``lit`` runner merges
+the profile files using ``llvm-profdata`` so they can be used by the second
+compilation run. Example:
+
+  .. code-block:: bash
+
+      # Profile generation run:
+      % cmake -DTEST_SUITE_PROFILE_GENERATE=ON \
+              -DTEST_SUITE_RUN_TYPE=train \
+              ../test-suite
+      % make
+      % lit .
+      # Use the profile data for compilation and the actual benchmark run:
+      % cmake -DTEST_SUITE_PROFILE_GENERATE=OFF \
+              -DTEST_SUITE_PROFILE_USE=ON \
+              -DTEST_SUITE_RUN_TYPE=ref \
+              .
+      % make
+      % lit -o result.json .
+
+Note: The ``TEST_SUITE_RUN_TYPE`` setting only affects the SPEC benchmark
+suite.
+
+
+Cross Compilation and External Devices
+======================================
+
+Compilation
+-----------
+
+CMake supports cross-compilation to a different target via toolchain files.
+More information can be found here:
+
+- ``_
+
+- ``_
+
+Cross-compilation from macOS to iOS is possible with the
+``test-suite/cmake/caches/target-*-iphoneos-internal.cmake`` CMake cache
+files; however, this requires an internal iOS SDK.
+
+Running
+-------
+
+There are currently two ways to run the tests in a cross-compilation setting:
+
+- You can use an ssh connection to the external device: the
+  ``TEST_SUITE_REMOTE_HOST`` option should be set to the ssh host name. After
+  compilation the executables and data files need to be transferred to the
+  device; this is typically done via the ``rsync`` make target. The lit runner
+  can then be used on the host machine and will prefix the benchmark and
+  verification command lines with an ``ssh`` command. Example:
+
+  .. code-block:: bash
+
+      % cmake -G Ninja -D CMAKE_C_COMPILER=path/to/clang \
+              -C ../test-suite/cmake/caches/target-arm64-iphoneos-internal.cmake \
+              -D TEST_SUITE_REMOTE_HOST=mydevice \
+              ../test-suite
+      % ninja
+      % ninja rsync
+      % lit -j1 -o result.json .
+
+- You can specify a simulator for the target machine with the
+  ``TEST_SUITE_RUN_UNDER`` setting. The ``lit`` runner will prefix all
+  benchmark invocations with it.
+
+
+Running the test suite via LNT
+==============================
+
+The LNT tool comes with a mode to run the test-suite. This should be used when
+submitting test results to an LNT database.
+
+See ``_ for details.
+
+Running the test suite via Makefiles (deprecated)
+=================================================
+
+.. NOTE::
+   The test-suite comes with a set of Makefiles that are considered
+   deprecated. They do not support newer testing modes like ``Bitcode`` or
+   ``MicroBenchmarks`` and are harder to use.
+
+The old documentation can be found in the :doc:`TestSuiteMakefileGuide`.
Index: docs/TestSuiteMakefileGuide.rst
===================================================================
--- docs/TestSuiteMakefileGuide.rst
+++ docs/TestSuiteMakefileGuide.rst
@@ -1,161 +1,13 @@
-=====================
-LLVM test-suite Guide
-=====================
+======================================
+test-suite Makefile Guide (deprecated)
+======================================
 
 .. contents::
-   :local:
+  :local:
 
 Overview
 ========
 
-This document describes the features of the Makefile-based LLVM
-test-suite as well as the cmake based replacement. This way of interacting
-with the test-suite is deprecated in favor of running the test-suite using LNT,
-but may continue to prove useful for some users. See the Testing
-Guide's :ref:`test-suite Quickstart ` section for more
-information.
-
-Test suite Structure
-====================
-
-The ``test-suite`` module contains a number of programs that can be
-compiled with LLVM and executed. These programs are compiled using the
-native compiler and various LLVM backends. The output from the program
-compiled with the native compiler is assumed correct; the results from
-the other programs are compared to the native program output and pass if
-they match.
-
-When executing tests, it is usually a good idea to start out with a
-subset of the available tests or programs. This makes test run times
-smaller at first and later on this is useful to investigate individual
-test failures. To run some test only on a subset of programs, simply
-change directory to the programs you want tested and run ``gmake``
-there. Alternatively, you can run a different test using the ``TEST``
-variable to change what tests or run on the selected programs (see below
-for more info).
-
-In addition for testing correctness, the ``test-suite`` directory also
-performs timing tests of various LLVM optimizations. It also records
-compilation times for the compilers and the JIT. This information can be
-used to compare the effectiveness of LLVM's optimizations and code
-generation.
-
-``test-suite`` tests are divided into three types of tests: MultiSource,
-SingleSource, and External.
-
-- ``test-suite/SingleSource``
-
-  The SingleSource directory contains test programs that are only a
-  single source file in size. These are usually small benchmark
-  programs or small programs that calculate a particular value. Several
-  such programs are grouped together in each directory.
-
-- ``test-suite/MultiSource``
-
-  The MultiSource directory contains subdirectories which contain
-  entire programs with multiple source files. Large benchmarks and
-  whole applications go here.
-
-- ``test-suite/External``
-
-  The External directory contains Makefiles for building code that is
-  external to (i.e., not distributed with) LLVM. The most prominent
-  members of this directory are the SPEC 95 and SPEC 2000 benchmark
-  suites. The ``External`` directory does not contain these actual
-  tests, but only the Makefiles that know how to properly compile these
-  programs from somewhere else. The presence and location of these
-  external programs is configured by the test-suite ``configure``
-  script.
-
-Each tree is then subdivided into several categories, including
-applications, benchmarks, regression tests, code that is strange
-grammatically, etc. These organizations should be relatively self
-explanatory.
-
-Some tests are known to fail. Some are bugs that we have not fixed yet;
-others are features that we haven't added yet (or may never add). In the
-regression tests, the result for such tests will be XFAIL (eXpected
-FAILure). In this way, you can tell the difference between an expected
-and unexpected failure.
-
-The tests in the test suite have no such feature at this time. If the
-test passes, only warnings and other miscellaneous output will be
-generated. If a test fails, a large FAILED message will be
-displayed. This will help you separate benign warnings from actual test
-failures.
-
-Running the test suite via CMake
-================================
-
-To run the test suite, you need to use the following steps:
-
-#. The test suite uses the lit test runner to run the test-suite,
-   you need to have lit installed first. Check out LLVM and install lit:
-
-   .. code-block:: bash
-
-     % svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
-     % cd llvm/utils/lit
-     % sudo python setup.py install # Or without sudo, install in virtual-env.
-       running install
-       running bdist_egg
-       running egg_info
-       writing lit.egg-info/PKG-INFO
-       ...
-     % lit --version
-     lit 0.5.0dev
-
-#. Check out the ``test-suite`` module with:
-
-   .. code-block:: bash
-
-     % svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
-
-#. Use CMake to configure the test suite in a new directory. You cannot build
-   the test suite in the source tree.
-
-   .. code-block:: bash
-
-     % mkdir test-suite-build
-     % cd test-suite-build
-     % cmake ../test-suite
-
-#. Build the benchmarks, using the makefiles CMake generated.
-
-.. code-block:: bash
-
-    % make
-    Scanning dependencies of target timeit-target
-    [ 0%] Building C object tools/CMakeFiles/timeit-target.dir/timeit.c.o
-    [ 0%] Linking C executable timeit-target
-    [ 0%] Built target timeit-target
-    Scanning dependencies of target fpcmp-host
-    [ 0%] [TEST_SUITE_HOST_CC] Building host executable fpcmp
-    [ 0%] Built target fpcmp-host
-    Scanning dependencies of target timeit-host
-    [ 0%] [TEST_SUITE_HOST_CC] Building host executable timeit
-    [ 0%] Built target timeit-host
-
-
-#. Run the tests with lit:
-
-.. code-block:: bash
-
-    % lit -v -j 1 . -o results.json
-    -- Testing: 474 tests, 1 threads --
-    PASS: test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test (1 of 474)
-    ********** TEST 'test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test' RESULTS **********
-    compile_time: 0.2192
-    exec_time: 0.0462
-    hash: "59620e187c6ac38b36382685ccd2b63b"
-    size: 83348
-    **********
-    PASS: test-suite :: MultiSource/Applications/ALAC/encode/alacconvert-encode.test (2 of 474)
-
-
-Running the test suite via Makefiles (deprecated)
-=================================================
-
 First, all tests are executed within the LLVM object directory tree.
 They *are not* executed inside of the LLVM source tree. This is because
 the test suite creates temporary files during execution.
@@ -208,7 +60,7 @@
 again (unless the test code or configure script changes).
 
 Configuring External Tests
---------------------------
+==========================
 
 In order to run the External tests in the ``test-suite`` module, you
 must specify *--with-externals*. This must be done during the
@@ -238,7 +90,7 @@
 ``configure``.
 
 Running different tests
------------------------
+=======================
 
 In addition to the regular "whole program" tests, the ``test-suite``
 module also provides a mechanism for compiling the programs in different
@@ -258,7 +110,7 @@
 that you develop with LLVM.
 
 Generating test output
-----------------------
+======================
 
 There are a number of ways to run the tests and generate output. The
 most simple one is simply running ``gmake`` with no arguments. This will
@@ -284,7 +136,7 @@
 test run.
 
 Writing custom tests for the test suite
----------------------------------------
+=======================================
 
 Assuming you can run the test suite, (e.g. "``gmake TEST=nightly report``"
 should work), it is really easy to run
Index: docs/TestingGuide.rst
===================================================================
--- docs/TestingGuide.rst
+++ docs/TestingGuide.rst
@@ -8,6 +8,7 @@
 .. toctree::
    :hidden:
 
+   TestSuiteGuide
    TestSuiteMakefileGuide
 
 Overview
@@ -25,10 +26,6 @@
 software required to build LLVM, as well as `Python `_ 2.7 or
 later.
 
-If you intend to run the :ref:`test-suite `, you will also
-need a development version of zlib (zlib1g-dev is known to work on several Linux
-distributions).
-
 LLVM testing infrastructure organization
 ========================================
 
@@ -77,6 +74,8 @@
 
 The test-suite is located in the ``test-suite`` Subversion module.
 
+See the :doc:`TestSuiteGuide` for details.
+
 Debugging Information tests
 ---------------------------
 
@@ -96,9 +95,8 @@
 ``llvm/test`` (so you get these tests for free with the main LLVM tree).
 Use ``make check-all`` to run the regression tests after building LLVM.
 
-The more comprehensive test suite that includes whole programs in C and C++
-is in the ``test-suite`` module. See :ref:`test-suite Quickstart
-` for more information on running these tests.
+The more comprehensive test suite that includes whole programs in C and C++ is
+in the ``test-suite`` module. See :doc:`TestSuiteGuide` for details.
 
 Regression tests
 ----------------
@@ -585,65 +583,3 @@
 (b) it speeds things up for really big test cases by avoiding
     interpretation of the remainder of the file.
-
-.. _test-suite-overview:
-
-``test-suite`` Overview
-=======================
-
-The ``test-suite`` module contains a number of programs that can be
-compiled and executed. The ``test-suite`` includes reference outputs for
-all of the programs, so that the output of the executed program can be
-checked for correctness.
-
-``test-suite`` tests are divided into three types of tests: MultiSource,
-SingleSource, and External.
-
-- ``test-suite/SingleSource``
-
-  The SingleSource directory contains test programs that are only a
-  single source file in size. These are usually small benchmark
-  programs or small programs that calculate a particular value. Several
-  such programs are grouped together in each directory.
-
-- ``test-suite/MultiSource``
-
-  The MultiSource directory contains subdirectories which contain
-  entire programs with multiple source files. Large benchmarks and
-  whole applications go here.
-
-- ``test-suite/External``
-
-  The External directory contains Makefiles for building code that is
-  external to (i.e., not distributed with) LLVM. The most prominent
-  members of this directory are the SPEC 95 and SPEC 2000 benchmark
-  suites. The ``External`` directory does not contain these actual
-  tests, but only the Makefiles that know how to properly compile these
-  programs from somewhere else. When using ``LNT``, use the
-  ``--test-externals`` option to include these tests in the results.
-
-.. _test-suite-quickstart:
-
-``test-suite`` Quickstart
--------------------------
-
-The modern way of running the ``test-suite`` is focused on testing and
-benchmarking complete compilers using the
-`LNT `_ testing infrastructure.
-
-For more information on using LNT to execute the ``test-suite``, please
-see the `LNT Quickstart `_
-documentation.
-
-``test-suite`` Makefiles
-------------------------
-
-Historically, the ``test-suite`` was executed using a complicated setup
-of Makefiles. The LNT based approach above is recommended for most
-users, but there are some testing scenarios which are not supported by
-the LNT approach. In addition, LNT currently uses the Makefile setup
-under the covers and so developers who are interested in how LNT works
-under the hood may want to understand the Makefile based setup.
-
-For more information on the ``test-suite`` Makefile setup, please see
-the :doc:`Test Suite Makefile Guide `.
Index: docs/index.rst
===================================================================
--- docs/index.rst
+++ docs/index.rst
@@ -145,6 +145,9 @@
 :doc:`LLVM Testing Infrastructure Guide `
    A reference manual for using the LLVM testing infrastructure.
 
+:doc:`TestSuiteGuide`
+   Describes how to compile and run the test-suite benchmarks.
+
 `How to build the C, C++, ObjC, and ObjC++ front end`__
    Instructions for building the clang front-end from source.