Hello,
Based on the suggestion from @MatzeB, I've added logic allowing a Result object to contain nested Result objects for reporting microbenchmarks. I've also changed the JSON output to report these micro-tests individually, so tests can report separate results when desired.
Do we need to add a way to store the unit of measurement? The exec_time values shown below are mean cpu_time in usec.
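For reference, here is a minimal sketch of the nesting idea in Python. Note this is illustrative only: the class shape and names like addMicroResult are assumptions for this comment, not lit's actual API.

```python
class Result:
    """A test result that can carry nested micro-test results.

    Illustrative sketch only; addMetric/addMicroResult and the attribute
    names are assumptions, not necessarily lit's real interface.
    """

    def __init__(self, code, output='', elapsed=None):
        self.code = code          # e.g. 'PASS' or 'FAIL'
        self.output = output
        self.elapsed = elapsed
        self.metrics = {}         # metric name -> value
        self.microResults = {}    # micro-test name -> nested Result

    def addMetric(self, name, value):
        self.metrics[name] = value

    def addMicroResult(self, name, result):
        self.microResults[name] = result


# Build a result resembling the retref-bench output below.
top = Result('PASS')
top.addMetric('compile_time', 0.8284)

micro = Result('PASS')
micro.addMetric('exec_time', 14.1476)
micro.addMetric('std_dev', 0.0003)
top.addMicroResult('BM_RDTSCP_Cost', micro)
```

Each nested Result would then be emitted as its own entry when the JSON output is written.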
I've altered the microbenchmark in the test-suite for testing, and we get the following outputs:
lit command-line output:
-- Testing: 1 tests, 1 threads --
PASS: test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.test (1 of 1)
********** TEST 'test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.test' RESULTS **********
compile_time: 0.8284
hash: "daec0d37414da26fdae56f0d9bfa2cc0"
iterations: 10
link_time: 0.0587
**********
*** MICRO-TEST 'BM_RDTSCP_Cost' RESULTS ***
exec_time: 14.1476
std_dev: 0.0003
*** MICRO-TEST 'BM_ReturnInstrumentedPatched' RESULTS ***
exec_time: 12.3221
std_dev: 0.0002
*** MICRO-TEST 'BM_ReturnInstrumentedPatchedThenUnpatched' RESULTS ***
exec_time: 2.2820
std_dev: 0.0001
*** MICRO-TEST 'BM_ReturnInstrumentedPatchedWithLogHandler' RESULTS ***
exec_time: 42.8986
std_dev: 0.0008
*** MICRO-TEST 'BM_ReturnInstrumentedUnPatched' RESULTS ***
exec_time: 3.4562
std_dev: 0.0050
*** MICRO-TEST 'BM_ReturnNeverInstrumented' RESULTS ***
exec_time: 1.8256
std_dev: 0.0001
**********
Testing Time: 43.79s
Expected Passes : 1

lit JSON output:
{
"__version__": [
0,
6,
0
],
"elapsed": 43.78972411155701,
"tests": [
{
"code": "PASS",
"elapsed": null,
"metrics": {
"exec_time": 3.45623,
"std_dev": 0.00501622
},
"name": "test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.BM_ReturnInstrumentedUnPatched.test",
"output": ""
},
{
"code": "PASS",
"elapsed": null,
"metrics": {
"exec_time": 2.28196,
"std_dev": 6.62184e-05
},
"name": "test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.BM_ReturnInstrumentedPatchedThenUnpatched.test",
"output": ""
},
{
"code": "PASS",
"elapsed": null,
"metrics": {
"exec_time": 42.8986,
"std_dev": 0.000839534
},
"name": "test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.BM_ReturnInstrumentedPatchedWithLogHandler.test",
"output": ""
},
{
"code": "PASS",
"elapsed": null,
"metrics": {
"exec_time": 14.1476,
"std_dev": 0.000291057
},
"name": "test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.BM_RDTSCP_Cost.test",
"output": ""
},
{
"code": "PASS",
"elapsed": null,
"metrics": {
"exec_time": 1.82558,
"std_dev": 5.89162e-05
},
"name": "test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.BM_ReturnNeverInstrumented.test",
"output": ""
},
{
"code": "PASS",
"elapsed": null,
"metrics": {
"exec_time": 12.3221,
"std_dev": 0.000165266
},
"name": "test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.BM_ReturnInstrumentedPatched.test",
"output": ""
},
{
"code": "PASS",
"elapsed": 43.70928406715393,
"metrics": {
"compile_time": 0.8284,
"hash": "daec0d37414da26fdae56f0d9bfa2cc0",
"iterations": 10,
"link_time": 0.0587
},
"name": "test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.test",
"output": "\n/home/bhomerding/build/test-suite-micro2/MicroBenchmarks/XRay/ReturnReference/retref-bench --benchmark_repetitions=10 --benchmark_format=csv --benchmark_report_aggregates_only=true > /home/bhomerding/build/test-suite-micro2/MicroBenchmarks/XRay/ReturnReference/Output/retref-bench.test.bench.csv"
}
]
}
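Since each micro-test now appears as its own entry in the "tests" array, downstream tooling can separate them from the parent test by name. A quick sketch of how a consumer might do that; the '<parent>.<BM_name>.test' naming convention is inferred from the output above and the heuristic is my assumption, not part of lit itself:

```python
import json

def split_results(json_text):
    """Split lit JSON entries into (top-level tests, micro-tests).

    Micro-tests are identified by the '<parent>.<BM_name>.test' naming
    seen in the output above; this heuristic is an assumption, not
    something lit guarantees.
    """
    data = json.loads(json_text)
    top, micro = [], []
    for entry in data['tests']:
        # Names like '...retref-bench.BM_Foo.test' embed the benchmark name.
        stem = entry['name'].rsplit('.test', 1)[0]
        (micro if '.BM_' in stem else top).append(entry)
    return top, micro
```

This keeps tools that only care about the aggregate result working unchanged, while letting benchmark trackers pick up the per-micro-test metrics.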