xtperf

Extended toolkit for benchmarking time, CPU, and memory usage

This project is maintained by njase

Usage details

Let us assume that there is an application called MyApp that you want to benchmark in some particular scenario.

A typical benchmarking workflow would be:

  1. Pre-setup. To ensure reliable benchmark results, a few steps must be performed manually. Refer to your respective HW guide or OS guide for how to disable the following features:
    1. Disable BIOS-based performance-boosting technologies
      1. CPU throttling/frequency scaling techniques: e.g. Intel SpeedStep and AMD Cool’n’Quiet
      2. CPU thermal-based performance-boosting techniques: e.g. Intel TurboBoost and AMD TurboCore
    2. Disable Hyper-Threading on Intel CPUs
  2. Get xtperf. Download xtperf and install it as:
     $> python setup.py install
    

    This installs xtperf under the module name “perf”.

    • Install or update any needed dependencies such as matplotlib, statistics, and psutil.
  3. Create a benchmarking test scenario. Create a test.py (or any other name) and write a small piece of test code that triggers the “particular scenario” of MyApp.

    For useful analysis, benchmarking should always be performed for a controlled scenario.

    See test.py for a simple example, and ilastikbench for a real-world example of such a test.

    This test program is used as the application process from which process-specific benchmark results are collected; a minimal sketch is shown after this list.

  4. Run benchmarking
       $> python ibench.py --traceextstats 1 --loops=1 --values=1 -p 5 -o output.json
    

    Set -p to the number of times the benchmark should be performed and -o to the desired output file name. The remaining options are recommended as shown. For more details, type:

       $> python -m perf help
    
  5. Analyze the output or save it for offline analysis
    1. Graphical analysis of a single output with both system (-s) and process (-n) stats:
      $> python -m perf plot -sn <mybenchmark.json>
      
    2. Graphical comparative analysis of two benchmark results with both system (-s) and process (-n) stats:
      $> python -m perf plot -sn <mybenchmark1.json> <mybenchmark2.json>
      
    3. See the results directly on the command line:
      $> python -m perf stats -x <mybenchmark.json>
      $> python -m perf dump -d <mybenchmark.json>
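
A minimal sketch of such a test.py follows, assuming xtperf keeps the upstream perf module’s Runner.bench_func API; my_scenario and its sorted workload are placeholders, not part of xtperf. See test.py in the repository for the actual shipped example.

  # test.py - hypothetical skeleton of a benchmarking test scenario.
  # Assumes xtperf keeps the upstream perf module's Runner.bench_func API.
  import perf

  def my_scenario():
      # Placeholder workload: replace with the code that triggers the
      # "particular scenario" of MyApp you want to benchmark.
      return sorted(range(100000), reverse=True)

  runner = perf.Runner()
  runner.bench_func('MyApp-scenario', my_scenario)

With the upstream perf module, the Runner itself parses options such as --loops, --values, -p and -o, which is consistent with the flags shown in step 4.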
      

Output

xtperf stores its results in an output file in JSON format.

The results can be analyzed offline on the command line using the steps mentioned in the previous section. They are displayed for each worker process and for every periodic iteration within each worker.
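
Since the output is plain JSON, it can also be inspected programmatically. Below is a minimal sketch, assuming xtperf preserves the upstream perf module’s Benchmark.load and get_values API; output.json stands for the file produced in step 4.

  # Hypothetical offline inspection of an xtperf result file.
  # Assumes the upstream perf Benchmark API is preserved; if not,
  # the file can still be read with the standard json module.
  import statistics
  import perf

  bench = perf.Benchmark.load('output.json')
  values = bench.get_values()  # one timing value per collected iteration
  print('%s: mean %.6f s over %d values'
        % (bench.get_name(), statistics.mean(values), len(values)))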

They can also be analyzed graphically offline using the steps mentioned in the previous section. Sample output is shown below.

[Figure: System and process benchmark]

Other visualizations are possible, e.g. comparative plots across two benchmarks, as shown below:

[Figure: System-wide comparison]

[Figure: Process-wide comparison]

Remarks and recommendations