Extended Toolkit to benchmark time, CPU, and memory
Let us assume that there is an application called MyApp which you want to benchmark in some particular scenario.
A typical use case for such benchmarking is described in the steps below.

Install the toolkit
$> python setup.py install
This installs the toolkit as a module named “perf”.
Create benchmarking test scenario
Create a test.py (or any other name) and write a small test script that triggers the “particular scenario” of MyApp.
For useful analysis, benchmarking should always be performed for a controlled scenario.
See test.py for a simple example, and ilastikbench for a real example of writing such a test.
This test program is used as the application process from which process-specific benchmark results are collected.
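As an illustration, a minimal sketch of such a test file is shown below. This is not the repository's test.py, just a rough example assuming the Runner API of the underlying perf module; my_app_scenario is a hypothetical placeholder for whatever code path of MyApp you want to exercise.

# benchmark scenario (sketch; names are placeholders)
import perf

def my_app_scenario():
    # Hypothetical stand-in for the MyApp code path under test,
    # e.g. load one dataset and run a single processing step.
    total = 0
    for i in range(10000):
        total += i * i
    return total

runner = perf.Runner()
runner.bench_func('myapp_scenario', my_app_scenario)

When such a script is run, the standard perf Runner parses command-line options like those shown in the next step (--loops, --values, -p, -o) and spawns worker processes to collect the measurements.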
Run the benchmark
$> python ibench.py --traceextstats 1 --loops=1 --values=1 -p 5 -o output.json
Change -p to adjust the number of worker processes used for the run.
Analyze the results offline
$> python -m perf help
$> python -m perf plot -sn <mybenchmark.json>
$> python -m perf plot -sn <mybenchmark1.json> <mybenchmark.json>
$> python -m perf stats -x <mybenchmark.json>
$> python -m perf dump -d <mybenchmark.json>
The xtperf results can be stored in an output file in JSON format.
The results can be analyzed offline on the command line using the steps described in the previous section. Results are reported for each worker process and for every periodic iteration within each worker.
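Besides the command-line tools, the stored JSON file can also be inspected programmatically. The snippet below is a rough sketch assuming the perf module's BenchmarkSuite API; output.json is the file written by the run step, and the summary statistics are computed with the standard library rather than any toolkit-specific helpers.

# sketch for loading the JSON results programmatically
import statistics
import perf

# Load the benchmark suite written with -o output.json
suite = perf.BenchmarkSuite.load('output.json')

for bench in suite.get_benchmarks():
    values = bench.get_values()  # one value per collected measurement
    mean = statistics.mean(values)
    stdev = statistics.stdev(values) if len(values) > 1 else 0.0
    print('%s: mean=%.6f stdev=%.6f (%d values)'
          % (bench.get_name(), mean, stdev, len(values)))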
They can also be analyzed graphically offline, using the plot commands from the previous section. Sample output is shown below.
These results show the measured time, CPU, and memory values for each worker process and iteration.
Other visualizations are possible, e.g. comparative plots across two benchmarks as shown below: