Getting started with Google Caliper

Earlier this week, I came across a few references to a micro-benchmarking tool called Google Caliper. As I was working on enhancing the micro-benchmarking features in VLPT, I thought this would be a good time to give another framework a try - after all, there's no point going to the effort of writing the enhancements if something else out there already does the job well.

It takes a little bit of legwork to get up and running with Caliper, so I thought I'd blog the journey in case it can help anyone else.

Caliper is hosted on Google Code - and the first thing you'll spot is that there are no files available under the downloads link. I did a quick search in Maven Central, and there is a 0.5 release candidate 1 entry.

This gives us two options: git clone the current repo and use the bleeding edge, or stick with the maven option from February 2012.


Let's try the Maven option first. My POM gets the following dependency:
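For reference, this is the shape of that dependency - these are the 0.5-rc1 coordinates under the com.google.caliper group id as published to Maven Central:

```xml
<dependency>
    <groupId>com.google.caliper</groupId>
    <artifactId>caliper</artifactId>
    <version>0.5-rc1</version>
</dependency>
```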


And I decide a quick test is in order, to see if there are any differences between getting times in milliseconds or nanoseconds (hopefully there won't be!).


import com.google.caliper.SimpleBenchmark;

public class CaliperTest extends SimpleBenchmark {
    public void timeNanoTime(int reps) {
        for (int i = 0; i < reps; i++) {
            System.nanoTime();
        }
    }

    public void timeSystemTime(int reps) {
        for (int i = 0; i < reps; i++) {
            System.currentTimeMillis();
        }
    }
}
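Strictly speaking, a benchmark like this measures the cost of each call rather than the granularity of what it returns. As a quick side check outside Caliper, you can count how often each clock actually ticks over during a tight loop - a throwaway sketch, and the counts will vary by platform and JVM:

```java
public class TimerResolution {
    // Count how often each clock advances over many back-to-back reads.
    // Returns {distinct nanoTime changes seen, distinct currentTimeMillis changes seen}.
    public static long[] sample(int iterations) {
        long nanoTicks = 0, milliTicks = 0;
        long lastNano = System.nanoTime();
        long lastMilli = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            long n = System.nanoTime();
            long m = System.currentTimeMillis();
            if (n != lastNano) { nanoTicks++; lastNano = n; }
            if (m != lastMilli) { milliTicks++; lastMilli = m; }
        }
        return new long[] { nanoTicks, milliTicks };
    }

    public static void main(String[] args) {
        long[] ticks = sample(1_000_000);
        System.out.println("nanoTime advanced:          " + ticks[0] + " times");
        System.out.println("currentTimeMillis advanced: " + ticks[1] + " times");
    }
}
```

On most machines nanoTime advances on nearly every read, while currentTimeMillis only ticks over once per millisecond or so (and reportedly much more coarsely on some older Windows JVMs).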

It took a bit of guesswork to figure out how to run it (in the end I worked it out after git cloning and having a look in the scripts...) - create a new run configuration with these settings:
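For anyone recreating that run configuration: as far as I can tell, the 0.5 jar launches benchmarks through the `com.google.caliper.Runner` main class, with the benchmark class name as the program argument (this is the entry point the scripts appear to use):

```
Main class:        com.google.caliper.Runner
Program arguments: CaliperTest
```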

 0% Scenario{vm=java, trial=0, benchmark=NanoTime} 0.45 ns; σ=0.00 ns @ 3 trials
50% Scenario{vm=java, trial=0, benchmark=SystemTime} 0.45 ns; σ=0.00 ns @ 3 trials

 benchmark    ns linear runtime
  NanoTime 0.453 ==============================
SystemTime 0.453 =============================

vm: java
trial: 0

So it's actually pretty simple - you can probably get here in less than five minutes if you don't have to go digging around. Sadly, the User Guide on the Google Code wiki doesn't have much content, so it does take a bit of work.

One of the great things on the landing page is the image of the web view - so how do we get that working?

Fortunately the online results page did have useful content - if you go to the results site and log in with a Google account, you will get a key:

Caliper uses an API key to link your local installation to your Google account. To enable online access, copy and paste this API key into your .caliperrc file:

# Caliper API key for
apiKey: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

on Mac OS X, Linux and UNIX:	~/.caliperrc
on Windows:	C:\.caliperrc

The linux advice looks sound, but I'm running on Win 7. The home directory maps to C:\Users\James for me, so that's where I need to create the .caliperrc file. I pasted in all three lines directly from the web page.
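If you're not sure which directory your JVM treats as home (I'm assuming Caliper resolves the file against the `user.home` system property, which is how most tools do it), a two-line check settles it:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class CaliperRcLocation {
    // Resolve ~/.caliperrc against the JVM's idea of the home directory.
    public static Path rcPath() {
        return Paths.get(System.getProperty("user.home"), ".caliperrc");
    }

    public static void main(String[] args) {
        // e.g. C:\Users\James\.caliperrc on my machine
        System.out.println(rcPath());
    }
}
```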

We don't have to make any changes to the run configuration - just hit Run again and wait for the test to complete. You should see an extra line at the end of the output:

View current and previous benchmark results online:

Hitting the link should take you to the results (it took a little while to load for me):


So that's pretty good - nice and straightforward. What I really like about their results webapp is the box plots on the results.

They do a great job of explaining the various aspects on the online result page - basically you get five juicy facts in one lovely bar:

My example code isn't great for highlighting these things as the results are so consistent. But for a real example (I'm going to do one for VL Logging appenders soon) it provides invaluable data:

Their online results system has some other nice features for tracking changes to your benchmarks over successive runs as well: simply run the test again, and the results will be shown alongside the previous runs.

One interesting aspect of their implementation is how many test runs they carry out per test. It's not 100% clear, but I think it's something along these lines (for the default MicrobenchmarkInstrument):

So you end up with 9 results, based on the average of a variety of runs with different numbers of repetitions, and you know that each run will take around 15 seconds.
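Whatever the exact schedule, the core calibration trick - growing the rep count until a single timed pass is long enough to measure reliably - can be sketched like this. This is an illustrative guess at the shape of the approach, not Caliper's actual code:

```java
public class RepCalibration {
    interface Timed {
        long run(int reps); // returns a value so the JIT can't discard the work
    }

    // Double the rep count until one timed pass takes at least targetNanos;
    // that count is then what the real measurements would use.
    public static int calibrate(Timed benchmark, long targetNanos) {
        int reps = 1;
        while (true) {
            long start = System.nanoTime();
            benchmark.run(reps);
            long elapsed = System.nanoTime() - start;
            if (elapsed >= targetNanos || reps >= (1 << 30)) {
                return reps;
            }
            reps *= 2;
        }
    }

    public static void main(String[] args) {
        int reps = calibrate(r -> {
            long sum = 0;
            for (int i = 0; i < r; i++) sum += i;
            return sum;
        }, 1_000_000); // aim for ~1 ms per pass in this demo
        System.out.println("calibrated rep count: " + reps);
    }
}
```

A real harness also has to defeat dead-code elimination and warm up the JIT before measuring - which is exactly the fiddly work Caliper is doing for you during those 15-second runs.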

Cloning from Git

I might have picked a bad day to hit the bleeding edge, as it looks like the project is transitioning from 'old' Caliper to 'new' Caliper. Things of note:

public static void main(String[] args) {
    CaliperMain.main(CaliperTest.class, args);
}

You specified a Caliper API key, but web uploads are not yet available in new Caliper.
If you require use of the web application then please use old Caliper.

Scenario selection: 
  Benchmark methods: [NanoTime, SystemTime]
  User parameters:   {}
  Virtual machines:  [C:\Program Files (x86)\Java\jdk1.6.x_xx\jre]
  Selection type:    Full cartesian product

This selection yields 2 scenarios.
Measuring 1 trials each of 2 scenarios. Estimated runtime: 30s.
Results for
 benchmark         ns runtime
  NanoTime      0.454 ==============================
SystemTime      0.453 =============================

vm: C:\Program Files (x86)\Java\jdk1.6.x_xx\jre
Posting to failed: Internal Server Error
Posting to failed: Server returned HTTP response code: 500 for URL:

1 of 2 measurements complete: 50.0%.
2 of 2 measurements complete: 100.0%.
Execution complete: 29.26s.

For now I'll be sticking with the Maven 0.5-rc1 version, but progress on trunk looks good - once they've wired everything back together, the 'new' version should be worth revisiting.

Where to go from here

# Caliper ultimately records only the final N measurements, where N is this value.