VLPT

VLPT is the temporary name for Vertex Lab's Performance Testing Framework.

VLPT aims to fill a gap in the current world of testing frameworks, focusing on key aspects of performance testing such as distributing load across remote agents, SLA checking, and continuous feedback from headless (Ant/Hudson) runs.

Getting started

The best place to get started is here: Getting started guide

Using VLPT from Ant

There is also a guide to help you get VLPT up and running using the Ant headless runner: VLPT and Ant

Agents

Once you've got your tests running locally, you'll want to distribute them to one or more remote agents. This guide takes you through the Agent basics: VLPT Agents

SLA support

You can specify SLA values for each of your tests and transactions in the test configuration XML file. Each <test> element can have a <transactionSLAs> element in which you can define an SLA for the test as a whole (using an empty name string) or for specific named subtransactions.

<test name='MyTest' class='com.vertexlabs.tests.MyTest'
      targetThreads='1' threadStep='1' threadStepTime='1000'
      targetRate='100' rateStep='10' rateStepTime='1000' recordAllValues='true'>

    <transactionSLAs>
        <transactionSLA name="" value="80"/>
        <transactionSLA name="subtransaction1" value="25"/>
        <transactionSLA name="subtransaction2" value="40"/>
        <transactionSLA name="subtransaction3" value="5"/>
    </transactionSLAs>
</test>

The SLA is used in the VLPT report to indicate which tests and transactions are below the acceptable level, using the 90th percentile value for the comparison. The SLA threshold is also visible on the transaction frequency analysis chart for each test and transaction.

When running in headless mode, it is possible to trigger a failure (via Ant or Hudson) if any of your tests fail their SLAs. This is the primary method of getting continuous feedback from your performance tests, so it's definitely worth investing the time in coming up with suitable SLA levels for your systems.

Agents

Agents are configured in the console XML file like this:

<interactiveConfiguration>
    <agents>
        <agent name="embedded"/>
        <agent name="agent1" address="server1" port="20001"/>
        <agent name="agent2" address="server1" port="20002"/>
        <agent name="agent3" address="server2" port="20001"/>
    </agents>
    ...
</interactiveConfiguration>

The configuration options are hopefully pretty straightforward. The embedded agent is a magic name that asks the console to fire up an Agent in the same JVM as the console. This is great for simple tests and also for debugging (as it's all in the same VM, you can set breakpoints in your tests). Communication with the agent is still done as if the agent were running externally, so it's a really good test to do before you spin a test out to a large number of remote agents.

Ant support

VLPT can be run from Ant using a custom task.

First you need to declare the standard Ant taskdef to bring in the vlpt task:

<taskdef name="vlpt" classname="com.vertexlabs.performance.console.headless.VLPTAntTask" classpath="..."/>

You'll need to replace the ... with the path to the VLPT Console jar you are using.

Then you can use the vlpt task wherever you want within your target:

<vlpt configPath="path.to.your.configuration.xml"
      warmupTime="5000"
      sampleTime="4000"
      timeToWaitForAgents="10000"
      agentsRequired="1">

    <classpath>
        <pathelement path="your.jars.here"/>
    </classpath>
</vlpt>

The headless-mode console has a straightforward lifecycle:

  1. Load and validate the config file
  2. Attempt to connect to all of the agents specified in the configuration
  3. Wait for agentsRequired agents to come online, giving up if we can't find enough after timeToWaitForAgents milliseconds have passed
  4. Start the test
  5. Wait for warmupTime to give the tests a chance to settle down
  6. Clear the recorded stats (just the same as pushing the clear stats button on the front end)
  7. Wait for sampleTime to accumulate results
  8. Generate a report
  9. Stop the test

The test results are stored in both HTML and CSV formats, containing exactly the same values as you see in the front-end HTML report.

Results Repository

VLPT comes with its own Results Repository which allows you to view and compare historical test results. Have a read through this guide to get started with the repository: VLPT Results Repository

Telemetry Viewer

[Added in 1.1.46]

You can now use the telemetry viewer outside of VLPT - full instructions can be found here: Telemetry Viewer

FAQ

I'd like the test to start running as soon as the agents have connected. Can I do this?

Yes, set -DvlptConsoleAutostartAgents=[number of agents to wait for] in your JVM properties and the console will automatically start the test once that many agents have connected.

Is there a quick way to launch different tests?

It's a bit of a pain creating new launch configurations for different test configs; there is another way if you're OK with creating extra classes to make launching convenient:

import com.vertexlabs.performance.console.VLPTConsole;

public class VLPTSleep {
    public static void main(String[] args) {    
        VLPTConsole.runWithAutostart("src/main/resources/com/vertexlabs/performance/console/configuration/sleep-benchmark.xml", 1);
    }
}

The 1 at the end is another way of specifying the number of auto-start agents to wait for.

Do I have to use XML configurations? Can't I just run the test from code?

[Added in 1.1.16]

You can use the VLPTConfigurationBuilder to create a test configuration in code:

package com.vertexlabs.performance.console.sample;

import static com.vertexlabs.performance.console.configuration.VLPTConfigurationBuilder.*;

public class SampleLauncher {
    
    public static void main(String[] args) {
        
        builder().addTest(test(SleepTest.class).name("delay-20").properties("delay=20").threads(10))
                 .addTest(test(SleepTest.class).name("delay-10").properties("delay=10"))
                 .addAgent(embeddedAgent())
                 .addAgent(agent().address("localhost").port(444).name("agent1"))
                 .autostart(1)
                 .execute();
        
    }
    
}

The first cut only allows for simple test properties - let us know if you want to be able to configure complex property sets using this approach and we can add it.

How do I pass properties to my tests?

There are two ways to do this. The quickest is the inline method:

<test name='sleep-25@25' class='com.vertexlabs.performance.console.sample.SleepTest' targetRate='25' rateStep='100' properties="delay=25"/>

The properties="name=value,name=value" attribute allows you to pass simple properties in CSV format.

If you need a more complex properties setup (or property values that contain commas), you have to do it longhand:

<test name='test1b' class='com.vertexlabs.performance.console.sample.Test1' targetThreads='5'>
    <properties>
        <property name="testProperty" value="b"/>
    </properties>
</test>

You access the properties in your test through the TestContext object:

public void setup(TestContext pti) throws Exception {
    String property = pti.getProperty("testProperty");
    System.out.println("Test1 individual property is " + property);
}

Is there an easy way to scale an entire test up without having to change the individual test rates?

Yes, you can tweak the transactionRateModifier in the configuration and in real time. This factor is used in the calculation of target rates in the agents, so it will scale the whole test up or down. The default value is 1, and you can change it in the test configuration like so:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

<interactiveConfiguration transactionRateModifier="1.5">
    
    <agents>
        <agent name="embedded"/>
    </agents>
    
    <tests>
        <test name='hudson' class='com.vertexlabs.performance.console.sample.WebPageLoader' targetRate='20' rateStep='100' properties="url=http://server:8081"/>
    </tests>
    
</interactiveConfiguration>

You can also change it in real time from the Rate Control panel, by raising or lowering the ticker or setting an absolute value.

Is there a way to leave a test running on the agents when I shut down the console?

Normally the last thing the VLPT console does when you close its window is send a kill message to all agents currently running a test. If you want to run a long-running test, you can add 'sendKillOnConsoleClose="false"' to the interactiveConfiguration element in the XML config.

<interactiveConfiguration ... sendKillOnConsoleClose="false">

Is there an easy way to log the behaviour in my tests?

(From version 1.1.39)

As your tests get more complex, it can be really helpful to see what is going on. You have a variety of options for logging:

Internal logging

There is a method on the TestContext interface:

void log(String format, Object... params);

This takes a format string in the slf4j style (using curly braces for variable substitution) and sends the resulting message back to the VLPT Console. You can view the output in the Console tab.
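
As a minimal sketch, assuming a test method that receives the TestContext in the same way as the setup example above (the method name and timing logic here are illustrative, not part of the VLPT API):

public void runIteration(TestContext pti) throws Exception {
    long start = System.currentTimeMillis();

    // ... exercise the system under test ...

    long elapsed = System.currentTimeMillis() - start;

    // slf4j-style substitution: each {} is replaced by the corresponding parameter,
    // and the resulting message is sent back to the Console tab
    pti.log("Iteration completed in {} ms after processing {} records", elapsed, 250);
}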

This system is fairly simplistic but can work well for simple tests; for anything more complex, you can interface VLPT with the Vertex Labs Logging Hub.

LoggingHub

The VLPT agent can be wired up to send logging information to an instance of the Vertex Labs LoggingHub. You'll need to set this up separately (it should take you a couple of minutes max) and full instructions can be found here: [vllogging-gettingstarted]

As the configuration needs to be distributed to the clients, you'll need to tweak your configuration with a couple of new values:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

<interactiveConfiguration loggingHubs="localhost" loggingTypes="internal">
    ...
</interactiveConfiguration>

With this configuration, any calls to TestContext.log(...) will route their messages to the LoggingHub (as well as to the VLPT Console). This can provide quite a bit of extra functionality for understanding what your test code is up to, but you are still limited to that single log method.

This is where the other loggingTypes come in: you can wire up standard log4j and java.util.logging loggers as well.

If for some reason you can't use the standard logging frameworks (our main problem being when we want to test the logging frameworks themselves), there is a fourth option:

   private static final Logger logger = Logger.getLoggerFor(MyClass.class);

and it works in a very similar way to the slf4j facade and Logback.
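
Assuming this facade mirrors the slf4j API (the info method and {} substitution below are an assumption based on that similarity, not something documented on this page), usage might look like:

private static final Logger logger = Logger.getLoggerFor(MyClass.class);

public void process(int recordCount) {
    // assumed slf4j-style call: {} placeholders are substituted in order
    logger.info("Processed {} records", recordCount);
}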

Is there a way to make my tests fail if performance drops below an acceptable level?

(From version 1.1.39)

You can now add the failureThreshold attribute to each of your tests when running in headless mode and from the VLPT Ant task.

This value is compared against the current average successful transaction time, and if the threshold is exceeded the test will fail. To give the test time to warm up, the check is only made after the warmup time has been reached.

By default, at least 10 data points are required before the test will fail (one data point is added for each second of the test); this can be configured on a per-test basis using the failureThresholdResultCountMinimum attribute of the testConfiguration elements.
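
For example, a sketch based on the SleepTest configuration shown earlier; the attribute names come from the description above, and the threshold values are illustrative only:

<test name='sleep-25@25' class='com.vertexlabs.performance.console.sample.SleepTest'
      targetRate='25' rateStep='100' properties="delay=25"
      failureThreshold="50" failureThresholdResultCountMinimum="20"/>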

(From version 1.1.40)

You can now set the failureThresholdMode attribute on each test to one of the following (these values are case sensitive):

Is there a way to make my tests fail if they report too many failed transactions?

(From version 1.1.40)

Yes - this is very similar to the failureThreshold feature - add the failedTransactionCountFailureThreshold attribute to your tests. If the total number of failed transactions for that test reaches that threshold value, the test will be stopped and the report will contain an appropriate failure reason.

You can also add the same attribute to the top-level interactiveConfiguration element - this will take the total failed transactions across all tests and apply the same checks.
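
For example, a sketch showing both placements (the threshold values are illustrative only):

<interactiveConfiguration failedTransactionCountFailureThreshold="100">

    <agents>
        <agent name="embedded"/>
    </agents>

    <tests>
        <test name='sleep-25@25' class='com.vertexlabs.performance.console.sample.SleepTest'
              targetRate='25' rateStep='100' properties="delay=25"
              failedTransactionCountFailureThreshold="10"/>
    </tests>

</interactiveConfiguration>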