In February 2010, I wrote a brief article describing how MySQL could be used instead of VTS as a repository for test parameter data during performance tests. This has so far been the most popular article on this website, and the associated MySQL LoadRunner libraries have been downloaded over 100 times. Since February, together with John Howley, I’ve refined this technique and spent some time improving the documentation.
There were a number of reasons why I was keen to improve this technique:
- I needed a reliable, scalable alternative to VTS.
- I needed the ability to store results from multiple tests and compare them by using simple queries.
- I needed to be able to select test data at random from large tables yet still ensure that the record in use by one script couldn’t be used by another script.
- MySQL gives us the ability to provide LoadRunner with dynamic data (and possibly even change it during a test).
- In a recent engagement, LoadRunner was having problems when using large parameter files, even on our large load generator and controller servers (quad-core Windows 2003 servers with 6GB RAM). Some parameters were not evaluated properly, and occasionally corrupt or partial strings were sent to the test system by LoadRunner.
- Script initialisation was taking a long time while the large parameter files were sent to the load generators by the controller. (Our parameter files were often 30MB or more; our largest was 258MB and contained more than three million rows.)
- LoadRunner summarises results and can display average response times at a minimum granularity of one second. Our client had a requirement to log all response times, to determine whether poor performance in a one-second period was down to one very slow transaction or a number of slower-than-average transactions.
- It was necessary to be able to share detailed transaction information with the project developers. To help identify one transaction among many, we saved information to a MySQL results database. Each record contained the exact time of the transaction, details of the API under test, the parameters used, and the HTTP response code.
- We were asked to provide the ability to test multiple APIs either simultaneously or independently. The application which we were testing behaves differently for different client types and so we were asked to cater for this in our scripts. We wrote scripts which would read run-time information from MySQL and set the proportion of transactions of each type accordingly.
- In our scripts, we set the proportion of JSON/XML requests in a MySQL table and the LoadRunner scripts read this information at run-time and used the appropriate ratio when simulating client interactions with the test system. The test application was an online music application and using the same technique we could change the proportions of the search scripts so that they searched for artists/tracks/albums in the appropriate ratio.
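The "select at random, but keep each record exclusive" requirement above is worth making concrete. The following is a minimal sketch of the pattern, not the actual library code: it uses SQLite from the Python standard library in place of MySQL (where you would typically use `SELECT ... FOR UPDATE`), and the table and column names are hypothetical.

```python
import sqlite3

# Sketch of selecting a random row while flagging it so no other script
# can use the same record. SQLite stands in for MySQL here so the example
# is self-contained; the test_users table is invented for illustration.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE test_users ("
             "id INTEGER PRIMARY KEY, username TEXT, in_use INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO test_users (username) VALUES (?)",
                 [("user%03d" % i,) for i in range(100)])

def checkout_random_user(conn):
    """Atomically flag one unused random row so a concurrent script
    cannot pick the same record."""
    cur = conn.cursor()
    cur.execute("BEGIN IMMEDIATE")  # take the write lock up front
    row = cur.execute("SELECT id, username FROM test_users "
                      "WHERE in_use = 0 ORDER BY RANDOM() LIMIT 1").fetchone()
    if row is None:                 # data pool exhausted
        cur.execute("ROLLBACK")
        return None
    cur.execute("UPDATE test_users SET in_use = 1 WHERE id = ?", (row[0],))
    cur.execute("COMMIT")
    return row

seen = set()
while (row := checkout_random_user(conn)) is not None:
    seen.add(row[1])
print(len(seen))  # every user is handed out exactly once
```

Because the flag is set inside the same transaction as the select, two scripts drawing from the pool at the same moment can never receive the same row, which is the property plain parameter files cannot give you.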
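The run-time ratio idea in the last two points can be sketched in a few lines. Again this is an illustration rather than our production code: SQLite stands in for MySQL, and the `run_config` table with its `request_type`/`weight` columns is a hypothetical schema.

```python
import random
import sqlite3

# Sketch of a run-time control table: scripts read the desired JSON/XML
# mix from the database at run-time instead of hard-coding it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE run_config (request_type TEXT, weight INTEGER)")
conn.executemany("INSERT INTO run_config VALUES (?, ?)",
                 [("json", 70), ("xml", 30)])
conn.commit()

def pick_request_type(conn):
    """Weighted random choice driven by the table, so an operator can
    change the mix mid-test simply by updating the weights."""
    rows = conn.execute("SELECT request_type, weight FROM run_config").fetchall()
    types, weights = zip(*rows)
    return random.choices(types, weights=weights, k=1)[0]

picks = [pick_request_type(conn) for _ in range(1000)]
```

The same table-driven approach works for the artist/track/album search ratios: add a row per search type and the scripts pick them up on the next iteration, with no re-deployment of scripts to the load generators.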
Reporting and analysis
- Running queries against the MySQL results database gave us the ability to compare multiple tests with one another, which is not possible using LoadRunner Analysis.
- We were able to report on transaction response times at a level of detail which is not possible using LoadRunner Analysis.
- Tools such as XML/SWF Charts by maani.us (http://www.maani.us/xml_charts/) allowed us to produce configurable results charts for our client.
- Excel and Access can be used to provide detailed reports and charts based on our test results, which the client can easily reconfigure to display the results they’re interested in (drill-down).
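To show what the cross-test comparison looks like in practice, here is an illustrative query: with every transaction logged to a results table, a single `GROUP BY` puts two test runs side by side. SQLite stands in for MySQL, and the schema and sample timings are invented for the example.

```python
import sqlite3

# Hypothetical results table: one row per transaction, tagged with the
# test run it belongs to. Sample data is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results ("
             "test_id INTEGER, transaction_name TEXT, response_time REAL)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?)", [
    (1, "login",  0.41), (1, "login",  0.39), (1, "search", 1.20),
    (2, "login",  0.35), (2, "login",  0.33), (2, "search", 0.95),
])

# Average response time per transaction for two test runs in one result
# set: the comparison that LoadRunner Analysis cannot do across sessions.
comparison = conn.execute("""
    SELECT transaction_name,
           AVG(CASE WHEN test_id = 1 THEN response_time END) AS test1_avg,
           AVG(CASE WHEN test_id = 2 THEN response_time END) AS test2_avg
    FROM results
    GROUP BY transaction_name
    ORDER BY transaction_name
""").fetchall()
for name, t1, t2 in comparison:
    print(name, round(t1, 2), round(t2, 2))
```

Since every individual response time is stored rather than a one-second average, the same table also supports percentile and outlier queries at whatever granularity the client asks for.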