Had a really good experience with a customer recently, but it highlighted the problems with performance testing, especially with Iometer. Now, we use Iometer a lot, and it’s a great tool for drilling down into a specific set of performance characteristics to show how a storage system responds.
However … the problem in such a situation is getting the parameters right, so that what you are testing actually matches your data.
So this customer was looking at maxCache – our SSD caching functionality that uses SSD drives attached to the 81605ZQ controller to add read and write caching to an array.
Testing with Iometer didn’t show much of an improvement (at least according to the customer). A discussion about the test parameters and how long to run a test for (one minute won’t cut the mustard) produced a big improvement over their original results (and yes, these guys know what they are doing with their systems, so I’m not having a go at any individual system builder here).
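One reason a short run under-reports a cache’s benefit is that the cache starts cold: the early part of the test is mostly misses, and only once the hot data has been pulled onto the SSD does the hit rate climb. The toy simulation below is my own illustration (it is not Iometer, and not how maxCache actually works internally): it models a simple LRU block cache over a skewed access pattern and reports the hit rate for each quarter of the run, with all sizes and ratios being made-up assumptions.

```python
from collections import OrderedDict
import random

def hit_rate_by_quarter(cache_size, hot_blocks, cold_blocks, accesses, seed=1):
    """Simulate an LRU block cache over a skewed workload and return
    the cache hit rate for each quarter of the run."""
    rng = random.Random(seed)
    cache = OrderedDict()          # keys are cached block numbers, LRU first
    quarter = accesses // 4
    hits = [0, 0, 0, 0]
    for i in range(accesses):
        # 90% of requests target a small hot set, as a read-heavy
        # virtual-desktop workload might; the rest are scattered misses
        if rng.random() < 0.9:
            block = rng.randrange(hot_blocks)
        else:
            block = hot_blocks + rng.randrange(cold_blocks)
        if block in cache:
            hits[min(i // quarter, 3)] += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict the least recently used
            cache[block] = None
    return [h / quarter for h in hits]

rates = hit_rate_by_quarter(cache_size=2000, hot_blocks=2500,
                            cold_blocks=250000, accesses=20000)
# The first quarter runs against a cold cache, so its hit rate should be
# the lowest; a benchmark that stopped there would understate the cache.
```

The point of the sketch is only this: a test that ends before the cache has warmed up measures the cold-cache behaviour, not the steady state the end users will actually see.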
So after much testing, it was decided to put the machine into test with real-world customers in a virtual desktop environment (I believe it was openE running a whole stack of virtual desktops). Guess what – the customers (end users) were as happy as pigs in …
It turns out the real-world data was perfectly suited to caching (as the system builder suspected), but Iometer was not able to accurately reflect the data characteristics of the real-world server. End result: everyone – system builder, datacenter operator, end users – happy and amazed at the performance of the system.
So where is the moral in this story? Simply that it’s difficult to take a piece of test software and come up with something that closely matches the end result of a server used in the real world. Is there an answer to this? Probably not, but I’d suggest everyone take performance testing software, and the results it produces, with a grain of salt, and look at testing in the real world, or at least a close simulation of it.
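The post doesn’t say how the customer’s real-world trial was set up, but as a sketch of what “a close simulation” might look like, here is a hypothetical job file for fio (a different, freely available benchmarking tool – not Iometer): a long, time-based, mixed random workload rather than a one-minute synthetic burst. Every value below is an assumption to be tuned against the actual workload, and the target device is deliberately left as a placeholder.

```ini
; hypothetical fio job approximating a read-heavy, VDI-style workload
[global]
ioengine=libaio
direct=1
time_based
; 30 minutes - long enough for an SSD cache to warm up
runtime=1800
group_reporting

[vdi-mix]
rw=randrw
; roughly 70% reads / 30% writes
rwmixread=70
bs=4k
iodepth=32
numjobs=8
; replace with the array under test
filename=/dev/sdX
```

The specifics matter far less than the shape: mixed reads and writes, realistic block sizes and queue depths, and a run time measured in tens of minutes, not seconds.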
The results can be very surprising.