This one is a perennial favourite of mine. There are certain RAID vendors around the place who promote the fact that their card has more cache than anyone else’s, and there are a multitude of system developers who believe that more cache equals more performance. Of course I’m talking here about the cache on the controller, not SSD caching etc (though that comes into play).
Now in the case of spinning media, cache is important. Having write cache turned on can speed up writes dramatically because the OS doesn’t have to wait for the drives to commit the data – instead it gets put in the cache on the controller and flushed to the drives later. Note the importance of cache protection in the form of supercap technology at this point, because that data is sitting in DRAM while it waits to go to the drives.
But what about when we get to SSDs?
When you make an array from pure SSD on our cards, the controller will prompt you to “turn off the cache” (both read and write). That doesn’t seem to make sense at first – after all, why not use the cache?
So I asked this question of my product marketing team – and got back a marketing response (I’ll leave it at that). It seems there are situations where the cache helps, and situations where having the cache turned off works better. The vast majority of installations fall into the second category – most data types and installations work best in pure SSD environments with the cache turned off.
In a very small number of cases – mostly where there is a large number of very, very small writes – having the cache turned on will help. So the question arises: how do you determine which camp you fall into? There is no way from our side of the fence to tell what will work best for your system – it’s simply a matter of testing in the real world. Since it’s easy to turn the cache on or off after the array has been made, this is no great issue – it just requires some real-world testing by the customer to see what works best for them.
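If you want a starting point for that testing, here is a minimal sketch in Python of the kind of workload where controller write cache matters most: lots of tiny synchronous writes. Nothing here is vendor-specific – the file path, write size, and counts are all made-up illustration values. Run it (or your own real application) once with the controller cache off and once with it on, and compare the elapsed times on your own hardware.

```python
import os
import tempfile
import time


def time_small_writes(path, count=1000, size=512):
    """Time many tiny synchronous writes -- the access pattern where
    controller write cache is most likely to make a difference."""
    buf = b"x" * size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.perf_counter()
    try:
        for _ in range(count):
            os.write(fd, buf)
            os.fsync(fd)  # force each write through to the array
    finally:
        os.close(fd)
    return time.perf_counter() - start


# Point this at a file on the array under test; a temp file is used
# here only so the sketch is self-contained and runnable.
with tempfile.NamedTemporaryFile(delete=False) as f:
    target = f.name

elapsed = time_small_writes(target, count=100)
print(f"100 writes of 512 B took {elapsed:.3f}s")
os.unlink(target)
```

This is deliberately a worst case – one fsync per 512-byte write – so it exaggerates the effect; your real workload is the only benchmark that actually counts.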
Notice that I said “real-world testing”? While I’m a big user of iometer, and love being able to generate all sorts of crazy workloads, those synthetic workloads are almost never representative of the real-world workload of a server.
So if you are using a pure SSD environment, leave the cache off, then do some testing with it turned on. I’d love to get some feedback from the real world regarding any scenarios that actually benefit from turning the cache on.