Singapore here I come …

October 24, 2014

Heading off next week to the Cloud Computing Conference in Singapore (which is a lovely place to be at any time of year as long as the air-conditioning is working).

Will be manning a stand showing a Series 8 controller running HDDs and SSDs, and also connected to iSCSI – caching and tiering all over the place (it’s a regular little datacenter in a box). In the interests of having less noisy equipment on the stand, I decided to go mad and run Hyper-V in Windows (I’m a Windows sort of person … and no, that does not mean “unreliable” :-)). Running virtual servers lets me run multiple performance testing scenarios on the same hardware.

This in itself is pretty cool. I’ll have one virtual server running on a tiered volume (SSD and HDD combined into a single volume), and another virtual server running on an iSCSI volume across the network, cached locally in the head unit. Two instances of iometer running on the same machine and you have yourself a crazy demo.
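
For anyone curious what those iometer runs actually do, here’s a rough Python sketch of the same idea – hammering a volume with small random reads at a fixed block size and queue depth. The file path and sizes are placeholders, and because plain Python file I/O goes through the OS cache, treat this purely as an illustration of the workload shape; iometer itself bypasses the filesystem cache and does far more (access-pattern mixes, latency histograms, and so on).

```python
# Minimal synthetic I/O sketch - a stand-in for one iometer worker.
# Paths/sizes are placeholders; point TEST_FILE at the volume under test.
import os, random, threading, time

TEST_FILE   = r"D:\iotest.bin"   # e.g. the tiered volume; change for the iSCSI volume
FILE_SIZE   = 1 * 1024**3        # 1 GiB test file
BLOCK_SIZE  = 4096               # 4 KiB random reads
QUEUE_DEPTH = 8                  # worker threads ~ outstanding I/Os
RUN_SECONDS = 30

def prepare():
    # Fill the file with real data so the reads actually have something to fetch.
    if not os.path.exists(TEST_FILE) or os.path.getsize(TEST_FILE) < FILE_SIZE:
        chunk = os.urandom(1024 * 1024)
        with open(TEST_FILE, "wb") as f:
            for _ in range(FILE_SIZE // len(chunk)):
                f.write(chunk)

def worker(stop, counts, idx):
    blocks = FILE_SIZE // BLOCK_SIZE
    with open(TEST_FILE, "rb", buffering=0) as f:
        while not stop.is_set():
            f.seek(random.randrange(blocks) * BLOCK_SIZE)
            f.read(BLOCK_SIZE)
            counts[idx] += 1

def run():
    prepare()
    stop, counts = threading.Event(), [0] * QUEUE_DEPTH
    threads = [threading.Thread(target=worker, args=(stop, counts, i))
               for i in range(QUEUE_DEPTH)]
    for t in threads: t.start()
    time.sleep(RUN_SECONDS)
    stop.set()
    for t in threads: t.join()
    print(f"{sum(counts) / RUN_SECONDS:,.0f} IOPS at {BLOCK_SIZE} B, QD {QUEUE_DEPTH}")

if __name__ == "__main__":
    run()
```

Run it once against each volume and you have the two-number comparison the demo is built around.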

The Cloud Computing Conference is focused on exactly that – the cloud. So what does a RAID card vendor have to do with the cloud? Quite simply, there are a lot of system integrators pushing solutions into the cloud, looking for things like pure-SSD environments, caching and tiering to get maximum performance from the biggest storage arrays possible. If it’s big, it generally means SATA drives. If it’s fast, it generally means flash of some description (e.g. Flashtec or SSD drives). But one or the other doesn’t cut the mustard – datacenters need more performance than spinning SATA drives can provide, and more capacity than pure-SSD environments can offer. So they are very interested in combinations of the two that will give them their competitive edge.

The fun part is the speaking role. How to make a short presentation on RAID, HBA, caching and tiering interesting to a tired group of worn-out attendees at the end of their second day on the job … now that will be much more of a challenge than manning the hardware stand all week.

So … if you happen to be wandering around the Singapore Cloud Computing Conference drop in and see Adaptec – we’re full of surprises.

Ciao
Neil


But I need more cache …

October 24, 2014

This one is a perennial favourite of mine. There are certain RAID vendors around the place who promote the fact that their card has more cache than anyone else’s, and there are a multitude of system developers who believe that more cache equals more performance. Of course I’m talking here about the cache on the controller, not SSD caching etc (though that comes into play).

Now in the case of spinning media, cache is important. Having write cache turned on can speed up writes dramatically because the OS doesn’t have to wait for the drives to write the data – instead it gets put in the cache on the controller and dumped to the drives at a later date … note the importance of cache protection in the form of supercap technology at this point, because that data is sitting in DRAM while it’s waiting to go to the drives.
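
To make that concrete, here’s a toy Python sketch of the difference. The latency numbers are invented purely for illustration: with write-through, the host waits for the disk on every write; with write-back, the host only waits for the controller’s DRAM and the flush to disk happens in the background – which is exactly why that DRAM needs supercap protection against power loss.

```python
# Toy model of write-through vs write-back caching.
# Latencies are invented for illustration only - not real drive numbers.
import time, queue, threading

DISK_WRITE_S  = 0.005    # pretend a spinning disk takes ~5 ms per write
CACHE_WRITE_S = 0.00005  # pretend controller DRAM takes ~50 us per write
NUM_WRITES    = 200

def write_through():
    start = time.perf_counter()
    for _ in range(NUM_WRITES):
        time.sleep(DISK_WRITE_S)          # host waits for the disk every time
    return time.perf_counter() - start

def write_back():
    dirty = queue.Queue()
    def flusher():                         # controller drains the cache to disk later
        while dirty.get() is not None:
            time.sleep(DISK_WRITE_S)
    t = threading.Thread(target=flusher)
    t.start()
    start = time.perf_counter()
    for i in range(NUM_WRITES):
        time.sleep(CACHE_WRITE_S)          # host only waits for DRAM
        dirty.put(i)                       # data now lives in cache until flushed
    host_done = time.perf_counter() - start
    dirty.put(None)
    t.join()                               # power loss before the flush finishes = lost data
    return host_done

if __name__ == "__main__":
    print(f"write-through: host waited {write_through():.2f} s")
    print(f"write-back:    host waited {write_back():.2f} s")
```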

But what about when we get to SSDs?

When you make an array from pure SSD on our cards, the controller will prompt you to “turn off the cache” (both read and write). Doesn’t make sense really – after all why not use the cache?

So I asked this question of my product marketing team – and got back a marketing response (I’ll leave it at that). It seems there are situations where the cache helps, and situations where having the cache turned off works better. The vast majority of installations fall into the second category – most data types and installations work best in pure-SSD environments with the cache turned off.

In a very small number of cases – mostly where there are a large number of very, very small writes – having the cache turned on will help. So the question arises … how do you tell which camp you’re in? There is no way from our side of the fence to tell what will work best for your system – it’s simply a matter of testing in the real world. Since it’s easy to turn the cache on or off after the array has been created, this is no great issue – it just requires some real-world testing by the customer to see what works best for them.
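
If you want to script that comparison, something like the sketch below is what I have in mind: flip the logical-drive cache mode, run your real workload, compare. I’m assuming the arcconf setcache syntax here – check the CLI documentation for your controller and firmware version, because the exact command and modes vary.

```python
# Rough sketch of a cache on/off comparison driven through arcconf.
# The setcache syntax is an assumption - verify against your controller's CLI docs.
# (Read cache is typically toggled the same way with "ron"/"roff" modes.)
import subprocess, time

CONTROLLER    = "1"
LOGICAL_DRIVE = "1"

def set_write_cache(mode):
    # mode: "wb" (write-back) or "wt" (write-through) on most arcconf versions
    subprocess.run(["arcconf", "setcache", CONTROLLER,
                    "logicaldrive", LOGICAL_DRIVE, mode, "noprompt"], check=True)

def run_real_workload():
    # Placeholder: launch *your* application's workload here, not a synthetic one.
    start = time.perf_counter()
    subprocess.run(["your_workload.exe"], check=True)   # hypothetical command
    return time.perf_counter() - start

if __name__ == "__main__":
    for mode in ("wt", "wb"):
        set_write_cache(mode)
        print(f"cache mode {mode}: workload took {run_real_workload():.1f} s")
```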

Notice that I said “real-world testing”? While I’m a big user of iometer and love being able to generate all sorts of crazy workloads, those workloads are almost never representative of the real-world workload of a server.

So if you are using a pure-SSD environment, leave the cache off, then do some testing with it turned on. I’d love to get some feedback from the real world about any scenarios that actually benefit from turning the cache on.

Ciao
Neil

 


What a difference a year makes …

October 24, 2014

I was in India promoting our Series 7/8 and maxCachePlus solutions recently. It had been over 12 months since my last visit, and I was amazed at the difference in the country. Not only has the infrastructure improved dramatically, but the standard and level of computer engineering is going through the roof.

There are, of course, many data centers in India, and telecoms reign supreme with such a big mobile market, but the real telling factor was the interest everyone now has in SSD, flash and caching/tiering solutions.

Previously India was seen as a low-cost marketplace (and yes, price is still a big factor), but now performance and innovation are the key drivers – everyone is keen to find a solution that gives them an edge.

iSCSI is huge in India. Funnily enough I don’t hear much of it down in Oz, but it seems that adding storage to your server by plugging in a cheap external storage solution is big business over there. So when I started promoting the ability of our Series 8 maxCachePlus Windows/Linux solutions to provide read-cache to the iSCSI external volumes (especially without having to reconfigure the data) they were very, very keen to say the least.

So it seems this idea of providing read-cache to an iSCSI volume (caching in the head unit, not the iSCSI target) is a bit of a goer … especially in markets that are looking for innovative ways of doing things.

Along with that, pure-SSD environments – something that is only just starting to catch on in Australia – are now pretty commonplace in India. Of course they are screaming for IOPS in these configurations and are very interested in Series 8. Yes, Series 8 is 12Gb/s and the drives they are using are 6Gb/s SATA, but the key point here is the IOPS performance of the card – it really doesn’t matter what the bus speed is, as long as the processor can handle the massive IOPS capability of the 16–24 SSDs connected to each server – and of course Series 8 has the highest IOPS of any card we make.
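
As a back-of-envelope illustration (the per-device numbers below are assumptions I’ve picked for the example, not measured figures), the aggregate IOPS of a couple of dozen SATA SSDs dwarfs anything spinning drives can offer, so it’s the controller’s processor – not the 6Gb versus 12Gb link speed – that decides whether you actually see that performance.

```python
# Back-of-envelope IOPS arithmetic. All per-device numbers are assumptions
# chosen for illustration - substitute the specs of your own drives and card.
SSD_RANDOM_READ_IOPS = 75_000     # assumed per SATA SSD
HDD_RANDOM_READ_IOPS = 180        # assumed per 7.2K SATA HDD
NUM_DRIVES           = 24
CONTROLLER_MAX_IOPS  = 700_000    # assumed ceiling for the RAID card's processor

ssd_demand = SSD_RANDOM_READ_IOPS * NUM_DRIVES
hdd_demand = HDD_RANDOM_READ_IOPS * NUM_DRIVES

print(f"{NUM_DRIVES} SATA SSDs could offer ~{ssd_demand:,} IOPS")
print(f"{NUM_DRIVES} SATA HDDs could offer ~{hdd_demand:,} IOPS")
print(f"Controller ceiling (assumed): {CONTROLLER_MAX_IOPS:,} IOPS")
print("Bottleneck:", "controller" if ssd_demand > CONTROLLER_MAX_IOPS else "drives")
```

With numbers anywhere in that ballpark, the card itself is what you hit first – which is exactly why the highest-IOPS controller matters more than the bus speed on the drive side.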

So I came away from my trip with a new focus on this marketplace – high-end products and innovation rather than a focus on the low end … which was a pleasant surprise and a big eye-opener for all concerned.

Think I’ll be spending a lot more time in India in the future :-)

Ciao
Neil


Where did all those disks come from?

October 1, 2014

(Another word for “FlexConfig”)

I’m doing it again … writer’s block, so it’s back to explaining a feature of our cards that you may not be aware of.

In the BIOS of our controller (under “Controller Settings/Controller Configuration/Controller Mode”) we have added several new options. Series 8 gets one more option than Series 7, just for confusion’s sake. Before I go into the details of the modes, I need to explain metadata.

Metadata is the area of the disk where we store our RAID information – who/what/where/why and how. It is created by “initializing” a disk. So when you initialize a disk it wipes out any previous metadata, and creates a new clean structure for the controller to store the RAID information on the disk.

The opposite of this is “uninitialize”. This removes the metadata completely, leaving a blank, clean disk that is, for all intents and purposes, not part of anything to do with RAID.
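
If it helps, here’s a purely conceptual Python sketch of what those two operations do to a disk. The structure and field names are my own invention to illustrate the idea – they are not the actual on-disk metadata format our controllers use.

```python
# Conceptual sketch only - field names are invented for illustration,
# not the controller's real on-disk metadata layout.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RaidMetadata:
    controller_id: str                  # "who" wrote it
    array_name: Optional[str] = None    # "what" the disk belongs to
    raid_level: Optional[int] = None
    member_index: Optional[int] = None  # "where" in the array this disk sits

@dataclass
class Disk:
    serial: str
    metadata: Optional[RaidMetadata] = None

def initialize(disk: Disk, controller_id: str) -> None:
    """Wipe any previous metadata and lay down a fresh, clean structure."""
    disk.metadata = RaidMetadata(controller_id=controller_id)

def uninitialize(disk: Disk) -> None:
    """Remove the metadata entirely - the disk is no longer RAID-related at all."""
    disk.metadata = None

if __name__ == "__main__":
    d = Disk(serial="ZX123")
    initialize(d, controller_id="series-8")   # hypothetical controller name
    print(d)   # metadata present, but no array details yet
    uninitialize(d)
    print(d)   # back to a blank disk
```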

The difference between these two operations is important. Keep it in mind when reading the following breakdown of what the different modes do …

Continue reading
