How to make things fast …

September 28, 2011

I recently returned from one of my overseas jaunts (country will remain unnamed to protect the innocent) … where I found a general attitude towards making things go fast.

That was … use 15K SAS drives.

Simple as that. Don’t worry about which RAID card you use, or which RAID type you use for your different data sets – just use SAS drives for everything and you’ll have the fastest server you can build.

While in principle I agree that for fileservers you should use SAS drives – they are fast, reliable and getting larger by the day – using these drives for every server type misses the point when it comes to building a server to suit your needs.

10K and 15K SAS drives have a great advantage over SATA drives when it comes to seek time – so general file serving and database work will benefit greatly from these drives – but for video work I don’t personally see the need for them. SATA drives (enterprise of course) are not far behind SAS drives in sequential throughput, and are dramatically cheaper and larger – meaning you can afford more spindles in a SATA environment than in a SAS environment – and that’s what will really give you the speed in a streaming environment – spindles.

(While re-reading this article before posting, I actually find I’m disagreeing with myself on this point (slightly). 7200rpm SAS drives (commonly called “nearline”) seem to be the best fit for this sort of work. The SAS interface gives these drives a slight performance improvement over their SATA counterparts, and they are therefore my favoured drives for many types of storage.)
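To put some rough numbers on the spindle argument, here’s a quick back-of-the-envelope sketch. The prices and throughput figures in it are purely illustrative assumptions, not quotes from any datasheet – the point is simply that a fixed budget buys more of the cheaper drives, and aggregate streaming throughput tends to follow spindle count.

```python
# Illustrative only: drive prices and per-drive sequential throughput
# below are assumptions, not vendor figures. The point: a fixed budget
# buys more of the cheaper spindles, and streaming throughput tends to
# scale with spindle count.

BUDGET = 6000  # hypothetical budget, in dollars

drives = {
    # name: (assumed price per drive, assumed sustained sequential MB/s)
    "15K SAS 600GB":       (600, 200),
    "Nearline SAS 2TB":    (300, 150),
    "Enterprise SATA 2TB": (250, 140),
}

for name, (price, mbps) in drives.items():
    spindles = BUDGET // price
    print(f"{name:20s}: {spindles:2d} spindles, "
          f"~{spindles * mbps:5d} MB/s aggregate sequential")
```

On these made-up figures the nearline and SATA boxes stream well past the 15K SAS box simply because there are more platters spinning for the same money.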

Add to that the fact that certain RAID types suit different data environments, and you can make a big difference to the speed of the system by using the correct RAID level for your data (e.g. don’t use RAID5 for databases).
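As a rough illustration of why the RAID level matters so much for random-write workloads like databases, here’s a small sketch using the commonly quoted RAID write penalties (RAID5 costs roughly four physical I/Os per host write, RAID10 two). The per-drive IOPS figure is an assumption chosen just for illustration.

```python
# Rough effective random-write IOPS for an array, using the commonly
# quoted RAID write penalties (RAID5 = 4 physical I/Os per host write,
# RAID10 = 2). The per-drive IOPS figure is an assumption.

WRITE_PENALTY = {"RAID0": 1, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_write_iops(drive_iops, drive_count, raid_level):
    """Approximate host-visible random-write IOPS for the whole array."""
    return drive_iops * drive_count / WRITE_PENALTY[raid_level]

drive_iops = 180   # assumed random IOPS for a single 15K SAS drive
drive_count = 6

for level in ("RAID10", "RAID5"):
    print(level, round(effective_write_iops(drive_iops, drive_count, level)),
          "write IOPS")
```

Same six drives, roughly double the random-write capability just by picking the RAID level that suits the data.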

When I showed some of my customers results from a couple of my crazy Australian customers, who are achieving unbelievable speeds on video with nearline SAS drives, people were stunned. They had been told, and firmly believed, that to make something fast you needed 15K SAS drives.

Not so (not all the time at least).

Therefore, think about the drive types you are using, and about building a server to suit your data – not just filling the box up with expensive fast drives and hoping that will do the job.

Ciao
Neil


5EE – is anyone actually using this RAID level? …

September 28, 2011

Several years ago Adaptec adopted the 5EE RAID level as a standard component of our hardware RAID cards. It seemed like a very good idea – making use of the idle drive that was traditionally a hot spare.

However, as I’ve come to know this RAID level better, I’ve been advising people against using it for one particular reason.

Compare two systems: System 1 has 5 drives in a RAID5 plus 1 hot spare; System 2 has 6 drives in a RAID5EE.

System 2 should run faster than System 1 – after all, it has more spindles doing the day-to-day work … and yes, it does run faster. In theory it’s around 15% faster, but I’ve yet to see that in practice.
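For the curious, the naive arithmetic behind that expectation goes something like the sketch below, assuming random I/O scales roughly with the number of spindles actively servicing requests. Parity and distributed-spare overheads pull the theoretical figure below the raw spindle-count gain, and real-world results come in lower again.

```python
# Back-of-the-envelope spindle comparison, assuming random I/O scales
# roughly with the number of spindles actively servicing requests.

system1_active = 5   # 5-drive RAID5; the hot spare sits idle
system2_active = 6   # 6-drive RAID5EE; the spare space is distributed,
                     # so all 6 drives service I/O

ideal_gain = (system2_active / system1_active - 1) * 100
print(f"Ideal spindle-count gain: ~{ideal_gain:.0f}%")
# Overheads of parity and the distributed spare eat into this ideal
# figure, which is why the quoted theoretical gain is lower, and the
# practical gain lower still.
```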

So what’s wrong with this RAID level? My problem arises when a drive dies. With System 1, when a drive dies, the hot spare kicks in, the RAID rebuilds and all is good again.

With System 2, when a drive dies, the RAID5EE compacts itself into a standard RAID5. That’s all good, except when you replace the drive – that’s when I have a problem. The RAID5EE will expand itself back out to a RAID5EE – another lengthy process which I believe (now) is not required.

So 5EE probably makes sense when you are running just 4 drives in a small JBOD or 1U server, but when the drive count increases I think (again, now) that it’s probably better to run just a RAID5 with a hot spare than to run 5EE.

So is anyone actually using this RAID level and if so, how do you find it?

Ciao
Neil


Measuring performance (in the modern server world) …

September 28, 2011

In days gone by, measuring performance was easy – MB/sec (megabytes per second) was all anyone was really interested in. However, measuring performance this way only suits systems where you are pushing large amounts of data through, not database or cloud-computing type environments.

For databases we measure performance in IOPS – I/Os per second. This measures how quickly small amounts of data can be transferred to and from the storage subsystem – in general the amount of data may not add up to many MB/sec at all, so the traditional measurement method has little bearing here.
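If it helps to see how the two measurements relate, block size is the bridge: MB/sec is simply IOPS multiplied by the I/O size. A quick sketch (with made-up but plausible figures) shows why a busy database can post impressive IOPS numbers while barely moving the MB/sec needle:

```python
# MB/sec = IOPS * I/O size. The workload figures below are assumptions
# chosen purely to illustrate the relationship.

def mb_per_sec(iops, block_size_kb):
    return iops * block_size_kb / 1024

print("Database, 8 KB random I/O at 5,000 IOPS:",
      round(mb_per_sec(5_000, 8)), "MB/s")
print("Video stream, 1 MB sequential I/O at 400 IOPS:",
      round(mb_per_sec(400, 1024)), "MB/s")
```

The database here is working far harder than its modest MB/sec figure suggests, which is exactly why the traditional measurement misleads.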

Now, many customers are fully aware of these two measurement systems – and certainly anyone who has sat through one of my boring presentations knows where and when to use each – but there’s a new kid on the block that’s proving a little tougher to nail down.

Latency.

What is latency? It’s basically the amount of time it takes for information to be delivered to an end user, generally in a web or cloud environment. This metric is measured in milliseconds and for many organisations is the difference between profit and loss, or life and death.

I recently sat through a presentation with one of the industry’s leading experts in measuring data, and working out what is important to a customer. In that presentation I saw some pretty amazing statistics regarding latency:

Amazon consider that an additional 100ms of latency costs 1% in sales (and that’s a very, very large amount of money); Google consider that an additional 500ms of latency reduces page viewing by 20% (in other words, people won’t wait).

Those are just a few of the amazing statistics that customers attribute to high latency. For most people, it’s a case of just getting sick of waiting for a page of information to turn up on their screens, but there are big dollars behind that frustration.

So how do we measure latency? That’s the 64-dollar question, because it’s not easy – and sometimes not even possible – to measure at the server. When you take into account all the network factors in between the server and the end user, it adds up to a complex environment to measure.
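If you want a feel for what end users actually experience, about the simplest thing you can do is probe from the client side. The sketch below (the URL is a placeholder and the sample count is arbitrary) times a handful of requests and reports the median and 95th percentile – which is closer to what your users feel than any number the server reports about itself.

```python
# Minimal client-side latency probe: time a handful of requests and
# report median and 95th-percentile latency in milliseconds.
# The URL is a placeholder; real measurements need probes near the
# users, since the network hops between server and user dominate.

import time
import statistics
import urllib.request

URL = "http://example.com/"   # hypothetical endpoint
SAMPLES = 20

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
print(f"median: {statistics.median(latencies_ms):.1f} ms")
print(f"p95:    {latencies_ms[int(0.95 * SAMPLES) - 1]:.1f} ms")
```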

IOPS don’t necessarily relate to latency either. Having high IOPS doesn’t always mean low latency – in fact it can be quite the opposite.
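One way to see why: Little’s Law says average latency equals the number of I/Os in flight divided by throughput. Push IOPS up by queuing more I/O against the storage and the per-request latency can climb at the same time. A tiny sketch with illustrative numbers:

```python
# Little's Law: average latency = I/Os in flight / throughput.
# The queue depths and IOPS figures below are illustrative assumptions.

def latency_ms(queue_depth, iops):
    return queue_depth / iops * 1000

print("QD=1,  2,000 IOPS ->", latency_ms(1, 2_000), "ms per I/O")
print("QD=64, 8,000 IOPS ->", latency_ms(64, 8_000), "ms per I/O")
```

Four times the IOPS, sixteen times the latency per request – a bigger IOPS number on the datasheet doesn’t automatically make the end user’s page load faster.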

Sounds complicated, doesn’t it? Well yes, it is. Adaptec have a lot of years of experience in sorting out problems for customers – whether it be streaming data speeds, database IOPS or, now, cloud latency – it seems our work is never finished. So the question is … what sort of data do you have, in what sort of environment, and how do you measure the actual, real performance of that data?

Ciao
Neil
