Did you know (1)? …

October 30, 2013

Having a bit of writer’s block this morning (“write blog article” appeared in my Outlook calendar but my brain is not responding in kind), so I thought I’d take the easy way out and give a quick technical update on our products with some obscure, bizarre or just not-widely-known bits of information.

“Auto Rebuild” is an option in the BIOS of our cards, and most people have no idea how it works. It could have been called “the option that helps the forgetful system admin” or “even if you don’t bother, we’ll safeguard your data for you if we can” … but I don’t think either of those descriptions would fit in the BIOS screen. “Auto Rebuild” is so last century (but that’s OK, because it’s been around for probably that long).

So what does it do? When enabled (which it is by default), if the card finds an array that is degraded it will first look for a hot spare. If one is not found, it will then look for any unused devices (drives that are not part of an array). If a suitable unused disk is found (i.e. one of the right size), the card will build that disk back into the array.
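
For the curious, here is a rough sketch in Python of that decision order – purely illustrative on my part, not actual firmware code, and the drive attributes are made up for the example – showing “hot spare first, then any unused drive that is big enough”:

```python
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    size_gb: int
    is_hot_spare: bool = False
    in_array: bool = False

def pick_rebuild_target(member_size_gb, drives):
    """Mimic the Auto Rebuild decision order described above."""
    # 1. Prefer a designated hot spare that is big enough.
    for d in drives:
        if d.is_hot_spare and d.size_gb >= member_size_gb:
            return d
    # 2. Otherwise fall back to any raw/unused device of sufficient size.
    for d in drives:
        if not d.in_array and not d.is_hot_spare and d.size_gb >= member_size_gb:
            return d
    # 3. Nothing suitable: the array stays degraded.
    return None

# The classic "forgetful admin" case: the replacement drive was never made a hot spare.
pool = [
    Drive("bay0", 1000, in_array=True),   # surviving member of the mirror
    Drive("bay1", 1000),                  # replacement drive, just sitting there raw
]
target = pick_rebuild_target(1000, pool)
print("rebuild onto:", target.name if target else "nothing - array stays degraded")
```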

Why did we need this? We have lots of hot spare options – why bother with an auto feature as well? My theory (and I’ll probably never know the full exact reason) is that system builders often send off a new system with a hot spare in place, but when a drive fails the user/customer replaces the drive without knowing how to, or that they should, or that they even need to, make that new drive a hot spare. So in my experience the “new drive” that has replaced the old failed drive is often just sitting there as a raw device … and when the next drive fails it does nothing, because it’s not a “hot spare”.

Well, in the case of our cards set to their factory defaults … the new drive will kick in and replace a failed drive (and rebuild the array), because of “Auto Rebuild”, or “even if you don’t bother, we’ll safeguard your data for you if we can” (as I like to call it).

Did you know that?

Quick update: I might have generalised a bit too much on this one. Note that this works with a SES-2 (hot-swap) backplane when you put the replacement drive into the same slot. Note also that on a Series 7 controller it works if you put the new drive into the same slot as the old drive, but if you put the new drive into a different slot you’ll have to make it a hot spare, because a Series 7 will grab that drive and show it to the OS as a pass-through device. I was trying to keep this simple, but probably went a bit far. Oh, the complexities we weave :-)

Ciao
Neil


But I told you … I don’t want 16 ports! (are you deaf?) …

October 25, 2013

So a customer rings up looking for a fast card for RAID 1 and RAID 10 – he’s going to use SSDs to build his ultimate home workstation/video server/graphics-CAD machine etc. (standard home user). He’s going full SSD no matter what anyone tells him, so it’s “performance, performance, performance” all the way.

So … what card do I need? I’ve used up all my available funds on drives, but I want the fastest possible RAID card to connect these SSDs.

71605E

“Now listen mate. I just told you I only have 4 drives … I’m not forking out for a 16-port controller!” “Typical sales guy … doesn’t listen.” “I said I had 4 drives, I said I wanted RAID 10 and you’re trying to flog me a 16-port controller!”

And so the conversation goes. It’s not until you get someone to look at the price list … and scroll all the way through to the 71605E … that you hear them go “oh!” on the other end of the phone. This is a pretty common scenario in my neck of the woods. The customer has 2 or 4 drives, so they are looking for a 2- or 4-port controller … they certainly don’t go looking for a 16-port controller.

So where did this all go wrong? In a way, it was the generosity of the product marketing team that started this (when they read that, they’ll think I’ve started being nice to them). The 71605E has less RAM onboard (256MB), which is fine because it only does RAID 0, 1, 10, 1E and Hybrid … so it doesn’t need a lot of RAM. Combine that with the fact that when connecting SSDs we recommend turning off the cache anyway (so there’s little point putting a lot of the stuff on there) … and it lets us get the price down.

However … the chip is the same as on all the other Series 7 controllers (a native 24-port ROC), so why fit only 4 or 8 of the plastic connectors the cables plug into? Sixteen fit, so why not just leave them there? Makes sense to me … whether I need them all or not, the card does the job. In reality it’s a really sensible, good-value card that fits the bill for a lot of people … if only they knew about it.

But they don’t … because they are not looking for it.

They are looking for a 4- or 8-port controller because that is how many drives they have, and they think that anything with a “16” on it will be crazy expensive, so they don’t start at that end of the cattle-dog (catalog) … and hence never find out about this card. So take a look at the “entry level” Series 7 controller … even though it has 16 ports, it may in fact be just the low-cost, high-performance, entry-level controller you are looking for.

Maybe this is the card they should be using instead of the 6805 as mentioned in one of my previous posts? Now even I’m getting confused :-)

Ciao
Neil


Getting the balance right …

October 24, 2013

Yin and yang … the Chinese had it just about right when they coined that phrase (and yes, I used Google to check that it is in fact Chinese – though I’m sure some smart soul will tell me otherwise) …

I’ve had several customers recently who have been reading about hybrid RAID. This is where you can mix an SSD and an HDD on an Adaptec Series 6, 7 or 8 and make a mirror out of those two drives. While this sounds crazy, it is pretty simple. Writes go to both drives (nicely buffered by the controller cache, so you get good speed), but all reads are serviced from the SSD – giving lightning read speed.

Sounds good … and people use the old story of “I don’t need RAID 5, so let’s go for an entry-level card”. In this case, the 6405E. So far, so good. However … the 6405E is PCIe 2.0 x1 – which limits throughput to around 500MB/sec. That’s probably not going to be too much of a problem on a RAID 1 – one SSD will go close to saturating that, but only close.

However … make a RAID 10 with 2 x SSD and 2 x HDD, and you are starting to stretch the friendship a bit. Reading a relatively large file off that array should in theory saturate the PCIe 2.0 x1 bus, making the card the bottleneck. So in this case you need to go to the 6405 controller (not the “E”), which has a PCIe 2.0 x8 connector and can easily handle the throughput of the SSDs.
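
If you want to see the arithmetic, here’s a quick back-of-the-envelope sketch. The numbers are rough assumptions on my part (roughly 500MB/s of usable bandwidth on a PCIe 2.0 x1 slot, and roughly 480MB/s of sequential read from a single SATA SSD), not measured figures:

```python
PCIE2_X1_MBPS = 500   # approximate usable bandwidth of a PCIe 2.0 x1 slot
SSD_READ_MBPS = 480   # assumed sequential read speed of one SATA SSD

# In a hybrid mirror, reads are serviced from the SSDs only,
# so the read demand scales with the number of SSDs in the array.
for ssd_count, layout in ((1, "hybrid RAID 1"), (2, "hybrid RAID 10")):
    demand = ssd_count * SSD_READ_MBPS
    verdict = "card becomes the bottleneck" if demand > PCIE2_X1_MBPS else "just about fits"
    print(f"{layout}: ~{demand}MB/s of reads vs ~{PCIE2_X1_MBPS}MB/s of bus -> {verdict}")
```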

So yes, you only need an entry-level card for a RAID 1 or RAID 10, but if you are doing a hybrid RAID then you probably need to consider the theoretical speed of the SSDs you are using and make sure there is no bottleneck in the way of their performance.

Ciao
Neil


Are you using your storage the right way? …

October 23, 2013

I spent a short time yesterday at the VMware VMworld 2013 “Defy” convention. Now maybe that should be the “deny” convention, but that would be cynical of me. 99.9% of the vendors at the show were showing SAN products – storage, backup, de-dupe, migration, DR … you name it, it was there – all based around SAN product. Now I’m not so dumb that I can’t get the point if you hit me over the head with a SAN for long enough – SANs greatly benefit the functionality of all things VMware by allowing migration etc. (just plain moving stuff around). So VMware focuses on, and promotes, SAN as its primary/preferred storage medium – makes sense to me.

However … (and yes, there is always a gotcha) …

We sell an awful lot of RAID cards to people using VMware on DAS (direct-attached storage). Yet I could not find even one person who was willing to discuss direct-attached storage – it was basically a no-no discussion point, since it does not fit all the functionality and marketing hype that VMware puts around its products – after all, it’s all stuck in the one box!

The reality is that no matter what a vendor thinks, and how hard they promote a specific use of a product, the customer will always come up with an innovative (I call it “left field”) way of using it, often to a point that you don’t think is very smart or realistic, but you keep that opinion to yourself because, right or wrong, you want the customer to buy more product.

In RAID storage it’s akin to the customer running RAID 0 on a bunch of desktop drives all running old firmware – stability is non-existent, but the customer expects that since this is an option in the RAID card, it should work just as well as every other option in the menu. Or the customer (and yes, I ran into one of these guys the other day) wanting to build a RAID 10 out of 16 SSDs … a rather too expensive configuration for my taste, but the customer was convinced that this was the right way to go.

So what is the “right way” to use your storage? There really isn’t one, but I’d strongly urge people to talk to a vendor and discuss the pros and cons, risks and benefits, shortcomings and upsides of their configuration – it may just be that while the customer has thought of all the innovative and “left field” ways of using storage, they haven’t considered the fundamental underlying problems they may run into because of their design.

So the lesson is? Talk to your vendor and don’t worry if they laugh/choke/smirk/scoff or otherwise deride your ideas … just listen to their input and balance your enthusiasm with their usual conservatism.

Ciao
Neil


Growing the datacenter …

October 8, 2013

Adaptec by PMC has some pretty cool products that work really well in datacenters. For datacenter operators who have moved on from “just throw brand-name hardware at it” to “let’s do it a lot cheaper and build our own storage boxes” … we have the RAID cards that provide the performance and density to meet their requirements.

Let me explain the bit in quotes above. Many a wise man has studied the datacenter environment and found that the startup often goes with the brand-name server and storage provider so they can (a) focus on their admin, (b) get service contracts from major vendors and (c) boast about their hardware platforms to prospective customers. This has a lot of benefits and is generally considered the way to start your datacenter. However, it comes at a cost … a big cost … in capital outlay and ongoing service contracts.

When a datacenter starts to grow, it generally finds all sorts of cost pressures mounting against providing high-end brand-name storage … and it starts looking to do things a little on the cheaper side. Enter the whitebox storage vendor/product. Nothing wrong with whitebox – Intel and Supermicro, for example, make excellent products which can sometimes be assembled at a much lower cost than the equivalent brand-name server and capacity (and these companies make some big, big bikkies doing this, so we are not talking tin-pot operations here).

So where does Adaptec by PMC fit in? Most commonly a datacenter operator is looking for large-scale storage capacity on as cheap a platform as possible. Enter the high-density RAID card, capable of connecting directly to 24 hard drives in a small environment, or serving a high-density rack-level environment with the head unit connected out to large numbers of densely packed JBODs. We have products that fit both of these environments, providing the capacity and performance to ensure that the datacenter bottleneck is not in the storage infrastructure.

So we find ourselves living in phase 2 of a datacenter’s life. Phase 3 of that lifecycle is where the customer starts to look at innovative solutions to improve performance, reduce latency and differentiate themselves from the crowd. PMC plays well in this space with intelligent SSD solutions and embedded ASIC solutions for these big players. Customers also ask “can we do this with software?” – the datacenter starts to look at its application layers and moves to simplified management of hardware via its software applications – and RAID takes a back seat to the humble HBA (and yes, we have those too). There is plenty of scope for transitioning through these phases, with modern RAID cards able to take on different modes of operation and fit across many different platform requirements.

At the top of the tree, in phase 4, is the big end of town, where the building blocks for the datacenter have moved from servers or racks to cubes or containers, and the scale means that the hardware is completely secondary to the application … the hardware environment becomes one of “ship it in, run it, then ship it out if it breaks” … with little to no interaction in between. The hardware is generally the same as in phase 3, but with greater emphasis on software control and distributed storage/function.

Typically the vast majority of smaller datacenters are at phase 1 or 2 and trying to get their hardware costs under control as their capacities continue to grow. This is not a bad thing – just a phase in the overall life of the datacenter.

So where are you? (and where is your data?)

Ciao
Neil


Driving me crazy …

October 2, 2013

I’m constantly asked the question: “what drives should I use?” Well, these days I, like many others, am struggling to answer that question.

I talk to drive vendors on a regular basis and they are constantly releasing new drives – but sometimes even they seem to struggle with the marketing naming conventions and the different types of drives being released into the channel. It is true that some drives are released because there is a perceived market segment, and that some drives are built for other customers (e.g. OEMs) and released into the channel because someone thinks it’s a good idea, but in the end the result is a bit of confusion on the part of the poor people trying to work out which drives to use for their day-to-day server builds.

SSDs have a wide range of so-called performance stats and a wide range of prices (even from the same vendor). Spinning drives come in 5900, 7200 and 10K RPM – and that’s just in SATA. Then add SAS to the mix at 7200, 10K and 15K RPM. What about “hybrid” drives? Oh, and by the way, mix in a good dose of 2.5” vs 3.5”, some naming conventions like “NAS, Desktop, Workstation, Datacenter, Audio Video, Cloud, Enterprise, Video”, and you have a wonderful mix that confuses the living daylights out of end users and system builders. Did I forget to mention 3Gb/s, 6Gb/s and now 12Gb/s interface drives hitting the market?

Soon ordering a drive will be like waiting in line at the local café … I’ll have a “triple venti caramel macchiato with whip, skim milk and cinnamon”. Now if I hear that from the person in front of me I think “w^%%&er”! But listening to a team of system engineers working out the correct drive for a particular customer requirement doesn’t sound too much different.

Try googling “making sense of hard drives” and you won’t get much help. Try working your way through the vendor websites and you are not much better off. So how do you do it? I ring my mates in the industry and even they struggle with all the new models and naming conventions from the marketing teams … so I wonder how the rest of the industry works out what it should be using? I’d be interested to hear.

Oh, and by the way, I forgot to take into account the “thickness” of the drive (as opposed to the “thickness” of some of the promotional material :-))

Ciao
Neil
