Funny what we take for granted …

July 9, 2015

Was mediating (sort of) a discussion between two customers recently. Both are system builders using our products and I was trying to get some credibility for a solution I was proposing to one customer. Always helps if you can get a recommendation from someone other than yourself for the mad ideas you are proposing.

During the conversations, customer “A” dropped the term “6000Mb/sec”. Customer “B” balked at this, asking whether we meant 6000 megabits or 6000 megabytes. I matter-of-factly indicated we were talking 6000 megabytes per second throughput – I thought that was pretty old hat by now. If we were “writing”, we’d designate megabits as Mb (little “b”) and megabytes as MB (big “B”) – but we weren’t…

Customer “B” almost choked.

He had no idea that storage systems could go that fast. I did point out that you need about 16 good SSDs to make this happen, but it’s no great stretch for an 8 Series controller directly attached to 16 SSDs to get this sort of performance (with the right data flow of course). So Customer “B” is off to get 16 SSDs to prove me wrong :-)
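For the record, the difference Customer “B” was worried about is just a factor of eight. A quick sanity check in Python (using the 6000MB/sec figure from the conversation above):

```python
# Mb = megabits, MB = megabytes; 1 byte = 8 bits.
throughput_MB_per_s = 6000                     # what we actually meant
throughput_Mb_per_s = throughput_MB_per_s * 8  # what Customer "B" feared

print(throughput_Mb_per_s)  # 48000 megabits/sec
```

So a quoted “6000” is either 6 gigabytes or 48 gigabits per second depending on the case of one letter – always worth asking.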

Lesson: just because I learnt something 12 months ago doesn’t mean that everyone knows the same stuff. I’ll learn to be a little more humble in the future and make sure I tell people what we are doing.



Get off the net (please) …

July 8, 2015

I woke up this morning to the news that the internet is slow … because of Netflix (and other such related companies).

Wow … this is news (not). The internet in Australia is a dodgy hotch-potch of technologies and connections based on an outdated and totally-neglected copper telephone network. It has never been great, but now it’s terrible. In simple terms it can’t keep up with the load of people streaming movies across the internet. This is most evident from around 4.00pm onwards – when the kids get home from school the internet slows to a crawl. There has always been an impact but now it is pretty bad – to the point where things like making Skype calls in the evenings (I do a lot of those for sporting committee meetings) are becoming untenable.

The previous Australian government (I’ll keep political parties out of it) decided we needed a fibre-to-the-home network across Australia (we call it the “NBN” – National Broadband Network), but the current government looked at the cost, decided they could get themselves re-elected for a lot longer if they took that money and spent it elsewhere, and scaled back that little venture, preferring to create a mix of copper, fibre and wifi. Did I mention “hotch-potch” somewhere earlier?

Ironically it’s now home users who are complaining and being heard. Business has been screaming for ages that our communications infrastructure is woeful, and that we pay a fortune for both phone and internet connectivity, but that seems to have fallen on deaf ears (politically). It will be interesting to see whether the politicians start to take notice now that the people who do the majority of the voting are the ones complaining.

Blind Freddy can see the benefits of having a fast, reliable, low-cost communications infrastructure in a country. No, I don’t mean we can all watch online movies at night – I mean we can conduct business in an innovative, economic and forward-thinking manner. It might even mean I don’t need to get on a plane to go and see customers across the country – I could do it with video conferencing. However trying to do that in Australia today would end up a farce – with customers a little less happy at the end of such a debacle than at the start.

So Australia struggles along in the 19th century – with all the innovative ways of doing business out there on the table – all hamstrung by the fact that they can’t communicate with one another.

I’m amazed the web hasn’t crashed while I’m trying to upload this.



Economic impact …

July 8, 2015


When I switch on the news these days it’s all doom and gloom – Greece is up for sale, the Australian dollar is plunging and China’s economy is not what you would call “moving forward”. In years gone by as an ordinary worker toiling away in a factory I would have taken little notice, and cared even less.

However, in today’s world of travel, international trade and multinational employers, things are a little different. Greece doesn’t directly impact my day to day, but China does. Australia sells a lot of stuff to China – mostly in the form of iron-ore (we are happily digging up large chunks of Australia and shipping them north).

China’s economic woes are having a big impact on the price of iron-ore. Now I don’t buy that stuff on a weekly basis, so I could be excused for not caring, but it has a major impact on the economy of Western Australia (primary source of most mining exports), which in turn has a flow-on effect to the rest of Australia.

So when I talk to my customers in WA I’m hardly surprised to hear that they are doing it tough, and that the mining industry has slowed dramatically. Considering that the mining industry drives a great deal of the economy, and an even larger portion of the server-consumption market in Western Australia, it’s hardly surprising that they are concerned.

If my customers are concerned then so should I be, so it’s time to jump on a plane and see what can be done to do my bit for the Australian economy!



What a difference a bay makes …

July 8, 2015

I did this test several years ago to show (a) how ridiculously expensive full-blown SSD servers were and (b) how little more you actually paid for maxCache. So I thought I’d have a go at it again based on current prices from the web (these are all dodgy $AUD figures taken from retail websites).

I picked as an example a database server. I have an 8-bay and 16-bay server, and have done some very rough maths based on capacities of drives. Note that the capacities of the drives will definitely be wrong as you don’t get what you pay for, but I don’t know exactly what the real capacities are for these drives so didn’t worry about that part of the calculation.

Note: I believe (if it works the same for you as it does for me) that you can click on the images and see the picture in larger format.


The 8-bay server

The big non-surprise here is that maxCache is the most expensive. There are a few reasons for that, but mostly the fact that you need a more expensive controller and SSDs compared to the full spinning media server where you can get away with a much slower RAID card.

The real surprise is the cost of the SSD server. I’ve used the 71605E (yes I’m banging on about that card again) because it will “just” cope with 8 x SSD and we don’t need a flash module in pure SSD arrays … plus the fact it’s inexpensive.

8 bay server costs

The 16-bay server

In the 16-bay server things balance out a bit more. The maxCache server is cheaper to purchase than even the spinning media server, but then you don’t get as much capacity. Because you are amortising the cost of the maxCache software (card) over a larger capacity, it has far less impact on the overall cost of the server.

Of course the SSD server goes way up in price, but in fact its $ per GB doesn’t really change from the 8-bay server to the 16-bay server (very little anyway).

16 bay server costs


So why am I putting this out there? I just think it’s interesting to look at the overall cost of things in reality compared to people’s perceptions. For example someone might say “… maxCache/pure SSD are way too expensive … my customer won’t cop the cost of SSDs”. Really? When explained in this manner I think a lot of customers will think differently.

Of course the key is performance. Almost anyone will believe that servers with SSDs will be “way” faster than servers with spinning drives alone – that’s not that hard a sell. So if, in the case of the 8-bay server, you said to your customer … the server will be $1149 more expensive to go full SSD – then yes, he might balk at it.

However, if you say … “for an extra 22cents/GB you can get a full-SSD server which will run like the blazes” it doesn’t sound expensive at all.
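The per-GB framing is easy to reproduce. Here’s a sketch with a hypothetical usable capacity (the real numbers come from the price tables above) just to show how a scary lump-sum difference turns into a small per-gigabyte one:

```python
# Hypothetical figures for illustration only -- plug in the real
# price difference and capacity from your own build.
extra_cost_aud = 1149.0   # full-SSD build minus spinning-media build
usable_gb = 5222          # assumed usable capacity of the array

extra_per_gb = extra_cost_aud / usable_gb
print(round(extra_per_gb, 2))  # 0.22 AUD/GB
```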

Lesson: it all depends on how you present things, but it’s worth investigating the full costs of the differences between a spinning, caching and pure-SSD server before just assuming that things are too expensive.



So which SSD should you use?

July 8, 2015

Adaptec produces compatibility reports on a regular basis. We are always testing our equipment against the components that we work with – motherboards, chassis, backplanes and of course drives. The CR reports can be found at:

While many people follow these reports as gospel, we have a lot of customers who do their own testing and use what they regard as suitable equipment before we get around to testing a particular product.

Here in Australia there are 3 very popular brands of SSD: Intel, SanDisk and Samsung. They are all good drives with their own particular advantages and strong points. They also have a lot of different models, not all of which are designed for use in RAID environments, but which users often pair with our controllers for some very effective/efficient storage solutions.

Ironically, I find that a lot of people base their usage on “the drive they trust” vs any real technical data. I’m a big believer in gut-feelings, and it seems I’m not on my own. Just as in spinning drives where many a user will say “I use Seagate only – never had a problem”, or “WD are the only drives I trust”, users will also have their own feelings regarding SSDs, for example: “we have tested SanDisk Extreme Pro and they offer the best value for money when developing a high-performance server” (direct quote from one of my customers).

So … does the fact that a drive is not on our CR mean that it’s no good? No, often it means we just haven’t tested that drive yet. There are far too many on the market for us to test them all. It’s also a case that different regions have different product availability (Asia/Pacific can’t buy some drives that can be found in the US and Europe), so finding the drive you want to use on the CR is not always possible.

The obvious and, in reality, only answer is: make sure you test your drives carefully before using them in large scale environments. You should also take into account the style of data that you are putting on the SSDs. In a read-intensive environment most drives will perform very well and last a long, long time. However if your environment exposes the RAID array to massive intensive writes, then I would strongly suggest sticking to datacenter/enterprise specific high-end SSDs to ensure the lifespan of the drive meets user requirements.

So in the end … should you only use drives on our CR? I’d like to say yes, but I know that is never going to happen. So talk to your drive vendors, talk to Adaptec, test your drives, then make your decisions and monitor your systems. The practical reality is that while there are a lot of different drives for a lot of different purposes, I’m finding as a general rule that SSDs are far more reliable, far faster and less troublesome than the short-term urban myths would have us believe.



Should you be using SSD?

July 8, 2015

Came across a customer the other day who (remaining nameless) was using some sort of fancy data capture software in heavy industry – data logging large amounts of data from rolling mills etc. The software vendor indicated that SATA drives were not acceptable, and that the RAID system needed to use 15K SAS drives in a RAID 10 array to provide acceptable performance to allow for “maximum capture of logging data”.

So we were discussing how to set up a system – and I indicated that I thought SSD would be the best way to go here. Capture large numbers of small writes – which then get moved on to a database at a point in time (so the data volume is not great). It was all about speed of small writes. So when I suggested that we use SSD (SATA) I found it very surprising that the customer told me the software vendor had indicated no, SSD should not be used – they should use 15K SAS spinning media in a RAID array.

Hmmm … how does the software application (a Windows app) know what sort of drives are underpinning the storage? We take a bunch of drives and combine them into a RAID array. We present that logical drive to the OS and handle all the reads and writes via our own on-board RAID processor. I can understand that the application can measure the speed of the storage (write speed in this case) and judge it to be sufficient or not, but it can’t see what sort of drives are attached – that’s hidden from a properly-written application.

Considering this system will be in a very computer-unfriendly industrial environment, I would have thought that using drives that can handle vibration (there will be lots of that), don’t use much power, don’t generate much heat, and have by far the highest write speed of any drive a user could choose for this application, would be the duck’s guts for this job.

So … my guess is that the recommendation on the software vendor’s website for using 15K SAS drives is probably 5-10 years old and would have made a lot of sense back then, but now it’s just plain out of date. If this isn’t an ideal application for 4 x SSDs on a 71605E in a RAID 10 I don’t know what is.

Lessons to be learned from this:

  1. Information on websites is not always up to date.
  2. Not everyone has jumped on the SSD bandwagon yet.
  3. You need to do a lot of investigation with vendors to find out what options you have for innovative storage solutions in today’s fast-moving environment.
  4. Telling me stuff ends up on a blog :-)

Enough said …



Solution selling …

July 7, 2015

When sitting around in marketing meetings a constant bone of contention in communications with our customers is “solution selling”. In other words – how do we provide solution information to customers rather than just mumbo-jumbo about speeds and feeds of our products?

It’s a much larger and more complex question than you might first think. Just take a quick look at the broad scope of product placement where you find Adaptec controllers … from the workstation through to hyper-scale datacenters (and everywhere in-between).

Now of course the boss is just going to say “write about them all”, but some of the blog articles are already looking like War and Peace (I forget to stop typing occasionally), and my real question here is “is this really the place you come for answers on designing solutions?” If in fact that is what you do, then let us know. Because if you are looking for detailed analysis of how to design systems for specific applications then I’m up for the typing – but you have to be up for the reading to go along with it!

I should probably just give it a go and see what happens. I’ll stick my head out the window occasionally and see if I can hear large-scale snoring … that will tell me I’m boring you to death with overly-long blogs. But then again, my wife tells me I’m deaf so I probably won’t know anyway.

Send me your thoughts (nice ones only please).



When to use RAID 50 …

July 3, 2015

(this is another in the series of crazy-long blogs … the bloke who approves these things is going to get sick of reading :-))

So most people know about RAID 5 – it’s the general purpose great all-rounder of RAID arrays – gets used for just about anything and everything. However when it comes to capacity there are limitations. First…there are physical limitations, then…there are common-sense limitations.

It’s true that you can put up to 32 drives in a RAID 5. You can also buy some pretty big drives these days, so 32 x 10TB drives in a RAID 5 would yield (theoretically, if in fact the drives give you 10TB of usable space) 310TB – remembering of course that RAID 5 loses one drive’s capacity to parity data.
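The capacity arithmetic is simple enough to put in a one-liner (the TB figures here are nominal drive sizes, not real-world formatted capacity):

```python
def raid5_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID 5 keeps one drive's worth of capacity aside for parity."""
    return (drives - 1) * drive_tb

print(raid5_usable_tb(32, 10))  # 310 TB
```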

Now not many people want 310TB on their home server (though some of my mates come close with movies), but there are more and more businesses out there with massive amounts of archival, static and large-scale data … so it’s not inconceivable that someone will want this monster one day.

Realistically you may want to build a big server but use smaller drives because you can’t afford such large ones; either way, the principles of building this server remain the same no matter what drives you use.

So what are the problems with running a 32-drive array using 10TB drives? Plenty (imho)

Most of the issues come in the form of building and rebuilding the array. For a controller to handle this size disk in this size array, let’s look at some dodgy math.

  • Stripe size per disk (what we call a minor stripe) – 256KB
  • Number of stripes on a single disk – 40+ million (give or take a hundred thousand)
  • Each major stripe (the stripe that runs across the entire set of disks) is made up of 32 x 256KB pieces (8MB)
  • One RAID parity calculation means reading 31 disks, calculating parity and writing it to 1 disk – per stripe
  • Multiply that by 40+ million stripes
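The dodgy math above, redone in Python so you can check my abacus (decimal units assumed, which is close enough for this exercise):

```python
drive_bytes = 10 * 10**12     # nominal 10TB drive
minor_stripe = 256 * 1000     # 256KB per-disk ("minor") stripe

stripes_per_disk = drive_bytes // minor_stripe
major_stripe_bytes = 32 * minor_stripe  # one stripe across all 32 disks

print(stripes_per_disk)     # 39062500 -- roughly 40 million
print(major_stripe_bytes)   # 8192000  -- about 8MB per major stripe
```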

That’s going to wear out the abacus for sure :-)

The problem with all these stripes and all this data is that all 31 drives are involved in any operation. Now that will mean some pretty good streaming speed, but it will also make for killer rebuild times. For example, to rebuild the array (best case, with no-one using it) in 24 hours, the drives would need to read/write at least 115MB/sec. Now SATA and SAS drives might come close to that on reads, but they are nowhere near it on sustained writes, so rebuilds on this monster will take a lot longer than 24 hours.
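The 115MB/sec figure is just one drive’s capacity divided by the time window. A rough check:

```python
drive_bytes = 10 * 10**12      # one 10TB drive to rebuild
rebuild_window_s = 24 * 3600   # target: finish in 24 hours

required_mb_s = drive_bytes / rebuild_window_s / 10**6
print(round(required_mb_s, 1))  # 115.7 MB/sec, sustained
```

And that is the best case – a busy array or slower sustained writes pushes the rebuild well past a day.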

Since it’s a RAID 5, if another drive goes west (not south, I’m already in Australia!) during this rebuild process your data is toast and you have a major problem on your hands.

So what is the alternative? Use RAID 50.

RAID 50 is a grouping of RAID 5 arrays into a single RAID 0. Don’t panic, I’m not telling you to use RAID 0 (which in itself has no redundancy), but let’s look at how it works. In a standard RAID 0 you use a bunch of single drives to make a fast RAID array. The individual drives would be called the “legs” of the array. However there is nothing protecting each drive – if one fails there is no redundancy or copy of data for that drive – and your whole array fails (which is why we don’t use RAID 0).

However in a RAID 50 array, each leg of the top-level RAID 0 is made up of a RAID 5. So if a drive fails, the leg does not collapse (as in the case of a RAID 0), it simply runs a bit slower because it is now “degraded”.

The maths above doesn’t change per drive, but the number of drives in each major stripe of the array drops dramatically (at least halved), so the speeds in all areas go up accordingly.

A lot of people have heard of RAID 10, 50 and 60 (they are similar in nature), but think like humans – 2 legs. However all these combination RAID levels can have multiple legs – 2, 3, 4 – however many you like. And more legs generally is better. But let’s look at our 32-drive configuration.

Instead of 32-drives in a single RAID 5, a RAID 50 could be 2 x RAID 5 of 16 drives each, with a single RAID 0 over the top. The capacity would be one drive less than the 32-drive RAID 5, but the performance and rebuild times will be greatly improved.

Why? In reality 32 drives is beyond the efficiency levels of RAID 5 algorithms, and it’s not as quick as an 8-16 drive RAID 5 (the sweet spot). So just on its own, a RAID 5 of 16 drives will generally be quicker than a RAID 5 of 32 drives. But now you have two RAID 5 arrays combined into a single array.

The benefits come to light in several ways. In RAID rebuilds (when that dreaded drive fails), only 16 of the drives are involved in the rebuild process. The other half of the RAID 50 (the other 16-drive RAID 5 leg) is not impacted or affected by the rebuild process. So the rebuild happens a lot faster, and the performance of the overall array is not impacted anywhere near as badly as that of a rebuilding 32-drive RAID 5.

So what is the downside to the RAID 50 compared to the single large RAID 5? In this case, with 2 legs in the array, you would lose one additional drive’s capacity.

Mathematics (fitting things in boxes) always comes into play with RAID 50/60 … I want to make a RAID 50 of 3 legs over 32 drives – hmmm … the math doesn’t work does it. It hardly ever does. If you have 32 drives then the best three-leg RAID 50 array you could make would be 30 drives (3 legs of 10 drives each). That would give 27 drives capacity, but it would be (a) faster and (b) rebuild much, much faster than anything described above.

So would you do a 4-leg RAID 50 with 32 drives? Yes you could. That would mean 4 drives lost to parity, giving a total of 28 drives’ capacity, but now each RAID 5 (each leg) is down to 8 drives and ripping along at a rate of knots, and the overall system performance is fantastic. Rebuild times are awesome and have very little impact on the server. Downside? Cost is up and capacity is down slightly.

As you can see, there is always a trade-off in RAID 50 – the more legs, the more cost and less capacity, but the better performance. So back to the 32 drives … what could I do? Probably something like a 30-drive 3-leg RAID 50, with 2 hot spares sitting in the last two slots.
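If you want to play with the leg counts yourself, here is a rough calculator for the trade-off (assuming equal-sized legs and nominal drive capacities):

```python
def raid50_usable(total_drives: int, legs: int, drive_tb: float):
    """Split drives evenly into RAID 5 legs; each leg loses one drive to parity."""
    per_leg = total_drives // legs
    drives_used = per_leg * legs                 # leftovers become hot spares
    usable_tb = (per_leg - 1) * legs * drive_tb
    return drives_used, usable_tb

for legs in (2, 3, 4):
    used, usable = raid50_usable(32, legs, 10.0)
    print(f"{legs} legs: {used} drives used, {usable}TB usable")
# 2 legs: 32 drives used, 300.0TB usable
# 3 legs: 30 drives used, 270.0TB usable
# 4 legs: 32 drives used, 280.0TB usable
```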

But what about your OS in this configuration? Where do you put it? Remember that you can have multiple RAID arrays on the same set of disks, so you can build 2 of these RAID 50 arrays on the same disks … one large enough for your OS (which will use very, very little disk space), and the rest for your data.

So should you consider using RAID 50? Absolutely – just have a think about the long-term usefulness of the system you are building and talk to us if you need advice before going down this path.



I always knew I had no “Q” …

June 29, 2015

Talk about a different way to promote a new product. Take an existing product, remove a feature and drop the price. Sounds pretty easy but what you end up with is something pretty spectacular.

Up until now Adaptec’s only 16-port internal RAID card has been the 81605ZQ. The “Z” is for ZMCP (zero maintenance cache protection) – in other words it has the supercap functionality built into the card – with just the supercapacitor to plug in (no daughter card). The “Q” part of the moniker denotes maxCache capability – the 81605ZQ is a caching controller (great for specific applications).

But what did you buy if you wanted a 16 port internal controller but did not need the “Q” function? You might be putting together a pure SSD system, or you might be building a nice large storage server that doesn’t need caching. The only choice was to go back to the 7 series.

So we took the 81605ZQ and removed maxCache. That makes it an 81605Z. Comes standard with 16 internal ports and cache protection … but note that it can’t be upgraded to a “Q” model – you can’t add that via firmware etc.

As an aside … you should note that you can swap out the drives from an 81605Z and an 81605ZQ without any reconfiguration – the drivers etc are all the same and both cards recognise the RAID arrays from the other card.

So there you have it … a new card. It does less than its “Q” cousin, but then again, it costs less :-)
Now you know.



Upgrading to maxCache?

June 17, 2015

Some thoughts from the Storage Advisor

I get a lot of calls from people who are interested in maxCache … how does it work, what does it do, and most importantly … will it work for me? So I thought I’d put some ramblings down on what has worked for customers and where I think maxCache could/should be used.

Firstly just a quick summary of maxCache functionality in plain English. You need an Adaptec card with “Q” on the end of it for this to work, and no, you can’t upgrade a card without “Q” to a “Q” card – but you can swap out the drives from an existing controller to a “Q” controller, then plug in SSDs and enable maxCache (bet you didn’t know that one). maxCache is the process of taking SSDs and treating them as read and write cache for a RAID array – that’s a basic statement but it’s pretty close to what happens – add a very large amount of cache to a controller.

So let’s take an existing system that’s running 8 x enterprise SATA in a RAID 5 – pretty common configuration. That might be connected to a 6805 controller in an 8-bay server. You want to make this thing faster for the data that has ended up on this server without reconfiguring the server or rebuilding the software installation. This server started life as just a plain file server, but now has a small database, accounting software, and a terminal server on it … a far cry from its original role. You want to increase the performance of the random data. maxCache does not impact or affect the performance of streaming data – it only works on small, random, frequent blocks of data.

Upgrade the drivers in your OS (always a good starting point) and make sure the new drivers support the 81605ZQ. In most OSes this is standard – we have, for example, one Windows driver that supports all our cards. Then disconnect the 6 Series from the drives, plug in and wire up the 81605ZQ and reboot. All should be well. You will see some performance difference as the 8 Series is dramatically quicker than the 6 Series controller, but the spinning drives will be the limiting factor in this equation.

Once you’ve seen that all is working well, and you’ve updated the maxView management software to the latest version etc, then shut the system down, grab a couple of SSDs (let’s for argument’s sake say 2 x 480GB SanDisk Extreme Pro) and fit them in the server somewhere. Even if there are no hot-swap bays available there is always somewhere to stick an SSD (figuratively speaking) – they don’t vibrate and don’t get hot so they can be fitted just about anywhere.

Create a RAID 1 out of the 2 x SSDs. Then add that RAID 1 to the maxCache pool (none of which takes very long). When finished enable maxCache read and write cache on your RAID 5 array. Sit back and watch. Don’t get too excited as nothing much seems to happen immediately. In fact maxCache takes a while to get going (how long is a while? … how long is a piece of string?). The way it works is that once enabled, it will watch the blocks of data that are transferring back and forth from the storage to the users and vice versa.

So just like a tennis umpire getting a sore neck in the middle of a court, the controller watches everything that goes past. It learns as it goes what is small, random and frequent in nature, keeping track of how often blocks of data are read from the array etc. As it sees suitable candidate data blocks, it puts them in a list. Once the frequency of a block hits a threshold, the block is copied in the background from the HDD array to the SSDs. This is important – note that it is a “copy” process, not a move.

Once that has happened, a copy of the data block lives on the SSDs as well as on the HDD array. Adaptec controllers use a process of “shortest path to data”. When a request comes for a block of data from the user/OS, we look first in the cache on the controller. If it’s there then great, it’s fed straight from the DDR on the controller (quickest possible method). If it’s not there then we look up a table running in the controller memory to see if the data block is living on the SSDs. If so, then we get it from there. Finally, if it can’t be found anywhere we’ll get it from the HDD array, and will take note of the fact that we did (so adding this data block to the learning process going on all the time).
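The lookup order described above – DDR first, SSD pool second, spinning media last, with a copy (not a move) once a block proves itself hot – can be sketched in a few lines of Python. This is purely illustrative; the threshold and data structures are made up, not Adaptec firmware:

```python
PROMOTE_THRESHOLD = 3  # hypothetical: HDD reads before a block is copied to SSD

def read_block(block_id, ddr_cache, ssd_map, hdd_array, hdd_read_counts):
    """'Shortest path to data': check the fastest tiers first."""
    if block_id in ddr_cache:          # 1. controller DDR cache (fastest)
        return ddr_cache[block_id]
    if block_id in ssd_map:            # 2. maxCache SSD pool
        return ssd_map[block_id]
    # 3. fall back to the HDD array, noting the miss for the learning process
    hdd_read_counts[block_id] = hdd_read_counts.get(block_id, 0) + 1
    if hdd_read_counts[block_id] >= PROMOTE_THRESHOLD:
        ssd_map[block_id] = hdd_array[block_id]   # copy, not move
    return hdd_array[block_id]
```

After enough reads of the same block, subsequent requests are served from the SSD copy instead of the spinning media – which is exactly why the speed-up builds over time rather than appearing instantly.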

Why does this help? Pretty obviously the read speed of the SSD is dramatically faster than the spinning drives in the HDD array, especially when it comes to a small block of data. Now as life goes on and users read and write to the server we are learning all the time, and constantly adding new blocks to the SSD pool. Therefore performance increases over a period of time rather than being a monumental jump immediately.

The SSD write cache side of things comes into play when blocks that live in the SSD pool (remembering these are copies of data from the HDD) are updated. If the block is already in the SSD pool then it’s updated there, and copied across to the HDD as a background process a little later (when the HDD are not so busy).

End result … your server read and write performance increases over a period of time.


Pitfalls and problems for young players …

All this sounds very easy, and in fact it is, but there are some issues to take note of that require customer education as much as technical ability.

Speed before and after

If you have no way of measuring how fast your data is travelling prior to putting maxCache in the system, then you won’t have any way of measuring later, so you can only go by “feel” … what the users experience when accessing data. While this is a good measure, it’s pretty hard to quantify.

Let me share some experiences I had from the early days of showing this to customers. I added maxCache to an existing system for a customer on a trial basis (changing controller to Q, adding SSD etc). Left the customer running for a week feeling quite confident that it would be a good result when I went back. Upon return, the customer indicated that he didn’t think it was much of a difference and wasn’t worth the effort or cost. So I put the system back the way it was before I started (original controller and no SSD) and rebooted. The customer started yelling at me very loudly that I’d stuffed his system … “it was never this slow before!” Truth of the matter was that it was exactly the same as before, so the speed was what he had been living with. Lesson: customers are far less likely to say anything about a computer getting faster, but they yell like stuck pigs as soon as something appears to be “slower” :-)

The second example was in a terminal server environment. This time we could measure the performance of the server by measuring the logon time of the terminal server screen etc. It was pretty bad (about 1 minute). So we went through the process again and added maxCache. The boss of the organization (who happened to be a good reseller of mine) immediately logged on to TS – and grandly indicated that there was no difference and I didn’t know what I was doing. So we went to the pub. Spent a good night out on the town and went back to the customer in the morning (a little the worse for wear). The boss got to work around 10.00am (as bosses do) and was pretty much the last person to log on to TS that morning. Wow, 6 seconds to log on. We then had the Spanish Inquisition (no-one expects the Spanish Inquisition!) as to what we had done that night. The boss was thinking we’d spent all night working on the server instead of working on the right elbow.

In reality, the server had learnt the data blocks involved in the TS logon (which are pretty much the same for all users), so by the time he logged in it was mostly reading from the SSDs, hence a great performance improvement. Lesson: educate the customer as to how it works and what to expect before embarking on the grand installation.

The third and last experience was with performance testing. I’ve already blogged about this, but it bears mentioning here. A customer running openE set up his machine and did a lot of testing (unfortunately in a city far away from me, so I could not do a hands-on demo etc). Lots of testing with Iometer did not show a great deal of performance improvement, but when he finally bit the bullet and put the server into action, the customers were ecstatic. A great performance improvement on virtual desktop software. Lesson: spend a lot more time talking to the customer about how the product works so they understand it’s random data that’s at play here, and that performance testing with streaming data won’t show any improvement whatsoever.



There are a lot of servers out there that would benefit from maxCache to speed up the random data that has found its way onto the server whether intentionally or not. It needs to be kept in mind that servers don’t need rebuilding to add maxCache, and it can be added (and removed) without any great intrusion into a client’s business.

The trick is to talk to the customer, talk to the users and find out what the problems in life are before just jumping in and telling them that this will fix their problems. Then again, you should probably do that anyway before touching anything on a server … but that’s one of life’s lessons that people have to work out for themselves :-)



The longest blog article ever? RAID storage configuration considerations…

June 16, 2015

RAID storage configuration considerations (for the Channel System Builder)

SAS/SATA spinning media, SSD and RAID types – helping you make decisions
Some thoughts from the Storage Advisor

Note: I started writing this for other purposes – some sort of documentation update. But when I finished I realised it was nothing like the doc the user requested … and then “write blog” popped up on the screen (Outlook notification). So I took the easy way out and used my ramblings for this week’s update.

When designing and building a server to meet customer needs, there are many choices you need to consider: CPU, memory, network and (probably most importantly) storage.

We will take it as a given that we are discussing RAID here. RAID is an essential part of the majority of servers because it allows your system to survive a drive failure (HDD or SSD) without losing data, along with the added benefits of increased capacity and performance. While many components in your system will happily run for the 3-5 year life of your server, disk drives tend not to be among them.

So you need to take a long-term approach to the problem of storage – what do you need now, what will you need in the future, and how will you survive mechanical and electronic failures during the life of the server?


What sort of drives do I need to meet my performance requirements?

Rather than looking at capacity first, it’s always a good idea to look at performance. While the number of devices has an impact on the overall performance of the system, you will not build a successful server if you start with the wrong disk type.

There are three basic types of disks on the market today:

  • SATA spinning media
  • SAS spinning media
  • SSD (generally SATA but some SAS)

SATA spinning drives are big and cheap. They come in many different flavours, but you should really consider using only RAID-specific drives in your server. Desktop drives do not work very well with RAID cards as they do not implement some of the specific features of enterprise-level spinning media that help them co-operate with a RAID card in providing a stable storage platform.

The size of the drive needs to be taken into consideration. While drives are getting larger, they are not getting any faster, so a 500GB drive and a 6TB drive from the same family will have pretty much the same performance.

Note that this is not the case with SSDs. SSDs tend to be faster the larger they get, so check your specifications carefully to ensure you know the performance characteristics of the specific size of SSD you buy – not just what is on the promotional material.

The key to performance with spinning media is the number of spindles involved in the IO process. So while it’s possible to build a 6TB array using 2 drives in a mirror configuration, the performance will be low because only 2 spindles are in operation at any time. If the same array was built using 7 x 1TB drives, it would be much quicker in both streaming and random data access due to the multiple spindles involved.
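As a back-of-the-envelope illustration of spindle scaling (the per-drive figures below are assumptions for a typical 7200RPM SATA drive, not vendor specs):

```python
# Rough best-case scaling with spindle count. Per-drive figures are
# illustrative assumptions for a 7200RPM SATA drive, not vendor specs.
PER_DRIVE_MBS = 150    # streaming throughput of one drive, MB/s
PER_DRIVE_IOPS = 150   # random IOPS of one drive

def rough_array_rates(data_spindles):
    """Best-case aggregate (MB/s, IOPS), ignoring controller and parity overhead."""
    return data_spindles * PER_DRIVE_MBS, data_spindles * PER_DRIVE_IOPS

print(rough_array_rates(2))   # 2-drive mirror reading from both spindles: (300, 300)
print(rough_array_rates(6))   # 7-drive RAID 5, 6 data spindles: (900, 900)
```

Real arrays won’t hit these best-case numbers, but the trend is the point: more spindles, more performance.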

SAS spinning media generally rotate at higher revolutions than SATA drives (often 10,000 RPM or higher vs 5400/7200 for SATA), and the SAS interface is slightly quicker than the SATA interface, so they outperform their SATA equivalents in certain areas. This is mostly in the form of random data access: SAS drives are faster than SATA drives. When it comes to streaming data there is little to no difference between SATA and SAS spinning media.

However, all performance calculations go out the window when SSDs are introduced into the equation. SSDs are dramatically faster than spinning media of any kind, especially when it comes to random data. Keeping in mind that random data storage systems tend to be smaller in capacity than streaming data environments, the SSD is rapidly overtaking SAS spinning media as the media of choice for random data environments. In fact, SSDs are so much faster than SAS or SATA spinning media for random reads and writes that they are the number one choice for this type of data.


So what about capacity calculations?

Capacity both confuses and complicates the performance question. With SATA spinning drives reaching upwards of 8TB it’s pretty easy to look at the capacity requirements of a customer and think you can just use a small number of very large spinning drives to meet the capacity requirements of the customer.

And that is true. You can build very big servers with not many disks, but think back to the previous section on performance. With spinning media, it’s all about the number of spindles in the RAID array. Generally speaking, the more there are, the faster it will be. That applies to both SATA and SAS spinning media. The same cannot be said for SSD drives.

So if you need to build an 8TB server you are faced with many options:

  • 2 x 8TB drives in a RAID 1
  • 4 x 4TB drives in a RAID 10
  • 3 x 4TB drives in a RAID 5
  • 5 x 2TB drives in a RAID 5
  • 9 x 1TB drives in a RAID 5

Etc, etc.
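A quick sketch of the arithmetic behind those options (a simplified model that ignores formatting overhead and decimal/binary conversion):

```python
def usable_tb(raid_level, drive_tb, n_drives):
    """Approximate usable capacity for common RAID levels (ignores overheads)."""
    if raid_level == 1:
        return drive_tb                      # a mirror gives one drive's capacity
    if raid_level == 10:
        return drive_tb * n_drives / 2       # half the drives hold mirror copies
    if raid_level == 5:
        return drive_tb * (n_drives - 1)     # one drive's worth of parity
    if raid_level == 6:
        return drive_tb * (n_drives - 2)     # two drives' worth of parity
    raise ValueError("unsupported RAID level")

# All of the options above land on roughly 8TB:
for level, size, count in [(1, 8, 2), (10, 4, 4), (5, 4, 3), (5, 2, 5), (5, 1, 9)]:
    print(f"RAID {level}: {count} x {size}TB =", usable_tb(level, size, count), "TB")
```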

So what is best with spinning drives? 2 x 8TB or 9 x 1TB? A good general answer is that the middle ground will give you the best combination of performance, cost and capacity. Note however that you need to think about the data type being used on this server, and the operating system requirements. If for example you are building a physical server running multiple virtual machines, all of which are some sort of database-intensive server, then you are wasting your time considering spinning drives at all, and should be moving straight to SSD.

If however this is a video surveillance server, where the data heavily leans towards streaming media, then 3 x 4TB SATA drives in a RAID 5 will be adequate for this machine.


What RAID controller type do I need?

This one is easier to determine. The RAID controller needs to have enough capacity to handle the IOPS capability of your drives, with sufficient ports to connect the number of drives you end up choosing. Since there are so many different ways of mounting drives in servers today, you will need to take into account whether the drives are directly attached to the server or whether they are sitting in a hot-swap backplane with specific cabling requirements.


What RAID level should I use?

There are two basic families of RAID:

  • Non-Parity RAID
  • Parity RAID

Non-Parity RAID consists of RAID 1, and RAID 10. Parity RAID consists of RAID 5, 6, 50 and 60. Generally speaking, you should put random data on non-parity RAID, and general/streaming data on parity RAID. Of course things aren’t that simple as many servers have a combination of both data types running through their storage at any given time. In this case you should lean towards non-parity RAID for performance considerations.

Note of course (there’s always a gotcha) that non-parity RAID tends to be more expensive because it uses more disks to achieve any given capacity than RAID 5 for example.


Putting this all together …

By now you can see that designing the storage for a server is a combination of:

  • Capacity requirement
  • Performance requirement
  • Disk type
  • RAID controller type
  • RAID level
  • Cost

Let’s look at some examples:

  1. General use fileserver for small to medium business
    General Word, Excel and other office file types (including CAD files)

Capacity: 10TB
Performance requirements: medium
Disk type: spinning will be more than adequate
RAID controller type: Series 6,7,8 with sufficient ports
RAID level: RAID 5 for best value for money
Options: should consider having a hot spare in the system
Should also consider cache protection to protect cached writes in the event of a power failure or system crash

Remember that you don’t get the usable capacity you might expect from a drive. Manufacturers quote capacity in decimal units, while operating systems report it in binary units, so a 4TB drive shows up as roughly 3.6TB of usable space (I know, seems like a rip-off!).
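The shortfall is mostly a units game: drive makers quote decimal bytes, while operating systems usually report in binary units. A quick sketch:

```python
# A "4TB" drive holds 4 x 10^12 bytes, but the OS reports capacity in
# binary units (2^40 bytes per "TB"), so the number on screen is smaller.
def advertised_tb_as_reported(tb):
    return tb * 10**12 / 2**40

print(round(advertised_tb_as_reported(4), 2))   # 3.64
print(round(advertised_tb_as_reported(3), 2))   # 2.73
```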

In this scenario we are going to recommend enterprise SATA spinning media. 4 x 3TB drives in a RAID 5 will give approximately 8TB of usable capacity, with good performance from the 4 spindles. Since many server chassis support 6 or more drives, the 5th drive can become a hot spare, which will allow the RAID to rebuild immediately in the case of a drive failure.

With spinning drives a Series 6 controller will be sufficient for performance, so the 6805 would be the best choice of controller. We would recommend an AFM-600 be attached to the controller to protect the cache in the event of a power failure.

  2. High-performance small-capacity database server
    Windows 2012 stand-alone server running an industry-specific database with a large number of users

Capacity: 2-3TB
Performance requirements: high
Disk type: pure SSD to handle the large number of small reads and writes
RAID controller type: Series 7 (71605E)
RAID level: RAID 10 for best performance
Options: should consider having a hot spare in the system

In this scenario we are definitely going to use a pure SSD configuration. A database places a heavy load on the server with many small reads and writes, but the overall throughput of the server’s data is not great.

RAID 10 is the fastest RAID level. When creating a RAID array from pure SSDs, we recommend turning off the read and write cache on the controller. Therefore you (a) don’t need much cache on the controller and (b) don’t need cache protection. In this case we would recommend 6 x 1TB drives (e.g. 960GB SanDisk Extreme Pro) – which would give approximately 2.7TB of usable space in an extremely fast server.

When using SSDs you need to use a Series 7 or Series 8 controller. These controllers have a fast enough processor to keep up with the performance characteristics of the SSDs (the Series 6 is not fast enough).

Again, a hot spare would be advisable in such a heavily used server. This would make a total of 7 drives in a compact 2U server.

  3. Mixed-mode server with high-performance database and large data file storage requirements
    Multiple user types within the organisation – some using a high-speed database and some general documentation. Organisation has requirement to store large volume of image files

Capacity: 20+TB
Performance requirements: high for database, medium for rest of data
Disk type: mix of pure SSD to handle the database requirements and enterprise SATA for general image files
RAID controller type: Series 8 (81605Z)
RAID level: SSDs in RAID 10 for operating system and database (2 separate RAID 10 arrays on the same disks). Enterprise SATA drives in RAID 6, because the large number of static image files will not be backed up
Options: definitely have a hot spare in the system

In this scenario (typically a printing company etc), the 4 x SSDs will handle the OS and database requirements. Using 4 x 512GB SSDs, we would make a RAID 10 of 200GB for Windows Server, and a RAID 10 of approximately 800GB for the database.

The enterprise SATA spinning media would be 8 x 4TB drives, with 7 in a RAID 6 (5 drives capacity) and 1 hot spare. In this scenario it would be advisable to implement a feature called “copyback hot spare” on the RAID card so the hot spare can protect both the SSD RAID array and spinning media RAID array.

This will give close to 20TB usable capacity in the data volumes.



Some of the key features of RAID cards that need to be taken into consideration, which will allow for the best possible configuration, include:

  • Multiple arrays on the same disks
    It is possible to build up to 4 different RAID arrays (of differing or the same RAID level) on the same set of disks. This means you don’t have to have (for example) 2 disks in a mirror for an operating system and another 2 disks in a mirror for a database, when you can meet both requirements on the same 2 disks
  • RAID 10 v RAID 5 v RAID 6
    RAID 10 is for performance. RAID 5 is the best value for money RAID and is used in most general environments. Many people shy away from RAID 6 because they don’t understand it, but in a situation such as in option 3 above, when a customer has a large amount of data that they are keeping as a near-line backup, or copies of archived data for easy reference … that data won’t be backed up. So you should use RAID 6 to ensure protection of that data. Remember that the read speed of RAID 6 is similar to RAID 5, with the write speed being only very slightly slower.
  • Copyback Hot Spare
    When considering hot spares, especially when you have multiple drive types within the server, then copyback hot spare makes a lot of sense. In option 3 above, the server has 4TB SATA spinning drives and 512GB SSDs. You don’t want to have 2 hot spares in the system as that wastes drive bays, so having 1 hot spare (4TB spinning media) will cover both arrays. In the event that an SSD fails, the 4TB SATA spinning drive will kick in and replace the SSD, meaning the RAID 1 will be made up of an SSD and an HDD. This keeps your data safe but is not a long-term solution. With copyback hot spare enabled, when the SSD is replaced, the data sitting on the spare HDD will be copied to the new SSD (re-establishing the RAID), and the HDD will be turned back into a hot spare.
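The copyback sequence can be sketched as a simple walk-through (the drive names here are hypothetical, purely for illustration):

```python
# Hypothetical walk-through of the copyback hot spare sequence described above.
array = {"members": ["SSD1", "SSD2"], "spare": "HDD1"}   # RAID 1 plus hot spare

def drive_fails(drive):
    """Spare takes the failed drive's place; the mirror is now mixed SSD/HDD."""
    array["members"][array["members"].index(drive)] = array["spare"]
    array["spare"] = None

def replacement_arrives(new_ssd):
    """With copyback enabled, data moves back to the new SSD and the HDD
    returns to being a hot spare."""
    hdd = next(m for m in array["members"] if m.startswith("HDD"))
    array["members"][array["members"].index(hdd)] = new_ssd
    array["spare"] = hdd

drive_fails("SSD1")
print(array)   # {'members': ['HDD1', 'SSD2'], 'spare': None}
replacement_arrives("SSD3")
print(array)   # {'members': ['SSD3', 'SSD2'], 'spare': 'HDD1'}
```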



As you can see, there are many considerations to take into account when designing server storage, with all factors listed above needing to be taken into consideration to ensure the right mix of performance and capacity at the best possible price.

Using a combination of the right drive type, RAID level, controller model and quantity of drives will give a system builder an advantage over the brand-name “one-model-fits-all” design mentality of competitors.

If you have questions you’d like answered then reply to this post and I’ll see what I can do to help you design your server to suit your, or your customer’s, needs.



“Comatose” … no, not me, the server!

June 9, 2015

(though I think I’ve been called that a few times in my youth).

A colleague sent me a link to a report on the web recently, and while I found it mildly interesting, I actually think the writers may have missed some of the point (just a little). While in general everything they say is correct, there is a factor that they haven’t taken into account … “stuff”.

So what is “stuff”?

Well, I have a lot of “stuff” on my laptop. There is “stuff” on CDs lying around the place, “stuff” on my NAS and “stuff” in the Mac on the other end of the desk. To me, “stuff” is old data. I hardly, if ever, use it, but I sure as heck want it kept close and immediately accessible. In my business, my old data is my historical library and a great backup to my slowly fading memory.

So what is living out in datacenter land? Lots of useful information, and lots and lots of “stuff”. It has become evident when dealing with users over the past decade that people are reluctant, if not downright unwilling, to remove, delete, consolidate or even manage old data – they just want it kept forever “just in case”.

So while there are strategies out there to minimize its footprint, there is no strategy for changing people’s mindsets on how long they keep data. So datacenterland is, and always will be, awash with “stuff” … which means more and more “comatose” storage. I don’t disagree with the web link article on server compute – that needs to be managed and centralized into newer, faster and more power-efficient servers. It’s just the data (storage) side of the equation that I have issues with.

If we take as a given that no-one is going to delete anything, then what do datacenters do about it? While there are larger and larger storage devices coming out all the time (e.g. high-density storage boxes utilizing 10TB disks), the problem these bring to the datacenter is that while they can handle the capacity of probably 10 old storage boxes, the datacenter is faced with moving all of the data off the old storage onto the new storage to free up the old storage. By the time a large datacenter gets through that little process, 10TB drives will be replaced by 20TB drives and the process will start all over again – meaning datacenters will have data in motion almost continuously, with tremendous management, network and cost overheads to go along with it … exactly the sort of thing that datacenter operators don’t want to hear about.

I’m guessing that datacenter operators are looking at exactly this issue and are crunching numbers. Is it cheaper for us to keep upgrading our storage to handle all this “stuff”, with all of the management complications etc, or do we just buy more storage and keep filling it up without having to touch the old data? Or do we do both and just try to keep costs down everywhere while doing so?

It would be very, very interesting to know how the spreadsheet crunches that little equation.

“Stuff” for thought.




It seems like the world has stopped turning …

June 3, 2015

No, this is not biblical, nor is it prophetic. In fact I’m referring to disk drives :-)

In a meeting the other day we were discussing the unofficial conversations we have with disk vendors on a regular basis. It seems the spinning world (and I’m talking channel here) is slowing down. The SSD vendors, however, are romping along at 20%+ growth year on year.

So that is stating the obvious – SSD uptake is growing at the expense of HDD. Of course HDD is still king in the cold data storage world and those guys are making a killing in the datacenter world – all our “cloud” information has to live somewhere (like all your photos of yesterday’s lunch on FB etc).

But in the channel, the SSD is taking over for performance systems. The 10-15K SAS drives are giving way to similar sized SSDs – some at the enterprise level but a lot more at the high-end gaming and channel level – drives that at first glance don’t appear to be made for RAID, but in fact work just fine.

When talking to users, performance is a given – they all understand that the SSD is faster than the spinning drive, but many are still worried about drives wearing out – will they fail? I myself was wondering that so I looked at some specifications of drives and specifically their “DWPD” values (drive writes per day). This is pretty much the standard that SSD vendors use to indicate how long they think a drive will last before the overprovisioning is used up and the drive starts to degrade.

You will see values between 1/3 of a drive write per day and 25 drive writes per day – and if you were using these drives in an intensive write datacenter environment, I know which drives I’d be opting for. But let’s look at the 1/3 drive write per day and do a little maths. Let’s take 4 x 1TB drives (close enough) and make a RAID 10. Roughly speaking, that will give you 2TB capacity. Now if I can safely write 1/3 drive’s worth of data per day for the life of the drive, then that would be (approximately) 600GB of data written each day – remembering that the data is split across the two sets of mirrors in the RAID 10, and each set of mirrors can supposedly handle 300GB per day (1/3 their capacity).

Then let’s look at the sort of systems people are putting SSDs into. Are they using them for video servers? Not likely (too expensive to get the capacity). In fact they are using them for database and high-performance servers that are generally handling lots of small random reads and writes.

A bit more maths works out that an average 40-hour business week is approximately 25% of the overall time during the week, so you’d need to cram those 600GB of writes into that timeframe (40 hours) to start stressing the drive. That’s something like 15GB of writes per hour … remembering that this is based on a drive with a DWPD of 1/3. So a drive with higher values can handle more, and I’m yet to think of a business running SSDs in a database environment that is even within cooee of these numbers.
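Putting the maths above into a quick sketch (the 1/3 DWPD rating and the 40-hour week are the assumptions here, and like the paragraphs above, it conservatively squeezes a single day’s write budget into the whole working week):

```python
# 4 x 1TB SSDs in RAID 10, rated at 1/3 drive-write-per-day.
DWPD = 1 / 3
DRIVE_GB = 1000
MIRROR_PAIRS = 2            # 4 drives form 2 striped mirror pairs

# Host writes stripe across the pairs, so the daily budget is the sum of
# what each pair can absorb (each write lands on both drives of one pair).
daily_budget_gb = DWPD * DRIVE_GB * MIRROR_PAIRS   # ~667GB/day (the text rounds to 600)

# Conservatively squeeze one day's budget into a 40-hour working week:
per_hour_gb = daily_budget_gb / 40
print(round(daily_budget_gb), round(per_hour_gb, 1))   # 667 16.7
```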

So when you look at the values on the drives, and think … wow, 1/3 DWPD is not very much … you need to balance that with actually thinking about how much data it is that your business will actually be writing to the disks on any given day.

I found it pretty interesting math, and it opened my eyes to the reality of DWPD values. Remember, of course, that in a datacenter you should use datacenter drives, mainly because many thousands of users can be accessing those drives at any given point in time, and yes, you can get some pretty amazing amounts of data written to the drives each day. But in the channel, in the small business, and even in the medium to large-sized business, the amount of data being written each day is not the scary number you may have thought it was without some detailed analysis.

It’s food for thought.

Oh, and by the way, if you are using SSDs, then you should be using Series 8 RAID controllers. I know they are 12Gb/s controllers and your SSDs are only 6Gb/s devices, but it’s not the speed of the pipe that matters, it’s the number of transactions per second the controller can handle. You don’t want to bottleneck your investment in SSDs at the controller level; you want a controller that will easily handle all the small reads and writes the SSDs are capable of. Now, whether your software or customers are capable of throwing or dragging that much data from the drives is a moot point, but putting SSDs on a slow controller is not the smartest thing to do.



The problem with performance testing …

May 26, 2015

Had a really good experience with a customer recently, but it highlighted the problems with performance testing, especially using iometer. Now, we use iometer a lot, and it’s a great tool to drill down into a specific set of performance characteristics to show a specific response from a storage system.

However … the problem with such a situation is getting the test parameters right, so that what you are testing actually matches your data.

So this customer was looking at maxCache – our SSD caching functionality that uses SSDs attached to the 81605ZQ controller to add read and write caching to an array.

Testing with iometer didn’t show much of an improvement (at least according to the customer). A discussion regarding the test parameters and how long to run a test for (1 minute won’t cut the mustard) saw a big improvement over their original testing (and yes, these guys know what they are doing with their systems, so I’m not having a go at any individual system builder here).

So after much testing, it was decided to put the machine into test with real-world customers in a virtual desktop environment (I believe it was Open-E running a whole stack of virtual desktops). Guess what – the customers (end users) were as happy as pigs in …

Turns out the real world data is perfectly suited to caching (as suspected by the system builder), but that iometer was not able to accurately reflect the data characteristics of the real-world server. End result: everyone (system builder, datacenter operator, end users) – all happy and amazed at the performance of the system.

So where is the moral in this story? Simply that it’s difficult to play with test software and come up with something that will closely match the end result of a server used in the real world. Is there an answer to this? Probably not, but I’m suggesting that everyone take performance testing software, and the results they get from it, with a grain of salt, and look at testing in the real world, or at least a close simulation.

The results can be very surprising.



A technical issue regarding RAID build/rebuild

May 13, 2015

Been getting a few questions regarding building RAID arrays recently, and thought it warranted putting something down on paper.

Now I’m talking about RAID 5/6 and other redundant arrays (not RAID 0 – that’s not for “real” data imho). The questions arise about whether it is possible to restart a server during a RAID build or rebuild, and what happens when a drive fails during that process. So let’s take a look at exactly what our controllers actually do in these situations.

RAID Build

If you are building a redundant array using either the clear or build/verify method, then yes, you can power down the server (or the power can go out by any other means) and it won’t hurt your process. We continue building from where we left off, so if the build process gets to 50% and you need to reboot your server, then no worries, it will just continue to build from where it left off – it does not go back to the start again.

If a drive fails during the build process of say a RAID 5, then no worries, the build will continue. When it’s finished, the array will be in degraded mode, and you’ll have to replace the drive, but that’s the normal process. Again, even if a drive fails, you can power down and restart the server during the build process and it will resume building from the point it left off, still finishing in a degraded array that needs fixing.

And if you think drives don’t fail during RAID builds, then think again … I’ve had it happen more than once.

RAID Rebuild

What happens when a drive fails during a rebuild is a bit dependent on the RAID type. Let’s take the example of a RAID 5. A drive fails, so you replace it and the controller starts a rebuild. During that process, another drive fails. You are toast. There is not enough data left for us to calculate the missing data from the first drive failure because now you are missing 2 drives in a RAID 5 and that’s fatal. You need to fix your drives, build a new array and restore from backup.

In a RAID 6 environment it’s slightly different. RAID 6 can survive 2 drive failures at the same time, so if a drive fails, you replace it and start a rebuild, and then if another drive fails, no worries. The controller will continue to rebuild the array, though the array will still be degraded when the rebuild finishes because it’s still one drive short of a picnic. However the data will be safe during this process, and you’ll just have to replace the second failed drive and let the array rebuild to completion.
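Those rules of thumb can be captured in a few lines (a simplified model: RAID 10 is counted as tolerating one failure, since a second failure in the same mirror pair is fatal):

```python
# How many concurrent drive failures each RAID level can absorb without data loss.
TOLERANCE = {0: 0, 1: 1, 5: 1, 6: 2, 10: 1}   # RAID 10: worst case, same mirror pair

def rebuild_outcome(raid_level, extra_failures_during_rebuild):
    """One drive has already failed (and been replaced) when the rebuild starts."""
    if 1 + extra_failures_during_rebuild > TOLERANCE[raid_level]:
        return "array lost - build a new array and restore from backup"
    return "rebuild continues - replace the failed drive when it finishes"

print(rebuild_outcome(5, 1))   # a second failure mid-rebuild on RAID 5 is fatal
print(rebuild_outcome(6, 1))   # RAID 6 shrugs it off
```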

Of course, like any of the above, you can power down and restart the server at any time during any of these processes and things will just continue on from where they left off.

Hope that answers a few questions.



Learning about storage (the hard or easy way) …

May 13, 2015

Google and YouTube are wonderful places to get information, but as always there is a question mark over the authenticity, quality and downright accuracy of the information provided by all and sundry in their posts and blogs. Now, while I’m not casting aspersions on those who provide all this wonderful information, wouldn’t it be nice to get something directly from the horse’s mouth? (Australian-speak for “vendor”.)

Well, you can.

On our website home page, if you look closely enough, you’ll find the following:

  1. YouTube link – some older material about our products from product marketing, and an ongoing effort by our Alabama Slammer (you’ll get it when you listen to the video) on the technical aspects of how to do stuff with our cards. Liz is by far the best RAID Support Tech in the business, so she’s well worth listening to. My only problem with YouTube is getting distracted: all that lovely interesting stuff that appears down the right side always looks more interesting than RAID :-)
  2. Facebook – hmmm, I thought this was only for putting pictures of your last meal … however I liked it (pardon the pun) because I found one of my videos on there (you never know where this stuff will end up).
  3. Adaptec University. Last, but certainly not least, this is a major source of all sorts of information on RAID and storage in general, and our products and how to use them in particular. I should know … I spend quite a bit of time writing this stuff (then having it cleaned up by a lovely lady whose English is a whole lot betterer than mine :-)). Yes, you have to register, but no, we don’t ask for your first-born as a down-payment (in fact it’s free). Once you are in, there is a wealth of information to peruse at your leisure. Look at the catalog to see what is available, then go to “my learning” to see what you have completed, not read, or not even started yet. You can come back to this as often (or as little) as you like.

So should you stop using Google? Heck no, there is tons of valuable stuff out there (I use it all the time), but you should also consider getting the right word from the right people – Adaptec by PMC.



The problem with Indonesia …

May 10, 2015

is that … there is no problem with Indonesia!

Recently I spent a week there with our new distributors, where we presented at a Datacenter/Cloud Forum and talked to customers regarding the suitability of our products for their marketplace. Indonesia is a booming economy and the IT sector is growing at a good steady rate, so I can see us doing good business there over the next few years – which means I need to be there on a regular basis.

Since this was my first trip to Jakarta, I thought I’d analyse the problems I found:

1. The people are fantastically friendly and helpful … so no problem there

2. The food is great … so no problem there

3. The customers are smart, up to date and right on top of their game … so no problem there

4. The weather … damn it was humid … so if that’s the only problem I came across, I guess I can live with it.

The best part about Indonesia was that I managed to get two weeks holiday after the trip, which was spent chasing my boy around at our BMX National Championships (2nd, Elite Men so a pretty good result), then 4 days on an Island off the coast of Queensland doing some 4-wheel-driving in the sand and relaxing on the beach (with the trip book-ended by some lengthy road trips through outback NSW for good measure).

Hopefully this explains the lack of blog postings over the last month … but now we’re back in business so after I catch up on 4000 emails in my inbox, we’ll be back to regular posts.



Adaptec and Toshiba …

April 17, 2015

The PMC Adaptec lads in Germany, and their counterparts at Toshiba, have put together a demo to take to shows and let people see what we are doing:

  • Intel Solution Summit in Abu Dhabi (28/29 April)
    • Showing Demo Server with SSD max performance life benchmarks
  • Technical Seminar for large Nordic OEM (22 April)
    • Demonstrating RAID setups, Volume setups and performance tradeoffs
  • IBC Show in Amsterdam (10~14 September)

The good people at Wortmann kindly lent (donated, never to be returned?) some equipment in the form of snazzy little server and external drive bays, and Toshiba provided some pretty fantastic SSDs to round out the system.

So if you happen to be in Abu Dhabi or Amsterdam, then drop in and see our demo. You can learn a lot about setting up RAID using SSD. Seeing is believing.


Now … how do I convince my boss that I need to be in attendance? Hmmm …



So we actually “do” know what we are talking about?

April 1, 2015

As a technical advisor, and now a salesman, marketing expert, logistics expert and general dogsbody I spend a lot of time talking to my customers. In fact if someone rings up asking the question “which card do I need?” they probably end up regretting it because it’s never a short simple answer.

The same goes with “which disk do you recommend?” … that one is a can of worms that common-sense says I should stay away from, but I’ve never been accused of having too much of that commodity.

So … my push has been to move people to Series 8 (6 or 12Gb/s system compatible) controllers, and towards SSDs when they suit the customer’s data needs. With that in mind I’ve talked to a lot of my larger integrators who have done considerable testing on drives that are readily available in the Australian marketplace, and base a lot of my recommendations on their “real-world” experiences.

Now in Australia the question does not start with “what is your cheapest RAID card?”, it generally starts with “what is the right card to make this thing go fast enough so the customer won’t complain?”. That’s a good conversation to have because it helps my customers think about their customer storage needs, not just the bottom line (though yes, that is still very, very important).

So what do I recommend? This is probably very different for customers across the world because of the discrepancies in drive prices that we see from country to country across my region. SAS is cheap in India, SSD is expensive. SAS is expensive in Australia and SSD is taking over big time due to price, capacity and performance. However, all that taken into account, I'm finding a great uptake on 8 Series controllers and SanDisk SSDs (top of the range of course). It seems people are finding that 15K SAS is just not worth it (heat, power consumption and cost for not a fantastic speed), and that SSD is a good choice in the enterprise market.

Now all this is good for my sales and gives me someone to talk to on a daily basis (even if the customer can’t wait to get off the phone), but it makes me wonder whether this is a worldwide phenomenon … since this is a global blog I’ll ask the question of the worldwide community: “Do you talk to your vendor to ask what is the right product to suit your needs?”

Ironically I find more and more people who don’t think they can even talk to the vendor, but rather have to go online and sift through the chaff on websites trying to (a) understand what they are seeing and (b) make sense of it all to come up with an informed decision.

As far as I’m concerned both are a waste of time. I might be old fashioned but the mobile phone on my desk is still predominantly used for making phone calls (and not facebooking, etc), and I still find it useful to actually talk to someone if I want to find something out about a product – not try and become a product expert myself with limited idea of what I’m doing.

So what do you do? I’d strongly suggest you pick up the phone and talk to us. No matter where you are in the world there will be someone who knows something about our products – from the company directly to our distribution and reseller channel who are trained in the use and capabilities of our products.

Beats reading the web (which is ironic because that's what you are doing while reading this) … so give us a call and discuss your requirements – the phone numbers are on the web (that's tongue in cheek).



A step in the right direction …

March 12, 2015

Our team in Germany must have too much time on their hands :-)

The lads have put together a vendor lab where vendors such as hard drive and SSD manufacturers can bring their gear and test against our products. While we have validation testing going on all the time in other centres, having the ability for a vendor to sit and play with the combination of our gear and theirs is getting people pretty excited.

Our German engineering team are constantly putting new SSDs (for example) through their paces and providing feedback to the vendors – a collaborative effort to make sure that the business, enterprise and datacentre customers get the product combinations that work for them.

So PMC is putting in some big efforts to make sure that we are at the cutting edge of SSD design performance to keep up with some of these amazing devices being developed.

Along with that, the team in Germany is using the lab for customer training and education sessions. This is a great initiative by the boys over there … I’m just wondering how it would work in Australia:

Adaptec: “We want to do some testing with your equipment”
Customer/vendor: “No worries mate, meet us down the pub this arvo and we’ll shoot the breeze over a couple of schooners and sort something out!”

Not sure many people outside the antipodes will understand that one.

The lab in Germany:



The lab in Australia:




What were they thinking? …

March 10, 2015

My work system was recently “upgraded” to Office 2013. Notice I highlighted “upgraded” because that is a very, very loose definition of what happened. My main focus here is Outlook.

There are some improvements, and some nice features that make it a little nicer to use, but in general it's a major backward step from Outlook 2010, and there is one major, very important, and vitally useful feature that has gone missing in the name of "upgrading". I'm referring to "unified search". On a Windows 7 machine with Office 2010, you can use the search function at the bottom of the Start menu in Windows, and it will find all documents, including emails, that contain the keyword you are searching for.

That has been removed, and now, with Office 2013 on the machine, Windows 7 (and I believe 8.1 also, but I'll never use that) won't find or show emails.

What the? …

That was probably the most heavily used function on my system. My documents and emails are my resource library to find information about our systems and customers, especially when looking for something like an issue that you think you may have dealt with before … now where was that email?

I am no fan of Microsoft, but am forced by corporate (at the moment) to use a PC, and have developed my work style to use this feature heavily. In fact I rely on it more than my own memory (which is pretty dodgy to say the least). So now, in the name of an "upgrade", I've lost a major useful feature in my day-to-day work life. What a pain in the neck.

So while I was googling to try and find a way to fix this scenario (don’t mind the odd hack here and there), it came to mind that maybe we have done, or do, the same thing.

Have we taken away something you need, use or like in our software or controllers? I can think of a few things we’ve done that have upset customers, but I’d love to hear from the punter out there slogging away building systems exactly what it is that we’ve done that makes us look like Microsoft and their Outlook “upgrade”.

Throw them at me folks – warts and all.



Is RAID really that boring? …

March 5, 2015

The lads in our marketing department obviously have too much time on their hands (not), and have been watching YouTube … specifically looking at RAID videos. They've had the bright idea that "hey, we could do that!" and pulled my name out of the hat to do this stuff. Considering it was my hat and my name was the only one in there, I didn't stand too much of a chance.

There are lots of videos out there. From snappy little graphics-only talk-fests to ancient old hands-on demonstrations in noisy labs where you can’t hear a word the presenter is saying, through to death-by-powerpoint gabfests that had me nodding off by slide 3.

Now I know that the subject matter is known to me, so I'm not going to find this stuff all that interesting or challenging, but I wanted to get a look at presentation styles … what works and what doesn't, how the angles and views work, and what level the videos are pitched at. So after an afternoon of trawling YouTube, getting sidetracked on a regular basis, and falling asleep several times, I've come to the conclusion that there is no real "best" way to do this – and that all videos are boring and RAID is mind-bendingly dull.

So how do we intend to (a) make a better video, (b) make it interesting and (c) keep the camera off my ugly-mug … I don’t know. This is going to be quite a challenge. Stay tuned for some laughs I’m sure.

Of course, if there are subjects (legal and ethical) that you want to see a video on, drop us a line and give us your suggestions. Damn, I’m going to have to put a shirt on for this stuff :-)



A mild fixation …

February 27, 2015

Here goes with another controversial blog for Product Marketing to ponder …

Customers have a fixation on port numbers. I find it a bit unusual because we tend not to have the same sort of fixation on anything else we do in life. However, when it comes to port count on RAID controllers or HBAs we definitely have a fixation.

Example: I have 2 x SSD – hmm, you don't have a 2-port controller so I'll look at a 4-port instead. I'll never even think about looking at 16-port controllers because I'm fixated on the number of drives I currently have. That's a shame because in fact it's the 16-port controller that you need, whether you currently recognise it or not.

I’ll come back to that but let’s look at the rest of our life compared to RAID controllers. The speed limit in my country (on the open motorway) is 110km/h. So therefore why do I need a motorbike that does more than 110km/h? Why on earth would I own a Ducati Monster that’s capable of around 265km/h (don’t tell the wife that).
I recently upgraded the suspension on my 4WD – I could have standard height, a 30mm lift or a 60mm lift. Hmmm, I mostly drive on the road so I could stick with the standard height, but guess what – the wife now needs a step ladder to get into the car because it's had the 60mm lift :-)

So I have no fixation on the limits imposed by the law, or the limits imposed by common-sense when it comes to my vehicles, but I certainly have one (as do most customers) when it comes to choosing RAID cards.

You guessed it – I'm banging on about the 71605E again – a 16-port "entry" card that is absolutely perfect for RAID 10 on SSD – fast, capable and cheap(ish). But do customers ever look at it? No, because it's 16 ports and they are fixated on 4- and 8-port controllers. So I own the vehicles that suit my needs, not vehicles that are dictated to me by random numbers imposed by the law or my wife's height. Funnily enough, that's the way we should look at controllers – what suits my "performance" requirements, not what suits my "numbers" requirements.

If our marketing department was based in Australia we’d run an ad campaign entitled: “who gives a *^%&$ how many ports the thing has?”. Can’t really see that ever happening in this company, but at the end of the day that’s what needs to be embedded in customer mindsets – stop thinking about the number of drives you have, and start thinking about what controller has the processing performance to handle your requirements. This is mostly addressed at SSD users because while they have made up their mind to use a faster device, they still haven’t changed their mindset about which controller card they should use – and they need to big time.

Using my bike as an analogy again … I can buy cheap tyres, or I can buy ridiculously expensive Pirellis that are far more capable than my riding ability – and guess what I buy (certainly not the cheap and nasty) – I want to give my bike the ability to perform to its full potential – even if the rider isn't as capable as he once was :-)

So stop thinking port count … and start thinking performance (7 and 8 series).
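The "performance, not ports" argument lends itself to a quick back-of-envelope check. All the controller and drive figures below are made-up placeholders (not real Adaptec specs), but they show how a couple of fast SSDs can max out a "right-sized" small card while a bigger entry card still has headroom:

```python
# Sketch: pick a controller by performance headroom, not port count.
# Every figure here is an illustrative assumption, not an official spec.

CONTROLLERS = {
    # name: (ports, max_throughput_MBps, max_iops) -- hypothetical values
    "entry-4-port":  (4,  2000, 150_000),
    "entry-16-port": (16, 6600, 700_000),  # think "something like a 71605E"
}

def bottleneck(controller, drives, drive_MBps=500, drive_iops=90_000):
    """Report whether the controller or the drives limit the build."""
    ports, ctrl_MBps, ctrl_iops = CONTROLLERS[controller]
    n = min(drives, ports)                  # can't connect more than the ports allow
    demand_MBps = n * drive_MBps
    demand_iops = n * drive_iops
    return {
        "drives_connected": n,
        "throughput_limited_by": "controller" if demand_MBps > ctrl_MBps else "drives",
        "iops_limited_by": "controller" if demand_iops > ctrl_iops else "drives",
    }

# Two fast SSDs already demand ~180K IOPS: the small card caps them,
# while the 16-port entry card still has plenty of headroom.
print(bottleneck("entry-4-port", 2))
print(bottleneck("entry-16-port", 2))
```

The exact numbers don't matter – the point is that the comparison worth doing is demand vs controller grunt, and port count never appears in it.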



The KISS principle …

February 12, 2015

(regarding product lines that is) …

Was just reading an excellent article – a bit dated time-wise but the story never gets old …

Why does this come to mind right now? Two recent personal issues have shown Apple’s direction to be the right one in my mind, and a lot of other organisations might take note.

1. The daughter’s phone was stolen. That in itself is traumatic enough for a 20-year-old, but dealing with the phone company on deciding a new plan made my blood boil.

2. My mother’s electricity provider. Mum asked me to look at her provider and make sure she was getting the best deal (she wants to change to another provider because my son works for that provider and she’d like to support him) – not the greatest of reasons but who am I to argue. So off to the electricity provider in question to look at what “deal” best suits.

So by now I'm ready to kill the first person who gives me a choice. Luckily I cook dinner in our house, because if my wife asked me whether I wanted x or y, I think I'd end up in gaol.

The premise of the TechCrunch article is simple, even if a bit controversial. People want choice? Rubbish, and I 100% agree with that – people want a product that does most of the things they want very well, reliably and simply, and they will either put up with, gripe and whine about, or just ignore the things that product does not do (ie they will make do). When I walk into a fast food vendor (any one, take your choice), I look at the board above the counter and just glaze over. Normally by the time I've absorbed the first set of choices, some clever electronic screen has rolled over to a completely new set of choices – so I end up ignoring the whole thing and buying what I normally buy because I just can't be bothered trying to work it all out.

I get where the phone and electricity companies are coming from … they want to use the FUD principle (Fear, Uncertainty, Doubt) to make you spend more than you actually need to or should – they are experts at extortion (they like to call it customer service but in reality it's extortion). But what about everyone else? What about Adaptec? We have an awful lot of products – with the aim of course to make a product for every conceivable niche market we can sell into. But do people really want such a range? In my experience, no, but as you can tell I don't make the product marketing decisions around here.

When I go food shopping (which I do regularly), I can go to the supermarket that has 47 different kinds of preserves, or I can go to the German-owned worldwide organisation that has 1 type of strawberry jam – yes folks, there is only one of everything. You guessed it, I spend my life in the simple supermarket because it makes my life a whole lot easier. I prefer the KISS principle (Keep It Simple, Stupid) nearly every time.

So let me know your thoughts … do you really need such a large product range from any organisation that you deal with? Or are you like me when I’m asked whether I want a cup of coffee or tea? … “Yes please” does it every time :-).

Wonder if this one will get published.



Discrepancies …

January 29, 2015

I’m back in India talking to customers about RAID and HBA products (what’s new), however had a bit of an eye-opener yesterday.

We were discussing the uptake of SSD, and how in my opinion they should be wiping out the 15K SAS HDD market due to pricing and performance advantages.

What? Pricing advantage? (that came from the customer I was talking to). Suppose I should have checked my facts before opening my mouth (bad habit), but I was working on the basis of what I see in Australia … that SSDs are price competitive with SAS hard drives, and are in fact pretty much wiping out the 15K SAS HDD market (and hurting 10K SAS HDD pretty badly as well).

But in India, just as in China, the problem here is price. It's not that the SSDs are drastically expensive, it's more that SAS HDDs seem to be far cheaper than they are at home. So they still hold a good price advantage over SSD, and that's keeping the SAS HDD in the marketplace longer than it should be (imho).

However, even though they are cheaper to buy, with the power consumption and heat generation of SAS HDD being a major problem in Indian datacenters, there is still room for discussion regarding the TCO of using SSD in place of SAS HDD.
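That TCO discussion is worth a back-of-envelope sketch. Every number below is a placeholder assumption (as the post notes, drive prices vary wildly by region, and power rates vary too), but it shows how SSD power and cooling savings eat into the purchase-price gap:

```python
# Back-of-envelope TCO sketch: SSD vs 15K SAS HDD over a few years.
# All prices, wattages and rates are made-up placeholders -- plug in
# local figures, since they differ dramatically between regions.

def tco(unit_price, count, watts_per_drive, years=3,
        power_cost_per_kwh=0.25, cooling_overhead=0.5):
    """Purchase price plus energy cost; cooling_overhead is the extra
    power spent removing the heat the drives generate."""
    hours = years * 365 * 24
    energy_kwh = count * watts_per_drive * (1 + cooling_overhead) * hours / 1000
    return count * unit_price + energy_kwh * power_cost_per_kwh

sas_15k = tco(unit_price=250, count=8, watts_per_drive=10)
ssd     = tco(unit_price=400, count=8, watts_per_drive=4)

print(f"15K SAS: ${sas_15k:,.0f}")
print(f"SSD:     ${ssd:,.0f}")
```

With these placeholder figures the SSDs' power and cooling savings claw back part of the purchase-price gap over three years; with Australian-style drive pricing, where the up-front gap is small or reversed, SSD can win outright.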

It just makes me wonder what the drive vendors are up to … are they (a) ripping us off in Australia with high prices on SAS HDD, or (b) dumping older technology drives into the growing marketplaces to keep their profit margins high? Now that sounds cynical, and I could never be accused of being that, but I can see this as a barrier to developing high-performance datacenters in this country.

India's IT market is growing at a phenomenal rate. No idea what Gartner and the lads are saying officially, but with the excitement in the country generated by the new Prime Minister (Mr Modi), the country is on a high and is booming in infrastructure, along with IT and software.

Shame that they are struggling with the same major move to SSD that a lot of other regions are making (due to price, that is).



But I don’t want a 12Gb controller! …

January 21, 2015

I hear this all the time. Adaptec makes two series of cards that are very similar in function and nature … take, for instance, the 6Gb/sec 7805 and the 12Gb/sec 8805. Both are 8-port, fully-featured RAID cards – but one is 6Gb/sec (Series 7) vs the other at 12Gb/sec (Series 8).

Now most drives on the market today are 6Gb/sec, so buyers go looking for 6Gb/sec cards to fulfill their needs. When I point out that the 8805 is slightly cheaper than the 7805 (apart from the one dodgy seller on Amazon who is promoting this card $10 cheaper than we sell it to the market for – complete with the wrong picture) then it starts to get people’s attention … but they still come back to me and say “but I don’t want a 12Gb controller!”

So … with our 8805 (12Gb/sec) controller, if you plug a 12Gb SAS SSD into it then it's a 12Gb controller … but if you only plug 6Gb/sec drives into it then it's a ??? controller? You guessed it … the speed of the drives dictates the connection speed, so in effect the 8805 works as a 6Gb controller.

Function is the same, and IOPS performance is far greater than the Series 7 controller – at a slightly lower price – but guess what I still hear? …

“I don’t want a 12Gb controller!”

It gives me the irrits sometimes to try and understand the mentality of people who are blinded by the numbers on the box, and don't think about "what is right for the system (or their customer)" … they just go off the numbers because that's what they know.

So is it a problem to have a faster card than you really need? Is it ever a problem to have something fast? Only if it costs an arm and a leg … and the 8805 doesn’t.

So when looking at 8-port controllers, especially when SSDs are involved, take the blinkers off and look at the 8805 … you might just come out in front.