The longest blog article ever? RAID storage configuration considerations…

RAID storage configuration considerations (for the Channel System Builder)

SAS/SATA spinning media, SSD and RAID types – helping you make decisions
Some thoughts from the Storage Advisor

Note: I started writing this for other purposes – some sort of documentation update. But when I finished I realised it was nothing like the doc the user requested … and then “write blog” popped up on the screen (Outlook notification). So I took the easy way out and used my ramblings for this week’s update.

When designing and building a server to meet customer needs, there are many choices you need to consider: CPU, memory, network and (probably most importantly) storage.

We will take it as a given that we are discussing RAID here. RAID is an essential part of the majority of servers because it allows your system to survive a drive failure (HDD or SSD) without losing data, along with the added benefits of increased capacity and performance. While there are many components within your system that will happily run for the 3-5 year life of your server, disk drives tend not to be among them.

So you need to take a long-term approach to the problem of storage – what do you need now, what will you need in the future, and how will you survive mechanical and electronic failures during the life of the server.

 

What sort of drives do I need to meet my performance requirements?

Rather than looking at capacity first, it’s always a good idea to look at performance. While the number of devices has an impact on the overall performance of the system, you will not build a successful server if you start with the wrong disk type.

There are three basic types of disks on the market today:

  • SATA spinning media
  • SAS spinning media
  • SSD (generally SATA but some SAS)

SATA spinning drives are big and cheap. They come in many different flavours, but you should really consider using only RAID-specific drives in your server. Desktop drives do not work very well with RAID cards because they do not implement some of the features of enterprise-level spinning media (such as time-limited error recovery) that help a drive co-operate with a RAID card in providing a stable storage platform.

The size of the drive needs to be taken into consideration. While drives are getting larger, they are not getting any faster, so a 500GB drive and a 6TB drive from the same family will have pretty much the same performance.

Note that this is not the case with SSDs. SSDs tend to be faster the larger they get, so check your specifications carefully to ensure you know the performance characteristics of the specific size of SSD you buy – not just what is on the promotional material.

The key to performance with spinning media is the number of spindles involved in the IO process. So while it’s possible to build a 6TB array using 2 drives in a mirror configuration, the performance will be low because only 2 spindles are in operation at any time. If the same array were built using 7 x 1TB drives, it would be much quicker in both streaming and random data access due to the multiple spindles involved.

SAS spinning media generally rotate at higher speeds than SATA drives (often 10,000 RPM or more vs 5,400/7,200 RPM for SATA), and the SAS interface is slightly quicker than the SATA interface, so they outperform their SATA equivalents in certain areas – mostly random data access. When it comes to streaming data there is little to no difference between SATA and SAS spinning media.

However, all performance calculations go out the window when SSDs are introduced into the equation. SSDs are dramatically faster than spinning media of any kind, especially when it comes to random data. Keeping in mind that random data storage systems tend to be smaller in capacity than streaming data environments, the SSD is rapidly overtaking SAS spinning media as the media of choice for random data environments – it is so much faster than SAS or SATA spinning media for random reads and writes that it is the number one choice for this type of data.
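
To put some very rough numbers on that, here is a back-of-the-envelope sketch. The per-drive IOPS figures are illustrative ballpark assumptions only (not any vendor’s specification), but they show why spindle count – and drive type – dominate random performance:

```python
# Back-of-the-envelope random-read IOPS estimate for an array.
# Per-drive figures are rough illustrative assumptions -- substitute the
# datasheet values for the drives you actually intend to use.

PER_DRIVE_RANDOM_IOPS = {
    "sata_7200": 80,      # 7,200 RPM SATA spinning media
    "sas_10k":   140,     # 10,000 RPM SAS spinning media
    "sas_15k":   180,     # 15,000 RPM SAS spinning media
    "sata_ssd":  50_000,  # mainstream SATA SSD (varies widely by model and size)
}

def estimated_random_read_iops(drive_type: str, drive_count: int) -> int:
    """Aggregate random-read IOPS scales roughly with the number of drives."""
    return PER_DRIVE_RANDOM_IOPS[drive_type] * drive_count

# A 2-drive SATA mirror vs a 7-drive SATA RAID 5 of similar usable capacity:
print(estimated_random_read_iops("sata_7200", 2))   # ~160 IOPS
print(estimated_random_read_iops("sata_7200", 7))   # ~560 IOPS
print(estimated_random_read_iops("sata_ssd", 2))    # ~100,000 IOPS -- why SSD wins for random data
```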

 

So what about capacity calculations?

Capacity both confuses and complicates the performance question. With SATA spinning drives reaching upwards of 8TB, it’s pretty easy to look at a customer’s capacity requirement and think you can meet it with just a small number of very large spinning drives.

And that is true. You can build very big servers with not many disks, but think back to the previous section on performance. With spinning media, it’s all about the number of spindles in the RAID array. Generally speaking, the more there are, the faster it will be. That applies to both SATA and SAS spinning media. The same cannot be said for SSD drives.

So if you need to build an 8TB server you are faced with many options:

  • 2 x 8TB drives in a RAID 1
  • 4 x 4TB drives in a RAID 10
  • 3 x 4TB drives in a RAID 5
  • 5 x 2TB drives in a RAID 5
  • 9 x 1TB drives in a RAID 5

Etc, etc.
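
For reference, the usable capacity behind each of those options comes from the standard RAID arithmetic. A minimal sketch, in raw marketing TB (ignoring the decimal-versus-binary reporting covered later and any controller metadata overhead):

```python
# Usable capacity (in raw, marketing TB) for the common RAID levels.

def usable_tb(raid_level: str, drive_count: int, drive_tb: float) -> float:
    if raid_level == "RAID1":         # simple 2-drive mirror
        return drive_tb
    if raid_level == "RAID10":        # striped mirrors, even drive count
        return (drive_count // 2) * drive_tb
    if raid_level == "RAID5":         # one drive's worth of parity
        return (drive_count - 1) * drive_tb
    if raid_level == "RAID6":         # two drives' worth of parity
        return (drive_count - 2) * drive_tb
    raise ValueError(f"unsupported RAID level: {raid_level}")

# The 8TB options listed above:
for level, count, size in [("RAID1", 2, 8), ("RAID10", 4, 4),
                           ("RAID5", 3, 4), ("RAID5", 5, 2), ("RAID5", 9, 1)]:
    print(level, count, "x", size, "TB ->", usable_tb(level, count, size), "TB usable")  # all give 8TB
```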

So what is best with spinning drives? 2 x 8TB or 9 x 1TB? A good general answer is that the middle ground will give you the best combination of performance, cost and capacity. Note, however, that you need to think about the data type being used on this server, and the operating system requirements. If, for example, you are building a physical server running multiple virtual machines, all of them database-intensive, then you are wasting your time considering spinning drives at all and should move straight to SSD.

If however this is a video surveillance server, where the data heavily leans towards streaming media, then 3 x 4TB SATA drives in a RAID 5 will be adequate for this machine.

 

What RAID controller type do I need?

This one is easier to determine. The RAID controller needs enough processing capacity to handle the IOPS your drives can deliver, with sufficient ports to connect the number of drives you end up choosing. Since there are so many different ways of mounting drives in servers today, you will also need to take into account whether the drives are directly attached to the controller or sitting in a hot-swap backplane with specific cabling requirements.

 

What RAID level should I use?

There are two basic families of RAID:

  • Non-Parity RAID
  • Parity RAID

Non-parity RAID consists of RAID 1 and RAID 10. Parity RAID consists of RAID 5, 6, 50 and 60. Generally speaking, you should put random data on non-parity RAID, and general/streaming data on parity RAID. Of course things aren’t that simple, as many servers have a combination of both data types running through their storage at any given time. In that case you should lean towards non-parity RAID for performance reasons.

Note of course (there’s always a gotcha) that non-parity RAID tends to be more expensive, because it uses more disks than, say, RAID 5 to achieve any given capacity.

 

Putting this all together …

By now you can see that designing the storage for a server is a combination of:

  • Capacity requirement
  • Performance requirement
  • Disk type
  • RAID controller type
  • RAID level
  • Cost

Let’s look at some examples:

  1. General use fileserver for small to medium business
    General Word, Excel and other office file types (including CAD files)

Capacity: 10TB
Performance requirements: medium
Disk type: spinning will be more than adequate
RAID controller type: Series 6, 7 or 8 with sufficient ports
RAID level: RAID 5 for best value for money
Options: should consider having a hot spare in the system
Should also consider cache protection to protect writes held in cache in the event of a power failure or system crash

Remember that you don’t get the total usable capacity you expect from a drive. For example, a 4TB drive won’t give you 4TB of usable capacity – it’s more like 3.6TB once the operating system reports it (I know, seems like a rip-off!).
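
The gap is simply decimal versus binary units: drive vendors count 1TB as 10^12 bytes, while operating systems report capacity in binary units (1TiB = 2^40 bytes). A quick check:

```python
# Why a "4TB" drive shows up smaller in the operating system: vendors count
# in decimal (1TB = 10**12 bytes), the OS reports in binary (1TiB = 2**40 bytes).

def marketing_tb_as_reported(tb: float) -> float:
    return tb * 10**12 / 2**40

print(round(marketing_tb_as_reported(4), 2))   # ~3.64 "TB" as the OS reports it
print(round(marketing_tb_as_reported(3), 2))   # ~2.73
```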

In this scenario we are going to recommend enterprise SATA spinning media. 4 x 3TB drives in RAID 5 will give approximately 8TB of capacity, with good performance from the 4 spindles. Since many server chassis support 6 or more drives, a 5th drive can become a hot spare, which allows the RAID to start rebuilding immediately in the case of a drive failure.

With spinning drives a Series 6 controller will be sufficient for performance, so the 6805 would be the best choice of controller. We would recommend an AFM-600 be attached to the controller to protect the cache in the event of a power failure.

  2. High-performance small-capacity database server
    Windows 2012 stand-alone server running an industry-specific database with a large number of users

Capacity: 2-3TB
Performance requirements: high
Disk type: pure SSD to handle the large number of small reads and writes
RAID controller type: Series 7 (71605E)
RAID level: RAID 10 for best performance
Options: should consider having a hot spare in the system

In this scenario we are definitely going to use a pure SSD configuration. Databases place a heavy load on the server with many small reads and writes, but the overall throughput of the total server data is not great.

RAID 10 is the fastest RAID level. When creating a RAID array from pure SSDs, we recommend turning off the read and write cache on the controller, so you (a) don’t need much cache on the controller and (b) don’t need cache protection. In this case we would recommend 6 x 1TB drives (e.g. 960GB SanDisk Extreme PRO), which would give approximately 2.7TB of usable space in an extremely fast server.

When using SSDs you need to use a Series 7 or Series 8 controller. These controllers have a fast enough processor to keep up with the performance characteristics of the SSDs (the Series 6 is not fast enough).

Again, a hot spare would be advisable in such a heavily used server. This would make a total of 7 drives in a compact 2U server.

  3. Mixed-mode server with high-performance database and large data file storage requirements
    Multiple user types within the organisation – some using a high-speed database and some general documentation. The organisation also needs to store a large volume of image files

Capacity: 20+TB
Performance requirements: high for database, medium for rest of data
Disk type: mix of pure SSD to handle the database requirements and enterprise SATA for general image files
RAID controller type: Series 8 (81605Z)
RAID level: SSD in RAID 10 for the operating system and database (2 separate RAID 10 arrays on the same disks). Enterprise SATA drives in RAID 6, because the large number of static image files will not be backed up
Options: definitely have a hot spare in the system

In this scenario (typically a printing company, for example), the 4 x SSDs will handle the OS and database requirements. Using 4 x 512GB SSDs, we would make a RAID 10 of 200GB for Windows Server, and a RAID 10 of approximately 800GB for the database.

The enterprise SATA spinning media would be 8 x 4TB drives, with 7 in a RAID 6 (5 drives’ worth of capacity) and 1 hot spare. In this scenario it would be advisable to enable a feature called “copyback hot spare” on the RAID card so the hot spare can protect both the SSD array and the spinning media array.

This will give close to 20TB usable capacity in the data volumes.
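
To make the arithmetic in that layout concrete, here is a minimal sketch using the same drive counts and array sizes as above, in raw decimal figures (the space the OS reports will be a little lower):

```python
# Example 3 layout: two RAID 10 logical drives carved from the same four
# 512GB SSDs, plus a RAID 6 data volume on seven 4TB SATA drives
# (the eighth SATA drive is the hot spare).

ssd_count, ssd_gb = 4, 512
sata_count, sata_tb = 7, 4                               # excludes the hot spare

ssd_mirrored_gb = (ssd_count // 2) * ssd_gb              # 1024GB of RAID 10 space in total
os_array_gb = 200                                        # RAID 10 #1: Windows
db_array_gb = ssd_mirrored_gb - os_array_gb              # RAID 10 #2: ~824GB for the database

raid6_data_tb = (sata_count - 2) * sata_tb               # 5 drives' worth = 20TB

print(os_array_gb, db_array_gb, raid6_data_tb)           # 200 824 20
```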

 

Options

Some of the key RAID card features to take into consideration when putting together the best possible configuration include:

  • Multiple arrays on the same disks
    It is possible to build up to 4 different RAID arrays (of the same or differing RAID levels) on the same set of disks. This means you don’t have to dedicate (for example) 2 disks in a mirror for an operating system and another 2 disks in a mirror for a database, when both can be carved out of the same 2 disks
  • RAID 10 v RAID 5 v RAID 6
    RAID 10 is for performance. RAID 5 is the best value-for-money RAID and is used in most general environments. Many people shy away from RAID 6 because they don’t understand it, but in a situation such as example 3 above – where a customer keeps a large amount of data as a near-line backup, or as copies of archived data for easy reference, and that data won’t itself be backed up – you should use RAID 6 to ensure its protection. Remember that the read speed of RAID 6 is similar to RAID 5, with the write speed being only slightly slower.
  • Copyback Hot Spare
    When considering hot spares, especially when you have multiple drive types within the server, copyback hot spare makes a lot of sense. In example 3 above, the server has 4TB SATA spinning drives and 512GB SSDs. You don’t want to dedicate 2 hot spares in the system, as that wastes drive bays, so having 1 hot spare (4TB spinning media) will cover both arrays. In the event that an SSD fails, the 4TB SATA spinning drive will kick in and replace it, meaning the SSD array will temporarily contain a mix of SSD and HDD. This keeps your data safe but is not a long-term solution. With copyback hot spare enabled, when the failed SSD is replaced, the data sitting on the spare HDD will be copied to the new SSD (re-establishing the array), and the HDD will be turned back into a hot spare.

 

Conclusion

As you can see, there are many factors to weigh up when designing server storage, and all of them need to be balanced to ensure the right mix of performance and capacity at the best possible price.

Using a combination of the right drive type, RAID level, controller model and quantity of drives will give a system builder an advantage over the brand-name “one-model-fits-all” design mentality of competitors.

If you have questions you’d like answered then reply to this post and I’ll see what I can do to help you design your server to suit your, or your customer’s, needs.

Ciao
N


“Comatose” … no, not me, the server!

(though I think I’ve been called that a few times in my youth).

A colleague sent me a link to a report on the web recently, and while I found it mildly interesting, I actually think the writers may have missed some of the point (just a little). While in general everything they say is correct, there is a factor that they haven’t taken into account … “stuff”.

http://www.datacenterknowledge.com/archives/2015/06/03/report-30b-worth-of-idle-servers-sit-in-data-centers/

So what is “stuff”?

Well, I have a lot of “stuff” on my laptop. There is “stuff” on CDs lying around the place, “stuff” on my NAS and “stuff” in the Mac on the other end of the desk. To me “stuff” is old data. I hardly, if ever, use it, but I sure as heck want it kept close and immediately accessible. In my business my old data is my historical library and a great backup to my slowly fading memory.

So what is living out in datacenter land? Lots of useful information, and lots and lots of “stuff”. It has become evident when dealing with users over the past decade that people are reluctant, if not downright unwilling, to remove, delete, consolidate or even manage old data – they just want it kept forever, “just in case”.

So while there are strategies out there to minimize its footprint, there is no strategy for changing people’s mindsets on how long they keep data. So datacenterland is, and always will be, awash with “stuff” … which means more and more “comatose” storage. I don’t disagree with the linked article on server compute – that needs to be managed and centralized into newer, faster and more power-efficient servers. It’s just the data (storage) side of the equation that I have issues with.

If we take it as a given that no one is going to delete anything, then what do datacenters do about it? Larger and larger storage devices are coming out all the time (e.g. high-density storage boxes using 10TB disks), but the problem they bring is that while one of them can hold the capacity of probably 10 old storage boxes, the datacenter is faced with moving all of the data off the old storage onto the new storage to free up the old gear. By the time a large datacenter gets through that little process, 10TB drives will have been replaced by 20TB drives and it all starts over again – meaning datacenters will have data in motion almost continuously, with tremendous management, network and cost overheads to go along with it … exactly the sort of thing datacenter operators don’t want to hear about.

I’m guessing that datacenter operators are looking at exactly this issue and are crunching numbers. Is it cheaper for us to keep upgrading our storage to handle all this “stuff”, with all of the management complications etc, or do we just buy more storage and keep filling it up without having to touch the old data? Or do we do both and just try to keep costs down everywhere while doing so?

It would be very, very interesting to know how the spreadsheet crunches that little equation.

“Stuff” for thought.

Ciao
N

 


It seems like the world has stopped turning …

No, this is not biblical, nor is it prophetic. In fact I’m referring to disk drives :-)

In a meeting the other day we were discussing the unofficial conversations we have with disk vendors on a regular basis. It seems the spinning world (and I’m talking channel here) is slowing down. The SSD vendors, however, are romping along at 20%+ growth year on year.

So that is stating the obvious – SSD uptake is growing at the expense of HDD. Of course HDD is still king in the cold data storage world and those guys are making a killing in the datacenter world – all our “cloud” information has to live somewhere (like all your photos of yesterday’s lunch on FB etc).

But in the channel, the SSD is taking over for performance systems. The 10-15K SAS drives are giving way to similarly sized SSDs – some at the enterprise level but a lot more at the high-end gaming and channel level – drives that at first glance don’t appear to be made for RAID, but in fact work just fine.

When talking to users, performance is a given – they all understand that the SSD is faster than the spinning drive, but many are still worried about drives wearing out – will they fail? I was wondering that myself, so I looked at the specifications of some drives, and specifically their “DWPD” values (drive writes per day). This is pretty much the standard figure SSD vendors use to indicate how long they think a drive will last before the overprovisioning is used up and the drive starts to degrade.

You will see values ranging from 1/3 of a drive write per day up to 25 drive writes per day – and if I were using these drives in a write-intensive datacenter environment, I know which end of that range I’d be opting for. But let’s take the 1/3 drive write per day and do a little maths. Take 4 x 1TB drives (close enough) and make a RAID 10. Roughly speaking, that will give you 2TB of capacity. Now, if I can safely write 1/3 of a drive’s worth of data per day for the life of the drive, that works out to roughly 600GB of data written to the array each day – remembering that the data is split across the two sets of mirrors in the RAID 10, and each set of mirrors can handle around 300GB per day (1/3 of its capacity).

Then let’s look at the sort of systems people are putting SSDs into. Are they using them for video servers? Not likely (too expensive to get the capacity). In fact they are using them for database and high-performance servers that are generally handling lots of small random reads and writes.

A bit more maths: an average 40-hour business week is only about 25% of the hours in the week, so if all of your writes land during business hours, the whole week’s write budget (roughly 4TB, at 600GB a day) would have to be crammed into those 40 hours before you even start stressing the drives. That works out to something in the order of 100GB of writes every business hour, sustained … remembering that this is based on a drive rated at just 1/3 DWPD. A drive with a higher rating can handle proportionally more, and I’m yet to think of a business running SSDs in a database environment that is even within cooee of these numbers.
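
Here is that arithmetic written out, assuming 4 x 1TB drives in RAID 10, a 1/3 DWPD rating and all writes landing inside a 40-hour business week (all figures rounded):

```python
# The endurance arithmetic from above, written out (figures rounded).

drive_gb = 1000                 # "1TB" SSDs
dwpd = 1 / 3                    # rated endurance: drive writes per day
mirror_pairs = 2                # 4 drives in RAID 10 = 2 mirrored pairs

# Host writes are striped across the pairs and mirrored within each pair,
# so the array's daily write budget is the per-drive budget times the
# number of pairs:
per_drive_budget_gb = drive_gb * dwpd                       # ~333GB/day per drive
array_daily_budget_gb = per_drive_budget_gb * mirror_pairs  # ~667GB/day (~600GB rounded down)

# If every write lands during a 40-hour business week, the whole week's
# budget has to fit into those 40 hours:
weekly_budget_gb = array_daily_budget_gb * 7                # ~4,700GB per week
sustained_gb_per_hour = weekly_budget_gb / 40               # ~117GB of writes per business hour

print(round(array_daily_budget_gb), round(sustained_gb_per_hour))   # 667 117
```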

So when you look at the values on the drives and think “wow, 1/3 DWPD is not very much”, you need to balance that against how much data your business will actually be writing to the disks on any given day.

I found it pretty interesting maths – and it opened my eyes to the reality of DWPD values. Remember, of course, that in a datacenter you should use datacenter drives – mainly because many thousands of users can be accessing those drives at any given point in time, and yes, you can see some pretty amazing amounts of data written to the drives each day. But in the channel, in small business, and even in medium to large business, the amount of data being written each day is not the scary number you may have assumed without some detailed analysis.

It’s food for thought.

Oh, and by the way, if you are using SSDs then you should be using Series 8 RAID controllers. I know they are 12Gb/s controllers and your SSDs are only 6Gb/s devices, but it’s not the speed of the pipe that matters, it’s the number of transactions per second the controller can handle. You don’t want to bottleneck your investment in SSDs at the controller level – you want a controller that will easily handle all the small reads and writes the SSDs are capable of. Whether your software or customers are capable of throwing or dragging that much data to and from the drives is another question, but putting SSDs on a slow controller is not the smartest thing to do.

Ciao
N


The problem with performance testing …

Had a really good experience with a customer recently, but it highlighted the problems with performance testing, especially using iometer. Now, we use iometer a lot, and it’s a great tool to drill down into a specific set of performance characteristics to show a specific response from a storage system.

However … the problem with such a situation is getting the parameters right, so that what you are testing actually matches your data.

So this customer was looking at maxCache – our SSD caching functionality that uses SSDs attached to the 81605ZQ controller to add read and write caching to an array.

Testing with iometer didn’t show that much of an improvement (at least according to the customer). A discussion about the test parameters, and how long to run a test for (1 minute won’t cut the mustard), saw a big improvement over their original results (and yes, these guys know what they are doing with their systems, so I’m not having a go at any individual system builder here).

So after much testing, it was decided to put the machine into test with real-world customers in a virtual desktop environment (I believe it was Open-E running a whole stack of virtual desktops). Guess what – the end users were as happy as pigs in …

Turns out the real world data is perfectly suited to caching (as suspected by the system builder), but that iometer was not able to accurately reflect the data characteristics of the real-world server. End result: everyone (system builder, datacenter operator, end users) – all happy and amazed at the performance of the system.

So where is the moral in this story? Simply that it’s difficult to play with test software and come up with something that closely matches the end result of a server used in the real world. Is there an answer to this? Probably not, but I’d suggest everyone take performance testing software, and the results it produces, with a grain of salt, and look at testing in the real world, or at least a close simulation of it.

The results can be very surprising.

Ciao
N


A technical issue regarding RAID build/rebuild

Been getting a few questions regarding building RAID arrays recently, and thought it warranted putting something down on paper.

Now I’m talking about RAID 5/6 and other redundant arrays (not RAID 0 – that’s not for “real” data imho). The questions are usually about whether it is possible to restart a server during a RAID build or rebuild, and what happens when a drive fails during that process. So let’s take a look at exactly what our controllers do in these situations.

RAID Build

If you are building a redundant array using either the clear or build/verify method, then yes, you can power down the server (or the power can go out by any other means) and it won’t hurt your array. The build simply continues from where it left off, so if the process gets to 50% and you need to reboot your server, no worries – it does not go back to the start again.

If a drive fails during the build process of, say, a RAID 5, the build will continue. When it’s finished, the array will be in degraded mode and you’ll have to replace the drive, but that’s the normal process. Again, even if a drive fails, you can power down and restart the server during the build process and it will resume from the point it left off, still finishing as a degraded array that needs fixing.

And if you think drives don’t fail during RAID builds, then think again … I’ve had it happen more than once.

RAID Rebuild

What happens when a drive fails during a rebuild is a bit dependent on the RAID type. Let’s take the example of a RAID 5. A drive fails, so you replace it and the controller starts a rebuild. During that process, another drive fails. You are toast. There is not enough data left for us to calculate the missing data from the first drive failure because now you are missing 2 drives in a RAID 5 and that’s fatal. You need to fix your drives, build a new array and restore from backup.

In a RAID 6 environment it’s slightly different. RAID 6 can survive 2 drive failures at the same time, so if a drive fails, you replace it and start a rebuild, and then another drive fails – no worries. The controller will continue to rebuild the array, though it will be impacted when finished because it’s still one drive short of a picnic. The data will be safe throughout this process; you’ll just have to replace the second failed drive and let the array rebuild to completion.
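
The rule behind both cases boils down to fault tolerance: RAID 5 can lose one drive, RAID 6 can lose two. A simplified illustration of that rule (this is just a model for clarity, not the controller’s actual firmware logic):

```python
# A simplified model of the rule described above: how many simultaneous
# missing drives each parity RAID level can tolerate before data is lost.
# Illustrative only -- not how the controller firmware actually works.

FAULT_TOLERANCE = {"RAID5": 1, "RAID6": 2}

def array_state(raid_level: str, failed_drives: int) -> str:
    tolerance = FAULT_TOLERANCE[raid_level]
    if failed_drives == 0:
        return "optimal"
    if failed_drives <= tolerance:
        return "degraded (rebuild can complete, data intact)"
    return "failed (restore from backup)"

# One drive has failed and a second one dies during the rebuild:
print("RAID 5:", array_state("RAID5", 2))   # failed (restore from backup)
print("RAID 6:", array_state("RAID6", 2))   # degraded (rebuild can complete, data intact)
```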

Of course, like any of the above, you can power down and restart the server at any time during any of these processes and things will just continue on from where they left off.

Hope that answers a few questions.

Ciao
N


Learning about storage (the hard or easy way) …

Google and YouTube are wonderful places to get information, but as always there is a question mark over the authenticity, quality and downright accuracy of the information provided by all and sundry in their posts and blogs. Now, while I’m not casting aspersions on those who provide all this wonderful information, wouldn’t it be nice to get something directly from the horse’s mouth? (Australian-speak for “vendor”.)

Well, you can.

On our website home page (www.adaptec.com), if you look closely enough, you’ll find the following:

  1. YouTube link – some older stuff about our products from product marketing, and an ongoing effort by our Alabama Slammer (you’ll get it when you listen to the video) on the technical aspects of how to do stuff with our cards – Liz is by far the best RAID support tech in the business, so she’s well worth listening to. My only problem with YouTube is getting distracted. All that lovely interesting stuff that appears down the right side always looks more interesting than RAID :-)
  2. Facebook – hmmm, I thought this was only for putting pictures of your last meal … however I liked it (pardon the pun) because I found one of my videos on there (you never know where this stuff will end up).
  3. Adaptec University. Last, but certainly not least, this is a major source of all sorts of information on RAID and storage in general, and on our products and how to use them in particular. I should know … I spend quite a bit of time writing this stuff (then having it cleaned up by a lovely lady whose English is a whole lot betterer than mine :-)). Yes, you have to register, but no, we don’t ask for your first-born as a down-payment (in fact it’s free). Once you are in, there is a wealth of information to peruse at your leisure. Look at the catalogue to see what is available, then go to “my learning” to see what you have completed, not read, or not even started yet. You can come back to this as often (or as little) as you like.

So should you stop using Google? Heck no, there is tons of valuable stuff out there (I use it all the time), but you should also consider getting the right word from the right people – Adaptec by PMC.

Ciao
N


The problem with Indonesia …

is that … there is no problem with Indonesia!

Recently I spent a week there with our new distributors, where we presented at a Datacenter/Cloud Forum and talked to customers about the suitability of our products for their marketplace. Indonesia is a booming economy and the IT sector is growing at a good steady rate, so I can see us doing good business there over the next few years – which means I need to be there on a regular basis.

Since this was my first trip to Jakarta, I thought I’d analyse the problems I found:

1. The people are fantastically friendly and helpful … so no problem there

2. The food is great … so no problem there

3. The customers are smart, up to date and right on top of their game … so no problem there

4. The weather … damn it was humid … so if that’s the only problem I came across, I guess I can live with it.

The best part about Indonesia was that I managed to get two weeks’ holiday after the trip, which was spent chasing my boy around at our BMX National Championships (2nd, Elite Men, so a pretty good result), then 4 days on an island off the coast of Queensland doing some 4-wheel-driving in the sand and relaxing on the beach (with the trip book-ended by some lengthy road trips through outback NSW for good measure).

Hopefully this explains the lack of blog postings over the last month … but now we’re back in business so after I catch up on 4000 emails in my inbox, we’ll be back to regular posts.

Ciao
N


Adaptec and Toshiba …

The PMC Adaptec lads in Germany, and their counterparts at Toshiba, have put together a demo to take to shows and let people see what we are doing:

  • Intel Solution Summit in Abu Dhabi (28/29 April)
    • Showing Demo Server with SSD max performance life benchmarks
  • Technical Seminar for large nordic OEM (22 April)
    • Demonstrating RAID setups, Volume setups and performance tradeoffs
  • IBC Show in Amsterdam (10~14 September)

The good people at Wortmann kindly lent (donated, never to be returned?) some equipment in the form of a snazzy little server and external drive bays, and Toshiba provided some pretty fantastic SSDs to round out the system.

So if you happen to be in Abu Dhabi or Amsterdam, then drop in and see our demo. You can learn a lot about setting up RAID using SSD. Seeing is believing.


Now … how do I convince my boss that I need to be in attendance? Hmmm …

Ciao
Neil


So we actually “do” know what we are talking about?

As a technical advisor, and now a salesman, marketing expert, logistics expert and general dogsbody I spend a lot of time talking to my customers. In fact if someone rings up asking the question “which card do I need?” they probably end up regretting it because it’s never a short simple answer.

The same goes with “which disk do you recommend?” … that one is a can of worms that common-sense says I should stay away from, but I’ve never been accused of having too much of that commodity.

So … my push has been to move people to Series 8 (6 or 12Gb/s system compatible) controllers, and towards SSDs when they suit the customer’s data needs. With that in mind I’ve talked to a lot of my larger integrators who have done considerable testing on drives that are readily available in the Australian marketplace, and base a lot of my recommendations on their “real-world” experiences.

Now in Australia the question does not start with “what is your cheapest RAID card?”, it generally starts with “what is the right card to make this thing go fast enough so the customer won’t complain?”. That’s a good conversation to have because it helps my customers think about their customer storage needs, not just the bottom line (though yes, that is still very, very important).

So what do I recommend? This probably differs for customers across the world because of the discrepancies in drive prices we see from country to country across my region. SAS is cheap in India, SSD is expensive. SAS is expensive in Australia, and SSD is taking over big time due to price, capacity and performance. All that taken into account, I’m finding a great uptake of Series 8 controllers and SanDisk SSDs (top of the range, of course). It seems people are finding that 15K SAS is just not worth it (heat, power consumption and cost for not a fantastic speed), and that SSD is a good choice in the enterprise market.

Now all this is good for my sales and gives me someone to talk to on a daily basis (even if the customer can’t wait to get off the phone), but it makes me wonder whether this is a worldwide phenomenon … since this is a global blog I’ll ask the question of the worldwide community: “Do you talk to your vendor to ask what is the right product to suit your needs?”

Ironically I find more and more people who don’t think they can even talk to the vendor, but rather have to go online and sift through the chaff on websites trying to (a) understand what they are seeing and (b) make sense of it all to come up with an informed decision.

As far as I’m concerned both are a waste of time. I might be old fashioned but the mobile phone on my desk is still predominantly used for making phone calls (and not facebooking, etc), and I still find it useful to actually talk to someone if I want to find something out about a product – not try and become a product expert myself with limited idea of what I’m doing.

So what do you do? I’d strongly suggest you pick up the phone and talk to us. No matter where you are in the world there will be someone who knows something about our products – from the company directly to our distribution and reseller channel who are trained in the use and capabilities of our products.

Beats reading the web (which is ironic, because that’s what you are doing while reading this) … so give us a call and discuss your requirements – the phone numbers are on the web (that’s tongue in cheek).

Ciao
N


A step in the right direction …

Our team in Germany must have too much time on their hands :-)

The lads have put together a vendor lab where vendors such as hard drive and SSD manufacturers can bring their gear and test against our products. While we have validation testing going on all the time in other centres, having the ability for a vendor to sit and play with the combination of our gear and theirs is getting people pretty excited.

Our German engineering team are constantly putting new SSDs (for example) through their paces and providing feedback to the vendors – a collaborative effort to make sure that the business, enterprise and datacentre customers get the product combinations that work for them.

So PMC is putting in some big efforts to make sure that we are at the cutting edge of SSD design performance to keep up with some of these amazing devices being developed.

Along with that, the team in Germany is using the lab for customer training and education sessions. This is a great initiative by the boys over there … I’m just wondering how it would work in Australia:

Adaptec: “We want to do some testing with your equipment”
Customer/vendor: “No worries mate, meet us down the pub this arvo and we’ll shoot the breeze over a couple of schooners and sort something out!”

Not sure many people outside the antipodes will understand that one.

The lab in Germany:

[photos]

The lab in Australia:

[photo]

Ciao
N
