To expand or not expand …

July 26, 2011

That is the question.

Many things are technically possible in this world, but that doesn’t always mean they are a good idea. My motorbike is good for > 250km/h, but it’s not something I do without first thinking very hard about my licence and the nagging I’m going to get if I end up in jail.

The same goes for expanding an array. Yes, array expansion is technically possible and works just fine in the right situations, but at times you have to ask yourself … is this a good idea?

Recently a customer of mine was thinking of purchasing a 24-port RAID card to make a very large archive server. The immediate thought process was to start with 12 disks in a RAID 6, then over a period of time add 2-3 drives and expand the array to encompass the new disks.

This would mean at least four RAID expansion processes during the life of the server. Yes, that's possible, but no, it's not really recommended.

During an array expansion things can and sometimes do go wrong. If a drive fails, that's not so much of a problem: the expansion will finish and the array will be left in a degraded state. But if you have a major power outage or other unforeseen problems (like a bunch of disks playing up because they're having to work hard for the first time in their lives), there is the possibility of things going completely pear-shaped.

Now if you are expanding 2 drives out to 3, or 4 drives out to 6, I'd be comfortable living with that sort of sizing arrangement. But if you have 12 x 2TB SATA drives and are adding another 4, the timeframe for the expansion is going to be very, very long. Depending on the load on the server, it could take 3-4 days.

While it will work, that’s 3-4 days where you risk something going wrong and causing you major headaches. I’m a strong believer in Murphy’s Law (what can go wrong will) so this would concern me greatly.
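For a rough sense of where a multi-day figure like that comes from, here's a back-of-envelope sketch. The effective restripe rate is purely an assumption on my part (real rates depend heavily on the card, the drives and the competing I/O load), but it shows why big drives plus background load add up to days, not hours:

```python
# Back-of-envelope estimate of an online RAID expansion window.
# The restripe rate below is an assumed figure, not a measured one;
# under real production I/O it can easily drop into single-digit MB/s.
def expansion_days(drive_tb: float, restripe_mb_s: float) -> float:
    """Days for one full restripe pass over a drive of `drive_tb` TB."""
    seconds = (drive_tb * 1e12) / (restripe_mb_s * 1e6)
    return seconds / 86400

# A 2TB drive restriped at an assumed 8 MB/s effective rate:
print(round(expansion_days(2, 8), 1))  # roughly 2.9 days
```

Double the drive size or halve the effective rate and you double the window — which is exactly why 2TB drives on a busy server blow the job out to days.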

I personally think you would be better off buying another 12 drives, migrating your data to another server or device, blowing the existing server away, building a new array on all 24 drives (I'd probably go for a RAID 60 on that number of drives), then putting your data back.

Then again, I tend to worry more than others. It does, however, bear thinking about when doing expansions on very, very large arrays – is it really worth it?



Green around the gills …

July 25, 2011

You may be surprised when looking in Adaptec Storage Manager on some machines to find the array and drives with a green background.

Did our programmers just get bored? (not likely)

The green behind an array indicates that the array is under power management. This means that after the administrator has set the necessary timers for each array, the card will watch the activity to and from the drives. When there is no activity for a set period of time the card will take action to either slow the drives or spin them down.

The drives show a green background when they are spun down.

If you are looking for this effect and can’t see the green background, look in the view menu and make sure the checkbox “power management” is ticked.

This is a great way of saving a few dollars – in fact quite a few over the life of a server. I set my server for 30 minutes inactivity to slow the drives. After 1 hour of no activity the card will spin the drives down. The end result of all this trickery is that my server spends a large amount of time with the drives spun down, saving Adaptec a great deal of electricity (take note Boss).
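If you want to put an actual number on those savings, here's a quick sketch. Every figure in it is a made-up assumption for illustration — check your drives' datasheets for real spun-down power draw and use your actual electricity tariff:

```python
# Rough yearly power-management savings estimate.
# All inputs are assumed example figures, not datasheet values.
def yearly_savings(drives: int, watts_saved_per_drive: float,
                   hours_per_day: float, dollars_per_kwh: float) -> float:
    """Dollars saved per year from spinning drives down."""
    kwh_per_year = drives * watts_saved_per_drive * hours_per_day * 365 / 1000
    return kwh_per_year * dollars_per_kwh

# 12 drives, an assumed 6W saved each, spun down 14 hours a day,
# at an assumed $0.25 per kWh:
print(round(yearly_savings(12, 6, 14, 0.25), 2))  # 91.98
```

Not a fortune on one box, but multiply it across a rack of archive servers over a few years and it's real money.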

So are you saving yourself or your customers money? It’s a feature worth looking at (Series 2, 5 and 6).



Seagate prove me wrong …

July 25, 2011

A little while ago I said I thought that pure SAS drives would go the way of the dodo.
Well, Seagate has put the kybosh on that little theory. They have released a 900GB 2.5″ 10K SAS drive (full SAS, not hybrid).

That’s one heck of a drive.
You can find the details at:



Maintaining a healthy array…

July 25, 2011

This is another excerpt from our “Maintenance Best Practices for Adaptec RAID Solutions” document. I can’t claim credit for this document, but I’m pretty sure there’s nothing illegal about plagiarising our own documentation :-)

A “verify with fix” is a single, quick check of the array. After the verification process has checked all sectors of the array, it stops and will not start again until started manually by the administrator. In manual mode, the verification process commands are given a higher priority than in Auto mode so that the check completes significantly faster.

Verify with fix is a data-level check and requires more controller resources to read and compare data. Also, because of the additional resources required, verify with fix is not designed to run continuously. Rather, it should be scheduled to run at a regular interval, preferably during periods of low drive activity, or during system maintenance.

To verify and fix a logical drive using Adaptec Storage Manager:

a. In the Logical Devices View, right-click the logical drive.

b. Select Verify with fix, then confirm that you want to verify the logical drive.

c. To begin the verification immediately, click Yes. To schedule the verification, click Schedule, and then set the date and time. You can also choose to set the verification as a recurring task.

While the verification is in progress, the logical drive is shown as an animated icon to indicate that the task is in progress. When the verification is complete, an event notice is generated in the local system’s event log.

The full story on maintenance best practices for your array can be found at:

Note: Don’t do this during a working day … your users will be less than impressed. Instead, schedule it for a quiet period in your working week (eg 2am Sunday morning). Storage Manager’s scheduling makes it easy to run this check on a regular basis without you having to get up at some ungodly hour on Sunday morning to make sure your arrays are in tip-top shape.