Simplifying SSDs …

August 11, 2011

That’s one way of putting it. I’ve noticed current SSD drives are no longer being listed as SLC, MLC or eMLC; instead Intel, for example, call their new 510 drive models “multi-level cell compute-quality components”.
Now in my reading that’s MLC-based NAND flash.

However, drive vendors are moving away from SLC/MLC variants towards eMLC (Enterprise MLC). This new memory is supposed to give approximately a 30% write-endurance improvement over the original MLC it derives from.

Complicating matters further is the fact that vendors are moving towards higher-density memory (34nm, 25nm, 18nm) which apparently reduces the life span of the memory.

All of this leads to a confusing time for the end user. What drives are right for my enterprise system? What drive technologies should I steer clear of for my mission-critical server?

As it stands at the moment, those are pretty hard questions to answer. My only hope is that somewhere down the track we can just move towards a single standard for SSD drives, and a standardised measurement method for read speed, write speed and endurance. The disk vendors seem to have this fairly well organised for spinning platters – so why can’t the SSD boys get their act together?

I know this is a fast-moving market segment, but I’m sure vendors are losing sales in the enterprise space simply because customers can’t work out whether a certain drive will in fact be safe to use in their servers.

Come on lads – remember the KISS principle.

Ciao
Neil


Setting up RAID arrays for Small Business Server …

August 11, 2011

I wrote this a long time ago, but it still holds true today …

Windows Small Business Server is an extremely popular complete operating system for small to medium businesses throughout the world. It contains most applications that a business requires in a simple-to-install-and-manage package.

This article concentrates on the storage needs of SBS, in particular creating the correct RAID arrays to get the best performance and flexibility from your system. It goes without saying that you require a system with sufficient CPU and memory capacity, but that information can be provided by any system integrator.

Horses for Courses … different RAID types for different types of data

While there are many different “types” of RAID arrays, they fall into two basic categories … parity and non-parity RAID arrays. Each has its benefits and drawbacks, and each type is suited to different kinds of data.

Non-parity RAID

Non-parity RAID can be simply defined as RAID 1 (mirror), RAID 10 (stripe of mirrors) and RAID 1E (stripe of mirrors on odd numbers of disks).

The benefit of non-parity RAID is very good write speed, especially for small data writes. This makes these arrays very well suited to operating systems and any kind of database file.

The downside to non-parity RAID is the cost … these arrays take up considerable disk space. For example a RAID 1 (mirror) gives 50% usable space from the disk drives (2 drives give 1 drive’s worth of space).

Parity RAID

Parity RAID can be simply defined as RAID 5 (distributed parity capable of surviving single disk failures), RAID 6 (distributed parity capable of surviving 2 disk failures) and RAID 5EE – a different kind of RAID 5 that contains both parity data and hot spare disk space.

Parity RAID is the great all-rounder of RAID arrays. RAID 5 is generally good for most types of reads and writes, and gives quite good streaming write speeds, but it suffers from performance problems when writing small amounts of data (such as database and operating system writes). It is the most economical of RAID arrays in that the equation for capacity is N-1 (you lose one disk’s worth of capacity). For example, 3 drives in a RAID 5 will give 2 drives’ worth of capacity. 10 drives in a RAID 5 will give 9 drives’ worth of capacity.
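
For those who like to see the arithmetic, here’s a rough sketch of the capacity rules described above (Python used purely for illustration; real controllers reserve a little space for metadata, so treat the numbers as approximate):

```python
# Approximate usable capacity for the RAID levels discussed above.
# Illustrative only; real controllers reserve a small amount of metadata space.

def usable_capacity_gb(drives: int, drive_size_gb: float, level: str) -> float:
    if level in ("1", "10", "1E"):       # non-parity: everything is mirrored, 50% usable
        return drives * drive_size_gb / 2
    if level == "5":                     # single parity: lose one drive's worth (N-1)
        return (drives - 1) * drive_size_gb
    if level == "6":                     # double parity: lose two drives' worth (N-2)
        return (drives - 2) * drive_size_gb
    raise ValueError(f"unhandled RAID level: {level}")

print(usable_capacity_gb(2, 500, "1"))    # 500.0  -> 2 drives give 1 drive of space
print(usable_capacity_gb(3, 500, "5"))    # 1000.0 -> 3 drives give 2 drives of space
print(usable_capacity_gb(10, 500, "5"))   # 4500.0 -> 10 drives give 9 drives of space
```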

Hot Spare Disks

A hot spare disk is a hard drive that does not contain data, but is simply spare to the system. When a drive fails a hot spare disk will “kick-in” and replace the failed drive, rebuilding all the RAID arrays affected by the disk failure in as short a time as possible.

This is very important. It minimises the time in which the system is at risk of a second drive failure (which in most cases would cause data loss). The system rebuilds automatically putting the RAID arrays back into a safe state, allowing the administrator time to source and replace the failed drive.

So if there is room in the system, and the customer can afford the implementation, then put a hot spare in the box. Especially if the customer site is well-away from an easy supply of hard drives (remote location), it makes a great deal of sense to have the hot spare kick-in and rebuild the array as quickly as possible, giving the system administrator time to source that replacement drive and re-implement a hot spare.

Different types of hard drives

The two basic types of hard drives available on the market today are SATA and SAS. For the sake of this exercise I’ll ignore SSD because of the price … people who are using SBS are doing so because it’s quite cheap … so it seems a bit silly to put really expensive SSD drives in the box.

SATA drives are cheap, fairly reliable and have a large capacity. They are commonly found in desktop and laptop systems, and are now finding their way into servers because of price and capacity requirements. The thing to note about SATA drives is their inability to do many things at once. They are single-tasking devices with fairly slow response times … in other words they are great for streaming but in database applications they suffer from not being able to react at the speed of their more expensive cousins.

SAS drives on the other hand are smaller, much more expensive and very much faster. SAS drives are excellent for database and operating system work because they do multiple things at once (multi-tasking) and have very fast response times. In other words they react very quickly to the small reads and write requests from databases.

Different types of drive controllers

Inside all servers is (or should be) a RAID controller. A RAID controller gives a system the ability to survive drive failures without losing data, and allows a system administrator to create different performance sections within the one system.

RAID is a compromise between performance, reliability, cost and capacity (pick any 3). While I say it is a “compromise” I don’t mean this as a detraction. RAID controllers are essential to server systems to ensure continuity of data to an organisation (and help minimise expensive system downtime).

Some manufacturers create RAID controllers that connect only SATA hard drives (SATA controllers). These are limited in the number of drives they can attach to and will only work with SATA drives.

Adaptec RAID controllers are “SAS” controllers. This means they talk to SAS drives AND SATA drives. In fact you can connect both SAS and SATA hard disks to the same RAID controller. This gives great flexibility in building a system by using drive types that suit your data needs, but we’ll talk about that later.

The components of SBS

These can be fairly simply defined as:

The Operating System
Exchange
SQL
Data

Of course there may be other specialised applications a user runs on Windows SBS, but for the moment we’ll concentrate on the components that come with the standard install package.

Why is this an issue? The different components of SBS have different data characteristics. You can basically put them in two categories … database-type data and general-purpose or “streaming” data.

With SBS the operating system (Windows Server), Exchange and SQL all have database-type characteristics … many small reads and writes to the underlying disks in a completely random manner.

The data portion of the disks (Word, Excel, PowerPoint, customer data) is generally less dependent on speed, with large capacity being much more of an issue to the system administrator and end user.

Basically, with SBS, what you want is a disk structure that is good for databases AND a disk structure that is good for general storage … however they are not the same thing.

Building RAID arrays … the old and new ways of the world

In the past, and still with many of our competitors, the building block of a RAID array was a complete disk drive. This meant that if you wanted to create a mirror of 2 x 250GB drives, the mirror consumed both drives completely and no space was left on the drives for any other use.

Similarly, a RAID 5 on 4 disks used the complete space of all 4 disks … giving the system builder no flexibility to do anything else with those drives.

Now if you had an unlimited number of hard drives to play with then this would not seem much of a problem … however that is not the economic or practical reality of building computer systems today.

Adaptec RAID controllers use a different methodology to building RAID arrays. Instead of the basic building block of the array being a complete drive, it is now a portion of the disk (being whatever size portion you want to use). This is commonly called a “container”.

Therefore a RAID array on an Adaptec RAID controller can be made up from small containers on different hard drives. It is possible to create different RAID types on the same set of disks using this approach.

For example … if I had 4 hard drives in my 1U server, I could create a RAID 10 of say 50GB capacity across all 4 hard drives. This would in fact use only 25GB from each of the hard drives. The rest of the space on the drives can then be utilised by the RAID card to create other RAID arrays (of the same or different types).

Putting this all together … the problems

SBS is an unusual software package … it places demands on a system for both database and streaming-type data, generally on a lower-end server system. By its very nature Microsoft have created a system that users see as being a cheaper solution than purchasing all the individual components, with many users carrying that cost-saving mindset across to the hardware that the system is installed on.

This pretty much means SATA disks. While SAS disks would be much better, they are generally too expensive for the type of customer who is installing SBS, especially if the customer is looking for capacity as well as performance.

While I have nothing against SATA disks, and they are perfect for streaming and general-purpose data, they are not ideally suited to database applications. Now as we have read before, SBS has some applications that have database-type read and write characteristics, and some applications that are much more general in nature.

This means that running SATA drives puts the database components at a disadvantage. To counter this it is essential to run these components on a non-parity RAID array (preferably a RAID 1E or RAID 10).

However, RAID 1E or RAID 10 consume a great deal of disk space (50%), which makes them much less suitable for the general data storage that comes with the majority of the customer’s data.

So here is the problem.  What we really want is SAS drives running non-parity RAID for database applications, and SATA drives running parity RAID for general data storage. All of a sudden to get the best performance and flexibility from your SBS system you need to spend a lot of money on the server hardware. This flies in the face of saving money by using SBS in the first place.

The solution (or at least one of them)

The majority of users will use SATA drives for their SBS system … price and capacity dictate that this is the most sensible way to go.

So …

Using an example of a server with 4 x 500GB hard drives, the following scenario would be an ideal implementation for an SBS system.

1 x 50GB RAID 10 array for the Operating System

1 x 100GB RAID 10 array for Exchange and SQL

1 x 1.2TB RAID 5 for general data

The benefits of this implementation are:

Both the OS and Exchange/SQL applications reside on a non-parity RAID array (RAID 10). This gives them the best disk performance that can be achieved with 4 disks (while still retaining data safety).

The data volume would reside in the large RAID 5 array. This gives good performance for general file serving, while maximising the capacity of the remaining space on the drives.

Using an implementation like this, a user can get the best of both worlds. High-speed, non-parity RAID arrays for the OS and database applications, and a large parity array for their general fileserving data.
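
If you want to check those numbers yourself, here’s a rough back-of-envelope sketch of how the layout fits on 4 x 500GB drives (Python purely for illustration; real drives format to slightly less than their label size):

```python
# Rough check that the suggested layout fits on 4 x 500GB drives.
drives = 4
drive_gb = 500.0

# Mirrored (RAID 10) containers consume twice their usable size in raw space,
# spread evenly across all member drives.
os_raw_per_drive = 50.0 * 2 / drives      # 25GB per drive for the OS array
db_raw_per_drive = 100.0 * 2 / drives     # 50GB per drive for Exchange/SQL

left_per_drive = drive_gb - os_raw_per_drive - db_raw_per_drive   # 425GB

# RAID 5 across the remaining space loses one drive's worth to parity.
data_raid5_gb = left_per_drive * (drives - 1)                     # 1275GB, roughly 1.2TB

print(f"{left_per_drive:.0f}GB free per drive, data volume ~{data_raid5_gb / 1000:.2f}TB")
```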

It is important to note that this feature resides on all Adaptec 3, 5 and 6 series RAID controllers. Competitors use a different technique, called slicing, to break a RAID array up into different logical volumes (drives) for use by different parts of the operating system. The problem with this approach is simply the fact that the underlying RAID array is the same for all logical drives. So if a user chooses RAID 5 as the underlying RAID structure, then all logical drives are running on RAID 5 – which as we have read is not good for databases. Adaptec have taken the approach that the RAID structure should be flexible, and a user should be able to match the RAID type to the data sitting on those disks.

The problems of growth …

Building a server that will have sufficient storage for a customer for the entire life of the server puts a large financial burden on the initial build of the server … paying for a lot of disks that you aren’t currently using is not a good use of business cash reserves.

So … start with the capacity you require. You can add drives to a system at any time, and expand the RAID arrays on your existing drives to incorporate the new drives. There are, of course, different caveats on this with different RAID types; RAID 5 and RAID 6 are ideal candidates for adding drives and increasing the size of the system. (see note at end of this article)

Does this take a long time? … possibly (depends on the size of the disks). Does this mean a lot of downtime? … No. Once the disks are added to the system then it can be restarted and this expansion process completed in the background, while users are working on the system.

But what if I only have 3 disks?

Three disks bring about a slight change of plan to the solution described above. I’m no fan of mirrors … they are safe but not particularly fast, and they only use 2 of the 3 drives available.

In this case, instead of the RAID 10 described in my first solution, you would use a newer type of array called RAID 1E. It’s basically a mirror on 3 or more drives. A mirror on an odd number of drives? Sounds weird, doesn’t it, but it works, and it is faster than a plain mirror because you have more disks working for you rather than just the 2 of the mirror. This also works in my above solution because 3 disks will handle the RAID 5 duties of general file serving for SBS.

Using different disks types for ultimate performance/flexibility

My ideal solution for SBS (or just about any other server system for that matter) is to combine both RAID types and disk types in the one system. SAS drives are very fast, with rapid response times in database and OS applications. Therefore, my ideal solution would be 3 SAS drives with two RAID 1E arrays created on them … one for the OS and another for the database portion of SBS.

I’d then add 3 or more SATA drives in a RAID 5 or 6 to create a large space for general fileserving duties. This combination places the database-type data on fast responding SAS disks, but keeps the cost down for the large capacity file-serving duties of the server.

Card types this works with

Adaptec’s Series 2, 3, 5 and 6 cards can all create multiple arrays on the same set of disks. Note that the Series 2 (and 6E products) are entry-level hardware RAID cards that can’t do RAID 5 or 6, but the Series 5 and 6 can create just about any RAID type you wish on as many drives as you wish to attach.

What happens when a disk dies in this scenario

For the technically-minded folks who’ve made it this far through this blurb, you’ll be wondering … what happens if a drive dies when there are 2 or more arrays on those disks? Nothing unusual or wondrous happens at all. All arrays that are located on the failed drive are impacted but keep running.

When the drive is replaced, or when the hot spare kicks in, all arrays are rebuilt and life goes on as normal.

Summary

(It’s always nice to see that word … means the ramble is almost finished) …

Windows Small Business Server and Adaptec RAID controllers make ideal partners. Adaptec’s ability to create different RAID types on the same set of disks helps get the most performance and capacity to enable SBS to fulfil the customer’s needs.

If you learnt RAID five years ago then it’s time to go back to school. A lot has changed. Multiple arrays on the same disks … different disk types in the same system … expanding systems on the fly while customers are still working …

Food for thought.

Ciao
Neil


RAID 5 and databases …

August 11, 2011

Just finished a quick road trip to the other side of the country where I was espousing the benefits of our new MaxIQ product. As a consequence of talking to database integrators we spent a lot of time discussing existing implementations that they were having problems with (and therefore candidates for maxCache).

Something that came to light on a regular basis was the fact that a lot of integrators use RAID 5 for every system they implement, whether it be a fileserver or a database server. Now RAID 5 is a good all-rounder. It’s great for fileserving and general server use, makes good use of the available disk space, and most people are comfortable enough with the technology to actually think they understand it.

So what’s wrong with that? Simply put, RAID 5 is (in general) no good for environments with lots of small random writes. Since I was promoting maxCache, which is excellent for small random reads, naturally I found myself in environments where there were a lot of random writes. On almost all occasions customers were using RAID 5. Most were using SAS disks, which meant they had recognised the significance of the performance issues they faced on database servers and had opted to offset those issues with fast-spinning/seeking disks.

Therefore we had a scenario where customers were trying to fix performance problems with hardware alone. Put a faster RAID card in the machine, put faster disks in the machine, add more RAM, improve the processor … but what about something as simple as using a different RAID level? RAID 5 is a great performer on many disk types, over a wide variety of read/write scenarios and data sizes, but it has one weakness. Random writes become slow because of the partial-stripe write characteristics of RAID 5. There are multiple reads and writes on the disks, plus parity calculations to be made, for every small write in a RAID 5.

So what to do? Simply put, for most database applications you should consider using RAID 10. RAID 10 does not do parity, but simply writes the same data to two separate disks within the array. Consequently, for a variety of technical reasons, RAID 10 has faster random writes than RAID 5. Yes, there are scenarios where RAID 5 is good for databases (in mostly read-type database environments), but in general, especially in the SMB market with accounting databases, SQL, Exchange etc, RAID 10 is a better option for database performance.
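
A rough way to picture the difference is to count the disk operations behind one small write (a sketch of the standard read-modify-write accounting, not a benchmark):

```python
# Back-of-envelope disk I/O cost of a single small (sub-stripe) write.
def ios_per_small_write(level: str) -> int:
    if level == "10":
        return 2   # write the data to both halves of the mirror
    if level == "5":
        return 4   # read old data, read old parity, write new data, write new parity
    raise ValueError(f"unhandled RAID level: {level}")

writes = 1000      # a burst of small random database writes
for level in ("10", "5"):
    print(f"RAID {level}: ~{writes * ios_per_small_write(level)} disk operations")
# RAID 10: ~2000 disk operations; RAID 5: ~4000, plus the parity calculations themselves
```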

Remember … you don’t have to make your entire disk structure the same RAID type (mix and match for your different data types). You can have more than 4 drives in the RAID 10, and you can have different disk types connected to the same card, so you can match your RAID type to different physical hardware arrays (just don’t put both disk types in the same array).

Point of the exercise? Hardware alone won’t give the performance every time. It will help, but you need to keep an eye on your RAID type to ensure it matches your data set.

Ciao
Neil


Using large hard drives …

August 11, 2011

With the drive vendors now selling 3TB drives (and 4TB drives in the near future), what are the advantages/disadvantages of using these massive storage devices?

Large drives are great … you can fit a lot of data into a much smaller storage footprint, but there is a downside. Whether you purchase a 500GB, 1TB, 2TB or 3TB drive from the same manufacturer, if they are the same technology then they pretty much run at the same speed. That’s great for standardisation as far as the drive vendor goes, but it impacts on user mentality and overall performance when it comes to RAID.

Almost all server builders will use RAID to safeguard against drive failure. The problem is that large drives allow them to build servers that give the customer the space they want, but use fewer, rather than more, drives in the RAID array.
It also means that customers are using mirrors (2 drives), or 3-drive RAID 5 configurations, more often.

The problem here is that the fewer drives you use, the fewer “workers” you have reading and writing data.

So a combination of small drive counts and slow RAID types (mirrors or small RAID 5 arrays) leads to less-than-impressive performance in general.
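
As a purely illustrative sketch (the 120MB/s per-drive figure below is an assumption, not a measurement), the “workers” argument looks something like this:

```python
# Very rough "more workers" illustration: aggregate streaming throughput scales
# with the number of spindles doing the work. Numbers are illustrative only.

PER_DRIVE_MBPS = 120   # assumed sustained throughput of one drive

configs = {
    "2-drive mirror (RAID 1)": 2,
    "3-drive RAID 5":          3,
    "8-drive RAID 10":         8,
}

for name, spindles in configs.items():
    print(f"{name}: ~{spindles * PER_DRIVE_MBPS}MB/s of raw drive bandwidth to work with")
```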

So are big drives better? Not really … they’re just bigger.

If you are interested in performance as well as capacity, think hard about the RAID configuration you put together, including just how many drives you use to give you the best combination of speed and capacity.

System building is not quite as simple as it used to be! :-)

Ciao
Neil


I think the government is trying to tell me something …

August 11, 2011

The wife hit the roof this morning. Literally (and believe me it’s a long way from the top of the wife to the ceiling). From July 1 the price I pay for electricity increased by 20% … ouch. This is a significant increase in the running cost of a home and a business. It’s a wake-up call for me to get serious about looking at exactly what uses electricity in both my home and office … and to look at what I can do to reduce the amount of electricity I use.

There are several basic electricity uses in the office that I can work around … reducing the number of lights that are on in areas not heavily used … putting on a jacket instead of pumping the air-conditioning flat out in the cold weather (yes, it’s cold where I live), but the biggest electricity use in the whole office is my lab.

It’s really convenient to have all my servers and NAS devices humming away so that I can access whatever I want, or test whatever I need, without waiting. It’s also necessary for some of my machines to be running all the time whether I like it or not.

So on those machines I turn on power saving on the hard drives. That’s a pretty simple process but let’s look at exactly what it does for me. If you want the real nuts and bolts look at our FAQ on Power Saving: http://www.adaptec.com/en-us/products/controllers/hardware/sas/performance/sas-5405/_compatibility/ipm_faqs.htm?nc=/en-US/products/Controllers/Hardware/sas/performance/SAS-5405/_compatibility/IPM_FAQs.htm

However, in simplistic terms, this is the scenario in my home office/lab. I work a ridiculous number of hours (that one is for if the boss reads this blog), but I don’t access my servers all the time. I sleep occasionally (when I don’t access my servers at all) and I do not, repeat do not, work weekends (except when travelling). Now I’ve done a few calculations that are a little scary, but eye-opening:

Hours in a week – 24×7=168
Hours worked per day – 15
Days worked per week – 5
Hours worked per week (maximum) – 75
Percentage of time that I could possibly, remotely, conceivably access my servers – 44.64%

Simply put … that’s 55.36% of the week that I don’t access my servers. Now consider the fact that the hard drives in my servers are all spinning all the time and you start to ask some serious questions … like “why?”
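
For anyone who wants to check my numbers, the sums are simple enough:

```python
# The arithmetic behind the percentages above.
hours_per_week = 24 * 7        # 168
work_hours = 15 * 5            # 75: 15-hour days, 5 days a week

access_pct = work_hours / hours_per_week * 100
print(f"time I could conceivably access the servers: {access_pct:.2f}%")        # 44.64%
print(f"time the servers sit untouched:              {100 - access_pct:.2f}%")  # 55.36%
```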

Since, for some reason known only to me, I use Adaptec controllers in my servers, I have enabled power saving on all my RAID arrays. This means that the server is still running, and I can access it whenever I want, but the hard drives are not spinning unless I either need to write data to them or read something from them. So in theory I’m now saving roughly 50% of the drives’ electricity costs. That’s probably not realistic in several ways. Firstly, the rest of the server is still running (power supply, processor, fans etc), so that will reduce the savings somewhat. On the other hand, servers like my backup server, which runs for less than an hour a day, gain massive power savings from having the hard drives asleep for a large portion of the day.

I’ll do the maths over the coming months, and watch the electricity bill (as will the wife, I’m sure), but it makes sense for even a small operation like mine to save running costs. Imagine what it can do for a larger organisation!

Ciao
Neil


Storage Manager Email Notifications …

August 10, 2011

This handy tip is part of a larger document that one of our very knowledgeable people put together. It’s part of our “best practices” to ensure that your system performs to its utmost for the longest period of time.

Adaptec Storage Manager can be configured to send email messages (or notifications) about events on a system in your storage space. We recommend doing this if your storage space is not managed by a dedicated person, or if that particular system is off-site or not connected to a monitor. Email notifications can help you monitor activity on your entire storage space from any location, and are especially useful in storage spaces that include multiple systems running the Adaptec Storage Manager Agent only.

To set up email notifications:

  1. In the Configure menu (on the tool bar), select the system you want, and then select Email Notifications.
  2. The Email Notifications window opens. The SMTP Server Settings window opens if you haven’t set up email notifications previously.
  3. Enter the address of your SMTP server and the “From” address to appear in email notifications. If an email recipient will be replying to email notifications, be sure that the “From” address belongs to a system that is actively monitored.
  4. Click OK to save the settings.
  5. In the Email Notifications window tool bar, click Add email recipient. The Add Email Recipient window opens.
  6. Enter the recipient’s email address, select the level of events for which the recipient will receive an email, and then click Add. Repeat this step to add more email recipients. Click Cancel to close the window.
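
One optional extra: if you want to be sure the SMTP details you’ve just entered actually work, a quick test message sent from any machine will confirm it. The server name and addresses below are placeholders, not anything specific to Storage Manager:

```python
# Hypothetical SMTP test; replace the placeholder server and addresses with your own.
import smtplib
from email.message import EmailMessage

SMTP_SERVER = "mail.example.com"            # placeholder SMTP server
FROM_ADDR = "storage-alerts@example.com"    # the "From" address you plan to use
TO_ADDR = "admin@example.com"               # a recipient you can check

msg = EmailMessage()
msg["Subject"] = "Storage Manager notification test"
msg["From"] = FROM_ADDR
msg["To"] = TO_ADDR
msg.set_content("If you can read this, the SMTP settings should work for notifications.")

with smtplib.SMTP(SMTP_SERVER) as server:
    server.send_message(msg)
```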

This simple tool will help you monitor your systems and allow you to take action when something goes wrong … rather than waiting for a second drive to fail and all hell to break loose.

You can find the full document (which I’ll be plagiarising further in this blog) at:
http://download.adaptec.com/pdfs/miscellaneous_support/Adaptec_RAID_Maintenance_Best_Practices_v2b.pdf

Ciao
Neil


Disk in … disk out …

August 10, 2011

(I think I’ve watched the Karate Kid too many times)

System builders are often faced with the dilemma of adding enough drives to a system to meet a customer’s capacity and performance requirements. The traditional approach has been to purchase a chassis large enough to hold all the drives internally. This approach has a few problems.

Firstly, you need to spend up big on the initial chassis. Even if the customer doesn’t need all that space right now, if you are going internal then you still have to spend enough money up front to purchase the 24 or 48-drive chassis.

A smart customer will want to purchase a system that can grow with his needs, but purchasing a massive server up front often puts the quote out of the reach of the initial sale.

Purchasing a smaller chassis without expansion capability often means people are thinking of replacing the hard drives at a later date with larger drives to meet a capacity upgrade requirement. Bad move … this is not cost effective, takes a lot of time and is not in the best interests of the customer (even though it may get the system builder a bit of extra income at some future point in time).

So … you want to build a system now that meets today’s requirements for capacity, and you only want to pay for what you are using now – smart. But … you know your space requirements will grow in the future, and you don’t want to throw away a perfectly good server just because it has run out of disk space a year after purchase.

What to do?

It’s pretty simple really … just put the disks “outside” the server. Many users can purchase all the horsepower and memory requirements they need in a cheap 1U server. By adding a JBOD (just a bunch of disks – I love that acronym) and a RAID card with external connectivity, you can add as many drives as you like. On top of that you can just keep daisy-chaining JBODs together until the cows come home, giving you as much storage capacity as you can possibly imagine (try 250TB+ in the one server).

If this is so simple, why do people shy away from it? FUD (fear, uncertainty and doubt) is generally the reason. Let’s look at some of the questions people have when considering this type of configuration:

  • I’ll create a performance bottleneck between the server and the storage …
  • If one cable breaks I’ll lose all my storage …

A modern SAS RAID card (and yes, it accepts both SAS drives and SATA drives at the same time) has a funny-looking new type of external connector called a 4x (4 by, 4 lane, 4 channel – call it what you will) connector. This is, practically, 4 single SAS/SATA communication channels joined together at the hip.

A single SAS/SATA connection runs at 3 or 6 gigabit – roughly 300MB or 600MB per second throughput. But SAS is smart, much smarter than SATA. When a SAS card finds a 4x cable going from the card to another device that also uses a 4x connector, it joins all 4 lanes into one large pipe. Sort of like multiplexing, for those of us old enough to remember that technology.

The end result of all these smarts is that the pipe between the RAID card and the JBOD is capable of sustaining 1200MB-2400MB per second throughput. That’s pretty fast, and eliminates any bottleneck concerns that a user may have about connecting drives externally to a card.
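
The arithmetic behind those figures is straightforward (a rough rule of thumb, ignoring protocol overheads):

```python
# Wide-port maths: four SAS/SATA lanes ganged into one 4x connector.
LANES = 4
for link_gbps, per_lane_mbps in ((3, 300), (6, 600)):   # nominal link speed -> rough MB/s
    print(f"{link_gbps}Gb/s lanes: ~{per_lane_mbps}MB/s each, "
          f"~{per_lane_mbps * LANES}MB/s across the 4x connector")
# 3Gb/s lanes: ~300MB/s each, ~1200MB/s across the 4x connector
# 6Gb/s lanes: ~600MB/s each, ~2400MB/s across the 4x connector
```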

Now, the question about the cable. When was the last time you saw an external cable “break”? I’ve spent a lot of years in tech support, especially dealing with SCSI. Now SCSI connectors were dodgy to say the least, and were (and still are) often used on devices like external tape drives where they were connected and disconnected frequently.

Even so, over the years I have hardly ever replaced a SCSI cable. Add to that the fact that modern SAS external cabling uses a much better connector type, and that cables connecting servers and JBODs are not connected/disconnected frequently, and you end up with an almost bulletproof connection system that is by no means the weakest link in the data storage chain.

Now that we’ve dispelled the myths (FUD), let’s look at the benefits of building a server system in this manner.

Firstly, you start with a 1U server with 4 drives (or 2 or 3 and a CDROM). Spend your money on CPU and memory components. In that box you need a SAS RAID card that has 4 internal connectors to handle the (up to) 4 drives inside the box, and a 4x external connector to connect to a JBOD. An example of a card like this would be Adaptec’s 5445 or a 6445.

Then buy yourself a JBOD. These are generally 2U rack units that take 12 drives. You may or may not want to purchase all 12 drives now … just purchase as many as you need. Therefore your initial investment in the server hardware comes down to a beefy 1U server, a good quality RAID card, a JBOD and as many drives as you currently require.

But time goes on … and sooner or later you’ll run out of space. Simply purchase more drives and add them to the JBOD. Filled the JBOD? No worries, purchase another JBOD and daisy-chain it to the first one. Start filling that with drives. Run out of space? Just repeat the cycle with another JBOD and keep on adding drives for as long as you wish. The technical maximum at this point in time is 250 hard drives, which for all intents and purposes is unlimited … I haven’t seen anyone come even close to this yet.

So you end up with a big expensive server – great. However you paid for it over a couple of years, and took advantage of the dealer specials along the way. You also gained from hard drive price decreases. Along the way were hard drive capacity increases (these two go hand in hand like death and taxes).

It’s hard to see anything but benefits to users … reduced capital expenditure, increased flexibility over a fixed system, access to better technology as it both appears and drops in price.

Sounds like a win win win to me.

Ciao
Neil


Drive and tape …

August 10, 2011

Question to the Storage Advisors: Pete: Wanting to use the 5805 with SAS hard drives and also a Quantum SAS tape drive that’s on the compatibility list. Is there any performance disadvantage to using them both on the same card?

Pete … good question. I presume you’ve asked this because in days gone by you may have asked the same question with SCSI and been told “don’t do it”. However SAS is a different kettle of fish.

Yes, you can run drives and tape on the same controller without performance degradation. In SAS (and SATA) all connections are point-to-point, so there are no bus issues to contend with as in SCSI days.

Hope this helps.

Ciao
Neil


Building your system …

August 10, 2011

Just had a discussion with a customer who has a 5805 RAID card and seven 1TB drives (vendor not an issue here). He originally contacted me asking what the grey ribbon cable on the cables provided with our card was for, but we quickly moved past that to looking at his planned system config.

Note that all drives are 1TB in capacity, and all in a hot-swap backplane. The customer is installing Windows 2008 and Exchange (full), and will be using the server as a fileserver as well as an Exchange server. CPU, memory etc were all more than adequate.

Planned spec was:
2 x 1TB drives in a RAID 1 mirror for the OS (1TB capacity)
4 x 1TB drives in RAID 5 for Exchange and data (3TB capacity)
1 x 1TB drive as a hot spare

Problems

1TB for an OS installation is beyond what even Microsoft need these days, and it wastes a fantastic amount of space. Having 4 drives in a RAID 5 is not an issue, except for the fact that Exchange is a database and works much better on RAID 10 than on RAID 5 … the small writes involved in Exchange are not friendly to RAID 5 (or vice versa).

Capacity-wise the customer is ending up with 3TB for Exchange and data.

My suggested config for this server is …

1 x RAID 10 on 6 drives for the OS – 100GB capacity … this will use ~33GB off each disk
1 x RAID 10 on 6 drives for Exchange – 200GB capacity … this will use ~66GB off each disk
1 x RAID 5 on 6 drives for data – based on the fact that 1TB drives generally format to around 930GB, there will be approximately 830GB left on each drive. Making a RAID 5 from 6 of these drives will give around 4TB capacity.

The remaining drive will be a hot spare.
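
The per-drive sums behind that layout, for anyone who wants to check them (rough figures only; the ~930GB formatted size is the assumption stated above):

```python
# Rough space check across the six active drives (the seventh is the hot spare).
active_drives = 6
formatted_gb = 930.0                       # typical formatted size of a "1TB" drive

os_per_drive = 100 * 2 / active_drives     # RAID 10, 100GB usable -> ~33GB per drive
db_per_drive = 200 * 2 / active_drives     # RAID 10, 200GB usable -> ~66GB per drive
left_per_drive = formatted_gb - os_per_drive - db_per_drive        # ~830GB

data_raid5_gb = left_per_drive * (active_drives - 1)               # ~4150GB, roughly 4TB
print(f"~{left_per_drive:.0f}GB left per drive, RAID 5 data volume ~{data_raid5_gb / 1000:.1f}TB")
```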

So …

Customer gets

(a) a system OS volume of the correct size … which runs faster than it would on a mirror
(b) a fast RAID 10 for his Exchange … which runs a lot faster than it would on RAID 5
(c) a RAID 5 for data which is 4TB in capacity (a whole lot better than his current 3TB for both Exchange and data)

End result … a lot quicker, better utilised system.

This is a classic example of how “knowing” about the capabilities of your RAID card can give you a much better result than just doing the same old things you’ve been doing for the last 10 years.

So what was the grey cable for? That’s another blog if anyone is interested.

Food for thought.

Ciao
Neil


Did you know … OLTP

August 10, 2011

Not many people have picked up on the fact that we have a performance tuning option on our 5 and 6 series cards that optimises the card cache for OLTP. What is OLTP, I hear you ask? Online Transaction Processing. Now why isn’t that OTP instead of OLTP? Doesn’t matter … I’m drifting again.

The setting can be found by right-clicking on the controller properties in ASM (Change Performance Mode). The performance options are “Dynamic” and “OLTP”. Dynamic is the default, and is best for just about every kind of system other than a database server.

OLTP, on the other hand, improves database performance … normally by a decent amount. Of course a customer will have to test … you can never “know” from looking at specs whether a system will benefit, but it’s an easy turn-on, turn-off feature that can be easily tested.

Note that this is a card-wide setting. Note also that OLTP will hurt streaming or sequential data (by that I mean it will slow it down). So if you have a mixed mode server then don’t use it … Dynamic is for you. But if you are building a SQL server running on SAS drives on RAID 10 because you are looking for the absolute best you can be … give OLTP a go.

Bet you didn’t know that one.

Ciao
Neil
