I always knew I had no “Q” …

June 29, 2015

Talk about a different way to promote a new product. Take an existing product, remove a feature and drop the price. Sounds pretty easy but what you end up with is something pretty spectacular.

Up until now Adaptec’s only 16-port internal RAID card has been the 81605ZQ. The “Z” is for ZMCP (zero maintenance cache protection) – in other words it has the supercap functionality built into the card – with just the supercapacitor to plug in (no daughter card). The “Q” part of the moniker denotes maxCache capability – the 81605ZQ is a caching controller (great for specific applications).

But what did you buy if you wanted a 16-port internal controller but did not need the "Q" function? You might be putting together a pure SSD system, or you might be building a nice large storage server that doesn't need caching. The only choice was to go back to the 7 series.

So we took the 81605ZQ and removed maxCache. That makes it an 81605Z. Comes standard with 16 internal ports and cache protection … but note that it can’t be upgraded to a “Q” model – you can’t add that via firmware etc.

As an aside … you should note that you can swap out the drives from an 81605Z and an 81605ZQ without any reconfiguration – the drivers etc are all the same and both cards recognise the RAID arrays from the other card.

So there you have it … a new card. It does less than its "Q" cousin, but then again, it costs less :-)
Now you know.

Ciao
N


Upgrading to maxCache?

June 17, 2015

Some thoughts from the Storage Advisor

I get a lot of calls from people who are interested in maxCache … how does it work, what does it do, and most importantly … will it work for me? So I thought I’d put some ramblings down on what has worked for customers and where I think maxCache could/should be used.

Firstly, just a quick summary of maxCache functionality in plain English. You need an Adaptec card with "Q" on the end of it for this to work, and no, you can't upgrade a card without "Q" to a "Q" card – but you can swap the drives from an existing controller over to a "Q" controller, then plug in SSDs and enable maxCache (bet you didn't know that one). maxCache takes SSDs and treats them as read and write cache for a RAID array – that's a simplification, but it's pretty close to what happens: you're adding a very large amount of cache to the controller.

So let's take an existing system that's running 8 x enterprise SATA in a RAID 5 – a pretty common configuration. That might be connected to a 6805 controller in an 8-bay server. You want to make this thing faster for the data that has ended up on it, without reconfiguring the server or rebuilding the software installation. This server started life as just a plain file server, but now hosts a small database, accounting software and a terminal server … a far cry from its original role. You want to increase the performance of the random data. maxCache does not affect the performance of streaming data – it only works on small, random, frequently accessed blocks of data.

Upgrade the drivers in your OS (always a good starting point) and make sure the new drivers support the 81605ZQ. In most operating systems this is standard – we have, for example, one Windows driver that supports all our cards. Then disconnect the 6 series from the drives, plug in and wire up the 81605ZQ and reboot. All should be well. You will see some performance difference as the 8 series is dramatically quicker than the 6 series controller, but the spinning drives will be the limiting factor in this equation.

Once you've seen that all is working well, and you've updated maxView management software to the latest version etc, then shut the system down, grab a couple of SSDs (let's for argument's sake say 2 x 480GB Sandisk Extreme Pro) and fit them in the server somewhere. Even if there are no hot-swap bays available there is always somewhere to stick an SSD (figuratively speaking) – they don't vibrate and don't get hot, so they can be fitted just about anywhere.

Create a RAID 1 out of the 2 x SSDs. Then add that RAID 1 to the maxCache pool (none of which takes very long). When finished enable maxCache read and write cache on your RAID 5 array. Sit back and watch. Don’t get too excited as nothing much seems to happen immediately. In fact maxCache takes a while to get going (how long is a while? … how long is a piece of string?). The way it works is that once enabled, it will watch the blocks of data that are transferring back and forth from the storage to the users and vice versa.

So just like a tennis umpire getting a sore neck in the middle of a court, the controller watches everything that goes past. It learns as it goes what is small, random and frequent in nature, keeping track of how often blocks of data are read from the array. As it sees suitable candidate data blocks, it puts them in a list. Once the frequency of a block hits a threshold, the block is copied in the background from the HDD array to the SSDs. This is important – note that it is a "copy" process, not a move.

Once that has happened, a copy of the data block lives on the SSDs as well as on the HDD array. Adaptec controllers use a process of “shortest path to data”. When a request comes for a block of data from the user/OS, we look first in the cache on the controller. If it’s there then great, it’s fed straight from the DDR on the controller (quickest possible method). If it’s not there then we look up a table running in the controller memory to see if the data block is living on the SSDs. If so, then we get it from there. Finally, if it can’t be found anywhere we’ll get it from the HDD array, and will take note of the fact that we did (so adding this data block to the learning process going on all the time).

Why does this help? Pretty obviously the read speed of the SSD is dramatically faster than the spinning drives in the HDD array, especially when it comes to a small block of data. Now as life goes on and users read and write to the server we are learning all the time, and constantly adding new blocks to the SSD pool. Therefore performance increases over a period of time rather than being a monumental jump immediately.

The SSD write cache side of things comes into play when blocks that live in the SSD pool (remembering these are copies of data from the HDD) are updated. If the block is already in the SSD pool then it’s updated there, and copied across to the HDD as a background process a little later (when the HDD are not so busy).
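To make the mechanics above a little more concrete, here is a minimal Python sketch of the general idea – purely an illustration of the concept, not Adaptec firmware (the class name, threshold and tier layout are all made up for this example): count how often blocks come off the HDD array, copy hot blocks to the SSD pool once they cross a threshold, serve reads from the fastest tier holding a copy, and update SSD-cached blocks with a deferred flush back to the HDDs.

```python
# Conceptual sketch of a tiered read/write cache -- an illustration of the
# "learn, promote, shortest path to data" idea, not real controller firmware.

class TieredCacheSketch:
    def __init__(self, promote_threshold=3):
        self.controller_cache = {}   # DDR cache on the controller (fastest tier)
        self.ssd_cache = {}          # blocks that have been copied to the SSD pool
        self.read_counts = {}        # learning table: how often each block is read
        self.promote_threshold = promote_threshold

    def read(self, block, hdd_array):
        # 1. Shortest path to data: controller DDR first
        if block in self.controller_cache:
            return self.controller_cache[block]
        # 2. Then the SSD cache pool
        if block in self.ssd_cache:
            return self.ssd_cache[block]
        # 3. Finally the HDD array -- and note the miss for the learning process
        data = hdd_array[block]
        self.read_counts[block] = self.read_counts.get(block, 0) + 1
        if self.read_counts[block] >= self.promote_threshold:
            self.ssd_cache[block] = data   # copy (not move) the hot block to SSD
        return data

    def write(self, block, data, hdd_array):
        # Write cache: if a copy already lives on the SSDs, update it there and
        # flush it back to the HDDs later, when they are less busy (deferred here).
        if block in self.ssd_cache:
            self.ssd_cache[block] = data
        else:
            hdd_array[block] = data


# Tiny usage example: after a few reads the hot block is served from the SSD tier.
hdd = {"block-42": "user data"}
cache = TieredCacheSketch(promote_threshold=3)
for _ in range(4):
    cache.read("block-42", hdd)
print("block-42" in cache.ssd_cache)   # True -- the block has been promoted
```

The real controller obviously does this at block level in firmware, with the SSD pool built from the RAID 1 of two SSDs described above – the sketch is only there to show why the gain ramps up as the learning list fills rather than appearing instantly.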

End result … your server read and write performance increases over a period of time.

 

Pitfalls and problems for young players …

All this sounds very easy, and in fact it is, but there are some issues to take note of that require customer education as much as technical ability.

Speed before and after

If you don't measure how fast your data is travelling before putting maxCache in the system, you'll have nothing to compare against afterwards, so you can only go by "feel" … what the users experience when accessing data. While that's a valid measure, it's pretty hard to quantify.

Let me share some experiences I had from the early days of showing this to customers. I added maxCache to an existing system for a customer on a trial basis (changing the controller to a Q, adding SSDs etc) and left the customer running for a week, feeling quite confident that it would be a good result when I went back. Upon return, the customer indicated that he didn't think it made much of a difference and wasn't worth the effort or cost. So I put the system back the way it was before I started (original controller and no SSDs) and rebooted. The customer started yelling at me very loudly that I'd stuffed his system … "it was never this slow before!" Truth of the matter was that it was exactly the same as before, so the speed was what he had been living with. Lesson: customers are far less likely to say anything about a computer getting faster, but they yell like stuck pigs as soon as something appears to be "slower" :-)

The second example was in a terminal server environment. This time we could measure the performance of the server by the logon time of the terminal server session. It was pretty bad (about 1 minute). So we went through the process again and added maxCache. The boss of the organisation (who happened to be a good reseller of mine) immediately logged on to TS – and grandly indicated that there was no difference and I didn't know what I was doing. So we went to the pub. Spent a good night out on the town and went back to the customer in the morning (a little the worse for wear). The boss got to work around 10.00am (as bosses do) and was pretty much the last person to log on to TS that morning. Wow, 6 seconds to log on. We then had the Spanish Inquisition (no-one expects the Spanish Inquisition – https://www.youtube.com/watch?v=7WJXHY2OXGE) as to what we had done that night. The boss was thinking we'd spent all night working on the server instead of working on the right elbow.

In reality, the server had learnt the data blocks involved in the TS logon (which are pretty much the same for all users), so by the time he logged in it was mostly reading from the SSDs, hence a great performance improvement. Lesson: educate the customer as to how it works and what to expect before embarking on the grand installation.

The third and last experience was with performance testing. I've already blogged about this, but it bears mentioning here. A customer running openE set up his machine and did a lot of testing (unfortunately in a city far away from me, so I could not do a hands-on demo). Lots of testing with iometer did not show a great deal of performance improvement, but when he finally bit the bullet and put the server into action, his customers were ecstatic – a great performance improvement on the Virtual Desktop software. Lesson: spend a lot more time talking to the customer about how the product works, so they understand it's random data that's at play here, and that performance testing with streaming data won't show any improvement whatsoever.

 

Finally

There are a lot of servers out there that would benefit from maxCache to speed up the random data that has found its way onto the server whether intentionally or not. It needs to be kept in mind that servers don’t need rebuilding to add maxCache, and it can be added (and removed) without any great intrusion into a client’s business.

The trick is to talk to the customer, talk to the users and find out what the problems in life are before just jumping in and telling them that this will fix their problems. Then again, you should probably do that anyway before touching anything on a server … but that’s one of life’s lessons that people have to work out for themselves :-)

Ciao
N


The longest blog article ever? RAID storage configuration considerations…

June 16, 2015

RAID storage configuration considerations (for the Channel System Builder)

SAS/SATA spinning media, SSD and RAID types – helping you make decisions
Some thoughts from the Storage Advisor

Note: I started writing this for other purposes – some sort of documentation update. But when I finished I realised it was nothing like the doc the user requested … and then “write blog” popped up on the screen (Outlook notification). So I took the easy way out and used my ramblings for this week’s update.

When designing and building a server to meet customer needs, there are many choices you need to consider: CPU, memory, network and (probably most importantly) storage.

We will take it as a given that we are discussing RAID here. RAID is an essential part of the majority of servers because it allows your system to survive a drive failure (HDD or SSD) without losing data, along with the added benefits of increased capacity and performance. While there are many components within your system that will happily run for the 3-5 year life of your server, disk drives tend not to be one of them.

So you need to take a long-term approach to the problem of storage – what do you need now, what will you need in the future, and how will you survive mechanical and electronic failures during the life of the server.

 

What sort of drives do I need to meet my performance requirements?

Rather than looking at capacity first, it's always a good idea to look at performance. While the number of devices has an impact on the overall performance of the system, you will not build a successful server if you start with the wrong disk type.

There are three basic types of disks on the market today:

  • SATA spinning media
  • SAS spinning media
  • SSD (generally SATA but some SAS)

SATA spinning drives are big and cheap. They come in many different flavours, but you should really consider using only RAID-specific drives in your server. Desktop drives do not work very well with RAID cards as they do not implement some of the specific features of enterprise-level spinning media that help them co-operate with a RAID card in providing a stable storage platform.

The size of the drive needs to be taken into consideration. While drives are getting larger, they are not getting any faster. So a 500GB drive and a 6TB drive from the same family will have pretty much the same performance.

Note that this is not the case with SSDs. SSDs tend to be faster the larger they get, so check your specifications carefully to ensure you know the performance characteristics of the specific size of SSD you buy – not just what is on the promotional material.

The key to performance with spinning media is the number of spindles involved in the IO process. So while it's possible to build a 6TB array using 2 drives in a mirror configuration, the performance will be limited because only 2 spindles are in operation at any time. If the same 6TB array was built using 7 x 1TB drives in RAID 5, it would be much quicker in both streaming and random data access due to the multiple spindles involved.
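As a rough illustration of the spindle argument, here's a quick sketch – the per-drive figures are assumptions I've picked purely for the example (not vendor specifications), and it deliberately ignores RAID write penalties, mirrored-read tricks and controller overhead.

```python
# Very crude spindle-count arithmetic for the 6TB example above.
# The per-drive figures are illustrative assumptions, not drive specs.
ASSUMED_SEQ_MBPS = 150    # assumed sequential throughput of one SATA spindle
ASSUMED_RAND_IOPS = 150   # assumed random IOPS of one 7,200 RPM spindle

def rough_performance(spindles):
    """Return a back-of-envelope (streaming MB/s, random IOPS) estimate."""
    return spindles * ASSUMED_SEQ_MBPS, spindles * ASSUMED_RAND_IOPS

for label, spindles in [("2 x 6TB mirror", 2), ("7 x 1TB RAID 5", 7)]:
    mbps, iops = rough_performance(spindles)
    print(f"{label}: ~{mbps} MB/s streaming, ~{iops} random IOPS")
```

The absolute numbers don't matter much – the point is that aggregate spinning-media performance scales with the number of spindles, not with the size of the drives.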

SAS spinning media generally rotate faster than SATA drives (often 10,000 RPM or higher vs 5,400/7,200 RPM for SATA), and the SAS interface is slightly quicker than the SATA interface, so SAS drives outperform their SATA equivalents in certain areas – mostly random data access. When it comes to streaming data there is little to no difference between SATA and SAS spinning media.

However, all performance calculations go out the window when SSDs are introduced into the equation. SSDs are dramatically faster than spinning media of any kind, especially when it comes to random data. Keeping in mind that random data storage systems tend to be smaller in capacity than streaming data environments, the SSD is rapidly overtaking SAS spinning media as the media of choice – it is simply so much faster for random reads and writes that it is the number one choice for this type of data.

 

So what about capacity calculations?

Capacity both confuses and complicates the performance question. With SATA spinning drives reaching upwards of 8TB, it's pretty easy to look at a customer's capacity requirements and think you can meet them with just a small number of very large spinning drives.

And that is true. You can build very big servers with not many disks, but think back to the previous section on performance. With spinning media, it’s all about the number of spindles in the RAID array. Generally speaking, the more there are, the faster it will be. That applies to both SATA and SAS spinning media. The same cannot be said for SSD drives.

So if you need to build an 8TB server you are faced with many options:

  • 2 x 8TB drives in a RAID 1
  • 4 x 4TB drives in a RAID 10
  • 3 x 4TB drives in a RAID 5
  • 5 x 2TB drives in a RAID 5
  • 9 x 1TB drives in a RAID 5

Etc, etc.
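If you want to sanity-check these options (or your own variations), the usable-capacity arithmetic is easy to script. Here's a quick sketch using nominal drive sizes – formatted capacity will come in a little lower, as noted further down.

```python
# Quick usable-capacity check for the 8TB options listed above
# (nominal drive sizes; formatted capacity will be a little lower).

def usable_tb(drives, size_tb, level):
    if level == "RAID1":
        return size_tb                   # capacity of a single drive
    if level == "RAID10":
        return drives * size_tb / 2      # half the raw capacity
    if level == "RAID5":
        return (drives - 1) * size_tb    # one drive's worth of parity
    if level == "RAID6":
        return (drives - 2) * size_tb    # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

options = [(2, 8, "RAID1"), (4, 4, "RAID10"), (3, 4, "RAID5"),
           (5, 2, "RAID5"), (9, 1, "RAID5")]
for drives, size, level in options:
    print(f"{drives} x {size}TB {level}: "
          f"{usable_tb(drives, size, level):.0f}TB usable, {drives} spindles")
```

All five land at 8TB usable – what changes is the spindle count, the drive cost and the rebuild behaviour.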

So what is best with spinning drives? 2 x 8TB or 9 x 1TB? A good general answer is that the middle ground will give you the best combination of performance, cost and capacity. Note however that you need to think about the data type being used on this server, and the operating system requirements. If for example you are building a physical server running multiple virtual machines, all of which are some sort of database-intensive server, then you are wasting your time considering spinning drives at all, and should be moving straight to SSD.

If however this is a video surveillance server, where the data heavily leans towards streaming media, then 3 x 4TB SATA drives in a RAID 5 will be adequate for this machine.

 

What RAID controller type do I need?

This one is easier to determine. The RAID controller needs to be able to handle the IOPS capability of your drives, and it needs sufficient ports to connect the number of drives you end up choosing. Since there are so many different ways of mounting drives in servers today, you will also need to take into account whether the drives are directly attached to the controller or sitting in a hot-swap backplane with specific cabling requirements.

 

What RAID level should I use?

There are two basic families of RAID:

  • Non-Parity RAID
  • Parity RAID

Non-parity RAID consists of RAID 1 and RAID 10. Parity RAID consists of RAID 5, 6, 50 and 60. Generally speaking, you should put random data on non-parity RAID, and general/streaming data on parity RAID. Of course things aren't that simple, as many servers have a combination of both data types running through their storage at any given time. In that case you should lean towards non-parity RAID for performance reasons.

Note of course (there’s always a gotcha) that non-parity RAID tends to be more expensive because it uses more disks to achieve any given capacity than RAID 5 for example.

 

Putting this all together …

By now you can see that designing the storage for a server is a combination of:

  • Capacity requirement
  • Performance requirement
  • Disk type
  • RAID controller type
  • RAID level
  • Cost

Let’s look at some examples:

  1. General use fileserver for small to medium business
    General Word, Excel and other office file types (including CAD files)

Capacity: 10TB
Performance requirements: medium
Disk type: spinning will be more than adequate
RAID controller type: Series 6,7,8 with sufficient ports
RAID level: RAID 5 for best value for money
Options: should consider having a hot spare in the system
Should also consider having cache protection to protect writes in cache in event of power failure or system crash

Remember that you don't get the total usable capacity you expect from a drive. For example, a 4TB drive won't give you 4TB of usable capacity – once the operating system reports it, it's more like 3.6TB … (I know, seems like a rip-off!)
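The shortfall is mostly a units game: drive vendors quote decimal terabytes, while the operating system reports binary terabytes (TiB). A one-liner shows the effect.

```python
# A "4TB" drive (4 x 10^12 bytes) as reported in binary terabytes (TiB).
def reported_tib(nominal_tb):
    return nominal_tb * 10**12 / 2**40

print(round(reported_tib(4), 2))   # ~3.64 -- why a 4TB drive shows up as roughly 3.6TB
```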

In this scenario we are going to recommend enterprise SATA spinning media. 4 x 4TB drives in RAID 5 will give approximately 11TB of usable capacity (comfortably over the 10TB requirement once the formatting shortfall is taken into account), with good performance from the 4 spindles. Since many server chassis support 6 or more drives, the 5th drive can become a hot spare, which will allow the RAID to rebuild immediately in the case of a drive failure.

With spinning drives a 6-series controller will be sufficient for performance, so the 6805 would be the best choice controller. We would recommend an AFM-600 be attached to the controller to protect the cache in event of a power failure etc.

  2. High-performance small-capacity database server
    Windows 2012 stand-alone server running an industry-specific database with a large number of users

Capacity: 2-3TB
Performance requirements: high
Disk type: pure SSD to handle the large number of small reads and writes
RAID controller type: Series 7 (71605E)
RAID level: RAID 10 for best performance
Options: should consider having a hot spare in the system

In this scenario we are definitely going to use a pure SSD configuration. A database places a heavy load on the server with many small reads and writes, but the overall throughput of the server's data is not high.

RAID 10 is the fastest RAID level. When creating a RAID array from pure SSDs, we recommend turning off the read and write cache on the controller, which means you (a) don't need much cache on the controller and (b) don't need cache protection. In this case we would recommend 6 x 1TB drives (e.g. 960GB Sandisk Extreme Pro), which would give approximately 2.7TB of usable space in an extremely fast server.

When using SSDs you need to use a Series 7 or Series 8 controller. These controllers have a fast enough processor to keep up with the performance characteristics of the SSDs (the Series 6 is not fast enough).

Again, a hot spare would be advisable in such a heavily used server. This would make a total of 7 drives in a compact 2U server.

  3. Mixed-mode server with high-performance database and large data file storage requirements
    Multiple user types within the organisation – some using a high-speed database and some general documentation. The organisation also has a requirement to store a large volume of image files

Capacity: 20+TB
Performance requirements: high for database, medium for rest of data
Disk type: mix of pure SSD to handle the database requirements and enterprise SATA for general image files
RAID controller type: Series 8 (81605Z)
RAID level: SSD in RAID 10 for operating system and database (2 separate RAID 10 arrays on the same disks). Enterprise SATA drives in RAID 6, due to the fact that the large number of static image files will not be backed up
Options: definitely have a hot spare in the system

In this scenario (typically a printing company etc), the 4 x SSDs will handle the OS and database requirements. Using 4 x 512GB SSDs, we would make a RAID 10 of 200GB for the Windows server OS, and a RAID 10 of approximately 800GB for the database.

The enterprise SATA spinning media would be 8 x 4TB drives, with 7 in a RAID 6 (5 drives capacity) and 1 hot spare. In this scenario it would be advisable to implement a feature called “copyback hot spare” on the RAID card so the hot spare can protect both the SSD RAID array and spinning media RAID array.

This will give close to 20TB usable capacity in the data volumes.

 

Options

Some key features of RAID cards to take into consideration when planning the best possible configuration include:

  • Multiple arrays on the same disks
    It is possible to build up to 4 different RAID arrays (of the same or differing RAID levels) on the same set of disks. This means you don't need (for example) 2 disks in a mirror for an operating system and another 2 disks in a mirror for a database, when both requirements can live on the same 2 disks
  • RAID 10 v RAID 5 v RAID 6
    RAID 10 is for performance. RAID 5 is the best value for money RAID and is used in most general environments. Many people shy away from RAID 6 because they don’t understand it, but in a situation such as in option 3 above, when a customer has a large amount of data that they are keeping as a near-line backup, or copies of archived data for easy reference … that data won’t be backed up. So you should use RAID 6 to ensure protection of that data. Remember that the read speed of RAID 6 is similar to RAID 5, with the write speed being only very slightly slower.
  • Copyback Hot Spare
    When considering hot spares, especially when you have multiple drive types within the server, copyback hot spare makes a lot of sense. In option 3 above, the server has 4TB SATA spinning drives and 512GB SSDs. You don't want 2 hot spares in the system as that wastes drive bays, so having 1 hot spare (4TB spinning media) will cover both arrays. In the event that an SSD fails, the 4TB SATA spinning drive will kick in and replace the SSD, meaning one of the SSD mirrors will be made up of an SSD and an HDD. This keeps your data safe but is not a long-term solution. With copyback hot spare enabled, when the failed SSD is replaced, the data sitting on the spare HDD will be copied to the new SSD (re-establishing the RAID), and the HDD will be turned back into a hot spare.

 

Conclusion

As you can see, there are many factors to weigh up when designing server storage, and all of those listed above need to be balanced to ensure the right mix of performance and capacity at the best possible price.

Using a combination of the right drive type, RAID level, controller model and quantity of drives will give a system builder an advantage over the brand-name “one-model-fits-all” design mentality of competitors.

If you have questions you’d like answered then reply to this post and I’ll see what I can do to help you design your server to suit your, or your customer’s, needs.

Ciao
N


“Comatose” … no, not me, the server!

June 9, 2015

(though I think I’ve been called that a few times in my youth).

A colleague sent me a link to a report on the web recently, and while I found it mildly interesting, I actually think the writers may have missed some of the point (just a little). While in general everything they say is correct, there is a factor that they haven’t taken into account … “stuff”.

http://www.datacenterknowledge.com/archives/2015/06/03/report-30b-worth-of-idle-servers-sit-in-data-centers/

So what is “stuff”?

Well, I have a lot of "stuff" on my laptop. There is "stuff" on CDs lying around the place, "stuff" on my NAS and "stuff" in the Mac on the other end of the desk. To me "stuff" is old data. I hardly, if ever, use it, but I sure as heck want it kept close and immediately accessible. In my business my old data is my historical library and a great backup to my slowly fading memory.

So what is living out in datacenter land? Lots of useful information, and lots and lots of "stuff". It has become evident when dealing with users over the past decade that people are reluctant, if not downright unwilling, to remove, delete, consolidate or even manage old data – they just want it kept forever "just in case".

So while there are strategies out there to minimize the footprint of that data, there is no strategy for changing people's mindsets on how long they keep it. So datacenterland is, and always will be, awash with "stuff" … which means more and more "comatose" storage. I don't disagree with the linked article on server compute – that needs to be managed and centralized into newer, faster and more power-efficient servers. It's just the data (storage) side of the equation that I have issues with.

If we take as a given that no-one is going to delete anything, then what do datacenters do about it? While larger and larger storage devices are coming out all the time (e.g. high-density storage boxes using 10TB disks), the problem these bring to the datacenter is that while one of them can handle the capacity of probably 10 old storage boxes, the datacenter is faced with moving all of the data off the old storage onto the new to free it up. By the time a large datacenter gets through that little process, 10TB drives will be replaced by 20TB drives and the process will start all over again – meaning datacenters will have data in motion almost continuously, with tremendous management, network and cost overheads to go along with it … exactly the sort of stuff that datacenter operators don't want to hear about.

I’m guessing that datacenter operators are looking at exactly this issue and are crunching numbers. Is it cheaper for us to keep upgrading our storage to handle all this “stuff”, with all of the management complications etc, or do we just buy more storage and keep filling it up without having to touch the old data? Or do we do both and just try to keep costs down everywhere while doing so?

It would be very, very interesting to know how the spreadsheet crunches that little equation.

“Stuff” for thought.

Ciao
N

 


It seems like the world has stopped turning …

June 3, 2015

No, this is not biblical, nor is it prophetic. In fact I’m referring to disk drives :-)

In a meeting the other day we were discussing the unofficial conversations we have with disk vendors on a regular basis. It seems the spinning world (and I'm talking channel here) is slowing down. The SSD vendors, however, are romping along at 20%+ growth year on year.

So that is stating the obvious – SSD uptake is growing at the expense of HDD. Of course HDD is still king in the cold data storage world and those guys are making a killing in the datacenter world – all our “cloud” information has to live somewhere (like all your photos of yesterday’s lunch on FB etc).

But in the channel, the SSD is taking over for performance systems. The 10-15K RPM SAS drives are giving way to similarly sized SSDs – some at the enterprise level but a lot more at the high-end gaming and channel level – drives that at first glance don't appear to be made for RAID, but in fact work just fine.

When talking to users, performance is a given – they all understand that an SSD is faster than a spinning drive – but many are still worried about drives wearing out: will they fail? I was wondering that myself, so I looked at the specifications of some drives, and specifically their "DWPD" values (drive writes per day). This is pretty much the standard SSD vendors use to indicate how long they think a drive will last before the overprovisioning is used up and the drive starts to degrade.

You will see values between 1/3 of a drive write per day and 25 drive writes per day – and if you were using these drives in a write-intensive datacenter environment, I know which drives I'd be opting for. But let's look at the 1/3 drive write per day and do a little maths. Let's take 4 x 1TB drives (close enough) and make a RAID 10. Roughly speaking, that will give you 2TB capacity. Now if each drive can safely take 1/3 of its capacity in writes per day for the life of the drive, then that works out to approximately 600GB of host data written each day – remembering that the data is split across the two sets of mirrors in the RAID 10, and each set of mirrors can supposedly handle around 300GB per day (roughly 1/3 of its capacity).

Then let's look at the sort of systems that people are putting SSDs into. Are they using them for video servers? Not likely (too expensive to get the capacity). In fact they are using them for database and high-performance servers that generally handle lots of small random reads and writes.

A bit more maths: an average 40-hour business week is only about 25% of the hours in a week, so if writes only happen during business hours, that daily allowance of roughly 600GB has to be crammed into an 8-hour day before you even start stressing the drives. That's something like 75GB of writes per hour … remembering that this is based on a drive rated at just 1/3 DWPD. A drive with a higher rating can handle proportionally more, and I'm yet to think of a business running SSDs in a database environment that is even within cooee of these numbers.
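For anyone who wants to check the arithmetic, here is the back-of-envelope version, using the same rounded figures as above – the drive size, DWPD rating and business hours are just the assumptions from this example.

```python
# Back-of-envelope DWPD maths for the 4 x 1TB RAID 10 example above.
# All figures are the rounded assumptions used in the text, not drive specs.
MIRROR_PAIRS = 2             # 4 x 1TB SSDs in RAID 10 = 2 mirrored pairs
GB_PER_PAIR_PER_DAY = 300    # ~1/3 DWPD of a 1TB pair, rounded as in the text
BUSINESS_HOURS_PER_DAY = 8   # a 40-hour week spread over 5 days

daily_allowance_gb = MIRROR_PAIRS * GB_PER_PAIR_PER_DAY     # ~600GB/day of host writes
per_hour_gb = daily_allowance_gb / BUSINESS_HOURS_PER_DAY   # squeezed into business hours

print(f"~{daily_allowance_gb}GB of writes per day before reaching the 1/3 DWPD rating")
print(f"~{per_hour_gb:.0f}GB of writes per business hour needed to sustain that")
```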

So when you look at the values on the drives and think … wow, 1/3 DWPD is not very much … you need to balance that against how much data your business will actually be writing to the disks on any given day.

I found it pretty interesting maths – it opened my eyes to the reality of DWPD values. Remember of course that in a datacenter you should use datacenter drives – many thousands of users can be hitting those drives at any given point in time, and yes, you can get some pretty amazing amounts of data written to them each day. But in the channel – in small business, and even in medium to large business – the amount of data being written each day is nowhere near as scary as you might have thought before doing some detailed analysis.

It’s food for thought.

Oh, and by the way, if you are using SSDs, then you should be using Series 8 RAID controllers. I know they are 12Gb/s controllers and your SSDs are only 6Gb/s devices, but it's not the speed of the pipe that matters, it's the number of transactions per second that the controller can handle. You don't want to bottleneck your investment in SSDs at the controller level – you want a controller that will easily handle all the small reads and writes that the SSDs are capable of. Now whether your software or customers are capable of throwing or dragging that much data from the drives is a moot point, but putting SSDs on a slow controller is not the smartest thing to do.

Ciao
N
