I was mediating (sort of) a discussion between two customers recently. Both are system builders using our products, and I was trying to get some credibility for a solution I was proposing to one of them. It always helps if you can get a recommendation from someone other than yourself for the mad ideas you are proposing.
During the conversations, customer “A” dropped the term “6000Mb/sec”. Customer “B” balked at this, asking whether we meant 6000 megabits or 6000 megabytes. I matter-of-factly indicated we were talking 6000 megabytes per second throughput – I thought that was pretty old hat by now. If we were “writing”, we’d designate megabits as Mb (little “b”) and megabytes as MB (big “B”) – but we weren’t…
Customer “B” almost choked.
He had no idea that storage systems could go that fast. I did point out that you need about 16 good SSDs to make this happen, but it's no great stretch for an 8 Series controller directly attached to 16 SSDs to deliver this sort of performance (with the right data flow, of course). So Customer “B” is off to buy 16 SSDs to prove me wrong.
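For the curious, the back-of-envelope arithmetic works out roughly like this. The per-drive figure below is an assumed illustrative number, not a measured one:

```python
# Rough sketch of why 16 SSDs can plausibly reach ~6000 megabytes/sec,
# and why the Mb (megabits) vs MB (megabytes) distinction matters.
# The per-SSD throughput is an assumption for illustration only.

MB_PER_SSD = 400      # assumed sequential throughput per SATA SSD, in MB/s
NUM_SSDS = 16

aggregate_MB_s = MB_PER_SSD * NUM_SSDS   # megabytes per second
aggregate_Mb_s = aggregate_MB_s * 8      # the same rate in megabits per second

print(aggregate_MB_s)   # 6400  -> in the ballpark of the 6000 MB/s figure
print(aggregate_Mb_s)   # 51200 -> the same rate quoted in little-b megabits
```

Which is exactly why mixing up the little “b” and the big “B” changes the claim by a factor of eight.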
Lesson: just because I learnt something 12 months ago doesn’t mean that everyone knows the same stuff. I’ll learn to be a little more humble in the future and make sure I tell people what we are doing.
I woke up this morning to the news that the internet is slow … because of Netflix (and other such related companies).
Wow … this is news (not). The internet in Australia is a dodgy hotch-potch of technologies and connections based on an outdated and totally-neglected copper telephone network. It has never been great, but now it’s terrible. In simple terms it can’t keep up with the load of people streaming movies across the internet. This is most evident from around 4.00pm onwards – when the kids get home from school the internet slows to a crawl. There has always been an impact but now it is pretty bad – to the point where things like making Skype calls in the evenings (I do a lot of those for sporting committee meetings) is becoming untenable.
The previous Australian government (I’ll keep political parties out of it) decided we needed a fibre-to-the-home network across Australia (we call it the “NBN” – National Broadband Network), but the current government looked at the cost, decided they could get themselves re-elected for a lot longer if they took that money and spent it elsewhere, and scaled back that little venture, preferring to create a mix of copper, fibre and wifi. Did I mention “hotch-potch” somewhere earlier?
Ironically it’s now home users who are complaining and being heard. Business has been screaming for ages that our communications infrastructure is woeful, and that we pay a fortune for both phone and internet connectivity, but that seems to have fallen on deaf ears (politically). It will be interesting to see if the politicians now start to take notice due to the fact that the people who do the majority of the voting are now the ones complaining.
Blind Freddy can see the benefits of having a fast, reliable, low-cost communications infrastructure in a country. No, I don't mean we can all watch online movies at night – I mean we can conduct business in an innovative, economic and forward-thinking manner. It might even mean I don't need to get on a plane to go and see customers across the country – I could do it with video conferencing. However, trying to do that in Australia today would end up a farce – with customers a little less happy at the end of such a debacle than at the start.
So Australia struggles along in the 19th century – with all the innovative ways of doing business out there on the table – all hamstrung by the fact that businesses can't communicate with one another.
I’m amazed the web hasn’t crashed while I’m trying to upload this.
When I switch on the news these days it’s all doom and gloom – Greece is up for sale, the Australian dollar is plunging and China’s economy is not what you would call “moving forward”. In years gone by as an ordinary worker toiling away in a factory I would have taken little notice, and cared even less.
However, in today's world of travel, international trade and multinational employers, things are a little different. Greece doesn't directly impact my day to day, but China does. Australia sells a lot of stuff to China – mostly in the form of iron ore (we are happily digging up large chunks of Australia and shipping them north).
China’s economic woes are having a big impact on the price of iron-ore. Now I don’t buy that stuff on a weekly basis, so I could be excused for not caring, but it has a major impact on the economy of Western Australia (primary source of most mining exports), which in turn has a flow-on effect to the rest of Australia.
So when I talk to my customers in WA I’m hardly surprised to hear that they are doing it tough, and that the mining industry has slowed dramatically. Considering that the mining industry drives a great deal of the economy, and an even larger portion of the server-consumption market in Western Australia, it’s hardly surprising that they are concerned.
If my customers are concerned then so should I be, so it's time to jump on a plane and see what I can do to do my bit for the Australian economy!
I did this test several years ago to show (a) how ridiculously expensive full-blown SSD servers were and (b) how little more you actually paid for maxCache. So I thought I'd have another go at it based on current prices from the web (these are all dodgy $AUD based on websites from shopbot.com.au).
I picked a database server as an example. I have an 8-bay and a 16-bay server, and have done some very rough maths based on drive capacities. Note that the capacities will definitely be a little off, as you don't get what you pay for, but I don't know the real usable capacities of these drives so I didn't worry about that part of the calculation.
Note: I believe (if it works the same for you as it does for me) that you can click on the images to see them in a larger format.
The 8-bay server
The big non-surprise here is that maxCache is the most expensive. There are a few reasons for that, but mostly the fact that you need a more expensive controller and SSDs compared to the full spinning media server where you can get away with a much slower RAID card.
The real surprise is the cost of the SSD server. I’ve used the 71605E (yes I’m banging on about that card again) because it will “just” cope with 8 x SSD and we don’t need a flash module in pure SSD arrays … plus the fact it’s inexpensive.
The 16-bay server
In the 16-bay server things balance out a bit more. The maxCache server is cheaper to purchase than even the spinning media server, but then you don’t get as much capacity. Because you are amortising the cost of the maxCache software (card) over a larger capacity, it has far less impact on the overall cost of the server.
Of course the SSD server goes way up in price, but its $ per GB doesn't really change from the 8-bay server to the 16-bay server (very little anyway).
So why am I putting this out there? I just think it’s interesting to look at the overall cost of things in reality compared to people’s perceptions. For example someone might say “… maxCache/pure SSD are way too expensive … my customer won’t cop the cost of SSDs”. Really? When explained in this manner I think a lot of customers will think differently.
Of course the key is performance. Almost anyone will believe that servers with SSDs will be “way” faster than servers with spinning drives alone – that’s not that hard a sell. So if, in the case of the 8-bay server, you said to your customer … the server will be $1149 more expensive to go full SSD – then yes, he might balk at it.
However, if you say … “for an extra 22 cents/GB you can get a full-SSD server which will run like the blazes” it doesn't sound expensive at all.
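The framing trick is just arithmetic: divide the price difference by the capacity instead of quoting the lump sum. A minimal sketch with made-up numbers (placeholders, not the actual figures from the tables above):

```python
# Hypothetical 8-bay server pricing -- placeholder numbers chosen only
# to show the $/GB framing, not the real quotes.

hdd_server_cost = 4000          # assumed spinning-media server price ($AUD)
ssd_server_cost = 4000 + 1149   # the same server full-SSD, $1149 dearer
usable_gb = 8 * 480             # assumed 8 x 480GB SSDs

# Quote the premium per gigabyte rather than the scary lump sum
premium_per_gb = (ssd_server_cost - hdd_server_cost) / usable_gb
print(round(premium_per_gb, 2))   # ~0.3 $/GB with these placeholder numbers
```

Same dollars either way – but “30-odd cents a gigabyte” lands very differently from “$1149 extra”.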
Lesson: it all depends on how you present things, but it's worth investigating the full costs of the differences between a spinning, caching and pure-SSD server before just assuming that things are too expensive.
Adaptec produces compatibility reports on a regular basis. We are always testing our equipment against the components that we work with – motherboards, chassis, backplanes and of course drives. The CR reports can be found at: http://www.adaptec.com/compatibility
While many people treat these reports as gospel, we have a lot of customers who do their own testing and use what they regard as suitable equipment before we get around to testing a particular product.
Here in Australia there are 3 very popular brands of SSD: Intel, SanDisk and Samsung. They are all good drives with their own particular advantages and strong points. They also have a lot of different models, not all of which the vendors design for use in RAID environments, but which users often use with our controllers for some very effective/efficient storage solutions.
Ironically, I find that a lot of people base their usage on “the drive they trust” vs any real technical data. I’m a big believer in gut-feelings, and it seems I’m not on my own. Just as in spinning drives where many a user will say “I use Seagate only – never had a problem”, or “WD are the only drives I trust”, users will also have their own feelings regarding SSDs, for example: “we have tested SanDisk Extreme Pro and they offer the best value for money when developing a high-performance server” (direct quote from one of my customers).
So … does the fact that a drive is not on our CR mean that it’s no good? No, often it means we just haven’t tested that drive yet. There are far too many on the market for us to test them all. It’s also a case that different regions have different product availability (Asia/Pacific can’t buy some drives that can be found in the US and Europe), so finding the drive you want to use on the CR is not always possible.
The obvious and, in reality, only answer is: make sure you test your drives carefully before using them in large scale environments. You should also take into account the style of data that you are putting on the SSDs. In a read-intensive environment most drives will perform very well and last a long, long time. However if your environment exposes the RAID array to massive intensive writes, then I would strongly suggest sticking to datacenter/enterprise specific high-end SSDs to ensure the lifespan of the drive meets user requirements.
So in the end … should you only use drives on our CR? I’d like to say yes, but I know that is never going to happen. So talk to your drive vendors, talk to Adaptec, test your drives, then make your decisions and monitor your systems. The practical reality is that while there are a lot of different drives for a lot of different purposes, I’m finding as a general rule that SSDs are far more reliable, far faster and less troublesome than the short-term urban myths would have us believe.
Came across a customer the other day (who shall remain nameless) who was using some sort of fancy data-capture software in heavy industry – logging large amounts of data from rolling mills etc. The software vendor indicated that SATA drives were not acceptable, and that the RAID system needed to use 15K SAS drives in a RAID 10 array to provide acceptable performance and allow for “maximum capture of logging data”.
So we were discussing how to set up a system – and I indicated that I thought SSD would be the best way to go here. Capture large numbers of small writes – which then get moved on to a database at a point in time (so the data volume is not great). It was all about speed of small writes. So when I suggested that we use SSD (SATA) I found it very surprising that the customer told me the software vendor had indicated no, SSD should not be used – they should use 15K SAS spinning media in a RAID array.
Hmmm … how does the software application (a Windows app) know what sort of drives are underpinning the storage? We take a bunch of drives and combine them into a RAID array. We present that logical drive to the OS and handle all the reads and writes via our own on-board RAID processor. I can understand that the application can measure the speed of the storage (write speed in this case) and judge it to be sufficient or not, but it can’t see what sort of drives are attached – that’s hidden from a properly-written application.
Considering this system will be in a very computer-unfriendly industrial environment, I would have thought that using drives that can handle vibration (there will be lots of that), don't use much power and don't generate much heat – along with having by far the highest write speed of any drives a user could choose for this application – would be the duck's guts for this job.
So … my guess is that the recommendation on the software vendor’s website for using 15K SAS drives is probably 5-10 years old and would have made a lot of sense back then, but now it’s just plain out of date. If this isn’t an ideal application for 4 x SSDs on a 71605E in a RAID 10 I don’t know what is.
Lessons to be learned from this:
- Information on websites is not always up to date.
- Not everyone has jumped on the SSD bandwagon yet.
- You need to do a lot of investigation with vendors to find out what options you have for innovative storage solutions in today’s fast-moving environment.
- Telling me stuff ends up on a blog.
Enough said …
When sitting around in marketing meetings a constant bone of contention in communications with our customers is “solution selling”. In other words – how do we provide solution information to customers rather than just mumbo-jumbo about speeds and feeds of our products?
It's a much larger and more complex question than you might first think. Just take a quick look at the broad scope of product placement where you find Adaptec controllers … from the workstation through to hyper-scale datacenters (and everywhere in-between).
Now of course the boss is just going to say “write about them all”, but some of the blog articles are already looking like War and Peace (I forget to stop typing occasionally), and my real question here is “is this really the place you come for answers on designing solutions?” If in fact that is what you do, then let us know. Because if you are looking for detailed analysis of how to design systems for specific applications then I’m up for the typing – but you have to be up for the reading to go along with it!
I should probably just give it a go and see what happens. I’ll stick my head out the window occasionally and see if I can hear large-scale snoring … that will tell me I’m boring you to death with overly-long blogs. But then again, my wife tells me I’m deaf so I probably won’t know anyway.
Send me your thoughts (nice ones only please).
(this is another in the series of crazy-long blogs … the bloke who approves these things is going to get sick of reading :-))
So most people know about RAID 5 – it’s the general purpose great all-rounder of RAID arrays – gets used for just about anything and everything. However when it comes to capacity there are limitations. First…there are physical limitations, then…there are common-sense limitations.
It's true that you can put up to 32 drives in a RAID 5. You can also buy some pretty big drives these days (http://www.extremetech.com/computing/189813-western-digital-unveils-worlds-first-10tb-hard-drive-helium-filled-shingled-recording), so 32 x 10TB drives in a RAID 5 would theoretically yield (if in fact the drives give you 10TB of usable space) 310TB – remembering of course that RAID 5 loses one drive's capacity to parity data.
Now not many people want 310TB on their home server (though some of my mates come close with movies), but there are more and more businesses out there with massive amounts of archival, static and large-scale data … so it’s not inconceivable that someone will want this monster one day.
Realistically, you may want to build a big server but use smaller drives because you can't afford such large ones; the principles of building this server remain the same no matter what drives you use.
So what are the problems with running a 32-drive array using 10TB drives? Plenty (imho).
Most of the issues come in the form of building and rebuilding the array. For a controller to handle this size disk in this size array, let’s look at some dodgy math.
- Stripe size (per disk … what we call a minor stripe) – 256KB
- Number of stripes on a single disk – 40+ million (give or take a hundred thousand)
- Each major stripe (the stripe that runs across the entire disk set) is made up of 32 x 256KB pieces (8MB)
- 1 RAID parity calculation means reading 31 disks, calculating parity and writing it to 1 disk – per stripe
- Multiply that by 40+ million stripes
That’s going to wear out the abacus for sure
The problem with all these stripes and all this data is that every drive in the array is involved in any operation. That will mean some pretty good streaming speed, but it also makes for killer rebuild times. For example, to rebuild the array (best case, with no-one using it) in 24 hours, the drives would need to read/write at least 115MB/sec. SATA and SAS drives might come close to that on reads, but they are nowhere near it on writes, so rebuilds on this monster will take a lot longer than 24 hours.
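The 24-hour claim is easy to sanity-check with some more dodgy math (decimal units, best case, array otherwise idle; the 60MB/sec sustained-write figure is an assumption for illustration):

```python
# Rebuild-time arithmetic for one 10TB drive in the 32-drive RAID 5.
DRIVE_MB = 10 * 1_000_000          # 10TB expressed in MB (decimal units)

# Speed needed to finish the rebuild in 24 hours
required_MB_s = DRIVE_MB / (24 * 3600)
print(round(required_MB_s))        # 116 -> the "at least 115MB/sec" above

# At a more realistic sustained write speed the picture gets ugly
hours_at_60 = DRIVE_MB / 60 / 3600   # assumed 60 MB/s sustained write
print(round(hours_at_60, 1))         # 46.3 hours -- before any user load
```

And that is the best case, with nobody touching the array while it rebuilds.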
Since it's a RAID 5, if another drive goes west (not south, I'm already in Australia!) during this rebuild process your data is toast and you have a major problem on your hands.
So what is the alternative? Use RAID 50.
RAID 50 is a grouping of RAID 5 arrays into a single RAID 0. Don’t panic, I’m not telling you to use RAID 0 (which in itself has no redundancy), but let’s look at how it works. In a standard RAID 0 you use a bunch of single drives to make a fast RAID array. The individual drives would be called the “legs” of the array. However there is nothing protecting each drive – if one fails there is no redundancy or copy of data for that drive – and your whole array fails (which is why we don’t use RAID 0).
However in a RAID 50 array, each leg of the top-level RAID 0 is made up of a RAID 5. So if a drive fails, the leg does not collapse (as in the case of a RAID 0), it simply runs a bit slower because it is now “degraded”.
The maths above doesn't change per drive, but the number of drives in each major stripe of the array drops dramatically (it is at least halved), so the speeds in all areas go up accordingly.
A lot of people have heard of RAID 10, 50 and 60 (they are similar in nature), but think like humans – 2 legs. However, all these combination RAID levels can have multiple legs – 2, 3, 4 – however many you like, and more legs are generally better. But let's look at our 32-drive configuration.
Instead of 32-drives in a single RAID 5, a RAID 50 could be 2 x RAID 5 of 16 drives each, with a single RAID 0 over the top. The capacity would be one drive less than the 32-drive RAID 5, but the performance and rebuild times will be greatly improved.
Why? In reality 32 drives is beyond the efficiency level of RAID 5 algorithms – a 32-drive RAID 5 is not as quick as an 8-16 drive RAID 5 (the sweet spot). So just on its own, a RAID 5 of 16 drives will generally be quicker than a RAID 5 of 32 drives. And now you have two of those RAID 5 arrays combined into a single array.
The benefits come to light in several ways. In RAID rebuilds (when that dreaded drive fails), only 16 of the drives are involved in the rebuild process. The other half of the RAID 50 (the other 16 drives) is not affected by the rebuild. So the rebuild happens a lot faster, and overall array performance takes nowhere near the hit that a rebuilding 32-drive RAID 5 would.
So what is the downside to the RAID 50 compared to the single large RAID 5? In this case, with 2 legs in the array, you would lose one additional drive’s capacity.
Mathematics (fitting things in boxes) always comes into play with RAID 50/60 … I want to make a RAID 50 of 3 legs over 32 drives – hmmm … the math doesn't work, does it? It hardly ever does. If you have 32 drives then the best three-leg RAID 50 array you could make would use 30 drives (3 legs of 10 drives each). That would give 27 drives' capacity, but it would (a) be faster and (b) rebuild much, much faster than anything described above.
So would you do a 4-leg RAID 50 with 32 drives? Yes, you could. That would mean 4 drives lost to parity, giving a total of 28 drives' capacity, but now each RAID 5 leg is down to 8 drives and ripping along at a rate of knots, and overall system performance is fantastic. Rebuild times are awesome and have very little impact on the server. The downside? Cost is up and capacity is down slightly.
As you can see, there is always a trade-off in RAID 50 – the more legs, the more cost and less capacity, but the better performance. So back to the 32 drives … what could I do? Probably something like a 30-drive 3-leg RAID 50, with 2 hot spares sitting in the last two slots.
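The leg trade-off above boils down to one line of maths: each RAID 5 leg loses one drive to parity. A small sketch, assuming 10TB drives:

```python
def raid50_usable_tb(total_drives, legs, drive_tb=10):
    """Usable capacity of a RAID 50: each leg is a RAID 5 that
    loses one drive's capacity to parity."""
    if total_drives % legs:
        raise ValueError("drives must split evenly across the legs")
    drives_per_leg = total_drives // legs
    return legs * (drives_per_leg - 1) * drive_tb

print(raid50_usable_tb(32, 1))   # 310 -- the plain 32-drive RAID 5
print(raid50_usable_tb(32, 2))   # 300 -- 2 legs of 16, one more drive lost
print(raid50_usable_tb(30, 3))   # 270 -- 3 legs of 10 (32 won't divide by 3)
print(raid50_usable_tb(32, 4))   # 280 -- 4 legs of 8, fastest rebuilds
```

Every extra leg costs one drive of capacity and buys you faster, smaller rebuild domains – which is the whole trade-off in a nutshell.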
But what about your OS in this configuration? Where do you put it? Remember that you can have multiple RAID arrays on the same set of disks, so you can build 2 of these RAID 50 arrays on the same disks … one large enough for your OS (which will use very, very little disk space), and the rest for your data.
So should you consider using RAID 50? Absolutely – just have a think about the long-term usefulness of the system you are building and talk to us if you need advice before going down this path.
Talk about a different way to promote a new product. Take an existing product, remove a feature and drop the price. Sounds pretty easy but what you end up with is something pretty spectacular.
Up until now Adaptec’s only 16-port internal RAID card has been the 81605ZQ. The “Z” is for ZMCP (zero maintenance cache protection) – in other words it has the supercap functionality built into the card – with just the supercapacitor to plug in (no daughter card). The “Q” part of the moniker denotes maxCache capability – the 81605ZQ is a caching controller (great for specific applications).
But what did you buy if you wanted a 16-port internal controller but did not need the “Q” function? You might be putting together a pure-SSD system, or building a nice large storage server that doesn't need caching. The only choice was to go back to the 7 Series.
So we took the 81605ZQ and removed maxCache. That makes it an 81605Z. Comes standard with 16 internal ports and cache protection … but note that it can’t be upgraded to a “Q” model – you can’t add that via firmware etc.
As an aside … you should note that you can swap out the drives from an 81605Z and an 81605ZQ without any reconfiguration – the drivers etc are all the same and both cards recognise the RAID arrays from the other card.
So there you have it … a new card. It does less than its “Q” cousin, but then again, it costs less.
Now you know.
Some thoughts from the Storage Advisor
I get a lot of calls from people who are interested in maxCache … how does it work, what does it do, and most importantly … will it work for me? So I thought I’d put some ramblings down on what has worked for customers and where I think maxCache could/should be used.
Firstly, a quick summary of maxCache functionality in plain English. You need an Adaptec card with “Q” on the end of it for this to work, and no, you can't upgrade a non-“Q” card to a “Q” card – but you can swap the drives from an existing controller to a “Q” controller, then plug in SSDs and enable maxCache (bet you didn't know that one). maxCache is the process of taking SSDs and treating them as read and write cache for a RAID array – that's a basic statement but it's pretty close to what happens: you add a very large amount of cache to a controller.
So let's take an existing system running 8 x enterprise SATA drives in a RAID 5 – a pretty common configuration. That might be connected to a 6805 controller in an 8-bay server. You want to make this thing faster for the data that has ended up on it, without reconfiguring the server or rebuilding the software installation. This server started life as just a plain file server, but now hosts a small database and accounting software, and is running terminal server – a far cry from what it started life as. You want to increase the performance of the random data. maxCache does not impact the performance of streaming data – it only works on small, random, frequently-accessed blocks of data.
Upgrade the drivers in your OS (always a good starting point) and make sure the new drivers support the 81605ZQ. In most OSes this is standard – we have, for example, one Windows driver that supports all our cards. Then disconnect the 6 Series from the drives, plug in and wire up the 81605ZQ, and reboot. All should be well. You will see some performance difference, as the 8 Series is dramatically quicker than the 6 Series controller, but the spinning drives will be the limiting factor in this equation.
Once you've seen that all is working well, and you've updated the maxView management software to the latest version etc, shut the system down, grab a couple of SSDs (let's for argument's sake say 2 x 480GB SanDisk Extreme Pro) and fit them in the server somewhere. Even if there are no hot-swap bays available there is always somewhere to stick an SSD (figuratively speaking) – they don't vibrate and don't get hot, so they can be fitted just about anywhere.
Create a RAID 1 out of the 2 x SSDs. Then add that RAID 1 to the maxCache pool (none of which takes very long). When finished enable maxCache read and write cache on your RAID 5 array. Sit back and watch. Don’t get too excited as nothing much seems to happen immediately. In fact maxCache takes a while to get going (how long is a while? … how long is a piece of string?). The way it works is that once enabled, it will watch the blocks of data that are transferring back and forth from the storage to the users and vice versa.
So just like a tennis umpire getting a sore neck in the middle of a court, the controller watches everything that goes past. It learns as it goes what is small, random and frequent in nature, keeping track of how often blocks of data are read from the array etc. As it sees suitable candidate data blocks, it puts them in a list. Once the frequency of a block hits a threshold, the block is copied in the background from the HDD array to the SSDs. This is important – note that it is a “copy” process, not a move.
Once that has happened, a copy of the data block lives on the SSDs as well as on the HDD array. Adaptec controllers use a process of “shortest path to data”. When a request comes for a block of data from the user/OS, we look first in the cache on the controller. If it’s there then great, it’s fed straight from the DDR on the controller (quickest possible method). If it’s not there then we look up a table running in the controller memory to see if the data block is living on the SSDs. If so, then we get it from there. Finally, if it can’t be found anywhere we’ll get it from the HDD array, and will take note of the fact that we did (so adding this data block to the learning process going on all the time).
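That read path can be sketched in a few lines. The names below are illustrative only – this is not Adaptec firmware code, just the lookup order described above:

```python
# "Shortest path to data": controller DDR cache first, then the maxCache
# SSD pool, then the HDD array -- with misses feeding the learner.

def read_block(block_id, ddr_cache, ssd_pool, hdd_array, miss_counts):
    if block_id in ddr_cache:       # fastest: cache on the controller
        return ddr_cache[block_id], "ddr"
    if block_id in ssd_pool:        # next: the copy held on the SSDs
        return ssd_pool[block_id], "ssd"
    # last resort: the spinning drives -- and note the access, so this
    # block becomes a candidate for copying into the SSD pool later
    miss_counts[block_id] = miss_counts.get(block_id, 0) + 1
    return hdd_array[block_id], "hdd"

counts = {}
data, source = read_block(7, {}, {}, {7: b"payload"}, counts)
print(source, counts[7])   # hdd 1 -- a miss, so block 7 is now a candidate
```

The key design point is that the SSDs only ever hold copies, so losing the cache never loses data.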
Why does this help? Pretty obviously the read speed of the SSD is dramatically faster than the spinning drives in the HDD array, especially when it comes to a small block of data. Now as life goes on and users read and write to the server we are learning all the time, and constantly adding new blocks to the SSD pool. Therefore performance increases over a period of time rather than being a monumental jump immediately.
The SSD write cache side of things comes into play when blocks that live in the SSD pool (remembering these are copies of data from the HDD) are updated. If the block is already in the SSD pool then it’s updated there, and copied across to the HDD as a background process a little later (when the HDD are not so busy).
End result … your server read and write performance increases over a period of time.
Pitfalls and problems for young players …
All this sounds very easy, and in fact it is, but there are some issues to take note of that require customer education as much as technical ability.
Speed before and after
If you have no way of measuring how fast your data is travelling prior to putting maxCache in the system, then you won’t have any way of measuring later, so you can only go by “feel” … what the users experience when accessing data. While this is a good measure, it’s pretty hard to quantify.
Let me share some experiences I had from the early days of showing this to customers. I added maxCache to an existing system for a customer on a trial basis (changing the controller to a Q, adding SSDs etc). I left the customer running for a week, feeling quite confident that it would be a good result when I went back. Upon return, the customer indicated that he didn't think it made much of a difference and wasn't worth the effort or cost. So I put the system back the way it was before I started (original controller and no SSDs) and rebooted. The customer started yelling at me very loudly that I'd stuffed his system … “it was never this slow before!” Truth of the matter was that it was exactly the same as before, so the speed was what he had been living with all along. Lesson: customers are far less likely to say anything about a computer getting faster, but they yell like stuck pigs as soon as something appears to be “slower”.
The second example was in a terminal server environment. This time we could measure the performance of the server by measuring the logon time of the terminal server screen etc. It was pretty bad (about 1 minute). So we went through the process again and added maxCache. The boss of the organization (who happened to be a good reseller of mine) immediately logged on to TS – and grandly indicated that there was no difference and I didn't know what I was doing. So we went to the pub. We spent a good night out on the town and went back to the customer in the morning (a little the worse for wear). The boss got to work around 10.00am (as bosses do) and was pretty much the last person to log on to TS that morning. Wow, 6 seconds to log on. We then had the Spanish Inquisition (no-one expects the Spanish Inquisition – https://www.youtube.com/watch?v=7WJXHY2OXGE) as to what we had done that night. The boss was thinking we'd spent all night working on the server instead of working on the right elbow.
In reality, the server had learnt the data blocks involved in the TS logon (which are pretty much the same for all users), so by the time he logged in it was mostly reading from the SSDs, hence a great performance improvement. Lesson: educate the customer as to how it works and what to expect before embarking on the grand installation.
The third and last experience was with performance testing. I've already blogged about this, but it bears mentioning here. A customer running Open-E set up his machine and did a lot of testing (unfortunately in a city far away from me, so I could not do a hands-on demo etc). Lots of testing with Iometer did not show a great deal of performance improvement, but when he finally bit the bullet and put the server into action, the customers were ecstatic. A great performance improvement on virtual desktop software. Lesson: spend a lot more time talking to the customer about how the product works, so they understand it's random data that's at play here, and that performance-testing streaming data won't show any improvement whatsoever.
There are a lot of servers out there that would benefit from maxCache to speed up the random data that has found its way onto the server whether intentionally or not. It needs to be kept in mind that servers don’t need rebuilding to add maxCache, and it can be added (and removed) without any great intrusion into a client’s business.
The trick is to talk to the customer, talk to the users and find out what the problems in life are before just jumping in and telling them that this will fix their problems. Then again, you should probably do that anyway before touching anything on a server … but that's one of life's lessons that people have to work out for themselves.