Taming the Storage Budget Beast
Some economic experts think the economy is improving – or at least getting worse less fast. Let’s hope so. But for all you IT managers, the budget situation for the rest of 2009 and 2010 is likely to remain tight. Storage, which consumes an increasing share of the CapEx budget, will be heavily affected. Nevertheless, business continues – you need to address growing data volumes and increasingly stringent SLAs without increasing headcount or CapEx. There are plenty of platitudes about doing more with less – insert your favorite here. Fortunately, there are some simple strategies for coping. Obviously, the best choice is to make do with what you have without sacrificing results.
Listed below are four steps to making the most of what you have. But first, a little math. According to The InfoPro, average array utilization in data centers is just 35%. The average storage growth rate is pegged by some accounts at 50% compounded annually. At that rate, an “average” IT organization could run fully two years before hitting 80% utilization, the top end of the best-practice range. Of course, the trick is finding and enabling that storage. Now for the steps.
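The back-of-the-envelope math is easy to check. Solving 0.35 × 1.5ᵗ = 0.80 for t gives the runway in years:

```python
import math

current_util = 0.35   # average array utilization, per The InfoPro
growth_rate = 0.50    # annual compounded storage growth
target_util = 0.80    # top of the best-practice utilization range

# 0.35 * (1.5 ** t) = 0.80  =>  t = log(0.80 / 0.35) / log(1.5)
years = math.log(target_util / current_util) / math.log(1 + growth_rate)
print(f"{years:.1f} years")  # roughly two years of runway
```

In other words, just over two years before the average shop bumps into the 80% ceiling – if it can find that idle capacity.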
Step 1: Find out what you have
In contrast to The InfoPro numbers, user surveys report 70%-80% utilization. Who’s telling the truth? Probably both – the discrepancy lies in how utilization is measured. It can be measured by data written to disk (which might explain The InfoPro numbers), by storage provisioned (which might explain the user numbers), or by other measures. The problem is that, whatever the measure, few users go through the laborious task of measuring every LUN, adding the results together and doing the math on a regular basis. Even fewer have the visibility across the enterprise to generate a comprehensive number.
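The gap between the two camps falls out of the same data. A toy sketch, with a hypothetical per-LUN inventory (real numbers would come from an SRM tool or per-array queries):

```python
# Hypothetical per-LUN inventory: (provisioned_gb, written_gb)
luns = [(500, 120), (1000, 450), (250, 90), (2000, 600)]

array_capacity_gb = 5000  # usable capacity behind these LUNs (illustrative)

provisioned = sum(p for p, _ in luns)   # 3750 GB handed out to hosts
written = sum(w for _, w in luns)       # 1260 GB actually holding data

# Two very different "utilization" numbers from the same arrays:
util_by_provisioned = provisioned / array_capacity_gb  # what users tend to report
util_by_written = written / array_capacity_gb          # closer to The InfoPro view
```

Here the same estate reports 75% utilization by provisioning but only about 25% by data written – both answers are “true.”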
In the mainframe world, nearly all shops have a storage resource management (SRM) tool. In UNIX/Windows environments, very few do. Yet SRM products can give – and maintain – the utilization data essential to optimizing storage. A good SRM tool not only aggregates data and provides visibility across the enterprise, but also drills down to find which LUNs are over-provisioned and which are over-utilized. Of course, SRM tools can do much more, but unlocking tens or hundreds of terabytes of available space alone makes them worth the effort.
Step 2: Adopt thin provisioning
Nearly all Tier 1 and Tier 2 storage array vendors now support thin provisioning. There are plenty of good resources on the Web that explain thin provisioning if you’re not familiar with it, so we won’t digress here. Bottom line: thin provisioning pools all that over-provisioned storage and makes it available to every application on an as-needed basis. No more guessing how much storage to provision to a given LUN, and no more LUN-shrinking and -expanding exercises.
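The core idea can be captured in a toy model – this is a sketch of the concept, not any vendor’s implementation. Physical capacity is consumed only on write, not at provisioning time:

```python
class ThinPool:
    """Toy thin-provisioning model: volumes draw from a shared physical
    pool only when data is actually written, not when provisioned."""

    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.volumes = {}  # name -> [virtual_size_gb, written_gb]

    def provision(self, name, virtual_gb):
        # No physical space is consumed at provisioning time.
        self.volumes[name] = [virtual_gb, 0]

    def write(self, name, gb):
        virtual, written = self.volumes[name]
        if written + gb > virtual:
            raise ValueError("write exceeds the volume's virtual size")
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted -- add physical capacity")
        self.volumes[name][1] += gb
        self.used_gb += gb

pool = ThinPool(physical_gb=1000)
pool.provision("app1", 800)   # admins can over-provision freely...
pool.provision("app2", 800)   # ...virtual sizes may exceed physical capacity
pool.write("app1", 200)
pool.write("app2", 150)
# Only 350 GB of the 1 TB pool is actually consumed
```

The flip side, which real arrays handle with alerting and on-demand expansion, is that the pool itself can run dry – hence the monitoring that SRM tools provide.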
Now, I can guess what you’re thinking: “OK, smart guy, I’ve got more than 100 TB of storage and hundreds of applications. How do I get from ‘thick’ to ‘thin’ without a major disruption to the organization?” First, select a “thin aware” file system. “Thin aware” file systems are essential to staying thin over time. Without one, a “thin” volume will grow fat over time, requiring manual intervention and downtime. Second, look for a data movement tool that works across any operating system, is storage-hardware independent, can move data from “thick” to “thin” online (no application downtime) and can automatically reclaim the unused storage. Between the two, you’ll get thin and stay thin, technologically speaking. Can’t help with your waistline, though.
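The essence of a thick-to-thin move is that zero-filled blocks are never copied, so the thin target never allocates space for them. A minimal in-memory sketch (a real migration tool also quiesces I/O, tracks changed blocks and handles the final cutover online):

```python
import io

def thick_to_thin_copy(src, dst, block_size=4096):
    """Copy a 'thick' volume to a thin target, skipping all-zero
    blocks so the thin side leaves them unallocated."""
    copied = 0
    while True:
        block = src.read(block_size)
        if not block:
            break
        copied += len(block)
        if block.count(0) == len(block):   # entirely zero-filled block
            dst.seek(len(block), 1)        # skip it: stays unallocated
        else:
            dst.write(block)
    return copied

# Toy demo with in-memory "volumes": 4 KB of zeros, then 4 KB of data.
src = io.BytesIO(b"\x00" * 4096 + b"DATA" * 1024)
dst = io.BytesIO()
copied = thick_to_thin_copy(src, dst)
```

Only the second block costs the thin pool anything; the reclaim step the article mentions is the same test run the other way, returning freed blocks to the pool.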
Step 3: Implement deduplication
Deduplication is one of those immediate-impact ways to free up storage space. The biggest generator of duplicate data is backup and recovery (B/R). By its very nature, B/R backs up the same stuff over and over. Dedup appliances can address the issue, but they add another layer of devices to manage in the data center – and more storage along with it. A better solution is deduplication integrated directly into your B/R application, working at a global level (across remote offices, data centers and virtual servers), so that duplicate data is never stored in the first place.
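The mechanics behind dedup are straightforward: split the backup stream into chunks, hash each chunk, and store each unique chunk exactly once. A toy content-addressed store, assuming fixed-size chunking (production products typically use variable-size chunking and much more):

```python
import hashlib

class DedupStore:
    """Toy dedup store: each unique chunk is kept once, keyed by its
    SHA-256 digest; a backup is just a list of digests (a 'recipe')."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # digest -> chunk bytes

    def backup(self, data):
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicates stored once
            recipe.append(digest)
        return recipe

    def restore(self, recipe):
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
monday = b"A" * 8192 + b"B" * 4096
tuesday = b"A" * 8192 + b"C" * 4096   # mostly the same data, backed up again
r1 = store.backup(monday)
r2 = store.backup(tuesday)
# Two full backups, six chunks referenced -- but only 3 unique chunks stored
```

Run globally inside the B/R application, this is why the second Monday-full of a mostly unchanged dataset costs almost nothing.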
Step 4: Archive unstructured data
The bane of a storage manager’s existence is obsolete and orphaned user data. E-mail is often the main culprit. The trouble is, manually removing it costs more in human effort than it saves in disk space. Fortunately, there are products that can automatically move this data to the archive storage of your choice. Best of all, you set the policy – time frame, size or whatever other criteria you choose to trigger the move to archive. All of those duplicate PowerPoint presentations get consolidated into a single instance. Once archived, the data remains fully discoverable for legal requirements, and users still have full access to it. It may take a few seconds longer to recover, but recover it will, without tracking down tapes in a vault. Oh, and did I mention it will dramatically reduce your backup window as well? No need to back up the same thing over and over.
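The two moving parts here – a policy trigger and single-instance storage – fit in a few lines. A toy in-memory sketch; the thresholds and names are illustrative, not any product’s defaults:

```python
import hashlib

def should_archive(size_bytes, last_access_epoch, now,
                   max_age_days=180, min_size_bytes=10 * 2**20):
    """Policy check: untouched for six months AND over 10 MB.
    (Both thresholds are illustrative -- the policy is yours to set.)"""
    age_days = (now - last_access_epoch) / 86400
    return age_days > max_age_days and size_bytes > min_size_bytes

def archive(files, store):
    """Single-instance the data: `files` maps path -> content, `store`
    maps content hash -> one stored copy. Identical decks collapse
    to a single archived instance; each path keeps a digest 'stub'
    it can follow for transparent user access."""
    recipe = {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)   # duplicates stored exactly once
        recipe[path] = digest
    return recipe

store = {}
files = {
    "sales/deck.pptx": b"quarterly deck" * 100,
    "mkting/deck_copy.pptx": b"quarterly deck" * 100,  # the inevitable copy
    "notes.txt": b"unique notes",
}
recipe = archive(files, store)
# Three files, two stored instances -- the duplicate deck costs nothing
```

Because the backup job now sees only the stubs plus one archived instance, the backup-window savings come along for free.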
None of these steps depends on the others. Any one of them will extend the life of your current storage infrastructure. Taken together, they may let you ride out the current economic downturn without buying a single megabyte of additional capacity.