Archive for the ‘Guest Blogger’ Category

CDP: Blurring the Line Between High Availability and Backup

September 8th, 2009 2 comments

Josef Pfeiffer

For as long as people have been protecting data, there has been a myriad of products to help. High availability and backup are two general categories of products that can assist, but they offer very different benefits. On one end of the spectrum, high availability includes technologies like clustering, replication and shared file systems, which allow for near-zero recovery time when a problem occurs. If a clustered server fails, it automatically fails over to another server, helping to ensure the application stays up and running. What high availability lacks is the ability to roll back to older points in time. For this reason, high availability is almost always complemented with backup products that make additional copies of the data at specific moments in time. Together they let you set recovery point objectives and recovery time objectives tailored to the importance of the data.

Recently, however, continuous data protection (CDP) has started to blur the line between these two separate product categories. CDP is often, and correctly, viewed as a different way to protect data: it tracks all changes to a disk continuously, block by block, as opposed to at scheduled points in time. CDP's key differentiator is how it changes recovery. Backup products always store data in a different location, whether on tape or on (deduplicated) disk, and that data has to go through some process to get copied back to its original location. You simply can't run your server or application off of that backup storage.

CDP changes this by virtualizing the backup storage and presenting read/write volumes that can be used directly. Because you no longer have to copy data back to another location, recovery time drops to near zero. Sound familiar? Yep, just like high availability. Can't find replacement storage in production when a problem occurs? No problem: run the application off the CDP server until a more permanent recovery option is available, and fail the data back once things are fixed. The benefit is near-zero downtime. And if corruption makes its way into the CDP store, you can simply rewind and present a virtualized disk volume of how the original volume looked at any previous point in time.
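
To make the block-by-block tracking and "rewind" idea concrete, here is a minimal sketch in Python. It is not any vendor's implementation; the journal structure, class name and the view_at method are all hypothetical, meant only to show how a log of timestamped block writes lets you materialize a view of a volume as it existed at any earlier moment.

```python
import bisect

class CDPJournal:
    """Toy CDP journal: every block write is recorded with a timestamp,
    so a view of the volume can be materialized as of any past moment."""

    def __init__(self):
        self.history = {}   # block number -> list of (timestamp, data), in write order

    def write_block(self, block_no, data, ts):
        self.history.setdefault(block_no, []).append((ts, data))

    def view_at(self, point_in_time):
        """Return {block_no: data} as the volume looked at point_in_time
        (the 'rewind' described above)."""
        view = {}
        for block_no, writes in self.history.items():
            timestamps = [t for t, _ in writes]
            idx = bisect.bisect_right(timestamps, point_in_time) - 1
            if idx >= 0:                       # block existed by then
                view[block_no] = writes[idx][1]
        return view

# Write good data at t=1, corruption arrives at t=3, then rewind to t=2.
journal = CDPJournal()
journal.write_block(0, b"good data", ts=1.0)
journal.write_block(0, b"corrupted!", ts=3.0)
print(journal.view_at(2.0))   # {0: b'good data'} -- the pre-corruption view
```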

While there is usually a trade-off between recovery points and recovery time, CDP gets pretty close to near zero on both. Not every application needs high availability or CDP, but it is becoming an easy option to add to your existing data protection environment.

Taming the Storage Budget Beast

August 24th, 2009 No comments

Phil Goodwin

Some economic experts think that the economy is improving – or at least getting worse less fast. Let's hope so. But for all you IT managers, the budget situation for the rest of 2009 and 2010 is likely to remain tight. Storage, which is consuming an increasing share of the CapEx budget, will be heavily impacted. Nevertheless, business continues – you need to address growing data volumes and increasingly stringent SLAs without increasing headcount or CapEx. There are lots of platitudes about doing more with less – insert your favorite here. Fortunately, there are some simple strategies for coping. Obviously, the best choice is to make do with what you have without sacrificing results or the budget.

Listed below are four steps to making the most of what you have. But first, a little math. According to The InfoPro, average array utilization in data centers is just 35%. The average storage growth rate is pegged at 50% compounded annually by some accounts. Thus, an "average" IT organization could go fully two years before hitting 80% utilization, the top end of the best-practice range. Of course, the trick is finding and enabling that storage.
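
To check that math (assuming utilization simply compounds at the data growth rate), here is a quick back-of-the-envelope calculation:

```python
utilization = 0.35      # The InfoPro's average array utilization
growth = 1.50           # 50% data growth, compounded annually
years = 0
while utilization * growth <= 0.80:     # stay under the 80% best-practice ceiling
    utilization *= growth
    years += 1
print(years, f"{utilization:.0%}")      # 2 years, ~79% utilization
```

Now for the steps.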

Step 1: Find out what you have

In contrast to The InfoPro numbers, user surveys report 70%-80% utilization. Who's telling the truth? Probably both – the discrepancy lies in how you measure. Utilization can be measured by data written to disk (which might explain The InfoPro numbers), by storage provisioned (which might explain the user numbers) or by other measures. The problem is, regardless of the measurement, few users go through the laborious task of measuring every LUN, adding them together and doing the math on a regular basis. Even fewer have visibility across the enterprise to generate a comprehensive number.

In the mainframe world, nearly all shops have a storage resource management (SRM) tool. In UNIX/Windows environments, very few do. Yet SRM products can give – and maintain – the utilization data essential to optimizing storage. A good SRM tool not only aggregates data and gives visibility across the enterprise, but drills down to find out which LUNs are over-provisioned and which are over-utilized. Of course, these tools can do much more, but unlocking tens or hundreds of TB of available space alone makes them worth the effort.
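
For a sense of what that roll-up looks like, here is a toy version of the calculation an SRM tool automates: sum data written versus storage provisioned across every LUN, then flag the outliers. The LUN names and numbers are made up for illustration.

```python
luns = [
    # (name, provisioned GB, written GB) -- illustrative values only
    ("db01_lun0",   500, 410),
    ("web01_lun0",  750,  90),
    ("mail_lun3",  1000, 220),
]

provisioned = sum(p for _, p, _ in luns)
written = sum(w for _, _, w in luns)
print(f"overall utilization: {written / provisioned:.0%}")   # data written vs. provisioned

for name, p, w in luns:
    used = w / p
    if used < 0.20:
        print(f"{name}: over-provisioned ({used:.0%} used)")
    elif used > 0.80:
        print(f"{name}: over-utilized ({used:.0%} used)")
```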

Step 2: Adopt thin provisioning

Nearly all Tier 1 and Tier 2 storage array vendors currently support thin provisioning. There are lots of good resources on the Web to explain thin provisioning if you're not familiar with it, so we won't digress here. Bottom line: thin provisioning lets you pool all the over-provisioned storage and make it available to every application on an as-needed basis. No more guessing how much storage to provision to a given LUN, and no more LUN-shrinking and expanding exercises.

Now, I can guess what you're thinking: "OK, smart guy, I've got more than 100 TB of storage and hundreds of applications. How do I get from 'thick' to 'thin' without a major disruption to the organization?" First, select a "thin aware" file system. "Thin aware" file systems are essential to staying thin over time; without one, a "thin" volume will grow fat again and require manual intervention and downtime to slim down. Second, look for a data movement tool that works across any operating system, is storage hardware independent, can move data from "thick" to "thin" online (no app downtime), and can automatically reclaim the unused storage. Between the two, you'll get thin and stay thin, technologically speaking. Can't help with your waistline, though.
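
For readers new to the concept, here is a minimal sketch of the mechanism (not any array's implementation): each volume advertises a large virtual size, but physical extents are drawn from a shared pool only when a block is actually written. The class names and sizes are hypothetical.

```python
class ThinPool:
    """Shared pool of physical extents backing many thin volumes."""

    def __init__(self, physical_extents):
        self.free_extents = physical_extents

    def allocate(self):
        if self.free_extents == 0:
            raise RuntimeError("pool exhausted -- time to add capacity")
        self.free_extents -= 1


class ThinVolume:
    """Advertises a large virtual size; consumes physical extents only on write."""

    def __init__(self, pool, virtual_extents):
        self.pool = pool
        self.virtual_extents = virtual_extents   # what the host sees
        self.mapped = {}                         # extent number -> data

    def write(self, extent_no, data):
        if extent_no not in self.mapped:         # allocate on first write only
            self.pool.allocate()
        self.mapped[extent_no] = data


pool = ThinPool(physical_extents=100)
vol = ThinVolume(pool, virtual_extents=1000)     # provisioned far beyond physical capacity
vol.write(0, b"hello")
print(pool.free_extents)                         # 99 -- only what was written is consumed
```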

Step 3: Implement deduplication

Deduplication is one of those immediate-impact schemes to free up storage space. The biggest generator of duplicate data is backup and recovery (B/R): by its very nature, B/R backs up the same stuff over and over. Dedup appliances can address the issue, but they add another layer of devices to manage in the data center, plus more storage behind them. A better solution is deduplication integrated directly into your B/R application, working at a global level (across remote offices, data centers and virtual servers) so that duplicate data is never stored in the first place.
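
The core idea behind any of these products is content fingerprinting: split data into chunks, store each unique chunk once, and keep only references for repeats. A minimal sketch follows, using fixed-size chunks and SHA-256 digests; it is not the algorithm of any particular B/R application, which would typically use variable-size chunking and much more machinery.

```python
import hashlib

class DedupStore:
    """Toy deduplicating store: identical chunks are kept exactly once."""

    CHUNK_SIZE = 4096

    def __init__(self):
        self.chunks = {}      # sha256 digest -> chunk bytes

    def put(self, data):
        """Store data, returning the list of chunk fingerprints (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.CHUNK_SIZE):
            chunk = data[i:i + self.CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # stored once, however often it is seen
            recipe.append(digest)
        return recipe

    def get(self, recipe):
        return b"".join(self.chunks[d] for d in recipe)


store = DedupStore()
backup_monday = store.put(b"same old data" * 1000)
backup_tuesday = store.put(b"same old data" * 1000)   # second full backup, zero new chunks
print(len(store.chunks))   # unique chunks stored once despite two full backups
```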

Step 4: Archive unstructured data

The bane of a storage manager's existence is obsolete and orphaned user data. E-mail is often the main culprit. Trouble is, manually removing it costs more in human effort than it saves in disk space. Fortunately, there are products that can automatically move this data to the archive storage of your choice. Best of all, you get to set the policy – time frame, size or whatever other criteria you decide should trigger the move to archive. All of those duplicate PowerPoint presentations get consolidated into a single instance. Once archived, the data should still be fully discoverable for legal requirements, and users still have full access to it. It may take a few seconds to recover, but recover it will, without tracking down tapes in a vault. Oh, by the way, did I mention it will dramatically reduce your B/R window as well? No need to back up the same thing over and over.
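
Under the hood, policy-driven archiving amounts to walking the file tree and moving anything that matches your criteria. A toy sketch of that loop is below; the age and size thresholds and the paths are made-up policy values, not product defaults, and real products leave a stub or link behind so users keep transparent access.

```python
import shutil
import time
from pathlib import Path

# Hypothetical policy: archive files untouched for 180+ days or larger than 100 MB.
MAX_AGE_DAYS = 180
MAX_SIZE_BYTES = 100 * 1024 * 1024
SOURCE = Path("/shares/users")       # illustrative paths
ARCHIVE = Path("/archive/users")

def should_archive(path: Path) -> bool:
    stat = path.stat()
    age_days = (time.time() - stat.st_mtime) / 86400
    return age_days > MAX_AGE_DAYS or stat.st_size > MAX_SIZE_BYTES

def archive_tree(source: Path, archive: Path) -> None:
    for path in source.rglob("*"):
        if path.is_file() and should_archive(path):
            target = archive / path.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))   # a real product would leave a stub behind

archive_tree(SOURCE, ARCHIVE)
```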

None of these steps are dependent upon the others. Any of them will extend the life of your current storage infrastructure. Taken together, you may be able to ride out the current economic downturn without buying a single MB of additional capacity.

First Guest Post Coming

August 21st, 2009 No comments

Late last month, I wrote a post extending an open invitation to readers to contribute a guest post on the StorageNerve Blog.

After some careful selection, the first guest post will be released on Monday – stay tuned. Over the month of September you will see some additional guest posts on the StorageNerve Blog.

I hope you enjoy the post, and that these guest topics bring readers a variety of subjects that I personally have not been able to cover.

If you feel you can contribute a post about storage, virtualization or related topics, please feel free to get in touch with me on Twitter or through the Contact link. The requirements for a guest blog post are on the Invitation post.

Enjoy reading!!!

Cheers, @StorageNerve

Invitation

July 28th, 2009 No comments

An open invitation to write a guest blog post on the StorageNerve Blog

Hello All Readers,

As the topic of this blog post states, I am extending an open invitation to you to write a blog post on the StorageNerve Blog at http://storagenerve.com

Here are a few requirements for a guest blog post:

  1. Topics of discussion could be around Storage, Virtualization, Cloud Computing, Technology Analysis, or explaining the differences between technologies.
  2. No negative comments about any manufacturer.
  3. Minimum one page, maximum five pages.
  4. The article should not be posted on any other site.
  5. You can be independent, a consultant, an employee of a partner or a manufacturer, or someone who already has a blog, and still contribute as a guest blogger.

Along with the blog post, we can include a picture on the post identifying you as a guest blogger. Other contact information, such as your name, email address and any social media signatures, will be included as well.

If this is something you would like to do, please feel free to email me at devang @ storagenerve.com or send me a DM on Twitter @storagenerve.

Cheers, @storagenerve