
Storage Resource Analysis (SRA): Part 9

April 29th, 2009


 

The End Result

Continuing the blog posts on Storage Resource Analysis (SRA), this will be the final post in the series. It focuses on the end result of running an analysis in our storage environment.

 

To read the previous blog posts on Storage Resource Analysis (SRA)

Storage Resource Analysis (SRA): Part 1: Storage Resource Analysis and Storage Economics

Storage Resource Analysis (SRA): Part 2: The IT – Storage World of 2009

Storage Resource Analysis (SRA): Part 3: The IT – Storage Budgets of 2009

Storage Resource Analysis (SRA): Part 4: Some Fundamental Questions

Storage Resource Analysis (SRA): Part 5: Facts about your Data

Storage Resource Analysis (SRA): Part 6: Inconsistencies in Storage Environments

Storage Resource Analysis (SRA): Part 7: The Technical Case

Storage Resource Analysis (SRA): Part 8: The Business Case

Storage Resource Analysis (SRA): Part 9: The End Result

 

In this blog post we will try to wrap up the important points discussed in the previous posts.

Here is how storage analysis of your infrastructure helps you:

1) Reduce CapEx

2) Reduce OpEx

3) Reduce Total Cost of Ownership (TCO)

4) Require no CapEx for implementation

5) OpEx savings should pay for the analysis through improved efficiency and higher utilization

6) Deliver immediate ROI

7) Make sure your numbers are not arbitrary; they have to be real dollars, not a 5-year plan to consolidate your assets. Remember the phrase "immediate ROI."

8) Understand how much you will be paying at the front end of the deal, how much you will be paying as an ongoing cost, how much upgrades will cost, how many resources you will need to deploy (hardware, software, licenses, training, manpower), how reporting works, etc.

9) Gain operational efficiency

10) The process should be agentless

11) It should work cross-platform (EMC, HDS, NetApp, 3Par, IBM, HP)

12) Data collection should be possible during business hours, should be lightweight, and should more or less not require change controls

13) Data should be collected from the fewest possible places (hosts) while still giving a full representation of both the host environment and the storage environment

14) Don't analyze your environment based on what someone else is using; instead, find what best fits your environment based on your business processes, rules and needs. Do not just evaluate an OEM operational tool; the idea is to look beyond it

15) PBs of storage should be analyzed within hours, not months

16) Minimum training

17) Maximum drill-down in reports

18) Reports for the various roles within an organization: storage operators, storage admins, host admins, storage managers, infrastructure managers, the CIO's office

19) Check how your configurations are set up in your environment

20) Check how your tiering is set up in your environment

21) Check for inconsistencies

22) Check for reclamation
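The reclamation and utilization checks above can be sketched in a few lines. The following is a minimal, hypothetical example: the array names and capacity figures are illustrative, not from any real tool, and it assumes you have already collected allocated and used capacity per array.

```python
# Hypothetical sketch of a cross-platform reclamation check: given capacity
# figures collected per array (illustrative numbers, not real vendor data),
# report utilization and reclaimable storage.

def reclamation_report(arrays):
    """Summarize utilization and reclaimable TB per array and in total."""
    report = []
    for a in arrays:
        allocated = a["allocated_tb"]
        used = a["used_tb"]
        reclaimable = allocated - used          # allocated but unused capacity
        utilization = used / allocated * 100    # percent of allocation in use
        report.append({
            "array": a["name"],
            "utilization_pct": round(utilization, 1),
            "reclaimable_tb": round(reclaimable, 1),
        })
    total = sum(r["reclaimable_tb"] for r in report)
    return report, total

# Illustrative data for three arrays from different vendors
arrays = [
    {"name": "EMC-DMX-01", "allocated_tb": 120.0, "used_tb": 66.0},
    {"name": "HDS-USP-02", "allocated_tb": 80.0, "used_tb": 52.0},
    {"name": "NTAP-FAS-03", "allocated_tb": 40.0, "used_tb": 30.0},
]

report, total_reclaimable = reclamation_report(arrays)
```

Even this toy calculation makes the CapEx argument concrete: 92 TB of already-purchased but unused capacity is storage you do not have to buy this year.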

 

The points above should help you get much closer to the goal of 2009: "do more with less."

 

Storage analysis is not something you should run once. As an organization, establish a team of engineers responsible for increasing the efficiency and utilization of your storage environment. Don't forget that storage accounts for 30 to 35% of your IT budget. Better efficiency will help you save millions on the front end (CapEx) and millions on the back end (OpEx).

 

The "Practice of Storage Economics," which every OEM seems to be jumping on these days, should be followed within your organization as well.

 

It has to be made a practice, not just a one-time reclamation exercise. The best analogy is a house: we live in it, and it needs regular cleaning, repairs and ongoing work to keep it in shape. Storage is the same way; it needs continuous work.

 

Experience

We have been talking to a large manufacturer here in the US with in excess of 10 PB of storage. During our initial meeting with them about storage, they mentioned how they have successfully implemented a storage reclamation plan in their organization that has saved them millions of dollars in storage purchases. On top of that, thanks to their storage economics practice, they have increased their operational efficiency in storage and thereby reduced their OpEx; again, savings that would account for millions.

It is really encouraging to talk to customers like these: they are not driven by an OEM to simply purchase new storage; rather, their internal practices help them achieve what they target.

 

What are your experiences with Storage and have you implemented a Storage Economics practice within your organization?

Storage Resource Analysis (SRA): Part 8

April 28th, 2009


The Business Case

Continuing the blog posts on Storage Resource Analysis (SRA), this post focuses on the business case for why analysis of our storage platforms is important and how it might help us discover inconsistencies in storage environments, eventually saving millions in CapEx and OpEx.

 

To read the previous blog posts on Storage Resource Analysis (SRA)

Storage Resource Analysis (SRA): Part 1: Storage Resource Analysis and Storage Economics

Storage Resource Analysis (SRA): Part 2: The IT – Storage World of 2009

Storage Resource Analysis (SRA): Part 3: The IT – Storage Budgets of 2009

Storage Resource Analysis (SRA): Part 4: Some Fundamental Questions

Storage Resource Analysis (SRA): Part 5: Facts about your Data

Storage Resource Analysis (SRA): Part 6: Inconsistencies in Storage Environments

Storage Resource Analysis (SRA): Part 7: The Technical Case

Storage Resource Analysis (SRA): Part 8: The Business Case

Storage Resource Analysis (SRA): Part 9: The End Result

 

It is important from a business standpoint that each aspect of this storage analysis project yields the results we need to make savvy business decisions about our storage estate. At the same time, we want to verify that the analysis does not cost us valuable CapEx dollars, which are more or less unavailable in 2009.

Some of the important business requirements, decisions and outcomes related to storage analysis are highlighted below:

1) Initial purchase and setup fees (CapEx dollars) for analysis software; if possible, let's keep this at zero for storage analysis.

2) Ongoing cost (OpEx dollars) for analysis software; this includes your training, upgrades, engineering expense, etc. Let's keep this at zero as well.

3) Given the scenarios above, what does the Total Cost of Ownership (TCO) look like now?

4) Let's add a twist: let's make this Storage-Analysis-On-Demand. You pay only for what you analyze.

5) No ongoing monthly cost, no setup fees, no upgrades, no CapEx dollars. Too good to be true? Let's find out how we can achieve this.

6) Maybe SaaS (Software as a Service) is the way to go: no firewall issues, no security concerns, no licenses, no upgrades, no deployment.

7) From an ROI (Return on Investment) standpoint, you should be able to reclaim your storage right now, not 6 or 12 months later when the financial situation changes.

8) If you have multi-site datacenters with single- or double-digit PB of storage, you know how painful it is to deploy an enterprise-wide operational tool (time, effort, people, testing, training, meetings, outages, VMware, supported/unsupported configurations, service packs, etc.).

9) Let's add another twist: no licensing cost for deployment, no charges per port, per report, per array or per TB of raw data, no upgrades, no Windows licensing, no VMware licensing, no infrastructure software licensing; rather, a flat fee per TB to analyze a multi-PB environment.

10) Minimum cost to deploy; maybe all the storage and host related data for our environment can be collected from a single server.
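The TCO comparison in the points above is simple arithmetic. Here is a back-of-the-envelope sketch contrasting a traditional licensed tool (up-front CapEx plus ongoing OpEx) with a flat per-TB, pay-for-what-you-analyze model. All fees and capacities are illustrative assumptions, not vendor pricing.

```python
# Back-of-the-envelope TCO comparison between a licensed enterprise tool and
# an on-demand (SaaS) analysis model. All figures below are hypothetical.

def licensed_tool_tco(license_fee, annual_opex, years):
    """Up-front license (CapEx) plus ongoing OpEx over the period."""
    return license_fee + annual_opex * years

def on_demand_tco(fee_per_tb, tb_analyzed_per_year, years):
    """Flat per-TB analysis fee; you pay only for what you analyze."""
    return fee_per_tb * tb_analyzed_per_year * years

# Hypothetical 3-year comparison for a 2 PB (2,000 TB) environment
licensed = licensed_tool_tco(license_fee=250_000, annual_opex=100_000, years=3)
on_demand = on_demand_tco(fee_per_tb=50, tb_analyzed_per_year=2_000, years=3)
```

Under these assumed numbers the on-demand model costs roughly half as much over three years, and, just as importantly, shifts the spend entirely from CapEx to OpEx.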

 

Do we have a Storage Economics Practice setup within our Storage environment to consistently increase our utilization, efficiency, reclamation and lower our outages & cost?

The above are some of the key points you should consider before deciding to deploy any storage analysis software in your environment.

 

Experience

This is a fact:

An MNC (multinational company with double-digit PB of storage), by successfully implementing a storage reclamation, efficiency and utilization project, has managed to reduce its CapEx by 80% in the first year. It plans to reduce CapEx by a further 50% year over year.

Another MNC, by automating certain storage processes for report creation, cut the effort for monthly management reports from 20 man-days to 2 hours.

 

How much CapEx and OpEx savings does that equate to?

Dave Donatelli's departure and what is next?

April 28th, 2009


 

News hit the wire this afternoon about the latest move by Dave Donatelli (President, EMC Storage Division) from EMC to HP. In his new job, Dave will report to Ann Livermore at HP and will run the Server, Storage and Networking business, also known as the ESS (Enterprise Storage & Servers) Division, worth about US$20B; in dollar terms, that is larger than his current responsibilities at EMC.

Dave has been with EMC since his early college days and, of late, was more or less talked about as the next CEO of EMC. Something must have changed over this past year to make him make this move. When people in positions like his, well respected, highly accomplished, groomed to be the next CEO, passionate about what they do, jump ship, there is usually more to the story. There was a new appointment to the EMC board in 2008, Mr. Hu; I think it was a quiet appointment, but a big one, as Hu brings great industry M&A knowledge and is well respected within the consulting, investment and IT communities.

Jumping back to Mr. Donatelli: he has been a great icon of the EMC storage division and has played a major role in shaping where EMC stands today from a storage perspective.

As news flows over the next couple of weeks, we should see a shift within EMC's top management, with some new faces coming into the picture. The latest word is that Mr. Frank Hauck, who runs global marketing and customer quality, will be taking over Dave Donatelli's role.

So the move for Mr. Donatelli comes at a very crucial time: a big-bang redo of the EMC Symmetrix products, and then, at the after-launch party, a resignation from the company. With this move, we may well see other folks within EMC follow in his footsteps. The question becomes: who within the storage blogosphere will end up among the executive profiles on EMC's leadership chart?

Good luck and Good Wishes to a bright, smart man, Mr. Donatelli.

 

Here are some other blog posts covering Mr. Donatelli's move: here, here, here, and I am sure more will follow tomorrow morning.

Storage Resource Analysis (SRA): Part 7

April 27th, 2009


 

The Technical Case

Continuing the blog posts on Storage Resource Analysis (SRA), this post focuses on the technical case for why analysis of your storage platforms is important and how it might help you discover inconsistencies in storage environments.

 

To read the previous blog posts on Storage Resource Analysis (SRA)

Storage Resource Analysis (SRA): Part 1: Storage Resource Analysis and Storage Economics

Storage Resource Analysis (SRA): Part 2: The IT – Storage World of 2009

Storage Resource Analysis (SRA): Part 3: The IT – Storage Budgets of 2009

Storage Resource Analysis (SRA): Part 4: Some Fundamental Questions

Storage Resource Analysis (SRA): Part 5: Facts about your Data

Storage Resource Analysis (SRA): Part 6: Inconsistencies in Storage Environments

Storage Resource Analysis (SRA): Part 7: The Technical Case

Storage Resource Analysis (SRA): Part 8: The Business Case

Storage Resource Analysis (SRA): Part 9: The End Result

 

From a technology standpoint, it's very important to understand what storage analysis will do and how it can bring more value, efficiency and utilization to your environment. A few of the technical questions it can help you answer:

1) How much headroom (total possible growth) do we have in our storage environment (with drill-down to array and LUN)?

2) How much reclaimable storage do we have in our environment (with drill-down to array and LUN)?

3) How much immediately deployable storage do we have in our storage environment (and where)?

4) Can we predict capacity needs and future growth?

5) The information above should be current as of today, not something you started compiling 3 months ago.

6) In large, volatile storage environments things change every second; it is hard to keep track of your storage configurations, relationships, headroom, capacity and reclamation.

7) Are you maintaining spreadsheets or Access databases to keep track of your applications, application owners, WWNs, servers, zones, etc.? You need to consider something better soon.

8) Do you enforce tiering in your environment? How much data do you have in each tier?

9) Do we follow an ILM approach? How much data needs to be migrated to different tiers based on business needs and rules? (We should see FAST later this year, which should automate this process on the V-Max.)

10) Do we have any configuration issues in our environment that have caused major storage outages (single-path hosts, multipath hosts with only one active path, LUN masking issues, zoning issues, BCV issues, other configuration issues)?

11) How many times in the past 6 months have we had a major application outage, and what caused it? How much did we pay in penalties (OpEx dollars)?

12) If we are subject to compliance regimes (SEC, Sarbanes-Oxley, HIPAA, etc.), is our data compliant in terms of replication, policies, etc.?

13) Do we have any manual processes for chargebacks and billbacks? If so, what can we do to automate them?

14) Do we know how the LUNs in our environment are set up, and the relationships they have with LUNs on other arrays in terms of replication, BCVs, snaps, clones, SRDF, etc.?

15) Do we know how storage is growing in our environment (trend analysis)?

16) What sorts of reports are available for the analysis you are performing?

17) Be careful not to settle for a nice topology diagram of what is connected where; being able to drill down in real time to LUN-level detail is important.

18) With any storage analysis product, understand before the project starts how much work is involved: training, training-related cost, ease of use, number of users, depth of drill-down, and how easy it will be to analyze your environment.

19) Do we have a storage economics practice set up within our storage environment to consistently increase our utilization, efficiency and reclamation and to lower our outages and cost?
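The headroom and trend-analysis questions (points 1, 4 and 15) boil down to fitting growth to historical usage. Here is a minimal sketch under a simple linear-growth assumption; the six months of sample data and the 100 TB array size are illustrative, not measurements from any real environment.

```python
# Simple capacity trend analysis: fit a straight line to monthly used-capacity
# samples and estimate when the array runs out of headroom. The data and the
# linear-growth assumption are illustrative only.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def months_until_full(used_tb_by_month, capacity_tb):
    """Project linear growth forward to the month the array fills up."""
    months = list(range(len(used_tb_by_month)))
    slope, intercept = linear_fit(months, used_tb_by_month)
    if slope <= 0:
        return None  # flat or shrinking usage: no projected fill date
    return (capacity_tb - intercept) / slope

# Six months of used capacity on a hypothetical 100 TB array
usage = [40.0, 44.0, 48.0, 52.0, 56.0, 60.0]  # growing 4 TB per month
remaining = months_until_full(usage, 100.0)
```

A real analysis tool would do this per array and per tier, and with more robust models than a straight line, but even this sketch turns raw monthly numbers into an answer to "when do we need to buy more?"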

 

Experience

We had a conference call late last week with a potential customer about our storage offerings: a large insurance company that has acquired quite a few companies over the past 5 years, is growing, and is currently going through data center consolidation projects.

During the call, we asked what they were doing about reclamation and other storage economics. To my surprise, they answered that they had purchased an OEM operational software package about 5 years ago and didn't like it; different people within the organization still use it, but it isn't giving them the results they need and is more or less used only for alerts.

They have now purchased, and are going through an implementation of, another OEM's operational software for data reclamation, analysis and monitoring. The customer went on to say that they have been trying to implement this software in their environment for the past 4 months.

The point I am trying to make is that whatever these deployments are, they have to be easy, cost effective, and not time- and resource-consuming; they should not consume your CapEx dollars or drain your OpEx dollars (training, implementation, outages).

The solution has to be lightweight and easily deployable, and it should yield results in a short time (hours or days rather than months) while still analyzing your environment at a very detailed level.

 

What are you using today to manage your storage estate, whether several hundred TB or an enormous double-digit PB?