
Storage Resource Analysis (SRA): Part 7



The Technical Case

Continuing the blog series on Storage Resource Analysis (SRA), this post focuses on the technical case for analyzing your storage platforms and how that analysis can help you discover inconsistencies in your storage environments.


To read the other blog posts in the Storage Resource Analysis (SRA) series:

Storage Resource Analysis (SRA): Part 1: Storage Resource Analysis and Storage Economics

Storage Resource Analysis (SRA): Part 2: The IT – Storage World of 2009

Storage Resource Analysis (SRA): Part 3: The IT – Storage Budgets of 2009

Storage Resource Analysis (SRA): Part 4: Some Fundamental Questions

Storage Resource Analysis (SRA): Part 5: Facts about your Data

Storage Resource Analysis (SRA): Part 6: Inconsistencies in Storage Environments

Storage Resource Analysis (SRA): Part 7: The Technical Case

Storage Resource Analysis (SRA): Part 8: The Business Case

Storage Resource Analysis (SRA): Part 9: The End Result


From a technology standpoint, it's very important to understand what storage analysis will do and how it can bring more value, efficiency, and utilization to your environments. A few of the technical questions it can help you answer:

1) How much headroom (total possible growth) do we have in our storage environment (with drill-down to array and LUN)?

2) How much reclaimable storage do we have in our environment (with drill-down to array and LUN)?

3) How much immediately deployable storage do we have in our storage environment (and where)?

4) Can we predict capacity needs and plan for future growth?

5) The information obtained above should be current as of today, not something you started compiling three months ago.

6) In large, volatile storage environments things change every second; it's hard to keep track of your storage configurations, relationships, headroom, capacity, and reclamation.

7) Are you maintaining spreadsheets or Access databases to keep track of your applications, application owners, WWNs, servers, zones, etc.? You need to consider something better soon.

8) Do you enforce tiering in your environment, and how much data do you have on each tier?

9) Do we follow an ILM approach? How much data needs to be migrated to different tiers based on business needs and rules? (We should see FAST later this year, which should automate the process on V-Max.)

10) Do we have any configuration issues in our environment that have caused major storage outages (single-path hosts, multipath hosts with only one path active, LUN masking issues, zoning issues, BCV issues, other configuration issues)?

11) How many times in the past six months have we had a major application outage, and what caused it (and how much did we pay in penalties, in OPEX dollars)?

12) If we are subject to any compliance regimes (SEC, Sarbanes-Oxley, HIPAA, etc.), is our data compliant in terms of replication, policies, etc.?

13) Do we have any manual processes for chargeback and billback? If so, what can we do to automate them?

14) Do we know how the LUNs in our environment are set up and the relationships they have with LUNs on other arrays in terms of replication, BCVs, Snaps, Clones, SRDF, etc.?

15) Do we know how storage is growing in our environment (trend analysis)?

16) What sorts of reports are available to you for the analysis you are performing?

17) Be careful not to settle for a nice topology diagram of what is connected where; being able to drill down in real time to LUN-level details is what matters.

18) With any storage analysis product, understand before the project starts how much work is involved, how much training (and training-related cost) is required, its ease of use, the number of users supported, the depth of drill-down, and how easy it will be to analyze your environment.

19) Do we have a Storage Economics practice set up within our storage environment to consistently increase utilization, efficiency, and reclamation, and to lower outages and cost?
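To make questions 1, 2, and 15 concrete, here is a minimal sketch of how headroom, reclaimable capacity, and a naive growth trend could be computed from a per-LUN inventory. The `Lun` record and its fields are hypothetical (any real SRM tool has its own data model); the point is only that these answers reduce to simple arithmetic once the inventory is current.

```python
from dataclasses import dataclass

@dataclass
class Lun:
    array: str           # array the LUN lives on (hypothetical field)
    allocated_gb: float  # capacity carved out for the LUN
    mapped: bool         # is the LUN masked/zoned to any host?

def headroom_gb(raw_capacity_gb: float, luns: list) -> float:
    """Question 1: raw capacity not yet allocated to any LUN."""
    return raw_capacity_gb - sum(l.allocated_gb for l in luns)

def reclaimable_gb(luns: list) -> float:
    """Question 2: allocated but unmapped LUNs are reclamation candidates
    (a deliberately simple rule; real tools apply many more checks)."""
    return sum(l.allocated_gb for l in luns if not l.mapped)

def months_until_full(raw_capacity_gb: float, used_history_gb: list) -> float:
    """Question 15: linear trend from monthly used-capacity samples."""
    if len(used_history_gb) < 2:
        return None
    growth = (used_history_gb[-1] - used_history_gb[0]) / (len(used_history_gb) - 1)
    if growth <= 0:
        return float("inf")
    return (raw_capacity_gb - used_history_gb[-1]) / growth

# Toy inventory: one mapped LUN, one allocated-but-unmapped LUN
luns = [Lun("ARRAY01", 500, True), Lun("ARRAY01", 200, False)]
print(headroom_gb(1000, luns))                        # 300.0 GB unallocated
print(reclaimable_gb(luns))                           # 200.0 GB reclaimable
print(months_until_full(1000, [300, 350, 400, 450]))  # 11.0 months at 50 GB/month
```

The real value of an SRA tool is keeping the inventory behind such calculations accurate to today (question 5), not the arithmetic itself.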



We had a conference call with a potential customer late last week about our storage offerings. This is a large insurance company that has acquired quite a few companies over the past five years, is growing, and is currently going through data center consolidation projects.

During the call, we asked what they were doing for reclamation and other storage economics. To my surprise, they answered: "We purchased an OEM operational software product about five years ago and didn't like it. Different people within the organization still use it, but it's not giving us the results we want; it's more or less used for alerts."

They continued: "We have now purchased, and are going through an implementation of, another OEM's operational software for data reclamation, analysis, and monitoring. We have been trying to implement this software in our environment for the past four months."

The point I am trying to make is this: whatever these deployments are, they have to be easy and cost effective, not time- and resource-consuming, and they should not eat your CAPEX dollars or your OPEX dollars (training, implementation, outages).

The tool has to be lightweight and easily deployable, and it should yield results in a short period of time (hours or days rather than months), yet still be able to analyze your environment at a very detailed level.


What are you using today to manage your several-hundred-TB, or even double-digit-PB, storage estate?