
Posts Tagged ‘EMC’

Clariion SPCollects for CX, CX3, CX4


The following is the procedure for SPCollects on Clariion CX, CX3 and CX4 machines.

If you are running release 13 or above, you can perform SPCollects from the Navisphere Manager GUI.

Using Navisphere, perform the following steps to collect the SPCollects and transfer them to your local drive.

  1. Log in to Navisphere Manager.
  2. Identify the serial number of the array you want to perform SPCollects on.
  3. Expand (+) SP A.
  4. Right-click it and select SPCollects from the menu.
  5. Go to SP B in the same array.
  6. Right-click it and select SPCollects from the menu.
  7. Wait 5 to 10 minutes, depending on how big and how busy your array is.
  8. Right-click SP A and select File Manager from the menu.
  9. In the window, select the zip file SerialNumberOfClariion_spa_Date_Time_*.zip.
  10. Hit the Transfer button to transfer the file to your local computer.
  11. Follow the same process (steps 8, 9 and 10) for SP B from its File Manager.
  12. The SP B file name will be SerialNumberOfClariion_spb_Date_Time_*.zip.

For customers that do not have SPCollects in the menu (running releases below 13), there is a manual way to perform SPCollects using Navicli from your management console or an attached host system.

To gather SPCollects from SP A, run the following commands:

navicli -h xxx.xxx.xxx.xxx spcollect -messner

Wait for 5 to 7 minutes.

navicli -h xxx.xxx.xxx.xxx managefiles -list

The name of the SPCollects file will be SerialNumberOfClariion_spa_Date_Time_*.zip

navicli -h xxx.xxx.xxx.xxx managefiles -retrieve

where xxx.xxx.xxx.xxx is the IP address of SP A.

For SP B, follow a similar process to the above; the file you will be looking for is named SerialNumberOfClariion_spb_Date_Time_*.zip,

where xxx.xxx.xxx.xxx is the IP address of SP B.
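Put together, the whole manual sequence can be scripted. The following is a minimal sketch, assuming classic navicli is installed and in your PATH; the SP address is a placeholder, the sleep should be tuned to how busy your array is, and managefiles -retrieve will prompt you for which file to pull:

#!/bin/sh
# Minimal SPCollects sketch; run once against SP A and once against SP B.
SP_IP=xxx.xxx.xxx.xxx   # replace with the IP address of SP A (or SP B)

# Trigger the SP collect on this SP
navicli -h $SP_IP spcollect -messner

# Give the SP time to build the zip (5 to 10 minutes on a busy array)
sleep 600

# List the files on the SP; look for SerialNumberOfClariion_sp*_Date_Time_*.zip
navicli -h $SP_IP managefiles -list

# Retrieve the zip to the local working directory
navicli -h $SP_IP managefiles -retrieve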

SPCollects are very important for troubleshooting the disk array; they give the support engineer all the vital data about the storage array and its environment.

The following data is collected by SPCollects from both SPs:

Ktdump Log files

iSCSI data

FBI data (used to troubleshoot backend issues)

Array data (SP log files, migration info, FLARE code, sniffer, memory, host-side data, FLARE debug data, MetaLUN data, PROM data, drive metadata, etc)

PSM data

RTP data (mirrors, snaps, clones, SAN Copy, etc)

Event data (Windows security, application and system event files)

LCC data

Nav data (Navisphere related data)

To read previous blog posts on Clariion technology, please see the following links:

Clariion Cache: Page Size

Clariion Cache: Read and Write Caching

Clariion Cache: Navicli Commands

Clariion Cache: Idle, Watermark and Forced Flushing

Clariion Basics: DAE, DPE, SPE, Drives, Naming Convention and Backend Architecture

Clariion Flare Code Operating Environment

Or

Tag Clariion on StorageNerve

Dave Donatelli's departure and what is next?

April 28th, 2009


News hit the wire this afternoon about the latest move by Dave Donatelli (President, EMC Storage Division) from EMC to HP. In his new job, Dave will report to Ann Livermore at HP and will handle all of the Server, Storage and Networking business, also known as the ESS (Enterprise Storage & Servers) Division, worth about US $20B, which is more, in dollar terms, than his current responsibilities at EMC.

Dave has been with EMC since his early college days and was, these days, more or less talked about as the next CEO of EMC. Something must have changed over the past year to make him make this move. When people in these positions, well respected, having accomplished a great deal, groomed to be the next CEO, with a passion for what they do, jump ship, there is usually more to the story. There was a new appointment to the EMC board in 2008, Mr. Hu; I think it was a quiet appointment, but a big one, as Hu brings great industry M&A knowledge and is well respected within the consulting, investment and IT business environments.

Jumping back to Mr. Donatelli: he has been a great icon of EMC's Storage division and has played a major role in shaping where EMC stands today from a storage perspective.

As news flows over the next couple of weeks, we should see a shift within EMC's top management, with some new faces coming into the picture. The latest word is that Mr. Frank Hauck, who runs global marketing and customer quality, will be taking over Dave Donatelli's role.

So the move by Mr. Donatelli comes at a very crucial time: a big-bang redo of the EMC Symmetrix products, and then, at the after-launch party, he resigns from the company. With this move, we should more or less see some other folks within EMC follow in his footsteps. The question becomes: who within the storage blogosphere will end up among the executive profiles on EMC's leadership chart?

Good luck and Good Wishes to a bright, smart man, Mr. Donatelli.

 

Here are some other blog posts covering Mr. Donatelli's move: here, here, here, and I am sure more will follow tomorrow morning.

Storage Resource Analysis (SRA): Part 7

April 27th, 2009


The Technical Case

Continuing the series of blog posts on Storage Resource Analysis (SRA), this post focuses on the technical case: why analysis of your storage platforms is important and how it might help you discover inconsistencies in your storage environment.

 

To read the other blog posts in the Storage Resource Analysis (SRA) series:

Storage Resource Analysis (SRA): Part 1: Storage Resource Analysis and Storage Economics

Storage Resource Analysis (SRA): Part 2: The IT – Storage World of 2009

Storage Resource Analysis (SRA): Part 3: The IT – Storage Budgets of 2009

Storage Resource Analysis (SRA): Part 4: Some Fundamental Questions

Storage Resource Analysis (SRA): Part 5: Facts about your Data

Storage Resource Analysis (SRA): Part 6: Inconsistencies in Storage Environments

Storage Resource Analysis (SRA): Part 7: The Technical Case

Storage Resource Analysis (SRA): Part 8: The Business Case

Storage Resource Analysis (SRA): Part 9: The End Result

 

From a technology standpoint, it is very important to understand what storage analysis will do and how it might bring more value, efficiency and utilization to your environment. A few of the technical questions it can help you answer:

1) How much headroom (total possible growth) do we have in our storage environment (with drill-down to array and LUN)?

2) How much reclaimable storage do we have in our environment (with drill-down to array and LUN)? A rough sketch of this kind of roll-up appears after this list.

3) How much immediately deployable storage do we have in our storage environment (and where)?

4) Can we predict capacity planning and future growth?

5) The information obtained above should be current as of today, not something you started working on 3 months ago.

6) In large, volatile storage environments things change every second; it is hard to keep track of your storage configurations, relationships, headroom, capacity and reclamation.

7) Are you maintaining spreadsheets or Access databases to keep track of your applications, application owners, WWNs, servers, zones, etc.? You need to consider something better soon.

8) Do you enforce tiering in your environment? How much data do you have on each tier?

9) Do you follow an ILM approach? How much data needs to be migrated to different tiers based on business needs and rules? (We should see FAST later this year, which should automate that process on the V-Max.)

10) Do we have any configuration issues in our environment that have caused major storage outages (single-path hosts, multipath hosts with only one path active, LUN masking issues, zoning issues, BCV issues, other configuration issues)?

11) How many times in the past 6 months have we had a major application outage, and what caused it (and how much did we pay in penalties, in OPEX dollars)?

12) If we are subject to any compliance regime (SEC, Sarbanes-Oxley, HIPAA, etc.), is our data compliant in terms of replication, policies, etc.?

13) Do we have any manual processes for chargebacks and billbacks? If so, what can we do to automate them?

14) Do we know how the LUNs in our environment are set up and the relationships they have with LUNs on other arrays in terms of replication, BCVs, Snaps, Clones, SRDF, etc.?

15) Do we know how storage is growing in our environment (trend analysis)?

16) What sorts of reports are available to you for the analysis you are performing?

17) Be careful not to settle for a nice topology diagram of what is connected where; being able to drill down in real time to LUN-level details is important.

18) With any storage analysis product, the amount of work involved, the amount and cost of training, ease of use, number of users, depth of drill-down and how easy it would be to analyze your environment all need to be understood before the project starts.

19) Do we have a Storage Economics practice set up within our storage environment to consistently increase our utilization, efficiency and reclamation, and to lower our outages and costs?
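For questions 1 and 2 above, here is a minimal sketch of the per-array roll-up involved, assuming a hypothetical CSV export (array,lun,allocated_gb,used_gb) from whatever reporting tool you use; the file name and the column layout are illustrative only, not any particular product's format:

#!/bin/sh
# headroom.sh - roll up allocated vs. used capacity per array from a CSV export.
# The input format (array,lun,allocated_gb,used_gb) is hypothetical.
awk -F, 'NR > 1 {
    alloc[$1] += $3   # GB allocated on this array
    used[$1]  += $4   # GB actually in use
}
END {
    for (a in alloc)
        printf "%s: allocated=%dGB used=%dGB reclaimable=%dGB\n",
            a, alloc[a], used[a], alloc[a] - used[a]
}' lun_inventory.csv

A real analysis tool does this continuously and in far more detail (masking, zoning, replication relationships), but the arithmetic behind headroom and reclamation questions is essentially this kind of roll-up.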

 

Experience

We had a conference call late last week with a potential customer about our storage offerings. This is a large insurance company that has acquired quite a few companies over the past 5 years, is growing, and is currently going through data center consolidation projects.

During the call, we asked what they were doing for reclamation and other storage economics. To my surprise, they answered that they had purchased an OEM operational software product about 5 years ago and didn't like it; different people within the organization still use it, but it is not giving them the results they want, and it is more or less used for alerts.

They have now purchased, and are going through an implementation of, another OEM's operational software for data reclamation, analysis and monitoring. The customer went on to say that they have been trying to implement this software in their environment for the past 4 months.

The point I am trying to make is that whatever these deployments are, they have to be easy, cost effective, and not time and resource consuming; they should not consume your CAPEX dollars or drain your OPEX dollars (training, implementation, outages).

It has to be lightweight and easily deployable, should yield results in a short time (hours or days rather than months), and should still be able to analyze your environment at a very detailed level.

 

What are you using today to manage your storage estate, whether it is several hundred TB or an enormous double-digit PB?

EMC Symmetrix DMX-4: Components

March 16th, 2009

In my previous posts on the EMC Symmetrix 3, 5, 8 Series and the EMC Symmetrix DMX, DMX-2 Series, we discussed some of the important components that make up those systems; in this post we will discuss some of the important components of the EMC Symmetrix DMX-4.

The EMC Symmetrix DMX-4 consists of 1 System Bay and 1 to 8 scalable Storage Bays. Each Storage Bay can hold up to 240 disk drives, totaling 1,920 drives across 8 Storage Bays, or a 1,024TB system. Systems with special requirements can be configured with 2,400 drives instead of the standard 1,920.

The primary bay is the System Bay, which includes all directors, the service processor, adapters, etc., while the Storage Bays contain all the disk drives.

 

System Bay (1 Bay)

Channel directors: front-end directors (FC, ESCON, FICON, GigE, iSCSI); these are the I/O directors.

Disk directors: back-end directors (DAs); these control the drives in the system.

Global memory directors: mirrored memory is available with the DMX-4; memory director sizes are 8GB, 16GB, 32GB or 64GB, totaling up to 512GB (256GB mirrored).

Disk adapters: Back End Adapters, they provide an interface to connect disk drives through the storage bays.

Channel adapters: Front End Adapters, they provide an interface for host connection (FC, ESCON, FICON, GigE, iSCSI).

Power supplies: 3-phase Delta or Wye configuration; Zone A and Zone B based power supplies, with a maximum of 8 in the system bay.

Power distribution units (PDU): One PDU per zone, 2 in total.

Power distribution panels (PDP): One PDP per zone, 2 in total, power on/off, main power.

Battery backup unit (BBU): 2 battery backup modules, 8 BBU units, providing between 3 and 5 minutes of backup power in case of a catastrophic power failure.

Cooling fan modules: 3 Fans at the top of the bay to keep it cool.

Communications and Environmental Control (XCM) modules: fabric and environmental monitoring; 2 XCMs located at the rear of the system bay. This is the message fabric, the interface between directors, drives, cache, etc. Environmental monitoring is used to monitor all the VPD (Vital Product Data).

Service processor components: keyboard, video, display and mouse. Used for remote monitoring, call home, diagnostics and configuration purposes.

UPS: UPS for the Service Processor

Silencers: foam-lined; different silencers for the System and Storage Bays.
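As an aside, on a host attached to the array with Solutions Enabler installed, SYMCLI can report many of these components directly. A minimal sketch follows, with the Symmetrix ID as a placeholder; exact output varies by SYMCLI and Enginuity version:

# List attached Symmetrix arrays, with model, microcode level and cache size
symcfg list

# List the directors (channel, disk and memory) on one array
symcfg -sid 1234 list -dir all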

 

 

Storage Bay (1 Bay minimum to 8 Bays maximum)

Disk drives: combinations of 73GB, 146GB, 300GB, 400GB, 450GB, 500GB and 1TB drives, with EFDs now available in 73GB, 146GB and 200GB. Speeds of 10K, 15K and 7.2K (SATA) are all compatible, but each RAID group and each drive enclosure should contain only drives of similar speed and type. 15 drives per enclosure, 240 per bay, 1,920 total in the system. If a drive's LED is blue, it is a 2Gb/s drive; if the LED is green, it is a 4Gb/s drive.

Drive Enclosure Units: 16 per Storage Bay, 15 drives per enclosure

Battery Backup Unit (BBU): 8 BBU modules per Storage Bay; each BBU supports 4 drive enclosures

Power Supply, System Cooling Module: 2 per drive enclosure

Link Control Cards: 2 per drive enclosure

Power Distribution Unit (PDU): 1 PDU per zone, 2 in total

Power Distribution Panels (PDP): 1 PDP per zone, 2 in total

 

In the next couple of posts, we will discuss the EMC Symmetrix DMX-4 and some of its design features.