Posts Tagged ‘Clariion’

EMC Clariion Systems: Global Hot Spares & Proactive Hot Spares

July 30th, 2009 No comments

The concept of Global Hot Spares has been supported in Clariion environments since the first generation of FC and CX platforms, and the technology has been carried forward into the CX3 and CX4 platforms. The primary purpose of global hot sparing is to protect the system against disk drive failures.

Take the CX4-960, for example: it can scale up to 960TB of raw storage across as many as 960 disk drives. With a given failure rate per drive, a large drive count means a higher probability that some drive will fail. Every storage manufacturer these days includes some sort of hot sparing technology in its storage subsystems; EMC started offering this technology to its customers as Global Hot Spares. Then came an era of value-add offerings aimed at proactive failures, to minimize the chance of data loss. This brought to the table a technology termed Proactive Hot Spares, where a drive that is starting to fail is identified and a global hot spare is invoked before the drive dies outright.

I believe Flare release 24 started offering Proactive Hot Spares. With this Flare release, customers can proactively invoke a hot spare against a suspect drive through Navisphere Manager or naviseccli.
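
If your release supports it, the proactive copy can also be kicked off from the CLI. Below is a minimal sketch, assuming placeholder values throughout: xxx.xxx.xxx.xxx for the SP IP address and 1_0_4 for the suspect drive (Bus 1, Enclosure 0, Disk 4); the exact copytohotspare syntax can vary slightly between Flare releases.

naviseccli -h xxx.xxx.xxx.xxx copytohotspare 1_0_4 -initiate

Flare then picks a Global Hot Spare using the same selection criteria described later in this post and starts the proactive rebuild.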

Depending on the RAID type implemented, a RAID Group can withstand drive failures and run in a degraded state without data loss or data unavailability. With RAID 6, a RAID Group can survive as many as two drive failures; with RAID 5, RAID 1/0 or RAID 1, a RAID Group can survive one drive failure without data loss.

Drives supported on Clariion CX, CX3, CX4, AX and AX4 systems are typically FC (Fibre Channel), SATA II and ATA drives.

A Global Hot Spare has to be configured on an EMC Clariion system as a single RAID Group (with one drive). Once the RAID Group is created, a LUN has to be bound on it as a Global Hot Spare before it can be activated (a CLI sketch follows).
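
Here is a minimal CLI sketch of that configuration, assuming placeholder values: SP IP xxx.xxx.xxx.xxx, an unused drive at 1_0_14, and 200 for both the RAID Group ID and the LUN number; the exact bind switches can differ between Flare releases.

naviseccli -h xxx.xxx.xxx.xxx createrg 200 1_0_14

naviseccli -h xxx.xxx.xxx.xxx bind hs 200 -rg 200

Once the bind completes, the drive shows up in the Global Hot Spare pool and becomes eligible for selection on the next drive failure.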

The following is the sequence of steps that takes place on a Clariion subsystem related to Global Hot Spares (supported on CX, CX3 and CX4 systems):

  1. Disk Drive failure: A disk drive fails in the system and the Flare code marks it bad.
  2. Hot spare invoked: A preconfigured Global Hot Spare is invoked based on the Global Hot Spare selection criteria.
  3. Rebuild: The Global Hot Spare is rebuilt from the surviving RAID Group members.
  4. Failed drive replaced: The failed disk drive is replaced with a good drive by a Customer Engineer.
  5. Copy Back: The rebuild to the Global Hot Spare has to finish before the new drive starts equalizing. The rebuild or equalize happens in sequential LBA (Logical Block Address) order, not LUN by LUN on the drive.
  6. Return Hot Spare: Once the new drive is synchronized, the hot spare is invalidated (zeroed) and put back in the Global Hot Spare pool.

The following is the sequence of steps that takes place on a Clariion subsystem related to Proactive Hot Spares (supported on CX300, CX500, CX700, CX3 and CX4). Proactive Hot Spares use the same drives that are configured as Global Hot Spares.

  1. Threshold of errors on Disk Drive: A drive starts taking errors; once the number and type of errors surpass a threshold, the Flare code marks the drive as a potential candidate for failure.
  2. Proactive Hot Spare invoked: Based on the potential candidate's drive type, size and bus location, a Global Hot Spare is identified and the data rebuild process is kicked off.
  3. Potential candidate fails: Once the Proactive Hot Spare is synced, the Flare code fails the identified potential candidate.
  4. Failed drive replacement: The failed drive is replaced by a Customer Engineer.
  5. Copy Back: The data is copied back from the proactive hot spare to the newly inserted drive. The rebuild or equalize happens in sequential LBA (Logical Block Address) order.
  6. Return Proactive Hot Spare: Once the new drive is synchronized, the hot spare is invalidated (zeroed) and put back in the Global Hot Spare pool.

The Global Hot Spares Selection Criteria:

The following criteria govern the selection (invocation) of a Global Hot Spare when a potential proactive candidate is identified or a disk drive fails. They are applied in the order listed below: drive type is the first selection, drive size the second and the location of the Global Hot Spare the third. Drive speed (RPM) is not a selection criterion.

  1. Type of Global Hot Spare Drive: As discussed above, Clariion systems use three primary drive types. FC and SATA II drives can be invoked against each other; an ATA drive failure requires an ATA hot spare.
  2. Size of Global Hot Spare: Upon a disk failure, the Flare code examines the Global Hot Spare's size. The size of the failed drive itself is not what drives the selection; the total space of all LUNs bound on the drive is the determining criterion.
  3. Location of Global Hot Spare: Once the above two criteria are met, the location of the Global Hot Spare is considered. A Global Hot Spare on the same bus as the failed drive is the primary selection; if none is available on the same bus, the Global Hot Spare is selected from another bus.

Other Considerations:

  1. RAID Types: With RAID 3 and RAID 5, data on the hot spare is rebuilt using parity. With RAID 6, data on the hot spare is rebuilt using RP (row parity) and/or DP (diagonal parity), depending on the number of failures in the RAID Group. With RAID 1/0 and RAID 1, data on the hot spare is rebuilt from the surviving mirrors.
  2. Copy Times: The time required to copy or rebuild a hot spare depends on the size of the drive, the speed of the drive, the cache available on the drive and on the array, the type of array, the RAID type, and the jobs currently running on the array. Typical rebuild times vary from 30 minutes to 90 minutes, again depending on how busy the storage subsystem is.
  3. Global Hot Spare types: For every 30 drives (two DAEs of drives), consider having one drive as a Global Hot Spare. Also verify that for every drive type (size, speed) in the machine you have at least one configured Global Hot Spare. It is a good idea to have Global Hot Spares on different buses and spread across both Service Processors (a quick CLI check follows this list).
  4. Vault Drives: Vault drives cannot be used as Global Hot Spares. The vault drives are the first five drives [ 0_0_0, 0_0_1, 0_0_2, 0_0_3, 0_0_4 ] on the Clariion system. If a vault drive fails, a Global Hot Spare takes over its position.
  5. Rotational Speed: The rotational speed of the Global Hot Spare is not considered before invoking it, so it can be a good idea to configure Global Hot Spares that run at 15K RPM in the larger sizes.
  6. Mixed Loop Speed: On certain Clariion systems like the CX3, the available loop speeds are 2Gb and/or 4Gb, and a machine can run mixed loop speeds. Loop speed is not considered during hot spare selection, so in those cases it may be wise to have similar hot spares on both the 2Gb and 4Gb loops.
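
To double-check your hot spare coverage against the guidelines above, the configuration can be inspected from the CLI. A minimal sketch, assuming SP IP xxx.xxx.xxx.xxx as a placeholder:

naviseccli -h xxx.xxx.xxx.xxx getrg

(lists all RAID Groups; look for the ones whose type reports as hot spare)

naviseccli -h xxx.xxx.xxx.xxx getdisk -state

(shows the state of every drive, so you can confirm which drives are bound, which are hot spares and which are unbound)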

EMC Symmetrix, 20 Years in the making

July 29th, 2009 1 comment

So next year will mark 20 years of Symmetrix products within EMC; the platform is still regarded as one of the most robust systems out there two decades after its inception. In this blog post, we will go over some facts on Symmetrix products: features, characteristics, Enginuity microcode versions, model numbers, release years, etc.

Also in this blog post you will see links to most of my previous posts about Symmetrix products.

——————————————————————————————————————————

So the journey of Symmetrix systems started with Moshe Yanai (along with his team) joining EMC in the late 80s. A story floating around says the idea of a cache-based disk array was initially pitched to both IBM and HP and was shot down. EMC was predominantly a mainframe memory company back in the late 1980s; the Symmetrix products completely changed EMC's direction within a decade.

Joe Tucci came in at the end of the 90s from Unisys with a big vision: to radically change EMC. Through new acquisitions, new technologies and, above all, the integration of all those technologies, he created today's EMC.

Symmetrix has always been the jewel of EMC. Back in the Moshe days, the engineers were treated royally (I have heard stories about helicopter rides and lavish parties, with a satellite bus waiting outside in case a support call came in). Then came the Data General acquisition in the late 90s, which completely changed the game.

Some people within EMC were against the DG acquisition and didn't see much value in it, yet the Clariion DG backplane is what turned the Symmetrix into the Symmetrix DMX, with Fibre-based drives. Over the past decade, EMC radically changed its position, focusing on acquisitions, support, products, quality, efficiency and usability, and above all transforming itself from a hardware company into an information solutions company with software as an integral growth factor. Acquisitions like Legato, Documentum and RSA kept changing the culture and the growth focus within EMC.

Then came VMware, and it changed the rules of the game; EMC's strategic move to invest in VMware paid off big time. Then came the three-way partnership between VMware, EMC and Cisco to integrate next-generation products: V-Max (Symmetrix), vSphere and UCS were born.

Here we are in 2009, almost at the end of the 20 years since the inception of the Symmetrix, and the name, the product, the Enginuity code, the robust characteristics and EMC's investment have all stayed the course through changing market demands.

——————————————————————————————————————————

Jumping back to the Symmetrix, here are a few articles you might find interesting, covering the various models, machine serial numbers and, importantly, the Enginuity operating environment.

To read about EMC Symmetrix Enginuity Operating Environment

To read about EMC Symmetrix Serial Number naming convention

To read about EMC Symmetrix Models in a previous blog post

To read about various EMC models based on different Platforms

To read about all EMC Clariion models since the Data General Acquisition

——————————————————————————————————————————

Symmetrix Family 1.0

ICDA – Integrated Cache Disk Array

Released 1990 and sold through 1993

24GB total disk space introduced

Wow, I was in elementary school, or maybe middle school, when this first-generation Symmetrix was released…

Symmetrix 4200

——————————————————————————————————————————

Symmetrix Family 2.0

ICDA – Integrated Cache Disk Array

Released 1991 and sold through 1994

36GB total disk space

Mirroring introduced

Symmetrix 4400

——————————————————————————————————————————

Symmetrix Family 2.5

ICDA – Integrated Cache Disk Array

Released 1992 and sold through 1995

RSF (Remote Support Facility) capabilities added

(I actually met a guy about two years ago who was one of the engineers that worked on developing the first RSF capabilities at EMC and was very instrumental in developing the Hopkinton PSE lab)

Symmetrix 4800

——————————————————————————————————————————

Symmetrix Family 3.0, also called Symmetrix 3000 and 5000 Series

Released 1994 and sold through 1997

ICDA: Integrated Cache Disk Array

Includes Mainframe Support (Bus & Tag)

Global Cache introduced

1GB total Cache

NDU (Non-Disruptive Upgrade) of microcode

SRDF introduced

Supports both mainframe and open systems

Enginuity microcode 50xx, 51xx

Symmetrix 3100: Open systems support, half height cabinet, 5.25 inch drives

Symmetrix 5100: Mainframe support, half height cabinet, 5.25 inch drives

Symmetrix 3200: Open Systems support, single cabinet, 5.25 inch drives

Symmetrix 5200: Mainframe support, single cabinet, 5.25 inch drives

Symmetrix 3500: Open Systems support, triple cabinet, 5.25 inch drives

Symmetrix 5500: Mainframe support, triple cabinet, 5.25 inch drives

——————————————————————————————————————————

Symmetrix Family 4.0, also called Symmetrix 3000 and 5000 Series

Released 1997 and sold through 2000

RAID XP introduced

3.5 Inch drive size introduced

On triple cabinet systems 5.25 inch drives used

Supports both mainframe and open systems

TimeFinder, PowerPath, Ultra SCSI support

Enginuity microcode 5265.xx.xx, 5266.xx.xx

Symmetrix 3330: Open Systems Support, half height cabinet, 32 drives, 3.5 inch drives

Symmetrix 5330: Mainframe Support, half height cabinet, 32 drives, 3.5 inch drives

Symmetrix 3430: Open Systems Support, single frame, 96 drives, 3.5 inch drives

Symmetrix 5430: Mainframe Support, single frame, 96 drives, 3.5 inch drives

Symmetrix 3700: Open Systems Support, triple cabinet, 128 drives, 5.25 inch drives

Symmetrix 5700: Mainframe Support, triple cabinet, 128 drives, 5.25 inch drives

To read about EMC Symmetrix Hardware Components

——————————————————————————————————————————

Symmetrix Family 4.8, also called Symmetrix 3000 and 5000 Series

Released 1998 and sold through 2001

Symmetrix Optimizer Introduced

Best hardware so far: fewest outages, fewest problems and fewest failures (not sure if EMC would agree; most customers do)

3.5 inch drives used with all models

Enginuity microcode 5265.xx.xx, 5266.xx.xx, 5267.xx.xx

Symmetrix 3630: Open Systems support, half height cabinet, 32 drives

Symmetrix 5630: Mainframe support, half height cabinet, 32 drives

Symmetrix 3830: Open Systems support, single cabinet, 96 drives

Symmetrix 5830: Mainframe support, single cabinet, 96 drives

Symmetrix 3930: Open Systems support, triple cabinet, 256 drives

Symmetrix 5930: Mainframe support, triple cabinet, 256 drives

Models sold as 3630-18, 3630-36, 3630-50, 5630-18, 5630-36, 5630-50, 3830-36, 3830-50, 3830-73, 5830-36, 5830-50, 5830-73, 3930-36, 3930-50, 3930-73, 5930-36, 5930-50, 5930-73 (the trailing number indicates the capacity, in GB, of the drives installed in the frame)

To read about EMC Symmetrix Hardware Components

——————————————————————————————————————————

Symmetrix Family 5.0, also called Symmetrix 8000 Series

[ 3000 (open systems) + 5000 (mainframe) = 8000 (support for both) ]

Supports open systems, and mainframe through ESCON rather than Bus & Tag

Released 2000 and sold through 2003

181GB Disk introduced

Enginuity microcode 5567.xx.xx, 5568.xx.xx

Symmetrix 8130: Slim cabinet, 48 drives

Symmetrix 8430: Single cabinet, 96 drives

Symmetrix 8730: Triple cabinet, 384 drives

Some models sold as 8430-36, 8430-73, 8430-181 or 8730-36, 8730-73, 8730-181 (the trailing number indicates the capacity, in GB, of the drives installed in the frame)

To read about EMC Symmetrix Hardware Components

——————————————————————————————————————————

Symmetrix Family 5.5 LVD, also called Symmetrix 8000 Series

Released 2001 and sold through 2004

LVD (Low Voltage Differential) drives introduced

146GB LVD drive introduced

Ultra SCSI drives cannot be used with the LVD frame

Mainframe optimized machines introduced

4 Slice directors introduced with ESCON and FICON

FICON introduced

Enginuity microcode 5567.xx.xx, 5568.xx.xx

Symmetrix 8230: Slim cabinet, 48 drives (rebranded 8130, non-LVD frame)

Symmetrix 8530: Single cabinet, 96 drives (rebranded 8430, non-LVD frame)

Symmetrix 8830: Triple cabinet, 384 drives (rebranded 8730, non-LVD frame)

Symmetrix 8230 LVD: LVD frame, slim cabinet, 48 LVD drives

Symmetrix 8530 LVD: LVD frame, single cabinet, 96 LVD drives

Symmetrix 8830 LVD: LVD frame, triple cabinet, 384 LVD drives

Symmetrix z-8530: LVD frame, Single cabinet, 96 drives, optimized for mainframes

Symmetrix z-8830: LVD frame, Triple cabinet, 384 drives, optimized for mainframe

Some models sold as 8530-36, 8530-73, 8530-146, 8530-181 or 8830-36, 8830-73, 8830-146, 8830-181 (the trailing number indicates the capacity, in GB, of the drives installed in the frame)

To read about EMC Symmetrix Hardware Components

——————————————————————————————————————————

Symmetrix DMX, also called Symmetrix Family 6.0

Released Feb 2003 and sold through 2006

Direct Matrix Architecture (Data General Backplane) introduced

DMX800 was the first DMX system introduced

4 Slice directors introduced

RAID 5 added later, after being introduced on the DMX-3

First generation with common DA / FA hardware

Introduction of modular power

Enginuity Microcode 5669.xx.xx, 5670.xx.xx, 5671.xx.xx

Symmetrix DMX800: Single cabinet, DAE-based concept for drives, 96 drives (I swear, a customer told me they have ghost-like issues with their DMX800)

Symmetrix DMX1000: Single cabinet, 18 drives per loop, 144 drives total

Symmetrix DMX1000-P: Single cabinet, 9 drives per loop, 144 drives total, P= Performance System

Symmetrix DMX2000: Dual cabinet, modular power, 18 drives per loop, 288 drives

Symmetrix DMX2000-P: Dual cabinet, modular power, 9 drives per loop, 288 drives, P=Performance System

Symmetrix DMX3000-3: Triple cabinet, modular power, 18 drives per loop, 3 phase power, 576 drives

To read about EMC Symmetrix DMX Hardware components

To read about EMC Symmetrix DMX models and major differences

——————————————————————————————————————————

Symmetrix DMX2, also called Symmetrix Family 6.5

Released Feb 2004 and sold through 2007

Double the processing power of the DMX

DMX and DMX2 frames are the same; only the directors need to be changed to upgrade a DMX to a DMX2, and the upgrade requires a reboot of the entire system

RAID 5 added later, after being introduced on the DMX-3

64GB memory introduced

4 Slice Directors

Enginuity Microcode 5669.xx.xx, 5670.xx.xx, 5671.xx.xx

Symmetrix DMX801: 2nd generation DMX, single cabinet, DAE-based concept for drives, 96 drives, FC SPE 2

Symmetrix DMX1000-M2: 2nd generation DMX, Single cabinet, 18 drives per loop, 144 drives

Symmetrix DMX1000-P2: 2nd generation DMX, Single cabinet, 9 drives per loop, 144 drives, P=Performance System

Symmetrix DMX2000-M2: 2nd generation DMX, Dual cabinet, 18 drives per loop, 288 drives

Symmetrix DMX2000-P2: 2nd generation DMX, Dual cabinet, 9 drives per loop, 288 drives, P=Performance System

Symmetrix DMX2000-M2-3: 2nd generation DMX, Dual cabinet, 18 drives per loop, 288 drives, 3 Phase power

Symmetrix DMX2000-P2-3: 2nd generation DMX, Dual cabinet, 9 drives per loop, 288 drives, P=Performance System, 3 Phase power

Symmetrix DMX3000-M2-3: 2nd generation DMX, Triple cabinet, 18 drives per loop, 576 drives, 3 Phase power

To read about EMC DMX Symmetrix Hardware components

To read about EMC Symmetrix DMX models and major differences

——————————————————————————————————————————

Symmetrix DMX-3, also called Symmetrix 7.0

Released July 2005 and still being sold

8 Slice directors

1920 drives (RPQ’ed to 2400 drives)

DAE based concept introduced

Symmetrix Priority Controls

RAID 5 introduced here and then implemented on the older DMX and DMX2

Virtual LUN technology

SRDF enhancements

Concept of vaulting introduced

Enginuity microcode 5771.xx.xx, 5772.xx.xx

Symmetrix DMX-3 950: System Cabinet, Storage Bay x 2, 360 drives max, Modular Power, 3 Phase power

Symmetrix DMX-3: System Cabinet, Storage Bay x 8 (Expandable), 1920 drives max, RPQ’ed to 2400 drives, 3 Phase power

To read about differences between EMC Symmetrix DMX3 and DMX4 platforms

——————————————————————————————————————————

Symmetrix DMX-4, also called Symmetrix 7.0

Released July 2007 and still being sold

Virtual provisioning

Flash Drives

FC / SATA drives

RAID 6 introduced

SRDF enhancements

Total Cache: 512 GB

Total Storage: 1 PB

Largest drive supported: 1TB SATA

Flash drives: 73GB and 146GB initially; support for 200GB and 400GB released later

1920 drives max (RPQ’ed to 2400 drives)

Enginuity microcode 5772.xx.xx, 5773.xx.xx

Symmetrix DMX-4 950: System Cabinet, Storage Bay x 2, 360 drives max, Modular Power, 3 Phase power

Symmetrix DMX-4: System Cabinet, Storage Bay x 8 (Expandable), 1920 drives max, RPQ’ed to 2400 drives, Modular power, 3 Phase Power

Some models sold as DMX-4 1500, DMX-4 2500, DMX-4 3500 and DMX-4 4500

To read about a blog post on EMC Symmetrix: DMX4 Components

To read about differences between EMC Symmetrix DMX3 and DMX4 platforms

To read about different drives types supported on EMC Symmetrix DMX4 Platform

To read about differences between EMC Symmetrix DMX4 and V-Max Systems

——————————————————————————————————————————

Symmetrix V-Max

Released April 2009

Enginuity Microcode 5874.xxx.xxx

Total number of drives supported: 2400

Total Cache: 1 TB mirrored (512GB usable)

Total Storage: 2 PB

All the features of the V-Max have been discussed earlier; see the blog posts linked below

Symmetrix V-Max SE: Single system bay, SE = Single Engine, Storage Bay x 2, 360 drives max, cannot be expanded to a full-blown 8-engine system if purchased as an SE, 3 phase power, modular power

Symmetrix V-Max: System Cabinet, Storage Bay x 10, 2400 drives max, modular power, 3 phase power

To read about differences between EMC Symmetrix DMX4 and V-Max Systems

To read about different drives types supported on EMC Symmetrix V-Max Platforms

To read all about the EMC Symmetrix V-Max Platform

——————————————————————————————————————————

I could have easily added total memory capacity per frame, total number of dedicated DA/DAF slots, total slots, total universal slots and total memory slots, but I didn't have that information for some of the old systems and didn't want to be incorrect on them.

I hope you have enjoyed reading this post and its bit of history on the Symmetrix platform. I am pretty positive that, as of today, you will not find this consolidated information on any blog or on the manufacturer's website.

I really wish EMC would open up blogging to some of the Symmetrix, Clariion, Celerra and Centera specialists that support these systems on a day-to-day basis; the information that could come out of those guys would be phenomenal. Barry Burke writes a lot of stuff, but with a lot of FUD against IBM and HDS; it's great reading him, but only a controlled amount of technical information comes from him.

——————————————————————————————————————————

EMC Clariion RAID-6 requirements and limitations

July 15th, 2009 7 comments

Here are some requirements and limitations related to using the RAID-6 technology on the EMC Clariion platforms.

  • RAID-6 is only supported with Flare Release 26 and above on Clariion systems.
  • Flare 26 only works on the EMC Clariion CX300, CX500, CX700, all CX3-xx platforms and all CX4-xxx platforms.
  • Any system running below Flare Release 26 (for example Release 13, 16, 19 or 24) cannot run RAID-6 (Clariion systems like the CX200, CX400 and CX600).
  • RAID-6 on a Clariion requires 2, 4, 6, 8 or 14 data disks plus 2 parity disks (a typical configuration looks like 2D+2P, 4D+2P, 6D+2P, 8D+2P or 14D+2P, where D = data disk and P = parity disk).
  • Consequently, you need an even number of disk drives in any RAID Group you configure as RAID-6.
  • RAID-6 is supported on EFD (Enterprise Flash Drive), Fibre Channel (FC), ATA and SATA drives on EMC Clariion systems.
  • A RAID-6 RAID Group (RAID set) can be implemented within a single enclosure or expanded beyond one.
  • RAID-6 can co-exist in the same DAE (disk array enclosure) as a RAID-5 and/or RAID-1/0 and/or other RAID types.
  • RAID-6 supports global hot sparing like other RAID technologies.
  • Supports MetaLUN expansion, through concatenated or striped expansion, only if all the meta member LUNs are RAID-6 LUNs.
  • RAID-6 configuration is possible through Navisphere and naviseccli only.
  • With RAID-6, the traditionally supported CLI interfaces like Java CLI and Classic CLI have been retired.
  • Defragmentation of RAID-6 RAID Groups is not currently supported on Flare Release 26.
  • You cannot add new drives to an existing RAID-6 RAID Group, but you can expand a LUN through RAID-6 MetaLUN technology. For example, if you have a 6D+2P RAID-6 set and would like to add 16 more drives to the same RAID Group, you cannot; but if you create either two sets of 6D+2P or one set of 14D+2P and then do a MetaLUN concatenate, you can achieve essentially the same end result (see the CLI sketch after this list).
  • You can have Clariion systems with various RAID Group technologies in the same global domain, but again, from a management perspective, certain traditional CLI interfaces will not work with RAID-6.
  • Using the Virtual LUN technology in Flare Release 26, customers can now migrate existing LUNs (RAID-5, RAID-1/0) to RAID-6. The new RAID-6 LUN assumes the exact identity of the previous LUN, making the migration process much easier.
  • Traditional replication and copy software like SANCopy, SnapView, MirrorView and RecoverPoint is fully supported with RAID-6.
  • Never use RAID-6 with a mix of EFD, FC, ATA and SATA drives in the same RAID Group.
  • Never use RAID-6 with a mix of drive speeds (15K, 10K, 7.2K RPM); the drive speeds in a RAID Group should be exactly the same.
  • Oh, and the most important note: two drive failures in the same RAID Group with no data loss or data unavailability (DU/DL), which makes this a very robust RAID technology. There is some performance overhead with RAID-6 on small and random writes, since the Clariion pays an added penalty calculating row parity and diagonal parity.
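
To make the configuration and migration notes above concrete, here is a minimal CLI sketch. The SP IP address, drive positions, RAID Group ID and LUN numbers are all placeholders, and the exact switches can vary by Flare release, so treat it as illustrative rather than definitive.

naviseccli -h xxx.xxx.xxx.xxx createrg 10 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4 1_0_5 1_0_6 1_0_7

(creates an eight-disk RAID Group for a 6D+2P layout)

naviseccli -h xxx.xxx.xxx.xxx bind r6 50 -rg 10

(binds LUN 50 as RAID-6 on that RAID Group; the r6 type requires Flare Release 26 or above)

naviseccli -h xxx.xxx.xxx.xxx metalun -expand -base 50 -lus 51 -type C

(concatenates another RAID-6 LUN, 51, onto base LUN 50 as a MetaLUN, the expansion path mentioned above)

naviseccli -h xxx.xxx.xxx.xxx migrate -start -source 5 -dest 50 -rate low

(Virtual LUN migration of existing LUN 5 onto the new RAID-6 LUN 50; the destination assumes the source LUN's identity once the migration completes)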

If you would like to see any further post on RAID-6 workings on Clariion Platforms, please feel free to leave a comment.

To read about other RAID-6 implementations with various platforms, please see below.

EMC Symmetrix RAID 6

SUN StorageTek’s RAID 6

HP’s RAID 6

NetApp’s RAID-DP

Hitachi’s (HDS) RAID 6

Different RAID Technologies (Detailed)

Different RAID Types

Clariion SPCollects for CX, CX3, CX4


The following is the procedure for collecting SPCollects on Clariion CX, CX3 and CX4 machines.

If you are running release 13 or above, you can perform SPCollects from the GUI of the Navisphere Manager software.

Using Navisphere, perform the following steps to collect the SPCollects and transfer them to your local drive.

  1. Log in to Navisphere Manager
  2. Identify the serial number of the array you want to perform the SPCollects on
  3. Go to SP A using expand (+)
  4. Right-click on it and select SPCollects from the menu
  5. Now go to SP B in the same array
  6. Right-click on it and select SPCollects from the menu
  7. Wait 5 to 10 minutes, depending on how big and how busy your array is
  8. Now right-click on SP A and select File Manager from the menu
  9. In the window, select the zip file SerialNumberOfClariion_spa_Date_Time_*.zip
  10. In the window, hit the transfer button to transfer the file to your local computer
  11. Follow the same process (steps 8, 9 and 10) for SP B from the File Manager
  12. The SP B file name will be SerialNumberOfClariion_spb_Date_Time_*.zip

For customers that do not have SPCollects in the menu (running releases below 13), there is a manual way to perform SPCollects using navicli from your management console or an attached host system.

To gather SPCollects from SP A, run the following commands

navicli -h xxx.xxx.xxx.xxx spcollect -messner

Wait for 5 to 7 minutes

navicli -h xxx.xxx.xxx.xxx managefiles -list

The name of the SPCollects file will be SerialNumberOfClariion_spa_Date_Time_*.zip

navicli -h xxx.xxx.xxx.xxx managefiles -retrieve

where xxx.xxx.xxx.xxx is the IP address of SP A

For SP B, follow the same process; the file you will be looking for is SerialNumberOfClariion_spb_Date_Time_*.zip, and xxx.xxx.xxx.xxx will be the IP address of SP B.

The SPCollects information is very important when troubleshooting the disk array; it gives the support engineer all the necessary vital data about the storage array and its environment.

The following data is collected by SPCollects from both SPs:

Ktdump Log files

iSCSI data

FBI data (used to troubleshoot backend issues)

Array data (SP log files, migration info, Flare code, sniffer, memory, host-side data, Flare debug data, metalun data, PROM data, drive metadata, etc.)

PSM data

RTP data (mirrors, snaps, clones, SANCopy, etc.)

Event data (Windows security, application and system event files)

LCC data

Nav data (Navisphere related data)

To read previous blog post on Clariion Technology, please see the following links

Clariion Cache: Page Size

Clariion Cache: Read and Write Caching

Clariion Cache: Navicli Commands

Clariion Cache: Idle, Watermark and Forced Flushing

Clariion Basics: DAE, DPE, SPE, Drives, Naming Convention and Backend Architecture

Clariion Flare Code Operating Environment

Or

Tag Clariion on StorageNerve