Archive for the ‘Gestalt IT’ Category

EMC Symmetrix: Permanent Sparing

July 21st, 2009 1 comment

There are two sparing strategies available on the EMC Symmetrix series of machines.

Dynamic Hot Sparing:
Starting with Symmetrix 4.0, EMC introduced dynamic hot spares in its Enginuity code to protect customers against failing disk drives and reduce the probability of data loss. Hot Sparing has been available on every Symmetrix version since. Today, Dynamic Sparing is available on Symmetrix 4.0, Symmetrix 4.8, Symmetrix 5.0, Symmetrix 5.5, DMX, DMX2, DMX3, and DMX4 systems.

Permanent Spares:
Permanent Sparing was introduced with the Symmetrix DMX3 products and is now available on DMX4 and V-Max systems. I believe Enginuity code 5772 first supported Permanent Spares, guarding customers against failing disk drives and further reducing performance, redundancy and processing degradation on Symmetrix systems, with capabilities that were not available with Dynamic Hot Sparing.

Highlights of Permanent Sparing

Due to design, performance and redundancy limitations, and Symmetrix mirror positions, dynamic hot spares were becoming a bottleneck for customer internal job processing. For example, syncing a failed 1TB SATA drive to a dynamic spare might take anywhere from 8 to 48 hours, while the follow-up process of removing the dynamic spare and equalizing the replaced drive might take just as long. During this time the machine is more or less in lockdown (operational but not configurable).
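
To put the 8-to-48-hour figure in perspective, the sync time is essentially drive capacity divided by the sustained background copy rate. A minimal sketch (the copy rate and the function name are illustrative assumptions, not EMC specifications):

```python
# Illustrative estimate of spare-sync time; the copy rate is an assumed
# figure, not an EMC specification.
def sync_hours(capacity_gb, copy_rate_mb_s):
    """Hours needed to copy a full drive at a sustained copy rate."""
    return capacity_gb * 1024 / copy_rate_mb_s / 3600

# A 1TB SATA drive at a background copy rate throttled to ~10 MB/s
# lands well inside the 8-48 hour window described above:
print(sync_hours(1000, 10))  # ~28.4 hours
```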

Due to these limitations, the concept of Permanent Spares was introduced on EMC Symmetrix systems to fill some of the gaps in the Dynamic Hot Sparing technology. The following are the criteria for Permanent Spares.

Some important things to consider with Permanent Spares

  1. Permanent Spares are supported through microcode (Enginuity) versions starting with the DMX-3 (5772 onwards) into the latest generation Symmetrix V-Max systems.
  2. The customer needs to identify and set up the devices for Permanent Spares using Solutions Enabler, or an EMC CE should perform a BIN file change on the machine to enable Permanent Spares and the associated devices.
  3. When a Permanent Spare kicks in for a failing / failed drive, a BIN file change is performed locally within the machine using the unattended SIL. Any configuration lock or a non-functional Service Processor will kill the process before it is initiated; in that case the Permanent Spare will not be invoked and a Dynamic Hot Spare will be invoked instead.
  4. An EMC CE does not need to attend the site right away to replace the drive, since the Permanent Spare has been invoked and all the data is protected. All failed drives for which Permanent Spares have been invoked can be replaced in a batch. When a failed drive is replaced, the replacement becomes a Permanent Spare and goes into the Permanent Spares pool.
  5. Configuration of Permanent Spares is initiated through a BIN file change; during this process, the CE or the customer will be required to consider the Permanent Spares rules related to performance and redundancy.
  6. If a Permanent Spare cannot be invoked for any reason related to performance or redundancy, a Dynamic Hot Spare will be invoked against the failing / failed device.
  7. The Permanent Spare takes on all the original characteristics of the failed disk (device flags, meta configs, hyper sizes, mirror positions, etc.) as it gets invoked.
  8. The rule of thumb with Permanent Spares is to verify that the machine has permanent spare drives of the required type / size / speed / capacity / block size configured.
  9. A single Symmetrix frame can have both Permanent Spares and Dynamic Hot Spares configured.
  10. While a Permanent Spare or Dynamic Hot Spare is sitting in the machine waiting for a failure, these devices are not accessible from the front end (customer). The folks back at the PSE labs will still be able to interact with these devices and invoke one for you in case of a failure, as a proactive measure, or if the automatic invoke fails for any reason.
  11. Permanent Spares can be invoked against Vault drives, if a permanent spare drive is available on the same DA where the failure occurred.
  12. Permanent Spares can be configured with EFDs. I believe for every 2 DAEs (30+ drives) you have to configure one hot spare EFD (permanent spare).
  13. Permanent Spares support RAID types RAID 1, RAID 10, RAID 5 and RAID 6, and all configurations within.
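
The selection rules above (a Permanent Spare must match the failed drive's characteristics per rule 8, otherwise a Dynamic Hot Spare is invoked per rule 6) can be sketched roughly as follows. This is hypothetical pseudologic for illustration only, not actual Enginuity code; all names are made up:

```python
# Hypothetical sketch of the spare-selection rules described above -- not
# actual Enginuity logic. All class and function names are made up.
from dataclasses import dataclass

@dataclass
class Drive:
    drive_type: str          # e.g. "SATA", "FC", "EFD"
    speed_rpm: int
    capacity_gb: int
    is_permanent_spare: bool = False

def select_spare(failed: Drive, spare_pool: list) -> str:
    for spare in spare_pool:
        if (spare.is_permanent_spare
                and spare.drive_type == failed.drive_type
                and spare.speed_rpm == failed.speed_rpm
                and spare.capacity_gb >= failed.capacity_gb):
            # The permanent spare assumes the failed drive's identity (rule 7)
            return "permanent"
    # No eligible permanent spare: fall back to a dynamic hot spare (rule 6)
    return "dynamic"
```

For example, a failed 1TB SATA drive with a matching SATA permanent spare in the pool would invoke the permanent spare; with only an FC spare available, the sketch falls back to a dynamic hot spare.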

Some important Benefits of Permanent Sparing

  1. Additional protection against data loss
  2. Permanent sparing requires the data to be copied only once, whereas dynamic sparing requires two copies (once to the spare, then back to the replacement drive).
  3. Permanent sparing resolves the problem of mirror positions.
  4. Permanent Spare (failed) drives can be replaced in batches and do not require immediate replacement.
  5. Permanent spares do not put a configuration lock on the machine, while an invoked dynamic spare will put a configuration lock until replaced.
  6. Permanent spares obey the rules of performance and redundancy while Dynamic hot sparing does not.
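
Benefit 2 is simple arithmetic: a dynamic spare copies the data twice (onto the spare, then back onto the replacement drive), while a permanent spare copies it once and stays in place as the new member. A tiny illustration (the function name is made up):

```python
# Illustrative comparison of total data movement per failed drive.
# Function name is made up; the pass counts come from benefit 2 above.
def gb_copied(capacity_gb, strategy):
    # Dynamic: copy to spare, then copy back to the replacement (2 passes).
    # Permanent: copy to spare once; the spare becomes the member (1 pass).
    passes = {"dynamic": 2, "permanent": 1}[strategy]
    return capacity_gb * passes

print(gb_copied(1000, "dynamic"))    # 2000 GB moved in total
print(gb_copied(1000, "permanent"))  # 1000 GB moved in total
```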

Sparing is now a requirement on all newly configured systems. I hope this provides some insight into configuring your next EMC Symmetrix on the floor.

EMC AX4 Platform

July 16th, 2009 10 comments

On a few reader requests, here is a blog post on the EMC AX4 technology.

Previously, in one of the posts on StorageNerve, I explained the evolution of the EMC Clariion technology, including the AX products and the Flare code associated with the success of this platform.

The following are the 4 available models within the AX4 platform; the naming conventions explain them further.

AX4-5F (Fiber Channel Host Connect)

AX4-5FSC (Fiber Channel Host Connect, Single RAID Controller)

AX4-5i (iSCSI Host Connect)

AX4-5iSC (iSCSI Host Connect, Single RAID Controller)
Drives per DAE (Drive Array Enclosure): 12

Maximum DAEs (Drive Array Enclosures) supported per system: 4

DPE (Disk Processor Enclosure): 12 drives per DPE

Maximum drives supported per system: 60 (DPE plus 4 DAEs)

Minimum drives supported per system: 4 drives, including the Flare drives (the Flare code resides on the first 4 drives of the DPE, which carry "do not remove" stickers).

Naming Convention for Drives

Drive locators follow the pattern B#_E#_D#, with the following possible options:

Bus: 0 and 1

Enclosure: 1, 2, 3, 4 (the DPE becomes enclosure 0)

Drive: 0 through 11 (12 drives total)

B0_E3_D11 becomes Bus 0, Enclosure 3 and Drive 11 (the 12th drive on that enclosure).
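
The locator convention above can be parsed mechanically. A small sketch, assuming the B&lt;bus&gt;_E&lt;enclosure&gt;_D&lt;drive&gt; pattern holds (the function name is made up):

```python
import re

# Parses Clariion-style drive locators such as "B0_E3_D11", assuming the
# B<bus>_E<enclosure>_D<drive> convention described above. Illustrative only.
def parse_drive_name(name: str) -> dict:
    m = re.fullmatch(r"B(\d+)_E(\d+)_D(\d+)", name)
    if not m:
        raise ValueError(f"not a valid drive locator: {name!r}")
    bus, enclosure, drive = map(int, m.groups())
    if drive > 11:  # 12 drives per enclosure, numbered 0 through 11
        raise ValueError("drive number must be 0-11")
    return {"bus": bus, "enclosure": enclosure, "drive": drive}

print(parse_drive_name("B0_E3_D11"))  # {'bus': 0, 'enclosure': 3, 'drive': 11}
```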

The following disk drives are supported with the AX4 platforms:

1TB, 7.2K RPM, SATA

750GB, 7.2K RPM, SATA

450GB, 15K RPM, SAS

400GB, 15K RPM, SAS

400GB, 10K RPM, SAS

300GB, 15K RPM, SAS

146GB, 15K RPM, SAS

As you will notice above, only SAS and SATA drives are usable on an AX4 system; there is no support for FC drives or EFDs.

Supported RAID Types with AX4 Systems

RAID 0, RAID 0+1, RAID 3 and RAID 5 Technologies.

CORRECTION: RAID 6 is now supported on AX4 platforms starting with release 23 (Navisphere Express)

Supported Software on the AX4 Systems

Navisphere Express (included)

Navisphere Manager (only with 2 Storage Processors)

Navisphere QoS

Navisphere Analyzer

MirrorView/S and /A

Ionix ControlCenter plug-in

RecoverPoint/SE and CDP/CRR

Replication Manager

Host Type Supported

Windows 2000/2003/2008

Some very important characteristics of the AX4 Platforms

Very cost effective

Competes in the SMB space with low-end IBM and HP storage

Scales up to 60TB of raw storage

Supports up to 64 connected host systems

1GB of cache per Storage Processor, 2GB max with dual processors.

4Gb/s FC connectivity to hosts (fiber ports to switch / host)

You can run it with a single controller or dual controllers

iSCSI Support

SAS and SATA drives are supported

All Clariion Software supported

SPS (single included): Standby Power Supply – Battery allows the cache to destage during a forced shutdown resulting in no data loss

MetaLUN Technology supported

Virtual LUN Technology supported

Snaps and Clones supported

Supports Flare Releases 23, 26 and 28.

EMC Clariion RAID-6 requirements and limitations

July 15th, 2009 7 comments

Here are some requirements and limitations related to using the RAID-6 technology on the EMC Clariion platforms.

  • RAID-6 is only supported with Flare Release 26 and above on Clariion systems.
  • Flare 26 only works on the EMC Clariion CX300, CX500, CX700, all CX3-xx platforms, and all CX4-xxx platforms.
  • Any system running below Flare Release 26 (for example Release 13, 16, 19 or 24) cannot run RAID-6 (Clariion systems like the CX200, CX400 and CX600).
  • The minimum number of disks required to support RAID-6 on Clariion systems is 2, 4, 6, 8 or 14 data disks plus 2 parity disks (a typical configuration would look like 2D+2P, 4D+2P, 6D+2P, 8D+2P or 14D+2P, where D = data disk and P = parity disk).
  • To configure RAID-6, you will need an even number of disk drives in the RAID group you are configuring.
  • RAID-6 is supported on EFD (Enterprise Flash Disk), Fibre Channel (FC), ATA or SATA drives on EMC Clariion systems.
  • A RAID-6 RAID group (RAID set) can be implemented within an enclosure or expanded beyond a single enclosure.
  • RAID-6 can co-exist in the same DAE (disk array enclosure) as a RAID-5 and/or RAID-1/0 and/or other RAID types.
  • RAID-6 supports global hot sparing like other RAID technologies.
  • Supports MetaLUN expansion through concatenated or striped expansion only if all the meta member LUNs are RAID-6 devices (LUNs).
  • RAID-6 configuration is possible through Navisphere and naviseccli only.
  • With RAID-6, traditionally supported CLI interfaces like Java CLI and Classic CLI have been retired.
  • Defragmentation with RAID-6 is currently not supported on Flare Release 26.
  • You cannot add new drives to an existing RAID-6 LUN, but you can expand the LUN through RAID-6 MetaLUN technology. For example, if you have a 6D+2P RAID-6 set and would like to add 16 more drives to the same RAID group, you cannot do so directly; but if you create either 2 sets of 6D+2P or 1 set of 14D+2P and then run a MetaLUN concatenate, you can achieve the same end result.
  • You can have Clariion systems with various different RAID group technologies in the same global domain, but again from a management perspective certain traditional CLI interfaces will not work with RAID-6.
  • Using the Virtual LUN technology with Flare Release 26, customers can now migrate various LUNs (RAID-5, RAID-1/0) to RAID-6. The technology allows the new RAID-6 LUN to assume the exact identity of the previous LUN, making the migration process much easier.
  • Traditional replication and copy software like SANCopy, SnapView, MirrorView, and RecoverPoint are all supported with RAID-6 technology.
  • Never use RAID-6 technology with a mix of EFD, FC, ATA and SATA drives in the same RAID Group.
  • Never use RAID-6 technology with a mix of drive speeds like 15K, 10K or 7.2K RPM in the same RAID group; drive speeds should be exactly the same.
  • And the most important note: RAID-6 can sustain 2 drive failures in the same RAID group with no data loss or data unavailability (DU / DL), making this a very robust RAID technology. There is some performance overhead when using RAID-6 with small and random writes, since there is an added penalty for the row parity and diagonal parity calculations on the Clariion.
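
Several of the rules above (even drive count, the allowed data widths, no mixing of drive types or speeds in one group) are mechanical enough to sketch as a validation check. This is an illustration only, not Navisphere or naviseccli logic; all names are made up:

```python
# Illustrative check of the RAID-6 group rules listed above -- not actual
# Navisphere / naviseccli logic. All names are made up.
VALID_DATA_WIDTHS = {2, 4, 6, 8, 14}   # allowed data-disk counts; always +2 parity

def validate_raid6_group(drives):
    """drives: list of (drive_type, speed_rpm) tuples for the proposed group."""
    n = len(drives)
    if n % 2 != 0 or (n - 2) not in VALID_DATA_WIDTHS:
        return False, "group must be 2/4/6/8/14 data disks plus 2 parity disks"
    if len({t for t, _ in drives}) > 1:
        return False, "never mix EFD/FC/ATA/SATA drives in one RAID-6 group"
    if len({s for _, s in drives}) > 1:
        return False, "all drives in the group must have the same speed"
    return True, "ok"

# An 8-drive all-FC 15K group is a valid 6D+2P configuration:
print(validate_raid6_group([("FC", 15000)] * 8))  # (True, 'ok')
```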

If you would like to see any further post on RAID-6 workings on Clariion Platforms, please feel free to leave a comment.

To read about other RAID-6 implementations with various platforms, please see below.

EMC Symmetrix RAID 6

SUN StorageTek’s RAID 6


NetApp’s RAID–DP

Hitachi’s (HDS) RAID 6

Different RAID Technologies (Detailed)

Different RAID Types

EMC Symmetrix V-Max Systems: FAST & VIRTUAL

A must-read blog post on FAST (Fully Automated Storage Tiering) and the V-Max technology at Gestalt IT. The post focuses on EMC's current marketing efforts around the V-Max technology, the current state of the technology, and the vision behind it.