Archive for the ‘Storage’ Category

EMC Clariion RAID-6 requirements and limitations

July 15th, 2009

Here are some requirements and limitations related to using the RAID-6 technology on the EMC Clariion platforms.

  • RAID-6 is only supported with Flare Release 26 and above on Clariion systems.
  • Flare 26 runs only on the EMC Clariion CX300, CX500, CX700 and all CX3-xx and CX4-xxx platforms.
  • Systems running releases below Flare 26 (for example Release 13, 16, 19, or 24), such as the Clariion CX200, CX400 and CX600, cannot run RAID-6.
  • A RAID-6 group on a Clariion requires 2, 4, 6, 8, or 14 data disks plus 2 parity disks. Typical configurations look like 2D+2P, 4D+2P, 6D+2P, 8D+2P, or 14D+2P, where D = data disk and P = parity disk (see the capacity sketch after this list).
  • Configuring RAID-6 requires an even number of disk drives in the RAID group.
  • RAID-6 is supported on EFD (Enterprise Flash Disk), Fibre Channel (FC), ATA, and SATA drives on EMC Clariion systems.
  • A RAID-6 RAID group (RAID set) can be contained within a single enclosure or span multiple enclosures.
  • RAID-6 can co-exist in the same DAE (disk array enclosure) with RAID-5, RAID-1/0, and other RAID types.
  • RAID-6 supports global hot sparing like other RAID technologies.
  • MetaLUN expansion is supported through concatenated or striped expansion, but only if all meta member LUNs are RAID-6 devices.
  • RAID-6 can be configured only through Navisphere and naviseccli.
  • Traditional CLI interfaces such as Java CLI and Classic CLI have been retired and do not support RAID-6.
  • Defragmentation of RAID-6 groups is currently not supported on Flare Release 26.
  • You cannot add new drives to an existing RAID-6 RAID group, but you can expand a LUN through RAID-6 MetaLUN technology. For example, if you have a 6D+2P RAID-6 set and want to add 16 more drives to the same RAID group, you cannot; but if you create either two 6D+2P sets or one 14D+2P set and then run a MetaLUN concatenation, you can achieve the same end result.
  • You can have Clariion systems with different RAID group technologies in the same global domain, but again, from a management perspective, certain traditional CLI interfaces will not work with RAID-6.
  • Using Virtual LUN technology with Flare Release 26, customers can now migrate LUNs of other types (RAID-5, RAID-1/0) to RAID-6. The technology lets the new RAID-6 LUN assume the exact identity of the previous LUN, making the migration process much easier.
  • Traditional replication and copy software such as SANCopy, SnapView, MirrorView, and RecoverPoint is supported with RAID-6.
  • Never mix EFD, FC, ATA, and SATA drives in the same RAID-6 RAID group.
  • Never mix drive speeds (such as 15K, 10K, or 7.2K RPM) in a RAID-6 RAID group; all drives must be exactly the same speed.
  • And the most important note: a RAID-6 group survives 2 drive failures in the same RAID group with no data loss or data unavailability (DU/DL), making this a very robust RAID technology. There is some performance overhead for small, random writes, since the Clariion must calculate both row parity and diagonal parity.
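To make the capacity math concrete, here is a small Python sketch (my own illustration, not an EMC tool) that checks a proposed layout against the rules above and computes usable capacity and parity overhead:

```python
# Illustrative helper for the Clariion RAID-6 rules above (not an EMC tool).
VALID_DATA_DISKS = {2, 4, 6, 8, 14}  # valid data-disk counts, each paired with 2 parity disks

def raid6_usable_gb(total_disks: int, disk_size_gb: float) -> float:
    """Validate a RAID-6 group layout and return its usable capacity in GB."""
    if total_disks % 2 != 0:
        raise ValueError("RAID-6 requires an even number of drives")
    data_disks = total_disks - 2          # 2 drives' worth of capacity goes to parity
    if data_disks not in VALID_DATA_DISKS:
        raise ValueError(f"{data_disks}D+2P is not a supported Clariion RAID-6 layout")
    return data_disks * disk_size_gb

# Example: a 6D+2P group of 146 GB drives
usable = raid6_usable_gb(8, 146.0)
overhead = 1 - usable / (8 * 146.0)
print(f"usable: {usable:.0f} GB, parity overhead: {overhead:.0%}")  # usable: 876 GB, parity overhead: 25%
```

Note how the parity overhead shrinks as the group grows: a 14D+2P group gives up only 2 of 16 drives (12.5%) to parity, while a 2D+2P group gives up half.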

If you would like to see further posts on the workings of RAID-6 on Clariion platforms, please feel free to leave a comment.

To read about other RAID-6 implementations on various platforms, please see below.

EMC Symmetrix RAID 6

SUN StorageTek’s RAID 6

HP’s RAID 6

NetApp’s RAID-DP

Hitachi’s (HDS) RAID 6

Different RAID Technologies (Detailed)

Different RAID Types

EMC Symmetrix DMX-4 and Symmetrix V-Max: Basic Differences

June 30th, 2009


In this post we will cover some important aspects, properties, characteristics, and differences of the EMC Symmetrix DMX-4 and EMC Symmetrix V-Max. It seems like a lot of users are searching for blog posts with this information.

From a high level, I have tried to cover the differences in performance and architecture related to the directors, engines, cache, drives, etc.

It might also be a good idea to run both the DMX-4 and V-Max systems through IOmeter to collect some basic comparisons of front-end and back-end / cache performance data.

Anyway, enjoy this post, and look for more related data in future posts.

EMC Symmetrix DMX-4 | EMC Symmetrix V-Max
Called EMC Symmetrix DMX-4 | Called EMC Symmetrix V-Max
DMX: Direct Matrix Architecture | V-Max: Virtual Matrix Architecture
Max capacity: 1 PB raw storage | Max capacity: 2 PB usable storage
Max drives: 1900 (2400 on RPQ) | Max drives: 2400
EFDs supported | EFDs supported
Symmetrix Management Console 6.0 | Symmetrix Management Console 7.0
Solutions Enabler 6.0 | Solutions Enabler 7.0
EFDs: 73GB, 146GB, 200GB, 400GB | EFDs: 200GB, 400GB
FC drives: 73GB, 146GB, 300GB, 400GB, 450GB | FC drives: 73GB, 146GB, 300GB, 400GB
SATA II drives: 500GB, 1000GB | SATA II drives: 1000GB
FC drive speeds: 10K or 15K RPM | FC drive speed: 15K RPM
SATA II drive speed: 7.2K RPM | SATA II drive speed: 7.2K RPM
Predecessor: DMX-3 | Predecessor: DMX-4
Management somewhat easier than previous-generation Symmetrix | Easier management with SMC 7.0, the so-called "ECC Lite"
4 ports per director | 8 ports per director
No engine-based concept | Engine-based concept
24 slots | The concept of slots is gone
1 system bay, 9 storage bays | 1 system bay, 10 storage bays
No engines | 8 engines in one system (serial number)
64 Fibre Channel ports total across all directors for host connectivity | 128 Fibre Channel ports total across directors/engines for host connectivity
32 FICON ports for host connectivity | 64 FICON ports for host connectivity
32 GbE iSCSI ports | 64 GbE iSCSI ports
Total cache: 512GB, with 256GB usable (mirrored) | Total cache: 1024GB, with 512GB usable (mirrored)
Drive interface speed 2Gb/s or 4Gb/s; drives auto-negotiate speed | Drive interface speed 4Gb/s only
Green drive LED means 2Gb/s loop speed; blue drive LED means 4Gb/s loop speed | Only 4Gb/s loop speed supported
512-byte drive format | 520-byte drive format: 8 bytes store data-integrity information per the T10-DIF standard proposal (remember the Clariion also uses 520-byte formatting, but the data stored in those 8 bytes differs); see the sketch after this table
FAST (Fully Automated Storage Tiering) may not be supported on the DMX-4 (support will most likely depend on microcode level rather than hardware) | FAST will be supported later this year on V-Max systems
Microcode: 5772 / 5773 | Microcode: 5874
Released in July 2007 | Released in April 2009
Directors and cache on separate physical slots / cards | Director and cache condensed onto one board
TimeFinder performance better than the previous generation | 300% better TimeFinder performance compared to the DMX-4
No IP management interface to the Service Processor | IP management interface to the Service Processor; can be managed through the customer's IP network infrastructure
Symmetrix Management Console free of charge | Symmetrix Management Console licensed at a cost starting with the V-Max systems
Architecture similar to that of its predecessor, the DMX-3 | Architecture completely redesigned and entirely different from the predecessor DMX-4
Microcode 5772 and 5773 built on the previous-generation 5771 and 5772 respectively | Microcode 5874 built on the 5773 base from the previous-generation DMX-4
No RVA (RAID Virtual Architecture) | Implements RVA (RAID Virtual Architecture)
Largest supported volume: 64GB per LUN | Large volume support: 240GB per LUN (open systems) and 223GB per LUN (mainframe)
128 hypers per drive (LUNs per drive) | 512 hypers per drive (LUNs per drive)
Configuration change not as robust as on the V-Max | Concurrent configuration change lets customers combine change management into a single set of scripts rather than a step-based process
Presents some challenges with mirror positions | Reduced mirror positions, giving customers good flexibility for migration and other opportunities
No Virtual Provisioning with RAID 5 and RAID 6 devices | Virtual Provisioning now allowed with RAID 5 and RAID 6 devices
No Autoprovisioning groups | Concept of Autoprovisioning groups introduced with the V-Max
Minimum size: a single-storage-cabinet system supporting 240 drives, purchased with a system cabinet | Minimum size: V-Max SE (single engine) with 1 engine and 360 drives max
No engine concept; architecture based on slots | Each engine has 4 quad-core Intel chips; 32GB, 64GB, or 128GB of cache per engine; 16 front-end ports per engine; and 4 back-end ports per engine connecting the system bay to the storage bays
PowerPC chips on the directors | Intel quad-core chips on the engines
PowerPath/VE supported for vSphere virtual machines | PowerPath/VE supported for vSphere virtual machines
Concept of a backplane exists in this generation | Fits in the category of modular storage and eliminates the backplane bottleneck
Truly sold as a generation upgrade to the DMX-3 | Sold with a big marketing buzz around hundreds of engines, millions of IOPS, TBs of cache, and Virtual Storage
Systems cannot be federated | Concept of federation introduced, but systems are not yet federated in production or customer environments
Directors connected to the system through a legacy backplane (Direct Matrix Architecture) | Engines connected through a copper RapidIO interconnect at 2.5GB/s
No support for FCoE or 10Gb Ethernet | No support for FCoE or 10Gb Ethernet
No support for 8Gb/s loop interface speeds | No support for 8Gb/s loop interface speeds
Strong marketing and good success | "Virtual marketing" for the Virtual Matrix: introduced with FAST as a sales strategy, with FAST not available until at least the later part of the year
No InfiniBand support expected | Open question whether InfiniBand will be supported in the future to connect engines at short or long distances (several meters)
No federation | With federation expected in upcoming versions, how will cache latency behave between federated systems located meters apart?
Global cache on Global Memory Directors | Global cache on the engines themselves; since cache is shared between multiple engines, some cache latency is expected as multiple engines request that I/O
A monster storage system | Building blocks (engines) that can create a much larger storage monster
256GB total vault | 200GB of vault space per engine; with 8 engines, that is 1.6TB of vault storage
Performance great compared to the previous-generation DMX, DMX-2, and DMX-3 | V-Max IOPS per port (128MB/s hits): 385 read, 385 write; per 2 ports: 635 read, 640 write
FICON performance baseline | 2.2x FICON performance compared to the DMX-4; 2 ports can deliver as many as 17,000 IOPS on FICON
Large metadata overhead from the number of volumes, devices, cache slots, and so on | 50 to 75% reduction in metadata overhead
SRDF technology supported | New SRDF/EDP (Extended Distance Protection) with a diskless R21 pass-through device; no disk required for the pass-through
Symmetrix Management Console 6.0, with no templates or wizards | Templates and wizards in the new SMC 7.0 console
128 total SRDF groups supported | 250 total SRDF groups supported
16 SRDF groups on a single port | 64 SRDF groups on a single port
Connectivity baseline | 2x the connectivity of the DMX-4
Usable storage baseline | 3x the usability of the DMX-4
First Symmetrix generation with RAID 6 support | RAID 6 performance 3.6 times better than on the DMX-4
RAID 6 support on the DMX-4 was a little premature | RAID 6 performance on the V-Max is equivalent to RAID 1 on the DMX-4
SATA II performance better than on the V-Max | SATA II drives do not support the 520-byte format, so EMC computes the 8 bytes (520 - 512) of T10-DIF data-integrity information and writes it in 64K blocks or chunks throughout the drive, causing performance degradation; SATA II performance on the V-Max is therefore worse than on the DMX-4
Fibre Channel performance better than the DMX and DMX-2 | Fibre Channel performance improved by about 36% over the DMX-4
Started supporting 4Gb/s host connectivity | Fibre Channel performance of 5,000 IOPS per channel
No RVA on DMX-4 platforms | RVA (RAID Virtual Architecture) allows one mirror position for RAID volumes, letting customers use the remaining 3 positions for BCVs, SRDF, migration, and so on
No MIBE or SIB; the DMX-4 directors are connected through a common backplane | MIBE (Matrix Interface Board Enclosure) connects the odd and even (Fabric A and Fabric B) directors; the SIB (System Interface Board) connects the engines together using RapidIO
Director count runs from Director 1 on the left to Director 18 (hex) on the right | Director count runs from 1 at the bottom to 16 (F) at the top, with 2 directors per engine: 8 engines, 16 directors
Two director failures will not cause a system outage or data loss / data unavailability, provided they are not on the same fabric or bus and are not DIs (dual initiators) of each other | A single engine failure (2 directors) will not cause data loss / data unavailability or a system outage; failed components can be directors, engines, MIBE, power supplies, fans, or cache within a single engine (2 directors)
Single loop outages will not cause DU | Single loop outages will not cause DU
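Since the 520-byte sector format comes up twice in the table above, here is a small Python sketch of how an 8-byte T10-DIF protection field is laid out (a 2-byte CRC guard, a 2-byte application tag, and a 4-byte reference tag). This illustrates the public T10-DIF proposal only; it is not EMC's internal implementation:

```python
# Illustrative layout of a T10-DIF protection information (PI) field:
# 2-byte guard (CRC-16/T10-DIF over the 512-byte block), 2-byte application
# tag, 4-byte reference tag (low 32 bits of the LBA). Not EMC's internal code.
import struct

def crc16_t10dif(data: bytes) -> int:
    """CRC-16/T10-DIF: polynomial 0x8BB7, initial value 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def make_520_byte_sector(block512: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Append the 8-byte PI field to a 512-byte block, yielding a 520-byte sector."""
    assert len(block512) == 512
    guard = crc16_t10dif(block512)
    ref_tag = lba & 0xFFFFFFFF
    return block512 + struct.pack(">HHI", guard, app_tag, ref_tag)

sector = make_520_byte_sector(b"\x00" * 512, lba=1234)
print(len(sector))  # 520
```

The guard CRC lets the array detect corruption of the data itself, while the reference tag catches a block written to (or read from) the wrong LBA.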

More architectural details related to drives, cache, directors, cabinets, MIBE, SIB, and the Service Processor will come in a V-Max architecture, expansion, and modularity post over the next week.

Enjoy!!!!

EMC Symmetrix DMX-4: Supported Drive Types

June 28th, 2009

In this blog post we will discuss the supported drive models for the EMC Symmetrix DMX-4. Right before the release of the Symmetrix V-Max, in early February 2009, we saw added support for EFDs (Enterprise Flash Disks) on the Symmetrix DMX-4 platform: the additions were denser 200GB and 400GB EFDs.

The following drive sizes are supported with Symmetrix DMX-4 systems at the current microcode, 5773: 73GB, 146GB, 200GB, 300GB, 400GB, 450GB, 500GB, and 1000GB. Drives come in 10K or 15K RPM flavors with a 2Gb/s or 4Gb/s interface.
The drive auto-negotiates to the backplane speed: a green drive LED indicates a 2Gb/s interface, and a neon-blue LED indicates 4Gb/s.

To read a blog post on supported drive types on the EMC Symmetrix V-Max system, see the link below.

The following are details on the drives for Symmetrix DMX-4 systems: drive type, rotational speed, interface, device cache, access time, raw capacity, open-systems formatted capacity, and mainframe formatted capacity.


Drive | Speed | Interface | Device Cache | Access Time | Raw Capacity | Open Systems Formatted Cap | Mainframe Formatted Cap
73GB FC | 10K | 2/4Gb/s | 16MB | 4.7-5.4 ms | 73.41 GB | 68.30 GB | 72.40 GB
73GB FC | 15K | 2/4Gb/s | 16MB | 3.5-4.0 ms | 73.41 GB | 68.30 GB | 72.40 GB
146GB FC | 10K | 2/4Gb/s | 32MB | 4.7-5.4 ms | 146.82 GB | 136.62 GB | 144.81 GB
146GB FC | 15K | 2/4Gb/s | 32MB | 3.5-4.0 ms | 146.82 GB | 136.62 GB | 144.81 GB
300GB FC | 10K | 2/4Gb/s | 32MB | 4.7-5.4 ms | 300.0 GB | 279.17 GB | 295.91 GB
300GB FC | 15K | 2/4Gb/s | 32MB | 3.6-4.1 ms | 300.0 GB | 279.17 GB | 295.91 GB
400GB FC | 10K | 2/4Gb/s | 16MB | 3.9-4.2 ms | 400.0 GB | 372.23 GB | 394.55 GB
450GB FC | 15K | 2/4Gb/s | 16MB | 3.4-4.1 ms | 450.0 GB | 418.76 GB | 443.87 GB
500GB SATA II | 7.2K | 2/4Gb/s | 32MB | 8.5-9.5 ms | 500.0 GB | 465.29 GB | 493.19 GB
1000GB SATA II | 7.2K | 2/4Gb/s | 32MB | 8.2-9.2 ms | 1000.0 GB | 930.78 GB | 986.58 GB
73GB EFD | N/A | 2Gb/s | N/A | 1 ms | 73.0 GB | 73.0 GB | 73.0 GB
146GB EFD | N/A | 2Gb/s | N/A | 1 ms | 146.0 GB | 146.0 GB | 146.0 GB
200GB EFD | N/A | 2/4Gb/s | N/A | 1 ms | 200.0 GB | 196.97 GB | 191.21 GB
400GB EFD | N/A | 2/4Gb/s | N/A | 1 ms | 400.0 GB | 393.84 GB | 382.33 GB

Support for 73GB and 146GB EFDs has been dropped with the Symmetrix V-Max systems. They are still supported on the Symmetrix DMX-4, which supports 200GB and 400GB EFDs in addition to the 73GB and 146GB sizes.
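For readers who want to script against this support matrix, here is a small illustrative Python sketch that encodes a few of the rows above and computes formatting overhead. The data structure and field names are my own, not from any EMC tool:

```python
# A few rows of the DMX-4 drive matrix above, encoded for quick lookups.
# The data structure and field names are illustrative, not from EMC.
from dataclasses import dataclass

@dataclass
class DriveSpec:
    model: str          # e.g. "146GB FC 15K"
    speed: str          # rotational speed, or "N/A" for EFDs
    raw_gb: float
    open_fmt_gb: float  # open-systems formatted capacity
    mf_fmt_gb: float    # mainframe formatted capacity

DMX4_DRIVES = [
    DriveSpec("73GB FC 15K",   "15K",   73.41,  68.30,  72.40),
    DriveSpec("146GB FC 15K",  "15K",  146.82, 136.62, 144.81),
    DriveSpec("450GB FC 15K",  "15K",  450.0,  418.76, 443.87),
    DriveSpec("1000GB SATA II", "7.2K", 1000.0, 930.78, 986.58),
    DriveSpec("400GB EFD",     "N/A",  400.0,  393.84, 382.33),
]

# Example: open-systems formatting overhead for each drive
for d in DMX4_DRIVES:
    loss = 1 - d.open_fmt_gb / d.raw_gb
    print(f"{d.model}: {loss:.1%} capacity lost to formatting")
```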

EMC Symmetrix V-Max: Enginuity 5874

June 26th, 2009

EMC Symmetrix V-Max systems were introduced back in April 2009. With this new generation of Symmetrix came a new name, V-Max, and a new Enginuity microcode family, 5874.
To read about Symmetrix on StorageNerve Blog

http://storagenerve.com/tag/symmetrix

To read about V-Max systems on StorageNerve Blog

http://storagenerve.com/tag/v-max/

With the 5874 microcode family, the major areas of enhancement are listed below.

Base enhancements

Management Interfaces enhancements

SRDF functionality changes

TimeFinder Performance enhancements

Open Replicator Support and enhancements

Virtualization enhancements

EMC also introduced SMC 7.0 (Symmetrix Management Console) for managing this generation of Symmetrix. Read the SMC 7.0 post linked below.

http://storagenerve.com/2009/05/06/emc-symmetrix-management-console-symmetrix-v-max-systems/

Enginuity family 5874 also requires Solutions Enabler 7.0.

The initial Enginuity release was 5874.121.102. A month in, a new emulation and SP release, 5874.122.103, followed, and the latest release as of June 18th, 2009 is 5874.123.104. These new emulation and SP releases add no new features to the microcode; they are just patches and fixes related to maintenance, DU/DL, and environmentals.

Based on EMC's initial list of enhancements, plus a few more we heard at EMC World 2009, here is a summary of all of them.

RVA: RAID Virtual Architecture:

With Enginuity 5874, EMC introduced the concept of a single mirror position for RAID protection. Working within mirror positions has always been challenging, since a Symmetrix device caps out at 4 positions. With these enhancements to mirror positions for SRDF environments and RAID 5 (3D+1P, 7D+1P) / RAID 6 (6D+2P, 14D+2P) / RAID 1 devices, doors now open to further migration and data-movement opportunities for SRDF and RAID devices; a small sketch of the position arithmetic follows.
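As an illustration of why this matters, here is a tiny Python sketch of the mirror-position budget. The 4-position cap comes from the post itself; the per-feature position costs below are simplified assumptions for illustration, not official EMC numbers:

```python
# Simplified mirror-position arithmetic for a Symmetrix device (4-position cap).
# The per-feature position costs used here are illustrative assumptions.
MAX_POSITIONS = 4

def free_positions(used_by_raid: int, features: dict) -> int:
    """Return how many of the 4 mirror positions remain for new uses."""
    used = used_by_raid + sum(features.values())
    if used > MAX_POSITIONS:
        raise ValueError("device exceeds the 4 available mirror positions")
    return MAX_POSITIONS - used

# Before RVA: assume local RAID protection consumed 2 positions
print(free_positions(2, {"SRDF": 1, "BCV": 1}))   # 0 -> no room left for migration
# With RVA: RAID protection occupies a single position
print(free_positions(1, {"SRDF": 1, "BCV": 1}))   # 1 -> a position freed, e.g. for migration
```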

Large Volume Support:

With this version of Enginuity we get a maximum volume size of 240GB for open systems and 223GB for mainframe systems, with 512 hypers per drive. The maximum drive size supported on a Symmetrix V-Max system is the 1TB SATA II drive, and the maximum EFD size is 400GB.

Dynamic Provisioning:

Enhancements related to SRDF and BCV device attributes will improve overall efficiency during configuration management and will provide methods and means for faster provisioning.

Concurrent Configuration Changes:

Enhancements to concurrent configuration changes allow the customer and the customer engineer, working through the Service Processor or Solutions Enabler, to combine certain procedures and changes and execute them as a single script rather than running them as a series of separate changes; a hedged sketch of the idea follows.
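As a hedged illustration of the "single script" idea, here is a Python sketch that batches several changes into one Solutions Enabler symconfigure session. The symconfigure CLI and its command file are real Solutions Enabler concepts, but the exact device syntax and flags below are from memory and should be checked against the SE 7.0 documentation:

```python
# Sketch: batching multiple configuration changes into one symconfigure run
# instead of a series of separate change sessions. Command-file syntax is
# from memory; verify against the Solutions Enabler 7.0 documentation.
import subprocess
import tempfile

changes = """
create dev count=4, size=4096, emulation=FBA, config=RAID-5, data_member_count=3;
create dev count=2, size=4096, emulation=FBA, config=2-Way-Mir;
"""

def apply_changes(sid: str, command_text: str) -> None:
    """Write a command file and commit all changes in a single session."""
    with tempfile.NamedTemporaryFile("w", suffix=".cmd", delete=False) as f:
        f.write(command_text)
        path = f.name
    # One commit covers every change in the file.
    subprocess.run(["symconfigure", "-sid", sid, "-file", path,
                    "-noprompt", "commit"], check=True)

apply_changes("1234", changes)  # "1234" is a hypothetical Symmetrix ID
```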

Service Processor IP Interface:

All Service Processors attached to Symmetrix V-Max systems will run Symmetrix Management Console 7.0, allowing customers to log in and perform Symmetrix management functions. The Service Processor can also be managed through the customer's existing IP (network) environment. Symmetrix Management Console must now be licensed and purchased from EMC for V-Max systems; prior versions of SMC were free. SMC also now has the capability to be opened through a web interface.

SRDF Enhancements:

With RAID 5 and RAID 6 devices having been introduced on the previous generation of Symmetrix (DMX-4), the V-Max now offers 300% better performance with TimeFinder and other SRDF layered applications, making the process very efficient and resilient.

Enhanced Virtual LUN Technology:

Enhancements to Virtual LUN technology allow customers to non-disruptively change the physical or logical location of a device on disk, further simplifying the migration process on various systems.

Virtual Provisioning:

Virtual Provisioning can now be extended to RAID 5 and RAID 6 devices, which was restricted in previous versions of Symmetrix.

Autoprovisioning Groups:

Using Autoprovisioning groups, customers can now perform device masking by grouping host initiators, front-end ports, and storage volumes. There was an EMC challenge at the EMC World 2009 Symmetrix corner to autoprovision the Symms in a minimum number of clicks. Autoprovisioning groups are supported through Symmetrix Management Console; a hedged CLI sketch follows.
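For CLI users, autoprovisioning maps to the Solutions Enabler symaccess command. The sketch below wraps it in Python; symaccess and the initiator/port/storage group model are real SE 7.0 features, but the exact flag spellings and the names used here (app_ig, 7E:0, and so on) are illustrative and should be verified against the documentation:

```python
# Sketch: creating an autoprovisioning (masking) view with symaccess.
# Group/view names, WWN, ports, and devices are made-up examples; flag
# spellings are from memory -- check the Solutions Enabler 7.0 docs.
import subprocess

SID = "1234"  # hypothetical Symmetrix ID

def symaccess(*args: str) -> None:
    subprocess.run(["symaccess", "-sid", SID, *args], check=True)

# One group each for initiators, front-end ports, and storage volumes...
symaccess("create", "-name", "app_ig", "-type", "initiator",
          "-wwn", "10000000c9abcdef")
symaccess("create", "-name", "app_pg", "-type", "port",
          "-dirport", "7E:0")
symaccess("create", "-name", "app_sg", "-type", "storage",
          "devs", "0A00:0A03")
# ...then a single view ties them together and performs the masking.
symaccess("create", "view", "-name", "app_view",
          "-ig", "app_ig", "-pg", "app_pg", "-sg", "app_sg")
```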

So the above are the highlights of EMC Symmetrix V-Max Enginuity 5874. As new versions of the microcode are released later in the year, stay plugged in for more info.