
Storage Economics – Hardware Maintenance – Part 1

August 5th, 2009

On several occasions I have written about storage management and the cost reductions associated with it in terms of CapEx and OpEx. In this blog post we will talk about how your organization may be able to further leverage resources available in the industry to reduce TCO (Total Cost of Ownership) and improve ROA (Return on Assets) for the storage devices you own.

For example purposes, let's assume we are talking about one single storage device (frame) in the environment. Also, for this blog post, let's assume the manufacturer (OEM) of the storage frame is the vendor.

The concept of Hardware Maintenance

You purchased a storage asset 3 years ago. You spent a million dollars in acquisition cost on that storage device, and also paid for software licenses, implementation, migration and training. You are almost at the 2 million dollar mark to implement this Enterprise Class Storage, which holds your Tier 1 and/or Tier 2 data.

How is this Storage frame doing today?

It's working great; the applications associated with it are robust, and thankfully over the past 3 years we haven't seen any outages in this environment.

Oh… by the way, the vendor just visited today and is proposing we do a tech refresh in this environment.

The Strategy related to Hardware Maintenance

So the first question, are you ready for this tech refresh?

Is your business ready for this tech refresh?

Is your team ready and trained for this new technology?

Do you need external resources for this tech refresh?

Are there budgets and proposed growth in the business to pay for this tech refresh?

Do we really need a tech refresh?

Are your applications ready for this tech refresh?

Would your host environments be ready for this tech refresh?

What is it that you are trying to gain by this tech refresh: processing power, speed, savings, a green data center, power and electricity costs, management cost, etc.?

Are your users complaining about your application performance?

Is the number of users growing on these apps?

So how many nays and yeas do we have on the questions above?

The Facts about Hardware Maintenance

The vendor is proposing substantial savings, helping us reduce the TCO on these assets over the next three years.

The cost of hardware maintenance from the vendor for years 4, 5 and 6 (on the existing storage asset) is almost equivalent to the cost of purchasing new assets.

We are being offered the best deal: free training, a 20% reduction in the hardware acquisition cost, and another 5% discount for the quarter closing tomorrow.
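To make those facts concrete, here is a back-of-the-envelope sketch of the keep-versus-refresh comparison. All figures are hypothetical placeholders, not from any vendor quote:

```python
# Back-of-the-envelope keep-vs-refresh comparison.
# All figures below are hypothetical placeholders, not real quotes.
maintenance_y4_y6 = 300_000      # OEM maintenance on the existing frame, years 4-6
new_frame_list = 400_000         # list price of the proposed replacement frame

# Vendor discounts from the pitch: 20% off, plus 5% for the quarter closing
discounted_new = new_frame_list * (1 - 0.20) * (1 - 0.05)

# Costs the quote usually leaves out: migration, training, implementation
migration_and_training = 75_000

refresh_total = discounted_new + migration_and_training
print(f"Keep existing frame (years 4-6 maintenance): ${maintenance_y4_y6:,.0f}")
print(f"Tech refresh (discounted hardware + migration): ${refresh_total:,.0f}")
```

Even a rough comparison like this shows why the "maintenance costs as much as new hardware" pitch needs the migration and training line items added back in before it can be evaluated.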

The beliefs about Hardware Maintenance

Hardware Support: No one other than the vendor can provide hardware support on the Storage assets because it is just too complex to manage.

Remote Support and Diagnostics: No one other than the vendor can provide remote support and diagnostics.

Code Upgrades (Firmware) and Engineering Support: No one other than the vendor can provide Code upgrades and Engineering Support.

Global Technical Support: No one other than the vendor has a 24 x 7 global technical support.

Onsite Certified & Trained Engineers: Only the vendor has trained and certified onsite engineers.

Spares: 24 x 7, 4 hour response spare parts logistics, only the vendor has it.

SLA: Only the vendor can provide a mission critical or a premium SLA that would include either 24 x 7 x 2 support or 24 x 7 x 4 support.

Software Support: No one other than the vendor can provide software support.

So, how do you get around these industry notions?

Please stay tuned for the next blog post on Storage Economics – Hardware Maintenance – Part 2 tomorrow.

New Updated Blogroll

July 31st, 2009

So as discussed earlier in the Blogroll Update blog post, here is the new StorageNerve Blogroll.

These are based on 3 primary classifications: Storage Blogs, Virtualization Blogs and Analyst Blogs

Along with the links below, the Blogroll is also located on the left sidebar.

Storage Blogs

Barry Whyte

Chris Kranz

Chuck Hollis

Dave Graham

David Merrill

Hu Yoshida

Ruptured Monkey

Storage Anarchist

Storage Architect

StorageZilla

Virtualization Blogs

Virtual Geek

Scott Lowe

Yellow Bricks

Analyst Blogs

Steve Duplessie

Stephen Foskett

StorageMojo

Wikibon

I hope some of the blogs mentioned here give a link back to the StorageNerve Blog.

Some readers may not agree with this classification, nor with all the blogs in those classifications.

Cheers, @storagenerve

EMC Clariion Systems: Global Hot Spares & Proactive Hot Spares

July 30th, 2009

The concept of Global Hot Spares has been supported in Clariion environments since the first generation of FC & CX platforms. Now the technology has been extended into the CX3 and then the CX4 platforms. The primary purpose of global hot sparing is to protect the system against disk drive failures.

Take, for example, a CX4-960, which can be scaled up to 960TB of raw storage and can have as many as 960 disk drives in it. Given typical drive failure rates, a large number of drives creates a higher probability of failure. Every storage manufacturer these days includes some sort of hot sparing technology in its storage subsystems. EMC started offering this technology to its customers as Global Hot Spares. Then came an era where value-add offerings were brought in to handle proactive failures and minimize the chance of data loss. This brought to the table a technology termed Proactive Hot Spares, where a proactively failing drive is identified and a global hot spare is kicked in.

I believe Flare release 24 started offering Proactive Hot Spares. With this Flare release, customers can also proactively initiate a hot spare kickoff through Navisphere or Naviseccli against a suspect drive.

Depending on the RAID type implemented, RAID groups can withstand drive failures and run in a degraded state without data loss or data unavailability. With RAID 6, a machine can survive as many as 2 drive failures in the same RAID group; with RAID 5, 1 drive failure; and with RAID 1/0 or RAID 1, 1 drive failure in the RAID group without data loss.

Drives supported on Clariion CX, CX3, CX4, AX and AX4 systems are typically FC (Fibre Channel), SATA II and ATA drives.

A Global Hot Spare has to be configured in an EMC Clariion system as a single RAID Group (with one drive). Once the RAID Group is created, a LUN must be bound on it as a Global Hot Spare before it can be activated.

The following is the sequence of steps that takes place on a Clariion subsystem for Global Hot Spares (supported on CX, CX3 and CX4 systems):

  1. Disk drive failure: A disk drive fails in the system and the Flare code marks it bad.
  2. Hot spare invoked: A preconfigured Global Hot Spare is invoked based on the Global Hot Spare selection criteria.
  3. Rebuild: The Global Hot Spare is rebuilt from the surviving RAID group members.
  4. Failed drive replaced: The failed disk drive is replaced with a good drive by a Customer Engineer.
  5. Copy back: The copy to the Global Hot Spare has to finish before the new drive starts rebuilding. The rebuild or equalize happens in sequential order of LBA (Logical Block Address), not the LUNs bound on it.
  6. Return hot spare: Once the sync of the new drive is finished, the hot spare is invalidated (zeroed) and put back in the Global Hot Spare pool.
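As a rough illustration, the six steps above can be modeled as a simple state sequence. The class and method names here are my own invention, not actual Flare internals:

```python
# Illustrative model of the global hot spare lifecycle described above.
# Class and method names are my own, not actual Flare internals.

class HotSpare:
    def __init__(self):
        self.state = "available"          # sitting in the hot spare pool

    def invoke(self):                     # steps 1-2: drive fails, spare invoked
        assert self.state == "available"
        self.state = "rebuilding"

    def rebuild_complete(self):           # step 3: rebuilt from surviving members
        self.state = "in_use"

    def copy_back(self):                  # steps 4-5: failed drive replaced,
        self.state = "copying_back"       # data copied back in LBA order

    def release(self):                    # step 6: spare zeroed, returned to pool
        self.state = "available"

spare = HotSpare()
for step in (spare.invoke, spare.rebuild_complete, spare.copy_back, spare.release):
    step()
print(spare.state)  # → available
```

The key point the model captures is that a spare always ends up back in the pool: it is a temporary stand-in, not a permanent replacement.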

The following is the sequence of steps that takes place on a Clariion subsystem for Proactive Hot Spares (supported on CX300, CX500, CX700, CX3 and CX4). Proactive Hot Spares use the same drives that are configured as Global Hot Spares.

  1. Error threshold on disk drive: A drive gets hit with errors; once it surpasses the threshold for the number and type of those errors, the Flare code marks it as a potential candidate for failure.
  2. Proactive Hot Spare invoked: Based on the potential candidate's drive type, size and bus location, a Global Hot Spare is identified and the data rebuild process is kicked off.
  3. Potential candidate fails: Once the Proactive Hot Spare is synced, the Flare code fails the identified potential candidate.
  4. Failed drive replacement: The failed drive is replaced by a Customer Engineer.
  5. Copy back: From the Proactive Hot Spare, the data is copied back to the newly inserted drive. The rebuild or equalize happens in sequential order of LBA (Logical Block Address).
  6. Return Proactive Hot Spare: Once the sync of the new drive is finished, the hot spare is invalidated (zeroed) and put back into the Global Hot Spare pool.

The Global Hot Spares Selection Criteria:

The following are the criteria used when selecting (invoking) a Global Hot Spare, whether a potential proactive candidate is identified or a disk drive has failed. In the sequence listed below, drive type is the first selection, size of the drive is the second, and location of the Global Hot Spare is the third. Speed of the drive (RPM) is not a selection criterion.

  1. Type of Global Hot Spare drive: As discussed above, Clariion systems use three primary drive types. FC and SATA II drives can be invoked against each other's failures; ATA drives can only be invoked against an ATA drive failure.
  2. Size of Global Hot Spare: Upon a disk failure, the drive size of the Global Hot Spare is examined by the Flare code. The size of the failed drive is not the key to invoking the hot spare; rather, the total space of all LUNs bound on the drive is used as the determining criterion.
  3. Location of Global Hot Spare: Once the above two criteria are met, the location of the Global Hot Spare is considered. A Global Hot Spare on the same bus as the failed drive is the primary selection; if no such spare exists, the Global Hot Spare is selected from another bus.
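A minimal sketch of this three-step selection logic, with hypothetical names throughout (this is not Flare's actual implementation):

```python
# Hypothetical model of the Global Hot Spare selection criteria above:
# drive type first, then capacity of bound LUNs, then same-bus preference.

COMPATIBLE = {
    "FC": {"FC", "SATA II"},        # FC and SATA II can cover each other
    "SATA II": {"FC", "SATA II"},
    "ATA": {"ATA"},                 # ATA only covers ATA failures
}

def select_hot_spare(failed_type, bound_lun_space_gb, failed_bus, spares):
    """Pick a spare: drive type first, then bound-LUN capacity, then bus."""
    candidates = [
        s for s in spares
        if failed_type in COMPATIBLE[s["type"]]        # 1. drive type match
        and s["size_gb"] >= bound_lun_space_gb         # 2. space of bound LUNs
    ]
    same_bus = [s for s in candidates if s["bus"] == failed_bus]  # 3. same bus
    return (same_bus or candidates or [None])[0]

spares = [
    {"type": "FC", "size_gb": 146, "bus": 1},
    {"type": "FC", "size_gb": 300, "bus": 0},
]
# A failed FC drive on bus 0 with 200GB of bound LUNs:
print(select_hot_spare("FC", 200, 0, spares))  # picks the 300GB spare on bus 0
```

Note how the capacity check uses the bound LUN space, not the raw size of the failed drive, which matches the second criterion above.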

Other Considerations:

  1. RAID types: For the copy of data, with RAID 3 and RAID 5 the data on the hot spare is built using the parity drive. With RAID 6, data on the hot spare is built using the RP (row parity) and/or DP (diagonal parity), depending on the number of failures in the RAID group. For RAID 1/0 and RAID 1, data on the hot spare is built using the surviving mirrors.
  2. Copy times: The time required to copy or rebuild a hot spare depends on how large the drive is, the speed of the drive, the cache available on the drive, the cache available on the array, the type of the array, the RAID type and the current job processing on the array. Typical rebuild times vary from 30 minutes to 90 minutes, again depending on how busy the storage subsystem is.
  3. Global Hot Spare counts: For every 30 drives (2 DAEs of drives), consider having 1 drive as a Global Hot Spare. Also verify that for every drive type (size, speed) in the machine you have at least one configured Global Hot Spare. It is a good idea to have Global Hot Spares on various buses and spread across multiple Service Processors.
  4. Vault drives: Vault drives cannot be used as Global Hot Spares. The vault drives are the first 5 drives [ 0_0_0, 0_0_1, 0_0_2, 0_0_3, 0_0_4 ] on the Clariion system. If a vault drive fails, a Global Hot Spare takes over its position.
  5. Rotational speed: The rotational speed of the Global Hot Spare is not considered before invoking it. It might be a good idea to have Global Hot Spares running at 15K RPM, potentially with large drive sizes.
  6. Mixed loop speed: With certain Clariion systems like the CX3, the available loop options are 4Gb and/or 2Gb, and you can have mixed loop speeds in one machine. Loop speed is not considered for hot spare selection, so in those cases it might be wise to have similar hot spares on both the 2Gb and 4Gb loops.
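The sizing guidance in point 3 above can be sanity-checked with a small helper (the function name and example figures are mine, purely illustrative):

```python
import math

# Rule of thumb from above: one global hot spare per 30 drives (2 DAEs),
# plus at least one spare per distinct drive type (size, speed) in the frame.

def recommended_hot_spares(total_drives, drive_types):
    by_count = math.ceil(total_drives / 30)   # 1 spare per 30 drives
    by_type = len(drive_types)                # at least 1 per (size, speed) combo
    return max(by_count, by_type)

# Example: 240 drives with three distinct drive types in the frame
print(recommended_hot_spares(240, {("300GB", "15K"), ("146GB", "15K"), ("1TB", "SATA")}))
```

On a small frame with many drive types, the per-type rule dominates; on a large frame, the 1-per-30 rule does.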

EMC Symmetrix, 20 Years in the making

July 29th, 2009

So next year will mark 20 years of Symmetrix products within EMC, still classified as one of the most robust systems out there two decades after its inception. In this blog post, we will talk about some facts on Symmetrix products as they relate to features, characteristics, Enginuity microcode versions, model numbers, year released, etc.

Also in this blog post you will see links to most of my previous posts about Symmetrix products.

——————————————————————————————————————————

So the journey of Symmetrix systems started with Moshe Yanai (along with his team) joining EMC in the late 80's. A floating story says the idea of a cache-based disk array was initially pitched to both IBM and HP and was shot down. EMC was predominantly a mainframe memory company back in the late 1980's. The Symmetrix products completely changed the direction of EMC within a decade.

Joe Tucci came in at the end of the 90's from Unisys with a big vision and wanted to radically change EMC. Through new acquisitions, new technologies, vision and, foremost, the integration of all those technologies, he created today's EMC.

Symmetrix has always been the jewel of EMC. Back in the Moshe days, the engineers were treated royally (I have heard stories about helicopter rides and lavish parties, with a satellite bus waiting outside for a support call). Then came the Data General acquisition in the late 90's, which completely changed the game.

Some people within EMC were against the DG acquisition and didn't see much value in it, yet the Clariion DG backplane is what turned the Symmetrix into the Symmetrix DMX with fibre-based drives. Over the past decade, EMC radically changed its position, focusing on acquisitions, support, products, quality, efficiency and usability, and foremost changing itself from a hardware company into an Information Solutions company with software as an integral growth factor. New acquisitions like Legato, Documentum and RSA kept changing the culture and growth focus within EMC.

Then came VMware, and it changed the rules of the game; EMC's strategic move to invest in VMware paid off big time. Then came the 3-way partnership between VMware, EMC and Cisco to integrate next-generation products, and V-Max (Symmetrix), vSphere and UCS were born.

Here we are in 2009, almost 20 years since the inception of the Symmetrix: the name, the product, the Enginuity code, the robust characteristics and the investment from EMC all stay committed through changing market demands.

——————————————————————————————————————————

Jumping back to the Symmetrix, here are a few articles you might find interesting, covering the various models, the serial numbers of the machines and, importantly, the Enginuity Operating Environment.

To read about EMC Symmetrix Enginuity Operating Environment

To read about EMC Symmetrix Serial Number naming convention

To read about EMC Symmetrix Models in a previous blog post

To read about various EMC models based on different Platforms

To read about all EMC Clariion models since the Data General Acquisition

——————————————————————————————————————————

Symmetrix Family 1.0

ICDA – Integrated Cache Disk Array

Released 1990 and sold through 1993

24GB total disk space introduced

Wow, I was in elementary school or maybe middle school when this first generation Symmetrix was released…

Symmetrix 4200

——————————————————————————————————————————

Symmetrix Family 2.0

ICDA – Integrated Cache Disk Array

Released 1991 and sold through 1994

36GB total disk space

Mirroring introduced

Symmetrix 4400

——————————————————————————————————————————

Symmetrix Family 2.5

ICDA – Integrated Cache Disk Array

Released 1992 and sold through 1995

RSF capabilities added

(I actually met a guy about 2 years ago who was one of the engineers that worked on developing the first RSF capabilities at EMC and was very instrumental in developing the Hopkinton PSE lab.)

Symmetrix 4800

——————————————————————————————————————————

Symmetrix Family 3.0 also called Symmetrix 3000 and 5000 Series

Released 1994 and sold through 1997

ICDA: Integrated Cache Disk Array

Includes Mainframe Support (Bus & Tag)

Global Cache introduced

1GB total Cache

NDU – Microcode

SRDF introduced

Supports Mainframe and open systems both

Enginuity microcode 50xx, 51xx

Symmetrix 3100: Open systems support, half height cabinet, 5.25 inch drives

Symmetrix 5100: Mainframe support, half height cabinet, 5.25 inch drives

Symmetrix 3200: Open Systems support, single cabinet, 5.25 inch drives

Symmetrix 5200: Mainframe support, single cabinet, 5.25 inch drives

Symmetrix 3500: Open Systems support, triple cabinet, 5.25 inch drives

Symmetrix 5500: Mainframe support, triple cabinet, 5.25 inch drives

——————————————————————————————————————————

Symmetrix Family 4.0 also called Symmetrix 3000 and 5000 Series

Released 1997 and sold through 2000

RAID XP introduced

3.5 Inch drive size introduced

On triple cabinet systems 5.25 inch drives used

Supports Mainframe and Open Systems both

Timefinder, Powerpath, Ultra SCSI support

Enginuity microcode 5265.xx.xx, 5266.xx.xx

Symmetrix 3330: Open Systems Support, half height cabinet, 32 drives, 3.5 inch drives

Symmetrix 5330: Mainframe Support, half height cabinet, 32 drives, 3.5 inch drives

Symmetrix 3430: Open Systems Support, single frame, 96 drives, 3.5 inch drives

Symmetrix 5430: Mainframe Support, single frame, 96 drives, 3.5 inch drives

Symmetrix 3700: Open Systems Support, triple cabinet, 128 drives, 5.25 inch drives

Symmetrix 5700: Mainframe Support, triple cabinet, 128 drives, 5.25 inch drives

To read about EMC Symmetrix Hardware Components

——————————————————————————————————————————

Symmetrix Family 4.8 also called Symmetrix 3000 and 5000 Series

Released 1998 and sold through 2001

Symmetrix Optimizer Introduced

Best hardware so far: least outages, least problems and least failures (not sure if EMC will agree to it, most customers do)

3.5 inch drives used with all models

Enginuity microcode 5265.xx.xx, 5266.xx.xx, 5267.xx.xx

Symmetrix 3630: Open Systems support, half height cabinet, 32 drives

Symmetrix 5630: Mainframe support, half height cabinet, 32 drives

Symmetrix 3830: Open Systems support, single cabinet, 96 drives

Symmetrix 5830: Mainframe support, single cabinet, 96 drives

Symmetrix 3930: Open Systems support, triple cabinet, 256 drives

Symmetrix 5930: Mainframe support, triple cabinet, 256 drives

Models sold as 3630-18, 3630-36, 3630-50, 5630-18, 5630-36, 5630-50,3830-36, 3830-50, 3830-73, 5830-36, 5830-50, 5830-73, 3930-36, 3930-50, 3930-73, 5930-36, 5930-50, 5930-73 (the last two digits indicate the drives installed in the frame)

To read about EMC Symmetrix Hardware Components

——————————————————————————————————————————

Symmetrix Family 5.0 also called Symmetrix 8000 Series

[ 3000 (open systems) + 5000 (mainframe) = 8000 (support for both) ]

Supports Open Systems and Mainframe without BUS and TAG through ESCON

Released 2000 and sold through 2003

181GB Disk introduced

Enginuity microcode 5567.xx.xx, 5568.xx.xx

Symmetrix 8130: Slim cabinet, 48 drives

Symmetrix 8430: Single cabinet, 96 drives

Symmetrix 8730: Triple cabinet, 384 drives

Some models sold as 8430-36, 8430-73, 8430-181 or 8730-36, 8730-73, 8730-181 (the last two digits indicate the drives installed in the frame)

To read about EMC Symmetrix Hardware Components

——————————————————————————————————————————

Symmetrix Family 5.5 LVD also called Symmetrix 8000 Series

Released 2001 and sold through 2004

LVD: Low Voltage Differential disks introduced

146GB LVD drive introduced

Ultra SCSI drives cannot be used with the LVD frame

Mainframe optimized machines introduced

4 Slice directors introduced with ESCON and FICON

FICON introduced

Enginuity microcode 5567.xx.xx, 5568.xx.xx

Symmetrix 8230: Slim cabinet, 48 drives (rebranded 8130, non-LVD frame)

Symmetrix 8530: Single cabinet, 96 drives (rebranded 8430, non-LVD frame)

Symmetrix 8830: Triple cabinet, 384 drives (rebranded 8730, non-LVD frame)

Symmetrix 8230 LVD: LVD frame, slim cabinet, 48 LVD drives

Symmetrix 8530 LVD: LVD frame, single cabinet, 96 LVD drives

Symmetrix 8830 LVD: LVD frame, triple cabinet, 384 LVD drives

Symmetrix z-8530: LVD frame, Single cabinet, 96 drives, optimized for mainframes

Symmetrix z-8830: LVD frame, Triple cabinet, 384 drives, optimized for mainframe

Some models sold as 8530-36, 8530-73, 8530-146, 8530-181 or 8830-36, 8830-73, 8830-146, 8830-181 (the last two digits indicate the drives installed in the frame)

To read about EMC Symmetrix Hardware Components

——————————————————————————————————————————

Symmetrix DMX or also called Symmetrix Family 6.0

Released Feb 2003 and sold through 2006

Direct Matrix Architecture (Data General Backplane) introduced

DMX800 was the first DMX system introduced

4 Slice directors introduced

RAID 5 support added later, after its introduction on the DMX-3

First generation with common DA / FA hardware

Introduction of modular power

Enginuity Microcode 5669.xx.xx, 5670.xx.xx, 5671.xx.xx

Symmetrix DMX800: Single cabinet, DAE based concept for drives, 96 drives (I swear, a customer told me they have ghost-like issues with their DMX800)

Symmetrix DMX1000: Single cabinet, 18 drives per loop, 144 drives total

Symmetrix DMX1000-P: Single cabinet, 9 drives per loop, 144 drives total, P= Performance System

Symmetrix DMX2000: Dual cabinet, modular power, 18 drives per loop, 288 drives

Symmetrix DMX2000-P: Dual cabinet, modular power, 9 drives per loop, 288 drives, P=Performance System

Symmetrix DMX3000-3: Triple cabinet, modular power, 18 drives per loop, 3 phase power, 576 drives

To read about EMC Symmetrix DMX Hardware components

To read about EMC Symmetrix DMX models and major differences

——————————————————————————————————————————

Symmetrix DMX2 or also called Symmetrix Family 6.5

Released Feb 2004 and sold through 2007

Double the processing using DMX2

DMX and DMX2 frames are same, only directors from DMX must be changed to upgrade to DMX2, reboot of entire systems required with this upgrade

RAID 5 support added later, after its introduction on the DMX-3

64GB memory introduced

4 Slice Directors

Enginuity Microcode 5669.xx.xx, 5670.xx.xx, 5671.xx.xx

Symmetrix DMX801: 2nd generation DMX, Single cabinet, DAE based concept for drives, 96 drives, FC SPE 2

Symmetrix DMX1000-M2: 2nd generation DMX, Single cabinet, 18 drives per loop, 144 drives

Symmetrix DMX1000-P2: 2nd generation DMX, Single cabinet, 9 drives per loop, 144 drives, P=Performance System

Symmetrix DMX2000-M2: 2nd generation DMX, Dual cabinet, 18 drives per loop, 288 drives

Symmetrix DMX2000-P2: 2nd generation DMX, Dual cabinet, 9 drives per loop, 288 drives, P=Performance System

Symmetrix DMX2000-M2-3: 2nd generation DMX, Dual cabinet, 18 drives per loop, 288 drives, 3 Phase power

Symmetrix DMX2000-P2-3: 2nd generation DMX, Dual cabinet, 9 drives per loop, 288 drives, P=Performance System, 3 Phase power

Symmetrix DMX3000-M2-3: 2nd generation DMX, Triple cabinet, 18 drives per loop, 576 drives, 3 Phase power

To read about EMC DMX Symmetrix Hardware components

To read about EMC Symmetrix DMX models and major differences

——————————————————————————————————————————

Symmetrix DMX-3 or also called Symmetrix 7.0

Released July 2005 and still being sold

8 Slice directors

1920 drives (RPQ'ed to 2400 drives)

DAE based concept introduced

Symmetrix Priority Controls

RAID 5 introduced and then implemented on older DMX, DMX-2

Virtual LUN technology

SRDF enhancements

Concept of vaulting introduced

Enginuity microcode 5771.xx.xx, 5772.xx.xx

Symmetrix DMX-3 950: System Cabinet, Storage Bay x 2, 360 drives max, Modular Power, 3 Phase power

Symmetrix DMX-3: System Cabinet, Storage Bay x 8 (Expandable), 1920 drives max, RPQ’ed to 2400 drives, 3 Phase power

To read about differences between EMC Symmetrix DMX3 and DMX4 platforms

——————————————————————————————————————————

Symmetrix DMX-4 or also called Symmetrix 7.0

Released July 2007 and still being sold

Virtual provisioning

Flash Drives

FC / SATA drives

RAID 6 introduced

SRDF enhancements

Total Cache: 512 GB

Total Storage: 1 PB

Largest drive supported 1TB SATA drive

Flash drives: 73GB and 146GB at first; support for 200GB and 400GB released later

1920 drives max (RPQ’ed to 2400 drives)

Enginuity microcode 5772.xx.xx, 5773.xx.xx

Symmetrix DMX-4 950: System Cabinet, Storage Bay x 2, 360 drives max, Modular Power, 3 Phase power

Symmetrix DMX-4: System Cabinet, Storage Bay x 8 (Expandable), 1920 drives max, RPQ’ed to 2400 drives, Modular power, 3 Phase Power

Some models sold as DMX-4 1500, DMX-4 2500, DMX-4 3500 and DMX-4 4500

To read about a blog post on EMC Symmetrix: DMX4 Components

To read about differences between EMC Symmetrix DMX3 and DMX4 platforms

To read about different drives types supported on EMC Symmetrix DMX4 Platform

To read about differences between EMC Symmetrix DMX4 and V-Max Systems

——————————————————————————————————————————

Symmetrix V-Max

(Released April 2009)

Enginuity Microcode 5874.xxx.xxx

Total number of drives supported: 2400

Total Cache: 1 TB mirrored (512GB usable)

Total Storage: 2 PB

All features on the V-Max have been discussed earlier on the blog post linked below

Symmetrix V-Max SE: Single System Bay, SE=Single Engine, Storage Bay x 2, 360 drives max, cannot be expanded to a full-blown 8-engine system if purchased as an SE, 3 Phase power, Modular Power

Symmetrix V-Max: System Cabinet, Storage Bay x 10, 2400 drives max, modular power, 3 phase power

To read about differences between EMC Symmetrix DMX4 and V-Max Systems

To read about different drives types supported on EMC Symmetrix V-Max Platforms

To read all about the EMC Symmetrix V-Max Platform

——————————————————————————————————————————

I could have easily added the total memory capacity per frame, the total number of dedicated DA/DAF slots, total slots, total universal slots and total memory slots, but I didn't have that information for some of the old systems and didn't want to be incorrect about them.

I hope you have enjoyed reading this post, with a bit of history related to the Symmetrix platform. I am pretty positive that, as of today, you will not find this consolidated information on any blog or on the manufacturer's website.

I really wish EMC would open blogging to some of the Symmetrix, Clariion, Celerra and Centera specialists who support these systems on a day-to-day basis; the information that could come out of those guys could be phenomenal. Barry Burke writes a lot of stuff, but again, a lot of FUD from him against IBM and HDS; it's great reading him, but only a controlled amount of technical information comes from him.
