Posts Tagged ‘Clariion’

VAAI and Automated Storage Tiering with Storage Virtualization

September 30th, 2010


Storage Virtualization

Storage Virtualization is something of a game changer. The more I think about it, the more I appreciate the advantages it brings to storage environments: flexibility with migrations, efficiency, automation, simpler management and, importantly, features that your existing storage arrays may not natively support.

Storage Virtualization will take any storage device that is physically connected to it and remap the physical disks to xLUNs. These xLUNs can then take advantage of all the native features of the Storage Virtualization array (engine). Features could include storage groups, various RAID types, site-to-site replication, pooling of disks, Thin Provisioning, synchronous copy, asynchronous copy, local copy, striping, snapshots, VAAI and Automated Storage Tiering. Again, it doesn't matter whether your existing storage natively supports these features.
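To make the remapping idea concrete, here is a minimal sketch of how a virtualization engine might stitch xLUNs together out of physical LUNs from different back-end arrays. All class and array names here are illustrative assumptions, not any vendor's actual API.

```python
# Sketch of xLUN remapping: a virtualization engine presents virtual
# LUNs (xLUNs) whose extents map onto physical LUNs from any back-end
# array it has virtualized.

class Extent:
    def __init__(self, backend_array, physical_lun, offset_gb, size_gb):
        self.backend_array = backend_array
        self.physical_lun = physical_lun
        self.offset_gb = offset_gb
        self.size_gb = size_gb

class VirtualLUN:
    """An xLUN stitched together from extents on back-end arrays."""
    def __init__(self, name):
        self.name = name
        self.extents = []

    def add_extent(self, extent):
        self.extents.append(extent)

    def size_gb(self):
        return sum(e.size_gb for e in self.extents)

    def locate(self, virtual_offset_gb):
        """Translate a virtual offset to (array, physical LUN, offset)."""
        base = 0
        for e in self.extents:
            if base <= virtual_offset_gb < base + e.size_gb:
                return (e.backend_array, e.physical_lun,
                        e.offset_gb + (virtual_offset_gb - base))
            base += e.size_gb
        raise ValueError("offset beyond xLUN size")

# An xLUN spanning a Clariion LUN and an AMS LUN: the host sees one
# 300 GB device, while the engine can layer on features (snaps,
# replication, tiering) regardless of what either back-end array
# natively supports.
xlun = VirtualLUN("xLUN_0001")
xlun.add_extent(Extent("Clariion_CX4", "LUN_17", 0, 100))
xlun.add_extent(Extent("HDS_AMS", "LUN_03", 0, 200))
print(xlun.size_gb())    # 300
print(xlun.locate(150))  # resolves into the AMS extent
```

Because every host I/O is resolved through this mapping, the engine can also change the mapping underneath a live host, which is exactly why migrations behind a virtualization engine become trivial.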

Two features every customer wishes they had right now: VAAI and policy-based Automated Tiering (including Sub-LUN Tiering).




Who Supports Storage Virtualization

There are several manufacturers that support Storage Virtualization today. Some of the leading storage virtualization arrays/engines include IBM SVC, EMC VPLEX, HP SVSP, HP P9500, and Hitachi USPV/VSP.



VAAI

Much the same can be said about VAAI (vStorage APIs for Array Integration), an amazing interface that VMware provides for its technology partners to offload rather intensive storage functions to the storage devices themselves, compared to the old approach where the VMware host performed these tasks. This means the storage processors need to be able to pick up the massive XCOPY, hardware-assisted locking, and block-zeroing operations.
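A rough sketch of why offloading the Full Copy (XCOPY) primitive matters: without VAAI, every block of a clone flows through the host; with it, the host sends one command and the array moves the data internally. This is an illustration of the concept only, with made-up block sizes, not VMware's actual API.

```python
# Compare host I/O counts for cloning a virtual disk with and without
# VAAI Full Copy (XCOPY). Numbers are illustrative.

BLOCK = 1  # MB per I/O, for the sake of the example

def host_based_copy(size_mb):
    """Legacy path: every block is read into the host and written back,
    consuming two host I/Os per block."""
    host_ios = 0
    for _ in range(size_mb // BLOCK):
        host_ios += 2  # one read from source + one write to target
    return host_ios

def vaai_xcopy(size_mb):
    """VAAI path: the host issues a single XCOPY command and the array
    moves all size_mb of data internally, off the host's data path."""
    return 1

# Cloning a 40 GB VMDK (40960 MB):
print(host_based_copy(40960))  # 81920 host I/Os
print(vaai_xcopy(40960))       # 1 host command
```

The same logic applies to the other two primitives: hardware-assisted locking replaces whole-LUN SCSI reservations with per-extent atomic operations, and Block Zeroing lets the array write zeros without the host sending them.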

Many storage vendors have already provided VAAI support, while many others have it on their roadmaps with releases planned over the next few months. EMC Clariion supported it on Day 1, 3Par similarly supports it with the 2.3.1 MU2 InForm OS, and HDS supported VAAI on the AMS platform on Day 1.


Automated Storage Tiering

Automated Storage Tiering is another great feature to have natively within storage arrays, but not every vendor supports it today.

Not all of your data needs to be on the fastest tier. As your application writes data, it can write to the fastest tier and then be demoted if it is not being used. Similarly, data that is frequently accessed can, based on policy, be moved up to a higher tier. In short, if you keep a good balance of SSDs and SATA drives, you should be able to keep all your applications happy, all your users happy, all your DBAs happy and, importantly, meet your SLAs.

So initially the idea was to offer this at a LUN level: create a policy, and if the LUN is busy based on the time of day or the month of the year, move it to a faster tier. Then followed the concept of Sub-LUN Tiering: the entire LUN does not need to be moved, because only a certain set of blocks, chunks, or pages is hot, and only those need to be promoted to a faster tier. This tremendously reduces operations on the arrays and keeps cache free.
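The policy loop described above can be sketched in a few lines: track I/O "heat" per chunk rather than per LUN, promote hot chunks to SSD, and demote cold ones back to SATA. The thresholds and chunk layout below are made-up policy knobs, not any vendor's defaults.

```python
# Minimal sketch of policy-based sub-LUN tiering.

PROMOTE_AT = 100  # I/Os per interval to earn an SSD slot
DEMOTE_AT = 10    # below this, the chunk falls back to SATA

def retier(chunks):
    """chunks: dict of chunk_id -> {'tier': str, 'heat': int}.
    Returns the list of (chunk_id, old_tier, new_tier) moves."""
    moves = []
    for cid, c in chunks.items():
        if c["tier"] == "SATA" and c["heat"] >= PROMOTE_AT:
            moves.append((cid, "SATA", "SSD"))
            c["tier"] = "SSD"
        elif c["tier"] == "SSD" and c["heat"] < DEMOTE_AT:
            moves.append((cid, "SSD", "SATA"))
            c["tier"] = "SATA"
        c["heat"] = 0  # reset the counters for the next interval
    return moves

# Only the hot chunk moves up and the cooled-off chunk moves down; the
# rest of the LUN stays put, which is the whole point of sub-LUN tiering.
lun = {
    "chunk_0": {"tier": "SATA", "heat": 450},  # hot database index
    "chunk_1": {"tier": "SATA", "heat": 3},    # cold archive data
    "chunk_2": {"tier": "SSD",  "heat": 2},    # cooled off, demote
}
moves = retier(lun)
print(moves)
```

A real array would run this on a schedule (the "time of day" policy mentioned above) and move the chunks as a background task so host I/O is not disrupted.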

Compellent is considered a market leader in Automated Storage Tiering and was the first to take it to market, followed by HDS, EMC, and 3Par. Not all storage vendors offer LUN tiering and Sub-LUN tiering with their storage platforms today.


Where am I going?

Well, for a second, let's think…

The storage environment you have today may not support all the features your applications and your business require, for example VAAI and Automated Storage Tiering (including Sub-LUN Tiering).

Why not place your physical storage assets behind these virtualization arrays/engines and start taking advantage of the native features offered within them, including VAAI and Automated Storage Tiering with Sub-LUN Tiering?

If you are anxiously waiting for features from your existing storage vendor (features that may be on the roadmap, or promised but never delivered), you do have the choice to look closely at Storage Virtualization and take advantage of these features without a major overhaul of your storage environment.


Storage Federation

July 27th, 2010



EMC’s latest addition to the concept of storage federation is the VPLEX announcement that happened at EMC World 2010. VPLEX comes in two forms today, VPLEX Local and VPLEX Metro. Important features of VPLEX include cache coherency, distributed data and active-active clusters at metro distances. In the works are VPLEX Geo and VPLEX Global enabling inter-continental distances.

VPLEX contains no drives today; it is based on a version of open-source Linux and runs on the same hardware as a VMAX engine. That said, what prevents EMC from including VPLEX as part of every VMAX and Clariion sold today, or perhaps running it as a virtual appliance within the VMAX (Enginuity) or Clariion (FLARE)?

HDS has a slightly different approach that yields almost the same result using Storage Virtualization: it approaches storage federation with its USPV platform. The USPV scales up to synchronous distances, I believe a maximum of 100 km today.

The USPV natively uses a combination of Universal Volume Manager (UVM), High Availability Manager (HAM), Dynamic Replicator, Hitachi TrueCopy Remote Replication, and Hitachi Replication Manager to do synchronous-distance (100 km) replication with distributed data in an active-active clustered environment.

VPLEX Local and VPLEX Metro are recent announcements, while the USPV has been offering similar features for the past few years now.



Service providers will likely be the largest customers while VPLEX is still being developed in its Geo and Global modes.

I would think government customers like DISA, the DoD, and other cloud providers in the federal space may find VPLEX and the USPV very interesting as well.

Migrations using both VPLEX Local and the USPV are a piece of cake because of their underlying features.

And many more…



Will the future of all storage subsystems have federation as a core component? With the virtualization technologies being designed and pushed today, it is most likely that we will see some of these features natively in back-end storage that can hold data in containers, with those containers moving based on requirements. Look at a VM as a container of information or an application.

With a front-end storage controller (call it a VPLEX or a USPV) that doesn't care what sort of disks sit behind it, you can natively add all the storage features to it: snaps, clones, RAID, replication, high availability, and virtualization. It then doesn't matter if you use the cheapest storage disks behind it.

Typically, with a single storage subsystem, you are looking at scaling your storage to 600 drives, or 2,400, or 4,096, or 8,192, or 16,384 drives max. Or does it even matter at this point?

Storage federation will allow a system to scale up to hundreds of petabytes of storage. For example, an EMC VPLEX scales up to 8,192 volumes today, while a USPV scales up to 247 PB of storage; in essence, that is roughly 247,000 disk drives at 1 TB each in a single (federated) system.

When you connect two of these VPLEXes, or two USPVs, at synchronous distances, you start taking advantage of active-active clusters (datacenters) with distributed data. (Again, I will be the first to say I am not sure how much cache coherency is built into the USPV today.)
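For readers unfamiliar with why cache coherency matters here, this is a rough sketch of the problem a federation engine must solve: when site A writes a block, site B's cached copy must be invalidated before B can serve a read. Write-invalidate is one common approach; this is a teaching toy, not a description of VPLEX or USPV internals.

```python
# Toy write-invalidate cache coherency between two sites sharing one
# distributed volume (the `backend` dict stands in for the backing store).

class SiteCache:
    def __init__(self, name, peers=None):
        self.name = name
        self.cache = {}          # block_id -> cached value
        self.peers = peers or []

    def write(self, block_id, value, backend):
        backend[block_id] = value
        self.cache[block_id] = value
        for peer in self.peers:  # invalidate stale remote copies
            peer.cache.pop(block_id, None)

    def read(self, block_id, backend):
        if block_id not in self.cache:  # miss -> fetch current data
            self.cache[block_id] = backend[block_id]
        return self.cache[block_id]

backend = {}
a, b = SiteCache("A"), SiteCache("B")
a.peers, b.peers = [b], [a]

a.write("blk0", "v1", backend)
print(b.read("blk0", backend))   # v1
b.write("blk0", "v2", backend)   # invalidates A's cached copy
print(a.read("blk0", backend))   # v2, not the stale v1
```

In the real world those invalidations cross the inter-site link, which is why latency at synchronous distances, and cache size, dominate the design questions raised below.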

But that brings us to some important questions…



Is storage federation that important?

Is storage federation the future of all storage?

Do you care about active-active datacenters?

What is the use-case for federation outside of service providers?

Will this technology shape the future of how we do computing today by leveraging and pooling storage assets together for a higher availability and efficiency?

How large a single namespace would you like to have? I believe HP IBRIX brings a similar concept, scaling storage to 16 PB total in a single namespace.

Does federation add latency, which limits its usage to only certain applications?

Is VPLEX the future of all EMC storage controller technology, and will that eliminate the Flare or Enginuity code?

If you add a few disk drives to the VPLEX locally, can it serve high demand IOPS applications?

How large will cache get on these storage controllers to minimize the impact of latency and cache coherency on devices at synchronous distances? Is PAM or Flash Cache the answer?

At that point, does it matter if you can couple your systems to extend them, the way we initially thought the VMAX would have 16 or 32 engines, or perhaps couple Clariion SPs, AMSes, or USPVs?


More Questions

Will the future VPLEX look like a USPV with local disk drives attached?

Though the big vision of VPLEX is global replication creating active-active datacenters, will the next generation of VBlocks meant for the service-provider market include a VPLEX natively within it?

Is EMC Virtual Storage just catching up to HDS technology? Or is the VPLEX vision a big and unique one that will change the direction of EMC storage in the future?

Is storage federation game-changing, and is EMC ahead of HDS, or HDS ahead of EMC?



May 17th, 2010

The stack wars continue…..


VCE (VMware, Cisco, EMC), the Virtual Computing Environment coalition, was born back in November 2009. We have heard a lot about VBlocks in the blogosphere; now the same technology is making it onto major podcasts and live social-media video streams, and significantly, many customer discussions are focused on offering IT as a Service, which is enabled by VBlocks.

Customers saw a vision and a story behind VBlocks and EMC's Private Cloud initiative at EMC World 2010. Right before the conference, Michael Capellas took the reins at Acadia to run this joint venture, majority-held by Cisco and EMC. The Journey to the Private Cloud (the principal theme of EMC World 2010), based on VPLEX and VBlocks (the so-called next-generation virtualized, mobilized, private-cloudized, efficiency-driven datacenters), brings the theme of data anywhere, anytime.

VBlocks can best be defined as an out-of-the-box, ready-to-deploy offering from the VCE Alliance; the alliance will also offer IT services around VBlock deployment and orchestration.


Running at its core is virtualization (VMware, not Hyper-V), and there are three versions available in the market today:

  • VBlock 0: UCS Blades, EMC Celerra, Cisco MDS Switches, Virtualized Environment
  • VBlock 1: UCS Blades, EMC Clariion CX4, Cisco MDS Switches, Virtualized Environment
  • VBlock 2: UCS Blades, EMC VMAX, Cisco Nexus Switches, Virtualized Environment

All these options are now available with EMC VPLEX Local and VPLEX Metro.


The next generation of VBlocks should be custom-tailored solutions, not the preconfigured, prepackaged boxes (0, 1, and 2) available today.

Is it about meeting customer requirements…..or pushing BLOCKS…..


VBlock from storagenerve on Vimeo (Video from EMC World 2010 at the VBlocks Booth).


Other stacks being built:

  • Following in these footsteps is HDS UCP, a custom converged environment already announced and expected to be GA'ed later in the year.
  • Similarly, HP's convergence offering includes HP Blades, HP EVAs, and HP Virtual Connect with a Hyper-V or VMware virtualized environment in the box. This offering is more customized than a prepackaged solution.


Where is it going..

As much as we talk about the stack wars, and as much as we agree and disagree, the truth is that this is the platform of tomorrow, given its advantages…

It's about manageability, usability, ease of deployment, easy serviceability, enablement of IT as a Service, one throat to choke, tested compatibility, a single dashboard (ease of configuration and alerts, a single-pane-of-glass view), chargebacks for IT services and, importantly, high efficiency and the ability to run defined workloads.

There are drawbacks to VBlocks and Seve…… I may continue this in a blog post some other time…


GestaltIT Tech Field Day 2010: VBlocks Presentation

April 13th, 2010

This was surely the most debated discussion/presentation at the GestaltIT Tech Field Day 2010 in Boston, MA. Both rock stars from the VCE team (Scott Lowe and Ed Saipetch) presented and did an amazing job with this topic.

Though I see a lot of value in the whole concept of VBlocks (VCE) for the journey to the private cloud, and as a means to compete with the Oracles, Dells, IBMs, and HPs of the world, many in the crowd did not buy into it and thought it was more of a marketing package without the necessary meat…

I am composing a post on VCE VBlocks for release later this week, where I will highlight many pros and cons of this technology based on what we heard and where we see the VBlock architecture going.

I asked both Cisco and EMC, during the UCS and VBlocks presentations, how many customers are running UCS and VBlocks in production environments today; unfortunately, I got no answers. The three large customers I know of today are practically using it in pre-production/test/development environments.

Here is the reaction from twitterville during and after this presentation.

Download in PDF Format..


Vblock Presentation at Tech Field Day from storagenerve on Vimeo.