Storage Federation


..

VPLEX and USPV

EMC’s latest addition to the concept of storage federation is VPLEX, announced at EMC World 2010. VPLEX comes in two forms today: VPLEX Local and VPLEX Metro. Important features of VPLEX include cache coherency, distributed data and active-active clusters at metro distances. In the works are VPLEX Geo and VPLEX Global, which will enable inter-continental distances.

VPLEX contains no drives today; it is based on a version of open-source Linux and runs on the same hardware as a VMAX engine. That said, what prevents EMC from including VPLEX as part of every VMAX and Clariion sold today, or maybe just running it as a virtual appliance within the VMAX (Enginuity) or Clariion (Flare)?

HDS takes a slightly different approach that yields almost the same result, using storage virtualization in its USPV platform. The USPV scales up to synchronous distances, I believe a maximum of 100 km today.

USPV natively uses a combination of Universal Volume Manager (UVM), High Availability Manager (HAM), Dynamic Replicator, Hitachi TrueCopy Remote Replication and Hitachi Replication Manager to do synchronous-distance (100 km) replication with distributed data in an active-active clustered environment.

VPLEX Local and VPLEX Metro are recent announcements, while the USPV has been offering similar features for the past few years.

..

Use-Case

Service providers will likely be the largest customers while VPLEX is still being developed in its Geo and Global modes.

I would think government customers like DISA, DoD and other cloud providers in the federal space may find VPLEX and USPV very interesting as well.

Migrations using both VPLEX Local and USPV are a piece of cake because of their underlying features (a sketch of the general idea follows at the end of this section).

And many more…
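
To illustrate why such migrations become straightforward behind a virtualization layer, here is a minimal, purely conceptual Python sketch with hypothetical class names (it is not the actual VPLEX or USPV migration mechanism): the host keeps addressing the same virtual volume while the federation layer copies blocks to a new backend array and then cuts over transparently.

```python
# Conceptual illustration only -- hypothetical class names, not a VPLEX or USPV API.

class BackendArray:
    """Any physical array sitting behind the federation layer."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}                 # block number -> data

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks[block]


class VirtualVolume:
    """Host-visible volume whose physical backing can change underneath."""
    def __init__(self, backend):
        self.backend = backend
        self.mirror = None               # migration target, when one is active

    def write(self, block, data):
        self.backend.write(block, data)
        if self.mirror:                  # keep the target in sync mid-migration
            self.mirror.write(block, data)

    def read(self, block):
        return self.backend.read(block)

    def migrate_to(self, target):
        self.mirror = target
        for block, data in list(self.backend.blocks.items()):   # background copy
            target.write(block, data)
        self.backend, self.mirror = target, None                # transparent cutover


old, new = BackendArray("old-array"), BackendArray("new-array")
vol = VirtualVolume(old)
vol.write(0, "app data")
vol.migrate_to(new)                      # host I/O never stops or re-points
assert vol.read(0) == "app data" and vol.backend is new
```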

..

Future

Will the future of all storage subsystems have federation as a core component? With the virtualization technologies being designed and pushed today, it is likely that we will see some of these features natively in backend storage, where data is held in containers and those containers move based on requirements. Think of a VM as a container for information or an application.

With a front-end storage controller, call it a VPLEX or a USPV, that doesn’t care what sort of disks sit behind it, you can natively add all the storage features to the controller itself: snaps, clones, RAID, replication, high availability, virtualization. At that point it doesn’t matter if you use the cheapest storage disk behind it.
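
As a purely conceptual sketch of that layering idea (made-up names, no vendor specifics implied), the value-add features live in the front-end controller, so the backing store underneath can be anything:

```python
# Conceptual sketch of feature layering in a front-end controller.
# Hypothetical names only; not VPLEX, USPV, Enginuity or Flare code.

class FrontEndController:
    """Presents volumes and adds features regardless of the disks behind it."""
    def __init__(self, backend):
        self.backend = backend           # any block store: dict of block -> data
        self.snapshots = {}

    def write(self, block, data):
        self.backend[block] = data

    def snapshot(self, name):
        # Point-in-time copy taken by the controller, not by the disks.
        self.snapshots[name] = dict(self.backend)

    def restore(self, name):
        self.backend.clear()
        self.backend.update(self.snapshots[name])


ctrl = FrontEndController(backend={})    # the "cheapest disk" imaginable behind it
ctrl.write(0, "v1")
ctrl.snapshot("before-change")
ctrl.write(0, "v2")
ctrl.restore("before-change")
assert ctrl.backend[0] == "v1"
```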

Typically, with a single storage subsystem you are looking at scaling your storage to 600 drives, or 2,400, or 4,096, or 8,192, or 16,384 drives at most; or does it even matter at this point?

Storage federation will allow a system to scale to hundreds of PB of storage. For example, an EMC VPLEX scales up to 8,192 volumes today, while a USPV scales up to 247 PB of storage; in essence, that is 1 TB x 592,000 disk drives in a single (federated) system.

When you connect two of these VPLEX systems, or two USPVs, at synchronous distances, you start taking advantage of active-active clusters (datacenters) with distributed data. (Again, I will be the first to say I am not sure how much cache coherency is built into the USPV today.)
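
To make cache coherency between two active sites a little more concrete, here is a toy write-invalidate sketch in Python. It is illustrative only, with hypothetical names, and makes no claim about how VPLEX or USPV actually implement coherency: a write at either site invalidates the peer’s cached copy, so the next read at the other site fetches current data.

```python
# Toy write-invalidate coherency between two active sites.
# Illustrative only; not the actual VPLEX or USPV coherency protocol.

class Site:
    def __init__(self, name, shared_store):
        self.name = name
        self.cache = {}                  # block -> locally cached data
        self.store = shared_store        # the distributed/shared backend
        self.peer = None                 # the other site

    def read(self, block):
        if block not in self.cache:      # cache miss: fetch from the store
            self.cache[block] = self.store[block]
        return self.cache[block]

    def write(self, block, data):
        self.store[block] = data
        self.cache[block] = data
        self.peer.invalidate(block)      # coherency message over the inter-site link

    def invalidate(self, block):
        self.cache.pop(block, None)      # drop the stale copy


store = {0: "initial"}
site_a, site_b = Site("A", store), Site("B", store)
site_a.peer, site_b.peer = site_b, site_a

site_b.read(0)                           # site B caches the block
site_a.write(0, "updated")               # site A writes; B's copy is invalidated
assert site_b.read(0) == "updated"       # B re-reads current data
```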

But that brings us to some important questions…

..

Questions

Is storage federation that important?

Is storage federation the future of all storage?

Do you care about active-active datacenters?

What is the use-case for federation outside of service providers?

Will this technology shape the future of how we do computing today by leveraging and pooling storage assets together for higher availability and efficiency?

How large a single namespace would you like to have? I believe HP IBRIX brings a similar concept, scaling storage to 16 PB total in a single namespace.

Does federation add latency, which limits its usage to only certain applications? (A rough distance-latency estimate follows at the end of these questions.)

Is VPLEX the future of all EMC storage controller technology, and will that eliminate the Flare or Enginuity code?

If you add a few disk drives to the VPLEX locally, can it serve high demand IOPS applications?

How large will cache get on these storage controllers to minimize the impact of latency and cache coherency on devices at synchronous distances? Is PAM or Flash Cache the answer?

At that point, does it matter whether you can couple your systems to extend them, the way we initially thought the VMAX would scale to 16 or 32 engines, or maybe couple Clariion SPs, AMS systems or USPVs?
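
On the latency question above, here is a back-of-the-envelope estimate (assuming a signal speed of roughly 200,000 km/s in fibre and ignoring switch and protocol overhead) of the round-trip delay a synchronous write pays over distance:

```python
# Rough propagation-delay estimate for a synchronous write over distance.
# Assumes ~200,000 km/s signal speed in fibre; ignores equipment overhead.

def round_trip_ms(distance_km, fibre_speed_km_per_s=200_000):
    """Round-trip light-in-fibre time in milliseconds."""
    return 2 * distance_km / fibre_speed_km_per_s * 1000

for d in (10, 100, 1000, 5000):
    print(f"{d:>5} km  ->  ~{round_trip_ms(d):.1f} ms added per synchronous write")

# ~1 ms at 100 km, tens of ms at continental distances -- which is why
# Geo/Global distances push designs toward asynchronous replication.
```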

..

More Questions

Will the future VPLEX look like a USPV with local disk drives attached?

Though the big vision of VPLEX is global replication creating active-active datacenters, will the next generation of Vblocks meant for the service-provider market include a VPLEX natively within it?

Is EMC Virtual Storage just catching up to HDS technology? Or is the VPLEX vision a big and unique one that will change the direction of EMC storage in the future?

Is storage federation game-changing, and is EMC ahead of HDS, or HDS ahead of EMC?

..

  • JohnDias

    How does this differ from what IBM has been offering with SVC for years now?

  • storagenerve (http://storagenerve.com)

    Hi John, I am no expert on SVC, but from what I have been hearing, SVC has limitations and presents quite a few challenges with being able to scale out, and it is limited to a certain number of volumes associated with virtualization.

    Devang

  • Ken Wood

    Hi Devang! I enjoyed this blog and the comparison of storage virtualization technology between HDS’ USP-V and EMC’s VPLEX. In fact, I’ve made it the topic of my recent blog at,

    http://blogs.hds.com/michael/2010/08/talking-about-storage-virtualization.html#more-3718

    You are asking the right questions and I’ve responded to a few of them directly in my blog and one of them in a response to a comment from Nigel. I may actually address a few more of your questions in a future blog. Great stuff and keep up the good work.

    Best regards,
    Ken Wood with HDS