Posts Tagged ‘VMWare’

Hitachi VSP (Virtual Storage Platform) & Command Suite 7 – Technology, Comparisons, Architecture

September 29th, 2010

A deep dive on VSP technology, an overview of Command Suite 7, the technology within the VSP, a comparison of VSP vs. USPV, some architecture discussion, and the marketing message. Along with this discussion, also see the architecture block diagrams and videos from the event.

The Announcement

Hitachi and its US subsidiary, Hitachi Data Systems, announced their next-generation storage platform on September 27th, 2010. The proven storage virtualization technology that surfaced back in 2007/2008 is now offered in the latest platform, code-named "Victoria" and officially called VSP – Virtual Storage Platform.

Though I do not want to speculate too much on the naming, VSP – Virtual Storage Platform is a name relevant to the technology. But was the name VSP somehow influenced by the name VMAX (Virtual Matrix)?

The same day, HP also announced its P9500 storage platform, a rebrand of the Hitachi system with an HP logo and HP management software. The HP version of the VSP (P9500) looks very attractive compared to the Hitachi version.

I wonder whether the HP-3Par acquisition will put some pressure on the OEM relationship between HP and Hitachi Ltd, Japan, since the game would essentially be competing in the same market space now. Though to my understanding, 3Par does not offer mainframe support with its storage, as Hitachi does today with FICON.

But do not be deceived by the name or the looks: the technology that the VSP brings to datacenters (or rather, virtual datacenters) is revolutionary and will help customers build more resilient and efficient environments.

Hitachi VSP at Hitachi Information Forum in Santa Clara, CA

The VSP cabinet is green, indicating a step forward towards a highly energy-efficient system. As datacenters become completely virtualized, with computing environments unconstrained by geographical boundaries, storage virtualization becomes key to keeping these environments resilient, scalable, manageable, and small in footprint.

Victoria was the code name for the VSP; during our last visit to HDS at Geek Day 0.9 in Santa Clara, we were given some hints about this project. During our visit to Odawara, Japan, Hitachi, along with its US subsidiary HDS, gave us a preview of the VSP and its underlying technology and architecture: a one-for-all platform for block, file, and object storage. Though we did not blog about the technology discussions that took place in Japan, they were highly focused on the engineering and architecture aspects of the VSP.

The VSP brings architectural enhancements, added flexibility, reduced footprint, faster response times, reduced management overhead, concepts of storage economics, and more, natively within the platform.

It is expected that the VSP will also be the core storage platform in HDS's UCP (Universal Compute Platform), along with Hitachi Symphony servers, a networking partner (****), and Microsoft Operations Manager as its orchestration software.

Nigel Poulton also has a deep-dive post on the VSP, and it is very technical in nature.


The Marketing Message

The core messaging behind the VSP platform is 3D scaling: Scale Out, Scale Deep, and Scale Up. In the past we have seen blog posts from Hu Yoshida and Michael Hay about 3D Cartesian scaling and its effects on storage platforms.

Some additional pitches from HDS on the VSP include providing a Virtualized, Automated, Cloud-Ready, and Sustainable platform. Though I do not necessarily understand what Cloud-Ready means; the messaging around cloud was notably missing during the Hitachi Information Forum.

Scale Up refers to the tightly coupled storage environment that is easy to expand and manage.

Scale Out refers to priority queues, dynamic allocation of resources, and a system that helps customers expand as business needs and workloads change.

Scale Deep refers to the Storage Virtualization piece that allows a single VSP system to grow using external storage through centralized management to more than 255 PB of data.


The Technology

Storage virtualization is a great technology, and its benefits are being seen around the industry today. Manufacturers that did not have this technology a few years ago are all jumping in now. Talking to customers about managed services businesses and seeing the value storage virtualization brings to the table with technologies like Hitachi USPV, IBM SVC, HP SVSP, EMC VPLEX, and now Hitachi VSP is pretty phenomenal.

With the VSP, Hitachi also introduced 2.5-inch SAS II drives, reducing the footprint substantially. With a 2048-drive system, customers are typically looking at 6 standard cabinets, versus an EMC VMAX that may use 10 cabinets for the same number of drives. The largest drive supported on the VSP today is 1 TB.

Along with the added drive count, the storage virtualization technology enables 255 PB of storage behind a VSP, essentially 255,000 x 1 TB drives in a single federated storage system.
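To put the federation numbers in perspective, here is a quick back-of-the-envelope calculation (a sketch only; the 255 PB ceiling and 1 TB drive size come from the announcement, and I use decimal marketing units rather than binary ones):

```python
# Back-of-the-envelope: how many 1 TB drives make up 255 PB of
# federated capacity, using decimal (marketing) units.
TB_PER_PB = 1000

max_external_pb = 255     # VSP external (virtualized) capacity ceiling
drive_size_tb = 1         # largest drive supported at launch

drives_needed = max_external_pb * TB_PER_PB // drive_size_tb
print(drives_needed)      # 255000 drives behind a single VSP
```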

After the leap by EMC into the Intel architecture with its enterprise storage system VMAX early last year, Hitachi is the next storage manufacturer to take advantage of the great engineering work Intel is doing for enterprise computing. Along with the Intel Xeon CPUs on the Virtual Storage Directors, Hitachi also uses its own ASICs on the controllers for specialized functions within the VSP.

The number of host-connectivity ports has also been doubled with the VSP, and power consumption is substantially lower; the numbers suggest energy savings in the range of 40% to 50%.

The VSP also enables XTS-AES 256-bit encryption of data as it is written to disk. This feature more than likely comes from a third-party plug-in; it will need to be purchased and enabled through software keys within the VSP.


One size fits all (Scale Up, Scale Down)

As you are aware, the USPV came in two flavors, the USPV and the USPVM. If a customer had invested in a USPVM and business demands (application requirements, IOPS, workloads) increased, the only option might have been to purchase another system. EMC has similar offerings in this space with its VMAX and VMAX-SE frames. The VSP goes back to basics: purchase a system and expand it based on your needs, without the necessity of purchasing a new one.


The Architecture

With this new generation of storage virtualization technology just brought to market by Hitachi, there are notable differences from its predecessor, the USPV.

While the VMAX today offers 128 cores, the VSP starts at 32 Cores, but using Storage Virtualization, you can add thousands of Cores behind it.

Okay, just an example….

One of the large financial houses on the panel at the Hitachi Information Forum virtualizes DMX-4s behind USPVs today. If a VSP supports 255,000 drives, you could practically have 106 fully populated (2,400-drive) VMAX systems behind one VSP.

Since manufacturers leverage technology and its inner workings in different ways, a side-by-side comparison of VMAX and VSP may not be entirely fair.


Still, I want to point out the differences between the VSP and USPV technologies relating to architecture and configuration.

Hitachi VSP vs USPV

| Feature | VSP Technology | USPV Technology |
|---|---|---|
| Name | VSP: Virtual Storage Platform | USPV: Universal Storage Platform – Virtualization |
| Cabinets | Min: 1; Max: 6 (2 systems) | 5 |
| Drives (2.5-inch SAS) | Min: 0 (external storage only); Max: 2048 x 2.5-inch SAS II | 1152 x 3.5-inch FC |
| Drives (3.5-inch SAS) | Min: 0 (external storage only); Max: 1280 x 3.5-inch SAS II | 1152 x 3.5-inch FC |
| Federation | Min: single system; Max: two systems tightly coupled via the Hitachi Star fabric over PCIe | N/A |
| External storage (federation – virtualization) | Max: 255 PB | Max: 247 PB |
| External storage (federated drives) | 255,000 x 1 TB drives | 247,000 x 1 TB drives |
| Internal storage | 2 PB | 1 PB |
| Processors | Intel quad-core CPUs plus ASICs on FED/BED | ASICs |
| Single controller (system) | Min: 1 cabinet; Max: 3 cabinets | 5 cabinets |
| Virtual Storage Director blades | 4 cores per blade; Min: 2 blades (8 cores); Max: 8 blades (32 cores) | N/A |
| FED (Front End Directors) | 8 ports per FED; Min: 2 FEDs (16 ports); Max: 24 FEDs (192 ports) | 112 ports |
| FED port speed | 8 Gbps | 4 Gbps |
| FED port types | Min: 16 FC ports (8 Gb); Max: 192 FC ports (8 Gb) or 192 FICON ports | 224 FC (4 Gb); 112 FC (8 Gb); 112 FICON |
| FCoE | Not supported in release 1; support may follow shortly | Not supported |
| iSCSI | Not supported | Not supported |
| InfiniBand | Not supported | Not supported |
| BED (Back End Directors) | 4 SAS links per BED; Min: 0 (external storage only); Max: 64 BEDs (64 SAS links across two federated VSPs) | 64 FC loops (half-duplex AL) |
| BED speed | 6 Gbps SAS | 4 Gbps FC-AL |
| Cache | 32 GB or 64 GB adapters; Min: 2 (64 GB); Max: 16 (1024 GB) | 512 GB |
| Cache protection | Flash plus battery | Large batteries |
| Power consumption | 30 kW (single phase) at 1024 drives | 33.2 kW (three phase) |
| Automated dynamic tiering | LUN and sub-LUN level | LUN level |
| Secure multitenancy | Yes | No |
| VAAI (vStorage API) support | Yes (expected late 2010) | No |
| Native within UCP (expected) | Yes | No |
| Drive form factors | 2.5-inch and 3.5-inch | 3.5-inch only |
| SATA drive support | Yes | Yes |
| SAS/SATA intermix | Yes | SAS not supported |
| FC/SATA intermix | FC not supported | Yes |
| Cache mirroring | Write cache mirroring | Write cache mirroring |
| Command Suite 7 | Supported | Supported |
| SSD | Supported | Supported |
| Predecessor | USPV | USP |
| Released | Sep 2010 | 2006/2007 |
| Operating system | BOS / BOS V | BOS / BOS V |
| LUN-based tiering | Yes | Yes |
| Sub-LUN tiering | Yes | No |
| Page size | 42 MB | 42 MB |
| Large batteries | No | Yes |
| Drive format | ??? bytes (expecting 520) with 8-byte ECC | ??? bytes (expecting 520) with 8-byte ECC |
| Microcode | Runs on Virtual Storage Directors | Runs on FED/BED |
| Racks | 19-inch, 42U | 19-inch |
| Airflow | Hot/cold aisle | Hot/cold aisle |
| RAID | Mirroring, RAID 5, RAID 6 | Mirroring, RAID 5, RAID 6 |
| Cooling fan noise | Noise reduction | None |
| Cooling fan speeds | 3 speed levels | 1 speed level |
| Control memory | On Virtual Storage Directors | On FED/BED |
| Cabinet color | Green | Bluish/purple |
| How purchased | Sold with controllers only, or with X number of drives, scalable to 2048 drives | USPVM sold with controllers only; USPV sold with controllers and drives; USPVM cannot be upgraded to a USPV |


Cabinet Numbering and Structure

Below is how two VSP systems are coupled together using the Hitachi Star switch (a PCIe interconnect), which expands two VSPs into a single system scalable to 2048 drives with 1024 GB of cache.

| Cab 12 | Cab 11 | Cab 1 | Cab 0 | Cab 1 | Cab 2 |
|---|---|---|---|---|---|
| Drives only | Drives only | Controller 1 + drives | Controller 2 + drives | Drives only | Drives only |

Each system (VSP Controller Unit) includes

4 x Virtual Storage Director,

8 x Data Cache Adapter,

8 x Front End Directors

4 x Back End Directors

4 x Grid Switch

2 x Drive Chassis in the Controller Cabinet

3 x Drive Chassis in each Drives-only Cabinet

8 Drive Chassis in total

Each Drive Chassis supports 128 drives (SAS)


FRONT of the UNIT includes

4 Data Cache Adapters

4 Virtual Storage Directors

4 Data Cache Adapters

Drive bays have FANs in the front of the unit



BACK of the UNIT includes

4 Front End Directors

2 Back End Directors

4 Data Grid Switches

2 Back End Directors

4 Front End Directors


Virtual Storage Director (The Brains behind the VSP)

There are 4 Virtual Storage Directors in each system

Each Virtual Storage Director has 4 Cores

You can have 16 Cores per system

These processors manage the internal workings of the VSP, including LUNs, eLUNs, addressing, data mapping, the Virtual Partition Manager, the layered software interface, references, internal SAS drives, and operational control data memory.

You can expand this to 32 cores using the Hitachi PCIe data grid switch, three additional cabinets, and a second controller.


Control Memory

Serves as L2 cache

On Virtual Storage Directors

Responsible for managing and maintaining Metadata, mappings, etc


Data Cache (Global Cache)

Primarily used as Cache for read/write

Caches data during read operations from BED, similarly caches data from FED for write operations

Only write data is mirrored in cache (unlike the VMAX, where all cached data is mirrored)

1024 GB of total cache for two tightly coupled VSPs

Read operations only require one copy of cached data

Cache is backed up to onboard flash, reducing the number of batteries needed
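As a rough illustration of the write-mirrored / read-unmirrored behavior described above, here is a toy model (the class and method names are my own invention, not Hitachi's, and two dicts stand in for the cache adapter boards):

```python
# Toy model of VSP-style cache behavior: write data is mirrored
# across two cache adapters, read data is cached only once.
class DataCache:
    def __init__(self):
        self.adapter_a = {}   # cache adapter board (mirror copy 1)
        self.adapter_b = {}   # cache adapter board (mirror copy 2)

    def cache_write(self, block_id, data):
        # Writes are mirrored so a cache-board failure loses no dirty data.
        self.adapter_a[block_id] = data
        self.adapter_b[block_id] = data

    def cache_read(self, block_id, data):
        # Read data can always be re-fetched from disk, so one copy suffices.
        self.adapter_a[block_id] = data

    def copies(self, block_id):
        return sum(block_id in a for a in (self.adapter_a, self.adapter_b))

cache = DataCache()
cache.cache_write("blk1", b"dirty")   # mirrored: 2 copies
cache.cache_read("blk2", b"clean")    # unmirrored: 1 copy
print(cache.copies("blk1"), cache.copies("blk2"))  # 2 1
```

The design trade-off this models: mirroring only dirty (write) data halves the cache consumed by reads while still protecting data that exists nowhere else yet.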



Though the Virtual Storage Directors use Intel quad-core processors, the BEDs and FEDs use special-purpose ASICs for I/O operations, which enables much better and more flexible data movement and the associated performance.

Back End Directors

Front End Directors


Grid Switch

Unlike the VMAX, which uses RapidIO to couple its engines, Hitachi uses its custom-designed Star fabric to tightly couple the internal network that manages data, including the drives, Virtual Storage Directors, data cache, BEDs, and FEDs. This switch also connects two VSPs together over PCIe at the CPU level to form a 6-cabinet, 2048-drive system.


Dynamic Automated Tiering (Sub LUN Tiering)

Once a disk is assigned to the VSP, whether native within the VSP or externally virtualized (an eLUN), the VSP will use it for Dynamic Automated Tiering. With the announcement of the VSP, HDS is also adding policy-based sub-LUN tiering to this platform, allowing automated data movement in 42 MB pages.

Dynamic Automated Tiering ships on day one with the VSP platform. Think of sub-LUN tiering as a technology that moves data in near real time based on policies set up in the environment. As a page's heat index rises, the underlying technology automatically promotes just that page to a higher tier.
If, over time, a page falls on the heat index chart, its data is moved to a lower tier. This technology brings more efficiency into environments: it does not require all your data to sit on one tier, and it saves you from using expensive SSDs for everything.
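The promote/demote behavior can be sketched roughly like this (a simplified illustration only; the tier names, thresholds, and heat metric are invented for the example, since Hitachi's actual policy engine is not public):

```python
# Simplified sketch of policy-based sub-LUN tiering: 42 MB pages
# are promoted or demoted between tiers based on a heat index.
PAGE_SIZE_MB = 42
TIERS = ["SSD", "SAS", "SATA"]   # fastest to slowest
PROMOTE_ABOVE = 100              # accesses per interval (invented threshold)
DEMOTE_BELOW = 10                # accesses per interval (invented threshold)

def retier(page_tier, heat):
    """Return the new tier for a page given its current tier and heat index."""
    idx = TIERS.index(page_tier)
    if heat > PROMOTE_ABOVE and idx > 0:
        return TIERS[idx - 1]    # promote the hot page one tier up
    if heat < DEMOTE_BELOW and idx < len(TIERS) - 1:
        return TIERS[idx + 1]    # demote the cold page one tier down
    return page_tier             # heat is moderate: leave the page alone

print(retier("SAS", 500))   # SSD  (hot page promoted)
print(retier("SAS", 2))     # SATA (cold page demoted)
print(retier("SSD", 50))    # SSD  (unchanged)
```

The key point the sketch captures is that the decision is made per 42 MB page, not per LUN, so only the hot fraction of a volume ever consumes SSD capacity.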

Again, this offering as it stands today is unique in the industry; other vendors are moving toward it or have it on their roadmaps.


VMware, SMT, Cloud

On day 1, the VSP will not support VAAI, the vStorage API that offloads VMware-related tasks locally to the storage controllers, bringing added flexibility to virtualized environments.

But VAAI support is on the roadmap for the VSP within the next 45 to 60 days, as the first code release rolls out.

The VSP also claims to have SMT (secure multitenancy) built into the architecture, allowing the system to operate in virtual partitions that include cache and host ports. I am still not sure how the VSP manages audits, resource management, and so on within an SMT environment. In an SMT environment, encryption becomes a very viable offering, where each tenant could have its own individual key; the VSP natively supports 32 different keys.


Some Observations

There is very noticeable product differentiation from the previous-generation Hitachi USPV. Improvements are visible in I/O, tight coupling of systems, SAS drives, drive speeds, port speeds, and data routing. Expansion of external storage to 255 PB total and internal drives to 2048, with reduced floor space and a hot/cold-aisle-friendly cabinet design, shows the move toward next-generation thinking.

No Total cache mirroring

No VAAI support day 1

Cloud ready message needs to be more refined

The product seems to miss the flashiness

Having the front bezel green in color is a great message.

Cables in the cabinet are color coded, possibly by loop; the colors are black, grey, and white and are very visible.



HDS Command Suite 7 for VSP, AMS and USPV

Along with the announcement of the VSP, HDS also announced its next-generation SRM tool, Command Suite 7, which manages the VSP along with its predecessors the USPV and USP, as well as its mid-tier storage platform, the AMS.

The message driving Command Suite, like the VSP platform, is 3D Cartesian scaling: Manage Up, Manage Out, and Manage Deep.

The Command Suite 7 is being compared to solutions from other vendors in this space. Along with managing the Hitachi systems at an element manager level, Command Suite 7 also offers heterogeneous storage support for other vendors.

It seems Command Suite 7, or elements of it, might end up within the UCP announcement likely to happen in early 2011, which will offer an integrated solution stack from HDS.

Command Suite 7 is a move toward managing and discovering all storage, whether virtualized behind a VSP or non-virtualized but in the environment, along with VMware hosts, virtual machines, switches, and hypervisors; it aims to serve as an infrastructure monitoring tool.


Marketing Messaging

Manage Up: References the ability to manage Hitachi environments and enable automated dynamic tiering. Command Suite 7 also boasts management of 255 PB of storage and 5 million objects through a single installed instance.

Manage Out: A single solution for management, whether file, block, or object storage. Along with the ability to manage VM hosts, VMs, and applications, it also enables the management of heterogeneous storage. Command Suite is not expected to replace the existing native element managers in your storage environment. It is also expected that within Hitachi storage, Command Suite 7 will be able to manage objects without agents being deployed in the environment.

Manage Deep: Through reporting, a single-pane-of-glass view, capacity monitoring, and performance monitoring, Command Suite 7 enables granular management of your storage environment, adding automation and reducing the operations needed to accomplish required tasks.


Automated Dynamic Tiering

Hitachi introduced sub-LUN-level tiering with its VSP and Command Suite 7 offering. The automated tiering works at either the file/object or block level. Through a policy-based engine, any of these can be migrated to a lower or higher tier based on SLAs, performance, time of day, application requirements, or cost.

With sub-LUN tiering, Hitachi moves data in 42 MB pages, a standard size within its storage environments, with each page promoted or demoted based on policy.

Sub-LUN-level tiering enables only the hot blocks of data or a file to be moved, rather than an entire LUN. By default, all new incoming data is written to the highest-performance tier first and gradually gets demoted to lower tiers as activity on it declines; all of this can happen at the LUN level or the sub-LUN level.


All Inclusive

The Command Suite 7 ships as a BOS (Basic Operating System) or BOS V (Basic Operating System V), which includes the following Software modules as part of the offering.

Hitachi Device Manager (BOS)

Hitachi Universal Volume Manager (BOS V)

Hitachi Dynamic Link Manager Advanced (BOS / BOS V)

Hitachi Dynamic Provisioning (BOS / BOS V)

Hitachi Dynamic Tiering (BOS / BOS V)

Hitachi Command Director (BOS / BOS V)

Hitachi Storage Capacity Reporter (BOS / BOS V)

Hitachi Tiered Storage Manager (BOS / BOS V)

Hitachi Tuning Manager (BOS / BOS V)

Hitachi Virtual Server Reporter (BOS / BOS V)

** I have no idea how Command Suite is licensed today, but I would think the pricing for BOS and BOS V differs, possibly with a base price plus a per-TB licensing cost.
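If the base-plus-per-TB guess above is right, the pricing would work something like this (every number here is made up purely to illustrate the model; the actual Command Suite price list is unknown to me):

```python
# Hypothetical base + per-TB licensing model (all figures invented
# for illustration; not actual HDS pricing).
def license_cost(capacity_tb, base, per_tb):
    return base + capacity_tb * per_tb

# Illustrative only: assume BOS V carries a higher base and per-TB rate.
bos_cost   = license_cost(500, base=20_000, per_tb=50)
bos_v_cost = license_cost(500, base=35_000, per_tb=80)
print(bos_cost, bos_v_cost)   # 45000 75000
```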


Security, SMT, Cloud

Along with the discussion above around VSP in SMT and Cloud environment, Command Suite 7 offers additional benefits towards applications being hosted in a cloud environment.

Through industry-standard implementations of Active Directory, RADIUS, and the like, storage managers, administrators, backup admins, and others are authenticated in the environment.

Through virtual partitioning, resources can be allocated into partitions and managed only by those with the correct permissions, giving management of the environment much-needed granularity. There is also support for host port segregation, and reporting and management include support for provisioning, migration, and replication.

For SLAs and reporting, Command Suite 7 supports automated policy-based tiering. There is also support for hypervisor discovery and reporting, with chargeback expected to be included in the near future.

Okay that was long!!!

Some more: a discussion with Rick Vanover, Chris Evans, and Claus Mikkelsen (Chief Scientist, HDS) about the VSP technology and release.


Roundtable discussion on HDS VSP announcement from storagenerve on Vimeo.


Quick Discussion with Rick Vanover and the introduction with a Japanese Band


Introduction of Hitachi Information Forum, Santa Clara, CA from storagenerve on Vimeo.


More discussions on storage virtualization coming up in the next blog posts..


Disclaimer: I do not work for HDS. Access to this information was given by HDS over the past few months to help me understand the architecture of the VSP platform. I attended HDS Geek Day 0.9, the Hitachi Japan trip, and the Hitachi Information Forum in Santa Clara, and learned about the technology at these events. All airfares, lodging, and boarding were paid by HDS. I have not received any monetary compensation or gadgets during these visits.

This is just an attempt to put some light on Hitachi VSP technology and what Storage Virtualization may enable in virtualized environments.

Storage Federation

July 27th, 2010



EMC’s latest addition to the concept of storage federation is the VPLEX announcement that happened at EMC World 2010. VPLEX comes in two forms today, VPLEX Local and VPLEX Metro. Important features of VPLEX include cache coherency, distributed data and active-active clusters at metro distances. In the works are VPLEX Geo and VPLEX Global enabling inter-continental distances.

VPLEX contains no drives today; it is based on a version of open-source Linux and runs on the same hardware as a VMAX engine. That said, what prevents EMC from including VPLEX as part of every VMAX and Clariion sold today, or perhaps running it as a virtual appliance within the VMAX (Enginuity) or Clariion (Flare)?

HDS takes a slightly different approach that yields almost the same result, using storage virtualization in its USPV platform. The USPV scales up to synchronous distances, I believe 100 km max today.

The USPV natively uses a combination of Universal Volume Manager (UVM), High Availability Manager (HAM), Dynamic Replicator, Hitachi TrueCopy Remote Replication, and Hitachi Replication Manager to do synchronous-distance (100 km) replication with distributed data in an active-active clustered environment.

VPLEX Local and VPLEX Metro are recent announcements, while the USPV has been offering similar features for the past few years.



Service providers will be the largest customers while VPLEX is still being developed in its Geo and Global modes.

I would think government customers like DISA and the DoD, and other cloud providers in the federal space, may find VPLEX and the USPV very interesting as well.

Migrations using both VPLEX Local and the USPV are a piece of cake because of their underlying features.

And many more…



Will the future of all storage subsystems have federation as a core component? Most likely. With the virtualization technologies being designed and pushed today, we will see some of these features natively in backend storage that holds data in containers, with those containers moving based on requirements. Look at a VM as a container of information or an application.

With a front-end storage controller, call it a VPLEX or a USPV, that doesn't care what sort of disks sit behind it, you can natively add all the storage features (snaps, clones, RAID, replication, high availability, virtualization), and it doesn't matter if you use the cheapest storage disks behind it.

Typically, with a single storage subsystem, you are looking at scaling your storage to 600, 2400, 4096, 8192, or 16,384 drives max. Or does it even matter at this point?

Storage federation will allow a system to scale up to hundreds of PB of storage. For example, an EMC VPLEX scales up to 8192 volumes today, while a USPV scales up to 247 PB of storage; in essence, that is 247,000 x 1 TB disk drives in a single federated system.

When you connect two of these VPLEXs or two USPVs at synchronous distances, you start taking advantage of active-active clusters (datacenters) with distributed data. (Again, I will be the first to say I am not sure how much cache coherency is built into the USPV today.)

But that brings us to some important questions…



Is storage federation that important?

Is storage federation the future of all storage?

Do you care about active-active datacenters?

What is the use-case for federation outside of service providers?

Will this technology shape the future of how we do computing today by leveraging and pooling storage assets together for a higher availability and efficiency?

How large a single namespace would you like to have? I believe HP IBRIX brings a similar concept, scaling storage to 16 PB total in a single namespace.

Does federation add latency, which limits its usage to only certain applications?

Is VPLEX the future of all EMC storage controller technology, and will that eliminate the Flare or Enginuity code?

If you add a few disk drives to the VPLEX locally, can it serve high demand IOPS applications?

How large will cache get on these storage controllers to minimize the impact of latency and cache coherency on devices at synchronous distances? Is PAM or Flash Cache that answer?

At that point, does it matter whether you can couple your systems to extend them, as we initially thought the VMAX would do with 16 or 32 engines, or perhaps couple Clariion SPs, AMSs, or USPVs?


More Questions

Will the future VPLEX look like a USPV with local disk drives attached?

Though the big vision of VPLEX is Global replication creating active – active datacenters, does the next generation VBlocks meant for Service Provider market include a VPLEX natively within it?

Is EMC Virtual Storage just catching up to HDS technology? Or is the VPLEX vision a big and unique one that will change the direction of EMC storage in the future?

Is storage federation game changing, and is EMC ahead of HDS, or HDS ahead of EMC?


vExpert 2010

June 14th, 2010

Relaxing on a Friday evening after a long week, I got an email from John Troyer and the VMware team about the vExpert 2010 award and program invitation. I am humbled and very honored to participate. Along with the many storage topics I cover on this blog and GestaltIT, I hope this will give me another reason to keep writing about related virtualization topics.


This is the first time I have received the vExpert award. I hope this gives me a chance to work with fellow vExperts throughout the world, many of whom are friends, colleagues, and delegates from various events. These folks are truly industry experts and leaders in the virtualization space.

Given the success of the vExpert 2009 program and the emerging details of the 2010 program, there are great advantages to participating. The program will give us a sneak peek at futures, conference session materials, test lab licenses, beta programs, and more at VMware.

Here is the VMware vExpert page.

Twitter users can follow the list of vExperts here by @maishsk

A comprehensive list of vExperts 2010 by Arnim van Lieshout on his Van-Lieshout Blog

To my fellow vExperts, big congratulations and I am really honored to be part of this community. Big thanks to John Troyer, VMware Team and VMware Community in allowing me to participate as a vExpert.

The fun begins now…… looking forward to an exciting year ahead.


GestaltIT Tech Field Day 2010: VBlocks Presentation

April 13th, 2010

This was surely the most debated discussion and presentation at the GestaltIT Tech Field Day 2010 in Boston, MA. Both rock stars from the VCE team, Scott Lowe and Ed Saipetch, presented and did an amazing job on this topic.

Though I see a lot of value in the whole concept of Vblocks (VCE) as a journey to the private cloud and a means to compete with the Oracles, Dells, IBMs, and HPs of the world, many in the crowd did not buy into it and thought it was more of a marketing package without the necessary meat.

I am composing a post on VCE Vblocks for release later this week, where I will highlight many pros and cons of this technology based on what we heard and where we see the Vblock architecture going.

I asked this same question of both Cisco and EMC during the UCS and Vblock presentations: how many customers are running UCS and Vblocks in production environments today? Unfortunately, I got no answers. The three large customers I know of are practically using it in pre-production/test/development environments.

Here is the reaction from twitterville during and after this presentation.

Download in PDF Format..


Vblock Presentation at Tech Field Day from storagenerve on Vimeo.