
Hitachi VSP (Virtual Storage Platform) & Command Suite 7 – Technology, Comparisons, Architecture

September 29th, 2010

A deep dive into VSP technology, an overview of Command Suite 7, the technology within the VSP, a comparison of the VSP vs. the USPV, some architecture discussion and the marketing message. Along with this discussion, also see architecture block diagrams and videos from the event.

The Announcement

Hitachi and its US subsidiary Hitachi Data Systems announced their next-generation storage platform on September 27th, 2010. The proven storage virtualization technology that surfaced back in 2007/2008 is now offered in the latest platform, code-named “VICTORIA” and now called the VSP – Virtual Storage Platform.

Though I do not want to speculate too much on the naming, VSP – Virtual Storage Platform is a name relevant to the technology. But is the name VSP somehow influenced by the name VMAX (Virtual Matrix)?

The same day, HP also announced its P9500 storage platform, a rebrand of the Hitachi system with an HP logo and HP management software. The looks of the HP version of the VSP (P9500) are very attractive compared to the Hitachi version.

I wonder whether the HP-3Par acquisition will put some pressure on the OEM relationship between HP and Hitachi Ltd, Japan, since essentially the two would now compete in the same market space. Though to my understanding, 3Par does not offer mainframe support with its storage as Hitachi does today with FICON.

But do not be deceived by the name or the looks: the technology that the VSP brings to datacenters (let me correct that, virtual datacenters) is revolutionary and will help customers build more resilient and efficient environments.

Hitachi VSP at Hitachi Information Forum in Santa Clara, CA

The color of the VSP cabinet is green, indicating a step forward towards a highly energy-efficient system. As datacenters become completely virtualized, with computing environments spanning geographical boundaries, storage virtualization becomes key to keeping these environments resilient, scalable, manageable and small in footprint.

Victoria was the code name for the VSP. During our last visit to HDS at Geek Day 0.9 in Santa Clara, we were given some hints about this project. But during our visit to Odawara, Japan, Hitachi along with its US subsidiary (HDS) gave us a preview of the VSP and its underlying technology and architecture: a one-for-all platform for Block, File and Object storage. Though we did not blog about the technology discussions that took place in Japan, they were highly focused on the engineering and architecture aspects of the VSP.

The VSP brings architectural enhancements, added flexibility, reduced footprint, improved response times, reduced management and concepts of storage economics natively within the platform.

It is expected that the VSP will also be the core storage platform in HDS’s UCP (Universal Compute Platform), along with Hitachi Symphony servers, a networking partner (****) and Microsoft Operations Manager as its orchestration software.

Nigel Poulton also has a very technical deep-dive post on the VSP.


The Marketing Message

The core messaging behind the VSP platform is 3D scaling: Scale Out, Scale Deep and Scale Up. In the past we have seen blog posts from Hu Yoshida and Michael Hay about 3D Cartesian scaling and its effects on storage platforms.

Some additional pitches from HDS on the VSP include being a Virtualized, Automated, Cloud-Ready and Sustainable platform, though I do not necessarily understand what Cloud-Ready means. Messaging around the cloud was notably missing during the Hitachi Information Forum.

Scale Up refers to the tightly coupled storage environment that is easy to expand and manage.

Scale Out refers to priority queues, dynamic allocation of resources and a system that helps customers expand as business needs and workloads change.

Scale Deep refers to the storage virtualization piece that allows a single VSP system to grow, using external storage with centralized management, to more than 255 PB of data.


The Technology

Storage virtualization is a great technology, and its benefits are being seen around the industry today. Manufacturers that did not have this technology a few years ago are all jumping in now. Talking to customers in managed services businesses about the value storage virtualization brings to the table, with technologies like the Hitachi USPV, IBM SVC, HP SVSP, EMC VPLEX and now the Hitachi VSP, is pretty phenomenal.

With the VSP, Hitachi also introduced 2.5-inch SAS II drives, reducing the footprint substantially. With a 2048-drive system, customers are typically looking at 6 standard cabinets versus an EMC VMAX that may need 10 cabinets for the same number of drives. The largest drive supported today on the VSP is 1 TB.

Along with the added number of drives, the storage virtualization technology enables 255 PB of storage behind a VSP, or essentially 1 TB x 255,000 drives in a single federated storage system.

After EMC’s leap to the Intel architecture with its enterprise storage system VMAX early last year, Hitachi is the next storage manufacturer to take advantage of the great engineering work Intel is doing for enterprise computing. Along with the Intel Xeon CPUs on the Virtual Storage Processors, Hitachi also uses its own ASICs on the controllers for specialized functions within the VSP.

The number of host connectivity ports has also doubled with the VSP, with substantially lower power consumption; the numbers suggest the system is roughly 40% to 50% more energy efficient.

The VSP also enables XTS-AES 256-bit encryption of data as it is written to disk. This technology more than likely is a third-party plug-in that enables the feature; it needs to be purchased and enabled through software keys within the VSP.
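For a feel of what XTS-AES-256 sector encryption looks like, here is a minimal sketch using the third-party Python cryptography package; the sector size, key handling and tweak derivation are my own illustrative assumptions, not the VSP’s implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# XTS-AES-256 uses two 256-bit keys, supplied as a single 64-byte key.
key = os.urandom(64)

# The tweak is derived from the sector address, so identical data
# encrypts differently on different sectors.
sector_number = 42
tweak = sector_number.to_bytes(16, "little")

plaintext = b"\x00" * 512                     # one 512-byte sector of data

cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
enc = cipher.encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

dec = cipher.decryptor()
decrypted = dec.update(ciphertext) + dec.finalize()
assert decrypted == plaintext                 # round-trip succeeds
```

Because the tweak binds the ciphertext to a sector address, XTS is the usual mode for encrypting data at rest on block devices.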


One size fits all (Scale Up, Scale Down)

As you are aware, the USPV came in two flavors, the USPV and the USPVM. If a customer had invested in a USPVM and business demands (application requirements, IOPS, workloads) increased, the only option might be to purchase another system. There are similar offerings from EMC in this space with its VMAX and VMAX-SE frames. The VSP goes back to basics: purchase a system and expand it based on your needs, without the necessity of purchasing a new system.


The Architecture

With this new generation of storage virtualization technology just brought to market by Hitachi, there are notable differences from its predecessor, the USPV.

While the VMAX today offers 128 cores, the VSP starts at 32 Cores, but using Storage Virtualization, you can add thousands of Cores behind it.

Okay, just an example….

One of the large financial houses on the panel at the Hitachi Information Forum virtualizes DMX-4s behind USPVs today. If a VSP supports 255,000 drives, you could practically have 106 fully populated (2400-drive) VMAX systems behind one VSP.
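A quick back-of-the-envelope check of the numbers above (the drive counts are the figures quoted in this post):

```python
DRIVE_TB = 1                     # largest drive supported on the VSP: 1 TB
EXTERNAL_DRIVES = 255_000        # drives addressable behind one VSP

total_pb = DRIVE_TB * EXTERNAL_DRIVES / 1000   # 1000 TB per PB
print(total_pb)                  # 255.0 PB of federated capacity

VMAX_DRIVES = 2400               # a fully populated VMAX
vmax_systems = EXTERNAL_DRIVES // VMAX_DRIVES
print(vmax_systems)              # 106 full VMAX systems behind one VSP
```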

Since manufacturers leverage technology and its inter-workings in different ways, a side-by-side comparison of VMAX and VSP may not be a fair comparison.


I do want to point out the differences between the VSP and USPV technologies relating to architecture and configuration.

Hitachi VSP vs USPV

| Feature | VSP Technology | USPV Technology |
|---|---|---|
| Name | VSP: Virtual Storage Platform | USPV: Universal Storage Platform – Virtualization |
| Cabinets | Min: 1; Max: 6 (2 systems) | 5 |
| Drives (2.5 inch) | Min: 0 (external storage); Max: 2048 x 2.5 inch SAS II | 1152 x 3.5 inch FC |
| Drives (3.5 inch) | Min: 0 (external storage); Max: 1280 x 3.5 inch SAS II | 1152 x 3.5 inch FC |
| Federation | Min: single system; Max: two systems tightly coupled using the Hitachi Star Fabric over PCIe | |
| External storage (federation – virtualization) | Max: 255 PB | Max: 247 PB |
| External storage (federation drives) | 1 TB x 255,000 drives | 1 TB x 247,000 drives |
| Internal storage | 2 PB | 1 PB |
| Processors | Intel quad-core CPUs plus ASICs on FED/BED | ASICs |
| Single controller (system) | Min: 1 cabinet; Max: 3 cabinets | 5 cabinets |
| Virtual Storage Director blades | 4 cores per blade; Min: 2 blades (8 cores); Max: 8 blades (32 cores) | |
| FED (Front End Directors) | 8 ports per FED; Min: 2 FEDs (16 ports); Max: 24 FEDs (192 ports) | 112 ports |
| FED port speed | 8 Gbps | 4 Gbps |
| FED port types | Min: 16 FC ports (8 Gb); Max: 192 FC ports (8 Gb), 192 FICON ports | 224 FC ports (4 Gb), 112 FC ports (8 Gb), 112 FICON ports |
| FCoE | Not in Release 1; may be supported shortly after | Not supported |
| iSCSI | Not supported | Not supported |
| InfiniBand | Not supported | Not supported |
| BED (Back End Directors) | 4 SAS links per BED; Min: 0 (in case of external storage); Max: 64 BEDs (64 SAS links, two federated VSPs) | 64 FC loops, half-duplex AL loops |
| BED speed | 6 Gbps SAS | 4 Gbps FC-AL |
| Cache | 32 GB or 64 GB adapters; Min: 2 (64 GB); Max: 16 (1024 GB) | 512 GB |
| Cache protection | Flash plus battery | Large batteries |
| Power consumption | 30 kW (single phase) with 1024 drives | 33.2 kW (three phase) |
| Automated dynamic tiering | LUN level and sub-LUN level | LUN level |
| Secure multitenancy | Yes | No |
| VAAI (vStorage API) support | Yes (expected late 2010) | No |
| Native within UCP (expected) | Yes | No |
| Drive form factors | 2.5 inch and 3.5 inch | 3.5 inch only |
| SATA drive support | Yes | Yes |
| SAS and SATA intermix | Yes | SAS not supported |
| FC and SATA intermix | FC not supported | Yes |
| Cache mirroring | Write cache mirroring | Write cache mirroring |
| Command Suite 7 | Supported | Supported |
| SSD | Supported | Supported |
| Predecessor | USPV | USP |
| Released | Sep 2010 | 2006/2007 |
| Operating system | BOS / BOS V | BOS / BOS V |
| LUN-based tiering | Yes | Yes |
| Sub-LUN-based tiering | Yes | No |
| Page size | 42 MB | 42 MB |
| Large batteries | No | Yes |
| Drive format | ??? bytes (expecting 520) with 8 bytes ECC | ??? bytes (expecting 520) with 8 bytes ECC |
| Microcode | Runs on Virtual Storage Directors | Runs on FED, BED |
| Rack system | 19 inch racks, 42U | 19 inch racks |
| Airflow | Hot/cold aisle | Hot/cold aisle |
| RAID | Mirroring, RAID 5, RAID 6 | Mirroring, RAID 5, RAID 6 |
| Cooling fan noise | Reduced | None |
| Cooling fan speeds | 3 speed levels | 1 speed level |
| Control memory | On Virtual Storage Directors | On FED/BED |
| Cabinet color | Green | Bluish/purple |
| Purchase options | Sold with controllers only, or with X drives scalable to 2048 | USPVM sold with controllers only; USPV sold with controllers and drives; a USPVM cannot be upgraded to a USPV |


Cabinet Numbering and Structure

Below is how two VSP systems are coupled together using the Hitachi Star Switch (PCIe connect), which enables the expansion of two VSPs into a single system scalable to 2048 drives with 1024 GB of cache.

| Cab 12 | Cab 11 | Cab 1 | Cab 0 | Cab 1 | Cab 2 |
|---|---|---|---|---|---|
| Drives only | Drives only | Controller 1 + Drives | Controller 2 + Drives | Drives only | Drives only |

Each system (VSP controller unit) includes

4 x Virtual Storage Directors

8 x Data Cache Adapters

8 x Front End Directors

4 x Back End Directors

4 x Grid Switches

2 x Drive Chassis in the controller cabinet

3 x Drive Chassis in each drives-only cabinet

8 Drive Chassis in total

Each Drive Chassis supports 128 drives (SAS)


FRONT of the UNIT includes

4 Data Cache Adapters

4 Virtual Storage Directors

4 Data Cache Adapters

Drive bays have FANs in the front of the unit



BACK of the UNIT includes

4 Front End Directors

2 Back End Directors

4 Data Grid Switches

2 Back End Directors

4 Front End Directors


Virtual Storage Director (The Brains behind the VSP)

There are 4 Virtual Storage Directors in each system

Each Virtual Storage Director has 4 Cores

You can have 16 Cores per system

These processors manage the internal workings of the VSP: LUNs, eLUNs, addressing, data mapping, the virtual partition manager, the layered software interface, references, internal SAS drives (if present) and operational control data memory.

You can expand this to 32 cores using the PCIe Hitachi Data Grid Switch and 3 additional cabinets along with a second controller.


Control Memory

Serves as L2 cache

On Virtual Storage Directors

Responsible for managing and maintaining Metadata, mappings, etc


Data Cache (Global Cache)

Primarily used as Cache for read/write

Caches data during read operations from BED, similarly caches data from FED for write operations

Only write data is mirrored in cache (unlike the VMAX, where all cached data is mirrored)

1024 GB of total cache for two VSPs tightly coupled

Read Operations only require one copy of cached data

Cache is backed up to an onboard flash drive, reducing the number of batteries needed



Though the Virtual Storage Directors use Intel quad-core processors, the BEDs and FEDs use special-purpose ASICs for I/O operations, enabling much more flexible data movement and better performance.

Back End Directors

Front End Directors


Grid Switch

Unlike the VMAX, which uses RapidIO to couple its engines, Hitachi uses its custom-designed Star Fabric to tightly couple the internal network that manages data, which includes the drives, Virtual Storage Directors, data cache, BEDs and FEDs. This switch also connects two VSPs together at the CPU level over PCIe to form a 6-cabinet, 2048-drive system.


Dynamic Automated Tiering (Sub LUN Tiering)

Once a disk is assigned to the VSP, whether it is native within the VSP or externally virtualized (eLUN), the VSP will utilize it for Dynamic Automated Tiering. With the VSP announcement, HDS is also including policy-based sub-LUN tiering in this platform, allowing automated data movement in 42 MB pages.

Dynamic Automated Tiering is shipping day one with the VSP platform. Think of sub-LUN tiering as a technology that moves data in real time based on policies set up in the environment. As a page’s heat index rises, the underlying technology automatically promotes just that page to a higher tier.
If over time a page falls on the heat index chart, its data is moved to a lower tier. This brings more efficiency into environments: you no longer need all your data stored on one tier, which saves you from using expensive SSDs for everything.
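To make the heat-index idea concrete, here is a toy sketch of policy-based sub-LUN tiering (my own illustration, not HDS code); the tier names and thresholds are hypothetical, while the 42 MB page size is the figure from the post.

```python
PAGE_MB = 42                            # VSP page size for tiered movement
TIERS = ["SSD", "SAS", "SATA"]          # tier 0 is the fastest
PROMOTE_AT, DEMOTE_AT = 100, 10         # hypothetical policy thresholds

def retier(page):
    """Move a page one tier up or down based on its heat index."""
    tier, heat = page["tier"], page["heat"]
    if heat >= PROMOTE_AT and tier > 0:
        page["tier"] = tier - 1          # hot page: promote toward SSD
    elif heat <= DEMOTE_AT and tier < len(TIERS) - 1:
        page["tier"] = tier + 1          # cold page: demote toward SATA
    return page

lun = [{"tier": 1, "heat": 250},        # busy page on SAS -> promoted
       {"tier": 0, "heat": 3},          # idle page on SSD -> demoted
       {"tier": 1, "heat": 50}]         # warm page stays put

lun = [retier(p) for p in lun]
print([TIERS[p["tier"]] for p in lun])  # ['SSD', 'SAS', 'SAS']
```

The point of the sketch is that only the hot 42 MB page moves, not the whole LUN, which is what keeps the SSD tier small.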

Again, this offering as it stands today is unique in the industry; other vendors are moving toward it or have it on their road maps.


VMware, SMT, Cloud

On day 1, the VSP will not support VAAI, the vStorage API that offloads VMware-related tasks locally to the storage controllers, bringing much added flexibility to virtualized environments.

But within the next 45-60 days, as the first code release rolls out, VAAI support is on the roadmap to be included on the VSP.

The VSP also claims to have SMT (Secure Multitenancy) built into the architecture, allowing the system to operate in virtual partitions that include cache and host ports. I am still not sure how the VSP manages to offer audits, resource management, etc. within an SMT environment. In an SMT environment, encryption becomes a very viable offering, where each tenant can have its own individual key; the VSP natively supports 32 different keys.


Some Observations

There is very noticeable product differentiation from the previous-generation Hitachi USPV. Improvements are visible in I/O, tight coupling of systems, SAS drives, drive speeds, port speeds and data routing. The expansion of external storage to 255 PB total and internal drives to 2048, with reduced floor space and a hot/cold-aisle-friendly cabinet design, shows the move toward next-generation thinking.

No Total cache mirroring

No VAAI support day 1

Cloud ready message needs to be more refined

The product seems to miss the flashiness

Having the front bezel green in color is a great message.

Cables in the cabinet are color coded, possibly based on different loops; cable colors are black, grey and white and are very visible.



HDS Command Suite 7 for VSP, AMS and USPV

Along with the announcement of the VSP, HDS also announced its next-generation SRM tool, Command Suite 7, which manages the VSP along with its predecessors the USPV and USP, as well as its mid-tier storage platform, the AMS.

The message driving Command Suite, like the VSP platform, is 3D Cartesian scaling: Manage Up, Manage Out and Manage Deep.

The Command Suite 7 is being compared to solutions from other vendors in this space. Along with managing the Hitachi systems at an element manager level, Command Suite 7 also offers heterogeneous storage support for other vendors.

It seems Command Suite 7, or elements of it, might end up within the UCP announcement likely to happen in early 2011, which will offer an integrated solution stack from HDS.

Command Suite 7 is a move toward being able to manage and discover all storage, whether virtualized behind a VSP or simply present in the environment, along with VMware hosts, virtual machines, switches and hypervisors, positioning it as an infrastructure monitoring tool.


Marketing Messaging

Manage Up: references the ability to manage Hitachi environments with automated dynamic tiering enabled. Command Suite 7 also boasts the management of 255 PB of storage and 5 million objects through a single installed instance.

Manage Out: a single solution for managing File, Block or Object storage. Along with the ability to manage VM hosts, VMs and applications, it also enables the management of heterogeneous storage. Command Suite is not expected to replace the existing native element managers in your storage environment. Within Hitachi storage, Command Suite 7 is also expected to manage objects without agents being deployed in the environment.

Manage Deep: through reporting, a single-pane-of-glass view, capacity monitoring and performance monitoring, Command Suite 7 enables granular management of your storage environment, adding automation and reducing the operations needed to accomplish required tasks.


Automated Dynamic Tiering

Hitachi introduced sub-LUN-level tiering with its VSP and Command Suite 7 offering. The automated tiering works at the File/Object or Block level. Through a policy-based engine, any of these can be migrated to a lower or higher tier based on SLAs, performance, time of day, application requirements or cost.

With the Sub LUN Tiering, Hitachi allows the movement of its data in 42MB page size, which is a standard within its storage environments and enables it to be promoted or demoted based on policy.

Sub-LUN-level tiering moves only the hot blocks of data or a file rather than an entire LUN. By default, all new incoming data is written to the highest-performance tier first and is gradually demoted to a lower tier as activity on it drops; again, all of this can happen at a LUN level or a sub-LUN level.


All Inclusive

The Command Suite 7 ships as a BOS (Basic Operating System) or BOS V (Basic Operating System V), which includes the following Software modules as part of the offering.

Hitachi Device Manager (BOS)

Hitachi Universal Volume Manager (BOS V)

Hitachi Dynamic Link Manager Advanced (BOS / BOS V)

Hitachi Dynamic Provisioning (BOS / BOS V)

Hitachi Dynamic Tiering (BOS / BOS V)

Hitachi Command Director (BOS / BOS V)

Hitachi Storage Capacity Reporter (BOS / BOS V)

Hitachi Tiered Storage Manager (BOS / BOS V)

Hitachi Tuning Manager (BOS / BOS V)

Hitachi Virtual Server Reporter (BOS / BOS V)

** I have no idea how Command Suite is licensed today, but I would think the pricing for BOS and BOS V differs, likely with a base price plus a per-TB licensing cost.


Security, SMT, Cloud

Along with the discussion above around VSP in SMT and Cloud environment, Command Suite 7 offers additional benefits towards applications being hosted in a cloud environment.

Through the use of industry-standard implementations of Active Directory, RADIUS, etc., the storage managers, administrators, backup admins and others are authenticated in the environment.

Through the use of virtual partitioning, resources can be allocated in partitions and managed only by those with the correct permissions, giving the management of the environment much-needed granularity. There is also support for host port segregation, and reporting and management include support for provisioning, migration and replication.

For SLA and reporting, Command Suite 7 supports automated policy-based tiering. There is also support for hypervisor discovery and reporting, with chargebacks expected to be included in the near future.

Okay that was long!!!

Some more: a discussion with Rick Vanover, Chris Evans and Claus Mikkelsen (Chief Scientist, HDS) about the VSP technology and release.


Roundtable discussion on HDS VSP announcement from storagenerve on Vimeo.


Quick Discussion with Rick Vanover and the introduction with a Japanese Band


Introduction of Hitachi Information Forum, Santa Clara, CA from storagenerve on Vimeo.


More discussions on storage virtualization coming up in the next blog posts.


Disclaimer: I do not work for HDS. Access to this information was given by HDS over the past few months to help me understand the architecture of the VSP platform. I have attended HDS Geek Day 0.9, the Hitachi Japan trip and the Hitachi Information Forum in Santa Clara and learned about the technology at these events. All airfares, lodging and boarding were paid by HDS. I have not received any monetary compensation during these visits, nor any gadgets.

This is just an attempt to put some light on Hitachi VSP technology and what Storage Virtualization may enable in virtualized environments.

Storage – Utilization, efficiency, cost, dedup, TP, virtualization, ZPR, compression or call it Economics

August 3rd, 2010


There are fundamental concepts of Storage Economics, which typically include Thin Provisioning, Deduplication, Zero Page Reclaim, Compression, Reclamation, Efficiency, Utilization, TCO, ROI, CAPEX, OPEX, etc.

Storage Economics is one of those subjects, everyone likes to hear about, but it’s hard to find it implemented in today’s storage environments.

With that said, a lot of vendors are trying to add the concepts of Storage Economics natively into their storage arrays. Following recent discussions during our visit to Hitachi Japan on Storage Economics and the core concepts that help customers increase utilization and efficiency in storage environments, here is an attempt to shed some more light on the topic.

We use storage to run our business, to store structured and unstructured data. Data means everything these days. But have we thought about the economics associated with storage?

As consumers, we tend to consume more than necessary at times: to keep enough buffer, or because we anticipate projected growth, business requirements, customer requirements, technology improvements, and so on. Many vendors these days guarantee 20% more efficiency, 50% less storage, 50% less storage using thin provisioning, and so forth.

There are several aspects to consider related to Storage Economics: how your shrinking IT budget can still keep up with your growing business requirements, and what you can do to keep a balance between the two.


Of the various aspects of Storage Economics below, some apply in the SMB space, some in the enterprise space, and some at all levels. These may become the building blocks of your Storage Economics practice:

  • It’s important to know what storage you have and where you have it.
  • Try to move away from fat provisioning to thin provisioning.
  • Use the concepts of Storage Virtualization to increase efficiency and utilization
  • Run non-vendor specific SRM (Storage Resource Management) tools for storage optimization and storage management.
  • Having a storage management tool is a must. You can still perform your daily tasks using various native element managers.
  • Industry standard average storage utilization numbers range between 35 to 45%. If you can push your storage utilization numbers higher, it will help you drive the cost down phenomenally.
  • Implement deduplication; verify your storage array supports deduplication natively. If not, it should be implemented in various parts of your storage like backup, unstructured data, etc.
  • Run a heterogeneous environment with multiple vendors in it to keep balance relating to price structures.
  • Though ILM is a forgotten word these days, make sure you run tiering within your storage environment that can help you move your data from higher SLA tiers to lower SLA tiers for cost containment purposes.
  • Implement storage arrays that natively support Automated Storage Tiering and can automate the movement of data to the required Tiers based on time of the day, policies, spike in usage or business requirements.
  • If there are native compression technologies available on the Storage Arrays for secondary or backup storage, implement those as a means to reduce your footprint.
  • Look at extending the life of your storage arrays from a typical 3 years to 6 or 7 years.
  • Leverage the use of outsourced computing models including Cloud technologies available in the market today. Could be private clouds or public clouds or a mesh of both technologies to reduce the storage footprint and management.
  • Budget for your storage requirements and try to live by those even if you have to take drastic measures to keep it under control.
  • Try to gain more operational efficiencies within the storage environments.
  • Understand the TCO with any new storage purchase, as cost of new storage could include several aspects of implementation including migration, consulting, downtime, missed SLA’s, Training, management, etc.
  • Try to reclaim storage as old host systems / server systems are retired or migrated.
  • Check for inconsistencies in your Storage environment as those could result in missed SLA’s, downtime and penalties.
  • Do not over provision and do not over budget. It’s just storage: if you need more you can buy more, but having idle storage doing nothing for years in anticipation of future growth will heavily skew operational storage efficiencies.
  • Do not create unnecessary storage management tasks and processes for your storage environment.
  • Having backups and good working backups is very important, but do not tie down your storage with numerous copies of snaps, clones, mirrors, BCV’s, etc for a rainy day, rather have a DR plan and copy single instance of data remotely for DR purposes.
  • Plot trends for your storage environment. See if trends can help you budget, forecast and provision your storage accurately.
  • Remember the larger storage footprint you have, the larger your backup footprints will be, causing more storage space, more backup time windows, more network traffic, slower response times, more tapes, more offsite backups, more backup management cost and possibly more licensing cost.
  • Get away from managing islands of storage; rather move to a more centralized storage management, long-term effects are amazing.
  • Try to reduce licensing cost around storage software. The less storage you deploy, the less licensing per TB cost that you will pay.
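Two of the bullets above, thin provisioning and zero page reclaim, can be illustrated with a toy sketch; the class, page size and method names here are purely illustrative and do not reflect any vendor’s implementation.

```python
PAGE_SIZE = 4                           # bytes per page, tiny for the demo

class ThinVolume:
    """A thin-provisioned volume: real pages are allocated only on write."""
    def __init__(self, virtual_pages):
        self.virtual_pages = virtual_pages   # capacity the host sees
        self.pages = {}                      # only written pages consume space

    def write(self, page_no, data):
        self.pages[page_no] = data           # allocate on first write

    def allocated(self):
        return len(self.pages)

    def zero_page_reclaim(self):
        zeros = bytes(PAGE_SIZE)
        for n in [n for n, d in self.pages.items() if d == zeros]:
            del self.pages[n]                # return all-zero pages to the pool

vol = ThinVolume(virtual_pages=1000)    # host sees 1000 pages of capacity
vol.write(0, b"data")
vol.write(1, bytes(PAGE_SIZE))          # zero-filled page (e.g. after a delete)
before = vol.allocated()
print(before)                           # 2 pages of real capacity in use
vol.zero_page_reclaim()
print(vol.allocated())                  # 1 page after reclaim
```

The host believes it owns 1000 pages, but only written, non-zero pages consume real capacity; that gap is exactly where the utilization gains in the list above come from.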


There are many other factors you can implement; here are a few different posts from the past talking about this topic.




There are numerous areas of storage management that customers can try to bring in efficiencies that will help them better manage storage, reduce footprint, and reduce CAPEX and OPEX. It starts as a small practice within organizations and the value it creates grips the rest of the IT management teams.

In large organizations, there are Storage Architecture teams, Deployment teams, Provisioning teams and Operational Support teams, but seldom do we see a Storage Economics Team that helps drive utilization and efficiency through best economic practices within storage environments.


So take this opportunity and plant the seeds for your Storage Economics practice now.



Japan, a country of cultures and so seems Hitachi

July 29th, 2010


We were invited to go to Japan last week to attend Hitachi’s uValue convention in Tokyo for 4 days.



The Japanese culture is different, very different, but very interesting. No handshakes; you bow to people. Respected people are addressed with -san or -sama after their last names. Delicate food, hot and humid weather, workplace pride, dedication to the job, a very hardworking, modest and humble society, and honestly some of the best hosts: these were just some observations.

Things are different, very different... and in the middle of that you have a company called HDS, a wholly owned subsidiary of Hitachi Japan located in the US. All HDS executives are from the US, while practically everyone on Hitachi’s executive team is from Japan.

Does that mean a clash of cultures? Absolutely not; we rather felt the other way around. The new executive team within HDS is establishing great relations with its counterparts at Hitachi Japan, enabling a lot of HDS decisions to be made locally.

Hitachi CEO Nakanishi-san gave an hour-long speech at the uValue convention; of the 50K attendees, 10K were at his speech. He went through the last 100 years of Hitachi innovation and set the stage for the next 10 years of focus.

Hitachi is truly a part of Japanese culture. As you land at Narita Airport in Tokyo and ride its escalators and elevators, you will notice the Hitachi brand. From making the first battery in Japan to fans, turbines, refrigerators, trains and televisions, they made it all, and they still make it all. On IT floors in datacenters, Hitachi is always found in the form of Symphony servers, USPV storage or Hitachi networking. Almost every major place you walk into, you will see at least a thing or two made by Hitachi.


First Hitachi Product Ever


A Battery


A Turbine?






Disk Drive


So far I had thought Hitachi Japan made great storage technology and its US wing, HDS, marketed it. But after seeing the interaction last week, my opinion of the relationship between Hitachi Japan and HDS has changed: HDS seems to give Hitachi Japan a lot of strategic direction in terms of products and services.


Some More details

Both Hitachi and HDS were great hosts to the bloggers and analysts invited to this event. Everything from flights, food, travel, events and local trips was very well organized.

Attendees included (Bloggers): (FRONT – Left to Right) Myself, Robin Harris, Nigel Poulton, (BACK – Left to Right) Chris Evans, Greg Knieriemen, Rick Vanover.



Some activities included

Sunday – Monday: flew for the better part of the day and lost another 13 hours going there; pretty much left with only Monday evening for dinner with the HDS folks.

Tuesday: visited the Hitachi RSD facilities in Odawara (a suburb of Tokyo). Took the bullet train to cover the 70-mile trip in about 15-20 minutes. This train was amazing; its top speed runs up to 200 miles/hour.

The Bullet Train, made by Hitachi


Spent the entire day talking about some NDA stuff. It was very interesting, and so were the guys who work on these sophisticated projects. Went out to dinner with Hitachi executives that night and enjoyed some of the finest food in the world: a 7-course dinner with 6 different wines and sake.

Wednesday: spent pretty much half a day talking to HDS about strategy, with some really creative discussions about technology and marketing.

Spent the rest of the day in Akihabara, the electronics district of Tokyo. Imagine a geek store; here there were hundreds of them, from small shops to mega stores, all selling electronics.


Nigel Poulton


Rick Vanover, Chris Evans


Met up with some other HDS folks (Professional Services and Managed Services) that evening who were visiting Hitachi Japan from the US. It was great to connect with them.

Thursday: Left early in the morning for the #uValue convention. Great place; you can catch some of those pictures on the Facebook page, here.



In a private meeting with the press, bloggers and analysts, we heard the CEO of Hitachi and then Iwata-san speak about Hitachi's strategy for IT and telecommunications. Attended Michael Hay's session on Hitachi research strategy and then headed out for dinner. It was absolutely impressive to see the convention floor full of Hitachi technologies and then hear about some of the strategy behind it.


Hitachi CEO - Nakanishi-San


Hitachi Executive for IT, Telecommunications, Iwata-San


Hitachi Robot at uValue Convention: EMIEW from storagenerve on Vimeo.


Then came dinner that night in Roppongi, the nightlife district of Tokyo, with Hitachi executives from both Communications and Engineering joining us. Had a great time.

Then we were given a tour of Roppongi Hills, the 10th tallest building in Japan, from its 52nd-floor lobby overlooking Tokyo. After a long and tiring day, sitting on the 52nd floor overlooking the city with a glass of beer was a great feeling.


The Gang: Bloggers and Hitachi (Meade-San from HDS Japan and Cecilia Fok from HDS Hong Kong)


Another interesting detail: the dinner table every night had a seating plan, with every person assigned a seat, which created a nice mix where each table had a blogger, an analyst, an HDS person and a Hitachi person. Quite enjoyable to hear different perspectives on things.

Friday: It was the day we all left. Finished some early-morning shopping and headed to the airport for a long 24-hour bus, plane, train, plane and car journey back home via Houston.

Uneventful trip back home….

I will end with this: the Japanese know how to take care of their guests. Culturally, Japan seems to be a very strong country, and Hitachi is a large part of that culture. Sayonara for now…

Thanks, Hitachi and HDS, for inviting us and making us feel part of the family.




Disclaimer: The 4-day trip to Hitachi Japan to attend the uValue Convention was sponsored by HDS, which paid for all travel, boarding and lodging expenses for these days. The attendees/bloggers were not required to blog about this event; this is my attempt to share what we learned there.

Storage Federation

July 27th, 2010 3 comments



EMC’s latest addition to the concept of storage federation is VPLEX, announced at EMC World 2010. VPLEX comes in two forms today, VPLEX Local and VPLEX Metro. Important features of VPLEX include cache coherency, distributed data and active-active clusters at metro distances. In the works are VPLEX Geo and VPLEX Global, enabling inter-continental distances.

VPLEX contains no drives today; it is based on a version of open-source Linux and runs on the same hardware as a VMAX engine. That said, what prevents EMC from including VPLEX as part of every VMAX and Clariion sold today, or perhaps just running it as a virtual appliance within the VMAX (Enginuity) or Clariion (FLARE)?

HDS has a slightly different approach, using storage virtualization, that yields almost the same result: HDS approaches storage federation in its USPV platform. The USPV scales up to synchronous distances, I believe about 100 km maximum today.

The USPV natively uses a combination of Universal Volume Manager (UVM), High Availability Manager (HAM), Dynamic Replicator, Hitachi TrueCopy Remote Replication and Hitachi Replication Manager to do synchronous-distance (100 km) replication with distributed data in an active-active clustered environment.

VPLEX Local and VPLEX Metro are recent announcements, while the USPV has been offering similar features for the past few years.



Service providers will be the largest customers while VPLEX is still being developed in its Geo and Global modes.

I would think, government customers like DISA, DoD and other cloud providers in the federal space may find VPLEX and USPV very interesting as well.

Migrations using both VPLEX Local and the USPV are a piece of cake because of their underlying features.

And many more…



Will the future of all storage subsystems have federation as a core component? With the virtualization technologies being designed and pushed today, it is likely we will see some of these features built natively into backend storage, which can hold data in containers that move based on requirements. Think of a VM as a container of information or an application.

With a front-end storage controller (call it a VPLEX or a USPV) that doesn't care what sort of disks sit behind it, you can natively add all the storage features to it, such as snapshots, clones, RAID, replication, high availability and virtualization, and it no longer matters if the cheapest storage disks sit behind it.
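To make the idea concrete, here is a minimal Python sketch of such a front-end controller: virtual volumes are mapped onto LUNs from heterogeneous back-end arrays, and features like snapshots live in the controller rather than in the disks. Every class and method name here is a hypothetical illustration, not any vendor's API.

```python
# Conceptual sketch of a front-end virtualization controller.
# All names are hypothetical; this models the general technique,
# not VPLEX or USPV internals.

class BackendLUN:
    """A LUN exposed by any back-end array, however cheap."""
    def __init__(self, array, lun_id, size_gb):
        self.array = array          # e.g. "cheap-sata-array-1"
        self.lun_id = lun_id
        self.size_gb = size_gb
        self.blocks = {}            # block address -> data

class VirtualVolume:
    """A volume presented to hosts, backed by any LUN behind the controller."""
    def __init__(self, name, backend):
        self.name = name
        self.backend = backend
        self.snapshots = []

    def write(self, addr, data):
        self.backend.blocks[addr] = data

    def read(self, addr):
        return self.backend.blocks.get(addr)

    def snapshot(self):
        # Point-in-time copy taken by the controller, regardless of
        # what kind of disk sits behind it.
        self.snapshots.append(dict(self.backend.blocks))

# Usage: the controller doesn't care whose array the LUN comes from.
lun = BackendLUN("cheap-sata-array-1", lun_id=7, size_gb=1024)
vol = VirtualVolume("app_vol_01", lun)
vol.write(0, b"hello")
vol.snapshot()
vol.write(0, b"world")
```

The point of the sketch is only the layering: once the feature set lives above the LUN, the back-end disk becomes a commodity.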

Typically, a single storage subsystem scales to 600 drives, or 2,400, or 4,096, or 8,192, or 16,384 drives max; or does it even matter at this point?

Storage federation will allow a system to scale to hundreds of PB of storage. For example, an EMC VPLEX scales up to 8,192 volumes today, while a USPV scales up to 247 PB of storage, in essence roughly 247,000 × 1 TB disk drives in a single (federated) system.
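As a quick sanity check on that scaling claim, the arithmetic (using decimal units, where 1 PB = 1,000 TB) works out as follows:

```python
# Back-of-the-envelope check: how many 1 TB drives does 247 PB of
# virtualized capacity imply? (Decimal units: 1 PB = 1,000 TB.)
capacity_pb = 247
drive_tb = 1
drives = capacity_pb * 1_000 // drive_tb
print(drives)  # 247000
```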

When you connect two of these VPLEXes or two USPVs at synchronous distances, you start taking advantage of active-active clusters (datacenters) with distributed data. (Again, I will be the first to say I am not sure how much cache coherency is built into the USPV today.)
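For readers unfamiliar with why cache coherency matters here, the sketch below shows a toy write-invalidate model between two active-active sites sharing a distributed volume: a write at one site invalidates the peer's cached copy so the peer re-reads the new data. This is a generic illustration of the technique, emphatically not VPLEX's actual protocol.

```python
# Toy write-invalidate coherency between two active-active sites.
# Illustrative only; not the actual VPLEX (or USPV) protocol.

class Site:
    def __init__(self, name):
        self.name = name
        self.cache = {}        # block address -> data
        self.peer = None       # the other metro site

    def read(self, store, addr):
        if addr not in self.cache:          # cache miss: fetch from backend
            self.cache[addr] = store.get(addr)
        return self.cache[addr]

    def write(self, store, addr, data):
        store[addr] = data                  # write through to the shared store
        self.cache[addr] = data
        self.peer.cache.pop(addr, None)     # invalidate the peer's stale copy

store = {}                                  # shared distributed volume
a, b = Site("metro-A"), Site("metro-B")
a.peer, b.peer = b, a

a.write(store, 0, "v1")
assert b.read(store, 0) == "v1"             # B sees A's write
b.write(store, 0, "v2")
assert a.read(store, 0) == "v2"             # A's copy was invalidated, so it re-reads
```

At metro distances the invalidation message itself adds round-trip latency, which is exactly why cache size and coherency traffic come up in the questions below.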

But that brings us to some important questions…



Is storage federation that important?

Is storage federation the future of all storage?

Do you care about active-active datacenters?

What is the use-case for federation outside of service providers?

Will this technology shape the future of how we do computing today by leveraging and pooling storage assets together for a higher availability and efficiency?

How large a single namespace would you like to have? I believe HP IBRIX brings a similar concept, scaling storage to 16 PB total in a single namespace.

Does federation add latency, which limits its usage to only certain applications?

Is VPLEX the future of all EMC storage controller technology, and will that eliminate the FLARE or Enginuity code?

If you add a few disk drives to the VPLEX locally, can it serve high demand IOPS applications?

How large will cache get on these storage controllers to minimize the impact of latency and cache coherency on devices at synchronous distances? Is PAM or Flash Cache the answer?

At that point, does it matter if you can couple your systems to extend them, the way we initially thought the VMAX would with 16 or 32 engines, or maybe couple Clariion SPs, AMSes or USPVs?


More Questions

Will the future VPLEX look like a USPV with local disk drives attached?

Though the big vision of VPLEX is global replication creating active-active datacenters, will the next generation of Vblocks meant for the service provider market include a VPLEX natively?

Is EMC Virtual Storage just catching up to HDS technology? Or is the VPLEX vision a big and unique one that will change the direction of EMC storage in the future?

Is storage federation game-changing, and is EMC ahead of HDS, or HDS ahead of EMC?