Archive for the ‘Technology’ Category

Policy! Policy!! Policy!!!

October 20th, 2009

It has been an exciting month: new details are emerging related to automated storage tiering, workload distribution, workflow automation, SLAs, QoS, and how policy-based storage management can help solve these challenges. "Policy," as it is known in the business world, or "advanced algorithms" as known in the scientific community, is used to solve complex storage challenges. This has been one of the favorite topics of discussion in the storage blogosphere these days.

There are two distinct groups of people: one favoring automation, and the other half possibly thinking this technology brings no value-add in terms of how storage is utilized and managed today. This game was initially started by Compellent (with its Data Progression technology) about four years ago, then joined by Pillar Data Systems, and now other OEMs (including EMC, HDS and IBM) are starting to catch up on policy-based automated storage tiering.

With private clouds in the near future and hybrid clouds (a mesh of private and public clouds) on the horizon, automation, workload distribution, SLAs and QoS will need to be monitored and managed to run IT infrastructures optimally. Policy-based management will create a new wave of storage management and automation, and will act as a principal ingredient of hybrid clouds.

Generation 1 of policy-based storage tiering works within a single storage subsystem.
Generation 2, in the near future, should work across heterogeneous storage subsystems (from the same manufacturer).
Generation 3, over the next year or two, will work across storage platforms irrespective of manufacturer.
Generation 3 of policy-based management will also include the entire management stack. These products will be capable of managing not only the storage, but also interacting through policies with the virtualization, networking, application, OS, middleware and other layers of the infrastructure management stack.

We should see a rise of new emerging technologies that will create these external policy-based engines for data-movement automation. All infrastructure components, including storage, virtualization, networking, application, OS and middleware, will provide the necessary APIs for these external engines to interact with, enabling data automation and workflow automation in hybrid clouds (irrespective of manufacturer).
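To make the idea concrete, here is a minimal sketch of what such a policy engine might evaluate. Everything here is illustrative: the tier numbers, thresholds and rule shapes are assumptions for the example, not any vendor's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    """A chunk of data tracked by the policy engine (illustrative model)."""
    name: str
    tier: int          # 0 = SSD, 1 = FC, 2 = SATA (assumed tier numbering)
    age_days: int      # days since last access
    iops: float        # recent average I/O rate

# Hypothetical policy: ordered (predicate, target_tier) rules; first match wins.
POLICY = [
    (lambda e: e.iops > 500, 0),                      # hot data -> Tier 0
    (lambda e: e.age_days > 90 and e.iops < 10, 2),   # cold data -> Tier 2
    (lambda e: True, 1),                              # everything else -> Tier 1
]

def plan_moves(extents):
    """Return (name, current_tier, target_tier) for data that should move."""
    moves = []
    for e in extents:
        for predicate, target in POLICY:
            if predicate(e):
                if target != e.tier:
                    moves.append((e.name, e.tier, target))
                break
    return moves

extents = [
    Extent("db-logs", tier=1, age_days=1, iops=900),    # hot, should promote
    Extent("archive", tier=0, age_days=365, iops=2),    # cold, should demote
    Extent("homedir", tier=1, age_days=10, iops=50),    # stays put
]
print(plan_moves(extents))
```

A cross-manufacturer Generation 3 engine would run the same kind of rule evaluation externally, driving each subsystem through its own API rather than inside one array's microcode.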

www links

Here are a few articles from the past month related to policy, automated storage tiering, workloads, SLAs and QoS.

Pillar (OEM)

http://blog.pillardata.com/pillar_data_blog/2009/10/autotiering-of-data.html

EMC (OEM)

http://flickerdown.com/2009/09/why-policy-is-the-future-of-storage/

http://flickerdown.com/2009/10/why-policy-is-the-future-of-storage-part-2/

http://stevetodd.typepad.com/my_weblog/2009/10/greenfield-monitoring-of-a-private-cloud.html

http://stevetodd.typepad.com/my_weblog/2009/09/federation-and-private-cloud.html

Compellent (Partner Blog)

http://blogs.cinetica.it/cinetica/2009/10/19/dear-mike/

http://blogs.cinetica.it/cinetica/2009/08/25/tiered-storage-and-new-features-for-the-rest-of-us/

HDS (OEM)

http://blogs.hds.com/hu/2009/09/ilm-revisited-intelligent-tiered-storage-for-file-and-content-data.html

Independents

http://www.storagemonkeys.com/index.php?option=com_myblog&show=the-end-of-history-or-just-the-beginning-.html&Itemid=136

http://thestoragearchitect.com/2009/10/18/enterprise-computing-do-we-need-fast-v1-emc/

http://www.theregister.co.uk/2009/09/22/emc_fast/

http://itknowledgeexchange.techtarget.com/storage-soup/hp-drops-roadmap-nuggets-at-storageworks-techday/

http://itknowledgeexchange.techtarget.com/storage-soup/spinnaker-founders-bring-avere-out-of-stealth/

http://breathingdata.com/2009/10/18/can-and-when-will-ssds-sata-replace-fcsas/

http://gestaltit.com/featured/top/gestalt/emc-unified-platform-storage-tiering/

http://storagenerve.com/2009/10/14/enhancements-to-emc-symmetrix-v-max-systems/

Your thoughts always welcome!!!

cheers
@storagenerve

Enhancements to EMC Symmetrix V-Max Systems coming!!

October 14th, 2009

Enhancements to the EMC Symmetrix V-Max system are possibly around the corner (FY09 Q4).

FAST (Fully Automated Storage Tiering) is due this quarter and will be one of the most awaited software releases from EMC in the enterprise storage space.

Bundled together with FAST will possibly be a new microcode version that enables FAST (and its associated features) and other expected enhancements.

Though this will be a major software release and functionality upgrade, I don't think it would qualify as a 2nd-generation EMC Symmetrix V-Max system.

But I fully expect EMC to release FAST v2 and a V-Max G2 somewhere around mid-2010.


Here are a few new features to possibly expect on the EMC Symmetrix V-Max Systems this quarter.

1. Introduction of FAST v1, which should allow automated data movement within a single Symmetrix V-Max system. Some features of FAST have been discussed on GestaltIT and by Barry Burke (TSA) on his blog.

2. FAST v1 data movement should possibly be policy driven, around factors like time (how old the data is), SLA (promised SLAs), tier (from Tier 0 to Tier 1 to Tier 2) and possibly I/O or IOPS.

3. FAST v1 should allow automated policy-based data movement, or prompt a user for manual intervention before moving data.

4. Do not expect FAST v1 to come for free; it will possibly be licensed based on the total number of TBs in the storage subsystem.

5. Expect some integration between the IONIX platform and FAST v1, and possibly some very tight integration between future releases of FAST and IONIX.

6. Expect FAST and IONIX to integrate very tightly with Atmos through APIs and policies. We should expect to see this with FAST v2, not with FAST v1.

7. So when does EMC retire Symmetrix Optimizer? With FAST v1, probably not; with FAST v2, probably yes.

8. 2TB SATA II drives will be introduced (according to a keynote from Joe Tucci in NYC). Though Joe Tucci didn't mention which platforms the 2TB SATA II drives will be available on, the V-Max upgrade seems the most logical platform.

9. The 2TB SATA II drive upgrade should take the V-Max to roughly 4.8 PB of total raw storage (2,400 drives x 2TB), possibly the single largest storage subsystem at the enterprise level.

10. RapidIO speed upgrade from 2.5 Gbps to 4 Gbps (the interconnects between the engines), delivered either through MBIE (new processors) and/or through microcode upgrades. Edit 10/15/2009 – 12:50 PM: Not sure which RapidIO technology EMC currently uses, since Parallel RapidIO supports 250 MHz to 1 GHz clock speeds while Serial RapidIO supports 1.25 GHz to 3 GHz.

11. Drive connect speed upgrade from 4 Gbps to 8 Gbps

12. FC and FICON (Host Connects) port speeds upgrade from 4 Gbps to 8 Gbps

13. Interconnect between two separate Symmetrix V-Max systems (8 engines each per system), expanding into possibly 16 or 32 (max) engines. The more I think about this concept, the more I feel there are no added benefits to this architecture; rather, it will add more complexity to data management and higher latency. We may not see anything related to interconnects in this upgrade, but remember that the V-Max was initially marketed as scaling to hundreds of engines and millions of IOPS, and the only way to achieve that vision is through interconnects. The longer the distance, the more latency with cache and I/O. If interconnects do make it into this release, the distance between two Symmetrix V-Max system bays would be limited to around 100 feet.

14. To the point above, another way of possibly connecting these systems could merely be federation through external policy-based engines. Ed Saipetch and I have speculated on that concept on GestaltIT.

15. With the use of larger drives, possibly expect a cache upgrade. Currently the Symmetrix V-Max supports 1TB of total cache (512GB usable), which may get upgraded to 2TB total cache (1024GB usable).

16. A possible new microcode version, 5875, that will help bring features like FAST, SATA II drives and additional cache to the Symmetrix V-Max.

17. Processors: the 4 x quad-core Intel processors on the V-Max engines may not get an upgrade in this release; that should possibly come with FAST v2 as a midlife enhancement next year.

18. Further enhancements related to FCoE support.

19. Upgrade of the iSCSI interface on Symmetrix V-Max engines from 1 Gbps to 10 Gbps (already available on the Clariion CX4 platforms).

20. Really do not expect this to happen, but imagine the RapidIO interconnects changing to FCoE. I am really not sure what made EMC go with RapidIO instead of 40 Gbps InfiniBand (which most storage industry folks think is dead) or FCoE for the engine interconnects, but if the engineers at EMC chose RapidIO as the means to connect the V-Max engines, there has to be a reason behind it. Edit 10/15/2009 12:50 PM: Enginuity more or less doesn't care about the underlying switching technology, so a switch from RapidIO to FCoE or InfiniBand could be accomplished without a lot of pain. Though for customers already invested in RapidIO (with existing V-Max systems), changing the underlying fabric might mean offline time, which in most cases is unacceptable.

21. Virtual Provisioning on Virtual LUNs, which is not supported with the existing generation of microcode on V-Max systems.

22. Atmos is currently a beta release, and we should expect a market release this quarter. Should we expect integration between V-Max and Atmos? I am not aware of any integration today.

23. A very interesting feature to have on the EMC Symmetrix V-Max would be system partitioning, where you could run half the V-Max engines at a certain microcode level with a certain set of features, while the other half is treated as a completely separate system with its own identity (almost like a mainframe environment). Shouldn't this be a feature of a modular storage array?

24. Symmetrix Management Console (SMC) and VMware integration (like VMware-aware Navisphere and Navisphere-aware VMware). There is already quite a bit of VMware support in SMC for provisioning and allocation.

25. Also, a much tighter integration between IONIX, FAST, SMC, Navisphere and Atmos may after all be the secret sauce enabling workflow, dataflow and, importantly, automation. Though do not expect this integration now; it is something to look forward to next year.
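The capacity and cache arithmetic in points 9 and 15 is easy to sanity-check. A quick back-of-the-envelope calculation, using the speculated figures from the list above (these are rumored numbers, not official EMC specifications):

```python
# Back-of-the-envelope capacity math for the rumored V-Max upgrades.
# Drive counts and sizes are the speculated figures discussed above,
# not official EMC specifications.

MAX_DRIVES = 2400          # max drives in a fully configured V-Max
DRIVE_TB = 2               # rumored 2TB SATA II drives

raw_tb = MAX_DRIVES * DRIVE_TB
raw_pb = raw_tb / 1000     # decimal TB -> PB

print(f"raw capacity: {raw_tb} TB (~{raw_pb:.1f} PB)")

# Cache: 1TB total / 512GB usable today, speculated to double.
total_cache_gb = 2 * 1024              # speculated 2TB total cache
usable_cache_gb = total_cache_gb // 2  # mirrored cache, so half usable
print(f"cache: {total_cache_gb} GB total, {usable_cache_gb} GB usable")
```

The math works out to roughly 4.8 PB raw at 2,400 x 2TB, and 1024GB usable cache if the mirrored-cache ratio stays the same after the doubling.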

Summary

I am still a bit confused about where FAST will physically sit.

FAST v1 may merely be a feature integrated within the microcode, configurable and driven through policy within the Symmetrix Management Console.

FAST v2 (sometime mid-2010) will support in-box and out-of-box (e.g., Symmetrix to Clariion to Celerra to Centera) data movement through a policy engine.

Ed Saipetch and I have speculated on GestaltIT about how that may work. After some thought, I believe a policy engine could merely be a VM or a vApp sitting outside the physical storage system in the storage environment.

To promote sales of the EMC Symmetrix V-Max systems, Barry Burke notes in his blog post that Open Replicator, Open Migrator and SRDF/DM (Data Mobility) are now available at no cost to customers purchasing a new EMC Symmetrix V-Max system. These are some of the incentives EMC is offering to further promote sales of its latest-generation Symmetrix technology.

It remains to be seen what path of success FAST will carve for the Symmetrix V-Max systems.

True IT – Storage Stories: 5 (8,000 Users on MS Exchange)

October 7th, 2009

That unfortunate Thursday morning, around 8:30 AM PST, all hell broke loose.

This customer had a typical setup with BladeCenter servers and a SAN, providing MS Exchange email services to about 8,000 internal users within the organization: clustered BladeCenter servers and multiple switches, connected to one backend storage subsystem serving all user email.

Though the BladeCenter servers were pretty new, the SAN in the backend had just come off its manufacturer's warranty. The customer was deciding to migrate to a new storage subsystem, but in the meanwhile had let support on this storage subsystem lapse in favor of T&M support; in short, no one was monitoring failures, errors or events on this storage subsystem. That morning, for some unknown reason, the entire storage subsystem powered off by itself. With UPS protection and generators in the environment, this behavior was very unusual. It caused the MS Exchange databases, logs and mailboxes to fail, and 8,000 users lost email service. Yes, all the executives of the company were on the same system.

The call was escalated within a few minutes; since this caused a company-wide outage, everyone was trying to figure out what had just happened. A T&M call was placed with the manufacturer to fix the system (note, I didn't say diagnose the problem); SEV 1 calls are pretty nasty. They showed up immediately because of what had happened, and the spares arrived within an hour. Three hours total and the system was ready to be powered back up, plus another hour or so for the final health check and to initialize all the mount points, servers, clusters, services, etc.

4 hours of outage, 8000 users affected.

The problem was narrowed down to multiple failed power supplies in the controller enclosure. Due to the lack of monitoring and support, the earlier power-supply failures went undetected, and another failed power supply that morning brought the entire storage subsystem to its knees.

Lesson Learnt:

So it's very important to decide which systems can have a lapse of contract or coverage, and which are business-critical systems that need 24x7 coverage. Have the vendor check for failures regularly. Though this customer had a pretty good investment in IT infrastructure, they didn't think about replication or CDP solutions for their MS Exchange.

As much as it sounds unreal, I have heard of several customers today who perform a "WALK THE DATACENTER" every week, where their technicians go from rack to rack checking for amber lights. Errors like these tend to get picked up with those practices in place.
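The "walk the datacenter" practice is easy to approximate in software. Here is a minimal, hypothetical sketch of a poller that flags failed components before a second failure takes a whole subsystem down. The enclosure structure and status values are invented for illustration; a real array would expose this data via SNMP, a CLI or a vendor API.

```python
# Hypothetical health poller: enclosure layout and status strings are
# invented for illustration, not taken from any real array's interface.

def check_enclosure(enclosure):
    """Return alert strings for any non-OK components in one enclosure."""
    alerts = []
    for component, status in enclosure["components"].items():
        if status != "ok":
            alerts.append(f"{enclosure['id']}: {component} is {status}")
    return alerts

def walk_the_datacenter(enclosures):
    """Software equivalent of checking every rack for amber lights."""
    alerts = []
    for enc in enclosures:
        alerts.extend(check_enclosure(enc))
    return alerts

enclosures = [
    {"id": "ctrl-encl-0", "components": {"ps-a": "ok", "ps-b": "failed"}},
    {"id": "disk-encl-1", "components": {"ps-a": "ok", "ps-b": "ok"}},
]
print(walk_the_datacenter(enclosures))
# one failed supply is survivable -- but only if someone notices it
```

Run on a schedule and wired to email or paging, even something this simple would have caught the first power-supply failure in the story above long before the second one took Exchange down.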

Having world-class power systems, UPSs and generators will not help with these issues.

After all, the question is: what did the customer save or lose by leaving this storage subsystem unsupported?

HP Techday 2009: The Final Thoughts!

October 2nd, 2009

This is my 5th consecutive post on HP TechDay in Colorado Springs.

HP Techday 2009 Updates

HP TechDay 2009 Day 0

HP TechDay 2009 Day 1

HP TechDay 2009 Day 2


HP facilities in Colorado Springs, a Satellite view

..

Positives

This event was a very smart move by HP, and as far as I can see it exceeded their expectations. The hash tag #hptechday truly dominated Twitter both Monday and Tuesday. The after-discussions have taken the blogging world, Twitter and the Internet press by surprise with the number of tweets, blogs and press articles written about this event.

Clearly, for me this was a good platform to learn, understand and share some visions and technologies related to HP storage products. An event like this helps connect the dots between future products and emerging technologies.

The R&D and engineering teams gave us a good background on the inner workings of the storage technology, though not necessarily on how all the technologies mesh together. There were some awkward moments, but overall they pulled it together really well. The marketing folks spoke about strategy related to these technologies and painted an overall picture. The mix of people involved in the presentations and demos seemed to accomplish the agenda, with marketing pitches backed by engineering details.

HP really left the competition out of all the discussions except the hands-on lab. No mentions of EMC, NetApp, IBM, Cisco or HDS. The hands-on lab did have an exercise on a NetApp FAS2050C and an EMC Clariion CX4-120 for LUN provisioning, in comparison to LUN provisioning on HP EVAs. It was a positive strategy by HP not to compare its products to those of EMC, NetApp and others.

A lot of discussion revolved around virtualization with VMware, Xen and Hyper-V, but HP made it clear it is VMware's largest revenue-producing partner and would like to remain so.

..

Challenges

Platforms like EVA, SVSP, LeftHand, IBRIX and D2D were discussed. Independently, every platform looks very interesting and very compelling. But an integration vision was still lacking: a direction or strategy for how these pieces of the puzzle will be joined together to form a common storage platform. Though HP clearly seems to be moving toward converged data centers.

HP clearly has very big competition in the storage market from already proven vendors and their technologies: EMC, NetApp, IBM, Cisco and VMware in storage, networking and converged data centers. Strong emerging technologies could also cause market nuisance or focus disruption for HP.

One of the biggest problems I saw was that HP has these segments of storage and technology rather than a unified vision, or at least it didn't come across as one. There are pockets of storage, like the EVA, SVSP and LeftHand teams and so forth; I am not sure there is technology sharing, or a move toward integrating all these technologies to form a unified storage platform. Though ProLiant is the chosen platform for all LeftHand and converged data center products.

HP still needs a very strong storage technology in the enterprise space that is its own and not OEM'd. The truth is, the HP-Hitachi relationship eventually has to come to an end if HP introduces a new product that competes in the same market space. That strategy would make HP very unique in the markets it serves, with its own in-house storage products for SMB, midsize and enterprise customers.

Other things lacking from HP were a cloud strategy (if they ever plan to enter that space), unified storage details, FCoE discussions, ProCurve, deduplication platform discussions, IBRIX technology integration details, storage management, storage optimization and the XP.

It may have been very hard to cover all these platforms in a day and a half with all the technology details behind them. Also remember this was a non-NDA session, so we were not privy to all the future products and technologies.

..

Summary

Overall, HP hammered us for two consecutive days with HP storage technology. Coming out of it, I can truly say I didn't realize HP had so much focus on storage. Their move to hire Dave Donatelli was a smart one, and I hope over the next year, as the storage business moves under him, he will insert some new strategic direction.

HP was the first OEM to arrange a TechDay and open it to bloggers as an invite-only event. The ratio of bloggers to HP personnel was 1:2, giving everyone a lot of attention.

Now the question is who will be the next OEM to do a similar event, and what they will do to prove themselves different. I am already hearing some buzz in the industry about the effects of HP TechDay and possible events from other OEMs.

But I clearly see the advantage of an event like this and its after-effects. Good move, HP Storage marketing team!