Posts Tagged ‘SRDF’

VMAX makes a Re-Debut with lots of enhancements

January 19th, 2011

With the recent buzz from EMC about #EMCBreaksRecords on Twitter, Facebook and many other social media avenues, it was pretty clear that EMC had been quietly preparing for one of the biggest announcements it has made, which came today.

Along with FAST VP and upgrades to the VMAX, EMC also announced the much-anticipated unified storage platform VNX. The VNX strategy had been visible at EMC for at least the past year and was overdue for a release.

FAST, FAST VP…

We have been talking about FAST for quite a few years now, at least a year and nine months. Today, at the big event in NYC, EMC released a new version of FAST on VMAX known as FAST VP (Fully Automated Storage Tiering for Virtual Pools). Next, as we know, would be FAST to the FULLEST (possibly incorporating the VPLEX solution to move data across different subsystems).

With all the new features wrapped into Enginuity 5875, this release is considered one of the most aggressive code releases by EMC for the Symmetrix platform.

By the way, if you didn’t know, Symmetrix is now more than 20 years old, probably one of the longest-living brands in the storage world – still alive – and kicking a**. Here is a blog post on the 20-year history of Symmetrix.

http://storagenerve.com/2009/12/15/symmetrix-the-journey-of-20-years/

Some highlights of Enginuity 5875 include:

  • FAST VP (Virtual Pools) – SUB LUN Tiering
  • D@RE (Data At Rest Encryption, an RSA-based encryption technology) added natively
  • VSI (Virtual Storage Integrator)
  • Addition of 10GbE SRDF and iSCSI directors
  • FLM (Federated Live Migration)
  • VAAI Support
  • ZPR or Zero Page Reclaim

There are some great posts already out by EMC bloggers and many other independent bloggers around VMAX, FAST VP and the enhancements to the VMAX platform, so I am not going to reinvent the wheel here.

Barry Burke has pushed out some very interesting posts on the features wrapped around in Enginuity 5875. This post talks about some featured enhancements to the VMAX platform, customer benefits around migration, sub lun tiering, encryption, VAAI support, etc.

http://thestorageanarchist.typepad.com/weblog/2011/01/3017-vmax-2011-edition-powerful-trusted-smartest.html

A few other interesting posts here by Barry on FAST VP and how it works in customer environments.

A few questions / comments / observations on FAST VP

  • As I understand, VMAX Systems will need to be at Enginuity 5875 code level before you qualify for FAST VP, not a big deal (for some).
  • You need licenses for FAST VP.
  • Policies for FAST VP are configured from the SMC?
  • If you are already a FAST customer on VMAX and want to migrate to FAST VP, I suspect the licensing will change with this new feature.
  • As a customer of FAST today, migrating to FAST VP for sub-LUN-level tiering may require you to reanalyze your business requirements associated with virtual pools.
  • Can today’s FAST policies (time of day, IOPS, etc.) be migrated to FAST VP policies?
  • FAST configuration for VMAX was typically done by the Professional Services group; I suspect the same is true for FAST VP.
  • Tier Advisor, which is required to perform the analysis in your environment, is only available through EMC pre- and post-sales folks.
  • Though the native functionality of FAST VP is built into the Enginuity code, the activity and logic are driven from the service processor of the Symmetrix system; again, I suspect SMC is doing this work. I also suspect that without a functional SP, FAST VP policies do not get updated and the process dies.
  • I do not know whether the 768KB extent size on a VMAX is functionally or operationally advantageous compared to a 42MB extent.
  • FAST VP policies seem to be very granular.
  • Once FAST VP is set up on Virtual Pools in a Storage Group, addition of any other Virtual Pools would require the customer to analyze the new pools (using Tier Advisor) before setting policies.

http://thestorageanarchist.typepad.com/weblog/2011/01/3018-fast-vpworlds-smartest-storage-tiering-part-1.html#more

http://thestorageanarchist.typepad.com/weblog/2011/01/3019-fast-vp-worlds-smartest-storage-tiering-part-2.html#more

The above posts from Barry are quite detailed and informative; if you are into Symmetrix technology, I highly recommend reading them.

I discovered the blog by Itzikr from EMC Israel about two weeks ago.

Itzikr talks in detail about Enginuity 5875, support for VAAI, VMware integration and FAST VP.

http://itzikr.wordpress.com/2011/01/17/emc-symmetrix-vmax-enginuity-5875-fast-vp-vaai-making-the-best-array-even-better/

And as usual a very informative and lengthy post from Chuck Hollis on VMAX platform.

http://chucksblog.emc.com/chucks_blog/2011/01/symmetrix-vmax-gets-even-smarter.html#more

A video stream from Wikibon covering today’s announcement from EMC. The first few minutes of the video talk about Symmetrix VMAX FAST VP.

http://www.justin.tv/wikibon/b/277797896

And lastly, one of my favorite posts from Nigel Poulton talking about FAST VP from late December.

http://blog.nigelpoulton.com/vmax-comes-of-age/

Enjoy reading the above posts!! Courteous comments welcome. Again thanks for visiting and reading the blog.

EMC Symmetrix: BIN file

March 12th, 2010

The EMC Symmetrix BIN file is largely an unknown topic in the storage industry, with practically no publicly available information about it. This post is an attempt to shed some light on what a BIN file is, how it works, what’s in it and why it is essential to the Enginuity code.

Some EMC folks have capitalized on the BIN file for the personality it brings to the Symmetrix, while EMC’s competition has always used it against them, arguing that it introduces complexity into storage management and change control.

Personally, I feel a Symmetrix wouldn’t be a Symmetrix if the BIN file weren’t there. The personality, characteristics, robustness, compatibility, flexibility, integration with OSes, etc. wouldn’t be there if the BIN file didn’t exist.

The total number of OSes, device types, channel interfaces and flags it supports today makes it one of the most compatible storage arrays on the market. Configuration and compatibility on the Symmetrix can be verified using the E-Lab Navigator available on Powerlink.

So here are some facts about the BIN file

  • Only used with Symmetrix systems (Enginuity code).
  • BIN file stands for BINARY file.
  • The BIN file holds all information about the Symmetrix configuration.
  • One BIN file per system serial number is required.
  • The BIN file was used with Symmetrix Gen 1 in 1990 and is still used in 2010 with Symmetrix V-Max systems.
  • The BIN file holds information on SRDF configurations, total memory, memory in slots, serial number of the unit, number of directors, type of directors, director flags, engines, engine ports, front-end ports, back-end ports, drives on the loop, drives on the SCSI bus, number of drives per loop, drive types in the slots, drive speeds, volume addresses, volume types, metas, device flags and many more settings.
  • The host connection setup for Open Systems or Mainframe environments (FICON, ESCON, GbE, FC, RF, etc.) is all defined in the BIN file, as are director emulations, drive formats (OSD or CKD), format types, drive speeds, etc.
  • A BIN file is required to make a system active. It is created based on customer specifications and installed by EMC during the initial setup.
  • Any ongoing changes in the environment related to hardware upgrades, defining devices, changing flags, etc. are all accomplished through BIN file changes.
  • BIN file changes can be accomplished in 3 ways.
  • BIN file changes for hardware upgrades are typically performed by EMC only.
  • BIN file changes for other items – devices, directors, flags, metas, SRDF configurations, etc. – are performed by the customer through the SYMAPI infrastructure using SymCLI, ECC (now Ionix) or SMC (Symmetrix Management Console). (Edited based on the comments: only some changes now require a traditional BIN file change; others are typically performed using sys calls in the Enginuity environment.)
  • Solutions Enabler is required on the SymCLI, ECC and SMC management stations for the SYMAPI infrastructure to operate.
  • VCMDB needs to be set up on the Symmetrix for SymCLI, ECC and SMC related changes to work.
  • Gatekeeper devices need to be set up on the Symmetrix front-end ports for SymCLI, ECC and SMC changes to work.
  • For Symmetrix Optimizer to work in your environment, you need DRV devices set up on your Symmetrix. (EDITED based on comments: only required up to the DMX platform. With the DMX-3/4 and V-Max platforms, sys calls are used to perform these Optimizer changes.)


Back in the day

Any and all BIN file changes on Symmetrix 3.0 and Symmetrix 4.0 used to be performed by EMC from the Service Processor. Over the years, with the introduction of SYMAPI and other layered software products, EMC is now seldom involved in the change process.

Hardware upgrades

These BIN file changes have to be initiated and performed by EMC. If the customer is looking at adding 32GB of cache to an existing DMX-4 system, adding new front-end connectivity, or upgrading a 1,200-drive system to 1,920 drives, all of these require BIN file changes initiated and performed by EMC. To my understanding the turnaround time is just a few days, as the changes go through change control and other processes within EMC.

Customer initiated changes

Configuration changes around front-end ports, creating volumes, creating metas, volume flags, host connectivity, configuration flags, SRDF volume configurations, SRDF replication configurations, etc. can all be accomplished by the customer using the SYMAPI infrastructure (with SymCLI, ECC or SMC).

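As a sketch of what such a customer-initiated change looks like through SymCLI, here is a hypothetical symconfigure session. The SID, device count and size are made-up values; the preview action validates the command file, while commit applies the change:

```shell
# Hypothetical example -- SID 1234 and the device parameters are placeholders.
# "preview" validates the syntax of the request without touching the array.
symconfigure -sid 1234 -cmd "create dev count=4, size=4380, emulation=FBA, config=2-Way-Mir;" preview

# "commit" actually applies the configuration change to the Symmetrix.
symconfigure -sid 1234 -cmd "create dev count=4, size=4380, emulation=FBA, config=2-Way-Mir;" commit
```

Exact option syntax varies by Solutions Enabler version, so treat this as a shape of the workflow rather than a copy-paste recipe.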

Enginuity upgrade

Upgrading the microcode (Enginuity) on a DMX or a V-Max is not a BIN file change, but rather a code upgrade. Back in the day, many upgrades were performed offline, but in this day and age all changes are online and accomplished with minimal pain.

Today

So EMC has moved quite ahead with the Symmetrix architecture over the past 20 years, but the underlying BIN file change requirements haven’t changed over these 8 generations of Symmetrix.

Any and all BIN file changes are recommended to be done during quiet times (lower IOPS), at scheduled change-control windows. Again, this includes both the changes EMC performs from a hardware perspective and the device/flag changes the customer performs.

The process

During the process of a BIN file change, the configuration file typically ending with the name *.BIN is loaded to all the frontend directors, backend directors, including the global cache. After the upload, the system is refreshed with this new file in the global cache and the process makes the new configuration changes active. This process of refresh is called IML (Initial Memory Load) and the BIN file is typically called IMPL (Initial Memory Program Load) file.

A customer-initiated BIN file change works in a similar way: the SYMAPI infrastructure that resides on the service processor allows the customer to interface with the Symmetrix to perform these changes. During this process, the scripts verify that the customer configurations are valid, then perform the changes and make the new configuration active.

To query the Symmetrix system for configuration details, reference the SymCLI guide. Some standard commands to query your system include symcfg, symcli, symdev, symdisk, symdrv, symevent, symhost, symgate, syminq and symstat; these will help you navigate and find all the necessary details about your Symmetrix. Similar information can be obtained in a GUI using ECC and SMC, and both will allow the customer to initiate SYMAPI changes.
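A few of those query commands can be sketched as follows; these are all read-only, and the serial number (-sid 1234) is a placeholder for your array:

```shell
# Read-only queries against a Symmetrix; -sid 1234 is a hypothetical serial number.
symcfg list                # list the Symmetrix arrays visible to this host
symcfg -sid 1234 list -v   # detailed configuration (code level, cache, directors)
symdev -sid 1234 list      # all configured devices on the array
symdisk -sid 1234 list     # physical disks behind the back-end directors
syminq                     # SCSI inquiry of host-visible devices (incl. gatekeepers)
```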

Unless something has changed with the V-Max, typically to get an excel based representation of your BIN file, ask your EMC CE.


Issues

You cannot run two BIN files in a single system, though at times the system can end up in a state where multiple BIN files exist on various directors. This typically doesn’t happen too often, but an automated script that does not finish properly can put the system in this state. At that point the Symmetrix will immediately initiate a call home, and the PSE labs should typically be able to resolve the issue.

Additional software like Symmetrix Optimizer also uses the underlying BIN file infrastructure to make changes to the storage array, moving hot and cold devices based on defined criteria. There have been quite a few known cases of Symmetrix Optimizer causing the above phenomenon of multiple BIN files, though many critics will disagree with that statement. (EDITED based on comments: only required up to the DMX platform. With the DMX-3/4 and V-Max platforms, sys calls are used to perform these Optimizer changes.)

NOTE: One piece of advice: never run SymCLI or ECC scripts for BIN file changes from a VPN-connected desktop or laptop. Always run the necessary SymCLI / SMC / ECC scripts from a server in your local environment. And I very highly recommend never attempting to administer your Symmetrix system from an iPhone or a BlackBerry.

I hope this serves as a starting point in your quest for more information on BIN files.

Cheers
@storagenerve

EMC Symmetrix File System (SFS)

March 8th, 2010

Very little is known about the Symmetrix File System, largely referred to as SFS. The Symmetrix File System is EMC intellectual property and is used practically only within the Symmetrix environment for housekeeping, security, access control, stats collection, performance data, algorithm selection, etc.

If there are any facts about SFS that are known to you, please feel free to leave a comment. This post talks about the effects of SFS and not really the underlying file system architecture.

Some facts about the Symmetrix File System are highlighted below.

  • The Symmetrix File System (SFS) resides on volumes specially created for this purpose on the Symmetrix
  • SFS volumes are created during the initial Enginuity Operating Environment load (Initial install)
  • 4 Volumes (2 Mirrored Pairs) are created during this process
  • SFS volumes were introduced with Symmetrix Series 8000, Enginuity 5567 and 5568

Characteristics

  • 4 SFS volumes are spread across multiple Disk Directors (Backend Ports) for redundancy
  • SFS volumes are considered reserved space and are not available for use by the host
  • Symmetrix 8000 Series: 4 SFS volumes, 3GB each (cylinder size 6140). Reserved space is 3GB x 4 vols = 12 GB total
  • Symmetrix DMX/DMX-2: 4 SFS volumes, 3GB each (cylinder size 6140). Reserved space is 3GB x 4 vols = 12 GB total
  • Symmetrix DMX-3/DMX-4: 4 SFS volumes, 6GB each (cylinder size 6140). Reserved space is 6GB x 4 vols = 24 GB total. (The GB figure for the same cylinder count is calculated differently on a DMX/DMX-2 vs. a DMX-3/DMX-4.)
  • Symmetrix V-Max: 4 SFS volumes, 16GB each, Reserved space is 16GB x 4 vols = 64GB total
  • SFS volumes cannot reside on EFD (Enterprise Flash Drives)
  • SFS volumes cannot be moved using FAST v1 and/or FAST v2
  • SFS volumes cannot be moved using Symmetrix Optimizer
  • SFS volumes cannot reside on Vault Drives or Save Volumes
  • SFS volumes are specific to a Symmetrix (Serial Number) and do not need migration
  • SFS volumes are managed through Disk Directors (Backend Ports) only
  • SFS volumes cannot be mapped to Fiber Directors (now FE – Frontend Ports)
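The jump from 3GB to 6GB per SFS volume at the same 6140-cylinder size is consistent with a change in track geometry between generations. Assuming 15 tracks per cylinder, with 32KB tracks on DMX/DMX-2 and 64KB tracks on DMX-3/DMX-4 (my assumption about the geometry, not stated in the sizing above), the arithmetic works out:

```shell
# Assumed geometry: 15 tracks per cylinder; 32KB tracks on DMX/DMX-2,
# 64KB tracks on DMX-3/DMX-4. Cylinder count comes from the list above.
cyls=6140
dmx_bytes=$((cyls * 15 * 32 * 1024))    # DMX/DMX-2 SFS volume size in bytes
dmx3_bytes=$((cyls * 15 * 64 * 1024))   # DMX-3/DMX-4 SFS volume size in bytes
echo "DMX/DMX-2:   $((dmx_bytes / 1000000000)) GB per volume"    # prints 3 GB
echo "DMX-3/DMX-4: $((dmx3_bytes / 1000000000)) GB per volume"   # prints 6 GB
```

With four volumes each, that matches the 12 GB and 24 GB reserved-space totals listed above.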

Effects

  • SFS volumes are write enabled but can only be interfaced and managed through the Disk directors (Backend Ports).
  • SFS volumes can go write disabled, which could cause issues around VCMDB. VCMDB issues can cause host path (HBA) and disk access issues.
  • SFS volume corruption can cause hosts to lose access to disk volumes.
  • If SFS volumes get unmounted on a Fiber Director (Frontend Port), it can result in DU (Data Unavailable) situations.

Fixes

  • Since the SFS volumes are only interfaced through the Disk Directors (Backend Ports), the PSE lab will need to be involved in fixing any issues.
  • SFS volumes can be VTOC’ed (formatted) and some key information below will need to be restored upon completion. Again this function can only be performed by PSE lab.
  • SFS volumes can be formatted while the Symmetrix is running, but in a SCSI-3 PGR reservation environment it will cause a cluster outage and/or a split brain.
  • No Symmetrix software (TimeFinder, SYMCLI, ECC, etc.) will be able to interface with the system while the SFS volumes are being formatted.
  • The security auditing / access control feature is disabled during the format of SFS volumes, causing any Symmetrix internal or external software to stop functioning.
  • Access Control Database and SRDF host components / group settings will need to be restored after the SFS format

Access / Use case

  • Any BIN file changes to map SFS volumes to host will fail.
  • SFS volumes cannot be managed through SYMCLI or the Service Processor without PSE help.
  • SYMAPI (infrastructure) works along with SYMMWIN and the SFS volumes to obtain locks, etc. during any SYMCLI / SYMMWIN / ECC activity (e.g. BIN changes).
  • Since FAST v1 and FAST v2 reside as a policy engine outside the Symmetrix, they use the underlying SFS volumes for changes (locks, etc.).
  • Performance data relating to FAST would be collected within the SFS volumes, which FAST policy engine uses to gauge performance.
  • Performance data relating to Symmetrix Optimizer would be collected within the SFS volumes, which Optimizer uses to gauge performance.
  • Other performance data collected for the DMSP (Dynamic Mirror Service Policy).
  • All audit logs, security logs, the access control database, ACLs, etc. are stored within the SFS volumes.
  • All SYMCLI, SYMAPI, Solutions enabler, host, interface, devices, access control related data is gathered on the SFS volumes.
  • With the DMX-4 and the V-Max, all service processor access, service-processor-initiated actions, denied attempts, RSA logs, etc. are stored on SFS volumes.

Unknowns

  • SFS structure is unknown
  • SFS architecture is unknown
  • SFS garbage collection and discard policy is unknown
  • SFS records stored, indexing, etc is unknown
  • SFS inode structures, function calls, security settings, etc is unknown

As more information becomes available, I will try to update this post. I hope this is useful in your research on SFS volumes…

Cheers

@storagenerve

Symmetrix V-Max Systems: SRDF Enhancements and Performance

September 10th, 2009

So this was one of those posts I always wanted to write, covering the Symmetrix V-Max SRDF enhancements that were incorporated with the 5874 microcode.

Yesterday morning I had a chat with a friend and we ended up talking about SRDF; later in the day I had another interesting conference call on SRDF with a potential customer. So I thought today was the day to go ahead and finish this post.

Back in April 2009, when the V-Max systems were initially launched, Storagezilla had a post on V-Max and SRDF features; he covers quite a bit of ground on SRDF groups and SRDF/EDP (Extended Distance Protection).

Here are the highlights of SRDF for V-Max Systems

SRDF Groups:

  1. 250 SRDF groups with Symmetrix V-Max (5874) systems; the prior-generation Symmetrix DMX-4 (5773) supported 128 groups. Logically, even with 2PB of storage, customers very seldom hit the 250-group mark.
  2. 64 SRDF groups per FC / GigE channel; the previous-generation Symmetrix DMX-4 (5773) supported 32 groups per channel.

SRDF Consistency support with 2 mirrors:

  1. Each leg is placed in a separate consistency group so it can be changed separately without affecting the other.

Active SRDF Sessions and addition/removal of devices:

  1. Customers can now add or remove devices from a group without invalidating the entire group; once a device is fully synced, it can be added to the consistency group. (With the previous-generation Symmetrix DMX-4, a single device add or remove would invalidate the entire group, requiring customers to run a full establish again.)
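A hedged sketch of what adding and removing dynamic SRDF pairs looks like with SymCLI; the SID, RDF group number and device pairs are hypothetical, and exact option ordering can vary by Solutions Enabler version:

```shell
# pairs.txt lists R1/R2 device pairs, one pair per line, e.g.:
#   0100 0200
# Create the dynamic pairs in RDF group 10 and start the initial sync.
symrdf -sid 1234 -rdfg 10 -file pairs.txt createpair -type R1 -establish

# Later, remove just those pairs without invalidating the rest of the group
# (pairs are typically suspended before deletepair).
symrdf -sid 1234 -rdfg 10 -file pairs.txt suspend
symrdf -sid 1234 -rdfg 10 -file pairs.txt deletepair
```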

SRDF Invalid Tracks:

  1. The “long tail” (last few tracks) search has been vastly improved; the search procedure and methods for the “long tail” have been completely redesigned. It is a known fact with SRDF that the last invalid tracks take a long time to sync, as the search goes through cache.
  2. SRDF establish operation speed has improved by at least 10X; see the numbers below in the performance data.

Timefinder/Clone & SRDF restores:

  1. Customers can now restore Clones to R2s and R2s to R1s simultaneously; with the DMX-4 this was previously a 3-step process.

SRDF /EDP (Extended Distance Protection):

  1. 3-way SRDF for long distance with secondary site as a pass through site using Cascaded SRDF.
  2. For Primary to Secondary sites customers can use SRDF/S, for Secondary to Tertiary sites customer can use SRDF/A
  3. Diskless R21 pass-through device, where the data is not stored on the drives and does not consume disk; the R21 really lives in cache, so the host cannot access it. More cache is needed depending on the amount of data transferred.
  4. R1 — S –> R21 — A –> R2 (Production site > Pass-thru Site > Out-of-region Site)
  5. Primary (R1) sites can have DMX-3 or DMX-4 or V-Max systems, Tertiary (R2) sites can have DMX-3 or DMX-4 or V-Max systems, while the Secondary (R21) sites needs to have a V-Max system.
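As a rough sketch, basic SRDF operations against a device group look something like this with SymCLI. The group name prod_dg is hypothetical, and a real cascaded SRDF/EDP setup involves additional per-hop group configuration not shown here:

```shell
# Hypothetical device group "prod_dg"; in an EDP topology the synchronous mode
# would apply to the R1 -> R21 hop, with SRDF/A on the R21 -> R2 hop.
symrdf -g prod_dg establish        # start/resume replication for the group
symrdf -g prod_dg set mode sync    # SRDF/S on this hop
symrdf -g prod_dg query            # check pair states and invalid tracks
symrdf -g prod_dg failover         # make the remote side read-write in a DR event
```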

R22 – Dual Secondary Devices:

  1. R22 devices can act as target devices for 2 x R1 devices
  2. Only one source device at a time can be read/write to the R22 device
  3. RTO is improved when the primary site goes down

Other Enhancements:

  1. Dynamic Cache Partitioning enhancements
  2. QoS for SRDF/S
  3. Concurrent writes
  4. Linear Scaling of I/O
  5. Response times equivalent across groups
  6. Virtual Provisioning supported with SRDF
  7. SRDF supports linking Virtual Provisioned device to another Virtual Provisioned device.
  8. Much faster dynamic SRDF operations
  9. Much faster failover and failback operations
  10. Much faster SRDF syncs

Some very limited V-Max Performance Stats related to SRDF:

  1. 36% improved FC performance
  2. FC I/O per channel up to 5000 IOPS
  3. GigE I/O per channel up to 4000 IOPS
  4. 260 MB/sec RA channel I/O rate, compared to 190 MB/sec with the DMX-4
  5. 90 MB/sec GigE channel I/O rate, almost the same as the DMX-4
  6. 36% improvement on SRDF Copy over FC
  7. New SRDF pairs can be created in 7 secs compared to 55 secs with previous generations
  8. Incremental establishes after splits happen in 3 seconds compared to 6 secs with previous generations
  9. Full SRDF establishes happen in 4 seconds compared to 55 seconds with previous generations
  10. Failback SRDF happen in 19 seconds compared to 47 seconds with previous generations

To read more about V-Max systems follow

http://storagenerve.com/tag/v-max

To read more about SRDF systems follow

http://storagenerve.com/tag/srdf