
EMC Symmetrix / DMX SRDF Setup

January 26th, 2009


This blog talks about setting up basic SRDF related functionality on the Symmetrix / DMX machines using EMC Solutions Enabler Symcli.

For this setup, let’s use two different hosts: the local host will access the R1 (source) volumes and the remote host will access the R2 (target) volumes.

A mix of R1 and R2 volumes can reside on the same Symmetrix. In short, you can configure SRDF between two Symmetrix machines so that each acts as local for some volumes and as remote for the others.

Step 1

Create SYMCLI Device Groups. Each group can have one or more Symmetrix devices specified in it.

SYMCLI device group information (name of the group, type, members, and any associations) is maintained in the SYMAPI database.

In the following steps we will create a device group that includes two SRDF volumes.

SRDF operations can be performed from the local host that has access to the source volumes or the remote host that has access to the target volumes. Therefore, both hosts should have device groups defined.

Complete the following steps on both the local and remote hosts.

a) Identify the SRDF source and target volumes available to your assigned hosts. Execute the following commands on both the local and remote hosts.

# symrdf list pd (execute on both local and remote hosts)


# syminq

b) To view all the RDF volumes configured in the Symmetrix, use the following:

# symrdf list dev

c) Display a synopsis of the symdg command and reference it in the following steps.

# symdg -h

d) List all device groups that are currently defined.

# symdg list

e) On the local host, create a device group of type RDF1. On the remote host, create a device group of type RDF2.

# symdg -type RDF1 create newsrcdg (on local host)

# symdg -type RDF2 create newtgtdg (on remote host)

f) Verify that your device group was added to the SYMAPI database on both the local and remote hosts.

# symdg list

g) Add your two devices to your device group using the symld command. Again, use the -h option for a synopsis of the command syntax.

On local host:

# symld -h

# symld -g newsrcdg add dev ###

# symld -g newsrcdg add pd Physicaldrive#

On remote host:

# symld -g newtgtdg add dev ###

# symld -g newtgtdg add pd Physicaldrive#

h) Using the syminq command, identify the gatekeeper devices. Determine whether each gatekeeper is defined in the SYMAPI database; if not, define it, then associate it with your device group.

On local host:

# syminq

# symgate list (Check SYMAPI)

# symgate define pd Physicaldrive# (to define)

# symgate -g newsrcdg associate pd Physicaldrive# (to associate)

On remote host:

# syminq

# symgate list (Check SYMAPI)

# symgate define pd Physicaldrive# (to define)

# symgate -g newtgtdg associate pd Physicaldrive# (to associate)

i) Display your device groups. The output is verbose, so pipe it to more.

On local host:

# symdg show newsrcdg |more

On remote host:

# symdg show newtgtdg | more

j) Display a synopsis of the symld command.

# symld -h

k) Rename DEV001 to NEWVOL1.

On local host:

# symld -g newsrcdg rename DEV001 NEWVOL1

On remote host:

# symld -g newtgtdg rename DEV001 NEWVOL1

l) Display the device group on both the local and remote hosts.

On local host:

# symdg show newsrcdg |more

On remote host:

# symdg show newtgtdg | more
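The Step 1 commands lend themselves to scripting. Below is a minimal Python sketch that assembles the create/add/show sequence for one host; the device group names are the ones used above, the helper names and device numbers are my own illustrations, and the gatekeeper steps are left out for brevity. It assumes the Solutions Enabler binaries are on the PATH when actually run:

```python
import subprocess

def build_setup_cmds(dg, dg_type, devices):
    """Assemble the symdg/symld sequence from Step 1 for one host.

    dg      -- device group name, e.g. "newsrcdg" (local) or "newtgtdg" (remote)
    dg_type -- "RDF1" on the local host, "RDF2" on the remote host
    devices -- Symmetrix device numbers to add to the group
    """
    cmds = [f"symdg -type {dg_type} create {dg}"]
    cmds += [f"symld -g {dg} add dev {dev}" for dev in devices]
    cmds.append(f"symdg show {dg}")
    return cmds

def run_all(cmds):
    # symcli returns non-zero on error, so check=True stops at the first failure.
    for cmd in cmds:
        subprocess.run(cmd.split(), check=True)

# Local-host example with two hypothetical device numbers:
local_cmds = build_setup_cmds("newsrcdg", "RDF1", ["0010", "0011"])
```

On the remote host the same call becomes build_setup_cmds("newtgtdg", "RDF2", ...), mirroring the RDF1/RDF2 split above.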

Step 2

Use the SYMCLI to display the status of the SRDF volumes in your device group.

a) If on the local host, check the status of your SRDF volumes using the following command:

# symrdf -g newsrcdg query

Step 3

Set the default device group using the SYMCLI_DG environment variable.

# set SYMCLI_DG=newsrcdg (on the local host)

# set SYMCLI_DG=newtgtdg (on the remote host)

a) Check the SYMCLI environment.

# symcli -def (on both the local and remote hosts)

b) Test to see if the SYMCLI_DG environment variable is working properly by performing a “query” without specifying the device group.

# symrdf query (on both the local and remote hosts)

Step 4

Changing the operational mode. The operational mode for a device or a group of devices can be set dynamically with the symrdf set mode command.

a) On the local host, change the mode of operation for one of your SRDF volumes to enable semi-synchronous operations. Verify results and change back to synchronous mode.

# symrdf set mode semi NEWVOL1

# symrdf query

# symrdf set mode sync NEWVOL1

# symrdf query

b) Change the mode of operation to enable adaptive copy-disk mode for all devices in the device group. Verify that the mode change occurred, then disable adaptive copy.

# symrdf set mode acp disk

# symrdf query

# symrdf set mode acp off

# symrdf query
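Each mode change in Step 4 is followed by a query to verify that it took effect. That pairing can be captured in a small helper; here is a sketch (Python, command strings only, helper name my own) that assumes SYMCLI_DG is set as in Step 3, so no -g option is needed:

```python
def mode_toggle_cmds(target, modes):
    """Build the Step 4 sequence: set each mode in turn, following
    every change with a 'symrdf query' to verify it took effect.

    target -- a logical device name such as NEWVOL1, or "" to act on
              the whole device group (as in the adaptive copy example).
    """
    cmds = []
    for mode in modes:
        cmds.append(f"symrdf set mode {mode} {target}".strip())
        cmds.append("symrdf query")
    return cmds

# Step 4a: semi-synchronous on, verify, back to synchronous.
step_4a = mode_toggle_cmds("NEWVOL1", ["semi", "sync"])
# Step 4b: adaptive copy-disk on for the whole group, then off.
step_4b = mode_toggle_cmds("", ["acp disk", "acp off"])
```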

Step 5

Check the communications link between the local and remote Symmetrix.

a) From the local host, verify that the remote Symmetrix is “alive”. If the host is attached to multiple Symmetrix arrays, you may have to specify the Symmetrix Serial Number (SSN) with the -sid option.

# symrdf ping [ -sid xx ] (xx=last two digits of the remote SSN)

b) From the local host, display the status of the Remote Link Directors.

# symcfg -RA all list

c) From the local host, display the activity on the Remote Link Directors.

# symstat -RA all -i 10 -c 2

Step 6

Create a partition on each disk, format the partition, and assign a filesystem to it. Add data to the R1 volumes defined in the newsrcdg device group.

Step 7

Suspend the RDF link and add data to the filesystem. In this step we will suspend the SRDF link, add data to the filesystem, and check for invalid tracks.

a) Check that the R1 and R2 volumes are fully synchronized.

# symrdf query

b) Suspend the link between the source and target volumes.

# symrdf suspend

c) Check link status.

# symrdf query

d) Add data to the filesystems.

e) Check for invalid tracks using the following command:

# symrdf query

f) Invalid tracks can also be displayed with the symdev show command. Execute the following command against one of the devices in your device group and look at the Mirror set information.

On the local host:

# symdev show ###

g) From the local host, resume the link and monitor invalid tracks.

# symrdf resume

# symrdf query
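Taken together, steps a) through g) form a fixed sequence. The sketch below lists it as data so it can be reviewed before anything runs (Python; the dry_run guard is my own convention, and the host writes of step d) appear only as a comment):

```python
import subprocess

# Step 7 as a command sequence (device group newsrcdg, as above).
SUSPEND_RESUME_SEQUENCE = [
    "symrdf -g newsrcdg query",    # a) verify R1/R2 are synchronized
    "symrdf -g newsrcdg suspend",  # b) stop R1 -> R2 propagation
    "symrdf -g newsrcdg query",    # c) confirm the link is suspended
    # d) host writes to the R1 filesystems happen here
    "symrdf -g newsrcdg query",    # e) invalid tracks accumulate on R1
    "symrdf -g newsrcdg resume",   # g) restart propagation
    "symrdf -g newsrcdg query",    # g) watch invalid tracks drain to zero
]

def run_sequence(cmds, dry_run=True):
    """With dry_run=True, just return the commands for review;
    otherwise execute them in order, stopping on the first failure."""
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd.split(), check=True)
    return cmds
```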

In upcoming blogs, we will set up some flags for SRDF, director types, etc.

Happy SRDF’ing!!!!!

Blogging beyond the Domain of Expertise

January 22nd, 2009

Why this topic…..

Well, I have been a reader of quite a few storage blog sites since I started blogging (about 6 months ago). My blogroll makes up my daily reading every morning, and as the day progresses I get to read many more storage-related items through Twitter or other social networking media.

There is a big learning curve for me as I come to grips with these new Storage related Socio-Techno terms, Tech-notes, Standards, Core Storage Technology, Visions, Dominance, Acquisitions, OEM practices and, foremost, develop a level of expertise in what I blog.

My Comfort Zone…..

As I write these blogs, I feel the urge to write about technology more than any marketing or sales aspects, and I especially like to blog about EMC technology since I understand it best. These days I have been writing about Clariion. My understanding of Symmetrix and DMX along with other EMC technologies is good, but for now I am most comfortable writing about just one storage platform: Clariion.

Storage Industry Bloggers…..

As I read blogs from various bloggers like Chuck Hollis, Chad Sakac, Storagezilla, Dave Graham, Hu Yoshida, Storage Architect, Steve Todd and many more on my list, I feel bloggers only talk about the technology or subjects they feel most comfortable with, over and over again. I am not criticizing any of these bloggers; they are great and write good stuff. But as you start following some of them, don’t you feel the output you get from them every day is the same? Or almost the same? Or about the same subject, the same technology, the same topic?

Human Comfort Zone…..

Maybe because people understand a certain technology better than others, they like to blog about it more often. But isn’t that what blogging is all about?

Personally, I think that if a person gets really comfortable with their job profile and keeps doing the same repetitive stuff every single day, where is the innovation, the creativity and, foremost, the learning curve?

It is human nature to feel comfortable in a certain Environment, Zone, Society, Social Media, Job, etc. after achieving a certain goal or having success doing a certain thing. If we asked Hu Yoshida to write his next 10 blogs about EMC or 3Par technology, and asked the same of Chuck, to write his next 10 blogs about HDS and its technology, could they do it? If Steve Todd wrote about the subjects Chuck Hollis covers, and Dave Graham wrote his next 10 blogs about IBM SVC technology, could they do it?

As humans, we like to talk and write only about the technology, markets and subjects we feel comfortable with, rather than keeping sight of where the whole industry is going. It is very tough for one person to understand all the technologies, but then isn’t that creativity, innovation and vision? The industry should not be driven by a single OEM or a single technology; that would be the end of Innovation. Dominance is a virtue, but Competition is a win.

This blog is not against any of my fellow bloggers; it only reflects on the comfort zone of human nature. Would I personally like to blog about a subject beyond my comfort zone? Maybe yes, maybe no; maybe I should read, research and then write about that subject.

Bloggers in my blogroll are very creative and I love reading all of them; I just want to read a different topic once in a while, beyond what they write every day.

Courteous Comments always welcome…….

Data Collection from McData Switches

January 21st, 2009

After reading Diwakar’s blog on McData switch data collection, I had to continue in the same vein.

To make things easier, here is the procedure we have been using:

1. Log into Connectrix Manager
2. Go to Product View
3. Click on the switch to go to Hardware View
4. Select Maintenance
5. Select Data Collection
6. You will now be prompted with a Save As dialog
7. Name the file and save it with a .zip extension
8. Repeat for all switches

Email the files to whoever requires them for analysis. Easy, huh?

Will write something about data collection on Brocade and Cisco switches in upcoming blogs. 

Clariion Cache: Page Size

January 20th, 2009

This blog is an extension of my previous blogs related to Clariion Cache.

Clariion Cache: Idle Flushing, Watermark Flushing and Forced Flushing

Clariion Cache: Navicli Cache Commands

Clariion Cache: Read and Write Caching


There are 4 different cache page size settings available on a Clariion; the default is 8kb, with other options at 2kb, 4kb and 16kb.

You should tune your cache page size to your applications: Exchange uses 4kb data blocks, SQL uses 8kb and Oracle uses 16kb.

Let’s say you are running Exchange and SQL on your Clariion, with a cache page size of 4kb. Each Exchange data block occupies exactly one 4kb cache page, and the application and cache work together well. Now assume SQL is running on the same machine with its 8kb data block size: every SQL block is broken into 2 separate 4kb cache pages. Add Oracle with its 16kb data block size, and each data block is broken into 4 cache pages; at this point your applications will start having back-end performance issues.

In this scenario Exchange works perfectly, SQL takes a small performance hit, and Oracle is heavily impacted at 4 cache pages per data block.

Now let’s imagine a scenario where you are using a 16kb page size for Oracle as the primary application on that machine. Time goes by, the machine is upgraded with disks and so on, and it is used for SQL and Exchange along with Oracle. With a 16kb page size, Oracle runs fine. SQL puts its data into cache in 8kb blocks, but your page size is 16kb, wasting 8kb per SQL block that comes in. Similarly, Exchange has a 4kb block size, so you waste 12kb per block of data in the cache. In this scenario you fill up your cache much faster, not with data but with wasted open space, also called holes.
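The arithmetic in the two scenarios above generalizes: a block either spans several cache pages (extra back-end work) or leaves part of its page empty (a hole). A quick Python sketch of that trade-off, using the block sizes quoted earlier (the helper name is my own):

```python
import math

def cache_fit(block_kb, page_kb):
    """Return (pages_per_block, wasted_kb_per_block) for one data block."""
    pages = math.ceil(block_kb / page_kb)
    wasted = pages * page_kb - block_kb
    return pages, wasted

# Block sizes from above: Exchange 4kb, SQL 8kb, Oracle 16kb.
for app, block_kb in [("Exchange", 4), ("SQL", 8), ("Oracle", 16)]:
    for page_kb in (4, 16):
        pages, wasted = cache_fit(block_kb, page_kb)
        print(f"{app:8} block on {page_kb:2}kb pages: "
              f"{pages} page(s), {wasted}kb wasted")
```

With 4kb pages, Oracle’s 16kb block costs 4 pages; with 16kb pages, Exchange wastes 12kb and SQL wastes 8kb per block, exactly the holes described above.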

The best recommendation is to have separate machines (Clariions) based on the applications you run on them. If you are running SQL and Exchange primarily, it might be a good idea to run one Clariion at an 8kb cache page size; for Oracle, possibly run another machine at a 16kb cache size. If you try to slam all of them together, either the applications will have issues, or the cache will have issues, or worse, you will see performance issues throughout your Clariion.