

Posts Tagged ‘EMC’

EMC Symmetrix / DMX SRDF Setup

January 26th, 2009




This post covers setting up basic SRDF functionality on Symmetrix / DMX arrays using EMC Solutions Enabler (SYMCLI).

For this setup, let’s use two different hosts: the local host will have access to the R1 (source) volumes, and the remote host will have access to the R2 (target) volumes.

A mix of R1 and R2 volumes can reside on the same Symmetrix; in other words, SRDF can be configured between two Symmetrix machines so that each acts as local for some volumes and remote for others.


Step 1

Create SYMCLI Device Groups. Each group can have one or more Symmetrix devices specified in it.

SYMCLI device group information (name of the group, type, members, and any associations) is maintained in the SYMAPI database.
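If the SYMAPI database on a host has not been populated yet, it is normally built (or refreshed) by running a discover against the attached Symmetrix arrays, roughly as follows:

# symcfg discover (scans the attached Symmetrix arrays and rebuilds the SYMAPI database)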

In the following we will create a device group that includes two SRDF volumes.

SRDF operations can be performed from the local host that has access to the source volumes or the remote host that has access to the target volumes. Therefore, both hosts should have device groups defined.

Complete the following steps on both the local and remote hosts.

a) Identify the SRDF source and target volumes available to your assigned hosts. Execute the following commands on both the local and remote hosts.

# symrdf list pd (execute on both local and remote hosts)

or

# syminq

b) To view all the RDF volumes configured in the Symmetrix use the following

# symrdf list dev

c) Display a synopsis of the symdg command and reference it in the following steps.

# symdg -h

d) List all device groups that are currently defined.

# symdg list

e) On the local host, create a device group of the type of RDF1. On the remote host, create a device group of the type RDF2.

# symdg -type RDF1 create newsrcdg (on local host)

# symdg -type RDF2 create newtgtdg (on remote host)

f) Verify that your device group was added to the SYMAPI database on both the local and remote hosts.

# symdg list

g) Add your two devices to your device group using the symld command. Again, use (-h) for a synopsis of the command syntax.

On local host:

# symld -h

# symld -g newsrcdg add dev ###

or

# symld -g newsrcdg add pd Physicaldrive#

On remote host:

# symld -g newtgtdg add dev ###

or

# symld -g newtgtdg add pd Physicaldrive#

h) Using the syminq command, identify the gatekeeper devices. Determine whether the gatekeeper is already defined in the SYMAPI database; if not, define it, then associate it with your device group.

On local host:

# syminq

# symgate list (Check SYMAPI)

# symgate define pd Physicaldrive# (to define)

# symgate -g newsrcdg associate pd Physicaldrive# (to associate)

On remote host:

# syminq

# symgate list (Check SYMAPI)

# symgate define pd Physicaldrive# (to define)

# symgate -g newtgtdg associate pd Physicaldrive# (to associate)

i) Display your device groups. The output is verbose so pipe it to more.

On local host:

# symdg show newsrcdg | more

On remote host:

# symdg show newtgtdg | more

j) Display a synopsis of the symld command.

# symld -h

k) Rename DEV001 to NEWVOL1

On local host:

# symld -g newsrcdg rename DEV001 NEWVOL1

On remote host:

# symld -g newtgtdg rename DEV001 NEWVOL1

l) Display the device group on both the local and remote hosts.

On local host:

# symdg show newsrcdg | more

On remote host:

# symdg show newtgtdg | more
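Putting Step 1 together, a minimal sketch of the local-host side might look like the following (the device numbers 00A and 00B and the gatekeeper Physicaldrive5 are hypothetical placeholders; use the values reported by syminq and symrdf list on your own hosts):

# symdg -type RDF1 create newsrcdg

# symld -g newsrcdg add dev 00A

# symld -g newsrcdg add dev 00B

# symgate define pd Physicaldrive5

# symgate -g newsrcdg associate pd Physicaldrive5

# symdg show newsrcdg | more

The remote-host side is identical except for the group type (RDF2) and the group name (newtgtdg).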

Step 2

Use the SYMCLI to display the status of the SRDF volumes in your device group.

a) If on the local host, check the status of your SRDF volumes using the following command:

# symrdf -g newsrcdg query

Step 3

Set the default device group. This can be done through the SYMCLI_DG environment variable.

# set SYMCLI_DG=newsrcdg (on the local host)

# set SYMCLI_DG=newtgtdg (on the remote host)
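The set syntax above is for a Windows host; on a UNIX or Linux host the same variable would typically be exported from the shell instead, for example (a sketch assuming a Bourne-style shell such as sh, ksh or bash):

# export SYMCLI_DG=newsrcdg (on a UNIX/Linux local host)

# export SYMCLI_DG=newtgtdg (on a UNIX/Linux remote host)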

a) Check the SYMCLI environment.

# symcli -def (on both the local and remote hosts)

b) Test to see if the SYMCLI_DG environment variable is working properly by performing a “query” without specifying the device group.

# symrdf query (on both the local and remote hosts)

Step 4

Change the operational mode. The operational mode for a device or group of devices can be set dynamically with the symrdf set mode command.

a) On the local host, change the mode of operation for one of your SRDF volumes to enable semi-synchronous operations. Verify results and change back to synchronous mode.

# symrdf set mode semi NEWVOL1

# symrdf query

# symrdf set mode sync NEWVOL1

# symrdf query

b) Change mode of operation to enable adaptive copy-disk mode for all devices in the device group. Verify that the mode change occurred and then disable adaptive copy.

# symrdf set mode acp_disk

# symrdf query

# symrdf set mode acp_off

# symrdf query


Step 5

Check the communications link between the local and remote Symmetrix.

a) From the local host, verify that the remote Symmetrix is “alive”. If the host is attached to multiple Symmetrix arrays, you may have to specify the Symmetrix serial number (SSN) with the -sid option.

# symrdf ping [ -sid xx ] (xx=last two digits of the remote SSN)

b) From the local host, display the status of the Remote Link Directors.

# symcfg -RA all list

c) From the local host, display the activity on the Remote Link Directors.

# symstat -RA all -i 10 -c 2

Step 6

Create a partition on each disk, format the partition and assign a filesystem to the partition. Add data on the R1 volumes defined in the newsrcdg device group.
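The exact commands depend on the host operating system. On a Linux host, for example, a minimal sketch might look like the following (the device name /dev/sdc and the mount point /mnt/r1fs are hypothetical; use the device that maps to your R1 volume):

# fdisk /dev/sdc (create a single primary partition on the R1 device)

# mkfs -t ext3 /dev/sdc1 (build a filesystem on the new partition)

# mkdir /mnt/r1fs

# mount /dev/sdc1 /mnt/r1fs

# cp /etc/hosts /mnt/r1fs/ (copy in some initial data)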

Step 7

Suspend RDF Link and add data to filesystem. In this step we will suspend the SRDF link, add data to the filesystem and check for invalid tracks.

a) Check that the R1 and R2 volumes are fully synchronized.

# symrdf query

b) Suspend the link between the source and target volumes.

# symrdf suspend

c) Check link status.

# symrdf query

d) Add data to the filesystems.
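For example, on a Linux host you could generate some test data with dd (again, /mnt/r1fs is the hypothetical mount point from Step 6):

# dd if=/dev/zero of=/mnt/r1fs/testfile bs=1M count=100 (writes 100 MB of test data)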

e) Check for invalid tracks using the following command:

# symrdf query

f) Invalid tracks can also be displayed using the symdev show command. Execute the following command on one of the devices in your device group. Look at the Mirror set information.

On the local host:

# symdev show ###

g) From the local host, resume the link and monitor invalid tracks.

# symrdf resume

# symrdf query

In upcoming posts, we will set up some SRDF flags, director types, and more.

Happy SRDF’ing!!!!!

Data Collection from Mcdata Switches

January 21st, 2009

After reading Diwakar’s blog on McData switch data collection, I had to continue in the same vein….


To make things easier, here is the procedure we have been using.

Log into Connectrix Manager
Go to Product View
Click on Switch to go to Hardware View
Select Maintenance
Select Data Collection
You will now be prompted with a Save As
Name File and save with a .zip extension
Repeat for all switches

Email the files to whoever needs them for analysis…..easy, huh?

I will write about data collection on Brocade and Cisco switches in upcoming posts.

Clariion Cache: Page Size

January 20th, 2009

This blog is an extension of my previous blogs related to Clariion Cache.

Clariion Cache: Idle Flushing, Watermark Flushing and Forced Flushing

Clariion Cache: Navicli Cache Commands

Clariion Cache: Read and Write Caching

(All links found below in the Related Posts)


There are four different cache page size settings available on a Clariion; the default is 8kb, with 2kb, 4kb and 16kb also available.

You should tune the cache page size to the applications you run: Exchange uses 4kb data blocks, SQL uses 8kb, and Oracle uses 16kb.

Let’s say you are running Exchange and SQL on your Clariion, and the cache page size is set to 4kb. Each Exchange data block occupies one 4kb cache page, and the application works well with cache. Now assume SQL, with its 8kb data block size, is running on the same machine; every SQL block is broken into 2 separate 4kb cache pages. Now imagine running Oracle on it as well, with its 16kb data block size; each data block is broken into 4 cache pages, and at this point your applications will start having backend performance issues.

In this scenario Exchange performs well, SQL takes a small performance hit, and Oracle is heavily impacted at 4 cache pages per data block.

Now let’s imagine a scenario where you are using a 16kb page size because Oracle is the primary application on that machine. Time goes by, the machine is upgraded with more disk and so on, and it is used for SQL and Exchange alongside Oracle. With a 16kb page size, Oracle runs fine. SQL puts its data into cache in 8kb blocks, but your page size is 16kb, so 8kb is wasted per block of SQL data that comes in. Similarly, Exchange has a 4kb block size, so you waste 12kb per block of data in cache. In these scenarios you fill up your cache much faster, not with data but with wasted open space, often called holes.

The best recommendation would be to have separate machines (Clariions) based on the applications you run on them. If you are running SQL and Exchange primarily, it might be a good idea to run one Clariion at an 8kb cache page size. For Oracle, possibly run another machine at a 16kb cache page size. If you try to slam all of them together, either the applications will have issues, or the cache will have issues, or, worst of all, you will see performance issues throughout your Clariion.
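The current page size can be viewed, and changed, with the Navisphere CLI cache commands covered in the Navicli Cache Commands post. As a rough sketch only (the SP address 10.1.1.10 is a placeholder, the -p option is the page size parameter as I recall it, and write cache normally has to be disabled before the page size can be changed; verify the exact syntax for your FLARE release):

# navicli -h 10.1.1.10 getcache (displays the current cache settings, including page size)

# navicli -h 10.1.1.10 setcache -p 8 (sets the cache page size to 8kb)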

Clariion Cache: Read and Write Caching

January 19th, 2009

The following post explains how read and write caching work within the Clariion.

Clariion Read Caching

The diagram on the left illustrates read caching (only one SP is involved in this process).

Step A: The host requests data over the active path to the Clariion.

Step B: If the data is in cache, it is sent to the host.

Step Aa: This step occurs when the requested data is not in cache; the data is then requested from disk.

Step Ab: The data is read from disk into cache, and Step B is then performed, completing the request.



Clariion Write Caching

The diagram on the right illustrates write caching (both SPs are involved in this process).

Step A: The host writes data to the disk (LUN) through the active path between the host and the Clariion. The data is written to cache.

Step B: The data now in cache (for example, on SPA) is copied over to the cache of SPB using the Clariion Messaging Interface (CMI).

Step C: At this point an acknowledgement is sent to the host that the write is complete.

Step D: Using cache flushing techniques, the data is later written to the Clariion disk (LUN).