

Archive for the ‘Technology’ Category

IBM's Big Announcement

February 9th, 2009

IBM’s press release on Building Blocks of 21st Century Infrastructure

It's a $122 billion market opportunity, according to IBM. I believe this press release has something to do with Sam Palmisano meeting with Barack Obama a week ago; here is the story as published by Tony Pearson on his blog last week.
The press release mostly talks about IBM's infrastructure investments in security, storage, grids, web, computing and management (Tivoli). IBM's push for XIV and the DS8000 is visible in this press release, as is its vision of storing on XIV the 15 PB of data that is generated every day.
I don't have time to analyze the whole press release now, but I will try to write something up in the morning if time permits.
Good luck reading the press release!

RAID Technology Continued

January 27th, 2009



RAID [Redundant Array of Independent (Inexpensive) Disks]

After reading a couple of blogs last week regarding RAID technology from StorageSearch and StorageIO, I decided to elaborate on the technology behind RAID and its functionality across storage platforms.

Just as I was finishing this post, I ran into a Wikipedia article explaining RAID technology at much greater length, covering additional RAID types like RAID 2, RAID 4, RAID 10, RAID 50, etc.

For example purposes, let's say we need 5 TB of usable space; each disk in this example is 1 TB.

RAID 0

Technology: Striping Data with No Data Protection.

Performance: Highest

Overhead: None

Minimum Number of Drives: 2 (required for striping)

Data Loss: Upon one drive failure

Example: 5TB of usable space can be achieved through 5 x 1TB of disk.

Advantages: High Performance

Disadvantages: Guaranteed data loss upon any single drive failure

Hot Spare: Upon a drive failure, a hot spare can be invoked, but there will be no data to copy over. Hot Spare is not a good option for this RAID type.

Supported: Clariion, Symmetrix, Symmetrix DMX (Meta BCV’s or DRV’s)

In RAID 0, the data is striped across all of the disks. This is great for performance, but if one disk fails, the data is lost because there is no protection of that data.

RAID 1

Technology: Mirroring and Duplexing

Performance: Highest

Overhead: 50%

Minimum Number of Drives: 2

Data Loss: A single drive failure causes no data loss; if both drives in the mirrored pair fail, all the data is lost.

Example: 5TB of usable space can be achieved through 10 x 1TB of disk.

Advantages: Highest Performance, One of the safest.

Disadvantages: High overhead and additional load on the storage subsystem. Upon a drive failure, the surviving drive runs unprotected (effectively RAID 0).

Hot Spare: A Hot Spare can be invoked and data can be copied over from the surviving paired drive using Disk copy.

Supported: Clariion, Symmetrix, Symmetrix DMX

Identical data is written to two disks at the same time. Upon a single drive failure there is no data loss and no degradation in performance or data integrity. It is one of the safest forms of RAID, but with high overhead. In the old days, the Symmetrix platforms supported only RAID 1 and RAID S. Highly recommended for high-end, business-critical applications.

The controller must be able to perform two concurrent, separate reads per mirrored pair or two duplicate writes per mirrored pair; one write or two reads are possible per mirrored pair at a time. Upon a drive failure, only the failed disk needs to be replaced.


RAID 1+0

Technology: Mirroring and Striping Data

Performance: High

Overhead: 50%

Minimum Number of Drives: 4

Data Loss: A single drive failure in a mirrored pair (M1) causes no issues, and even multiple drive failures can be survived as long as they hit different mirrored pairs. If both members of the same pair (M1 and M2) fail, data loss is certain.

Example: 5TB of usable space can be achieved through 10 x 1TB of disk.

Advantages: Similar fault tolerance to RAID 5; because of the striping, high I/O rates are achievable.

Disadvantages: Upon a drive failure, the affected mirrored pair runs unprotected (effectively RAID 0).

Hot Spare: Hot Spare is a good option with this RAID type, since with a failure the data can be copied over from the surviving paired device.

Supported: Clariion, Symmetrix, Symmetrix DMX

RAID 1+0 is implemented as a striped array whose segments are RAID 1 (mirrored) pairs, i.e. a stripe of mirrors rather than a mirror of stripes.


RAID 3

Technology: Striping Data with dedicated Parity Drive.

Performance: High

Overhead: 33% with a dedicated parity drive in this example; more drives per RAID 3 group will bring the overhead down (see the worked calculation at the end of this section).

Minimum Number of Drives: 3

Data Loss: Upon 1 drive failure, Parity will be used to rebuild data. Two drive failures in the same Raid group will cause data loss.

Example: 5TB of usable space would be achieved through 9 x 1TB disks.

Advantages: Very high read data transfer rate. Very high write data transfer rate. Disk failure has an insignificant impact on throughput. Low ratio of ECC (parity) disks to data disks, which translates into high efficiency.

Disadvantages: The transaction rate is limited to that of a single spindle at best.

Hot Spare: A hot spare can be configured and invoked upon a drive failure, and its contents can be rebuilt from the surviving data and parity drives. Upon drive replacement, the hot spare can be used to rebuild the replaced drive.

Supported: Clariion
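
One way to read the numbers in this example, assuming small 2+1 RAID 3 groups (two data drives plus one dedicated parity drive per group): each group yields 2 TB of usable space from 3 x 1 TB drives, so three groups (9 drives) yield 6 TB, covering the 5 TB requirement, with 3 of the 9 drives (roughly 33%) lost to parity. A wider group such as 4+1 would cut that overhead to 20%.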

RAID 5

Technology: Striping Data with Distributed Parity, Block Interleaved Distributed Parity

Performance: Medium

Overhead: 20% in our example; with additional drives in the RAID group you can bring the overhead down substantially (see the worked calculation at the end of this section).

Minimum Number of Drives: 3

Data Loss: With one drive failure there is no data loss; with multiple drive failures in the same RAID group, data loss will occur.

Example: For 5TB of usable space, we might need 6 x 1 TB drives

Advantages: Highest read data transaction rate with a medium write data transaction rate. A low ratio of ECC (parity) disks to data disks, which translates into high efficiency, along with a good aggregate transfer rate.

Disadvantages: Disk failure has a medium impact on throughput. It also has the most complex controller design. It is often difficult to rebuild in the event of a disk failure (as compared to RAID level 1), and the individual block data transfer rate is the same as a single disk. Ask the PSEs about RAID 5 issues and data loss.

Hot Spare: Similar to RAID 3: a hot spare can be configured and invoked upon a drive failure and rebuilt from the surviving data and parity. Upon drive replacement, the hot spare can be used to rebuild the replaced drive.

Supported: Clariion, Symmetrix DMX code 71

RAID Level 5 also relies on parity information to provide redundancy and fault tolerance using independent data disks with distributed parity blocks. Each entire data block is written onto a data disk; parity for blocks in the same rank is generated on Writes, recorded in a distributed location and checked on Reads.

This is arguably the most popular RAID technology in use today.
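
To put numbers on the example above, assume a single 5+1 RAID 5 group: five data drives plus one drive's worth of distributed parity give 5 TB of usable space from 6 x 1 TB drives, i.e. one parity drive for every five data drives, which is the 20% overhead quoted above. A wider group such as 7+1 drops the overhead to about 14%.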



RAID 6

Technology: Striping Data with Double Parity, Independent Data Disk with Double Parity

Performance: Medium

Overhead: 28% in our example; with additional drives you can bring the overhead down (see the worked calculation at the end of this section).

Minimum Number of Drives: 4

Data Loss: No data loss with one, or even two, drive failures in the same RAID group. Very reliable.

Example: For 5 TB of usable space, we might need 7 x 1TB drives

Advantages: RAID 6 is essentially an extension of RAID level 5 that allows additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives. RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures, which typically makes it a perfect solution for mission-critical applications.

Disadvantages: Very poor write performance, in addition to requiring N+2 drives to implement because of the two-dimensional parity scheme.

Hot Spare: A hot spare can be invoked against a drive failure and rebuilt from the parity and data drives; upon drive replacement, that hot spare can then be used to rebuild the replaced drive.

Supported: Clariion Flare 26, 28, Symmetrix DMX Code 72, 73

Clariion Flare code 26 supports RAID 6, and it is also being implemented with the 72 code on the Symmetrix DMX. The simplest explanation of RAID 6 is double parity, which allows a RAID 6 RAID group to sustain two drive failures while maintaining access to the data.
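
Working through the example above, assume a single 5+2 RAID 6 group: five data drives plus two drives' worth of parity give 5 TB of usable space from 7 x 1 TB drives, and two parity drives out of seven total is roughly 28%, matching the overhead quoted above. As with RAID 5, wider groups (6+2, 12+2, etc.) bring the overhead down.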

RAID S (3+1)

Technology: RAID Symmetrix

Performance: High

Overhead: 25%

Minimum Number of Drives: 4

Data Loss: Upon two drive failures in the same Raid Group

Example: For 5 TB of usable space, 8 x 1 TB drives (two 3+1 groups, which actually yield 6 TB usable)

Advantages: High Performance on Symmetrix Environment

Disadvantages: Proprietary to EMC; RAID S can only be implemented on the Symmetrix 8000, 5000 and 3000 series. Known to have back-end issues with director replacements, SCSI chip replacements and back-end DA replacements, causing DU (data unavailable) events or offline procedures.

Hot Spare: A hot spare can be invoked against a failed drive; its data can be rebuilt from the parity and data drives, and upon a successful drive replacement the hot spare can be used to rebuild the replaced drive.

Supported: Symmetrix 8000, 5000, 3000. With the DMX platform it is just called RAID (3+1)

EMC Symmetrix / DMX disk arrays use an alternate, proprietary method of parity RAID that EMC calls RAID-S: three data drives along with one parity device. RAID-S is proprietary to EMC but appears to be similar to RAID 5, with some performance enhancements as well as the enhancements that come from having a high-speed disk cache on the array.

The data protection feature is based on a Parity RAID (3+1) volume configuration (three data volumes to one parity volume).

RAID (7+1)

Technology: RAID Symmetrix

Performance: High

Overhead: 12.5%

Minimum Number of Drives: 8

Data Loss: Upon two drive failures in the same Raid Group

Example: For 5 TB of usable space, 8 x 1 TB drives (which will actually give you 7 TB usable)

Advantages: High Performance on Symmetrix Environment

Disadvantages: Proprietary to EMC and available only on the Symmetrix DMX series. Known to have a lot of back-end issues with director and back-end DA replacements, since you have to verify the spindle locations; a cause of concern for DU (data unavailable) events.

Hot Spare: A hot spare can be invoked against a failed drive; its data can be rebuilt from the parity and data drives, and upon a successful drive replacement the hot spare can be used to rebuild the replaced drive.

Supported: Symmetrix DMX only, where it is simply called RAID (7+1). Not supported on the older Symmetrix models.

EMC DMX disk arrays use an alternate, proprietary method of parity RAID, here simply called RAID (7+1): seven data drives along with one parity device. It is proprietary to EMC but appears to be similar to RAID-S or RAID 5, with some performance enhancements as well as the enhancements that come from having a high-speed disk cache on the array.

The data protection feature is based on a Parity RAID (7+1) volume configuration (seven data volumes to one parity volume).

EMC Symmetrix / DMX SRDF Setup

January 26th, 2009




This post covers setting up basic SRDF functionality on Symmetrix / DMX machines using EMC Solutions Enabler SYMCLI.

For this setup, let's have two different hosts: our local host will see the R1 (source) volumes and our remote host will see the R2 (target) volumes.

A mix of R1 and R2 volumes can reside on the same Symmetrix; in short, you can configure SRDF between two Symmetrix machines so that each acts as local for some volumes and as remote for others.


Step 1

Create SYMCLI Device Groups. Each group can have one or more Symmetrix devices specified in it.

SYMCLI device group information (name of the group, type, members, and any associations) is maintained in the SYMAPI database.

In the following steps, we will create a device group that includes two SRDF volumes.

SRDF operations can be performed from the local host that has access to the source volumes or the remote host that has access to the target volumes. Therefore, both hosts should have device groups defined.

Complete the following steps on both the local and remote hosts.

a) Identify the SRDF source and target volumes available to your assigned hosts. Execute the following commands on both the local and remote hosts.

# symrdf list pd (execute on both local and remote hosts)

or

# syminq

b) To view all the RDF volumes configured in the Symmetrix, use the following command:

# symrdf list dev

c) Display a synopsis of the symdg command and reference it in the following steps.

# symdg -h

d) List all device groups that are currently defined.

# symdg list

e) On the local host, create a device group of type RDF1. On the remote host, create a device group of type RDF2.

# symdg -type RDF1 create newsrcdg (on local host)

# symdg -type RDF2 create newtgtdg (on remote host)

f) Verify that your device group was added to the SYMAPI database on both the local and remote hosts.

# symdg list

g) Add your two devices to your device group using the symld command. Again, use -h for a synopsis of the command syntax.

On local host:

# symld -h

# symld -g newsrcdg add dev ###

or

# symld -g newsrcdg add pd Physicaldrive#

On remote host:

# symld -g newtgtdg add dev ###

or

# symld -g newtgtdg add pd Physicaldrive#

h) Using the syminq command, identify the gatekeeper devices. Determine whether a gatekeeper is currently defined in the SYMAPI database; if not, define it, and then associate it with your device group.

On local host:

# syminq

# symgate list (Check SYMAPI)

# symgate define pd Physicaldrive# (to define)

# symgate -g newsrcdg associate pd Physicaldrive# (to associate)

On remote host:

# syminq

# symgate list (Check SYMAPI)

# symgate define pd Physicaldrive# (to define)

# symgate -g newtgtdg associate pd Physicaldrive# (to associate)

i) Display your device groups. The output is verbose, so pipe it to more.

On local host:

# symdg show newsrcdg |more

On remote host:

# symdg show newtgtdg | more

j) Display a synopsis of the symld command.

# symld -h

k) Rename DEV001 to NEWVOL1

On local host:

# symld -g newsrcdg rename DEV001 NEWVOL1

On remote host:

# symld -g newtgtdg rename DEV001 NEWVOL1

l) Display the device group on both the local and remote hosts.

On local host:

# symdg show newsrcdg |more

On remote host:

# symdg show newtgtdg | more

Step 2

Use the SYMCLI to display the status of the SRDF volumes in your device group.

a) If on the local host, check the status of your SRDF volumes using the following command:

# symrdf -g newsrcdg query
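
b) Similarly, if on the remote host, check the status of the SRDF volumes using the device group defined there:

# symrdf -g newtgtdg query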

Step 3

Set the default device group. You can use the SYMCLI_DG environment variable, as shown below.

# set SYMCLI_DG=newsrcdg (on the local host)

# set SYMCLI_DG=newtgtdg (on the remote host)

a) Check the SYMCLI environment.

# symcli -def (on both the local and remote hosts)

b) Test to see if the SYMCLI_DG environment variable is working properly by performing a “query” without specifying the device group.

# symrdf query (on both the local and remote hosts)

Step 4

Change the operational mode. The operational mode for a device or group of devices can be set dynamically with the symrdf set mode command.

a) On the local host, change the mode of operation for one of your SRDF volumes to enable semi-synchronous operations. Verify results and change back to synchronous mode.

# symrdf set mode semi NEWVOL1

# symrdf query

# symrdf set mode sync NEWVOL1

# symrdf query

b) Change mode of operation to enable adaptive copy-disk mode for all devices in the device group. Verify that the mode change occurred and then disable adaptive copy.

# symrdf set mode acp disk

# symrdf query

# symrdf set mode acp off

# symrdf query


Step 5

Check the communications link between the local and remote Symmetrix.

a) From the local host, verify that the remote Symmetrix is “alive”. If the host is attached to multiple Symmetrix, you may have to specify the Symmetrix Serial Number (SSN) through the –sid option.

# symrdf ping [ -sid xx ] (xx=last two digits of the remote SSN)

b) From the local host, display the status of the Remote Link Directors.

# symcfg -RA all list

c) From the local host, display the activity on the Remote Link Directors.

# symstat -RA all -i 10 -c 2

Step 6

Create a partition on each disk, format the partition, and put a filesystem on it. Then add data to the R1 volumes defined in the newsrcdg device group.
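
As a rough sketch, here is what this might look like on a Windows host (consistent with the Physicaldrive# naming used earlier); the disk number 2 and the drive letter R: are placeholders for whatever your R1 volume maps to:

C:\> diskpart

DISKPART> select disk 2 (select the disk backed by the R1 device)

DISKPART> create partition primary (create a single partition)

DISKPART> assign letter=R (give it a drive letter)

DISKPART> exit

C:\> format R: /fs:ntfs /q (put an NTFS filesystem on the partition)

C:\> echo test data > R:\seed.txt (seed the new filesystem with some data)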

Step 7

Suspend the RDF link and add data to the filesystem. In this step we will suspend the SRDF link, add data to the filesystem, and check for invalid tracks.

a) Check that the R1 and R2 volumes are fully synchronized.

# symrdf query

b) Suspend the link between the source and target volumes.

# symrdf suspend

c) Check link status.

# symrdf query

d) Add data to the filesystems.
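
For example, assuming the R: drive from the sketch in Step 6, creating a sizeable test file is enough to generate new tracks on the R1 side while the link is suspended:

C:\> fsutil file createnew R:\testfile.dat 104857600 (create a 100 MB test file)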

e) Check for invalid tracks using the following command:

# symrdf query

f) Invalid tracks can also be displayed using the symdev show command. Execute the following command on one of the devices in your device group. Look at the Mirror set information.

On the local host:

# symdev show ###

g) From the local host, resume the link and monitor invalid tracks.

# symrdf resume

# symrdf query

In upcoming posts, we will set up some flags for SRDF, cover director types, and more.

Happy SRDF'ing!

Data Collection from McData Switches

January 21st, 2009

After reading Diwakar's blog on McData switch data collection, I had to continue in the same vein.


To make things easier, here is the procedure we have been using:

1. Log into Connectrix Manager.
2. Go to the Product View.
3. Click on a switch to go to the Hardware View.
4. Select Maintenance.
5. Select Data Collection.
6. You will be prompted with a Save As dialog.
7. Name the file and save it with a .zip extension.
8. Repeat for all switches.

Email the files to whoever needs them for analysis. Easy, huh?

I will write something about data collection on Brocade and Cisco switches in upcoming posts.