Posts Tagged ‘HP’

HP’s RAID 6 (ADG – Advanced Data Guarding)

February 13th, 2009

Continuing my RAID 6 series: first HDS’s RAID 6, then NetApp’s RAID-DP, and this time around it’s HP’s RAID-6 ADG.

Upcoming posts will cover RAID 6 technology and its implementation by SUN, IBM and EMC. The final post will compare all the OEM products and the use cases for RAID 6.

Here are the links to previous posts related to RAID 6 and data protection:

NetApp’s RAID–DP

Hitachi’s (HDS) RAID 6

Different RAID Technologies (Detailed)

Different RAID Types


I will try to keep this post short on general RAID 6 concepts and jump directly into the technical aspects of HP’s RAID-6 ADG (Advanced Data Guarding).

HP’s Business Case with RAID-6 Advanced Data Guarding (ADG)

So, Advanced Data Guarding: the name is just perfect. HP’s pitch to potential storage customers would include a slide on ADG (I am assuming that is the case). This cost-effective and fault-tolerant technology is proprietary to HP and patented, though I just cannot find a reference to it on the US PTO’s website.


RAID-6 ADG is supported on the MSA (Modular Smart Arrays) SAN platform.


I believe it is not supported on any EVA (Enterprise Virtual Array) platform, and there is no RAID 6 support available on LeftHand Networks SANs.


With the HP XP-24000, XP-20000, XP-12000 and XP-10000 there is no support for RAID-6 ADG, but there is native support for RAID 6 (dual parity).


HP storage products traditionally support RAID 0, RAID 1, RAID 1+0, RAID 5 and now RAID-6 ADG. Some LeftHand Networks SANs support RAID 3 and RAID 4.


The argument from HP is pretty similar to the ones we already discussed with HDS and NetApp in the previous posts. The push for RAID 6 at HP comes from ever-larger disk sizes and the fault tolerance required to run 24 x 7 x 365 applications.


Since there is an additional parity calculation associated with RAID 6, HP’s recommendation is to use RAID-6 ADG only for workloads with heavy reads and relatively few writes. If you have an application performing heavy random writes, RAID-6 ADG might not be an option for you.


HP’s RAID-6 Advanced Data Guarding (ADG) Technology

Here is a snapshot of how this technology operates.



In the example here, we have 6 disk drives attached to a Fibre Channel loop or SCSI bus/controller. Data is striped across Disk 1, Disk 2, Disk 3 and Disk 4, and then parity blocks P1 and Q1 are generated and written to Disk 5 and Disk 6. You can assume each data block is 4 KB or 8 KB in size.

Similarly, as the process continues, the next set of data stripes starts on Disk 1, then goes to Disk 2, Disk 3 and Disk 6, while the parity is written to Disk 4 (P) and Disk 5 (Q). ADG uses a P + Q algorithm to calculate two independent parity sets: P is an ordinary exclusive OR (XOR) parity, exactly like it would be for RAID 5, while Q is calculated using an error-correcting code. Both parities are rotated (striped) across all the disks within the RAID group.
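Putting the two stripes described above side by side, the layout looks roughly like this (the block labels D1-D8 are just for illustration):

           Disk 1   Disk 2   Disk 3   Disk 4   Disk 5   Disk 6
Stripe 1     D1       D2       D3       D4       P1       Q1
Stripe 2     D5       D6       D7       P2       Q2       D8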

If a single drive is lost in the RAID group, data is rebuilt using the ordinary XOR P parity, and both P and Q are recalculated for each rebuilt block. If a second drive fails during this time, the rebuild takes place using the Q parity. Throughout, data remains fully available, with some performance degradation.
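As a quick illustration of the XOR rebuild (using the block labels from the layout above), the P parity for the first stripe is:

P1 = D1 XOR D2 XOR D3 XOR D4

If Disk 2 fails, its block is recovered from the surviving members of the stripe:

D2 = P1 XOR D1 XOR D3 XOR D4

The Q parity cannot be reproduced by a plain XOR of the data blocks; it is computed with an error-correcting code, which is what provides the second, independent recovery path when two drives are gone.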

If you add a spare drive to this configuration, your RAID group can effectively withstand three drive failures before data loss, provided the rebuild onto the spare completes before the next failure.

This technology can be implemented with a minimum of 4 drives. With 4 drives in a single RAID group the capacity overhead is 50%. If you run 60 drives in a single RAID group, the overhead drops to roughly 3.3% [100 x 2 (parity drives) / 60 (total drives)], leaving about 96.7% of the raw capacity usable.

The formula to calculate your usable space is C * (n – 2), where C is the size of the smallest drive in the RAID group and n is the number of drives. It is highly recommended that all your disk drives are the same size.
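For example (the drive count and size are made up purely for illustration): with 12 x 300 GB drives in one RAID-6 ADG group, usable space = 300 GB x (12 – 2) = 3,000 GB, and the parity overhead is 2 / 12, or roughly 16.7% of raw capacity.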

If you are running more than 14 physical drives in a single RAID group, HP’s recommendation is to use RAID-6 ADG. With 6 drives in a RAID group the quoted failure probability is 1E-10; with 60 drives in a RAID group it is 1E-7.

Again, HP’s big pitch with RAID-6 ADG is cost effectiveness combined with fault tolerance, not performance.

LUN and VBUS Mapping for HP-UX

January 10th, 2009

Here is the set of commands needed for mapping LUNs and VBUSes on an HP-UX system.

Your command file should look like this:

map dev XXX to dir FA:P, vbus=X, target=Y, lun=Z;

Parameters:
XXX is the Symmetrix device being mapped
FA is the director the device is being mapped to
P is the port on the FA
X is the virtual bus value (valid values 0-F)
Y is the target id (valid values 0-F)
Z is the lun address (valid values 0-7)
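For example, a filled-in entry could look like the following (the device number, director, port and address values are made up purely to show the format):

map dev 01AB to dir 03A:0, vbus=0, target=2, lun=5;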

The symcfg -sid xxx list -available -address command (see the blog post on listing available LUN addresses on an FA for mapping) will display LUN addresses above 7, but these are not valid or usable by HP-UX.

You will have to find the next available LUN in the 0-7 range. If there are no more available addresses on any existing VBUS, you can map a device and specify the next VBUS value; this will create a new VBUS and make its LUN addresses available.
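For instance, to check which addresses are in use or still free on a particular FA port before picking one, something like the following should do it (the SID, director and port here are placeholders):

symcfg list -sid 1234 -fa 3a -p 0 -address -available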

In the case where the HP-UX host is shared on the FA with another host type and heterogeneous port sharing is being used, it is only necessary to specify a LUN address. 

You will need to enable the Volume Set Addressing (V) flag on the FA, or the mapping will end up in error. The LUN address specified should be 3 digits, containing the required VBUS, target and LUN values; this LUN address will be interpreted as VBUS, target and LUN when the HP-UX host logs into the Symmetrix.
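For example (the value is made up): following the digit order described above, a LUN address of 025 would be presented to the HP-UX host as VBUS 0, target 2, LUN 5.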

HP-UX Volume (Disk) and File system functions

January 10th, 2009

Here are some HP-UX commands that will come in handy when working with volumes and filesystems.

Search for attached disk

ioscan -fnC disk

Initialize a disk for use with LVM

pvcreate -f /dev/rdsk/c0t1d0

Create the device structure needed for a new volume group.

cd /dev
mkdir vgdata
cd vgdata
mknod group c 64 0x010000

Create volume group vgdata

vgcreate vgdata /dev/dsk/c0t1d0

{ if you're expecting to use more than 16 physical disks, use the -p option; the range is 1 to 256 disks. }

Display volume group vgdata

vgdisplay -v vgdata

Add another disk to volume group

pvcreate -f /dev/rdsk/c0t4d0

vgextend vgdata /dev/dsk/c0t4d0

Remove disk from volume group

vgreduce vgdata /dev/dsk/c0t4d0

Create a 100 MB logical volume lvdata

lvcreate -L 100 -n lvdata vgdata

newfs -F vxfs /dev/vgdata/rlvdata

Extend logical volume to 200 MB

lvextend -L 200 /dev/vgdata/lvdata

Extend file system to 200 MB
{ if you don’t have Online JFS installed, volumes must be unmounted before you can extend the file system. }

fuser -ku /dev/vgdata/lvdata { kill all processes that have open files on this volume. }

umount /dev/vgdata/lvdata

extendfs -F vxfs /dev/vgdata/rlvdata

{ with Online JFS the file system can be grown while mounted; the new size is given in 1 KB blocks: 200 MB x 1024 = 204800 blocks }

fsadm -F vxfs -b 204800 /data

Set largefiles to support files greater than 2GB

fsadm -F vxfs -o largefiles /data


Exporting and importing disks across systems.

1. Make the volume group unavailable

vgchange -a n /dev/vgdata

2. Export the disk(s) while creating a logical volume map file.

vgexport -v -m data_map vgdata

3. Disconnect the drives and move them to the new system.

4. Move the data_map file to the new system.

5. On the new system recreate the volume group directory

mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x020000

6. Import the disks to the new system

vgimport -v -m data_map /dev/vgdata /dev/dsk/c2t1d0 /dev/dsk/c2t2d0

7. Enable the new volume group

vgchange -a y /dev/vgdata

8. Renaming a logical volume

/dev/vgdata/lvol1 -> /dev/vgdata/data_lv

umount /dev/vgdata/lvol1

ll /dev/vgdata/lvol1 { take note of the minor number, e.g. 0x010001 }
brw-r----- 1 root root 64 0x010001 Dec 31 17:59 lvol1

mknod /dev/vgdata/data_lv b 64 0x010001 { create the new logical volume name }

mknod /dev/vgdata/rdata_lv c 64 0x010001

vi /etc/fstab { reflect the new logical volume }

mount -a

rmsf /dev/vgdata/lvol1

rmsf /dev/vgdata/rlvol1 

Volume Logix

December 3rd, 2008

The order for getting Fibre Channel based hypervolume extensions (HVEs) visible on systems, particularly SUN systems, is as follows:

1. Appropriately zone so the Host Bus Adapter (HBA) can see the EMC Fibre Adapter (FA).

2. Reboot the system so it can see the vcm database disk on the FA OR
1. SUN:
1. drvconfig -i sd; disks; devlinks (SunOS <= 5.7)
2. devfsadm -i sd (SunOS >= 5.7 (w/patches))

2. HP:
1. ioscan -f # Note the new hw address
2. insf -e -H ${hw}

3. Execute vcmfind to ensure the system sees the Volume Logix database.

4. ID the mapped information
1. Map HVEs to the FA if not already done.
2. symdev list -SA ${fa} to see what’s mapped.
3. symdev show ${dev} to ID the lun that ${dev} is mapped as. The display should look something like:

Front Director Paths (4):

{
-------------------------------------------------------------
            POWERPATH          DIRECTOR            PORT
----------------------- ------------------ ------------------
PdevName          Type   Num   Type   Num   Sts   VBUS TID LUN
-------------------------------------------------------------
Not Visible       N/A    03A   FA     0     RW    000  00  70
Not Visible       N/A    14A   FA     0     NR    000  00  70
Not Visible       N/A    03B   FA     0     NR    000  00  70
Not Visible       N/A    14B   FA     0     NR    000  00  70
}

The number you’re looking for is under the LUN column. Remember, it’s hex, so the LUN that will show up in the ctd device name is 0x70 = 112, i.e. c#t#d112.

5. On SUN systems, modify the /kernel/drv/sd.conf file so the system will see the new disks. You’ll need to do a reconfig reboot after modifying this file. If the system doesn’t see it on a reconfig reboot, this file is probably the culprit!
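Sticking with the example LUN above (0x70 = 112), the sd.conf entry would look something like the following; the target value is a placeholder and depends on how the FA presents itself to the HBA:

name="sd" class="scsi" target=0 lun=112;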

6. fpath adddev -w ${hba_wwn} -f ${fa} -r "${list_of_EMC_devs}"

You can specify multiple EMC device ranges; just separate them by spaces, not commas (see the example below).
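Filled in with made-up values (the WWN, FA and device numbers below are placeholders, not real ones), the command might look like:

fpath adddev -w 10000000c9234567 -f 3a -r "0070 0071 0072"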
7. Reboot the system so it can see the new disks on the FA OR
1. SUN:
1. drvconfig -i sd; disks; devlinks (SunOS <= 5.7)
2. devfsadm -i sd (SunOS >= 5.7 (w/patches))

2. HP:
1. ioscan -f # Note the new hw address
2. insf -e -H ${hw}