
Clariion Basics: DAE, DPE, SPE, Drives, Naming Conventions and Backend Architecture

January 14th, 2009


DAE: Disk Array Enclosure

DPE: Disk Processor Enclosure

SPE: Storage Processor Enclosure

DAE, DPE and SPE sound similar to each other, but below you will see the major differences between them.

The picture above is a diagram of the Clariion backend architecture. Drives are enclosed in DAEs, and the Storage Processors sit in DPEs or SPEs, depending on the model type.


DAE: Disk Array Enclosure

Each Disk Array Enclosure (DAE) holds 15 drives, numbered 0 through 14. I especially remember this from reading Dave's NetApp blog post The Story of Chapter Zero (http://blogs.netapp.com/dave/2009/01/the-story-of-ch.html).


DPE: Disk Processor Enclosure

The CX200, CX300, CX400 and CX500 use DPEs, which hold 15 drives in the front and 2 Storage Processors in the back.

SPE: Storage Processor Enclosure

With the CX3s, CX4s, CX600 and CX700, the SPE holds the Storage Processors in the back, with cooling fans in the front.


Architecture

The CX200, CX300 and CX3-10 have one bus/loop

The CX400, CX500 and CX600 have two bus/loops

The CX700, CX3-20, CX3-40 and CX3-80 have four bus/loops

With more buses/loops you can expect more throughput. The Clariion CX700 and the newer CX3s and CX4s have more buses than the traditional CX200, CX300, CX400 and CX500.

All data from the host goes to cache and is queued to be written to disk through these backend buses/loops. The backend bus/loop speed on the CX series is 2 Gb/s; with the CX3s it jumps to 4 Gb/s, and with the CX4s to 8 Gb/s.

The bus/loop originates at the SP level and runs up to the DAEs, each of which has LCCs (Link Control Cards). Each LCC takes the bus/loop coming in from the previous DAE/SP and daisy-chains it to the enclosure above it, creating a true chained environment and protecting against single points of failure. All LCCs are connected in the loop using HSSDC cables. Both the HSSDC cables and the LCC cards are hot swappable, so replacing them will not cause an outage on the machine. There are power supplies on each SPE, DAE and DPE, allowing hot replacement while the machine is functional. Depending on your environment, these replacements might still cause some performance degradation or an I/O bottleneck during the replacement window.


Addressing

The addressing scheme is part of the Clariion architecture. To properly understand Clariion functionality and its backend workings, the addressing scheme is very important.

Based on the model number you will have X number of buses.

For example

The CX200, CX300 and CX3-10 have one bus/loop

The CX400, CX500 and CX600 have two bus/loops

The CX700, CX3-20, CX3-40 and CX3-80 have four bus/loops

Each bus is numbered BUS 0, BUS 1, BUS 2 and BUS 3, depending on the model type.

Each DAE (Disk Array Enclosure) on a bus is numbered based on its position along the physical loop running into it. Again, numbering starts at 0.

So for a CX700 with 4 buses and 8 DAEs, your addressing will be as follows:

Bus0_Enclosure0

Bus1_Enclosure0

Bus2_Enclosure0

Bus3_Enclosure0

Bus0_Enclosure1

Bus1_Enclosure1

Bus2_Enclosure1

Bus3_Enclosure1

And so forth. The picture above shows this for a CX500 with its 2 buses/loops.

The idea is that the bus/loop starts at the SPE/DPE and runs into the DAEs (enclosures), assigning each one a unique ID for communication and expansion purposes.

To add some complexity to the mix, each DAE can have 15 drives installed in it, starting at Slot 0 and going to Slot 14.

Adding the disk information to the bus and enclosure above, BUS X_ENCLOSURE X becomes BUS X_ENCLOSURE X_DISK XX, called B_E_D for short.

Disk 9 installed on Bus 0, Enclosure 0, would designate it as Bus0_Enclosure0_Disk9 or in short 0_0_9.
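As a quick, hedged illustration (the SP hostname spa_hostname is invented here, and exact syntax varies by Navisphere CLI version), the same B_E_D address is what NaviCLI expects when you query a disk:

navicli -h spa_hostname getdisk 0_0_9 { report on Bus 0, Enclosure 0, Disk 9 }

This comes in handy for confirming a drive's physical location before a replacement.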

For the 2nd drive installed in Bus 2, Enclosure 0, you would have the address as ??????


2_0_1

(Remember the numbering starts at 0; we are talking about the 2nd drive.)

Why is all this information necessary?

It is good to know exactly where your data sits: it helps with parts replacement, troubleshooting and figuring out disk contention, and it can help you design your environment around your applications and databases, so you can put certain apps on certain buses, enclosures and drives (say your Oracle needs 15K drives and your backups need ATA drives). You will be able to configure all of it using LUNs, MetaLUNs, RAID Groups, Storage Groups, etc.
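As a hedged sketch of where these B_E_D addresses appear in that configuration work (the SP hostname, RAID group ID and LUN number are invented, and options vary by NaviCLI version), creating a RAID group on specific drives and binding a LUN to it might look like:

navicli -h spa_hostname createrg 10 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9 { RAID group 10 on five drives in Bus 0, Enclosure 0 }

navicli -h spa_hostname bind r5 25 -rg 10 { bind LUN 25 as RAID 5 on that RAID group }

Here all five drives sit on the same bus and enclosure; spreading a RAID group across buses and enclosures is one way to act on the contention considerations above.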

I will try to discuss those topics in some forthcoming blogs.

To read more about Clariion, please follow the tag Clariion at http://storagenerve.com/tag/clariion


LUN and VBUS Mapping for HP-UX

January 10th, 2009

Here is the set of commands needed for mapping LUNs and VBUS on an HP-UX system.

Your command file should look like this:

map dev XXX to dir FA:P, vbus=X, target=Y, lun=Z;

Parameters:
XXX is the Symmetrix device being mapped
FA is the director the device is being mapped to
P is the port on the FA
X is the virtual bus value (valid values 0-F)
Y is the target id (valid values 0-F)
Z is the lun address (valid values 0-7)

The symcfg -sid xxx list -available -address command (see the post on LUN addresses available for mapping on an FA port) will display LUNs above 7, but these are not valid or usable by HP-UX.

You will have to find the next available LUN in the 0-7 range. If there are no more available addresses on any VBUS, you can map a device and specify the next VBUS. This will create a new VBUS and add the available LUNs to it.

In the case where the HP-UX host shares the FA with another host type and heterogeneous port sharing is in use, it is only necessary to specify a LUN address.

You will need to enable the Volume Set Addressing (V) flag on the FA, or the mapping will end up in error. The LUN address specified should be 3 digits, containing the required VBUS, target and LUN values. This LUN address will be interpreted as VBUS, target and LUN when the HP-UX host logs into the Symmetrix.
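As a hedged, end-to-end sketch (the device number 0923, director 4B, Symmetrix ID and file name are all invented for illustration), a command file entry following the syntax above, plus the symconfigure session that applies it, might look like this:

map dev 0923 to dir 4B:0, vbus=0, target=1, lun=3;

symconfigure -sid 1867 -f mapfile.cmd preview

symconfigure -sid 1867 -f mapfile.cmd commit

With the V flag enabled, this maps the device at Volume Set Address 013, which the HP-UX host reads back as vbus=0, target=1, lun=3.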

HP-UX Volume (Disk) and File system functions

January 10th, 2009

Here are some HP-UX commands that will come in handy with volumes and filesystems.

Search for attached disk

ioscan -fnC disk

Initialize a disk for use with LVM

pvcreate -f /dev/rdsk/c0t1d0

Create the device structure needed for a new volume group.

cd /dev
mkdir vgdata
cd vgdata
mknod group c 64 0x010000

Create volume group vgdata

vgcreate vgdata /dev/dsk/c0t1d0

{ if you're expecting to use more than 16 physical disks, use the -p option; the range is 1 to 256 disks. }

Display volume group vgdata

vgdisplay -v vgdata

Add another disk to volume group

pvcreate -f /dev/rdsk/c0t4d0

vgextend vgdata /dev/dsk/c0t4d0

Remove disk from volume group

vgreduce vgdata /dev/dsk/c0t4d0

Create a 100 MB logical volume lvdata

lvcreate -L 100 -n lvdata vgdata

newfs -F vxfs /dev/vgdata/rlvdata

Extend logical volume to 200 MB

lvextend -L 200 /dev/vgdata/lvdata

Extend file system to 200 MB
{ if you don't have OnlineJFS installed, the volume must be unmounted before you can extend the file system. }

fuser -ku /dev/vgdata/lvdata { kill all processes that have open files on this volume. }

umount /dev/vgdata/lvdata

extendfs -F vxfs /dev/vgdata/rlvdata

mount /dev/vgdata/lvdata /data

{ with OnlineJFS the file system can be grown while mounted; 200 MB = 204800 blocks of 1 KB }

fsadm -F vxfs -b 204800 /data

Set largefiles to support files greater than 2GB

fsadm -F vxfs -o largefiles /data


Exporting and importing disks across systems.

1. Make the volume group unavailable

vgchange -a n /dev/vgdata

2. Export the disk while creating a logical volume map file.

vgexport -v -m data_map vgdata

3. Disconnect the drives and move them to the new system.

4. Move the data_map file to the new system.

5. On the new system, recreate the volume group directory

mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x020000

6. Import the disks to the new system

vgimport -v -m data_map /dev/vgdata /dev/dsk/c2t1d0 /dev/dsk/c2t2d0

7. Enable the new volume group

vgchange -a y /dev/vgdata

Renaming a logical volume

/dev/vgdata/lvol1 -> /dev/vgdata/data_lv

umount /dev/vgdata/lvol1

ll /dev/vgdata/lvol1 { take note of the minor number, e.g. 0x010001 }
brw-r----- 1 root root 64 0x010001 Dec 31 17:59 lvol1

mknod /dev/vgdata/data_lv b 64 0x010001 { create the new block device name }

mknod /dev/vgdata/rdata_lv c 64 0x010001 { create the matching raw (character) device }

vi /etc/fstab { reflect the new logical volume }

mount -a

rmsf /dev/vgdata/lvol1

rmsf /dev/vgdata/rlvol1 

LUN Addresses on an FA Port available for mapping

January 9th, 2009

Before you are ready to map devices to an FA port via the command line (symcli), you will need to determine which LUN (hyper) addresses are available (unused) for mapping purposes.


symcfg list -sid xxxx -FA dir -P port -available -addresses

Parameters

xxxx is the last 4 digits of the Symmetrix serial number
dir is the FA director number, e.g. 4B
port is the FA's port number, i.e. 0 or 1

This will produce output indicating all available addresses, with the start of each available range marked by an asterisk (*). Note that the LUN (hyper) addresses displayed in this output are hexadecimal values.

For example, in the output below, the available LUN (hyper) addresses are 06 through 24, 29 through 7F and greater than 83.


Symmetrix ID: 000185701867 (Local)

              Director        Device Name              Attr        Address
------  --------  ----  ----  -----------------  ----  ----  ---  ------
Ident   Symbolic  Port  Sym   Physical                 VBUS  TID  LUN
------  --------  ----  ----  -----------------  ----  ----  ---  ------
FA-4B   04B       0     0000  Not Visible        VCM   0     0    000
                        0901  Not Visible              0     0    001
                        0902  Not Visible              0     0    002
                        0903  Not Visible              0     0    003
                        0904  Not Visible              0     0    004
                        0905  Not Visible              0     0    005
                        -     AVAILABLE                0     0    006 *
                        0925  Not Visible              0     0    025
                        0926  Not Visible              0     0    026
                        0927  Not Visible              0     0    027
                        0928  Not Visible              0     0    028
                        -     AVAILABLE                0     0    029 *
                        0980  Not Visible              0     0    080
                        0981  Not Visible              0     0    081
                        0982  Not Visible              0     0    082
                        0983  Not Visible              0     0    083
                        -     AVAILABLE                0     0    084 *
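Tying this back to the LUN and VBUS mapping post above: a hypothetical command file entry using the first available address shown here (VBUS 0, TID 0, LUN 006) for an invented device number 0906 might read:

map dev 0906 to dir 4B:0, vbus=0, target=0, lun=6;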