Archive for the ‘Storage’ Category

Clariion Cache: Idle, Watermark and Forced Flushing

January 17th, 2009 2 comments

Clariion Cache Flushing is the process that controls when data in cache gets flushed out to disk, using patented algorithms driven by watermark levels.

To set watermarking limits on your Clariion frames, open Navisphere Manager. Once you select the Clariion that you need to set the Watermarking on, right click to select the properties of the Array and go to the Cache tab.

There are several parameters you can set there; without a working knowledge of watermarking, please leave the default values in place. You will have options to set the Low Watermark (LWM) and the High Watermark (HWM).

There are three different cache flushing techniques.

Idle Cache Flushing

Watermark Cache Flushing

Forced Cache Flushing

All three processes and their applicable scenarios are described below.

Idle Cache Flushing

When a host writes data to a Clariion disk, the Clariion first writes that data to cache and immediately acknowledges to the host that the data has been written to disk, even though the data may still be sitting in cache at that point. When data is transferred from cache to disk, it moves in 64 kilobyte chunks. This process of emptying cache out to disk is called flushing.

When large amounts of data come in from the host, Idle Cache Flushing is sometimes unable to keep cache occupancy at the Low Watermark (LWM); in those cases Watermark Cache Flushing kicks in.

Watermark Cache Flushing

Suppose you used Navisphere Manager, as described above, to set your Low Watermark (LWM) at 60% and your High Watermark (HWM) at 80%. The Clariion's algorithms will then try to keep cache occupancy between 60% and 80%, since those are the defined low and high watermarks.

If cache occupancy exceeds 80% (the HWM), Forced Flushing kicks in and disables all write cache on the Clariion.

Forced Cache Flushing

So when both Idle Cache Flushing and Watermark Cache Flushing fail to keep up, Forced Cache Flushing kicks in.

With Forced Cache Flushing, write cache on the Clariion is disabled and the cache is destaged to disk. No new data is written to cache during this time; every write goes straight to disk, so the acknowledgement back to the host comes only after write verification and takes much longer, causing performance problems. Host I/O is queued because the flush gets priority on the disks, starving the hosts of I/O. A forced flush can last anywhere from milliseconds to several seconds, depending on how much data must be flushed.

This process continues until occupancy falls to the LWM (60% in our example), at which point write caching is enabled again.
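Putting the three modes together, the behavior can be modeled as a small state machine. Below is a minimal illustrative sketch using the 60%/80% example watermarks; the `WriteCache` class and its logic are my own simplification, not Clariion internals:

```python
class WriteCache:
    """Simplified model of Clariion-style watermark cache flushing."""

    def __init__(self, lwm=60, hwm=80):
        self.lwm = lwm                  # Low Watermark (%)
        self.hwm = hwm                  # High Watermark (%)
        self.write_cache_enabled = True

    def flush_mode(self, occupancy):
        """Return the flushing mode for a given cache occupancy (%)."""
        if occupancy > self.hwm:
            # Crossing the HWM disables write caching entirely.
            self.write_cache_enabled = False
        if not self.write_cache_enabled:
            if occupancy > self.lwm:
                # Forced flushing destages cache until the LWM is reached...
                return "forced"
            # ...and only then is write caching re-enabled.
            self.write_cache_enabled = True
        if occupancy >= self.lwm:
            # Between the watermarks: flush to stay inside the band.
            return "watermark"
        # Below the LWM: flush lazily while the array is idle.
        return "idle"
```

Walking occupancy up past 80% and back down reproduces the behavior described above: once the HWM is crossed, write cache stays disabled and every flush is forced until occupancy drops back to the LWM.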

All of these processes run continuously as your Clariion serves reads and writes to the attached host systems.

Clariion Basics: DAE, DPE, SPE, Drives, Naming Conventions and Backend Architecture

January 14th, 2009 21 comments

DAE: Disk Array Enclosure

DPE: Disk Processor Enclosure

SPE: Storage Processor Enclosure

DAE, DPE and SPE sound similar to each other, but below you will see the major differences between them.

The picture above is a diagram of the Clariion backend architecture. Drives are housed in DAEs, and Storage Processors in DPEs or SPEs, depending on the model.

DAE: Disk Array Enclosure

Each Disk Array Enclosure (DAE) holds 15 drives, in slots numbered 0 through 14. I especially remember this from reading Dave's NetApp blog post, The Story of Chapter Zero.

DPE: Disk Processor Enclosure

The CX200, CX300, CX400 and CX500 use DPEs, which hold 15 drives in the front with 2 Storage Processors in the back.

SPE: Service Processor Enclosure

With the CX3s, CX4s, CX600 and CX700, the SPE holds the Storage Processors in the back, with cooling fans in the front.


CX200, CX300 and CX3-10 have one bus/loop

CX400, CX500 and CX600 have two buses/loops

CX700, CX3-20, CX3-40 and CX3-80 have four buses/loops.

More buses/loops means more potential throughput. The Clariion CX700 and the newer CX3s and CX4s have more buses than the traditional CX200, CX300, CX400 and CX500.

All data from the host goes to cache and is queued to be written to disk over these backend buses/loops. The backend bus/loop speed on the CX series is 2 Gb/s; it jumps to 4 Gb/s with the CX3s and to 8 Gb/s with the CX4s.

The bus/loop originates at the SP level and runs up through the DAEs, each of which has LCCs (Link Control Cards). Each LCC takes the bus/loop in from the previous DAE or SP and daisy-chains it on to the DAE above, creating a true chained environment with no single point of failure. The LCCs are connected into the loop with HSSDC cables. Both the HSSDC cables and the LCC cards are hot-swappable, so replacing them does not cause an outage. Each SPE, DAE and DPE also has its own power supplies, which can likewise be replaced while the machine is running. Depending on your environment, such replacements may still cause some performance degradation or an I/O bottleneck during the replacement window.


Part of the Clariion architecture is its addressing scheme. To properly understand how the Clariion backend works, the addressing scheme is very important.

Based on the model number you will have a certain number of buses, as listed above.

Each bus is numbered as BUS 0, BUS 1, BUS 2 and BUS 3 depending on the model types.

Each DAE (Disk Array Enclosure) located on the BUS is numbered based on the actual physical loop number running into it. Again numbering starts at 0.

So for a CX700 with 4 buses and 8 DAEs, with the enclosures spread evenly across the buses, the addressing works out as:

Bus 0, Enclosure 0 (0_0)
Bus 1, Enclosure 0 (1_0)
Bus 2, Enclosure 0 (2_0)
Bus 3, Enclosure 0 (3_0)
Bus 0, Enclosure 1 (0_1)
Bus 1, Enclosure 1 (1_1)
Bus 2, Enclosure 1 (2_1)
Bus 3, Enclosure 1 (3_1)

And so forth. The picture above illustrates the same scheme on a CX500 with 2 buses/loops.

The idea is that the bus/loop starts at the SPE/DPE and runs into the DAEs (enclosures), assigning each a unique ID for communication and expansion purposes.

To add some complexity to the mix, each DAE can have 15 drives installed in it, in Slot 0 through Slot 14.

Adding the disk to the bus-and-enclosure address above, BUS X_ENCLOSURE X becomes BUS X_ENCLOSURE X_DISK XX, called B_E_D for short.

Disk 9 installed on Bus 0, Enclosure 0 is designated Bus0_Enclosure0_Disk9, or 0_0_9 for short.
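The B_E_D convention is simple enough to express as a one-line helper. A quick sketch (the `bed` function is mine, not an EMC tool):

```python
def bed(bus, enclosure, slot):
    """Format a Bus_Enclosure_Disk (B_E_D) address; all three are 0-based."""
    return f"{bus}_{enclosure}_{slot}"

# Disk 9 on Bus 0, Enclosure 0:
print(bed(0, 0, 9))   # 0_0_9
```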

For the 2nd drive installed on Bus 2, Enclosure 0, what would the address be?

(Remember, the numbering starts at 0, and we are talking about the 2nd drive.)

Why is all this information necessary?

It is good to know exactly where your data sits: it helps with parts replacement, troubleshooting and diagnosing disk contention, and it can help you design your environment so that certain applications and databases live on particular buses, enclosures and drives (say, your Oracle needs 15K drives and your backups need ATA drives). You will be able to configure all of this using LUNs, MetaLUNs, RAID Groups, Storage Groups, etc.

I will try to discuss those topics in some forthcoming blogs.

To read more about Clariion, please follow the Clariion tag.

LUN and VBUS Mapping for HP-UX

January 10th, 2009 1 comment

Here is the set of commands needed for mapping LUNs and a VBUS on an HP-UX system.

Your command file should look like this:

map dev XXX to dir FA:P, vbus=X, target=Y, lun=Z;

XXX is the Symmetrix device being mapped
FA is the director the device is being mapped to
P is the port on the FA
X is the virtual bus value (valid values 0-F)
Y is the target id (valid values 0-F)
Z is the lun address (valid values 0-7)

The command symcfg -sid xxx list -available -address (see the post on listing available LUNs on an FA for mapping) will display LUNs above 7, but these are not valid or usable by HP-UX.

You will have to find the next available LUN in the 0-7 range. If there are no more available addresses on any VBUS, you can map a device and specify the next VBUS; this creates a new VBUS and adds its available LUNs.

In the case where the HP-UX host shares the FA with another host type and heterogeneous port sharing is in use, it is only necessary to specify a LUN address.

You will need to enable the Volume Set Addressing (V) flag on the FA, or the mapping will end up in error. The LUN address specified should be 3 digits, containing the required VBUS, target and LUN values; it will be interpreted as VBUS, target and LUN when the HP-UX host logs in to the Symmetrix.
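Packing the three values into that 3-digit address is straightforward hex formatting. A sketch (the helper name is mine; the 0-F and 0-7 limits come from the mapping syntax above):

```python
def hpux_address(vbus, target, lun):
    """Pack VBUS (0-F), target (0-F) and LUN (0-7) into the 3-digit
    hex address used when Volume Set Addressing is enabled on the FA."""
    if not (0 <= vbus <= 0xF and 0 <= target <= 0xF and 0 <= lun <= 7):
        raise ValueError("vbus/target must be 0-F and lun must be 0-7")
    return f"{vbus:X}{target:X}{lun:X}"

# vbus=1, target=A, lun=5:
print(hpux_address(0x1, 0xA, 0x5))   # 1A5
```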

HP-UX Volume (Disk) and File system functions

January 10th, 2009 No comments

Here are some HP-UX commands that come in handy with volumes and filesystems.

Search for attached disk

ioscan -fnC disk

Initialize a disk for use with LVM

pvcreate -f /dev/rdsk/c0t1d0

Create the device structure needed for a new volume group.

cd /dev
mkdir vgdata
cd vgdata
mknod group c 64 0x010000
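The 0x010000 minor number here is not arbitrary: 64 is the LVM device major, and the first byte of the minor is the volume group number, which must be unique per VG on the system. A sketch of that encoding (Python used purely to illustrate; verify against your own /dev tree):

```python
def lvm_group_minor(vg_number):
    """Minor number for a /dev/vgXX/group node: 0xNN0000, NN = VG number."""
    if not 0 <= vg_number <= 0xFF:
        raise ValueError("VG number must fit in one byte")
    return f"0x{vg_number:02x}0000"

# The group node created above for this (first) user volume group:
print(lvm_group_minor(1))   # 0x010000  ->  mknod group c 64 0x010000
print(lvm_group_minor(2))   # 0x020000 for the next volume group
```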

Create volume group vgdata

vgcreate vgdata /dev/dsk/c0t1d0

{ if you're expecting to use more than 16 physical disks, use the -p option; the range is 1 to 256 disks. }

Display volume group vgdata

vgdisplay -v vgdata

Add another disk to volume group

pvcreate -f /dev/rdsk/c0t4d0

vgextend vgdata /dev/dsk/c0t4d0

Remove disk from volume group

vgreduce vgdata /dev/dsk/c0t4d0

Create a 100 MB logical volume lvdata

lvcreate -L 100 -n lvdata vgdata

newfs -F vxfs /dev/vgdata/rlvdata

Extend logical volume to 200 MB

lvextend -L 200 /dev/vgdata/lvdata

Extend file system to 200 MB
{ if you don't have Online JFS installed, volumes must be unmounted before you can extend the file system. }

fuser -ku /dev/vgdata/lvdata { kill all processes that have open files on this volume. }

umount /dev/vgdata/lvdata

extendfs -F vxfs /dev/vgdata/rlvdata

{ with Online JFS, fsadm takes the new size in 1 KB blocks: 200 MB x 1024 = 204800 blocks }

fsadm -F vxfs -b 204800 /data
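fsadm's -b argument is the new size in sectors, which on HP-UX are 1 KB, so it is worth double-checking the arithmetic: 200 MB works out to 204800 blocks. A small helper for the conversion (assuming the usual 1 KB DEV_BSIZE):

```python
def fsadm_blocks(size_mb, sector_bytes=1024):
    """New-size argument for `fsadm -F vxfs -b`: target size in sectors."""
    return size_mb * 1024 * 1024 // sector_bytes

print(fsadm_blocks(200))   # 204800
```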

Set largefiles to support files greater than 2GB

fsadm -F vxfs -o largefiles /data

Exporting and Importing disks across system.

1. Make the volume group unavailable

vgchange -a n /dev/vgdata

2. Export the disks while creating a logical volume map file.

vgexport -v -m data_map vgdata

3. Disconnect the drives and move to new system.

4. Move the data_map file to the new system.

5. On the new system recreate the volume group directory

mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x020000

6. Import the disks to the new system

vgimport -v -m data_map /dev/vgdata /dev/dsk/c2t1d0 /dev/dsk/c2t2d0

7. Enable the new volume group

vgchange -a y /dev/vgdata

Renaming a logical volume

/dev/vgdata/lvol1 -> /dev/vgdata/data_lv

umount /dev/vgdata/lvol1

ll /dev/vgdata/lvol1 { take note of the minor number, e.g. 0x010001 }
brw-r----- 1 root root 64 0x010001 Dec 31 17:59 lvol1

mknod /dev/vgdata/data_lv b 64 0x010001 { create the new block device node }

mknod /dev/vgdata/rdata_lv c 64 0x010001 { and the matching character device node }

vi /etc/fstab { reflect the new logical volume }

mount -a

rmsf /dev/vgdata/lvol1

rmsf /dev/vgdata/rlvol1