
Posts Tagged ‘EMC’

Clariion Cache: Navicli Commands

January 17th, 2009

The following commands change the cache options on Clariion CX, CX3 and CX4 systems using NAVICLI. Most of these changes can also be made through the Navisphere Manager GUI.

This post is an extension of my previous post on Clariion cache flushing techniques: Idle Cache Flushing, Watermark Cache Flushing and Forced Cache Flushing, which appears below.



To Enable Cache on the Clariion

naviseccli -h setcache -wc 1 -rca 1 -rcb 1

Command Arguments

-wc Write cache enabled (1) and disabled (0)

-rca Read cache for SPA enabled (1) and disabled (0)

-rcb Read cache for SPB enabled (1) and disabled (0)

To Disable Cache on the Clariion

naviseccli -h setcache -wc 0 -rca 0 -rcb 0

Command Arguments

-wc Write cache enabled (1) and disabled (0)

-rca Read cache for SPA enabled (1) and disabled (0)

-rcb Read cache for SPB enabled (1) and disabled (0)

To Set the Write Cache to 2GB and the Read Cache to 4GB on Both SPs (SPA and SPB)

naviseccli -h setcache -wsz 2048 -rsza 4096 -rszb 4096

Command Arguments

-wsz Write cache size in MB (here 2048 MB = 2GB; valid between 1GB and 3GB)

-rsza Read cache size for SPA in MB (here 4096 MB = 4GB; valid between 1GB and 4GB)

-rszb Read cache size for SPB in MB (here 4096 MB = 4GB; valid between 1GB and 4GB)

To Set the Cache Page Size to 4KB, the Low Watermark to 60% and the High Watermark to 80%

naviseccli -h setcache -p 4 -l 60 -h 80

Command Arguments

-p Cache page size in KB (4, 8 or 16)

-l Low Watermark value (percent)

-h High Watermark value (percent)

To Disable HA Vault Drive Cache

naviseccli -h setcache -hacv 0

Command Arguments

-hacv HA Vault Cache enabled (1) and disabled (0)

To Enable HA Vault Drive Cache

naviseccli -h setcache -hacv 1

Command Arguments

-hacv HA Vault Cache enabled (1) and disabled (0)
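If you prefer to script these cache changes, here is a minimal Python sketch that simply shells out to naviseccli with the same switches listed above. The SP address is a placeholder, and the sketch assumes naviseccli is in your PATH and that security credentials are already stored for the user running it.

import subprocess

SP_ADDRESS = "spa.example.local"  # placeholder: your SP's hostname or IP

def navi(*args):
    """Run one naviseccli command against the SP and return its output."""
    cmd = ["naviseccli", "-h", SP_ADDRESS] + list(args)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Enable write cache and read cache on both SPs (same switches as above)
navi("setcache", "-wc", "1", "-rca", "1", "-rcb", "1")

# Set write cache to 2GB and read cache to 4GB per SP (sizes are in MB)
navi("setcache", "-wsz", "2048", "-rsza", "4096", "-rszb", "4096")

# Print the resulting cache configuration for verification
print(navi("getcache"))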

Clariion Cache: Idle, Watermark and Forced Flushing

January 17th, 2009


Clariion cache flushing is the mechanism that controls when data in cache needs to be flushed out to disk, using patented algorithms driven by defined watermark levels.

To set watermark limits on your Clariion frames, open Navisphere Manager, select the Clariion you want to change, right-click it, select the array's Properties and go to the Cache tab.

There are several parameters you can set there, including the Low Watermark (LWM) and the High Watermark (HWM). Unless you actually understand watermarking, please leave the default values in place.

There are three different cache flushing techniques:

Idle Cache Flushing

Watermark Cache Flushing

Forced Cache Flushing



All three processes and the scenarios in which they apply are described below.

Idle Cache Flushing

When a host writes data to a Clariion disk, the Clariion takes that data, writes it to cache and acknowledges back to the host that the data has been written to disk. The data may actually still be sitting in cache, or in the middle of being written to disk, when that acknowledgement goes out. Data is transferred from cache to disk in 64 kilobyte chunks, and this process of emptying cache and pushing its contents out to disk is called flushing. Idle flushing happens in the background while the array has spare cycles and cache usage is low.

When large amounts of data come in from the host, Idle Cache Flushing is sometimes unable to hold the cache at the Low Watermark (LWM); in those cases Watermark Cache Flushing kicks in.

Watermark Cache Flushing

As you set up your watermarks using the Navisphere Manager procedure above, let's assume your Low Watermark (LWM) is set at 60% and your High Watermark (HWM) at 80%. In this scenario, the Clariion's algorithms will try to keep cache utilization between 60% and 80%, since those are defined as the low and high watermarks.

If for some reason cache occupancy exceeds 80% (the HWM), Forced Flushing kicks in, disabling write cache on the Clariion.



Forced Cache Flushing

When both Idle Cache Flushing and Watermark Cache Flushing fail to keep up, Forced Cache Flushing kicks in.

With Forced Cache Flushing, write cache on the Clariion is disabled and cache is destaged to disk. No new data is written to cache during this time; all data goes straight to disk, so the acknowledgement back to the host after write verification takes longer, causing performance issues. Host I/O is queued, because the cache destage gets priority on the disks, and the host is starved of I/O during these periods. Forced flushing can last anywhere from milliseconds to several seconds, depending on the amount of data to be flushed.

This process continues until cache falls back to the LWM (60% in our example), at which point write caching is enabled again.


These processes are going on constantly as your Clariion serves reads and writes to the attached host systems.
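To make the watermark behaviour a little more concrete, here is a small illustrative Python sketch of the decision logic described above. It is only a toy model of how the LWM, HWM and forced flushing interact, not EMC's patented algorithm.

# Toy model of the flushing states described above -- an illustration only,
# not EMC's actual patented algorithm.
LWM = 60   # Low Watermark, percent of write cache in use
HWM = 80   # High Watermark, percent of write cache in use

def flush_state(cache_used_pct, write_cache_enabled=True):
    """Return the flushing mode the array would be in at a given cache level."""
    if not write_cache_enabled:
        # Forced flushing stays active until cache drains back down to the LWM
        return "forced flushing" if cache_used_pct > LWM else "re-enable write cache"
    if cache_used_pct > HWM:
        # Above the HWM: write cache is disabled and everything is destaged to disk
        return "forced flushing (write cache disabled)"
    if cache_used_pct >= LWM:
        # Between the watermarks: watermark flushing keeps the level in range
        return "watermark flushing"
    # Below the LWM: cache is flushed lazily while the array is otherwise idle
    return "idle flushing"

for pct in (30, 65, 85):
    print(pct, "->", flush_state(pct))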

Clariion Basics: DAE, DPE, SPE, Drives, Naming Conventions and Backend Architecture

January 14th, 2009


DAE: Disk Array Enclosure

DPE: Disk Processor Enclosure

SPE: Service Processor Enclosure

DAE, DPE and SPE sound similar to each other, but the major differences between them are described below.

The picture above is a diagram of the Clariion backend architecture. Drives are enclosed in DAEs, and the Service Processors sit in DPEs or SPEs depending on the model.


DAE: Disk Array Enclosure

Each Disk Array Enclosure (DAE) holds 15 drives, with slots numbered 0 through 14. I especially remember this from reading Dave's NetApp blog post, The Story of Chapter Zero (http://blogs.netapp.com/dave/2009/01/the-story-of-ch.html).


DPE: Disk Processor Enclosure

The CX200, CX300, CX400 and CX500 use a DPE, which holds 15 drives in the front and the two Service Processors in the back.

SPE: Service Processor Enclosure

The CX3 series, CX4 series, CX600 and CX700 use an SPE, which holds the Service Processors in the back with cooling fans in the front.


Architecture

The CX200, CX300 and CX3-10 have one bus/loop.

The CX400, CX500 and CX600 have two buses/loops.

The CX700, CX3-20, CX3-40 and CX3-80 have four buses/loops.

With more buses/loops you can expect more throughput. The Clariion CX700 and the newer CX3 and CX4 series have more buses than the traditional CX200, CX300, CX400 and CX500.

All data from the host goes to cache and is queued to be written to disk over these backend buses/loops. The backend bus/loop speed on the CX series is 2Gb per second; on the CX3 series it jumps to 4Gb per second and on the CX4 series to 8Gb per second.

The bus/loop originates at the SP level and runs up to the DAEs, each of which has LCCs (Link Control Cards). Each LCC is where the bus/loop from the previous DAE (or from the SP) comes in and is then daisy-chained to the DAE above it, creating a true chained environment and protecting against single points of failure. The LCCs in a loop are connected using HSSDC cables. These HSSDC cables and LCC cards are hot swappable, so replacing them does not cause an outage on the machine. There are also power supplies on each SPE, DAE and DPE, allowing hot replacement while the machine is running. Depending on your environment, these replacements might cause some performance degradation or an I/O bottleneck during the replacement window.


Addressing

The addressing scheme is part of the Clariion architecture, and understanding it is very important to properly understanding how the Clariion and its backend work.

Based on the model number you will have X number of buses.

For example

The CX200, CX300 and CX3-10 have one bus/loop.

The CX400, CX500 and CX600 have two buses/loops.

The CX700, CX3-20, CX3-40 and CX3-80 have four buses/loops.

Each bus is numbered as BUS 0, BUS 1, BUS 2 and BUS 3 depending on the model types.

Each DAE (Disk Array Enclosure) on a bus is given an enclosure number based on its position on the loop running into it. Again, numbering starts at 0.

So for a CX700, if you have 4 Buses and 8 DAE’s you will have your addressing as follows:

Bus0_Enclosure0

Bus1_Enclosure0

Bus2_Enclosure0

Bus3_Enclosure0

Bus0_Enclosure1

Bus1_Enclosure1

Bus2_Enclosure1

Bus3_Enclosure1

And so forth. The picture above shows this layout for a CX500 with two buses/loops.
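As a quick illustration of this enumeration (a sketch only, not an EMC tool), the addresses in the CX700 example above can be generated like this:

# Sketch: enumerate BusX_EnclosureY addresses in the same order as the CX700
# example above -- one enclosure per bus, then the next enclosure on each bus.
BUSES = 4   # a CX700 has 4 buses/loops
DAES = 8    # number of DAEs installed in this example

for n in range(DAES):
    bus = n % BUSES          # which bus/loop the DAE hangs off
    enclosure = n // BUSES   # its enclosure number (position) on that bus
    print(f"Bus{bus}_Enclosure{enclosure}")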

The idea is that the bus/loop starts at the SPE/DPE and runs into the DAEs (enclosures), giving each one a unique ID for communication and expansion purposes.

To add some complexity to the mix, each DAE can have 15 drives installed in it, starting at Slot 0 and going up to Slot 14.

Adding the disk to the bus and enclosure notation above, BUS X_ENCLOSURE X becomes BUS X_ENCLOSURE X_DISK XX, called B_E_D for short.

Disk 9 installed on Bus 0, Enclosure 0 would be designated Bus0_Enclosure0_Disk9, or 0_0_9 for short.

For the 2nd drive installed on Bus 2, Enclosure 0, the address would be 2_0_1 (remember that numbering starts at 0 and we are talking about the 2nd drive).
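As an illustration only (again, this is not an EMC tool, just a sketch of the naming rules described above), here is a tiny Python helper for building B_E_D addresses:

# Hypothetical helper for the Bus_Enclosure_Disk (B_E_D) naming described above.
def bed(bus, enclosure, slot):
    """Return the short B_E_D address for a drive, e.g. 0_0_9."""
    if not 0 <= slot <= 14:
        raise ValueError("a DAE holds 15 drives, in slots 0 through 14")
    return f"{bus}_{enclosure}_{slot}"

print(bed(0, 0, 9))   # Disk 9 on Bus 0, Enclosure 0      -> 0_0_9
print(bed(2, 0, 1))   # 2nd drive on Bus 2, Enclosure 0   -> 2_0_1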

Why is all this information necessary?

It is good to know exactly where your data sits. It helps with parts replacement, troubleshooting and diagnosing disk contention, and it can also help you design your environment so that certain applications and databases land on certain buses, enclosures and drives (say your Oracle databases need 15K drives and your backups need ATA drives). You configure all of this using LUNs, MetaLUNs, RAID Groups, Storage Groups, etc.

I will try to discuss those topics in some forthcoming blogs.

To read about Clariion: Please follow the Tag: Clariion at http://storagenerve.com/tag/clariion


LUN and VBUS Mapping for HP-UX

January 10th, 2009

Here is the set of commands needed for mapping LUNs and the VBUS on an HP-UX system.

Your command file should look like this:

map dev XXX to dir FA:P, vbus=X, target=Y, lun=Z;

Parameters:
XXX is the Symmetrix device being mapped
FA is the director the device is being mapped to
P is the port on the FA
X is the virtual bus value (valid values 0-F)
Y is the target id (valid values 0-F)
Z is the lun address (valid values 0-7)

The command symcfg -sid xxx list -available -address (see my earlier post on listing the available LUNs on an FA for mapping) will display LUNs above 7, but these are not valid or usable by HP-UX.

You will have to find the next available LUN in the 0-7 range. If there are no more available addresses on any VBUS, you can map a device and specify the next VBUS; this will create a new VBUS and add its available LUNs to it.

When the HP-UX host shares the FA with another host type and heterogeneous port sharing is being used, it is only necessary to specify a LUN address.

You will need to enable the Volume Set Addressing (V) flag on the FA, or the mapping will fail with an error. The LUN address specified should be 3 digits, containing the required VBUS, target and LUN values. This LUN address will be interpreted as VBUS, target and LUN when the HP-UX host logs into the Symmetrix.
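To illustrate how the 3-digit address packs the VBUS, target and LUN values together, here is a small Python sketch. The device number and director name in the example are hypothetical, not taken from a real configuration.

# Sketch: build a 'map dev' command-file line and the 3-hex-digit Volume Set
# Address from the vbus/target/lun values described above. The device and
# director names below are hypothetical examples.
def vsa(vbus, target, lun):
    """Pack vbus (0-F), target (0-F) and lun (0-7) into a 3-hex-digit address."""
    if not (0 <= vbus <= 0xF and 0 <= target <= 0xF and 0 <= lun <= 7):
        raise ValueError("vbus and target must be 0-F, lun must be 0-7")
    return f"{vbus:X}{target:X}{lun:X}"

def map_line(dev, director, port, vbus, target, lun):
    """Render a command-file line in the format shown above."""
    return (f"map dev {dev} to dir {director}:{port}, "
            f"vbus={vbus:X}, target={target:X}, lun={lun:X};")

print(map_line("01A", "FA-7A", "0", 1, 0, 3))   # hypothetical values
print("Volume Set Address:", vsa(1, 0, 3))      # -> 103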