
Clariion Basics: DAE, DPE, SPE, Drives, Naming Conventions and Backend Architecture

January 14th, 2009


DAE: Disk Array Enclosure

DPE: Disk Processor Enclosure

SPE: Storage Processor Enclosure

DAE, DPE and SPE sound similar to each other, but below you will see the major differences between them.

The picture above is a diagram of the Clariion backend architecture. Drives are enclosed in DAEs, and Storage Processors in DPEs or SPEs, depending on the model.


DAE: Disk Array Enclosure

Each Disk Array Enclosure (DAE) holds 15 drives, numbered 0 through 14. I particularly remember this from reading Dave's NetApp blog post, The Story of Chapter Zero (http://blogs.netapp.com/dave/2009/01/the-story-of-ch.html).


DPE: Disk Processor Enclosure

The CX200, CX300, CX400 and CX500 have DPEs installed that hold 15 drives in the front, with 2 Storage Processors in the back.

SPE: Storage Processor Enclosure

With the CX3s, CX4s, CX600 and CX700, the SPE holds the Storage Processors in the back, with cooling fans in the front.


Architecture

CX200, CX300 and CX3-10 have one bus/loop.

CX400, CX500 and CX600 have two buses/loops.

CX700, CX3-20, CX3-40 and CX3-80 have four buses/loops.

With more buses/loops you can expect more throughput. The Clariion CX700 and the newer CX3s and CX4s have more buses than the traditional CX200, CX300, CX400 and CX500.

All data from the host goes to cache and is queued to be written to disk through these backend buses/loops. The backend bus/loop speed on the CX series is 2 Gb/s; with the CX3s it jumps to 4 Gb/s, and with the CX4s to 8 Gb/s.
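As a rough back-of-the-envelope sketch (my own illustration using the figures above, not an official sizing formula), raw aggregate backend bandwidth is just the number of loops times the per-loop speed:

# Back-of-the-envelope only: raw aggregate backend bandwidth equals the
# number of backend buses/loops times the per-loop speed (Gb/s).
# Figures come from this article; real-world throughput is lower.
models = {
    # model: (buses/loops, loop speed in Gb/s)
    "CX300":  (1, 2),
    "CX500":  (2, 2),
    "CX700":  (4, 2),
    "CX3-80": (4, 4),
}

for model, (loops, speed) in models.items():
    print(f"{model}: {loops} x {speed} Gb/s = {loops * speed} Gb/s raw backend")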

The bus/loop originates at the SP level and runs up through the DAEs, each of which has LCCs (Link Control Cards). Each LCC is where the bus/loop from the previous DAE/SP comes in and is daisy-chained to the one above it, creating a true chained environment and protecting against single points of failure. All LCCs on a loop are connected using HSSDC cables. These HSSDC cables and LCC cards are hot swappable, so replacing them will not cause an outage on the machine. There are also power supplies on each SPE, DAE and DPE, allowing hot replacement while the machine is running. Depending on your environment, these replacements might cause some performance degradation or an I/O bottleneck during the replacement window.


Addressing

Part of the Clariion architecture is its addressing scheme. To properly understand Clariion functionality and its backend workings, the addressing scheme is very important.

Based on the model number you will have X number of buses.

For example

CX200, CX300 and CX3-10 have one bus/loop.

CX400, CX500 and CX600 have two buses/loops.

CX700, CX3-20, CX3-40 and CX3-80 have four buses/loops.

Each bus is numbered BUS 0, BUS 1, BUS 2 and BUS 3, depending on the model.

Each DAE (Disk Array Enclosure) on a bus is numbered by its position along the physical loop running into it. Again, numbering starts at 0.

So for a CX700 with 4 buses and 8 DAEs, your addressing will be as follows:

Bus0_Enclosure0

Bus1_Enclosure0

Bus2_Enclosure0

Bus3_Enclosure0

Bus0_Enclosure1

Bus1_Enclosure1

Bus2_Enclosure1

Bus3_Enclosure1

And so forth. The picture above shows the same thing for a CX500 with 2 buses/loops.
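A minimal sketch of how those IDs enumerate, assuming (as in the CX700 listing above) that DAEs are cabled round-robin across the buses; the helper name is my own:

# Minimal sketch: generate Bus_Enclosure IDs as in the CX700 example,
# assuming DAEs are cabled round-robin across the backend buses.
def enclosure_addresses(num_buses, num_daes):
    for i in range(num_daes):
        bus = i % num_buses          # which backend bus/loop
        enclosure = i // num_buses   # the DAE's position on that bus
        yield f"Bus{bus}_Enclosure{enclosure}"

# CX700: 4 buses, 8 DAEs
for addr in enclosure_addresses(4, 8):
    print(addr)   # Bus0_Enclosure0, Bus1_Enclosure0, ... Bus3_Enclosure1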

The idea is that the bus/loop starts at the SPE/DPE and runs into the DAEs (enclosures), assigning each a unique ID for communication and expansion purposes.

To add some complexity to the mix, each DAE can have 15 drives installed, starting at Slot 0 and going to Slot 14.

Adding the disk to the bus and enclosure notation above gives BUS X_ENCLOSURE X_DISK XX, called B_E_D for short.

Disk 9 installed on Bus 0, Enclosure 0 would be designated Bus0_Enclosure0_Disk9, or 0_0_9 for short.

For the 2nd drive installed on Bus 2, Enclosure 0, what would the address be?

2_0_1

(Remember, the numbering starts at 0; the 2nd drive sits in Slot 1.)
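As a quick sanity check on the B_E_D shorthand, here is a small sketch (the helper is my own, not an EMC tool) that builds and validates these addresses:

# Small sketch of the B_E_D (Bus_Enclosure_Disk) convention.
# The helper name is illustrative, not a Navisphere or EMC API call.
def bed(bus, enclosure, disk):
    if not 0 <= disk <= 14:
        raise ValueError("a DAE holds 15 drives, slots 0-14")
    return f"{bus}_{enclosure}_{disk}"

print(bed(0, 0, 9))  # 0_0_9 -> Disk 9 on Bus 0, Enclosure 0
print(bed(2, 0, 1))  # 2_0_1 -> 2nd drive (Slot 1) on Bus 2, Enclosure 0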

Why is all this information necessary?

It is good to know exactly where your data sits. It helps with parts replacement and troubleshooting, with figuring out disk contention, and with designing your environment around your applications and databases, so you can put certain apps on certain buses, enclosures and drives (say, your Oracle database needs 15K drives while your backups need ATA drives). You configure all of this using LUNs, MetaLUNs, RAID Groups, Storage Groups, etc.

I will try to discuss those topics in some forthcoming posts.

To read more about Clariion, please follow the Clariion tag at http://storagenerve.com/tag/clariion


Transfer of DataStorageWiki to StorageNerve.com

January 13th, 2009

So after a lot of hard work over the past weekend, the DataStorageWiki blog has finally been transferred over to StorageNerve.com. I explained in the previous post why this was important (http://www.storagenerve.com/2009/01/datastoragewikicom-becomes.html).

With a bit of planning and some import/export of data, I was finally able to move everything from DataStorageWiki to StorageNerve with practically zero downtime. I also realized how great it is to have FeedBurner: no outages on the RSS & ATOM feeds, and the subscriber list is unaffected.

I included a script on the www.datastoragewiki.com page that automatically forwards all users to StorageNerve.com in about 5 seconds. But this only works for home page forwarding (www), not the associated URLs.

One thing I would really like to do is automatically forward all DNS queries for Datastoragewiki.com/* (any URL within the domain) to StorageNerve.com. I tried some domain forwarding using CNAME and A records, but it didn't go anywhere.

The big problem I do see is that Google search results are all based on Datastoragewiki.com and its associated URLs, so when users click on one today, they end up with a 404 page. In short, users trying to reach technical content through search are getting Page Not Found. I am not sure if there is a way to fix this.

Another potential problem is that it will take light years for StorageNerve.com to be relisted on Google and Yahoo. Since most of my blogging revolves around technical terms, Google rankings on them are very important. With the content published on these pages, the current Google page and site rankings are pretty good; now it will take another 6 months before the same happens for the new domain.

Again, one of my purposes in blogging is to help storage experts throughout the world understand and compare technologies through technical information and product research. With that in mind, search engine optimization plays a very important role for the worldwide visitors who come in through search engines.

I hope that as a reader you have not felt any outages with this site, and that your subscribed feed is working okay.

I have a couple of posts ready to be published, but I will wait a few days until all users are updated with the new site address.

Thanks for reading and good luck on your projects!

DataStorageWiki.COM becomes StorageNerve.COM

January 11th, 2009

FLASH NEWS!!!!! 

As of Jan 10th 2009, DataStorageWiki.com will re-point to StorageNerve.com

When I initially purchased the domain name DataStorageWiki.com, the idea was to create a true WIKI site where users could come in and work on storage docs to keep them updated. This would allow storage specialists throughout the world to collaborate centrally on the upkeep of these docs.

When the site was launched, it was more of a central place for me to create and upload technical documents, but as time progressed it became more of a blogging adventure.

The name DataStorageWiki is truly not a blogging name, and I will keep it reserved for the above endeavor in the future.

With current blogging trends, it's best to keep the website theme, names, etc. along the lines of blogging rather than a WIKI.

With that in mind, I have moved the DataStorageWiki.com site to StorageNerve.com.

For right now, all users will automatically be re-pointed from datastoragewiki.com to storagenerve.com until the search engines start relisting everything as storagenerve.com. This should apply to all feed readers as well.

In the meantime, as storagenerve.com goes live, I will start designing some concepts on datastoragewiki.com and eventually host that site as a WIKI.

Hope there are no outages with this transfer…

LUN and VBUS Mapping for HP-UX

January 10th, 2009

Here is the set of commands needed for mapping LUNs and VBUSes on an HP-UX system.

Your command file should look like this:

map dev XXX to dir FA:P, vbus=X, target=Y, lun=Z;

Parameters:
XXX is the Symmetrix device being mapped
FA is the director the device is being mapped to
P is the port on the FA
X is the virtual bus value (valid values 0-F)
Y is the target ID (valid values 0-F)
Z is the LUN address (valid values 0-7)
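To make those ranges concrete, here is a small sketch (my own helper, not part of SYMCLI; the device and director values are hypothetical) that builds a map line and rejects out-of-range values:

# Sketch only: build one map line in the format shown above and enforce
# the HP-UX ranges from the parameter list. Not a SYMCLI tool; the
# device (01A) and director (7A) below are made-up examples.
def map_line(dev, fa, port, vbus, target, lun):
    if not 0x0 <= vbus <= 0xF:
        raise ValueError("vbus must be 0-F")
    if not 0x0 <= target <= 0xF:
        raise ValueError("target must be 0-F")
    if not 0 <= lun <= 7:
        raise ValueError("HP-UX only uses LUN addresses 0-7")
    return (f"map dev {dev} to dir {fa}:{port}, "
            f"vbus={vbus:X}, target={target:X}, lun={lun:X};")

print(map_line("01A", "7A", "0", 0x1, 0x0, 5))
# map dev 01A to dir 7A:0, vbus=1, target=0, lun=5;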

Running symcfg -sid xxx list -available -address (see the earlier post on listing available LUN addresses on an FA for mapping) will display LUNs above 7, but these are not valid or usable by HP-UX.

You will have to find the next available LUN in the 0-7 range. If there are no more available addresses on any VBUS, you can map a device and specify the next VBUS; this creates a new VBUS and adds its available LUNs to it.

Where the HP-UX host shares the FA with another host type and heterogeneous port sharing is in use, it is only necessary to specify a LUN address.

You will need to enable the Volume Set Addressing (V) flag on the FA, or the mapping will end up in error. The LUN address specified should be 3 digits, containing the required VBUS, target and LUN values. This LUN address will be interpreted as VBUS, target and LUN when the HP-UX host logs into the Symmetrix.
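A minimal sketch of how that 3-digit address breaks down, assuming one hex digit each for VBUS, target and LUN as described above:

# Sketch: compose and decompose the 3-digit Volume Set Address described
# above: one hex digit each for VBUS, target and LUN (LUN limited to 0-7).
def vsa(vbus, target, lun):
    return f"{vbus:X}{target:X}{lun:X}"

def split_vsa(addr):
    v, t, l = addr                  # three single hex digits
    return int(v, 16), int(t, 16), int(l, 16)

print(vsa(0x1, 0x0, 0x5))  # '105'
print(split_vsa("105"))    # (1, 0, 5)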