
With Apple bundling Xsan into Lion and opening up more storage options than before, it seems like time to start exploring alternatives to Promise VTraks for Xsan storage. Active Storage makes a very nice RAID chassis and should be shipping metadata controller appliances soon. I’ve discussed both here before and they make for very nice kit. But in order to have an ‘ecosystem’ you really need a little biodiversity, and the Xsan environment needs to become more of an ecosystem and less of a vendor lock-in situation. So another option that I’d like to discuss is the Rorke Aurora Galaxy. These little firecrackers have a lot of potential upside:
  • 4 8Gbps Fibre Channel controllers (in the form of ATTO Celerity cards)
  • 36 drive bays
  • 3TB drive modules
  • Low power requirements
  • Linux, so you have root access
  • Managed via Webmin, so anyone who has used Webmin will feel right at home in the NumaRAID plug-in
  • Great tech support
  • More PCI slots, so upgradeable with more cards, etc.
36 drive bays at 3TB per bay means 108TB of raw storage running at 32Gbps per chassis. I recently had the chance to put a pair of these things through their paces. Using a combination of vMeter and the QLogic Enterprise Fabric Suite Manager, we added stream after stream, and with all of the clients running multicam edits for an aggregate throughput of well over 50Gbps, we still hadn’t found a point where we started to drop frames. We ran out of clients, streams and media, so we stopped testing there. Watching all the statistics on the RAIDs and clients, though, I don’t doubt that we could have saturated a good 60Gbps.

When the RAIDs show up they have 3 LUNs baked into them. Given the size of the RAID and how Xsan likes to have LUNs added, it seemed prudent to convert those 3 LUNs into 4. You manage the RAID using Webmin, which runs on a default port of 10000. The default IP address is 192.168.1.129, so use the address:
http://192.168.1.129:10000
When you see a login screen, the default username and password are admin/password. Click on NumaRAID in the upper left corner of the screen to see details of the chassis, with the RAIDs listed first and the LUNs second. Click on a RAID to delete the RAID and its associated LUNs. When you delete LUNs, you need to restart the RAID controller before clients with Apple LSI cards will see the change. This can be done by stopping the nr_target service:
service nr_target stop
Then start it back up:
service nr_target start
You then have a choice of how to create the RAIDs. You carve a RAID up to create LUNs, so you can create 1 or 2 RAID 6 sets and carve those into 4 LUNs. With 4 fibre channel ports, we initially associated 1 LUN per port. However, on testing failover, we realized that a 1:1 mapping like that leaves you with no fibre channel failover. While it’s unlikely that a fibre channel port will fail, there are often plenty of points of failure along the cables, which may run through fibre channel patch panels and other malfeasants en route to their switch. Therefore, we associated two ports with each LUN. This represented about a 6% boost in performance over not masking any LUNs to ports, although about 3% less than the 1:1 mapping scheme. If you do 4 LUNs you can safely use about 4GB of RAM per LUN, although I couldn’t detect any performance difference versus the factory default per-LUN cache of 1GB. You will also want to set the ports as targets in your switch, which may need RSCN suppression, as most switches tend to think an ATTO card is an initiator (and it often is).

When you save things in the Webmin GUI, you are writing into an XML file stored at /usr/libexec/webmin/NumaRAID/nrconfig.xml. Once one of these is in production, I would personally make a backup of this file before and after any changes. As an example, you can grep the file for certain items, such as lun:
cat nrconfig.xml | grep -i lun
The number of LUNs that you see here should match up with the output of lsscsi:
lsscsi -s
You can also use lsscsi to see the other hosts on the network and confirm that your zoning is open:
lsscsi -H
You can also change various settings, such as the raid_cache size, directly in the nrconfig.xml file. When you make a change in the XML file, make sure to back it up first, and when you bring the nr_target service back up, watch /var/log/messages, where the Aurora Galaxy saves its logs:
tail -f /var/log/messages
From your clients, look at /dev. You should see one device node per LUN, per port that the LUN is available on in each chassis. If you do not mask the LUNs, then you would do well to create a zone for each port of the RAID, much as you might do with Linux clients of your Xsan. If you use soft zoning and build groups, this actually doesn’t take much time at all. There isn’t any special foo for volume creation: once the clients can see the LUNs, treat the new volume as a bigger, less power-hungry version of your existing storage. Now, while Linux clients can often get away with non-port-based zoning, the Rorke will likely show up in Xsan Admin twice on at least one LUN (though not more than 2 LUNs per chassis) if it isn’t zoned just right, which can cause high PIO HiPriWr stats. Once it’s in production you’ll want to manage it. The IP and user/password can be changed pretty easily. Beyond that, there isn’t an SNMP MIB specifically for the Aurora; however, it is just Linux, so a standard snmpwalk against a Linux host from your monitoring solution should give you all you need to know regarding possible hardware failure.
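For example (and this assumes snmpd is actually enabled on the chassis and that the community string is still the stock public, neither of which I’d promise for the factory image), a quick walk of the standard MIBs looks like:
# basic system info, mostly to prove snmpd is answering
snmpwalk -v 2c -c public 192.168.1.129 system
# disks and storage surface in the host resources MIB
snmpwalk -v 2c -c public 192.168.1.129 HOST-RESOURCES-MIB::hrStorageTable
And since hand-editing nrconfig.xml is the step most likely to bite you, here’s the minimal edit cycle I’d follow based on the advice above (the .bak naming is just my own convention, not anything the appliance requires):
cd /usr/libexec/webmin/NumaRAID
# back up the config before touching it
cp nrconfig.xml nrconfig.xml.bak
# change raid_cache or LUN settings by hand
vi nrconfig.xml
# bounce the target service so the controller re-reads the config
service nr_target stop
service nr_target start
# watch for errors as the targets come back up
tail -f /var/log/messages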
And the fact that it’s Linux brings up an interesting point. This device is made from off-the-shelf hardware (good off-the-shelf hardware too, not some crappy Fry’s motherboard that was on sale ’cause they spilled a 1.75 liter of Crown and Coke on it while partying in the back). It’s a computer with a bunch of drives in it. Many may be concerned about putting a “software RAID” into production, but think about it this way: all RAID controllers are software. The question is just whether that software is baked into an EEPROM chip or firmware, or loaded on top of a generic kernel.

There are certainly a few things you should not do with an appliance-oriented Linux distro. Running a torrent off it, for example. But if you need to be told that, then you should have your RAIDs taken away… While you might be able to get away with it, don’t run Samba on these (if they’re going into an Xsan/StorNext/HyperFS environment they’re gonna’ have a virtualized file system anyway). Don’t install StorNext on them (it’s confusing: am I a target or an initiator? I’m so confused I think I’ll just fail repeatedly). Don’t install rootkits or even a lot of security tools. If antivirus, snort or host-based IDS tools are required, just unplug the network cable. This is an appliance. You can treat it as such and pretend that you only have the Webmin access (I just can’t help but start tinkering around under the hood…).

Finally, I don’t have long-term statistics on how these will hold up. I’ve never heard anyone complain terribly about them, but this model is still pretty new, and given the disk density I’m curious to see how things will go. Rorke will sell you a parts kit in the form of an unlicensed, diskless chassis. Given my experience with other vendors I’d have to recommend getting one, but they didn’t seem too insistent that it was a requirement.

July 19th, 2011

Posted In: Mac OS X Server, Xsan


There are a lot of environments that attach Windows client computers to an Xsan or StorNext filesystem. In the past I’ve looked at using different versions of StorNext to communicate with Xsan, but in this article we’re actually going to take a look at Quantum’s StorNext FX2 client software. Before getting started, you’ll want to have the StorNext media, have the serial number added to the metadata controllers, have the HBA (fibre channel card) installed, have the fibre patched into the HBA, have the IP addresses for the metadata controllers documented and have a copy of the .auth_secret file, obtainable from the metadata controllers once they’ve been properly licensed.

To get started, first install the HBA drivers. This will be different for each brand of card, but in most cases it will be a simple installer. Once the installer is run and the system rebooted (not all HBAs require reboots), you will see a screen similar to the following, indicating the Generic SCSI Array Device is installed, and the system will start to recognize the LUNs that comprise the volume. Make sure not to configure a new filesystem for the LUNs, especially if they are already in use. Once you can see the hardware infrastructure you are ready to install the software. If using a 64 bit version of Windows then contact StorNext for the installers; otherwise you should be able to use those downloaded from the website. Always start with the latest drivers rather than using those distributed on the media with the StorNext license.

Once you have the installer ready, click on it and you will see the StorNext Installation screen. Here, provided it is a new installation, the first item will say Install StorNext. As you can see from the following screen, you also have the options in the future to Upgrade, Reinstall, Remove and Configure StorNext from the StorNext Installation screen. At the StorNext Component selection screen, you will be able to select whether to install the Help Files (FAQs) and/or the StorNext FX2 client. Here, you can probably leave both enabled and simply click on the Next>> button.

You should now see your LUNs. Open the Windows shell environment and cd to the C:\Program Files\StorNext\bin directory. Run cvlabel -l and verify that all of the LUNs needed for the volumes you will be working with are present. If they are not, check your zoning and physical infrastructure. Still using the Windows shell environment, cd into the C:\Program Files\StorNext\config directory. Edit the fsnameservers file and enter the IP addresses for your metadata controllers in the order they appear in the fsnameservers list on your metadata controllers. You will also need to copy the .auth_secret file from the metadata controllers to the client computer. By default, this will be copied into the C:\Program Files\StorNext\config directory. Next, reboot.

Provided that we can see our LUNs, we should also be able to see our volumes through cvadmin. To verify, we can use the cvadmin command in much the same way we do in StorNext for Linux or Xsan:
cvadmin -e select
Provided all of the volumes appear and each has a valid controller (see those with an * below), we can then go into the StorNext Client Configuration tool to complete the client configuration. To do so, open Client Configuration from Start -> Program Files -> StorNext. Once open, you should see the volumes shown in cvadmin and they should be listed as available. Windows accesses volumes through what are known as drive mappings.
A drive map is an alphabetical representation of a location. These locations can be folders within a file system, network volumes or direct attached volumes. Once the configuration is complete, Windows will treat the drive letter being configured as a local volume. Select Tools -> Properties and you will then be able to map a drive to the appropriate drive letter. It is also possible to mount a volume into a given directory; however, this is comparatively rare. Repeat this process until all volumes are mapped and click on Apply. Now you can mount your new volumes. To do so, click on them back at the main screen and you will then be prompted to mount. Click Yes. That’s pretty much it. Have fun.
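For reference, the command-line half of that setup condenses to a few steps in the Windows shell; this is just a sketch of the flow described above, and how you get .auth_secret over from a metadata controller (file share, USB, scp) is up to you:
cd "C:\Program Files\StorNext\bin"
rem verify that every LUN backing the volume is visible
cvlabel -l
rem enter the metadata controllers in the same order as on the MDCs
notepad "..\config\fsnameservers"
rem drop .auth_secret into ..\config, reboot, then confirm that each
rem volume shows an active controller (marked with an *)
cvadmin -e select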

February 17th, 2011

Posted In: Windows Server, Xsan


In Xsan Admin you can easily label LUNs that are available on your Fibre Channel fabric. Using the cvlabel command, you can also easily label a LUN that isn’t on a Fibre Channel fabric. Labeling a LUN writes data to the LUN, thus allowing Xsan to somewhat mark its territory (insert vivid imagery of an Xsan shaped like a dog taking a whiz on a poor thumb drive). If you then look at that LUN from a Mac OS X system without Xsan installed, the computer will have greyed out options in Disk Utility and will not be able to treat the LUN as a “disk.” You also can’t use diskutil to reformat it. As much as we love our RAIDs, at times they will be ready to be put out to pasture (more vivid imagery, this time of a RAID chassis grazing gleefully in the sunshine). This is where cvlabel comes in again (retiring the LUN, not the imagery). The cvlabel command can be used to unlabel a LUN and move it back to what amounts to free space. If you run cvlabel with a -l option you will see a list of LUNs:
cvlabel -l
If you want to unlabel a LUN, first remove it from your SAN. That typically involves either backing up the volume and restoring it following a rebuild, or using snfsdefrag to migrate all data off the LUN and then removing it from the volume, typically with the volume stopped. Once the LUN has been removed, find it in the list provided by that -l option and then run the following command (replacing MyLUN with the name of your LUN):
cvlabel -v -u MyLUN
Make sure that you’re using the name of the LUN when you run this, not the name of the volume or the name of the wrong LUN. Also make sure that the LUN is no longer a member of an Xsan (or StorNext SAN), etc. Once you have unlabeled the LUN, fire up Disk Utility and you should be ready to format the new drive. This often ends up getting done over ssh, where there is no Disk Utility, so to do it with diskutil instead, run:
diskutil list
Then, in the output, find the unformatted space, which will likely show up as something akin to disk1 or disk2. You won’t usually find that an empty LUN has slices like disk0s1, disk0s2, etc. Once you locate it in the list, you can then run (assuming the disk from the output of the previous command was disk1 and the name you want the volume to have is MyVolume):
diskutil eraseDisk HFS+ MyVolume disk1
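Strung together, the whole retire-and-reformat cycle looks like this (MyLUN, MyVolume and disk1 are the placeholder names from above):
# confirm the LUN's label name before touching anything
cvlabel -l
# unlabel it, returning it to what amounts to free space
cvlabel -v -u MyLUN
# find the freshly unlabeled disk, then erase it as a plain HFS+ volume
diskutil list
diskutil eraseDisk HFS+ MyVolume disk1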
Note: You must install Xsan to get cvlabel.

October 14th, 2010

Posted In: Xsan


I posted another article on Xsanity. This one started out as an article on how to label LUNs from the command line, but ended up something completely different. It still explains how to do it from the command line, but since I wrote it while flying, it ended up more tailored to doing it on a USB jump drive, since they don’t allow me to take an Xserve, a QLogic 9200 and a Promise RAID to my seat on the plane with me. Which is really a shame ’cause I could get SOOOO much done that way. Anyway, the article can be found here.

January 20th, 2009

Posted In: Xsan


Using iSCSI targets with Xsan… Don’t do this one at home, kids. It’s just silly and not going to be supported by anyone… But if you are like me, then you can do it if you must. So to get started with iSCSI, check out this article. When you have a LUN that is connected, don’t assign it a file system yet (or if you already have, partition it back to free space). Now install Xsan, but don’t create a volume yet. Once you’re done, you can go ahead and fire up your trusty Terminal app from /Applications/Utilities. Type in:
cvlabel -l
This should show you all your available LUNs. Next, type the following, which will dump your cvlabel information out to a file called labels:
cvlabel -c >labels
Now that you have your file, open it in your favorite text editor and change the very first text field to read what you want your LUN to be called within Xsan Admin. Once you’re satisfied, save the file. Now use the following command, which will read the file you just edited and then label the LUN for use with acfs using the name you just provided, making it appear in Xsan Admin:
cvlabel labels
Now you can open up Xsan Admin. Here, click on LUNs in your SAN Assets. Make sure that the LUN you just labelled shows up, as seen below.
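As an aside, if you’re labeling more than one of these, the rename step can be scripted rather than done in an editor; a BSD sed one-liner along these lines would swap out the first field of the first line in the labels file (iSCSI_LUN1 is a made-up name, and adjust the line address if your LUN isn’t the first entry):
# replace the first field of line 1 with the new LUN name, in place
sed -i '' '1s/^[^ ]*/iSCSI_LUN1/' labels
# then apply it as before
cvlabel labels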
iSCSI LUN in Xsan Admin

Next, click on Volumes in your SAN Assets and then click on the plus sign (+) to create a new Volume. At the Volume Name and Type screen, enter the name you would like your volume to have, customize the Volume Type and advanced fields for any performance tuning you would like to do, and then click on the Continue button to proceed.
Xsan Volume Name and Type Screen

At the Configure Volume Affinities tab, drag your LUNs into the appropriate Storage Pools.  For this example we only have one storage pool so we won’t be needing the additional items listed here.
Xsan Configure Volume Affinities Screen

Next click Continue, assign the metadata controllers to your volume and then click Continue again. Your volume will now mount on the desktop and be listed as seen below.
Xsan Volumes with iSCSI Storage

Once again, this article shows you how to do something that you should probably not put into production. Having said that, iSCSI can be great for some uses, but when used in conjunction with Xsan and the Apple clustered file system (acfs), you’re likely best off sticking with Fibre Channel…

December 14th, 2008

Posted In: Mac OS X, Mac OS X Server, Xsan
