When new versions of operating systems come out, articles sometimes need to be updated. It's always nice when someone else does the hard part. Recently, Ben Levy, an Apple consultant from Los Angeles, did some work on an article I wrote a while back. To quote Ben, the new procedure is:
1. Boot from something other than your intended RAIDed boot drive, open Terminal and use diskutil list to identify the relevant disks and partitions.
2. diskutil appleRAID enable mirror disk0s2 (assuming you've correctly identified the slice; yours may be different). This command turns your primary disk into a RAID mirror without a second mirror member.
3. Reboot back to your boot drive.
4. diskutil checkRAID and diskutil list, just so you know where and what everything is.
5. diskutil appleRAID add member disk2 8014A446-E10D-4BC9-A199-67362E54FB7C (assuming disk2 is in fact the drive you are adding). The UUID is the UUID of the RAID as discovered with checkRAID.
6. diskutil checkRAID should now show the RAID rebuilding. This could take hours; you can check on the progress again using the same command.
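Taken together, the steps above look like this in Terminal. The disk identifiers and the RAID UUID are the examples from Ben's procedure; yours will differ, so check diskutil's output before running anything.

```shell
# While booted from another volume: identify the disks and slices
diskutil list

# Turn the primary disk into a degraded mirror (a mirror with one member)
diskutil appleRAID enable mirror disk0s2

# After rebooting back to the boot drive: verify, and note the RAID's UUID
diskutil checkRAID
diskutil list

# Add the second disk as a member, using the UUID reported by checkRAID
diskutil appleRAID add member disk2 8014A446-E10D-4BC9-A199-67362E54FB7C

# Watch the rebuild (this can take hours)
diskutil checkRAID
```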
Thanks to Ben for the hard work. Now, I think it’s about time I wrapped this into a GUI app…
IcyDock makes a 4-bay chassis for SATA drives that allows you to build your own RAID out of large, inexpensive drives. The resultant JBOD can then be formatted as RAID0 or RAID1 (software RAID) and presented to backup applications (e.g., Retrospect) as offline storage. Amazon sells an IcyDock populated with 1.5TB drives for a total of 6TB, which is how I'm now snapshotting the VMs in my lab. I'm also using it as the backup destination for my home Kerio server. Works nicely so far.
You can also buy the IcyDock with no drives and likely populate it with 2TB drives, although I haven't tested this yet (aka requires confirmation). The IcyDock connects to Mac, Windows and Linux machines over eSATA, and the drive modules are hot swappable. If you don't already have an eSATA card for your Mac then you can get one of those at Amazon as well. If you would rather roll with the 2TB drives then you can get those at Amazon too!
I’m often asked what I think of upgrading the firmware on servers and storage. My answer: if it’s a production box and it isn’t broken, then don’t fix it… What if you’re upgrading the firmware on a RAID or RAID card and the device becomes unresponsive? There’s usually a reason to upgrade, but if you aren’t experiencing problems, why risk a potential outage you don’t need to?
To list the RAIDs on your current system use the listRAID option with diskutil.
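For example, these commands are a quick way to check on software RAID sets (the exact output format varies by OS X version):

```shell
# List all AppleRAID sets known to the system, with their members and UUIDs
diskutil listRAID

# Report the status of each set (e.g., whether a mirror is rebuilding)
diskutil checkRAID
```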
Unmount any volumes hosted by the Xserve RAID (especially Xsan volumes). Press the reset button on the back of the controller module for about ten seconds; you should see the controller restart, after which it is reset. Sometimes you need to reset both controllers. You don’t have to reset the whole controller just to reset the password: press the reset button for about 1-2 seconds and then try to authenticate through RAID Admin to reset the password.
Once reset, the default password to view the Xserve RAID is public, and the default password to edit settings is private. By default the IP address is assigned via DHCP. If you plug directly into the RAID then, provided you’re both on DHCP, it should show up in your list in RAID Admin, which uses Rendezvous. If you’re not on the same subnet, though, it may not open properly, even if RAID Admin shows you the RAID.
Originally posted at http://www.318.com/TechJournal
The acronym RAID can be misleading, as it has had multiple meanings over the years. RAID originally stood for a redundant array of inexpensive disks. It is now also expanded as a redundant array of independent disks, since not all RAID disks are inexpensive. RAID refers to a storage mechanism that uses multiple hard drives to share or replicate data among the drives. In some cases this means data written to a single logical drive is stored on multiple drives, providing redundancy; RAID can also be used to maximize throughput by aggregating the speeds of the member drives. A key advantage of RAID is the ability to combine drives into an array with more capacity, reliability, speed, or a combination of these, than was affordably available in a single device.
Through the remainder of this article we will be looking at different types of RAID and what each can do. But before we look into RAIDs, let’s look at a JBOD. In a JBOD (Just a Bunch of Disks), often called a concatenated RAID in OS X, you can use multiple drives to merge data into one volume. You can take 2 drives of 2 Terabytes each and 4 drives of 1 Terabyte and merge them into one volume of 8 Terabytes. In this scenario you end up with no fault tolerance, but you are able to make use of low cost drives, such as LaCies, to create a single volume. However, if any of the drives in a JBOD fails, the full volume will fail. This leads us to use this type of setup primarily in volatile situations, such as a disk-to-disk backup solution.
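In OS X, a concatenated set like this can be created with diskutil's software RAID. A hypothetical sketch (the disk identifiers and the set name are assumptions; run diskutil list first to find yours):

```shell
# Create a concatenated (JBOD) set named Backups from six disks:
# two 2TB members and four 1TB members, yielding one 8TB volume
diskutil appleRAID create concat Backups JHFS+ disk2 disk3 disk4 disk5 disk6 disk7
```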
A RAID0 is similar to a JBOD; however, RAID0 requires all drives in the array to be identical in size. Provided the drives are the same size, RAID0 offers the fastest speeds available in a RAID. These arrays are often used for high definition video editing and for volumes housing databases that require a lot of speed. RAID0 does not offer any redundancy of data: if one member in the array fails, just as with a JBOD, the volume fails as well. However, RAID0 is a fast and inexpensive way to get large amounts of fast storage.
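Creating a stripe with OS X's software RAID looks much like creating a concatenation, only with the stripe keyword (disk identifiers and set name are hypothetical):

```shell
# Create a striped (RAID0) set named Scratch from two identically sized disks
diskutil appleRAID create stripe Scratch JHFS+ disk2 disk3
```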
RAID1 is often known as a mirror. In RAID1, all data written to one disk is duplicated onto a second disk of identical size. In a mirror, if one member of the set fails, the volume continues to be accessible for read/write operations. RAID1 offers some of the best protection against data loss of any RAID scenario, but at the highest cost: for every byte of data stored on a RAID1 volume there must be an equal byte used for redundancy. As high end disks have become more and more expensive, the development of more complex RAID strategies helps maximize our ability to make use of a variety of solutions.
In RAID3, one member of the array acts as a dedicated parity drive. This drive stores parity information about the other drives; if one of them crashes, the failed drive's contents can be rebuilt from that parity. Computing the parity also causes a slight loss of speed compared to a large RAID0 volume. RAID0 in the truest sense of the word (no parity) would net you 100% of the usable space.
In RAID5, parity information is stored striped across all of the drives, not just one; in RAID3, parity information is stored on a dedicated parity drive. In either case the members should be the same size, so you shouldn’t plan on simply making the smallest drive the hot spare. In fact, you can typically only build a RAID0 out of drives of different sizes (which isn’t much of a RAID but more of a JBOD), unless you manually slim all of the drives down to the smallest drive’s size. Thus, a RAID5 + Hot Spare array of 5 40GB drives ends up as a RAID5 volume of 4 40GB drives. With a 4-drive array, pulling one drive out as a hot spare leaves a 3-drive, 80GB volume, a 33% loss of the space in the array, since one drive’s worth of capacity goes to parity. If all 4 drives were in the RAID you would get a RAID5 volume of 120GB, netting a 25% loss. With all 5 40GB drives in the RAID you would end up with a 160GB volume, only a 20% loss. And so on. The parity information is stored on all of the drives, so any single drive can go down and the contents of the RAID will be rebuilt based on the parity stored on the remaining drives.
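The arithmetic above can be sketched in a few lines of shell, assuming 40GB members: RAID5 dedicates one drive's worth of capacity to parity, so the overhead is 1/n of the space in the array.

```shell
# Usable capacity and parity overhead for an n-drive RAID5 of 40GB drives
for n in 3 4 5; do
  usable=$(( (n - 1) * 40 ))   # one drive's capacity is consumed by parity
  loss=$(( 100 / n ))          # percent of array capacity lost to parity
  echo "$n drives: ${usable}GB usable, ${loss}% loss"
done
```

Running this prints 80GB/33%, 120GB/25% and 160GB/20% for 3, 4 and 5 drives, matching the figures above.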
RAID6 offers even more redundancy by writing two independent sets of parity information across the members of the array. This allows two drives in the RAID to crash without losing data. RAID6 comes with more cost than most other RAIDs, both in RAID hardware and in hard drives, and so is used much more rarely.