With the advent of the latest Promise arrays, I’m starting to see more and more environments stacking a boatload of shelves of storage on top of one another (e.g. for CrashPlan). As such, it occurs to me that I haven’t really covered the initial configuration of a Promise here. The way I like to set them up is with configuration scripts, and I’ve been using different iterations of the same scripts for a long time. The script in this article automatically formats one E-Class and seven J-Class shelves of Promise storage and sets up LUNs named EData1, EData2, J1Data1, J1Data2, J2Data1, J2Data2 and so on. These LUNs and their controller configuration are meant for Direct Attached Storage (although you can swap the readcache read policy out for readahead if you prefer).
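For reference, here’s the drive-to-LUN layout the script builds, assuming 16-bay chassis with the drives numbered sequentially down the stack:

#Shelf 1 (E-Class): drives 1-8 -> EData1, drives 9-16 -> EData2
#Shelf 2 (J-Class): drives 17-24 -> J1Data1, drives 25-32 -> J1Data2
#Shelf 3 (J-Class): drives 33-40 -> J2Data1, drives 41-48 -> J2Data2
#Shelf 4 (J-Class): drives 49-56 -> J3Data1, drives 57-64 -> J3Data2
#Shelf 5 (J-Class): drives 65-72 -> J4Data1, drives 73-80 -> J4Data2
#Shelf 6 (J-Class): drives 81-88 -> J5Data1, drives 89-96 -> J5Data2
#Shelf 7 (J-Class): drives 97-104 -> J6Data1, drives 105-112 -> J6Data2
#Shelf 8 (J-Class): drives 113-120 -> J7Data1, drives 121-128 -> J7Data2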
Provided that the hardware is racked and the stacking cables are connected properly, everything else can be done from a browser. If you have multiple VTrak J-Class expansion chassis, connect their SAS cables from the circle SAS port on the first VTrak J-Class to the diamond SAS port on the second VTrak J-Class and so on down the line until they’re all connected (see the sketch below). I usually like to restart the chassis once they’re all interconnected (or wait until they’re interconnected before powering them up in the first place). The boot sequence can take a while when you have a lot of shelves stacked atop one another, so be patient and don’t do any configuration until you can see all of the shelves in WebPAM…
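With the seven J-Class shelves this script expects, the expansion chain looks like this (the connection from the E-Class RAID head down to the first J-Class isn’t covered here, so check the cabling guide for your specific models):

#J1 circle SAS port -> J2 diamond SAS port
#J2 circle SAS port -> J3 diamond SAS port
#J3 circle SAS port -> J4 diamond SAS port
#J4 circle SAS port -> J5 diamond SAS port
#J5 circle SAS port -> J6 diamond SAS port
#J6 circle SAS port -> J7 diamond SAS port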
To get to WebPAM, open Safari on a machine on the same network as the arrays and use the Bookmarks menu to Show All Bookmarks; the Promise array should have picked up an IP address from DHCP and be announcing itself there via Bonjour. Click on it and log in to WebPAM (the default username is administrator and the default password is password). Then assign a static IP address to each of the three interfaces, change the admin password and upload the following script (copy it to a new file on the desktop of the machine you’ll be uploading it from, preferably using the command line so there are no special characters).
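For example, on a Mac you could copy the script text to the clipboard and then, from Terminal, write it out and sanity-check it (the file name below is just an example):

#write the copied script to a plain-text file on the Desktop
pbpaste > ~/Desktop/promise-das.script
#this should print nothing; any output means smart quotes or other non-ASCII characters snuck in
LC_ALL=C grep -n '[^ -~]' ~/Desktop/promise-das.script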
To upload the script, click on Administrative Tools and then click on the Import tab. Set the Type drop-down menu to Configuration Script and use the Browse button to select the file you saved the script into. Then click on Submit and, provided you don’t get any errors, you should see all of the lights go blue and the LUNs start formatting.
#written by Charles Edge for Tedd
#1 E-Class and 7 J-Class shelves, 2 LUNs per shelf, 8 drives in RAID 5 per LUN
#Direct Attached Storage
ctrl -a mod -i 1 -s "lunaffinity=enable, adaptivewbcache=enable, hostcacheflushing=disable, forcedreadahead=disable"
ctrl -a mod -i 2 -s "lunaffinity=enable, adaptivewbcache=enable, hostcacheflushing=disable, forcedreadahead=disable"
array -a add -p 1,2,3,4,5,6,7,8 -s "alias=EData1" -c 1 -l "alias=EData1, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=1"
array -a add -p 9,10,11,12,13,14,15,16 -s "alias=EData2" -c 1 -l "alias=EData2, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=2"
array -a add -p 17,18,19,20,21,22,23,24 -s "alias=J1Data1" -c 1 -l "alias=J1Data1, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=1"
array -a add -p 25,26,27,28,29,30,31,32 -s "alias=J1Data2" -c 1 -l "alias=J1Data2, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=2"
array -a add -p 33,34,35,36,37,38,39,40 -s "alias=J2Data1" -c 1 -l "alias=J2Data1, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=1"
array -a add -p 41,42,43,44,45,46,47,48 -s "alias=J2Data2" -c 1 -l "alias=J2Data2, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=2"
array -a add -p 49,50,51,52,53,54,55,56 -s "alias=J3Data1" -c 1 -l "alias=J3Data1, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=1"
array -a add -p 57,58,59,60,61,62,63,64 -s "alias=J3Data2" -c 1 -l "alias=J3Data2, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=2"
array -a add -p 65,66,67,68,69,70,71,72 -s "alias=J4Data1" -c 1 -l "alias=J4Data1, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=1"
array -a add -p 73,74,75,76,77,78,79,80 -s "alias=J4Data2" -c 1 -l "alias=J4Data2, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=2"
array -a add -p 81,82,83,84,85,86,87,88 -s "alias=J5Data1" -c 1 -l "alias=J5Data1, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=1"
array -a add -p 89,90,91,92,93,94,95,96 -s "alias=J5Data2" -c 1 -l "alias=J5Data2, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=2"
array -a add -p 97,98,99,100,101,102,103,104 -s "alias=J6Data1" -c 1 -l "alias=J6Data1, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=1"
array -a add -p 105,106,107,108,109,110,111,112 -s "alias=J6Data2" -c 1 -l "alias=J6Data2, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=2"
array -a add -p 113,114,115,116,117,118,119,120 -s "alias=J7Data1" -c 1 -l "alias=J7Data1, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=1"
array -a add -p 121,122,123,124,125,126,127,128 -s "alias=J7Data2" -c 1 -l "alias=J7Data2, raid=5, readpolicy=readcache, writepolicy=writeback, preferredctrlid=2"
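#quick-initialize each of the 16 logical drives created above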
init -a start -l 0 -q 512
init -a start -l 1 -q 512
init -a start -l 2 -q 512
init -a start -l 3 -q 512
init -a start -l 4 -q 512
init -a start -l 5 -q 512
init -a start -l 6 -q 512
init -a start -l 7 -q 512
init -a start -l 8 -q 512
init -a start -l 9 -q 512
init -a start -l 10 -q 512
init -a start -l 11 -q 512
init -a start -l 12 -q 512
init -a start -l 13 -q 512
init -a start -l 14 -q 512
init -a start -l 15 -q 512
It would also be a really, really good idea to make sure that the UPS is capable of handling this much load; the arrays will likely spike power consumption for a good 10 hours while they format. A quick tour of the script: the lines that start with ctrl configure the controllers, the lines that start with array configure the arrays and their logical drives, and the lines that start with init perform a quick initialization on the LUNs (yes, 10 hours is quick). Monitor the formatting, and if there are any problems, delete all of the LUNs, comment out the ctrl lines, fix whatever the problem is and run the script again.
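For the re-run, the only change is commenting out the two ctrl lines at the top (assuming the importer treats lines starting with # as comments, the same as the header lines in the script); everything else stays as-is:

#ctrl -a mod -i 1 -s "lunaffinity=enable, adaptivewbcache=enable, hostcacheflushing=disable, forcedreadahead=disable"
#ctrl -a mod -i 2 -s "lunaffinity=enable, adaptivewbcache=enable, hostcacheflushing=disable, forcedreadahead=disable"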
Finally, forcedreadahead is a setting where a lot of people eke out extra performance. For DAS environments with a lot of shelves of storage stacked through the SAS interconnects, it’s hard to say whether you’ll realize much of a gain. You can enable it later and test, but it’s one of the only settings I’d really tweak in a direct attached environment. Hope this helps someone!
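If you do want to test it later, the change is just the same ctrl command used at the top of the script with that one setting flipped on both controllers (and forcedreadahead=disable turns it back off):

ctrl -a mod -i 1 -s "forcedreadahead=enable"
ctrl -a mod -i 2 -s "forcedreadahead=enable"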