Tag Archives: bash

Mac OS X Mac OS X Server Mac Security Mass Deployment Ubuntu Unix VMware Xsan

5 Ways To Manage Background Jobs In A Shell Environment

When running commands that are going to take a while, I frequently start them with the nohup command, disown them from the current session or queue them for later execution. The reason is that if I'm running them from a Terminal or SSH session and the session is broken, I want to make sure they complete. To schedule a job for later execution, use at. For example, if I want to run a simple script a couple of minutes from now, I can schedule it by echoing the command and piping it to at:

echo "goldengirlsfix.sh" | at now + 2 minutes
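
If you want to see what's waiting in the queue or pull a job back out of it, at has companion commands for that (the job number is whatever atq reports for yours):

atq     # list jobs waiting in the at queue
atrm 3  # remove job number 3 from the queue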

Note: if you schedule something for 1 minute out, the unit needs to be singular (minute, not minutes). But you can also disown a job. To do so, end the command with an & symbol, which runs it in the background and displays its job number. You can then disown it by running disown with the -h option. For example:

du -d 0 &
disown -h

If you choose not to disown the job, you can check running jobs using the jobs command at any time:

jobs

Nohup runs a command or script in the background even after a shell has been stopped:

nohup cvfsck -nv goldengirls &

The above runs the command between nohup and the & symbol in the background. By default, the output goes to a nohup.out file in the directory you ran the command from (or in your home directory if that location isn't writable). So if nohup.out landed in /Users/krypted, you could tail the output using the following command:

tail -f /Users/krypted/nohup.out

You can also use screen and then reconnect to that screen. For example, use screen with a -t to create a new screen:

screen -t sanconfigchange

Then run a command:

xsanctl sanConfigChanged

Then later, reconnect to your screen:

screen -x

And you can use Control-a followed by n (or p) to cycle through your running background processes this way, provided each is in its own screen window.

Finally, you can use the bg command (I first ran into this on AIX). I used to really like this as I could move an existing job into the background if I'd already invoked it from a screen/session. For example, say a command is running in the foreground and you want to put it into the background: suspend it with Control-Z, then run bg with its job number (e.g., bg %1) to throw it into the background, allowing you to close a tty. Then if you'd like to look at it later, you can always pop it back into the foreground using, you guessed it, fg. I discovered this on AIX, but it's standard job control in most modern shells, and it's a great process management tool.
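
To put that flow together, here's a minimal sketch using a long-running du as the stand-in job (job number 1 is assumed; check jobs for the real one):

du -d 0 /          # starts in the foreground
# press Control-Z to suspend it; the shell reports it as job [1]
jobs               # confirm the job number
bg %1              # resume job 1 in the background
fg %1              # later, bring it back to the foreground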

cloud Network Infrastructure SQL Ubuntu Unix VMware Windows Server

Scripting Azure On A Mac

Microsoft Azure is Microsoft's cloud services platform. Azure can host virtual machines and act as a location to store files. However, Azure can do much more as well: provide an Active Directory instance, provide SQL database access, work with hosted Visual Studio, host web sites or provide BizTalk services. All of these can be managed at https://manage.windowsazure.com.


You can also manage Windows Azure from the command line on Linux, Windows or Mac. To download command line tools, visit http://www.windowsazure.com/en-us/downloads/#cmd-line-tools. Once downloaded, run the package installer.

When the package is finished installing, visit /usr/local/bin, where you'll find the azure binary. Once installed, you'll need to configure your account from the windowsazure.com site to work with your computer. To do so, log into the windowsazure.com portal.


Once logged in, open Terminal and then use the azure command along with the account option and the download verb:

azure account download

This downloads the .publishsettings file for the account you're logged in as in your browser. Once downloaded, run azure with the account option and the import verb, dragging in the path to your .publishsettings file from https://manage.windowsazure.com/publishsettings/index?client=xplat:

azure account import /Users/krypted/Downloads/WindowsAzure-credentials.publishsettings

The import then completes and your account credentials are available to the azure command. Once imported, run azure with the account option and then storage list:

azure account storage list

You might not have any storage configured yet, but at this point you should see the following to indicate that the account is working:

info: No storage accounts defined
info: account storage list command OK

You can also run the azure command by itself to see some neat ascii-art (although the azure logo doesn’t really come through in this spiffy cut and paste job):

info: _ _____ _ ___ ___________________
info:        /_\  |__ / | | | _ \ __|
info: _ ___ / _ \__/ /| |_| |   / _|___ _ _
info: (___ /_/ \_\/___|\___/|_|_\___| _____)
info: (_______ _ _) _ ______ _)_ _
info: (______________ _ ) (___ _ _)
info:
info: Windows Azure: Microsoft's Cloud Platform
info:
info: Tool version 0.7.4
help:
help: Display help for a given command
help: help [options] [command]
help:
help: Open the portal in a browser
help: portal [options]
help:
help: Commands:
help: account to manage your account information and publish settings
help: config Commands to manage your local settings
help: hdinsight Commands to manage your HDInsight accounts
help: mobile Commands to manage your Mobile Services
help: network Commands to manage your Networks
help: sb Commands to manage your Service Bus configuration
help: service Commands to manage your Cloud Services
help: site Commands to manage your Web Sites
help: sql Commands to manage your SQL Server accounts
help: storage Commands to manage your Storage objects
help: vm Commands to manage your Virtual Machines
help:
help: Options:
help: -h, --help output usage information
help: -v, --version output the application version

Provided the account is working, you can then use the account, config, hdinsight, mobile, network, sb, service, site, sql, storage or vm options. Each of these can be invoked along with a -h option to show a help page. For example, to see a help page for service:

azure service -h

You can spin up resources including sites, storage containers and even virtual machines (although you might need to create templates for VMs first). As an example, let's create a new site with Git deployment configured via the --git option:

azure site create --git

Overall, there are a lot of options available in the azure command line interface. The web interface is very simple, and the options in the command line interface mirror the options in the web interface. Running, and therefore scripting around, these commands is straightforward. I wrote up some Amazon stuff previously at http://krypted.com/commands/amazon-s3cmd-commands, but the azure controls are really full featured, and I'm becoming a huge fan of the service itself the more I use it (which likely means I'll post more articles on it soon).
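
As a trivial example of that kind of scripting, here's a sketch that imports a .publishsettings file, sanity-checks the account and then stands up a site with Git deployment (the file path and the site name mysitename are placeholders for this example):

#!/bin/bash
# import credentials, confirm the account responds, then create a site
azure account import ~/Downloads/WindowsAzure-credentials.publishsettings
azure account storage list
azure site create mysitename --git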

Mac OS X

Units

Go figure, there’s a command that can convert some units to other units. The units command is able to take a number of one type of units and then convert them to another. For example, to convert a mile to feet:

units "1 mile" feet

Or to convert 2 hours to seconds:

units "2 hours" seconds

For a full listing of the units supported, check out /usr/share/misc/units.lib.
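
You can also run units with no arguments and it will prompt interactively, first for the quantity you have and then for the units you want, printing the conversion factor between them:

units
You have: 2 hours
You want: minutes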

Mac OS X Mac OS X Server Mac Security Ubuntu Unix

Leveraging The Useful Yet Revisionist Bash History

No, this article is not about 1984. Nor do I believe there is anything but a revisionist history. Instead, this article is about the history command in OS X (and *nix). The history command is a funny beast. Viewing the manual page for history in OS X nets you a whole lotta’ nothin’ because it’s just going to show you the standard BSD General Commands Manual. But there’s a lot more there than most people use. Let’s take the simplest invocation of the history command. Simply run the command with no options and you’ll get a list of your previously run bash commands:

history

This would output something that looks like the following:

1  pwd
2 ls
3 cd ~/Desktop
4 cat asciipr0n

Now, you can clear all of this out in one of a few different ways. The first is to delete the .bash_history (or the history file of whatever shell you like). This would leave you with an interesting line in the resultant history:

1 rm ~/.bash_history

Probably not what you were after. Another option would be to nuke the whole history list (as I did on a host accidentally with a misconstrued variable expansion while trying to write a history into a script):

history -c

A less nuke and pave method here, would be to selectively rewrite history, by choosing a line that you’d like to remove using the -d option:

history -d 4

There are other options for history as well, mostly dealing with reading and writing the history file, plus substitutions, but I am clearly not the person to be writing about these given that I just took ‘em the wrong direction. They are -anrwps, for future reference.

Finally, since you likely want a clean screen, run clear:

clear

Now that we’re finished discussing altering your history, let’s look at using it to make your life faster. One of my most commonly tools at the command line is to use !$. !$ in any command expands to be the last position of your last run command. Take as an example you want to check the permissions of a file on the desktop:

ls -al ~/Desktop/asciipr0n

Now let’s say you want to change the permissions of that object, just use !$ since the last command had it as that only positional parameter and viola:

chmod 700 !$

Where does this come from? Well, any time you use a ! in a command, you are doing a history expansion: the ! introduces an event designator, which is the part of your history you’re calling up, and a word designator picks out which word of that event you want. The $ designates the last position of the line (not to be confused with first position, which is a move my daughter did at her last dance recital). !# is another of these, which calls up the whole line typed so far. For example, let’s say you have a file called cat. Well, if you run cat and then use !# (provided you’re in the working directory of said file), you’d show its contents on the screen:

cat !#

Now, view your history after running a couple of these and you’ll notice that the event designators aren’t displayed in your history. Instead, they were expanded at runtime and are therefore displayed as the expanded expression. Let’s do something a tad more complicated. Let’s echo out the contents of your line 4 command from the beginning of this article:

echo `!4`

Now, if your line 4 were the same as my line 4 you’d certainly be disappointed. You see, you lost the formatting, so it’s probably not gonna’ look that pretty. If you were on line 11 and you wanted to do that same thing, you could just run !-7 and you’d go 7 lines back:

echo `!-7`

But the output would still be all jacked. Now, let’s say you ran a command and realized that jeez you forgot to sudo first. Well, !! is here for ya’. Simply run sudo !! and it will expand your last command right after the sudo:

sudo !!

The ! designator also allows you to grab the most recent command that starts with a given set of letters. For example, let’s say I wanted to pull up my earlier echo command, and show just the second word from it:

cat !ech:2

That’s actually not gonna’ look pretty either. But that’s aside from the point. There are other designators and even modifiers to the designators as well, which allow for substitution. But again, I’m gonna’ have to go back and review my skills with those as I wouldn’t want to have you accidentally nuking your history because you expanded -c into some expression and didn’t realize that some of these will actually leave the history command as your last run command… :-/

Mac OS X Mac OS X Server Mac Security Mass Deployment

A Well Caffeinated Command Line

One of the big things in OS X Mountain Lion is how the system handles sleep and sleep events. For example, with Power Nap, Push Notifications still work when the lid is shut, provided that the system is connected to a power source. This ties into Notification Center, which is how the system displays those Push Notifications to users. Sure, there’s tons of fun stuff for Accessibility, Calendar, Contacts, Preview, Messages, Gatekeeper, etc. But a substantial underpinning that changed is how sleep is managed.

And the handling of sleep extends to the command line. This manifests itself in a very easy to use command line utility called caffeinate. Ironically, caffeinate is similar to the sleep command, except it will keep the Mac awake in the event that Mountain Lion wants to take a nap (not that it should be used as a replacement for sleep, btw).

To get an idea of what it does, run the caffeinate command followed by the -t option and a number of seconds, let’s say 2:

caffeinate -t 2

The system can’t go to sleep automatically now, for two seconds. The command will sit idle for those two seconds and then return you to a prompt. Now, extend that to about 10000:

caffeinate -t 10000

While the command runs, manually put the system to sleep. Note that the system will go to sleep manually but not automatically. Now, there are different ways that a Mac can go to sleep. Use the -d option to prevent the display from sleeping or -i to prevent the system from going into an idle sleep. The -s option is similar to -i but only honored while on AC power, and the -u option asserts that the user is active.
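
caffeinate can also wrap another command and hold its assertion only for as long as that command runs, which is handy in scripts; for example, to keep the system from idle sleeping for the duration of a (hypothetical) backup script:

caffeinate -i /scripts/backup.sh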

Overall, a fun little command. It’s just another little tool in an ever-growing arsenal of options.

cloud Mass Deployment Ubuntu Unix

Scripting in Google ChromeOS

I recently got my hands on one of those Google ChromeBooks (Cr-48). Interesting to have an operating system that is just a web browser. But, as anyone likely reading this article already knows, the graphical interface is the web browser and the operating system is still Linux. But what version? Well, let’s go on a journey together.

First, you need ChromeOS. If you’ve got a ChromeBook this is a pretty easy thing to get. If not, check http://getchrome.eu/download.php for a USB or optical download that can be run live (or even in a virtual machine). Or, if you know that you’re going to be using a virtual machine, consider a pre-built system from hexxeh at http://chromeos.hexxeh.net/vanilla.php. I have found the VMware builds to be a bit persnickety about the wireless on a Mac, whereas the VirtualBox builds ran perfectly. I split my time between the two anyway, so I’ve just (for now) been rocking VirtualBox for ChromeOS. When you load it for the first time it asks for a Google account. Provide that, select your network adapter, choose from one of the semi-lame account images (for the record, I like the mad scientist one) and you’re off to the races.

Next, we need a shell. When you first log in, you see a web page that shows you all of the Chromium apps you have installed. By default, you’ll see File manager and Web Store. If you’ve used the OS X App Store then the Chrome Web Store is going to look pretty darn familiar. My favorite for now is Chrome Sniffer. But all of these kinda’ get away from where we’re trying to go: get a scripting environment for Chrome OS.

ChromeOS comes with two types of shell environments. The first is crosh. To bring up a crosh environment, use Control-Alt-t. This keystroke invokes the crosh shell. Here, type help to see a list of the commands available. Notice that cd, chmod, etc. don’t work. Instead, there are a bunch of commands that a basic user environment might need, primarily for troubleshooting network connections. “But this is Linux,” you ask? Yup.

In the help output you’ll notice shell. Type shell and then hit enter. The prompt will change from crosh> to chronos@localhost. Now you can cd and perform other basic commands to your heart’s delight. But you’re probably going to need to elevate privileges for the remainder of this exercise, so let’s type sudo bash and just get there for now. If you’re using a ChromeBook, the root password might be root; if you’re using a downloaded VM from hexxeh, it might be facepunch (great password, btw).

Provided the password worked, the prompt should turn red. Now, if you’re using a hexxeh build, the file system is going to be read-only; you won’t be able to change the root password or build scripts. But otherwise, you should be able to use passwd to change the password:

passwd chronos

Once you’ve got slightly more secure shell environment (by virtue of not using the default root password), it is time to do a little exploring. Notice that in /bin, you see sh, bash, rbash and the standard fare of Linux commands (chmod, chown, cp, attr, etc. Notice that you don’t see tcsh, csh or ksh. So bash commands from other platforms can come in, but YMMV with tcsh, etc. Running ps will give you some idea of what’s going on process-wise under the hood:

ps aux

From encrypts to crypto to the wpa supplicant, there’s plenty to get lost exploring here, but as the title of the article suggests, we’re here to write a script. And where better to start than with hello world? So let’s mkdir a /scripts directory:

mkdir /scripts

Then let’s touch a script in there called helloworld.sh:

touch /scripts/helloworld.sh

Then let’s give it the classic echo by opening it in a text editor (use vi as nano and pico aren’t there) and typing:

echo "Hello Cruel World"

Now close and save it, make it executable, and then run it:

chmod +x /scripts/helloworld.sh
/scripts/helloworld.sh

And you’ve done it. Use the exit command twice to get back to crosh and another time to close the command line screen. You now have a script running on ChromeOS. Next up, it’s time to start looking at deployment. This starts with knowing what you’re looking at. To see the kernel version:

uname -r

Or better:

cat /proc/version
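
If you’re building deployment scripts, the ChromeOS release information should also be sitting in /etc/lsb-release (assuming your build populates it the way the ones I’ve seen do), which is handy to key off of:

grep CHROMEOS_RELEASE /etc/lsb-release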

Google has been kind enough to build in sandboxing similar to that in Mac OS X, but the notion that you can’t run local applications is a bit mistaken. Sure, the user interface is a web browser, but under the hood you can still do much of what most deployment engineers will need to do.

If these devices are to be deployed en masse at companies and schools, scripts that set up users, bind to LDAP (GCC isn’t built-in, so it might be a bit of a pain to get there), join networks and such will need to be forthcoming. These don’t often come from the vendor of an operating system, but from the community that ends up owning the support. While the LDAP functionality could come from Google Apps accounts that are integrated with LDAP, the ability to have a “one touch deploy” is a necessity for any OS at scale, and until I start digging around for a few specific commands/frameworks and doing some deployment scripts to use them, right now I’m at about a 6 touch deploy… But all in good time!

Active Directory Mac OS X Mac OS X Server Mac Security Mass Deployment

Directory Services Scripting Changes in Lion

opendirectoryd

Scripting directory services events is one of the most common ways that the OS X community automates post-imaging tasks. As such, there are about as many flavors of directory services scripts as there are engineers that know both directory services and have a little scripting experience. In OS X Lion, many aspects of directory services change and bring with them new techniques for automation. The biggest change is the move from DirectoryService to opendirectoryd.

In Snow Leopard and below, when you performed certain tasks, you restarted the directory services daemon, DirectoryService. The same is true in Lion, except that instead of doing a killall on DirectoryService, you do it on opendirectoryd:

killall opendirectoryd

Also, local account passwords in OS X have been moved into attributes within user account property lists and so there is no longer a /var/db/shadow/hash directory. Therefore, copying property lists and their associated password hash file is no longer a necessary process.

dsperfmonitor vs odutil

Next, dsperfmonitor has gone to the great binary place in the sky to join dirt and DirectoryService. It is somewhat replaced with odutil. The odutil command is pretty easy and straightforward. You can see all open sessions, nodes, modules, requests, statistics and nodenames using the show verb (along with those subcommands). You can also set the logging level for directory services to alert, critical, error, warning, notice, info and debug, each with more and more events that are trapped. This is done with the set log verb along with the level (which is by default set to error):

odutil set log debug

The odutil command is also used to enable statistics. These are pretty memory intensive (or they were on a mini w/ 4GB of memory in it but might not be with your 32GB of RAM fortified Xserve). This is done using odutil’s set statistics verb w/ an option of either on or off:

odutil set statistics on

Note: It’s worth noticing that stats are persistent across restarts, so don’t forget to turn it off.

dsconfigldap

For Open Directory administrators, you’ll be elated to know that your LDAP bind script just got a bit shorter. Now, search policies are updated automatically when binding via dsconfigldap. But, if you have a bunch of scripting that you don’t want to rip apart you can still do search policies manually by using the spiffy new -S option for dsconfigldap (yes, I just insinuated that -S was for spiffy, what’s it to ya’?!?!).
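
So a minimal Lion-era bind can look something like the following sketch (the server, config name and credentials are placeholders; add -S if you’d rather keep handling the search policy yourself):

dsconfigldap -v -a ldap.pretendco.com -n "Pretendco LDAP" -u diradmin -p 'apassword'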

Kerberos

scutil can now be used to view Active Directory Kerberos information. scutil can also be used to query the search node and interface states. klist no longer seems to function properly, so use ktutil with the list verb to see service principals:

ktutil list

dsconfigad

Not to be left out, the Active Directory binding tool, dsconfigad, got some new flair as well (yes, I just insinuated that dsconfigad was really Jennifer Aniston’s contribution to OS X and I challenge you to prove me wrong). There is now a -restrictDDNS option, which, as I’m sure you can guess, restricts dynamic DNS registration in Active Directory-integrated DNS zones. There’s also the rockin’ new -authority option, which enables or disables Kerberos authority generation. Finally, dsconfigad gets some minor cosmetic changes: -f becomes -force, -r becomes -remove, -lu becomes -localuser, -lp becomes -localpassword, -u becomes -username, -p becomes -password, but the original options still work. Who knows how long the old operators will stick around, but my guess is they’ll be around until dsconfigad isn’t…

Most options and settings for the AD plug-in should now be configured following the AD bind process (thanks to @djstarr for that little addition). How does this impact your scripts? Just move the settings to the bottom of the script if they give you guff… Also, the -enableSSO option has been changed to -enablesso.
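
Putting that together, here’s a rough sketch of a bind using the new long options, with a plug-in setting applied after the bind per the note above (the domain and credentials are placeholders, and the -authority syntax is my best reading of the new option):

dsconfigad -add ads.pretendco.com -username winadmin -password 'apassword' -force
dsconfigad -authority enable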

Defaults

Finally, defaults now lets you leave the .plist extension on the end when you give it a file path. This should eliminate the 6 backspaces we often had to type to test certain things after tab-completing file names… :)
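
For example, a tab-completed path can now be handed straight to defaults, .plist and all:

defaults read ~/Library/Preferences/com.apple.finder.plist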

Mac OS X Mac OS X Server Mac Security Unix

Making Autocomplete a Bit Less Sensitive

I can’t stand it when I open terminal and go to cd into a directory I know to exist only to be confused by why using the tab doesn’t autocomplete my command. For those that don’t know, when you are using any modern command line interface, when you’re indicating a location in a file system, the tab key will autocomplete what you are typing. So let’s say you’re going to /System. I usually just type cd /Sys and then use the tab to autocomplete. In many cases, the first three letters, followed by a tab will get you there and you can therefore traverse deep into a filesystem in a few simple keystrokes.

But then there’s all this case weirdness with a lot of the more Apple-centric stuff in the file system. For example, when it’s FileSystem vs. Filesystem vs. filesystem. This makes sense when using a partitioning scheme that allows for case-based namespace collisions, but not in HFS+ (Journaled), the default format used with Mac OS X. So I find myself frequently editing the .inputrc file. This file can be used to do a number of cool tricks in a terminal session, but the most useful for many is to take the case sensitivity away from tab auto-completes, effectively de-pony-tailing the sensitive pony-tail boy.

To do so, create the hidden .inputrc file in your home folder:

touch ~/.inputrc

Then open it with your favorite text editor and add this line:

set completion-ignore-case on

Then save and close. Open a new terminal window and you should be able to tab auto-complete whether or not you have the case right. Try it with /sys-TAB instead of /Sys-TAB. Best of all, as you sudo the behavior follows your session (including sudo bash). However, if you su the behavior does not follow your session. Enjoy and may the pinky that is ever reaching for that shift key thank you as it gets a bit more rest in the next few days than in the last few…

Oh, and to turn it back off, either toss your .inputrc file (if you don’t have any other parameters in there) or just set the final word of the line to off.
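
As an aside, the same file can hold other readline tweaks alongside that one; for example, show-all-if-ambiguous lists the possible matches on the first tab rather than the second (include it or not, as a matter of taste):

set completion-ignore-case on
set show-all-if-ambiguous on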

Mac OS X Ubuntu Unix

Using a Colon As A Bash Null Operator

I was recently talking with someone who was constructing an if/then and wanted a simple echo only if a condition was not met. Given their other requirements it seemed a great use for the null operator, which in bash is a colon (:). It behaves a bit like redirecting to /dev/null, but with less typing.

One example of populating something with null is when you want to create a file that may or may not already exist, and you want the new file to be empty (or to start empty so you can write lines into it). You could have a line in a script that simply sends null to the file. Here, we’ll use this to create an empty file called seldon:

: > /temp/seldon

You might expect to see a colon in the /temp/seldon file created above, but you don’t, because : was interpreted as null and produced no output. You could also run it without the colon and end up with the same thing:

> /temp/seldon

If you echo :, you will see a colon echoed to the screen, since there it’s just an argument. And sure, the colon in the redirect above is unnecessary, but it leads up to a case where you do need something to stand in as the payload of an if/then. In the following, we check whether the variable $A is 1; if it is we do nothing, and if it isn’t we create an empty file called seldon:

if [ "$A" = "1" ];
then
:
else
: > /temp/seldon
fi

To test whether a variable is unset or empty, you can use -z (and quote the expansion, since an unquoted empty variable will throw the test off):

if [ -z "$A" ]

But you can also use the colon to shorten or eliminate the need for certain If/Then blocks entirely. To quote the bash reference (from http://www.gnu.org/software/bash/manual/bashref.html):

When not performing substring expansion, using the form described below, Bash tests for a parameter that is unset or null. Omitting the colon results in a test only for a parameter that is unset. Put another way, if the colon is included, the operator tests for both parameter’s existence and that its value is not null; if the colon is omitted, the operator tests only for existence.

This means that in the following, if $A is unset or null it gets set to 1, and otherwise it’s left alone. Note the leading colon: it’s the null command again, there so the shell doesn’t try to execute whatever the expression expands to:

: ${A:=1}

But if you want something closer to the opposite, the following expands to nothing when $A is unset or null and expands to 1 when $A has a value; it doesn’t actually change $A, it just substitutes 1 in place of its value:

echo ${A:+1}
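
A quick demo of how the two behave, as a sketch you can paste into an interactive bash session:

unset A
: ${A:=1}        # A was unset, so the := expansion assigns it 1
echo "$A"        # prints 1
echo "${A:+on}"  # A is now set and non-null, so this expands to on, without touching A
echo "$A"        # still 1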

There are a bunch of other uses in the link provided for the bash manual as well, but overall, you can cut out a lot of typing using a little old colon…

Mac OS X Mac OS X Server MobileMe

Sync'ing iTunes Libraries

I recently spent a few days trimming down the amount of space consumed by my home folder. In so doing I discovered a number of things I could be doing better with regards to utilization of my drive space. So I decided to offload most of my media (photos, movies, etc) off my laptop and onto my Mac Mini server. I also decided that one thing I’d like to live on both is iTunes.

Note: Before you do anything in this article you should verify you have a good backup. Also, both machines will end up needing to be Authorized for your iTunes account.

There are a lot of ways to keep two iTunes libraries in sync. There are also a number of 3rd party tools that can help you do so. I tested all the tools I could find and decided I’d rather just script it myself. Scripting a synchronization operation in Mac and Linux always seems to come down to a little rsync action. Given that rsync is a little old in Mac OS X, I started out by updating rsync to the latest (3.0.7) using the steps provided on bombich.com (I added using /tmp):

mkdir /tmp/rsyncupdate
cd /tmp/rsyncupdate
curl -O http://rsync.samba.org/ftp/rsync/src/rsync-3.0.7.tar.gz
tar -xzvf rsync-3.0.7.tar.gz
curl -O http://rsync.samba.org/ftp/rsync/src/rsync-patches-3.0.7.tar.gz
tar -xzvf rsync-patches-3.0.7.tar.gz
cd rsync-3.0.7
curl -o patches/hfs_compression.diff http://www.bombich.com/software/opensource/rsync_3.0.7-hfs_compression_20100701.diff
curl -o patches/crtimes-64bit.diff https://bugzilla.samba.org/attachment.cgi?id=5288
curl -o patches/crtimes-hfs+.diff https://bugzilla.samba.org/attachment.cgi?id=5966
patch -p1 <patches/fileflags.diff
patch -p1 <patches/crtimes.diff
patch -p1 <patches/crtimes-64bit.diff
patch -p1 <patches/crtimes-hfs+.diff
patch -p1 <patches/hfs_compression.diff
./prepare-source
./configure
make
sudo make install
sudo rm -Rf /tmp/rsyncupdate
/usr/local/bin/rsync --version

Provided the version listed is 3.0.7, we have a good build of rsync and can move on to our next step: getting a target volume mounted. In this case, I have a volume shared out called simply Drobo (I wonder what kind of RAID that is?!?!). Sharing was done from System Preferences -> Sharing -> File Sharing -> click + -> choose Drobo and then assign permissions. The AFP server is at 192.168.210.10. For the purposes of this example, the username is admin and the password is mypassword. So we’ll do a mkdir in /Volumes for Drobo:

mkdir /Volumes/Drobo

Then we’ll mount it using the mount_afp command along with a -i option:

mount_afp "afp://admin:mypassword@192.168.210.10/Drobo" /Volumes/Drobo

Now that we have a mount, we’ll need to sync the library up. In this case, ~/Music is a symlink pointing at the Music directory on the Drobo. This was created by copying my Music folder to the Drobo and then rm’ing the local one (trying this from the Finder fails):

rm -Rf ~/Music

Then using ln to generate the symlink (the target comes first, then the location of the link):

ln -s /Volumes/Drobo/Music ~/Music

Now sync the files. I’m not going to go into all of the options and what they do, but make sure you have permissions to both the source and the target (using the username and password of the user whose data you’re changing helps):

/usr/local/bin/rsync -aAkHhxv --fileflags --force --force-change --hfs-compression --delete --size-only ~/Music/iTunes /Volumes/Drobo/Music

Note: If you get a bunch of errors about operations failing, consider disabling the “Ignore ownership on this volume” setting for any external media you may be using.

Now fire up iTunes on the target machine and make sure it works. At this point, I could also share out the Music folder from my laptop and sync back as well, which would effectively allow me to make changes on both machines. However, for now, I only want to make changes on the laptop and not the desktop so there’s no need for a bidirectional sync.

Once the sync is complete, we can tear down our afp mount:

diskutil unmount /Volumes/Drobo

Now that we can sync data, we still need to automate the process as I’m not going to want to type all this every time I run it. First up, I’m going to create a .sh file (let’s just say /scripts/synciTunes.sh):

touch /scripts/synciTunes.sh

Then I’m going to take the commands to mount the drive, sync the data and then unmount the drive and put them in order into the script:

#!/bin/bash
# mount the AFP share, sync the iTunes library over, then unmount
/bin/mkdir -p /Volumes/Drobo
/sbin/mount_afp "afp://admin:mypassword@192.168.210.10/Drobo" /Volumes/Drobo
/usr/local/bin/rsync -aAkHhxv --fileflags --force --force-change --hfs-compression --delete --size-only ~/Music/iTunes /Volumes/Drobo/Music
/usr/sbin/diskutil unmount /Volumes/Drobo

Once created, the script should be run manually, and provided it succeeds it can be automated (i.e. by creating a LaunchDaemon). If it works for a little while, then you can consider synchronizing your iPhoto library and anything else if you so choose. Also, I ended up actually using ssh key-based authentication and doing rsync over ssh. That allows you not to put the password for a host on your network into a script in unencrypted form. You could do some trickeration with the password, but you might as well look into keys if you’re going to automate this type of thing to run routinely. Finally, I also later ended up removing the iTunes Genius files as I started to realize they were causing unneeded data to sync and they would just rebuild on the other end anyway. Hope this helps anyone else looking to build an iLife server of their own!
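
If you do go the launchd route mentioned above, here’s a rough sketch of what that could look like; since the script references ~/Music, a per-user LaunchAgent is arguably a better fit than a LaunchDaemon (the label and 3am schedule are placeholders, and this assumes the script above is executable):

mkdir -p ~/Library/LaunchAgents
cat > ~/Library/LaunchAgents/com.krypted.synciTunes.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.krypted.synciTunes</string>
  <key>ProgramArguments</key>
  <array>
    <string>/scripts/synciTunes.sh</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key>
    <integer>3</integer>
    <key>Minute</key>
    <integer>0</integer>
  </dict>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/com.krypted.synciTunes.plist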