Monthly Archives: December 2010

Mac OS X Mac OS X Server MobileMe

Sync'ing iTunes Libraries

I recently spent a few days trimming down the amount of space consumed by my home folder. In doing so, I discovered a number of things I could be doing better with regard to how I use my drive space. So I decided to move most of my media (photos, movies, etc.) off my laptop and onto my Mac Mini server. I also decided that one thing I’d like to live on both machines is my iTunes library.

Note: Before you do anything in this article, verify that you have a good backup. Also, both machines will end up needing to be authorized for your iTunes account.

There are a lot of ways to keep two iTunes libraries in sync, and a number of third-party tools that can help you do so. I tested all the tools I could find and decided I’d rather just script it myself. Scripting a synchronization operation on Mac and Linux always seems to come down to a little rsync action. Given that the rsync bundled with Mac OS X is a little old, I started out by updating rsync to the latest version (3.0.7) using the steps provided on bombich.com (I added the use of /tmp):

mkdir /tmp/rsyncupdate
cd /tmp/rsyncupdate
curl -O http://rsync.samba.org/ftp/rsync/src/rsync-3.0.7.tar.gz
tar -xzvf rsync-3.0.7.tar.gz
curl -O http://rsync.samba.org/ftp/rsync/src/rsync-patches-3.0.7.tar.gz
tar -xzvf rsync-patches-3.0.7.tar.gz
cd rsync-3.0.7
curl -o patches/hfs_compression.diff http://www.bombich.com/software/opensource/rsync_3.0.7-hfs_compression_20100701.diff
curl -o patches/crtimes-64bit.diff https://bugzilla.samba.org/attachment.cgi?id=5288
curl -o patches/crtimes-hfs+.diff https://bugzilla.samba.org/attachment.cgi?id=5966
patch -p1 <patches/fileflags.diff
patch -p1 <patches/crtimes.diff
patch -p1 <patches/crtimes-64bit.diff
patch -p1 <patches/crtimes-hfs+.diff
patch -p1 <patches/hfs_compression.diff
./prepare-source
./configure
make
sudo make install
sudo rm -Rf /tmp/rsyncupdate
/usr/local/bin/rsync --version

Provided the version listed is 3.0.7, we have a good build of rsync and can move on to our next step: getting a target volume mounted. In this case, I have a volume shared out called simply Drobo (I wonder what kind of RAID that is?!?!). Sharing was done from System Preferences -> Sharing -> File Sharing -> click + -> choose Drobo and then assign permissions. The AFP server is at the IP address 192.168.210.10. For the purposes of this example, the username is admin and the password is mypassword.
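Incidentally, on Mac OS X Server you can skip the clicking and create the same share point with the sharing command (a minimal sketch; the share name Drobo matches the example above):

sudo sharing -a /Volumes/Drobo -A Drobo

With the share in place, we’ll do a mkdir in /Volumes for Drobo on the client: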

mkdir /Volumes/Drobo

Then we’ll mount it using the mount_afp command, passing the credentials as part of the AFP URL:

mount_afp "afp://admin:mypassword@192.168.210.10/Drobo" /Volumes/Drobo

Now that we have a mount, we’ll need to sync the library up. In this case, ~/Music on the server is a symlink to the Music directory on the Drobo. This was created by copying my Music folder to the Drobo and then rm’ing the original (this fails when you try it from the Finder):

rm -Rf ~/Music

Then we use ln to generate the symlink, pointing ~/Music at the copy on the Drobo:

ln -s /Volumes/Drobo/Music ~/Music

Now sync the files. I’m not going to go into all of the options and what they do, but make sure you have permissions to both the source and the target (using the username and password of the user whose data you’re changing helps):

/usr/local/bin/rsync -aAkHhxv --fileflags --force --force-change --hfs-compression --delete --size-only ~/Music/iTunes /Volumes/Drobo/Music

Note: If you get a bunch of errors about operations failing, consider disabling the “Ignore ownership on this volume” setting for any external media you may be using.
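That checkbox lives in the Finder’s Get Info window for the volume; you can also flip it from the command line with vsdbutil, which enables ownership on a volume (a quick sketch, assuming the volume is mounted at /Volumes/Drobo):

sudo vsdbutil -a /Volumes/Drobo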

Now fire up iTunes on the target machine and make sure it works. At this point, I could also share out the Music folder from my laptop and sync back as well, which would effectively allow me to make changes on both machines. However, for now I only want to make changes on the laptop and not the desktop, so there’s no need for a bidirectional sync.
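For reference, if you did want to pull changes made on the desktop back down to the laptop, the reverse direction would look something like this sketch, run from the laptop with the Drobo still mounted (be careful with --delete in a two-way setup, since whichever direction runs last wins):

/usr/local/bin/rsync -aAkHhxv --fileflags --force --force-change --hfs-compression --delete --size-only /Volumes/Drobo/Music/iTunes ~/Music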

Once the sync is complete, we can tear down our AFP mount:

diskutil unmount /Volumes/Drobo

Now that we can sync data, we still need to automate the process, as I’m not going to want to type all of this every time I run it. First up, I’m going to create a .sh file (let’s just say /scripts/synciTunes.sh):

touch /scripts/synciTunes.sh
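We’ll also make the script executable, and since it will contain a password, lock down who can read it while we’re at it:

chmod 700 /scripts/synciTunes.sh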

Then I’m going to take the commands to mount the drive, sync the data and then unmount the drive and put them in order into the script:

#!/bin/bash
# Mount the Drobo share from the server
/bin/mkdir -p /Volumes/Drobo
/sbin/mount_afp "afp://admin:mypassword@192.168.210.10/Drobo" /Volumes/Drobo
# Push the local iTunes library up to the server
/usr/local/bin/rsync -aAkHhxv --fileflags --force --force-change --hfs-compression --delete --size-only ~/Music/iTunes /Volumes/Drobo/Music
# Tear down the mount
/usr/sbin/diskutil unmount /Volumes/Drobo

Once created, the script should be run manually, and provided it succeeds, it can then be automated (i.e., by creating a LaunchDaemon).
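A property list along these lines, saved into /Library/LaunchDaemons, would run the script nightly (a sketch only; the label com.example.synciTunes and the 2 a.m. schedule are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.example.synciTunes</string>
	<key>ProgramArguments</key>
	<array>
		<string>/scripts/synciTunes.sh</string>
	</array>
	<key>StartCalendarInterval</key>
	<dict>
		<key>Hour</key>
		<integer>2</integer>
		<key>Minute</key>
		<integer>0</integer>
	</dict>
</dict>
</plist>

Load it with launchctl:

sudo launchctl load /Library/LaunchDaemons/com.example.synciTunes.plist

If it works for a little while, then you can consider synchronizing your iPhoto library and anything else you choose. Also, I ended up actually using ssh pre-shared key authentication and doing rsync over ssh, which keeps the password for a host on your network out of the script in unencrypted form. With keys in place, the sync line would look more like the following (again a sketch, assuming the patched rsync is installed on both machines):

/usr/local/bin/rsync -aAkHhxv --fileflags --force --force-change --hfs-compression --delete --size-only --rsync-path=/usr/local/bin/rsync -e ssh ~/Music/iTunes admin@192.168.210.10:/Volumes/Drobo/Music

You could do some trickeration with the password, but you might as well look into pre-shared keys if you’re going to automate this type of thing to run routinely. Finally, I later ended up removing the iTunes Genius files, as I started to realize they were causing unneeded data to sync and they would just rebuild on the other end anyway. Hope this helps anyone else looking to build an iLife server of their own!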

iPhone public speaking

New 318 Press Releases on iOS

We put out 2 press releases at work on Friday. Fun stuff!

http://www.marketwire.com/press-release/Challenged-by-Deployment-of-Apple-iPads-in-Your-Enterprise-Tips-From-318-Consulting-1371111.htm

http://www.marketwire.com/press-release/Leading-Enterprise-Class-Apple-Consultancy-318-Becomes-iPad-Reseller-1371114.htm

Mass Deployment

Refreshing Managed Client Cache

Deleting the contents of the /Library/Managed Preferences directory is definitely one way to refresh your managed preferences cache in Mac OS X, but there are also commands specifically designed to clear the cache in each version of Mac OS X. By OS, these include the following:

  • 10.6 – mcxrefresh – you can use this command (in /usr/bin) to refresh managed preferences (see the examples after this list)
  • 10.6 also has a ManagedClient binary at /System/Library/CoreServices/ManagedClient.app/Contents/MacOS/ManagedClient; when run with a -f option, ManagedClient will force an update
  • 10.5 has a binary called mcxd, located at /System/Library/CoreServices/mcxd.app/Contents/MacOS/mcxd, which can also be run with a -f option
  • 10.4 has a binary called MCXCacher, stored at /System/Library/CoreServices/mcxd.app/Contents/Resources/MCXCacher, which supports the same -f option
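For example, on 10.6 (jdoe is just a hypothetical short name; mcxrefresh needs to run as root or be given the user’s credentials):

sudo mcxrefresh -n jdoe
sudo /System/Library/CoreServices/ManagedClient.app/Contents/MacOS/ManagedClient -f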

There are a number of other ways to go about this. If you have one that you use that I did not mention, please feel free to add a comment.

Mass Deployment

KACE 3.3 & the Mac

Version 3.3 of the KACE 2000 appliance introduces a few enhancements for the Mac OS X operating system. These include the following:

  • International Keyboards are now supported in the KACE NetBoot environment
  • Hardware inventory is now supported
  • Pre-installation tasks now support error handling
  • Post-installation tasks now have ByHost support

Overall, a nice update if you’re invested in the KACE appliances, although the Windows enhancements are (understandably) far more substantial, with updates to user profile migration (now hive-based), driver harvesting and other features, primarily for the Windows 7 clients in your environments.

cloud Final Cut Server

Amazon S3 File Size Limits

Back in November of 2008, I did an article on a way to use Amazon’s S3 to hook into Final Cut Server. At the time, though, S3 had a pretty big limitation: it wasn’t really suitable as an archive device for Final Cut Server because it couldn’t handle large video files. But today, Amazon announced that S3 now supports files of up to 5 terabytes using multipart upload (previously the maximum file size was 5 gigabytes).

This finally means that files do not have to be broken up at the file system layer in order to back up to Amazon’s cloud. It does not mean, however, that traditional disk-to-disk backup solutions can simply be pointed at S3, since large files must be sent using the multipart upload API. Still, the ability to use S3 for large files lets us finally leverage it in a way that is much simpler than it was previously, although it is still not as transparent as using a file system or URI path.

Overall, this represents a great step for Amazon and I hope to see even more of this in the future!

Articles and Books iPhone

iPhone and iPad Admin Guide Now Shipping

The Enterprise iPhone and iPad Administrator’s Guide is now shipping (and rapidly moving up in Amazon’s rankings)! There have also been a couple of sightings in Borders.

Apress also sent out a press release and an email blast regarding the book in the past week. So, feel free to buy it using the link below! :)