Tiny Deathstars of Foulness

There is a lot of talk about “the cloud” in the IT trade magazines and in general at IT shops around the globe. I’ve used Amazon S3 in production for some web work, offsite virtual tape libraries (just a mounted location on S3) and a few other storage uses. I’m not going to say I love it for every use I’ve seen it put to, but it can definitely get the job done when used properly. I’m also not going to say that I love the speeds of S3 compared to local storage, but that’s kind of a given now, isn’t it… One of the more niche uses has been to integrate it into Apple’s Final Cut Server. In addition to S3 I’ve experimented with CloudFront for web services (which seems a little more like Akamai than S3) and done a little testing of MapReduce for some log crunching – and although the MapReduce testing has thus far been futile compared to just using EC2, it does provide an effective option if used properly. Overall, I like the way the Amazon Machine Images (AMIs – aka VMs) work and I can’t complain about the command line environment they’ve built, which I have managed to script against fairly easily. The biggest con thus far (IMHO) about S3 and EC2 is that you can’t test them out for free (or at least you couldn’t when I started testing them). I still get a bill for around 7 cents a month for some phantom storage I can’t track down on my personal S3 account, but it’s not enough for me to bother to call about… If you’re looking at Amazon for storage, I’d just make sure you’re using the right service. If you’re looking at them for compute firepower then fire up a VM using EC2, read up on their CLI environment and enjoy. Beyond that, it’s just a matter of figuring out how to build out a scalable infrastructure using pretty much the same topology as if they were physical boxen. I think the reason I’m not seeing a lot of people jumping on EC2 is the pricing. 
It’s practically free to test, but I think it’s one of those things where a developer has a new app they want to take to market and EC2 gives them a way to do that; but then when the developer looks at potentially paying 4x the intro amount at peak times for processing power (if a VM is always on, you would be going from $72 to $288 per month per VM, without factoring in data transfer to/from the VM at $0.10 to $0.17 per GB), they get worried and just go back to whatever tried and true route they’ve always used to take it to market. Or they think that it’s just going to do everything for them, are shocked to discover that it’s just a VM and get turned off… With all of these services you have to be pretty careful with transfer rates, etc. I haven’t found a product to do this yet, but what I’d really like is something like vSphere/vCenter or MS VMM that could provision, move and manage VMs whether they sit on Amazon, a host OS in my garage or a bunch of ESX hosts in my office – or a customer’s office for that matter – and preferably with a cute, sexy meter to tell me how much I owe for my virtual sandboxes.
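To ground those numbers, here’s the arithmetic behind them (a quick sketch; the hourly rates are the 2009-era figures discussed above, and the 720-hour month is an approximation):

```shell
# An always-on VM runs roughly 24 * 30 = 720 hours per month.
# Working in cents keeps the shell's integer arithmetic exact.
hours=720
base_rate=10   # $0.10/hour intro rate
peak_rate=40   # 4x the intro rate
echo "base: \$$(( hours * base_rate / 100 )) per month"   # prints: base: $72 per month
echo "peak: \$$(( hours * peak_rate / 100 )) per month"   # prints: peak: $288 per month
```

Data transfer would be on top of that, which is why watching transfer rates matters so much.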

April 30th, 2009

Posted In: Business, Consulting, VMware


In Windows, when you have connected to a share, as with a mapped drive letter, that share shows as Active. If at some point the client cannot communicate with the open session to an SMB/CIFS server, the drive will appear to the Windows client as OFFLINE. AFP does something similar, but the result is a constant communication with the AFP server (at least constant as it is perceived from the Finder). While this communication may appear to be constant, the connection is actually verified by the server every 30 seconds; and if no poll is sent from the server, the AFP client software will also attempt to reach out to the server to verify that the connection is available. This process is known as tickling. AFP uses the tickle to verify that clients are still connected to a server. However, this communication causes a minimal amount of network congestion, which some environments want to keep to a minimum. This is similar to the concept of disabling protocols in a network stack that are not being used. These protocols aren’t likely to cause issues on one machine, but when employed by thousands of hosts they can effectively cause a Denial of Service on the server. Provided that you use AFP, you’re just not going to want to disable the tickle. By default it happens every 30 seconds with no user intervention (other than connecting to a share point and having a session longer than 30 seconds). While you’re not going to want to disable it, you can increase the number of seconds between tickles to reduce traffic. For example, the following command will increase the time between tickles to 60 seconds:
serveradmin settings afp:tickleTime = 60
In order to set the tickleTime value back to 30 seconds, you would simply issue the following command:
serveradmin settings afp:tickleTime = 30
Setting a tickleTime isn’t for everyone. In fact, it’s pretty rare that this kind of step is needed; but if your Mac servers are causing a lot of collisions on the network and, using packet analysis, you determine the traffic to be due to DSI/AFP packets, then it is a fine time to test out tickleTime as a potential solution. If it doesn’t resolve your issue then you can always move back to 30 seconds. While a longer tickleTime can cause beach balls to last a fair amount of time in the event that a server connection is lost, it will also reduce traffic. Finally, the concept of a constant communication channel for file services may be foreign to even a seasoned Windows admin. However, this is just a reality of playing in the Apple sandbox.
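To put the interval change in perspective, here’s a rough sketch of tickle volume (the client count is made up; each client generates one tickle per interval):

```shell
# Packets per hour from N clients at a given interval is N * 3600 / interval.
clients=1000
for interval in 30 60; do
  echo "${interval}s interval: $(( clients * 3600 / interval )) tickles per hour"
done
```

Doubling the interval halves the tickle traffic, which is the whole point of the exercise.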

April 29th, 2009

Posted In: Mac OS X, Mac OS X Server, Mass Deployment


Open up Google Maps and search for 8 Sampsonia Way, Pittsburgh, PA. This is one of the funniest things I’ve seen on Google Maps. There was an old picture of the 318 offices, which showed me going over the fence one day when I locked my keys in the office, that I thought was funny (because it was me, mostly), but this is way better – and it got me to thinking about what else people have come across that Google has captured in action. Another that my wife mentioned to me is Liam Gallagher, frontman of Oasis, outside his favorite pub (he denies this is him, btw). There are also some scantily clad women in Paris, a woman flashing the camera (now taken down) and a guy with his bum hangin’ out of his pants. Then there’s the guy getting carted off to jail (imagine trying to get out of that one) and, of course, a guy picking up someone who is reportedly a prostitute. I guess the takeaway from all this is that you just shouldn’t do anything in public that you don’t want plastered onto Google Maps. While Street View is called, by some critics, an affront to privacy, I think it’s great. There are a multitude of pictures of men going into strip clubs, people getting into fights and even people taking a crap on the street. Would people behave as badly in public as they often do if they knew there was a very, very small chance that a Google car would happen to be driving by at that very moment and catch them in the act? Not that all of the things I mentioned count as behaving badly according to every cultural norm (although I’m pretty sure that dropping a big duke in the street is pretty much globally frowned upon). Which brings up my final point – in an increasingly globalized world, there are just some things (like sunbathing nude – also caught by the cameras) that are perfectly kosher in some societies but, while they may merely offend, are simply not OK in other parts of the world. There’s nothing new about any of this, but every day it gets even closer to real time.

April 28th, 2009

Posted In: personal


Mac OS X 10.5 supports SMB signing. But if you have some older operating systems you may need to disable SMB signing when using Windows Server 2003 and up to host your files, typically when the 2003 server is also a Domain Controller (DC). To determine whether SMB signing is required, use Netmon (Network Monitor). When using Netmon it is best to use a hub rather than a switch. Once you have set the addresses and performed a capture, you’ll then look for the SMB negotiation string. Options here are values of 3, 7 and 15, meaning SMB signing is disabled, enabled/not required and required, respectively. If SMB signing is required then you can set it to enabled/not required for testing. To do so, you will use the Microsoft network client: Digitally sign communications (always) policy in Group Policy (gpedit.msc from Start->Run of the host in question, or edit the policy from a DC). Note that even with this policy set to Disabled, signing will still be used if the client and server can negotiate it. At times the attempt at signing itself can cause a failure, although this is pretty rare; in that case you can disable negotiated signing as well by setting the Digitally sign communications (if client agrees) policy to Disabled. These values can also be controlled using the following registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters
By setting EnableSecuritySignature to a REG_DWORD value of 0 you would disable Digitally sign communications (if client agrees). By setting RequireSecuritySignature to a REG_DWORD value of 0 you would disable Digitally sign communications (always) – the server-side equivalent of which is the Digitally sign server communication (always) policy.
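For reference, the same change can be captured in a .reg file (a sketch using the value names discussed above; as always, back up the registry before importing):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters]
"EnableSecuritySignature"=dword:00000000
"RequireSecuritySignature"=dword:00000000
```

Import it with regedit /s (or by double-clicking it), then restart the Server service or reboot for the change to take effect.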

April 27th, 2009

Posted In: Active Directory, Mac OS X, Mac OS X Server, Mac Security, Windows Server


There are a number of reasons you might choose to change the location of your iPhoto library.  Maybe you want to store pictures on a FireWire drive, or maybe you want to store them on an iSCSI LUN, which I described how to work with recently. Either way, there are two ways I typically see people go about changing the location that iPhoto uses to store data. The first is to create a symbolic link from ~/Pictures/iPhoto Library to the directory you would like to use. The second, which is the better option, is to edit the location that the iPhoto preferences set as the path to the library (you would replace /Volumes/VolumeName/Path with the actual path to your storage):
defaults write com.apple.iPhoto RootDirectory /Volumes/VolumeName/Path
Once you have defined a new location, if you want to revert back to storing photos in your home folder you can either point the key back at the default location or simply delete it:
defaults write com.apple.iPhoto RootDirectory "$HOME/Pictures/iPhoto Library"
defaults delete com.apple.iPhoto RootDirectory
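The symbolic link approach mentioned above would look something like this. It’s demonstrated here with temporary directories standing in for the home folder and the external volume, so the paths are placeholders; quit iPhoto before moving a real library:

```shell
home=$(mktemp -d)   # stands in for ~/Pictures
vol=$(mktemp -d)    # stands in for /Volumes/VolumeName
mkdir "$home/iPhoto Library"
# Move the library to the new volume, then leave a symlink behind so
# iPhoto still finds it at the original path:
mv "$home/iPhoto Library" "$vol/iPhoto Library"
ln -s "$vol/iPhoto Library" "$home/iPhoto Library"
readlink "$home/iPhoto Library"   # shows where the link points
```

The quoting matters: the space in "iPhoto Library" will bite you if you leave the paths unquoted.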

April 26th, 2009

Posted In: Mac OS X


No, I’m not getting all teary-eyed about something…  Instead I’m thinking about changing the modification date stamp on a file.  Let’s take a fairly innocuous and hidden file, such as the COOKIES file located in the /usr/share/emacs/22.1/etc directory.  Since I’ve already tried the recipe, I’m going to go ahead and replace the contents of this file with the contents of the mutex script posted a few days ago. This leaves the modification date of the file altered, as can be seen by doing an ls -al on the file:
-rw-r--r-- 1 root wheel 4968 Apr 21 22:04 /usr/share/emacs/22.1/etc/COOKIES
We’re going to go a step further and use stat on the file to see even more information:
234881030 90192 -rw-r--r-- 1 root wheel 0 4968 "Apr 21 22:04:01 2009" "Apr 21 22:04:01 2009" "Apr 21 22:04:01 2009" "May 15 16:57:33 2007" 4096 16 0 /usr/share/emacs/22.1/etc/COOKIES
The following command will set the modified date to the same as the creation date (the last of the dates listed):
touch -t 200705151657 /usr/share/emacs/22.1/etc/COOKIES
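The -t argument takes a timestamp in [[CC]YY]MMDDhhmm[.SS] form. A quick self-contained sanity check of the behavior, using a temp file (this reads the result back with GNU date -r; on Mac OS X you’d use stat -f '%Sm' instead):

```shell
f=$(mktemp)
touch -t 200705151657 "$f"   # set the modification time to May 15 2007, 16:57
date -r "$f" +%Y-%m-%d       # read the modification date back: 2007-05-15
rm -f "$f"
```

Note that touch only rewrites the modification (and access) times; the creation date is what sends us to SetFile below.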
There are more than likely going to be times when you don’t want to update a file but instead replace it. For example, if we removed COOKIES and then did a curl of a file to /usr/share/emacs/22.1/etc/COOKIES, the file would get a new creation date. The Linux version of touch would typically have a -d option for that pesky creation date, but since the Mac OS X binary nastygrams that it’s an illegal option, we’re going to use the SetFile command instead. Below, we’ll take whatever file has been dropped into the appropriate location and set a new creation time for the file, to match that of the original file:
./SetFile -d "5/15/2007 16:57:33" /usr/share/emacs/22.1/etc/COOKIES

April 24th, 2009

Posted In: Mac OS X, Mac OS X Server, Mac Security, Mass Deployment


The Mac OS X program Address Book uses sqlite3 to store information.  The actual database is located in each user’s Library/Application Support/AddressBook directory and is called AddressBook-v22.abcddb.  In order to interface with Address Book you can use the sqlite3 command followed by the path to the database itself.  For example, the following command will simply dump you into a sqlite interactive command line environment:
sqlite3 ~/Library/"Application Support"/AddressBook/AddressBook-v22.abcddb
Once in the environment you can view databases, manually work with the data, etc.  The basic information about a contact is stored in the ZABCDRECORD table.  You can view the contents of this table using the following command:
select * from ZABCDRECORD;
If you type the following then you’ll list all of the contents of the ZABCDEMAILADDRESS table:
select * from ZABCDEMAILADDRESS;
Notice that here you’ll see email addresses, but also a lot more information.  To figure out which column of the table to look at to see just the email addresses, use the following command at the sqlite3 interactive prompt (noting that it’s case sensitive):
.header ON
Now that we know that we want to constrain our output to ZADDRESS, go ahead and use the following command to simply list the email addresses:
select DISTINCT(ZADDRESS) from ZABCDEMAILADDRESS;
You can also run the above query from a single command, rather than using the interactive prompt:
sqlite3 ~/Library/"Application Support"/AddressBook/AddressBook-v22.abcddb "select DISTINCT(ZADDRESS) from ZABCDEMAILADDRESS"
Or, if you would rather work with tab-delimited text (and we will want to for a future article) and pull all of the information for these users (which would need to be cross-referenced against their unique ID, such as ZSERIALNUMBER, for other pertinent information, btw):
sqlite3 -separator '\t' ~/Library/"Application Support"/AddressBook/AddressBook-v22.abcddb 'select * from ZABCDEMAILADDRESS'
Next we’re going to simply dump the contents of our file out to a text file called alladdys.txt, stored in /Scripts/alladdys.txt:
sqlite3 ~/Library/"Application Support"/AddressBook/AddressBook-v22.abcddb "select DISTINCT(ZADDRESS) from ZABCDEMAILADDRESS" > /Scripts/alladdys.txt
Now we’re going to go ahead and limit the output to addresses containing a given string (somedomain.com below is just a placeholder – substitute the string you’re actually looking for) and dump that into a file in the same folder called specificaddy.txt, which allows us to check Address Book to see if an email address is there (might be useful later):
sqlite3 -separator '\t' ~/Library/"Application Support"/AddressBook/AddressBook-v22.abcddb 'select DISTINCT(ZADDRESS) from ZABCDEMAILADDRESS' | grep somedomain.com > /Scripts/specificaddy.txt
While we were specifically looking for Address Book information, it’s worth noting that sqlite is fairly prolific in Mac OS X.  It is used with iCal, which stores databases in ~/Library/Calendars/Calendar Cache.  It is also used for some things in Mail, which stores information in ~/Library/Mail/Envelope Index.  Safari also stores information about RSS feeds in ~/Library/Syndication/Database3.
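None of this is specific to Address Book’s data, by the way; the same query pattern can be tried against a throwaway database. In this sketch the table and ZADDRESS column mirror the names described above, but the rows are made up:

```shell
db=$(mktemp)
# Build a tiny stand-in for the Address Book email table:
sqlite3 "$db" "CREATE TABLE ZABCDEMAILADDRESS (Z_PK INTEGER PRIMARY KEY, ZADDRESS TEXT);"
sqlite3 "$db" "INSERT INTO ZABCDEMAILADDRESS (ZADDRESS) VALUES ('a@example.com');"
sqlite3 "$db" "INSERT INTO ZABCDEMAILADDRESS (ZADDRESS) VALUES ('b@example.com');"
sqlite3 "$db" "INSERT INTO ZABCDEMAILADDRESS (ZADDRESS) VALUES ('a@example.com');"
# The DISTINCT query from above, run non-interactively; the duplicate
# a@example.com collapses to one row:
sqlite3 "$db" "select DISTINCT(ZADDRESS) from ZABCDEMAILADDRESS"
rm -f "$db"
```

This is a handy way to test a query before pointing it at your real (and irreplaceable) AddressBook-v22.abcddb.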

April 22nd, 2009

Posted In: Mac OS X, Mac Security, Mass Deployment


Yesterday I posted about Randomizing the Software Update Server for Mac OS X. But what if you wanted to update the Software Update Server list in the script automatically using your own URL (i.e. on a timed interval)? In this case, you could simply use the following script, which pulls down a new copy of the script:
#!/bin/bash
URL=""
PATH="/Scripts/"
/usr/bin/curl $URL > $PATH
exit 0
Notice that I made the URL and PATH variables. This is really just to make it easier to use if others choose to do so. It would also be pretty easy to add a line at the end to run the script; therefore, it would download the latest copy of the script and then run it. This can also be used as a vehicle for running a number of scripts, pushing out timed updates without ARD (or another similar software package) or just setting a nightly event to look for changes to something and then run it, a process we’ll call mutex checking, for future reference.
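The download-then-run variation described above can be sketched like so. A local file stands in for the remote URL here so the example is self-contained; in real use URL would point at your web server, and the filenames are placeholders:

```shell
src=$(mktemp)                          # stands in for the copy on the web server
printf '#!/bin/sh\necho updated\n' > "$src"
dest=$(mktemp)
URL="file://$src"                      # would normally be an http:// URL
curl -s "$URL" -o "$dest"              # pull the latest copy of the script
chmod +x "$dest"
"$dest"                                # then run it; prints: updated
```

The only difference from the script above is the last two lines, which make the freshly fetched copy executable and run it in one pass.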

April 21st, 2009

Posted In: Mac OS X, Mac OS X Server, Mac Security, Mass Deployment, Ubuntu, Unix


Getting through all of the dependencies for certain Perl modules can be hairy. To give you a sense of how complex Perl can be, here’s a small fact: CPAN has over nine thousand Perl modules listed. Keeping track of module dependencies can be a real pain. Fortunately, there's a simple solution… CPAN.pm is a Perl module that automates the whole process of downloading, unpacking, compiling and installing modules. For example, if I wanted to install a module called Colors::Yellow, I would type:

perl -MCPAN -e 'install Colors::Yellow'

That's it. CPAN.pm would automatically figure out dependencies, download the appropriate modules, and install them. If you want more information on using CPAN.pm (pm is short for Perl module) then see the Perl FAQ under "How do I install Perl modules?"
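Before reaching for CPAN at all, it’s worth checking whether a module is already present; asking perl to load it tells you. CPAN.pm itself ships with perl, so it makes a handy test subject here:

```shell
# -M loads the named module, -e 1 is a do-nothing program; a zero exit
# status means the module loaded cleanly.
if perl -MCPAN -e 1 2>/dev/null; then
  echo "CPAN.pm is installed"
else
  echo "CPAN.pm is not installed"
fi
```

Swap in any module name (Colors::Yellow, say) to check for it the same way.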

April 20th, 2009

Posted In: Mac OS X, Mac OS X Server, Unix


I’ve had a few instances where there was no way to set up round robin DNS or a load balancer and we were looking to alternate between a bunch of software update servers, so I’ve written a quick shell script to handle it.  Here it is, in pieces, so it makes sense. The following is a quick script to pull a URL from a random list of servers:
#!/bin/bash
Sus=""
sus=($Sus)
num_sus=${#sus[*]}
echo -n ${sus[$((RANDOM%num_sus))]}
exit 0
This script would simply write to the screen one of the software update servers that we’ve loaded into an array called sus, chosen using the $RANDOM variable.  You can replace the servers in this array with your own and it will simply write to the screen which server it has chosen.  Now, to have it actually set the server, replace the line that begins with echo -n with the following line:
defaults write /Library/Preferences/ CatalogURL ${sus[$((RANDOM%num_sus))]}
For deployment we’ve handled this two different ways.  The first is to have this script run at startup as a login hook (it’s really quick since it doesn’t do much) and let the OS run software updates based on whatever schedule you’ve employed.  The second is to set software updates to only ever run manually and then add a line at the end of the script to run them, which allows you to schedule the task using launchd or run it manually over ARD.  To set the software updates to run manually, run this command on the target system once (it will persist):
softwareupdate --schedule off
Now, after the script chooses a random software update server, tell it to install all available software updates from that server each time it’s run by adding the following to the end of the script:
softwareupdate -i -a
There is a lot more logic that can be built into it, but this is the basics of assigning a random software update server using a shell script.
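The selection logic can be exercised on its own with placeholder URLs (these hostnames are made up; substitute your real software update servers):

```shell
# A space-separated list of candidate servers:
Sus="http://sus1.example.com:8088/index.sucatalog http://sus2.example.com:8088/index.sucatalog"
sus=($Sus)                          # split the list into a bash array
num_sus=${#sus[*]}                  # count the entries
pick=${sus[$((RANDOM % num_sus))]}  # $RANDOM modulo the count picks one
echo "$pick"
```

Run it a few times and you’ll see the output bounce between the entries, which is exactly the behavior the login hook relies on.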

April 20th, 2009

Posted In: Mac OS X, Mac OS X Server, Mac Security, Mass Deployment

