Tiny Deathstars of Foulness

To tell curl that you want to read and write cookies, first we’ll start the cookie engine with an empty cookie jar, using the -b option, which reads cookies from the named file into memory:

curl -b newcookiejar

If your site can set cookies, you can then read them, following any redirects, with the -L option:

curl -L -b newcookiejar

The response should be similar to the following:

Reading cookies from file

Curl also supports reading cookies in the Netscape cookie format, used by pointing -b at a cookies.txt file instead:

curl -L -b cookies.txt

If the server updates a cookie in a response, curl updates that cookie in memory; but unless you also write the cookies back out to a file, the next run will read the original cookie again.

To create that file, use the -c option (the short form of --cookie-jar) as follows:

curl -c cookie-jar.txt

This will save all types of cookies (including session cookies). To differentiate, curl can junk (discard) session cookies using the --junk-session-cookies option, or -j for short. The following reads the cookie file but discards those expiring session cookies:

curl -j -b cookie-jar.txt

Use -c to start a session and save the cookies, then use -b on your next run to read them back. This could be as simple as the following:

$CURL -j -b $COOKIEJAR $site

You can also add a username and password to the initial request and then store the cookie. This type of authentication and session management is used frequently, for example by the Munkireport API.

To convert between formats, -b detects whether a file is a Netscape-formatted cookie file and parses it, and the -c option at the end of the line rewrites it:

curl -b cookie.txt -c cookie-jar.txt
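For reference, the Netscape format that -b detects is just a tab-separated text file. A minimal, hand-made example you can feed to -b for testing (the domain and cookie values here are made up):

```shell
#!/bin/sh
# Build a minimal Netscape-format cookie file by hand (values made up).
# Fields per cookie line: domain, include-subdomains, path, secure,
# expiry (unix time), name, value -- all tab-separated.
printf '# Netscape HTTP Cookie File\n' > cookies.txt
printf '.example.com\tTRUE\t/\tFALSE\t2147483647\tsession_id\tabc123\n' >> cookies.txt
# Sanity check: each cookie line should have 7 tab-separated fields.
awk -F'\t' '!/^#/ && NF { print NF }' cookies.txt
```

A `curl -b cookies.txt` against your site will then send that session_id cookie along with the request.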

February 20th, 2017

Posted In: Mac OS X, Mac Security


When you’re regression testing, you frequently just don’t want any delays in scripts unless you intentionally sleep them. By default, Safari has an internal layout delay that I’d totally forgotten about. So if your GUI scripts (yes, I know, yuck) are taking too long to run, check this out and see if it helps:

defaults write com.apple.Safari WebKitInitialTimedLayoutDelay 0

With a script I was recently working on, this made the thing take about an hour less. Might help for your stuffs, might not.

If not, to undo:

defaults delete com.apple.Safari WebKitInitialTimedLayoutDelay


February 1st, 2017

Posted In: Mac OS X Server, Mac Security


Thanks to Mr. Worley for dropping this into HipChat on Friday! <3

March 13th, 2016

Posted In: personal


Watchman Monitoring is a tool used to monitor computers. I’ve noticed recently that there’s a lot of traffic on the Watchman Monitoring email list showing that people want a great little (and by little I mean inexpensive from a compute-time standpoint) monitoring tool to become an RMM (Remote Monitoring and Management) tool. The difference here is in “Management.” Many of us actually don’t want a monitoring tool to become a management tool unless we are very deliberate about what we do with it. For example, consider the script that takes a machine name of ‘rm -Rf /’ because some ironic hipster of a user decided to name their hard drive that, well, because they can. That script was just supposed to run a fix permissions, but that ironic jackass of a user in his v-neck with his funny hat and unkempt beard just accidentally cross-site-script attacked himself, he’s now crying out of his otherwise brusque no-lens glasses, and you’re now liable for his data loss because you didn’t sanitize that computer-name variable before you sent it to some script.

Since we don’t want the scurrilous attention of hipsters everywhere throwing caustic gazes at us, we’ll all continue using a standard patch management system like Casper, Absolute, Munki, FileWave, etc. Many organizations can still take value out of using Watchman Monitoring (and tools like Watchman) to trigger scripted events in their environment.

Now, before I do this I want to make something clear. I’m just showing a very basic thing here. I am assuming that people would build some middleware around something a little more complicated than curl, but given that this is a quick and dirty article, curl’s all I’m using for examples. I’m also not giving up my API key as that would be silly. Therefore, if I were using a script, I’d have two variables in here. The first would be $MACHINEID, the client/computer ID you would see in Watchman. This would be what you see in red here, when looking at an actual computer.

[Screenshot: an actual computer in Watchman, with the client/computer ID shown in red]

The second variable is my API token. This is a special ID that you are provided from our friends at Watchman. Unless you’re very serious about building some scripts or middleware like right now, rather than bug them for it, give it a little while and it will be available in your portal. I’ve given the token $APITOKEN as my variable there.

The API, like many these days, is JSON. It doesn’t send entire databases or even queries, but instead an expression of each variable. So, to see all of the available variables for our machine ID, we’re going to use curl (I like to add -i to see my headers) and do the following lookup:

curl -i$MACHINEID.json?auth_token=$APITOKEN

This is going to spit out a bunch of information separated by commas, where each variable and its contents are stored in quoted text. To delimit my results, I’m simply going to awk for a given position (using the comma as my delimiter instead of the default space). In this case, the machine name is what I’m after:

curl -i$MACHINEID.json?auth_token=$APITOKEN | awk -F"," '{ print $4}'
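If you want to experiment with the awk side of that pipeline before pointing it at the live API, you can feed it a made-up sample line (the field names and values below are invented, so the machine name may land at a different position in real output):

```shell
#!/bin/sh
# A made-up, single-line JSON sample in the style the API returns.
json='{"client_id":123,"machine_name":"mac-01","serial":"C02XYZ","group":"HQ"}'
# Split on commas and print the second field, as in the pipeline above.
echo "$json" | awk -F',' '{ print $2 }'
```

Here the machine name happens to be field 2; against real output you would adjust the field number, just as the lookup above uses $4.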

And there you go. It’s that easy. Great work by the Watchman team in making such an easy-to-use and standards-compliant API. Because of how common JSON is, I think integrating a number of other tools with this (kinda’ like the opposite of the Bomgar implementation they already have) is very straightforward and should allow for serious automation for those out there asking for it. For example, it would be very easy to take this output and weaponize it to clear caches before bugging you:

"plugin_id":1237,"plugin_name":"Check Root Capacity","service_exit_details":"[2013-07-01] WARNING:  92% (276GB of 297GB) exceeds the 90% usage threshold set on the root volume by about 8 GB."
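As a sketch of what “weaponizing” that output could look like, the percentage can be pulled out of service_exit_details and compared against a threshold; the sample text is simply the line above, and the cache-clearing step is left as an echo:

```shell
#!/bin/sh
# The service_exit_details text from the sample output above.
details='[2013-07-01] WARNING:  92% (276GB of 297GB) exceeds the 90% usage threshold set on the root volume by about 8 GB.'
# Extract the first percentage figure from the warning text.
pct=$(echo "$details" | sed 's/.*WARNING:[[:space:]]*\([0-9][0-9]*\)%.*/\1/')
# Act only when usage crosses our threshold.
if [ "$pct" -ge 90 ]; then
  echo "root volume at ${pct}% - would clear caches here"
fi
```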

Overall, I love it when I have one more toy to play with. You can automatically inject information into asset management systems, trigger events in other systems and if need be, allow the disillusioned youth the ability to erase their own hard drives!

July 3rd, 2013

Posted In: cloud, FileMaker, Mac OS X, Mac OS X Server, Mac Security, Mass Deployment, Network Infrastructure, Time Machine, Xsan


CrashPlan Pro Server is a pretty cool tool with a lot of great features that can be used to back up client computers. There are a lot of things that CrashPlan Pro is good at out of the box, but there are also a lot of other things that CrashPlan Pro wasn’t intended for that it could be good at, given a little additional flexibility. The REST API that CrashPlan Pro uses provides a little flexibility and as with most APIs I would expect it to provide even more as time goes on.

I often hear people run away screaming when REST comes up, thinking they’re going to have to learn some pretty complex scripting. And while the scripting can be complex, it doesn’t necessarily have to be. You can find a lot of handy information about the options available in the REST API. The very first example command that CrashPlan gives is the following:


Now, to use this in a very simple script, let’s look at it with curl. You are going to need to authenticate, so we’re going to inject that into the URL in much the same way that we would with something like, let’s say, WebDAV, SSH or FTP. If the server name were foundation.lan, the user name were daneel and the password were seldonrulez, then the curl command would look like so (you could use the -u operator to inject the authentication information, but as you’ll see later I’d like to keep things a bit less complex):

curl http://daneel:seldonrulez@foundation.lan:4280/rest/users?status=Active

Note: The default port for the web administration in CrashPlan Pro is 4280.

This is simply going to output a list of Active users on the server. The reason it outputs only Active users is that we asked it to (reading from left to right after /rest in the URL): query users, using the status attribute, and show only users whose status is Active. We could just as easily have requested all users by using the following (which just removes ?status=Active):

curl http://daneel:seldonrulez@foundation.lan:4280/rest/users

Each user has a unique attribute in their id. These are assigned in ascending order, so we could also query for the user with an ID of 3 by simply following users with that unique ID:

curl http://daneel:seldonrulez@foundation.lan:4280/rest/users/3

We could also query for all users with a given attribute, such as orgId (note that these attributes are case sensitive, unlike many other things that start with http). For example, to find users with an orgId of 3:

curl http://daneel:seldonrulez@foundation.lan:4280/rest/users?orgId=3

The API doesn’t just expose users, though. You can look at Organizations (aka orgs), targets (aka mountPoints), server statistics (aka serverStats) and Computers (aka computers). These can be discovered by running the following command:

curl -i http://daneel:seldonrulez@foundation.lan:4280/rest/

To then see each Organization:

curl http://daneel:seldonrulez@foundation.lan:4280/rest/orgs

And to see each Computer:

curl http://daneel:seldonrulez@foundation.lan:4280/rest/computers
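Those resource listings lend themselves to a loop. As a sketch (using the same fictional hostname and credentials as above), this just prints each URL it would fetch; swap the echo for the commented curl to actually run the queries:

```shell
#!/bin/sh
# Walk each resource that the /rest/ discovery call advertises.
# echo prints the URL that would be fetched; the commented curl
# is what you would actually run against a live server.
BASE="http://daneel:seldonrulez@foundation.lan:4280/rest"
for resource in users orgs computers mountPoints serverStats; do
  echo "$BASE/$resource"
  # curl -s "$BASE/$resource"
done > rest_urls.txt
cat rest_urls.txt
```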

You can also perform compound searches fairly easily. For example, let’s say that we wanted to see all Active computers belonging to the user whose ID is 3:

curl "http://daneel:seldonrulez@foundation.lan:4280/rest/computers?userId=3&status=Active"

These basic incantations of curl simply get information, which could also be specified explicitly using the -X operator (or --request if you like to type a lot) to indicate the type of REQUEST we’re sending (continuing on with our Code 42 sci-fi-inspired example):

curl -X GET -H 'Content-type: application/json' http://daneel:seldonrulez@foundation.lan:4280/rest/orgs

The important thing about being able to indicate the type of REQUEST is that we can do more than GET: we can also POST and PUT. We also used the -H operator to indicate the type of data, which we’re specifying as application/json (per the output of a curl -i command against the server’s REST API URL). POST is used to create objects in the database, whereas PUT is used to update them. This could result in:

curl -i -H 'Content-Type: application/json' -d '{"username": "charlesedge", "password": "test", "firstName": "Charles", "lastName": "Edge", "orgId": "3"}' http://daneel:seldonrulez@foundation.lan:4280/rest/users

Once you are able to write data, you will be able to script mass events: create new users from a dscl loop over groups, remove users at the end of a school year (PUT {"status": "Deactivated"}), mass-change orgIds based on other variables, and basically fully integrate CrashPlan Pro into whatever middleware your environment already employs.
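As a hedged sketch of one such mass event, here is the shape a deactivation loop might take (the user IDs are made up, and the curl line is left commented so nothing is actually modified):

```shell
#!/bin/sh
# Loop over user IDs and show the PUT each pass would issue.
# Uncomment the curl (and drop the echo) to actually deactivate users.
BASE="http://daneel:seldonrulez@foundation.lan:4280/rest/users"
for id in 3 4 5; do
  echo "PUT {\"status\": \"Deactivated\"} -> $BASE/$id"
  # curl -X PUT -H 'Content-Type: application/json' \
  #      -d '{"status": "Deactivated"}' "$BASE/$id"
done
```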

Perl, Python, Ruby and PHP come with a number of options specifically designed for working with REST, which makes more complicated scripting much easier (such as with PHP’s curl_setopt); however, these are mostly useful if you already know those languages, and the point of this article was to stay in shell-scripting land. This lets you knock out simple tasks quickly, even if the good people at Code 42 didn’t think to add the specific features to their software that you might have in mind. Once you start to get into scripting more complex events, look to the Python examples at the bottom of the API Architecture page to get ya’ kickstarted!

November 4th, 2010

Posted In: cloud, Mac OS X, Mac OS X Server, Mac Security, Mass Deployment, Ubuntu, Unix


The latest Git works swimmingly on the Mac. To download it you can curl it from the repository:

curl -O

Next, extract the files:

tar xzvf git-

Once extracted, cd into the directory that you extracted the files into, then run make configure with that directory as your working directory.

make configure

If you cannot run make because you don’t have a compiler, make sure that you have installed the developer tools on your computer. Once make configure has run, run configure, specifying the directory you would like to install into. In this case I’ll be deploying into /usr/local/git:

./configure --prefix=/usr/local/git
make prefix=/usr/local/git NO_MSGFMT=yes all

Now run a make install to complete the installation:

make install

Once git has been installed, let’s look at the global options and choose which ones to configure:

git config --global --list

The first setting you’ll typically want to change is the name that git uses. To set your name, use git config again, specifying the --global option, then the setting (user.name), followed by the actual name you would like to use (in my case, Charles Edge):

git config --global user.name "Charles Edge"

Next, use the user.email setting to set your email address:

git config --global user.email ""
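Putting those two settings together, here is a small sketch that configures the identity options and reads one back; it uses a throwaway HOME so your real ~/.gitconfig is untouched, and the email address is only a placeholder:

```shell
#!/bin/sh
# Set the usual git identity options and read one back to confirm.
# A throwaway HOME keeps your real ~/.gitconfig untouched; the
# email address is a placeholder.
HOME=$(mktemp -d); export HOME
git config --global user.name "Charles Edge"
git config --global user.email "user@example.com"
git config --global user.name
```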

Now that you know how to customize options, check the global options list and change any remaining that you would like to set. Once done, let’s grab the man pages. To install the man pages, first curl them down:

curl -O

Then extract the man pages into your /usr/local/share/man directory (or wherever you like to keep them):

tar xjv -C /usr/local/share/man -f git-manpages-

If git is already installed you can obtain the version information by running git with a --version option:

git --version

Now let’s grab the html docs:

curl -O

And then let’s put them in /Library/WebServer/Documents:

tar xjv -C /Library/WebServer/Documents -f git-htmldocs-

You should now be able to use git.

November 21st, 2009

Posted In: Mac OS X


Yesterday I posted about Randomizing the Software Update Server for Mac OS X, along with the script itself. But what if you wanted to update the Software Update Server list in the script automatically using your own URL (i.e., on a timed interval)? In this case, you could simply use the following script, which pulls a new copy of the script from the site:

/usr/bin/curl "$URL" > "$SCRIPT"
exit 0

Notice that I made the URL and the destination path ($SCRIPT) variables. This is really just to make it easier to use if others choose to do so. (Just don’t name the destination variable PATH, or you’ll clobber the shell’s search path.) It would also be pretty easy to add a line at the end to run the script; it would then download the latest copy of the script and then run it. This can also be used as a vehicle for running a number of scripts, pushing out timed updates without ARD (or another similar software package) or just setting a nightly event to look for changes to something and then run it, a process we’ll call mutex checking, for future reference.
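The whole “download, then only run it if it changed” idea can be sketched like so; the URL here is a local file:// placeholder (and the “remote” copy is created first just to keep the example self-contained), so point URL at your real http:// location in practice:

```shell
#!/bin/sh
# Sketch of "download the script, then run it only if it changed".
# The file:// URL and paths are placeholders; the "remote" copy is
# created here only to keep the example self-contained.
echo 'echo updated' > /tmp/updatesw.remote.sh
URL="file:///tmp/updatesw.remote.sh"
SCRIPT="/tmp/updatesw.local.sh"
NEW="/tmp/updatesw.new.sh"

/usr/bin/curl -s "$URL" -o "$NEW" || exit 1
# Replace and run the local copy only when the download differs.
if ! cmp -s "$NEW" "$SCRIPT"; then
  mv "$NEW" "$SCRIPT"
  chmod +x "$SCRIPT"
  "$SCRIPT"
fi
```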

April 21st, 2009

Posted In: Mac OS X, Mac OS X Server, Mac Security, Mass Deployment, Ubuntu, Unix
