
You can leverage the API built into the Casper Suite to do lots and lots of cool stuff without interacting directly with the database. Here, I'll use a simple curl command in a bash script, with myuser as the username for a server, mypassword as the password and myserver.jamfcloud.com as the server. Basically, we're going to ask the computers and mobiledevices endpoints for all their datas. Once we have that, we'll constrain the output to just the size attribute for each using sed:

curl -s -u myuser:mypassword https://myserver.jamfcloud.com/JSSResource/computers | sed -n -e 's/.*<size>\(.*\)<\/size>.*/\1/p'

curl -s -u myuser:mypassword https://myserver.jamfcloud.com/JSSResource/mobiledevices | sed -n -e 's/.*<size>\(.*\)<\/size>.*/\1/p'

This same logic can then be applied to any payload of XML data coming out of a REST API. Some APIs have options to constrain the output of a request; some don't. But whether they do or not, you can loop through a bunch of statements like this. Why would you look to the API to constrain data in the first place? It comes down to cost. Each time you run the above commands, you're costing yourself runtime, taxing the server with a potentially substantial query, and potentially transferring a considerable amount of data over the wire between the server and wherever the script is run. So if the API is smart enough to give you less data, you might as well let it. In this case it isn't, but if you apply this same sed logic in other scripts, it's great to be cognizant of remaining as efficient as you can.
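Since both endpoints get the same treatment, the two commands above roll neatly into a loop; nothing here is assumed beyond the same myuser/mypassword credentials and server:

#!/bin/bash
# Loop over both endpoints and pull just the size attribute from each
for RESOURCE in computers mobiledevices; do
  curl -s -u myuser:mypassword https://myserver.jamfcloud.com/JSSResource/$RESOURCE | sed -n -e 's/.*<size>\(.*\)<\/size>.*/\1/p'
done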

December 18th, 2015

Posted In: JAMF


Wait, did I say control? I meant query… Sorry to disappoint! I am a home automation nerd. Recently I've noticed that as it gets closer to warmer or cooler extremes, it takes longer for my HVAC system to bring my house to the temperature I want. I've also noticed that Nest claims to automatically learn these factors. Not to be outdone by the Griswolds, I decided to look at building this into my system.
I had been experimenting with using the weather.com site to pull this data, but then someone pointed out that NOAA (the National Oceanic and Atmospheric Administration) actually publishes this information on their site. I was able to access a simple-to-parse dump of information for the Minneapolis airport, which is pretty close to my house. The URLs are based on ICAO codes. You can find the code for your airport on the ICAO code Wikipedia page. The URL to look at for information is http://weather.noaa.gov/pub/data/observations/metar/decoded/<ICAO code>.TXT, or http://weather.noaa.gov/pub/data/observations/metar/decoded/KMSP.TXT for Minneapolis (or http://weather.noaa.gov/pub/data/observations/metar/decoded/KANE.TXT for Blaine, which is actually closer to me). You can just curl this straight, with nothing special, to view the text file:

curl http://weather.noaa.gov/pub/data/observations/metar/decoded/KMSP.TXT

The output is basically as follows:

MINNEAPOLIS-ST PAUL INTERNATIONAL , MN, United States (KMSP) 44-52N 93-13W 265M
Oct 01, 2013 - 10:53 AM EDT / 2013.10.01 1453 UTC
Wind: from the WNW (290 degrees) at 13 MPH (11 KT) gusting to 24 MPH (21 KT):0
Visibility: 10 mile(s):0
Sky conditions: mostly clear
Temperature: 68.0 F (20.0 C)
Dew Point: 48.9 F (9.4 C)
Relative Humidity: 50%
Pressure (altimeter): 29.82 in. Hg (1009 hPa)
Pressure tendency: 0.14 inches (4.6 hPa) higher than three hours ago
ob: KMSP 011453Z 29011G21KT 10SM FEW150 20/09 A2982 RMK AO2 SLP094 T02000094 51046
cycle: 15

I subtracted or added the difference in temperature to my desired temperature and am experimenting with how much more quickly I need to fire things up based on that (for my HVAC system it seems to be about a minute per 10 degrees of delta), but there are definitely plenty of ways to go about such number nerdery. Either way, I can now control the temperature based on the weather using curl, which is basically controlling the weather in my house, so not as untrue a title as with most front-page newspaper articles… Finally, there's also a REST API available from NOAA at http://graphical.weather.gov/xml/rest.php.
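If you want to script that delta yourself, here's a minimal sketch. The parsing matches the decoded output above; the desired temperature of 70F is my own assumption, and bc handles the floating-point math:

#!/bin/bash
# Pull the current outside temperature (F) from NOAA's decoded METAR feed
DESIRED=70 # assumption: the temperature you're driving the house toward
CURRENT=$(curl -s http://weather.noaa.gov/pub/data/observations/metar/decoded/KMSP.TXT | awk '/^Temperature/ {print $2}')
# A positive delta means the house needs heating relative to outside
DELTA=$(echo "$DESIRED - $CURRENT" | bc)
echo "Outside: ${CURRENT}F, delta from desired: ${DELTA}F"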

October 2nd, 2013

Posted In: Home Automation, Mac OS X, Minneapolis, sites


Cumulus comes with a number of commands installed in /usr/local/Cumulus_Workgroup_Server. The assets can be in a shared directory location, such as an NFS mount mapped to /cumulus or /Volumes/Cumulus. But in the /usr/local/Cumulus_Workgroup_Server directory there are a number of commands that can be pretty useful. For example, the stop-admin, stop-cumulus, start-cumulus and start-admin commands can be used to restart Cumulus using a simple ARD template:

/usr/local/Cumulus_Workgroup_Server/stop-admin.sh
/usr/local/Cumulus_Workgroup_Server/stop-cumulus.sh
sleep 30
/usr/local/Cumulus_Workgroup_Server/start-cumulus.sh
/usr/local/Cumulus_Workgroup_Server/start-admin.sh

There are others, such as status.sh, which shows the size of the repository, PIDs and time running. The repair.sh script can be used to repair the database, and remove-admin.sh and remove-cumulus.sh can uninstall the admin console and Cumulus servers respectively (danger, Will Robinson). The install-admin.sh and install-cumulus.sh scripts can be used to install these items respectively. The bin directory contains daemons such as cumulusd and cumulusrad. If you want to work with assets, you'll probably need the Java SE JDK to run and then query the Tomcat server. This web application environment leverages Cumulus Java classes to provide the API that can then be scripted into various workflows, such as providing a site that queries images in the DAM and displays those matching a given pattern on a website. Overall, the scripting that can be done without the API is service control oriented, but with the API and a little SOAP you can pretty much grab or change almost anything you need to.
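To make that restart template a bit safer, you can wrap it in a script that confirms the services actually came back up. A quick sketch using only the scripts named above (the sleep durations are guesses; tune them for your server):

#!/bin/bash
CWS=/usr/local/Cumulus_Workgroup_Server
"$CWS/stop-admin.sh"
"$CWS/stop-cumulus.sh"
sleep 30
"$CWS/start-cumulus.sh"
"$CWS/start-admin.sh"
sleep 10
# status.sh reports repository size, PIDs and uptime, so check its output for sanity
"$CWS/status.sh"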

September 27th, 2013

Posted In: Mac OS X, Mac OS X Server, Network Infrastructure


Watchman Monitoring is a tool used to monitor computers. I've noticed recently that there's a lot of traffic on the Watchman Monitoring email list that shows people want a great little (and by little I mean inexpensive from a compute time standpoint) monitoring tool to become an RMM (Remote Management and Monitoring) tool. The difference here is in "Management." Many of us actually don't want a monitoring tool to become a management tool unless we are very deliberate about what we do with it. For example, take a script that was just supposed to run a fix permissions, fed the machine name of 'rm -Rf /' that some ironic hipster of a user (v-neck, funny hat, unkempt beard) decided to name their hard drive because, well, they can. That ironic jackass of a user has just accidentally injection-attacked himself, he's now crying out of his otherwise brusque no-lens-having glasses, and you're liable for his data loss because you didn't sanitize that computer name variable before you sent it to some script. Since we don't want the scurrilous attention of hipsters everywhere throwing caustic gazes at us, we'll all continue using a standard patch management system like Casper, Absolute, Munki, FileWave, etc. But many organizations can still take value out of using Watchman Monitoring (and tools like Watchman) to trigger scripted events in their environment.

Now, before I do this, I want to make something clear: I'm just showing a very basic thing here. I am assuming that people would build some middleware around something a little more complicated than curl, but given that this is a quick and dirty article, curl's all I'm using for examples. I'm also not giving up my API key, as that would be silly. Therefore, if I were using a script, I'd have two variables in here. The first would be $MACHINEID, the client/computer ID you would see (highlighted in red) when looking at an actual computer's record in Watchman. The second variable is my API token, a special ID that you are provided by our friends at Watchman. Unless you're very serious about building some scripts or middleware right now, rather than bug them for it, give it a little while and it will be available in your portal. I've given the token $APITOKEN as my variable here.

The API, like many these days, is JSON. This doesn't send entire databases or even queries, but instead an expression of each variable. So, to see all of the available variables for our machine ID, we're going to use curl (I like to add -i to see my headers) and do the following lookup:

curl -i https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN

This is going to spit out a bunch of information separated by commas, with each variable and then the contents of that variable stored in quoted text. To delimit my results, I'm simply going to awk for a given position (using the comma as my delimiter instead of the default space). In this case, machine name is what I'm after:

curl -i https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN | awk -F"," '{ print $4}'

And there you go. It's that easy. Great work by the Watchman team in making such an easy-to-use and standards-compliant API.
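Grabbing field 4 by position works, but it's brittle if fields are ever added or reordered. If you'd rather key on the attribute's name, the same kind of sed capture shown in the Casper example above works here too; a sketch, assuming (hypothetically) that the key is named "machine_name":

curl -s https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN | sed -n 's/.*"machine_name":"\([^"]*\)".*/\1/p'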
Because of how common JSON is, I think integrating a number of other tools with this (kinda' like the opposite of the Bomgar implementation they already have) is very straightforward, and should allow for serious automation for those out there that are asking for it. For example, it would be very easy to take output like the following and weaponize it to clear caches before bugging you:
"plugin_id":1237,"plugin_name":"Check Root Capacity","service_exit_details":"[2013-07-01] WARNING: 92% (276GB of 297GB) exceeds the 90% usage threshold set on the root volume by about 8 GB."
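Here's a rough sketch of that idea. The plugin name and WARNING string match the output above, but the remediation itself is purely an assumption; substitute whatever you'd actually want to script:

#!/bin/bash
# Pull the client record and check whether the root-capacity plugin is warning
RESULT=$(curl -s https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN)
if echo "$RESULT" | grep '"plugin_name":"Check Root Capacity"' | grep -q 'WARNING'; then
  # Assumption: clearing user caches is the remediation you want before alerting
  rm -rf ~/Library/Caches/*
fi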
Overall, I love it when I have one more toy to play with. You can automatically inject information into asset management systems, trigger events in other systems and, if need be, allow the disillusioned youth the ability to erase their own hard drives!

July 3rd, 2013

Posted In: cloud, FileMaker, Mac OS X, Mac OS X Server, Mac Security, Mass Deployment, Network Infrastructure, Time Machine, Xsan


CrashPlan Pro Server is a pretty cool tool with a lot of great features that can be used to back up client computers. There are a lot of things that CrashPlan Pro is good at out of the box, but there are also a lot of other things CrashPlan Pro wasn't intended for that it could be good at, given a little additional flexibility. The REST API that CrashPlan Pro uses provides a little of that flexibility, and as with most APIs, I would expect it to provide even more as time goes on. I often hear people run away screaming when REST comes up, thinking they're going to have to learn some pretty complex scripting. And while the scripting can be complex, it doesn't necessarily have to be. You can find a lot of handy information about the options available in the REST API at http://support.crashplanpro.com/doku.php/api. The very first example command that CrashPlan gives is the following:
http://<server>:4280/rest/users?status=Active
Now, to use this in a very simple script, let's look at it with curl. You are going to need to authenticate, so we're going to inject that into the URL in much the same way that we would with something like, let's say, WebDAV, SSH or FTP. If the server name were foundation.lan, the user name were daneel and the password were seldonrulez, then the curl command would actually look like so (you could use the -u operator to inject the authentication information, but as you'll see later, I'd like to keep those commands a bit less complex):
curl http://daneel:seldonrulez@foundation.lan:4280/rest/users?status=Active
Note: The default port for the web administration in CrashPlan Pro is 4280. This is simply going to output a list of Active users on the server. The reason it outputs only Active users is that we asked it to (reading left to right after the /rest in the URL): query users, filter on the status attribute, and show only users whose status matches Active. We could just as easily have requested all users by using the following (which just removes ?status=Active):
curl http://daneel:seldonrulez@foundation.lan:4280/rest/users
Each user has a unique attribute in their id. These are assigned in ascending order, so we could also query for the user with an ID of 3 by simply following users with that unique ID:
curl http://daneel:seldonrulez@foundation.lan:4280/rest/users/3
We could also query for all users with a given attribute, such as orgId (note that these attributes are case sensitive, unlike many other things that start with http). For example, to find users with an orgId of 3:
curl http://daneel:seldonrulez@foundation.lan:4280/rest/users?orgId=3
The API doesn’t just expose looking at users though. You can look at Organizations (aka orgs), targets (aka mountPoints), server statistics (aka serverStats) and Computers (aka computers). These can be discovered by running the following command:
curl -i http://daneel:seldonrulez@foundation.lan:4280/rest/
To then see each Organization:
curl http://daneel:seldonrulez@foundation.lan:4280/rest/orgs
And to see each Computer:
curl http://daneel:seldonrulez@foundation.lan:4280/rest/computers
You can also perform compound searches fairly easily. For example, let's say that we wanted to see all of the Active computers belonging to the user whose ID is 3 (quoting the URL so the shell doesn't treat the ampersand as a background operator):
curl "http://daneel:seldonrulez@foundation.lan:4280/rest/computers?userId=3&status=Active"
These basic incantations of curl are simply getting information, which programmatically could also be specified using a -X operator (or --request if you like to type a lot) to indicate the type of REQUEST we're sending (continuing on with our Code42 sci-fi inspired example):
curl -X GET -H 'Content-type: application/json' http://daneel:seldonrulez@foundation.lan:4280/rest/orgs
The important thing about being able to indicate the type of REQUEST is that we can do more than GET: we can also POST and PUT. We also used the -H operator to indicate the type of data, which we're specifying as application/json (per the output of a curl -i command against the server's REST API URL). POST is used to create objects in the database, whereas PUT is used to update objects in the database. This could result in:
curl -i -H 'Content-Type: application/json' -d '{"username": "charlesedge", "password": "test", "firstName": "Charles", "lastName": "Edge", "orgId": "3"}' http://daneel:seldonrulez@foundation.lan:4280/rest/users
Once you are able to write data, you will then be able to script mass events, such as creating new users based on a dscl loop through groups, removing users at the end of a school year (PUT {"status": "Deactivated"}), mass-changing orgIds based on other variables, and basically fully integrating CrashPlan Pro into the middleware that your environment might already employ.
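For example, that end-of-year deactivation would be a one-liner per user. A sketch, reusing the user ID 3 from earlier (whether "Deactivated" is the exact status string your server expects is worth confirming against the API docs):

curl -i -X PUT -H 'Content-Type: application/json' -d '{"status": "Deactivated"}' http://daneel:seldonrulez@foundation.lan:4280/rest/users/3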
Perl, Python, Ruby and PHP come with a number of options specifically designed for working with REST, which makes more complicated scripting much easier (such as with PHP's curl_setopt); however, these are mostly useful if you already know those languages, and the point of this article was to stay in shell scripting land. This allows you to knock out simple tasks quickly, even if the good people at Code 42 didn't think to add the specific features to their software that you might have in mind. Once you start to get into scripting more complex events, look to the Python examples at the bottom of the API Architecture page to get ya' kickstarted!

November 4th, 2010

Posted In: cloud, Mac OS X, Mac OS X Server, Mac Security, Mass Deployment, Ubuntu, Unix
