Obtain Information From Watchman Monitoring Using a Script

Watchman Monitoring is a tool used to monitor computers. I've noticed a lot of traffic on the Watchman Monitoring email list lately suggesting people want this great little (and by little I mean inexpensive from a compute-time standpoint) monitoring tool to become an RMM (Remote Monitoring and Management) tool. The difference is in the "Management." Many of us don't actually want a monitoring tool to become a management tool unless we're very deliberate about what we do with it. For example, take the machine that some ironic hipster of a user named 'rm -Rf /' because, well, they can. The script that was just supposed to run a fix on permissions has now let that ironic jackass in his v-neck and funny hat, with his unkempt beard, effectively cross-site script attack himself; he's crying out of his otherwise brusque no-lens glasses, and you're liable for his data loss because you didn't sanitize that computer name variable before you sent it to some script. Since we don't want the scurrilous attention of hipsters everywhere throwing caustic gazes at us, we'll all continue using a standard patch management system like Casper, Absolute, Munki, FileWave, etc.

Still, many organizations can get value out of using Watchman Monitoring (and tools like Watchman) to trigger scripted events in their environment. Before I do this, I want to make something clear: I'm just showing a very basic thing here. I assume people would build some middleware around something a little more complicated than curl, but given that this is a quick and dirty article, curl is all I'm using for examples. I'm also not giving up my API key, as that would be silly. So, if I were using a script, I'd have two variables in here. The first would be $MACHINEID, the client/computer ID you would see in Watchman when looking at an actual computer. The second variable is my API token, a special ID that you are provided by our friends at Watchman. Unless you're very serious about building scripts or middleware right now, rather than bug them for it, give it a little while and it will be available in your portal. I've used $APITOKEN as my variable for it.

The API, like many these days, is JSON. It doesn't send entire databases or even queries, but instead an expression of each variable. So, to see all of the available variables for our machine ID, we're going to use curl (I like to add -i to see my headers) and do the following lookup:

curl -i https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN

This is going to spit out a bunch of comma-delimited information, where each variable and its contents are stored as quoted text. To delimit my results, I'm simply going to awk for a given position (using the comma as my delimiter instead of the default space). In this case, machine name is what I'm after:

curl -i https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN | awk -F"," '{ print $4}'

And there you go. It's that easy. Great work by the Watchman team in making such an easy to use and standards-compliant API.
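For what it's worth, awk'ing on field position is brittle, since the position of a key can shift as the API grows. A slightly sturdier sketch pulls the value out by key name instead. This is assumption-laden: it assumes Python is on the box doing the lookup, and "machine_name" stands in for whatever the key is actually called in the record you get back.

#!/bin/bash
# Hedged sketch: look up one attribute from the Watchman API by key name.
# MACHINEID and APITOKEN are placeholders for your own values.
MACHINEID="your-machine-id"
APITOKEN="your-api-token"

# Fetch the JSON record (no -i here, since headers would break parsing).
JSON=$(curl -s "https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN")

# Pull the value out by key rather than by comma position.
# "machine_name" is an assumed key name; adjust to match the real record.
echo "$JSON" | python -c 'import json,sys; print(json.load(sys.stdin)["machine_name"])'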
Because of how common JSON is, I think integrating a number of other tools with this (kinda' like the opposite of the Bomgar implementation they already have) is very straightforward and should allow for serious automation for those out there asking for it. For example, it would be very easy to take output like the following and weaponize it to clear caches before bugging you:
"plugin_id":1237,"plugin_name":"Check Root Capacity","service_exit_details":"[2013-07-01] WARNING: 92% (276GB of 297GB) exceeds the 90% usage threshold set on the root volume by about 8 GB."
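A minimal sketch of acting on that might look like the following, assuming the JSON has been pulled down as in the lookup above. The plugin name match keys on the compact form shown above, and the cache path is an illustrative choice, not a recommendation for what to delete:

#!/bin/bash
# Hedged sketch: if the "Check Root Capacity" plugin is in WARNING,
# clear some caches before escalating. Placeholders throughout.
MACHINEID="your-machine-id"
APITOKEN="your-api-token"

JSON=$(curl -s "https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN")

# Look for a root capacity WARNING anywhere in the returned record.
# The grep patterns assume the compact JSON shown above.
if echo "$JSON" | grep -q '"plugin_name":"Check Root Capacity"' && \
   echo "$JSON" | grep -q 'WARNING'; then
  # Illustrative target only; pick what you clear very deliberately.
  rm -rf /Library/Caches/* 2>/dev/null
  echo "Root volume warning seen; caches cleared."
fi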
Overall, I love it when I have one more toy to play with. You can automatically inject information into asset management systems, trigger events in other systems and, if need be, give the disillusioned youth the ability to erase their own hard drives!

Apple's Customer-Facing System Status

Apple now has a new system status page for their services, available at http://www.apple.com/support/systemstatus. The site runs through many of Apple's services and shows an indicator light for whether each is up. Additionally, you can scroll down to the detailed timeline and see a historical account of which services have been online. This is yet another step in Apple's continued progress toward providing more and more information to the community on, well, everything. This includes seeing Apple popping up at conferences here and there (most notably at Black Hat this year), publishing more kbase articles that detail problems, and allowing more community involvement from some employees. A more open Apple is a more enterprise-, education- and consumer-friendly Apple.

Managing Office 365 Users Using PowerShell

Programmatically controlling the cloud is an important part of trying to rein in the chaos of the disparate tools the beancounters make us use these days. Of all the companies out there, Microsoft seems to understand this about as well as anyone, and their fine programmers have provided us with a nice set of tools to manage Office 365 accounts, both in a browser (as with most cloud services) and in a shell (which is what we'll talk about in this article).

This article isn't really about scripting PowerShell. Instead, we're just looking at a workflow that could be used to script a Student Information System, an HRIS solution or another tool with thousands of users in it to communicate with Microsoft's 365 cloud offering, providing access to Exchange, Lync, Access, Unified Messaging and, of course, Minesweeper. Wait, before you get carried away, I still haven't found a way to access Minesweeper through PowerShell... Sorry...

In order to manage Office 365 objects, you will first need to import the MSOnline module (a set of cmdlets) and then connect with an account that has administrative access to an Office 365 environment. To import the cmdlets, use the Import-Module cmdlet, indicating the module to import is MSOnline:

Import-Module MSOnline

Once you have imported the appropriate cmdlets, connect to MS Online using the Connect-MsolService cmdlet with no operators, as follows:

Connect-MsolService

You will then be prompted for a valid Live username and password. The Connect-MsolService cmdlet also supports a -Credential operator (Connect-MsolService -Credential), which allows for injecting authentication information into the command in a script; the Get-Credential cmdlet can prompt for and store those credentials for you. Next, set up a domain using New-MsolDomain along with the -Name operator followed by the name of the domain to use with Office 365:

New-MsolDomain -Name krypted.com

The output would appear as follows, indicating that the domain is not yet verified:
Name                Status          Authentication
krypted.com         Unverified      Managed
Once created, in order to prove that you are authoritative for the domain, build a TXT record in the DNS on the authoritative name server for the domain. To see what the TXT record should include, run Get-MsolDomainVerificationDns:

Get-MsolDomainVerificationDns -DomainName krypted.com -Mode dnstxtrecord

The output would appear as follows:
Label : deploymsonline.com
Text  : MS=ms123456789
Ttl   : 3600
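In zone-file terms (assuming DNS you can edit directly, and using the sample token from the output above; check the Label your own output gives you, since that is the host the record belongs on), the record translates to roughly:

krypted.com.    3600    IN    TXT    "MS=ms123456789"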
Once the TXT record is in place, confirm the domain using Confirm-MsolDomain:

Confirm-MsolDomain -DomainName krypted.com

Once the domain shows as verified, you can create users within it. To see account information, use the Get-MsolUser cmdlet with no operators:

Get-MsolUser

To create an account, use the New-MsolUser cmdlet. This requires four attributes for the account being created: UserPrincipalName, DisplayName, FirstName and LastName. These are operators for the command as follows, creating an account for Charles Edge with a display name of Charles Edge and an email address of cedge@krypted.com:

New-MsolUser -UserPrincipalName "cedge@krypted.com" -DisplayName "Charles Edge" -FirstName "Charles" -LastName "Edge"

Other attributes can be included as well, or you can use a csv file to import accounts in bulk (see the sketch at the end of this article). Once created, you can use the Set-MsolUserPassword cmdlet to configure a password, identifying the principal with -UserPrincipalName and the new password quoted with -NewPassword. I also elected not to make the user change their password at next login (through the web portal, users have to reset their randomly generated passwords, so this is much closer to what we've traditionally done in Active Directory Users and Computers):

Set-MsolUserPassword -UserPrincipalName cedge@krypted.com -NewPassword "reamde" -ForceChangePassword $false

We can also use Set-MsolPasswordPolicy to change the password policy, although here we'll use Set-MsolUser for the account so that the password never expires:

Set-MsolUser -UserPrincipalName cedge@krypted.com -PasswordNeverExpires $true

You could also use Set-MailboxPermission to configure permissions on mailboxes. I've found that Get-MsolAccountSku is helpful for getting information about the account I'm actually logged in as, and while I'm waiting for a domain to verify, Get-MsolDomain shows its status. Once the domain is accepted, Get-AcceptedDomain shows information about the domain, and Set-MsolUserLicense can be used to manage who gets what license.

Finally, all of this could be strung together into a subsystem that any organization could use to centrally bulk import and manage delegated domains in an Office 365 environment. There are going to be certain areas where human intervention is required, but overall, most of the process can be automated, and once automated, monitoring the status (e.g. number of accounts, etc.) can also be automated, providing a clear and easy strategy for third-party toolsets to integrate with the Office 365 service Microsoft is providing. It is a new world, this cloud thing, but it sure seems a lot like the old world, where we built middleware to do the repetitive parts of our jobs... Just so happens we're tapping into their infrastructure rather than our own...
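Speaking of stringing it together, here is a minimal sketch of that csv bulk import, assuming a csv file with UserPrincipalName, DisplayName, FirstName and LastName columns (the file name and column names here are mine, not anything Microsoft mandates):

# Hedged sketch: create one Office 365 user per row of a csv file.
# users.csv and its column names are assumptions for this example.
Import-Module MSOnline
Connect-MsolService -Credential (Get-Credential)

Import-Csv -Path "users.csv" | ForEach-Object {
    New-MsolUser -UserPrincipalName $_.UserPrincipalName `
        -DisplayName $_.DisplayName `
        -FirstName $_.FirstName `
        -LastName $_.LastName
}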

iWork Public Beta Goes Bye-Bye Today :: Last Call

I'm sure you've heard by now. But just in case you hadn't logged into iWork.com in a while or let the to-do lapse, it's worth a reminder that iWork Public Beta, the site you could upload Pages, Numbers and Keynote documents to, is being deprecated. The end comes today. In other words, if you have documents up on the site, you should download them immediately; come August, you won't be able to. Apple has even provided a document explaining how. The service that was provided by the iWork public beta is replaced by iCloud. Using iCloud, you can sync your documents between all of your devices. When you configure iCloud in System Preferences, you are prompted to sync contacts, calendars and bookmarks, but iCloud also gets configured for file synchronization at that time. While iCloud doesn't allow you to edit documents online, you can access them through the iCloud web portal and download them from any computer you like. The new iCloud integration also lets you see all your documents in each supported app when it's first opened.

Installing Google Drive

Google recently decided that it was time to force some other company to buy cloudy-dispositioned upstarts Dropbox and Box.net. Google also decided that Office 365 represented Microsoft being a little too brazen in its attempts to counteract the inroads Google has made into Microsoft territory. Therefore, Google thumped its chest and gave away 5GB of storage in Google Drive. Google then released a tool that synchronizes data stored on a Google Drive to Macs and Windows systems.

Installing Google Drive is pretty easy. Just browse to Google Docs and Google will tell you that there's this weird new Google Drive thing you should check out. Here, click on Download Google Drive for Mac (or Windows if you use Windows). Then agree to give your first born to Google (but don't worry, they'd never collect on that debt 'cause they're sworn to do no evil). Once downloaded, run the installer. You can link directly to your documents using https://drive.google.com. The only real question the installer asks is whether you'd like to automatically sync your Google Drive to the computer. I said yes, but if you've got a smallish drive you might decide not to. Once the Google Drive application has been downloaded and installed, open it (by default it's set to open at startup). You'll then see an icon in the menu bar that looks a little like a recycling symbol. Here, click on Open Google Drive folder. The folder with your Google Docs then shows up on your desktop. Copy an item in there and it syncs up to Google. It can then easily be shared through the Google Apps web portal and accessed from other systems.

While there are still a number of features Box.net and Dropbox offer because they're a bit more mature, I'd expect Google Drive to catch up fast. And given that I already have tons of documents in Google Docs, it is nice to have them saved down to my local system. I'm now faced with an interesting new challenge: where to draw the line in my workflow between Google Drive, Dropbox and Box.net. Not a bad problem to have, really! Given the frustrations of having things strewn all over the place, I'll want to minimize some of the haphazardness I've practiced with regard to why I put things in different places in the past. In some cases I need to be able to email to folders, have expiring links or have extended attributes sync between services, so some of this is likely to be case-by-case... Overall though, I'm very happy with the version 1 release of Google Drive. I mean, who complains about free stuff!?!?!

Link Baiting 101

I almost called this article "Aliens Can Listen To Calls on Your iPhone" or "How To Hack Into Every iPhone Ever (Even When They're Powered Off)". But then I thought that maybe it would be a bit too much. I've been a little melodramatic at times, but that's when I was younger and needed the rupees. TechTarget isn't young (although I don't know whether they need the rupees), and I'd like to point out two recent articles of theirs.

I remember reading an article a while back claiming that the first virus for the iPhone had hit. This was a pretty big site (not TechTarget, btw), but they had jumped on Apple, and jumped quick, for a lack of good security on the iOS platform. Why? Because Apple's huge, popular and a frickin' easy target. Every security researcher knows that if they can hack an iPad or an iPhone they're going to be famous. Still, only one has managed to do anything remotely close to cool, and you had to download his app, which got him banned, for the "exploit" to work (the "exploit" was actually javascript taxies). Security researchers do most everything they do for fame. Therefore, if there were going to be serious flaws with iOS, they'd have come up by now.

So let's look at these headlines versus the content of the articles. The first: Apple iOS Security Attacks A Matter Of When, Not If, IT Pros Say. The title isn't actually that bad (although I don't know that the IT Pros quoted are worthy of punditry). It's the headers within the article that set me off a little. "A false sense of iOS security" was the first: here they said that iOS users will run anything that comes along because there haven't been any vulnerabilities to iOS. The counter-argument would be that since a vulnerability *will* (or would) be on CNN, MSNBC, NPR, every web site, every magazine and possibly a PSA on flights, I think they'll figure it out pretty quick... The next header, "Responding to iOS security attacks", goes on to explain that (to summarize) iOS virus protection blows. OK, so we should develop more FUD-based apps to check for viruses in data those apps would have no access to anyway, due to sandbox controls. The next header, "Entry points for iOS security attacks", tells us that someone will exploit HTML5 or post an app with a Trojan or logic bomb on the App Store in order to destroy your iPhone as if it were a planet slated for demolition. But each app can only communicate with resources outside of that app using an API Apple allows, an API that doesn't cause combustion of the phone. If the app goes through the App Store then that has to be a public, not private, API. It is possible that someone could run a fuzzer against every possible variable exposed by every possible method and come up with a way to do something interesting, like cause the phone to reboot. But that kind of thing is going to be true of every platform and isn't worthy of the pretense that it's security consulting. I can dig on the possibility of that kind of vulnerability, but the author then indicates that Apple's security is 7th worst in the IT industry, with a 12% growth in vulnerabilities; thus an insinuation that people are actually exploiting holes in iOS, rather than Google monitoring iPhone user data a bit more than they should...

The second headline is much better though: How an iOS virus can infect the enterprise and what to do about it. Reading it, my first impression was that there was an iOS virus; you know, one written for iOS.
But no, they're talking about a virus that someone sends through your corporate Exchange server that is then copied to your Windows XP computer through the magical XP Virus Stream (like Photo Stream, but with more XP-specific features) and executes the virus that wipes your computer. I like it. I can dig that virus, but regrettably that virus doesn't exist. And apparently no good anti-virus exists, according to the article. Why not? Because Apple has overly secured the OS, and anti-virus has to be invoked manually.

Over-security is what makes iOS so great for phones. I'm one of those people that likes to hack stuff, and iOS isn't for hacking around in unless you have jailbroken the device. That's why my phone always works and I'm able to actually get stuff done on a consistent basis. There are certainly things Apple could do better. I would like to see security researchers more warmly welcomed, and for the Apple community to see those researchers as people who are building a stronger product rather than the enemy. I would like to see some technical features added, or centralized control over features added. But iOS security is a hard one to point the finger at.

It isn't just Apple; it's any company big enough to care about. The tech sites are mostly what I look at, and every time there's something they think they can hop on with Google or any of the other big names in the tech industry, they hop right on it to drive readers, whether well founded or not. Not all tech sites/magazines, mind you, just some. And when the company is famous enough (Google, Apple, Microsoft) for mainstream media to care about, all the better... At the end of the day, though, the way to get action is to file a feature request with vendors, not to make up crazy headlines aimed at selling FUD as a means of getting someone to go to your website...

Goodbye Google Wave

Looks like Wave will be read-only as of January and gone entirely at the end of April. From Google:
Dear Wavers,

More than a year ago, we announced that Google Wave would no longer be developed as a separate product. At the time, we committed to maintaining the site at least through to the end of 2010. Today, we are sharing the specific dates for ending this maintenance period and shutting down Wave. As of January 31, 2012, all waves will be read-only, and the Wave service will be turned off on April 30, 2012. You will be able to continue exporting individual waves using the existing PDF export feature until the Google Wave service is turned off. We encourage you to export any important data before April 30, 2012.

If you would like to continue using Wave, there are a number of open source projects, including Apache Wave. There is also an open source project called Walkaround that includes an experimental feature that lets you import all your Waves from Google. This feature will also work until the Wave service is turned off on April 30, 2012.

For more details, please see our help center.

Yours sincerely,
The Wave Team

© 2011 Google Inc. 1600 Amphitheatre Parkway, Mountain View, CA 94043

You have received this mandatory email service announcement to update you about important changes to your Google Wave account.

Scripting in Google ChromeOS

I recently got my hands on one of those Google ChromeBooks (the Cr-48). It's interesting to have an operating system that is just a web browser. But, as anyone likely reading this article already knows, the graphical interface is the web browser and the operating system is still Linux. But what version? Well, let's go on a journey together.

First, you need ChromeOS. If you've got a ChromeBook this is a pretty easy thing to get. If not, check http://getchrome.eu/download.php for a USB or optical download that can be run live (or even in a virtual machine). Or, if you know that you're going to be using a virtual machine, consider a pre-built system from hexxeh at http://chromeos.hexxeh.net/vanilla.php. I have found the VMware builds to be a bit persnickety about the wireless on a Mac, whereas the VirtualBox builds ran perfectly. I split my time between the two anyway, so I've just (for now) been rocking VirtualBox for ChromeOS. When you load it for the first time it asks for a Google account. Provide that, select your network adapter, choose from one of the semi-lame account images (for the record, I like the mad scientist one) and you're off to the races.

Next, we need a shell. When you first log in, you see a web page that shows you all of the Chromium apps you have installed. By default, you'll see File manager and Web Store. If you've used the OS X App Store then the Chrome Web Store is going to look pretty darn familiar. My favorite for now is Chrome Sniffer. But all of these kinda' get away from where we're trying to go: getting a scripting environment for ChromeOS.

Chrome comes with two types of shell environments. The first is crosh. To bring up a crosh environment, use Control-Alt-t. This keystroke invokes the crosh shell. Here, type help to see a list of the commands available. Notice that cd, chmod, etc. don't work. Instead, there are a bunch of commands that a basic user environment might need for troubleshooting, primarily around network connections. "But this is Linux," you say? Yup. In the help output you'll notice shell. Type shell and then hit enter. The prompt will change from crosh> to chronos@localhost. Now you can cd and perform other basic commands to your heart's delight. But you're probably going to need to elevate privileges for the remainder of this exercise, so let's type sudo bash and just get there for now. If you're using a ChromeBook, the root password might be root; if you're using a downloaded VM from hexxeh then it might be facepunch (great password, btw). Provided the password worked, the prompt should turn red.

Now, if you're using a hexxeh build then the file system is going to be read-only. You won't be able to change the root password or build scripts. But otherwise, you should be able to use passwd to change the password:

passwd chronos

Once you've got a slightly more secure shell environment (by virtue of not using the default root password), it is time to do a little exploring. Notice that in /bin you see sh, bash, rbash and the standard fare of Linux commands (chmod, chown, cp, attr, etc.). Notice that you don't see tcsh, csh or ksh. So bash scripts from other platforms can come in, but YMMV with tcsh, etc. Running ps will give you some idea of what's going on process-wise under the hood:

ps aux

From encrypts to crypto to the wpa supplicant, there's plenty to get lost in exploring here, but as the title of the article suggests, we're here to write a script. And where better to start than hello world.
So let's mkdir a /scripts directory:

mkdir /scripts

Then let's touch a script in there called helloworld.sh:

touch /scripts/helloworld.sh

Then let's give it the classic echo by opening it in a text editor (use vi, as nano and pico aren't there) and typing:

echo "Hello Cruel World"

Close and save, then make it executable (without this step the script won't run):

chmod +x /scripts/helloworld.sh

Now run it:

/scripts/helloworld.sh

And you've done it. Use the exit command twice to get back to crosh and once more to close the command line screen. You now have a script running on ChromeOS. Next up, it's time to start looking at deployment, and that starts with knowing what you're looking at. To see the kernel version:

uname -r

Or better:

cat /proc/version

Google has been kind enough to build in sandboxing similar to that in Mac OS X, but the idea that you can't run local applications is a bit mistaken. Sure, the user interface is a web browser, but under the hood you can still do much of what most deployment engineers will need to do. If these devices are to be deployed en masse at companies and schools, scripts that set up users, bind to LDAP (GCC isn't built in, so it might be a bit of a pain to get there), join networks and such will need to be forthcoming. These don't often come from the vendor of an operating system, but from the community that ends up supporting and owning the support. While the LDAP functionality could come from Google Apps accounts that are integrated with LDAP, the ability to have a "one touch deploy" is a necessity for any OS at scale, and until I start digging around for a few specific commands/frameworks and writing some deployment scripts to use them, right now I'm at about a six-touch deploy... But all in good time!
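In the meantime, a first building block for that kind of deployment scripting might be a little inventory report, along these lines. This is just a sketch under a few assumptions: that the usual Linux userland shown above is present, and that /tmp is writable on your build (it won't be on the read-only hexxeh images).

#!/bin/bash
# Hedged sketch: gather basics a deployment workflow might report back.
# The output path is an arbitrary choice for this example.
OUT=/tmp/inventory.txt

{
  echo "Kernel:  $(uname -r)"
  echo "Build:   $(cat /proc/version)"
  echo "Uptime:  $(uptime)"
  # Interface addresses, handy for confirming the network join worked.
  ifconfig | grep 'inet '
} > "$OUT"

cat "$OUT"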

Removing DigiNotar Trust in OS X

DigiNotar got hacked a while back, and more and more issues seem to continue to surface as a result (most notably the spoofing of Google). Read this article for more info on it, but I'm not gonna' rehash it all right now. Instead, let's correct it. To do so, we'll use the security command with the delete-certificate option, along with the -Z operator, which lets you specify the SHA-1 hash of the certificate to remove. Root certificates (those that appear under the System Roots section of the Keychain Access application) are all located in the /System/Library/Keychains/SystemRootCertificates.keychain keychain, so we'll specify that as well:

sudo security delete-certificate -Z C060ED44CBD881BD0EF86C0BA287DDCF8167478C "/System/Library/Keychains/SystemRootCertificates.keychain"

And that's it. Push out the security command through ARD or a policy and you're untrusting DigiNotar. To verify removal, use the find-certificate option, searching by the email address on the certificate as follows:

security find-certificate -e info@diginotar.nl "/System/Library/Keychains/SystemRootCertificates.keychain"

Keep in mind that the certificate can always be re-added to SystemRootCertificates.keychain when they get all their little issues sorted out.
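Wrapped up for an ARD push, a minimal sketch might look like the following; the hash and keychain path are the ones above, and the exit-status reporting is just one way to confirm the result back at the console (ARD runs this as root, so no sudo inside):

#!/bin/bash
# Hedged sketch: remove the DigiNotar root and verify it is gone.
HASH="C060ED44CBD881BD0EF86C0BA287DDCF8167478C"
KEYCHAIN="/System/Library/Keychains/SystemRootCertificates.keychain"

# Delete the certificate by its SHA-1 hash.
security delete-certificate -Z "$HASH" "$KEYCHAIN"

# Verify removal: finding the cert by its email address should now fail.
if security find-certificate -e info@diginotar.nl "$KEYCHAIN" >/dev/null 2>&1; then
  echo "DigiNotar root still present" >&2
  exit 1
else
  echo "DigiNotar root removed"
fi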

iCloud, Lion and iOS5

As most people who are going to read anything I write will already know, Apple released their new cloud service today. The Apple pages are already up, with a splash page on the main site pointing to a dedicated iCloud page. Apple has also anticipated some of the questions that most of us using MobileMe were going to ask in a short kbase article on the transition from MobileMe to iCloud: http://support.apple.com/kb/HT4597. Additionally, an email went out to MobileMe users today that read:
We’d like to share some exciting news with you about iCloud — Apple’s upcoming cloud service, which stores your content and wirelessly pushes it to your devices. iCloud integrates seamlessly with your apps, so everything happens automatically. Available this fall, iCloud is free for iOS 5 and OS X Lion users. What does this mean for you as a MobileMe member? When you sign up for iCloud, you’ll be able to keep your MobileMe email address and move your mail, contacts, calendars, and bookmarks to the new service. Your MobileMe subscription will be automatically extended through June 30, 2012, at no additional charge. After that date, MobileMe will no longer be available. When iCloud becomes available this fall, we will provide more details and instructions on how to make the move. In the meantime, we encourage you to learn more about iCloud.
Immediately, users of iOS 4.3.3 or higher can make use of the new music features. I purchased a song in iTunes and received an alert from the iTunes Store to enable the feature. I could then go over to Store -> Settings from within the iPhone and enable Music and Apps to download automatically when purchased from another device. It's also possible to enable transfers over cell networks, although I can't imagine a lot of people using such an option.

Apple also announced a slew of new features for iOS 5 and for Mac OS X 10.7, Lion. To me, the most critical things announced today are that iOS 5 will not need to be tethered to a computer to activate and that it can wirelessly run software updates. Those items are extremely important for growing enterprises of iOS-based devices. The most important things that didn't get announced from the stage are that Xsan is now included in Mac OS X and that Mac OS X Server survives another profitable year at Apple, but now as an app (or as much an app as you can be when you're an operating system); these were published to the Apple website. Many thought Xsan would be disappearing, but it is obviously here to stay for some time. The most important thing that we haven't heard jack-diddly-squat about is the future of our friend Final Cut Server, given that Mac OS X can now do a subset of its features out of the box (versioning).

Considering that I have more Apple computers than Imelda Marcos had shoes, I have a lot of mixed feelings about synchronizing media between devices. Luckily I don't have to enable the new features on all of them, although I already have on some...