Second Interview With Command Control Power

I was interviewed by the most excellent guys from the Command Control Power podcast. We talked about everything from Bushel, to IBM, to Apple, to OS X Server, to Krypted, to nerdy Instagramming and even a little reading. It’s now available. I have tons of fun with these guys and look forward to getting a good excuse to hang out with them again! Maybe next time I’ll interview them!

Device Management and Manual Labor

Getting a bunch of iOS and Mac devices set up is more of a logistical challenge than a technical hurdle. When you buy a couple of iPads, it’s pretty simple to set them up with the email, security settings and apps that you need those devices to have. You can put them all on a table, give them an Apple ID and then set them up identically to give to users. But the first time someone wipes a device, or loses a device that you need to wipe, you’ll have to do that manual labor again. And if you’re buying more than a couple of Apple devices, the amount of time required to manage all of these tasks is amplified. This is where a management solution comes into play. For More On Device Management and How It Impacts Manual Labor Click Here


Convert an iconset File to an icns File

Sometimes you just have to convert an iconset file to an icns file. And who knew, Apple was kind enough to give us a command to do just that in OS X! To use the iconutil command, run it with the -c option, which indicates the format to convert to (icns, in this case), and the -o option, which indicates the output file. Let’s use myfile.iconset as the source file and mynewfile.icns as the target file. The command would be as follows:
iconutil -c icns myfile.iconset -o mynewfile.icns
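If you have a whole folder of iconset bundles, that same command loops easily in a script. Here’s a minimal sketch under a couple of assumptions: the ./icons directory is hypothetical, and iconutil only ships with OS X, so the function checks for it before doing anything:

```shell
#!/bin/bash

# Convert every .iconset bundle in a directory to .icns with iconutil.
# The ./icons directory is a hypothetical example; iconutil exists only
# on OS X, so we bail out gracefully anywhere else.
convert_iconsets() {
  local dir="$1"
  if ! command -v iconutil >/dev/null 2>&1; then
    echo "iconutil not found (OS X only)"
    return 0
  fi
  local set
  for set in "$dir"/*.iconset; do
    [ -e "$set" ] || continue   # no matches in this directory
    iconutil -c icns "$set" -o "${set%.iconset}.icns"
  done
}

convert_iconsets ./icons
```

Each myfile.iconset in the folder comes out as myfile.icns alongside it.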

Automating Image File Changes

Ever need to automate changes to image files? Maybe a LaunchAgent that would watch a specific folder and resize png files that were dropped in there, or a little script that sanitized images as they came in to be a specific size (e.g. Poster Frames)? Well, sips is a little tool built into OS X that can help immensely with this. It will even convert a png to a jpeg, or a pict to a png. Let’s look at using sips. First up, let’s just get the width and height of an image file:
sips --getProperty pixelHeight /Shared/tmpimages/1.png
sips --getProperty pixelWidth /Shared/tmpimages/1.png
Or for dpi:
sips --getProperty dpiHeight /Shared/tmpimages/1.png
sips --getProperty dpiWidth /Shared/tmpimages/1.png
Or to get the format:
sips --getProperty format /Shared/tmpimages/1.png
Now let’s set a property. Here the property is format, and we use the -o option to output a copy of the file to a different location:
sips --setProperty format jpeg /Shared/tmpimages/1.png -o /Shared/imageoutput/1.jpeg
Pretty nifty so far. Now let’s resize an image using the -z option (height, then width):
sips -z 44 70 /Shared/tmpimages/1.png -o /Shared/imageoutput/converted.png
There’s lots more you can do with sips. It lives in /usr/bin, so call on it for general still image manipulation. It’s quick, easily scriptable and, best of all, a useful tool that can save lots of manual time converting images.
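As a sketch of that “script that sanitizes images” idea, here’s a small function that resizes every png in a folder with sips. The folder and the 44x70 geometry are just examples, and since sips ships only with OS X the function checks for it before touching any files:

```shell
#!/bin/bash

# Resize every .png in a directory to a fixed height and width with sips.
# The directory and geometry below are example values; sips is OS X-only,
# so we verify it exists first.
resize_pngs() {
  local dir="$1" height="$2" width="$3"
  if ! command -v sips >/dev/null 2>&1; then
    echo "sips not found (OS X only)"
    return 0
  fi
  local f
  for f in "$dir"/*.png; do
    [ -e "$f" ] || continue   # nothing to resize
    sips -z "$height" "$width" "$f" >/dev/null
  done
}

resize_pngs /Shared/tmpimages 44 70
```

Point a LaunchAgent’s WatchPaths at the folder and run a script like this, and drops into that folder come out at a uniform size.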

Archive & Restore Assets with fcsvr_client

Final Cut Server has an option to archive and restore assets. When archiving an asset, the asset is moved to a file system path that is represented by the device ID.
The process of archival and restore can be kicked off from the command line, which will initiate the movement of the asset. To archive an asset, you will use the archive verb with the fcsvr_client tool. This will require you to provide the asset ID number along with the device that you will be archiving the asset to. For example, to archive an asset with an ID of 318 to a device with an ID of 8 you will use the following command:
fcsvr_client archive /asset/318 /dev/8
Once archived, the asset can be easily restored provided that it is still in the archive path that it was backed up to. So assuming that the asset is still on /dev/8, you could use the following command to restore it (the device path is implied, as it is tracked in the metadata that corresponds to the asset ID):
fcsvr_client restore /asset/318
When archiving and restoring, it is never a bad idea to log that the action was sent to the queue. For example, if the asset ID were stored in a variable called ASSET and the device ID in a variable called DEV, then you could use the following to log that the automation had fired:
fcsvr_client archive /asset/$ASSET /dev/$DEV
/usr/bin/logger "Asset $ASSET is being copied to device $DEV"
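A hedged sketch of a wrapper that archives and logs in one step might look like this (the asset and device IDs are passed in as arguments; fcsvr_client only exists on a Final Cut Server host, so the function checks for it first):

```shell
#!/bin/bash

# Archive a Final Cut Server asset by ID and log the action.
# Only runs where fcsvr_client is installed (a Final Cut Server host).
archive_and_log() {
  local asset="$1" dev="$2"
  if ! command -v fcsvr_client >/dev/null 2>&1; then
    echo "fcsvr_client not found on this host"
    return 0
  fi
  fcsvr_client archive "/asset/$asset" "/dev/$dev" &&
    /usr/bin/logger "Asset $asset is being copied to device $dev"
}

archive_and_log 318 8
```

The logger line only fires when the archive command itself succeeds, so the log reflects what was actually queued.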

Don't Defrag the Whole SAN

I see a number of environments that are running routine defragmentation scripts on Xsan volumes. I do not agree with this practice, but given certain edge cases I have watched it happen. When defragmenting a volume, there is no reason to do so to the entire volume, especially if much of the content is static and not changing very often. And files that don’t have a lot of extents are easily skipped. Let’s look at a couple of quick ways to narrow down your defrag using snfsdefrag. The first is by specifying the path: use the -r option and follow it with the starting path under which you want to recursively seek fragmented files. The second is to limit the operation to files with more than a given number of extents, using the -m option. To combine these, let’s assume that we are looking to defragment a folder called Seldon on an Xsan volume called Harry:
snfsdefrag -r -m 25 /Volumes/Harry/Seldon
You should also build logic into your scripts if you are automating these events. For example, you could use the -c option to just count how many extents there are, and perform the actual defragmentation as part of an if/then only in the event that there are more than a specified threshold. Another example is to check that there isn’t already an snfsdefrag process running and, if there is, not fire up yet another instance:
currentPID=$(ps -ewo pid,user,command | grep snfsdefrag | grep -v grep | cut -d " " -f 1)
echo "The current snfsdefrag PID is ${currentPID} so we are aborting the process." > $logfile
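Putting the threshold idea and the running-process check together, a sketch might look like the following. The threshold and path are assumptions to tune for your environment, and snfsdefrag only exists on Xsan systems, so the function degrades gracefully elsewhere:

```shell
#!/bin/bash

# Defragment a path only when no snfsdefrag is already running, and only
# touch files with more than a threshold number of extents (-m). The
# path and threshold below are example values.
defrag_if_clear() {
  local path="$1" threshold="${2:-25}"
  if pgrep -x snfsdefrag >/dev/null 2>&1; then
    echo "snfsdefrag is already running; aborting."
    return 1
  fi
  if ! command -v snfsdefrag >/dev/null 2>&1; then
    echo "snfsdefrag not installed on this host."
    return 0
  fi
  snfsdefrag -r -m "$threshold" "$path"
}

defrag_if_clear /Volumes/Harry/Seldon 25
```

Wrap the snfsdefrag call in an if/then against a `snfsdefrag -c` count if you want per-file decisions rather than relying on -m alone.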
If you insist on automating the defragmentation of an Xsan volume, then there’s lots of other little sanity checks that you can do as well. Oh, you’re backing up, right?

Create Groups Using dscl

The directory services command line (dscl) command can be used to create a group. Here we’re going to use dscl to create a group called Local Admins (or ladmins for short).  First up, create the group:
dscl . create /Groups/ladmins
Now give our ladmins group the full name by creating the name key:
dscl . create /Groups/ladmins RealName "Local Admins"
Now to give the group a password:
dscl . create /Groups/ladmins passwd "*"
Now let’s give the group a Group ID:
dscl . create /Groups/ladmins gid 400
That wasn’t so hard, but our group doesn’t have any users.
dscl . create /Groups/ladmins GroupMembership localadmin
Why create a group with just one member, though? We can’t use the create verb again with dscl, or we’ll overwrite the existing contents of the GroupMembership field, so we’re going to use append instead:
dscl . append /Groups/ladmins GroupMembership 2ndlocaladmin
If you use dscl to read the group:
dscl . read /Groups/ladmins
You’ll notice that because it was created through dscl it has a Generated ID of its own.  You can easily nest other groups into this one using their Generated IDs as well:
dscl . create /Groups/ladmins GroupMembers 94B6B550-5369-4028-87A8-0ABAB01AE396
The “.” that we’ve been using is interchangeable (in this case) with /Local/Default. Now let’s look at making a little shell script to do a few of these steps, for use with imaging. Touch a file called createladmins.bash and then give it the following contents:
#!/bin/bash
dscl . create /Groups/ladmins
dscl . create /Groups/ladmins RealName "Local Admins"
dscl . create /Groups/ladmins passwd "*"
dscl . create /Groups/ladmins gid 400
dscl . create /Groups/ladmins GroupMembership localadmin
dscl . append /Groups/ladmins GroupMembership 2ndlocaladmin
If you then want to hide these admins, check out my cheat sheet here:

Config AutoPlay (GUI & Registry) for Windows

When you insert a drive into a Windows computer, by default it’s likely going to mount the drive (and run the autorun.inf if there is one, or use AutoPlay to play the music if it’s an audio disc). If you hold down the Shift key when you insert a disk or drive, Windows will skip the auto-run and AutoPlay functionality. But you can also control that functionality at a pretty granular level. The most common way to do so is with the Group Policy Editor: open gpedit.msc, click on Computer Configuration, then Administrative Templates, then System, and select the option for Turn Off Autoplay. Here you will be able to set it to All, or select specific devices to disable the Autoplay functionality for. You can also configure Autoplay from the registry (and therefore push it out easily with login scripts if you don’t have Active Directory, e.g. Open Directory PDC environments). The key to change is NoDriveTypeAutoRun and it is actually located in two places:
  • HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
  • HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
The value of this key will be set as follows according to the type of drive/disk/disc/volume you’d like to disable the AutoPlay feature for:
  • 0x1 – Drives of unknown type
  • 0x4 – Removable drives
  • 0x8 – Fixed drives
  • 0x10 – Network drives
  • 0x20 – CD-ROM drives
  • 0x40 – RAM disks
  • 0x80 – Drives of unknown type
  • 0xFF – Disables AutoPlay on all kinds of drives
Once reconfigured you’ll need to reboot.  Also, make sure you have all the latest patches installed, as there were problems with Auto-Run and AutoPlay settings at some point in the past; there are specific patches, according to your version, if these problems persist.
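These values are a bitmask, so they combine: to disable AutoPlay on removable, network and CD-ROM drives at once, set NoDriveTypeAutoRun to the OR of those bits. Here’s a quick sketch of the arithmetic (the reg add line in the comment is the general Windows form, shown only for illustration):

```shell
#!/bin/bash

# NoDriveTypeAutoRun bits combine with bitwise OR.
# Removable (0x4) + network (0x10) + CD-ROM (0x20) drives:
mask=$(( 0x4 | 0x10 | 0x20 ))
printf 'NoDriveTypeAutoRun = 0x%X\n' "$mask"   # prints NoDriveTypeAutoRun = 0x34

# On the Windows side, that value would then be written with something like:
#   reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" \
#       /v NoDriveTypeAutoRun /t REG_DWORD /d 0x34 /f
```

0xFF is just the case where every bit is set, which is why it disables AutoPlay everywhere.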

Mac OS X: User Templates

New users on a Mac get a certain set of default settings, copied into their home folder the first time they log in from the contents of the /System/Library/User Template/English.lproj directory. You can drop files into this directory, or edit files that are already there, and they will be copied into new accounts when they’re created. This lets you customize the look and feel, default documents, fonts and other aspects of user accounts without having to do so each time a new user is created or logs into a system. This can be incredibly useful for scenarios where you are not using network accounts or mobile accounts, but have a number of different people logging into computers and want to provide specific settings. It goes without saying that many policies could be managed through local computer policies using MCX. However, that won’t always cover the settings you want, and using templates is easier if you won’t be limiting users from changing settings.

For example, let’s say that you want to provide all users with a default, stock set of fonts. If you go to the /System/Library/User Template/English.lproj/Library/Fonts directory, you can simply copy fonts into this folder and they will be provided to users when they log in. Once the fonts are in the home directory, users will be able to remove them, but will not otherwise be stuck with them. The same is true for any items stored in the home directory, including Microsoft Office preferences.

Another aspect of using user templates is performing scripts the first time a user logs in. For example, if you have a Microsoft Exchange environment, you can have Entourage automatically set up a user’s account the first time they log in by placing a self-destructing LaunchAgent in the user’s home folder (~/Library/LaunchAgents). This means creating the LaunchAgents folder, a script and the agent itself in the User Template, but if you have a large number of users it can save a lot of time in setup. Of course, if you’re using Open Directory, Active Directory or some other directory service, there are better ways to accomplish much of what you can do with user templates; still, it makes a great tool to keep in your bat-belt for when you need it.
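For the fonts example, the copy step is simple enough to script. This is just a sketch: the source folder is hypothetical, and the template path is parameterized so it can point anywhere (on a real Mac you’d run it as root against the /System/Library/User Template/English.lproj path from above):

```shell
#!/bin/bash

# Copy fonts into a user template directory so new accounts pick them up.
# The source folder is hypothetical; the template path defaults to the
# OS X location but can be overridden (useful for testing).
install_template_fonts() {
  local src="$1"
  local template="${2:-/System/Library/User Template/English.lproj}"
  mkdir -p "$template/Library/Fonts" || return 1
  cp "$src"/* "$template/Library/Fonts/" 2>/dev/null
  ls "$template/Library/Fonts"
}

# Usage (as root on the target Mac):
#   install_template_fonts /Users/Shared/fonts
```

Anything the function copies in shows up in every home folder created afterward, just like fonts dropped in by hand.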

FTP Command Line and Automation

The ftp command that runs on a Mac is similar to that from any other platform, including Windows – and not much has changed with regard to FTP for a long, long time. When using FTP you will log in to an FTP server, then issue some commands, one of which will kill your session to the host. The commands you issue during an FTP session are issued in an interactive mode of the shell, where you are actually running them against the target server:
  • ls – list the contents of a directory on the FTP server
  • cd – change the working directory on the FTP server
  • pwd – show the current directory on the FTP server
  • get – download files from the FTP server
  • put – upload files to the FTP server
  • account – include a password with your login information
  • bye – terminate an ftp session and close ftp (or use disconnect to simply terminate a session)
  • bell – make a cute sound after each file transfer is done
  • chmod – change permissions
  • delete – your guess is as good as mine (OK, you got me, it’s to delete a file off the server)
  • glob – enable globbing
  • hash – print a “#” for each data block transferred (and, of course, only truly functional in Amsterdam)
  • help – get help
  • lpwd – print the local working directory for transfers
  • mkdir – create folders on the FTP server
  • rmdir – delete folders from the FTP server
  • newer – only get a file if it’s newer (great for scripting synchronizations)
  • nmap – use positional parameters to set filenames
  • passive – use FTP passive mode
  • prompt – allows the use of letters to automate answers to prompts
  • rate – limit the speed of an upload or download
Notice the similarity between FTP commands and Mac OS X commands! The similarities don’t stop there: in Mac OS X you can have commands run automatically when you open a new shell, and using a netrc file you can do the same when opening an FTP session. The ~/.netrc file (or just netrc) is a text file that, like .bash_profile, lives in your home directory (aka ~). The .netrc file allows FTP to perform automatic logins to FTP servers based on the server name (there are subcommands to do the same, but .netrc keeps the password out of your history and scripts). Like .bash_profile, the .netrc needs to be created per user, which can be done with the following:
touch .netrc
Because you’re going to put password information in there, let’s also restrict who can look at it:
chmod 600 .netrc
Now open your shiny new .netrc file and create a block of settings for each FTP server you want to automate access for. This information will be the ftp server (machine), the username (login) and the password (yup, that’s the password). For example, the entire file could be:
machine ftp.example.com login ftpkrypted password kryptedsupersecrethax0rpassword
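Creating that file from a script follows the same touch/chmod steps. Here’s a sketch – the host ftp.example.com and the credentials are obviously placeholders for your own:

```shell
#!/bin/bash

# Append an auto-login entry to ~/.netrc and keep it private.
# The machine name and credentials below are placeholders.
NETRC="$HOME/.netrc"
touch "$NETRC"
chmod 600 "$NETRC"   # credentials live here, so owner-only access
cat >> "$NETRC" <<'EOF'
machine ftp.example.com login ftpkrypted password kryptedsupersecrethax0rpassword
EOF
```

With the entry in place, `ftp ftp.example.com` logs in without prompting.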
You can also supply this same information to the ftp command directly each time you connect, rather than storing it in .netrc.
Now that you can login without issue, let’s write a script to perform some routine tasks, such as keeping a web site up-to-date.
#!/bin/bash
ftp -d << ftpEnd
prompt
cd /Library/WebServer/Documents
put "*.html"
put "*.php"
cd /Library/WebServer/Documents
put "*.png"
quit
ftpEnd
Or Downloading documents off of a website:
#!/bin/bash
ftp -d << ftpEnd
prompt
cd /My/Documents
get "*.doc"
quit
ftpEnd
There are also some variables that you can use in the prompt:
  • %/ – the current working directory of the FTP server
  • %M – the hostname of the FTP server
  • %m – the hostname, only up to the first “.”
  • %n – the username used for the FTP server
Finally, you can also define macros in your .netrc file using macdef.