NFS. Not… Dead… Yet…

NFS may just never die. I’ve seen many an Xsan convert to NFS-based storage with dedicated pipes and fewer infrastructure requirements. I’m rarely concerned with debating the merits of technology but usually interested in mapping out a nice workflow despite said merits. So in the beginning… there is RPC. Why? Because before we establish a connection to an NFS share, we first want to check that we can talk to the system hosting it. Do so with rpcinfo (see the rpcinfo man page for options).


Now that we’ve established that we can actually communicate with the system, let’s use the mount command (for more on creating mounts, see man exports). Here, we’ll mount the share into /Network/Servers:

mount -t nfs nfs:// /Network/Servers/

ncctl is a one-stop shop for manipulating Kerberized NFS. Ish. You also have ncinit, ncdestroy, and nclist. So almost a one-stop shop. First, let’s check the list of shares you have and how you’re authenticating to each:

nclist -v

ncctl list can also be used. The output will be similar to the following:

/Network/Servers/       : No credentials are set

We should probably authenticate into that share. Now let’s actually set our principal (assuming you’ve already gotten a Kerberos ticket via kinit or a GUI somewhere):

ncctl set -p

Now that spiffy nclist command should show which principal you’ve authenticated as for each share.


Finally, ncdestroy is used to terminate your connection. So let’s just turn off the share for the evening:

ncctl destroy

Or ncdestroy, which is quicker to type. And voilà, you’ve got functional NFS again. Ish.

Now that you’re connected, nfsstat should show you how the system is performing. For more on using that, see: 

man nfsstat

Limit Upload and Download Streams for Google Drive File Stream on macOS

Google Drive File Stream allows you to access files from Google’s cloud. It’s pretty easy for a lot of our coworkers to saturate our pipes. So you can configure a maximum download and upload speed in kilobytes per second. To do so, write to a defaults domain of com.google.drivefs.settings in /Library/Preferences/ and use a key of BandwidthRxKBPS for download and BandwidthTxKBPS for upload (downstream and upstream, as they refer to them) as follows:

sudo defaults write /Library/Preferences/com.google.drivefs.settings BandwidthRxKBPS -int 200
sudo defaults write /Library/Preferences/com.google.drivefs.settings BandwidthTxKBPS -int 200

Create a page for your GitHub project

A page is a great way to have a portal for the various open source projects your organization maintains, or for showing how to use products from your organization. Plenty of great examples are out there, and the best have some things in common that I think are important:

  • The branding is consistent(ish) with company guidelines, so the experience isn’t tooooo dissonant with the main pages
  • The salesy part of the branding has been stripped out
  • The experience is putting useful content for developers right up front
  • Most assume some knowledge of scripting, consuming APIs, and other technical needs
  • They showcase projects and link to more information about them
  • Projects from multiple accounts are included (even projects owned by other organizations, if they help put more open source projects out there)

Taking all this into account, let’s make a page! To get started, first create a repository in your GitHub account named <accountname>.github.io.

Create an index.html page in there (even if it’s just a hello world page). At this point, you’re just writing standard HTML. You can have standard tags like H1, H2, etc. – and import CSS from another place.
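For instance, a minimal index.html might look like the sketch below; the title, headings, and stylesheet path are placeholders I made up, not anything GitHub requires:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Open Source at Example Org</title>
    <!-- Hypothetical stylesheet path; swap in your own CSS -->
    <link href="css/style.css" rel="stylesheet" type="text/css" media="screen">
  </head>
  <body>
    <h1>Hello World</h1>
    <h2>Our Projects</h2>
  </body>
</html>
```

GitHub Pages will serve whatever valid HTML it finds at the root of the repository, so this is enough to confirm the page works before you add anything fancier.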

One thing I really like is displaying cards for your GitHub projects. One project that I like to use (I must have tested 30-40) is github-cards. To use it you’ll need JavaScript enabled, as you can tell from the .js. Then you just include a line to display each card, replacing the data-github field contents with the username/projectname of any projects you want to include, as follows (e.g. for the GitHub user/org name jamf and the GitHub project name of KubernetesManifests):

<div class="github-card" data-github="jamf/KubernetesManifests" data-width="400" data-height="" data-theme="default"></div> <script src="//"></script>

One final note: many an organization has a standard CSS they want used when building new web properties, including something like a GitHub site. You can leverage these by calling them out in the header as follows:

<link href="" rel="stylesheet" type="text/css" media="screen">

Depending on how much a given CMS mucks up code, you might have a lot of tweaking to bring elements in and conform them to your org’s spec, but it’s much easier than sifting through rendered source to figure it out on your own. Once published, go to the page for the account (in this example, <accountname>.github.io) and voilà, you have a new page!

Backup and Restore a Parallels VM Programmatically

Parallels comes with a nifty command line tool called prlctl, installed at /usr/local/bin/prlctl when the package is installed. Use prlctl with the backup verb to back up a virtual machine, followed by the name or ID of a registered virtual machine. For example, if the name of the VM was Krypted Server, then you could run the following:

prlctl backup 'Krypted Server'

Or if the unique ID of the VM was 12345678-1234-1234-112233456789:

prlctl backup {12345678-1234-1234-112233456789}

To list existing backups of a given VM, the backup-list verb along with the name or unique ID would be used, as follows:

prlctl backup-list {12345678-1234-1234-112233456789}

And then to restore, you can either just use the ID of the VM to restore the latest backup:

prlctl restore {12345678-1234-1234-112233456789}

Or to choose a specific backup to restore, supply that serial following the -t flag:

prlctl restore {12345678-1234-1234-112233456789} -t {11223344-1122-1122-112233445566}

And voilà, you’ve backed up and restored with the CLI, so you can script away as needed.

Git Quick-start

Git is easy. It’s a command with some verbs. Let’s start with the init verb, which creates an empty git repository in the working directory (or supply a path after the verb):

git init

Now let’s touch a file in that directory. Once a new file is there, with that new repo as your working directory, run git with the status verb:

git status

Oh look, you now see that you’re “On branch master” – we’ll talk branching later. You see “No commits yet” and hey, what’s that, an untracked file! Run git with the add verb, and this time you need to specify a file or path (I’ll use . assuming your working directory is still the root of the repo):

git add .

Now let’s run that status command again. And hey, it sees that you now have a staged file (or files). 

Now let’s run our first commit. This takes the tracked and staged file that we just created and commits it. Until we do this we can always revert back to the previous state of that file (which in this simple little walkthrough would be for the file to no longer exist). 

git commit -m "test"

Now let’s edit our file and this time let’s run git with the diff verb:

git diff

Hey, you can now see what changed in your file(s). Easy, right? Check out the logs to see what you’ve been doing to poor git:

git log

Hey look, there’s a commit listed there, along with an author, a date and time stamp, as well as the name of the file(s) in the commit. Now, let’s run a reset to revert to our last commit. This will overwrite the changes we just made prior to doing the diff (you can target a specific commit by supplying it after --hard, or just leave that off to use the latest commit):

git reset --hard

Now this resets all files back to the way they were before you started mucking around with those poor files. OK, so we’ve been working off in our own little world. Before we explore the wide world that is cloning a repository off the interwebs, we’re going to take a quick look at branches. You know how we reset all of our files in the previous command? What if we had 30 files and we just wanted to reset one? That’s one of a number of reasons you shouldn’t work in your master branch. So let’s look at existing branches by running git with the branch verb:

git branch

You see that you have one branch, the “* master” branch. To create a new branch, simply type git branch followed by the name of the branch you wish to create (in this case it will be called myspiffychanges1):

git branch myspiffychanges1

Run git with the branch verb again and you’ll see that below master, your new branch appears. The asterisk is always used so you know which branch you’re working in. To switch between branches, use the checkout verb along with the name of the branch:

git checkout myspiffychanges1

I could have done both of the previous steps in one command, by using the -b flag with the checkout verb:

git checkout -b myspiffychanges1

OK now, the asterisk should be on your new branch and you should be able to make changes. Let’s edit that file from earlier. Then let’s run another git status and note that your modifications can be seen. Let’s add them to the list of tracked changes using git add with the working directory again:

git add .

Now let’s commit those changes:

git commit -m "some changes"

And now we have two branches, a little different from one another. Let’s merge the changes into the master branch next. First, let’s switch back to the master branch:

git checkout master

And then let’s merge those changes:

git merge myspiffychanges1
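The whole walkthrough so far, from init through merge, can be strung together and run end to end. Here’s a minimal sketch as a shell script; the temp-directory repo, file name, and identity values are all made up for illustration:

```shell
# A runnable sketch of the walkthrough above; repo location, file
# name, and identity values are illustrative only.
repo=$(mktemp -d)
cd "$repo" || exit 1
git init

# Identity, so commits work on a fresh machine (hypothetical values)
git config user.name "Example User"
git config user.email "user@example.com"

# A new, untracked file: stage it, then commit it
touch file.txt
git status
git add .
git commit -m "test"

# Edit the file, view the change, then throw it away
echo "scratch this" >> file.txt
git diff
git reset --hard

# Create a branch and switch to it in one step, then commit a change
git checkout -b myspiffychanges1
echo "a real change" >> file.txt
git add .
git commit -m "some changes"

# Switch back to the original branch and merge the changes in
git checkout -
git merge myspiffychanges1
```

Because myspiffychanges1 only adds commits on top of the original branch, the merge here is a simple fast-forward.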

OK – so now you know how to init a project, branch, and merge. Before we go on the interwebs, let’s first set up your name. Notice in the logs that the Author field displays a name and an email address. Let’s see where that comes from:

git config --list

This is initially populated by ~/.gitconfig so you can edit that. Or, let’s remove what is in that list:

git config --unset-all user.name

And then we can add a new set of information to the key we’d like to edit:

git config --global user.name "Charles Edge"

You might as well set an email address too, so people can yell at you for your crappy code some day:

git config --global user.email ""

OK, optics aside, let’s clone an existing repository onto our computer. The clone verb allows you to – insert suspense here – clone a repository into your home directory:

git clone

The remote verb allows you to hook a local repository up to a remote one, but it takes a couple of steps. First, init a project with the appropriate name and then cd into it. Then grab the URL of the project from GitHub and add it using the remote verb:

git remote add AutoPkg

Now let’s fetch a branch of that project, in this case, master:

git fetch AutoPkg master

Now we’ll want to download the contents of that branch:

git pull AutoPkg master

And once we’ve made some changes, let’s push our changes:

git push AutoPkg master
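To see this remote workflow run end to end without touching GitHub, you can use a local bare repository as a stand-in remote. This is only a sketch; every path, name, and identity value in it is made up:

```shell
# A local bare repo plays the part of the remote on GitHub
work=$(mktemp -d)
git init --bare "$work/origin.git"

# Seed the "remote" with a first commit from one working copy
git init "$work/seed"
cd "$work/seed" || exit 1
git config user.name "Example User"
git config user.email "user@example.com"
echo "hello" > readme.txt
git add .
git commit -m "initial"
git push "$work/origin.git" HEAD:master

# Now the flow from the article: init, remote add, fetch, pull, push
git init "$work/checkout"
cd "$work/checkout" || exit 1
git config user.name "Example User"
git config user.email "user@example.com"
git remote add AutoPkg "$work/origin.git"
git fetch AutoPkg master
git pull AutoPkg master

# Make a change and push it back up to the remote's master branch
echo "change" >> readme.txt
git add .
git commit -m "a change"
git push AutoPkg HEAD:master
```

Swap the bare-repo path for a GitHub URL and the same verbs work unchanged.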


Frameworks on macOS

A framework is a type of bundle that packages a dynamic shared library with the resources the library requires, including files (nibs and images), localized strings, header files, and maybe documentation. The .framework is an Apple bundle structure that contains all of the files that make up a framework.

Frameworks are stored in the following location (where the * is the name of an app or framework):

  • /Applications/*.app/Contents/Frameworks
  • /Library/*/
  • /Library/Application Support/*/*.app/Contents/
  • /Library/Developer/CommandLineTools/
  • /Library/Developer/
  • /Library/Frameworks
  • /Library/Printers/
  • /System/iOSSupport/System/Library/PrivateFrameworks
  • /System/iOSSupport/System/Library/Frameworks
  • /System/Library/CoreServices
  • /System/Library/Frameworks
  • /System/Library/PrivateFrameworks
  • /usr/local/Frameworks 

If you just browse through these directories, you’ll see so many things you can use in apps. You can easily add an import followed by the name in your view controllers in Swift. For example, in /System/Library/Frameworks you’ll find Foundation.framework. Foundation is pretty common, as it contains a number of core APIs such as NSObject, NSDate, NSString, and NSDateFormatter.

You can import this into a script using the following line:

import Foundation

As with importing frameworks/modules/whatever (according to the language) – you can then consume the methods/variables/etc in your code (e.g. let url = NSURL(fileURLWithPath: "names.plist")).

Disk Mount Conditioning In macOS

Here we go. The Disk Mount Conditioner “is a kernel provided service that can degrade the disk I/O being issued to specific mount points, providing the illusion that the I/O is executing on a slower device.” You won’t often find that the system decides to slow throughput to a device on its own. But it happens, and equally as useful, you can spoof a different type of device, which is quite helpful when troubleshooting.

It’s like that. You can run the dmc command to control and view the status of the dmc service, at /usr/bin/dmc. To see how to use dmc, simply run dmc with the help verb:

/usr/bin/dmc help

Sucker M.C.s can see a list of the device profiles that can be applied using the list verb:

  0: Faulty 5400 HDD

  1: 5400 HDD

  2: 7200 HDD

  3: Slow SSD

  …

  6: PCIe 2 SSD

  7: PCIe 3 SSD

Is it live? Many of the above aren’t available, but if you look at your hard drive, you can check the status of it (or multiple drives, if you have multiple supported controllers in your computer). To see what’s available on your system, simply run the list verb:

/usr/bin/dmc list

Pause. Once you have the list, you can check the status of each until you find the one you need. To check the status, simply use the status verb followed by the integer of your mount:

/usr/bin/dmc status 1

If you run a status check against a mount that is not available (Peter Piper), you’ll get something similar to the following:

DISK_CONDITIONER_IOC_GET error: No such file or directory

If you run the status check on a supported mount (in my case rock box), you’ll get output similar to the following:

Disk Mount Conditioner: OFF
Profile: Custom
 Type: HDD
 Access time: 0 us
 Read throughput: 0 MB/s (unlimited)
 Write throughput: 0 MB/s (unlimited)
 I/O Queue Depth: 0
 Max Read Bytes: 0
 Max Write Bytes: 0
 Max Read Segments: 0
 Max Write Segments: 0

It’s tricky. Note that the Profile is listed as Custom. There’s a funky thing with this command: if you want to see how the profile applied to a given device is configured, you use the show verb with the number assigned in the list:

/usr/bin/dmc show 1

Sorry, I talk too much. The output would be similar to the following:

Profile: 5400 HDD
 Type: HDD
 Access time: 26111 us
 Read throughput: 100 MB/s
 Write throughput: 100 MB/s
 I/O Queue Depth: 32
 Max Read Bytes: 33554432
 Max Write Bytes: 33554432
 Max Read Segments: 256
 Max Write Segments: 256

Walk this way – when you start the Disk Mount Conditioner for a device, you apply one of those profiles in order to do so. So, let’s say you want to slow the device down with its default set of settings:

sudo dmc start 1

What’s it all about? Let’s say I wanted to spoof a PCIe 3 SSD. First I’d need to stop the conditioning service:

sudo dmc stop 1

Ooh, whatcha gonna do? Let’s run the start again with the mount followed by the profile to invoke:

sudo dmc start 1 "PCIe 3 SSD"

Yah… And of course, to stop again, simply run the stop for the device again:

sudo dmc stop 1

But do be careful to revert back when you’re done with dmc… Hard Times, right?