Create Jira Issues From The Command Line

You can create Jira tickets from the command line. In the one-liner below, we pass some standard json to Jira's REST API to create a ticket. We'll use curl with -u to supply a username and password, -X to define a POST, and --data to define the contents of the post, wrapped in single quotes. Then -H defines the content type as json, followed by the URL of your Jira REST endpoint (the "issue" resource in the below code):

curl -D- -u krypted:MySuperSecretPassword -X POST --data '{"fields":{"project":{"key":"DOG"},"summary":"Make my feature better.","description":"Going to make everything better ever ever ever by doing things and by things I mean allll the things","customfield_001":"Testing testing","issuetype":{"name":"Story"},"timetracking":{"originalEstimate":"2d 4h"}}}' -H "Content-Type: application/json" https://krypted.atlassian.net/rest/api/2/issue/

You can swap out the json here with input to a script, or a file. That json can look prettier than it looks in the above single line:

{
  "fields": {
    "project": {
      "key": "DOG"
    },
    "summary": "Make my feature better.",
    "description": "Going to make everything better ever ever ever by doing things and by things I mean allll the things",
    "customfield_001": "Testing testing",
    "issuetype": {
      "name": "Story"
    },
    "timetracking": {
      "originalEstimate": "2d 4h"
    }
  }
}

As you can see, we're creating an issue in the DOG project (which could also be CAT or whatever key was generated when you created the project you're putting this issue into). We then add a "summary" and "description", as I don't think you can really create an issue without that information. Then we add information for a custom field our organization created and, finally, an estimate for how long the task should take; those last two are very much optional.

Any other fields you have available can be set the same way; just add them to the correct part of the json with the correct label and input.
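As a sketch of the file-based approach mentioned above, the following writes the payload to a file and hands it to curl with --data @file. The project key, summary, credentials, and URL are the same placeholder values used in the one-liner:

```shell
#!/bin/sh
# Write the issue payload to a file, then hand it to curl with @file.
# Project key, summary, and the Jira URL below are placeholders.
PROJECT="DOG"
SUMMARY="Make my feature better."
PAYLOAD_FILE="/tmp/issue.json"

cat > "$PAYLOAD_FILE" <<EOF
{
  "fields": {
    "project": { "key": "$PROJECT" },
    "summary": "$SUMMARY",
    "issuetype": { "name": "Story" }
  }
}
EOF

# Uncomment to actually create the issue:
# curl -D- -u krypted:MySuperSecretPassword -X POST --data @"$PAYLOAD_FILE" \
#   -H "Content-Type: application/json" https://krypted.atlassian.net/rest/api/2/issue/
```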

Pull iTunes App Categories via Bash

I love bash one-liners. Here's one that took me a bit to get just right; it pulls the category of an app based on the URL of the app.

curl -s 'https://itunes.apple.com/us/app/self-service-mobile/id718509958?mt=8' | grep -Eo '"applicationCategory":.*?[^\\]",'

If you don't already have the URL for an app, it can be obtained via a lookup using:

curl https://itunes.apple.com/lookup?id=718509958

If you’ll be performing these kinds of operations en masse from within server-side scripting, Apple has a number of programs, including the Affiliate Program, which allow you to do so more gracefully. But as a quick and dirty part of a script, this could solve a need. More importantly, hey, parse some json from bash without piping to python or perl or whatevers… Enjoy!
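The same grep-only technique works against the lookup endpoint, too. Here's a sketch using a trimmed, hypothetical copy of the lookup response so the parsing is visible; in practice you'd pipe `curl -s "https://itunes.apple.com/lookup?id=718509958"` into the grep instead:

```shell
#!/bin/sh
# Hypothetical, trimmed sample of the lookup response; pipe the real
# curl output in here when running for real.
SAMPLE='{"resultCount":1,"results":[{"trackViewUrl":"https://apps.apple.com/us/app/self-service-mobile/id718509958","primaryGenreName":"Business"}]}'

# Extract the app's URL from the json with grep and cut alone:
URL=$(printf '%s' "$SAMPLE" | grep -Eo '"trackViewUrl":"[^"]+"' | cut -d '"' -f 4)
echo "$URL"
```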

Register A Webhook In Jamf Pro

A webhook is a small web trigger that, when fired, sends a small amount of json to a web listener. Most modern software solutions support webhooks. They provide an easy way for an event in one piece of software to trigger an action in another.

An example of this is having a smart group change in Jamf Pro trigger something elsewhere. To start, you register a webhook in Jamf Pro by opening an instance of Jamf Pro, clicking on Settings, clicking on Global Management, and then clicking on Webhooks.

Registering Webhooks

From the Webhooks screen, click New.

New Webhook Screen

At the New Webhook screen, you will see a number of fields:

  • Display Name: The name used to identify the webhook in Jamf Pro.
  • Enabled: Check to enable the webhook, uncheck the box to disable the webhook.
  • Webhook URL: The URL that the json or xml will be sent to (note that you’ll need something at this URL to accept your webhook).
  • Authentication Type: None is used for an anonymous webhook and basic can be used to send a username and password to the webhook listener.
  • Connection Timeout: How long the webhook will attempt to open a connection before sending data.
  • Read Timeout: How long the webhook will attempt to send data before giving up.
  • Content Type: Choose to send information via xml or json.
  • Webhook Event: The type of event that causes Jamf Pro to send the hook.

The options for webhook events include:

  • ComputerAdded
  • ComputerCheckin
  • ComputerInventoryCompleted
  • ComputerPatchPolicyCompleted
  • ComputerPolicyFinished
  • ComputerPushCapabilityChanged
  • DeviceRateLimited
  • JSSShutdown
  • JSSStartup
  • MobileDeviceCheckin
  • MobileDeviceCommandCompleted
  • MobileDeviceEnrolled
  • PatchSoftwareTitleUpdated
  • PushSent
  • RestAPIOperation
  • SCEPChallenge
  • SmartGroupComputerMembershipChange
  • SmartGroupMobileDeviceMembershipChange

An example of a full workflow is what we did to trigger a Zapier action, documented at http://krypted.com/mac-os-x/add-jamf-pro-smart-group-google-doc-using-zapier/. There, we look at sending smart group membership changes to a Google Sheet so we can analyze them with other tools, a pretty standard use case.


NFS. Not… Dead… Yet…


NFS may just never die. I've seen many an Xsan convert to NFS-based storage with dedicated pipes and fewer infrastructure requirements. I'm rarely concerned with debating the merits of technology but usually interested in mapping out a nice workflow despite said merits. So in the beginning… there is rpc. Why? Because before we establish a connection to an NFS share, we first want to check that we can talk to the system hosting it. Do so with rpcinfo:

rpcinfo server.pretendco.com

Now that we've established that we can actually communicate with the system, let's use the mount command (for more on creating mounts, see `man exports`). Here, we'll mount the share:

mount -t nfs nfs://server.pretendco.com/bigfileshare /Network/Servers/server.pretendco.com/bigfileshare

ncctl is a one-stop shop for manipulating Kerberized NFS. Ish. You also have ncinit, ncdestroy, and nclist. So, almost a one-stop shop. First, let's check the list of shares you have and how you're authenticating to each:

nclist -v

ncctl list can also be used. The output will be similar to the following:

/Network/Servers/server.pretendco.com/bigfileshare       : No credentials are set

We should probably authenticate into that share. Now let's actually set our principal (assuming you've already Kerberized via kinit or a GUI somewhere):

ncctl set -p krypted@me.com

Now that spiffy nclist command should return something like the following:

/Network/Servers/server.pretendco.com/bigfileshare: krypted@me.com

Finally, ncdestroy is used to terminate your connection. So let’s just turn off the share for the evening:

ncctl destroy

Or ncdestroy, which is quicker to type. And voilà, you've got a functional NFS workflow. Ish.

Now that you’re connected, nfsstat should show you how the system is performing. For more on using that, see: 

man nfsstat
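To tie the section together, here's a sketch of the whole flow in one script, using the same placeholder server, share, and principal as above. The commands that touch the network or require macOS's NFS tooling are commented out so the script is safe to read through and run:

```shell
#!/bin/sh
# Derive the traditional /Network/Servers mount point from the server
# and share names used above.
SERVER="server.pretendco.com"
SHARE="bigfileshare"
MOUNTPOINT="/Network/Servers/$SERVER/$SHARE"
echo "$MOUNTPOINT"

# rpcinfo "$SERVER" || exit 1      # confirm rpc is reachable first
# mkdir -p "$MOUNTPOINT"
# mount -t nfs "$SERVER:/$SHARE" "$MOUNTPOINT"
# ncctl set -p krypted@me.com      # kerberos credentials for the share
# nclist -v                        # verify credentials are set
```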

Limit Upload and Download Streams for Google Drive File Stream on macOS

Google Drive File Stream allows you to access files from Google's cloud. It's pretty easy for a lot of our coworkers to saturate our pipes, so you can configure a maximum download and upload speed in kilobytes per second. To do so, write a BandwidthRxKBPS key for download and a BandwidthTxKBPS key for upload (downstream and upstream as Google refers to them) into the com.google.drivefs.settings defaults domain (stored at /Library/Preferences/com.google.drivefs.settings for machine-wide settings) as follows:

defaults write com.google.drivefs.settings BandwidthRxKBPS -int 200
defaults write com.google.drivefs.settings BandwidthTxKBPS -int 200
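Note that, run as-is, those two commands write to the current user's preferences. To land the keys in /Library/Preferences so the limit applies machine-wide, the same keys can be written as root. A sketch, with 200 KB/s as an example value:

```shell
#!/bin/sh
# Machine-wide variant (requires admin rights); the keys are the same
# BandwidthRxKBPS (download) and BandwidthTxKBPS (upload) as above.
sudo defaults write /Library/Preferences/com.google.drivefs.settings BandwidthRxKBPS -int 200
sudo defaults write /Library/Preferences/com.google.drivefs.settings BandwidthTxKBPS -int 200
```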

Create a github.io page for your GitHub project

A GitHub.io page is a great way to have a portal for the various open source projects your organization maintains or about how to use products from your organization. Some great examples of GitHub.io projects out there include:

All of the above have some things in common that I think are important:

  • The branding is consistent(ish) with company guidelines, so the experience isn’t tooooo dissonant with the main pages
  • The salesy part of the branding has been stripped out
  • The experience is putting useful content for developers right up front
  • Most assume some knowledge of scripting, consuming APIs, and other technical needs
  • They showcase projects and include information about them
  • Projects from multiple accounts are included (even projects owned by other organizations, if they help put more open source projects out there)

Taking all this into account, let’s make a page! To get started, first create a project in your GitHub account with the url of <accountname>.github.io.

Create an index.html page in there (even if it's just a hello world page). At this point, you're just writing standard HTML. You can use standard tags like H1, H2, etc., and import CSS elements from another place.
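Here's a minimal index.html to start from, the "hello world" page described above; the title and stylesheet comment are just placeholders:

```shell
#!/bin/sh
# Write a bare-bones starter page; commit and push this to your
# <accountname>.github.io repository as usual.
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head>
    <title>Our Open Source Projects</title>
    <!-- link your org's stylesheet here -->
  </head>
  <body>
    <h1>Hello world</h1>
  </body>
</html>
EOF
```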

One thing I really like is displaying cards for your GitHub projects. A project that I like to use for this (I must have tested 30-40) is GitHub-cards. As you can tell from the .js, it's JavaScript, so it needs to load in the browser. Then you just include a line to display each card, replacing the data-github field contents with the username/projectname of any projects you want to include, as follows (e.g. for the GitHub user/org name jamf and the GitHub project name KubernetesManifests):

<div class="github-card" data-github="jamf/KubernetesManifests" data-width="400" data-height="" data-theme="default"></div> <script src="//cdn.jsdelivr.net/github-cards/latest/widget.js"></script>

One final note, many an organization has a standard css they want to be used when building new web properties, including something like a GitHub site. You can leverage these by calling them out in the header as follows:

<link href="https://www.jamf.com/css/main.css" rel="stylesheet" type="text/css" media="screen">

Depending on how much a given CMS mucks up code, you might have a lot of tweaking to bring elements in and consume them to your org's spec, but it's much easier than sifting through rendered source to figure it out on your own. Once published, go to <accountname>.github.io (in this example, jamf.github.io) and voilà, you have a new page!

Backup and Restore a Parallels VM Programmatically

Parallels comes with a nifty command line tool called prlctl, installed at /usr/local/bin/prlctl when the package is installed. Use prlctl with the backup verb to run a backup of a virtual machine, followed by the name or ID of a registered virtual machine. For example, if the name of the VM were Krypted Server, you could run the following:

prlctl backup 'Krypted Server'

Or if the unique ID of the VM was 12345678-1234-1234-112233456789:

prlctl backup {12345678-1234-1234-112233456789}

To list existing backups of a given VM, the backup-list verb along with the name or unique ID would be used, as follows:

prlctl backup-list {12345678-1234-1234-112233456789}

And then to restore, you can either just use the ID of the VM to restore the latest backup:

prlctl restore {12345678-1234-1234-112233456789}

Or, to choose a specific backup to restore, supply that backup's ID following the -t flag:

prlctl restore {12345678-1234-1234-112233456789} -t {11223344-1122-1122-112233445566}

And voilà, you've backed up and restored with the CLI, so you can script away as needed.
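For example, to back up every registered VM in one pass, something like the following sketch could work. It assumes `prlctl list -a -o uuid` prints a UUID column, one VM per line under a header; check `prlctl list --help` on your version before relying on that:

```shell
#!/bin/sh
# Loop over every registered VM's UUID and back each one up.
# Assumes the -o uuid output format described above.
backup_all() {
  prlctl list -a -o uuid | sed 1d | while read -r uuid; do
    prlctl backup "$uuid"
  done
}

# backup_all   # uncomment to run against your local Parallels install
```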