I’ve been making guides to macOS Server since Server 2.
And along the way, I’ve also sold plenty of books on Mac servers and gotten a lot of opportunities I might not have gotten otherwise. So thank you to everyone for joining me on that journey. After teaching so many people how to use the services that Apple made available in their server operating system, when Apple announced they’d no longer be shipping many of the services my readers have grown dependent upon, I decided to start working on a guide to moving away from macOS Server.
And then there are tons of all-in-one small business server solutions, including Buffalo, QNAP, NetGear’s ReadyNAS, Thecus, LaCie, Seagate BlackArmor, and Synology. Because I happen to have a Synology, let’s look at setting up the same services we had in macOS Server, but on a cheaper Synology appliance.

In this article, we’ll cover how to use Zapier to connect data from your Jamf Pro instance to a Google Sheet. Once you build a WebHooks receiver in Zapier, you don’t have to use Google as the third-party service that your WebHook triggers. You could use any other service that Zapier integrates with as well, including Mailchimp, WordPress, Shopify, Todoist, ZenDesk, SurveyMonkey, Freshdesk, Quickbooks, Basecamp, and about 1,200 other solutions. In other words, you can link a WebHook from Jamf Pro into pretty much any automated service that you can think of!

So what’s a WebHook? A WebHook is an HTTP callback, or an HTTP POST that is fired when an event happens: a simple event notification, typically sending a small amount of JSON to a destination web server. A web application listening for that event will then receive the WebHook and perform a task in the background. Zapier is a great little tool that connects web apps. For modern software apps, most of this is done by acting as a WebHooks receiver and sending an API call to another tool.
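
For instance, a WebHook sender is doing little more than the following (a minimal sketch; the receiving URL and payload here are made-up placeholders, not what Jamf Pro actually sends):

# POST a small JSON payload to a receiving URL, which is all a WebHook sender really does
curl -X POST "https://example.com/my-webhook-receiver" \
  -H "Content-Type: application/json" \
  -d '{"event": "SmartGroupComputerMembershipChange", "computers": [123, 456]}'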

To follow along with this article, you will need a Zapier account, a Jamf account, and a Google account, since we’ll be using Zapier to connect Jamf Pro and Google.

Google Sheet Setup

  1. Sign up for or log in to your Google account at https://docs.google.com/spreadsheets.
  2. Create a new spreadsheet.
  3. Add column names for any fields you will want to store on your Google Sheet about your device (i.e. Date, Group, Name, MAC Address, IP Address, Make, Model, etc.). We will use these columns in the Zap to map our data to the correct column.

Zapier Setup

  1. Sign up for or log in to your Zapier account at https://zapier.com.
  2. Click the “Make a Zap!” button in the top right-hand corner.
  3. You will start by setting up the webhook for catching the smart group change.

Catching Smart Group Membership Change Zap

  1. Name your Zap
  2. Add Note (optional) – It might be good to add some details about what your zap does. You can do this by clicking the “Add Note” link under the name.
  3. Example Note: Catches the POST from the Smart Group Change webhook and gets the Id of the computer added and/or removed from the group, to get all the computer details and save to a Google Sheet.
  4. Setup your trigger step. Choose Webhooks under Built-In Apps.
  5. Select Catch Hook.
  6. Continue past the “Pick Off a Child Key” step, as there is nothing specific we need to select off the hook.
  7. Zapier will provide a URL to send requests to. This URL will need to be copied and pasted into your webhook settings in Jamf.
  8. Open up a new window to begin setup of the Jamf webhook, and leave this zap open, as you will return to it.

Jamf Setup of Webhook

  1. After copying the URL from the zap, open up a new window and log in to your Jamf account.
  2. Click the cog in the top right corner and go to your settings.
  3. Under Global Management click on the Webhooks icon.
  4. Click “New” button.
  5. Add a Display Name.
  6. Check “Enabled”.
  7. Paste the URL from the previous zap to the Webhook URL input.
  8. Set Authentication Type to “None”
  9. Fill in your preferred connection and read timeout
  10. Set Content Type to “JSON”
  11. Select the Webhook event from the dropdown – Set to “SmartGroupComputerMembershipChange”
  12. Set up the target smart computer group. You can select a group or apply to all groups.
  13. Save and return to your Zapier window with your previous zap.

Catching Smart Group Membership Change Zap (continued)

  1. After completing setup of the Jamf webhook, click “Ok, I did this” to begin a test and pull in a test sample. Then return to your Jamf window and trigger the webhook. The webhook can be triggered in two ways.
    1. Trigger the webhook by going to Computers -> Search Inventory -> (Select a computer) -> Edit -> Update the site to a new site that is in a different computer group and Save.
    2. Trigger the webhook by going to Computers -> Smart Computer Groups -> (Select a Group) -> Update the site to a new site that is in a different computer group and Save.
  2. If the event is triggered correctly, you will see a successful test result/message in Zapier.

  3. Next, set up an action step. Select Code under Built-In Apps.
  4. Select “Run Javascript”.
  5. Use the following Input Data Parameters:
    1. eventName: Select “Event Name” from Field Options.
    2. addedComputers: Select “Event Group Added Device Ids” from Field Options.
    3. removedComputers: Select “Event Group Removed Device Ids” from Field Options.
  6. In the code section, copy and paste the following code:

** The otherZapURL value in the code below will be replaced by the Catch Hook URL from a later zap.

// Zapier passes the mapped fields in through the inputData object.
// The added/removed computer IDs arrive as comma-separated strings, so split them into arrays.
var added = [];
var removed = [];
if (inputData.addedComputers) {
  added = inputData.addedComputers.split(',');
}
if (inputData.removedComputers) {
  removed = inputData.removedComputers.split(',');
}

// Replace this URL with the Catch Hook URL from the second zap, set up later in this article.
var otherZapURL = "https://hooks.zapier.com/hooks/catch/3491337/wk7qh8/";

// POST one request per added computer so the next zap can look each one up individually.
for (var i = 0; i < added.length; i++) {
  var addedBody = JSON.stringify({
    computer: added[i],
    status: "added",
    eventname: inputData.eventName
  });
  await fetch(otherZapURL, {method: 'POST', body: addedBody});
}

// Do the same for each removed computer.
for (var i = 0; i < removed.length; i++) {
  var removedBody = JSON.stringify({
    computer: removed[i],
    status: "removed",
    eventname: inputData.eventName
  });
  await fetch(otherZapURL, {method: 'POST', body: removedBody});
}

// Code steps in Zapier must return something; a simple status object will do.
return [{status: 'ok'}];

  7. This zap is done.
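
Once the second zap described below exists and you have its Catch Hook URL, you can also test that hook by hand with curl, POSTing the same shape of body this code step sends (the hook URL and computer ID here are placeholders):

# simulate one of the POSTs the code step above makes to the second zap's Catch Hook
curl -X POST "https://hooks.zapier.com/hooks/catch/123456/abcdef/" \
  -d '{"computer": "123", "status": "added", "eventname": "SmartGroupComputerMembershipChange"}'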

Saving Computer Details to Google Sheet

  1. Start a new zap.
  2. Add Note (optional) – It might be good to add some details about what your zap does. You can do this by clicking the “Add Note” link under the name.
  3. Example Note: Catches the POST from the Smart Group Change Zap and gets the Id of the computer added and/or removed from the group, to get all the computer details and save to a Google Sheet.
  4. Setup your trigger step. Choose Webhooks under Built-In Apps.
  5. Select Catch Hook.
  6. Continue past the “Pick Off a Child Key” step, as there is nothing specific we need to select off the hook.
  7. Zapier will provide a URL to send requests to. This URL will need to be copied and pasted into your previous zap, in the code section where the placeholder otherZapURL value was listed.
  8. Set up an action step. Select Webhooks under “Built-In Apps”.
  9. Select the “GET” action type.
  10. Fill in the following input:
    1. URL: Copy and paste https://kryptedjamf.jamfcloud.com/JSSResource/computers/id/ (replacing kryptedjamf.jamfcloud.com with your own Jamf Pro URL)
      1. Select “Computer” from the field options so the computer ID from the hook is appended to the URL.
      2. See below as a sample:

    2. Query String Params – leave blank
    3. Send As JSON – Select Yes
    4. JSON key – enter “json”
    5. Unflatten – Select Yes
    6. Basic Auth – separate your username and password with a pipe character
      1. i.e. USERNAME|PASSWORD
    7. Headers – leave blank (a curl equivalent of this GET request is sketched after this list)
  11. Continue through to the Test Step and make sure data was received about the computer.
  12. If successful, continue to add a third action step by clicking the plus icon under your previous step and selecting Action.
  13. Search for Google Sheets in Search for Apps.
  14. Select Google Sheets and select the action type “Create Spreadsheet Row”.
  15. Connect the Google account you used to create your Google Sheet in the previous steps. See the Google Sheet Setup section earlier in this document.
  16. Select the spreadsheet you set up under the Spreadsheet input.
  17. Select the worksheet that has your column names.
  18. After the worksheet is selected, each column will appear as an input option to map fields to. You will need to map each column to the appropriate field by selecting the field from the field options menu to the right of the input.
  19. After you’ve finished the mapping, select Continue to test the step. If the test was successful, you should see your new record on your Google Sheet.
  20. Turn both of your zaps on and you are ready to automatically record all your smart group changes!
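
For reference, the GET step above is roughly equivalent to the following request against the Jamf Pro Classic API; swap in your own Jamf Pro URL, API credentials, and a real computer ID:

# fetch a computer record from the Jamf Pro Classic API as JSON, authenticating with basic auth
curl -s -u 'USERNAME:PASSWORD' \
  -H "Accept: application/json" \
  "https://kryptedjamf.jamfcloud.com/JSSResource/computers/id/123"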

This was written for a specific use case, but because Zapier integrates with so many apps, and many of those can be hooked to other apps, there are a nearly infinite number of ways to link Jamf Pro Smart Group changes (one of the most valuable things that Jamf does) to other solutions!

 

Since some of the more interesting features of Time Machine Server are gone, let’s talk about doing even more than what was previously available in that interface by using the command line to access Time Machine.

As with any other command, you should probably start by reading the man page. For Time Machine, that would be:

man tmutil

Sometimes, the incantation of the command you’re looking for might even be available at the bottom of the man page. Feel free to use the space bar a few times to skip to the bottom, or q to quit the man interface.

In addition to the man page, there’s a help command, which can be used in conjunction with any of the command verbs (which makes me think of “conjunction junction, what’s your function”). For example, you can tell tmutil to compare backups using the compare verb. To see more on the usage of the compare verb, use tmutil followed by compare (the verb, or action you wish the command to perform), followed by help:

/usr/bin/tmutil compare help

Before you start using Time Machine, you’ll want to set a backup source and destination. Before you do, check the destination that’s configured:

/usr/bin/tmutil destinationinfo

The output will include something like the following:

Name: TimeMachineBackup
Kind: Network
URL: afp://;AUTH=No%20User%20Authent@MyCloud-YAZ616._afpovertcp._tcp.local/TimeMachineBackup
ID: 265438E6-73E5-48DF-80D7-A325372DAEDB


Once you’ve checked the destination, you can set a destination. For example, the most common destination will be something like /Volumes/mybackupdrive where mybackupdrive is a drive you plugged into your computer for Time Machine. 

sudo /usr/bin/tmutil setdestination /Volumes/mybackupdrive

Once you’ve configured a destination for your backups, it’s time to enable Time Machine. The simplest verbs to use are going to be the enable and disable verbs, which you might guess turn Time Machine on and off respectively. For these, you’ll need elevated privileges. To turn Time Machine on:

sudo /usr/bin/tmutil enable

To then disable Time Machine:

sudo /usr/bin/tmutil disable

You can also kick off a backup manually. To do so, use the startbackup verb as follows:

sudo /usr/bin/tmutil startbackup

To see the status, once you’ve kicked off a backup (this one is gonna’ be hard to remember) use the status verb:

sudo /usr/bin/tmutil status

Or to stop a backup that is running (e.g. if your computer is running slowly and you think it’s due to a backup running), you’d use the stopbackup verb:

sudo tmutil stopbackup


Once backups are complete, you can see the directory they’re being stored in with the machinedirectory verb. This will become important when we go to view information about backups and compare backups, which require that directory to be available as those options check local files and databases for information directly. The tmutil verb to do that is machinedirectory:

sudo /usr/bin/tmutil machinedirectory

Other options you can enable include the ability to exclude files or directories from your backups. For example, you likely won’t want to back up music or movies that were purchased on iTunes, as they take up a lot of space and are dynamically restored from Apple in the event that such a restore is necessary. The verb to do so is addexclusion, and this also requires sudo. So to exclude the user krypted’s ~/Music directory, you’d use a command as follows:

sudo /usr/bin/tmutil addexclusion /Users/krypted/Music

To then check if a directory is excluded, use the isexcluded verb and define the path:

sudo /usr/bin/tmutil isexcluded /Users/krypted/Music

If you make an errant exclusion, do the opposite to remove it, leveraging the removeexclusion verb:

/usr/bin/tmutil removeexclusion /Users/krypted/Music
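
If you have several paths to exclude, a small loop keeps things consistent; a minimal sketch, assuming the krypted user and a couple of typical space hogs:

# add an exclusion for each directory, then confirm that it took
for dir in /Users/krypted/Music /Users/krypted/Movies; do
  sudo /usr/bin/tmutil addexclusion "$dir"
  /usr/bin/tmutil isexcluded "$dir"
done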

Once a backup is complete, you can also check various information about the backups. This can be done using a few different verbs. One of the more common manual tasks that is run is listing the recent backups that can be restored. This is done using the listbackups verb with no operators (the backup directory needs to be available when run, so cd into that before using listbackups).

/usr/bin/tmutil listbackups

You can also view the latest backup, which can then be grabbed by your management tool; the backup name is provided in the YYYY-MM-DD-HHMMSS format:

/usr/bin/tmutil latestbackup
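
For example, a quick script (or an extension attribute in your management tool) might capture that value and report just the timestamped folder name; a minimal sketch:

# grab the path to the most recent backup and strip everything but the YYYY-MM-DD-HHMMSS name
LATEST=$(/usr/bin/tmutil latestbackup)
echo "Most recent Time Machine backup: ${LATEST##*/}"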

You can also compare backups so you can see the files that have been changed, added, and removed, as well as the size of the drift between the two backups. To do so, use the compare verb and provide the paths between the two backups that were obtained when using the listbackups verb, as follows:

/usr/bin/tmutil compare "/Volumes/mybackupdrive/Backups.backupdb/Krypted/2018-04-24-051014" "/Volumes/mybackupdrive/Backups.backupdb/Krypted/2018-04-24-061015"

In the above paths, mybackupdrive is the backup volume and Krypted is the name of the computer being backed up. You can also look at all of the backups (and potentially derive future space requirements based on a trend line) by using the calculatedrift verb:

/usr/bin/tmutil calculatedrift /Volumes/mybackupdrive/Backups.backupdb/Krypted

At times, you may end up replacing infrastructure, so you might move backups to a new location, or move backups to a new solution. You can use the inheritbackup verb to claim an existing machine directory for the new setup. So if you moved your backups from /Volumes/mybackupdrive/Backups.backupdb/Krypted to /Volumes/mylargerbackupdrive/Backups.backupdb/Krypted during an upgrade, you might run the following so you don’t have to start backing up all over again and end up wiping out your backup history:

/usr/bin/tmutil inheritbackup /Volumes/mylargerbackupdrive/Backups.backupdb/Krypted

Or if you have both available at once, use the associatedisk verb with the new volume followed by the old volume:

sudo /usr/bin/tmutil associatedisk "/Volumes/mylargerbackupdrive/Backups.backupdb/Krypted" "/Volumes/mybackupdrive/Backups.backupdb/Krypted"

Or if you do want to start over but want to clear out old backups, you can use the delete verb followed by the path to the backup or snapshot, as follows:

sudo /usr/bin/tmutil delete /Volumes/mybackupdrive/Backups.backupdb/Krypted

There are also a few more verbs available, mostly for APFS. The localsnapshot verb creates new local snapshots of your APFS volumes, and is used with no operators, as follows:

sudo /usr/bin/tmutil localsnapshot

To then see the snapshots, use the listlocalsnapshots verb along with a mount point:

sudo /usr/bin/tmutil listlocalsnapshots /

Which outputs as follows:
com.apple.TimeMachine.2018-04-20-061417

Or to constrain the output for easier parsing, use listlocalsnapshotdates:

sudo /usr/bin/tmutil listlocalsnapshotdates

Which outputs as follows:

2018-04-20-061417

And you can delete a snapshot with the deletelocalsnapshots verb:

sudo tmutil deletelocalsnapshots 2018-04-20-061417

Now, thinning out your backups is always an interesting task. And in my experience your mileage may vary. Here, you can use the thinlocalsnapshots verb to prune the oldest data from backups. In the following example, we’re going to purge 10 gigs of data:

sudo /usr/bin/tmutil thinlocalsnapshots / 10000000000
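
If the raw byte count is hard to read, shell arithmetic can spell it out; tmutil also accepts an optional urgency level (1 to 4) after the purge amount:

# purge roughly 10 GB of local snapshot data at a moderate urgency
sudo /usr/bin/tmutil thinlocalsnapshots / $((10 * 1000 * 1000 * 1000)) 2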

Finally, let’s talk about automated restores. You could use this type of technology to do a rudimentary form of imaging or for rolling users onto a new machine. To restore a backup, you would use the (shocking here) restore verb. First, let’s look at restoring a single file. In the following example, we’ll restore a file called mysuperimportantfile from a computer called mycomputername, providing the date of the snapshot we’re restoring from along with a destination to restore the file to:

sudo /usr/bin/tmutil restore /Volumes/mybackupdrive/Backups.backupdb/mycomputername/2018-04-24-051015/Macintosh\ HD/Users/krypted/Desktop/mysuperimportantfile /Users/krypted/Desktop/mysuperimportantfile

Now, let’s look at restoring a volume. Here, we’re going to change our working directory to the root of our latest backup (while not booted to the volume we’re about to erase and overwrite with a backup):

cd "/Volumes/Time Machine Backup Disk/Backups.backupdb/mycomputername/Latest/Macintosh HD"

And then (this is dangerous, as it wipes out what’s on the old volume with the backed up data):

sudo /usr/bin/tmutil restore -v "/Volumes/Time Machine Backup Disk/Backups.backupdb/mycomputername/Latest/Macintosh HD" "/Volumes/Macintosh HD"

Now, let’s talk about what’s realistic. If I were to programmatically erase one of my coworkers’ data, I’d really, really want to verify that everything they need is there. So I’d run a checksum against the source and only wipe it once I’d verified that absolutely everything went where I wanted it to go. I would trust a cloning tool, but would I want to basically write my own archival solution using tmutil? No. I’ve simply seen too many strange little tidbits here and there that make me not… exactly… trust it with other people’s data. With my own data, though… sure! <3
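
If I did want to script that kind of verification, one starting point might be recursively comparing what was restored against the backup it came from, using the same paths from the restore above:

# report any files that differ between the backup and the restored home folder
diff -rq "/Volumes/Time Machine Backup Disk/Backups.backupdb/mycomputername/Latest/Macintosh HD/Users/krypted" \
  "/Volumes/Macintosh HD/Users/krypted"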

The DNS service in macOS Server was simple to set up and manage. It’s a bit more manual in macOS without macOS Server. The underlying service that provides DNS is BIND. BIND will require a compiler to install, so first make sure you have the Xcode command line tools installed. To download BIND, go to ISC at https://www.isc.org/downloads/. From there, copy the installer locally and extract the tar file. Once that’s extracted, run the configure script from within the extracted directory:

./configure --enable-symtable=none --infodir="/usr/share/info" --sysconfdir="/etc" --localstatedir="/var" --enable-atomic="no" --with-gssapi=yes --with-libxml2=no

Next, run make:

make

Then run make install:

make install

Now download a LaunchDaemon plist (I just stole this from the org.isc.named.plist on a macOS Server, which can be found at /Applications/Server.app/Contents/ServerRoot/System/Library/LaunchDaemons/org.isc.named.plist). The ownership of a custom LaunchDaemon needs to be set appropriately:

sudo chown root:wheel /Library/LaunchDaemons/org.isc.named.plist
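
If you don’t have a macOS Server install handy to borrow that plist from, a minimal LaunchDaemon along these lines should also work. This is a sketch: it assumes the build above installed named to /usr/local/sbin (the default prefix), and the -f flag keeps named in the foreground so launchd can supervise it. Writing it with sudo tee also leaves it owned by root:

sudo tee /Library/LaunchDaemons/org.isc.named.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.isc.named</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/sbin/named</string>
        <string>-f</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
EOF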

Then start it up and test it!

sudo launchctl load -w /Library/LaunchDaemons/org.isc.named.plist
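
To confirm named is actually answering, query it directly with dig, substituting a record from a zone you’ve defined in your named.conf:

# ask the local named for the SOA record of a zone it hosts
dig @127.0.0.1 example.org SOA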

Now you can manage the server as we described at http://krypted.com/mac-os-x-server/export-dns-records-macos-server/.

Acronis True Image is a cloud-based backup solution, available at https://www.acronis.com/en-us/support/trueimage/2018mac/. To install, download it and then open the zip.

Drag the Acronis True Image application to your /Applications directory. Then open Acronis True Image from /Applications. The first time you open it, you’ll be prompted to accept the licensing agreement.

Once accepted, you’ll be prompted to create an account with Acronis. Provide your credentials or enter new ones to create a trial account. 

At the activation screen, provide a serial or click Start Trial.

At the main screen, you’ll first want to choose the source (by default it’s the drive of the machine) and then click on the panel to the right to choose your destination.

For this example, we’re going to use the Acronis cloud service. 

Click on the cog wheel icon at the top of the screen. Here, you can set how and when the backup occurs. Click Schedule.

At the schedule screen, select the time that backups will run. Note that unless you perform file level backups, you can’t set the continual backup option. For that, I’d recommend not doing the whole computer and instead doing directories where you store data. Click on Clean Up.

Here, you’ll define your retention policies: how many backups you will store and for how long. Click Encryption.

Here you’ll set a password to protect the disk image that stores your backups. The disk image can’t be unpacked without it, so don’t forget the password! Click on Exclusions.

Here, use the plus sign icon to add any folders you want skipped in the backups. This could be stuff you don’t need backed up (like /Applications) or things you intentionally don’t want backed up. Click Network. 

Here you can throttle the speed of network backups. We’ll skip this for now. Now just click on the Back Up button to get your first backup under way!

If you want to automate certain configuration options, check for the com.acronis.trueimageformac.plist at ~/Library/Preferences to see if the app has been launched, as you can see from the defaults domain contents:

{
    SUEnableAutomaticChecks = 1;
    SUHasLaunchedBefore = 1;
    SULastCheckTime = "2018-04-07 21:33:01 +0000";
}
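
For example, a quick check from a script might read that same defaults domain (the key name here is just the one shown in the output above):

# prints 1 if Acronis True Image has been launched at least once on this Mac
defaults read com.acronis.trueimageformac SUHasLaunchedBefore 2>/dev/null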

There are also log settings available at 
/Applications/Acronis True Image.app/Contents/MacOS/acronis_drive.config:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<config><logging>
<channel id="ti-rpc-client" level="info" enabled="true" type="logscope" maxfiles="30" compress="old" oneday="true"/>
<channel id="http" level="info" enabled="true" type="logscope" maxfiles="30" compress="old" oneday="true"/>
<channel id="ti_http_srv_ti_acronis_drive" level="info" enabled="true" type="logscope" maxfiles="30" compress="old" oneday="true"/>
<channel id="ti-licensing" level="info" enabled="true" type="logscope" maxfiles="30" compress="old" oneday="true"/>
<channel id="acronis_drive" level="info" type="logscope" maxfiles="10" compress="old" oneday="true" />  <!--max 10 files, ?MB--></logging>