DeployStudio has the ability to rename volumes as part of a standard workflow. These are typically set to something like “Macintosh HD” (the default) or “Computer Lab” or something like that. But what if you wanted to name the volume something unique to a given computer, making it easier to keep track of what you are doing across a number of machines? You could create a workflow for each computer and change the hard drive name in each to something unique; but that would be tedious, pollute your list of workflows and likely result in accidentally running the wrong workflow at times. Instead, in most cases you can use a really simple script (depending on how complicated your logic for assigning names is).
To rename a volume, you can use the diskutil command along with the rename verb. You would then list the existing name followed by the new name that you’d like that volume to have (names containing spaces must be quoted). In the case of DeployStudio, the initial name of your boot volume might be “Macintosh HD” and to change the name to something like “Computer Lab” you would then use a command like:
diskutil rename "Macintosh HD" "Computer Lab"
It might then be logical to name the volume after the computer’s host name. Therefore, we could replace Computer Lab with the output of the hostname command like so:
diskutil rename "Macintosh HD" "`hostname`"
However, this ends up showing the fully qualified name. Therefore, we could replace hostname with an scutil query for the ComputerName:
diskutil rename "Macintosh HD" "`scutil --get ComputerName`"
This would result in the name without all the .local, etc. But if you ran this as part of a DeployStudio workflow, you would end up naming the hard drive on all of your machines localhost. This is because the hostname or ComputerName is queried from the DeployStudio set that you are booted to while running the DeployStudio Runtime. Luckily, DeployStudio has a number of variables that it can use in scripts. One of them is DS_HOSTNAME, which pulls the ComputerName being applied to the system at imaging. This means that if we were to rename the hard drive of the computer from Macintosh HD to the DS_HOSTNAME, you could use the following command:
diskutil rename "/Volumes/Macintosh HD" "$DS_HOSTNAME"
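Put together as a small postflight script, with quoting to handle the space in the volume name and a fallback to scutil (my own addition) in case DS_HOSTNAME is ever unset, it might look like:

```shell
#!/bin/bash
# Rename the freshly imaged volume to the name being applied at imaging time.
# "/Volumes/Macintosh HD" is an assumption; adjust to match your target volume.
TARGET_VOLUME="/Volumes/Macintosh HD"
# Fall back to the local ComputerName if DS_HOSTNAME is unset
NEW_NAME="${DS_HOSTNAME:-$(scutil --get ComputerName)}"
diskutil rename "$TARGET_VOLUME" "$NEW_NAME"
```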
Now, one might think to oneself: couldn’t I just put $DS_HOSTNAME in the field for renaming the hard drive (part of a workflow)? I tried it a number of different ways and couldn’t get it to work (in parentheses, quoted various ways, in different types of brackets and combinations of the above). If anyone knows of a way to use a variable in a GUI field within DeployStudio, let me know (I am guessing it can be done).
MacSysAdmin will again be held in Gothenburg, Sweden. The dates for MacSysAdmin (and most of the speakers) have been announced. The conference will be held from September 29th through October 1st at the Folkets Hus.
I am honored to again be a speaker and will be there throughout the conference, which includes sessions from a number of Mac gurus, including Arek Dreyer, Andrina Kelly, Alan Gordon, Karl Kuehn and Duncan McCracken.
Click here to sign up and hope to see you there!
The document handler in Podcast Producer has been exposed to the command line in the form of a tool called document2images (located in the /usr/libexec/podcastproducer directory), which takes a pdf and converts it into a set of tiff files. In its most basic iteration, the document2images tool simply inputs a document and then outputs a couple of tiff files per page of that document. 15 pages will typically net you 30 tiffs and an xml output (not that you can put Humpty Dumpty back together again very easily).
When you use document2images you will need to specify the pdf using the --document option, the xml file to output using the --xml option and the directory to drop your images into using the --imagespath option. To use an example of this command, if I wanted to convert a pdf called /Users/cedge/Desktop/test.pdf into images in the /Users/cedge/Desktop/tiffs directory and drop the XML file into /Users/cedge/Desktop, I would use the following command:
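Something along these lines; note that only the output directory for the xml was given, so the test.xml filename here is an assumption:

```shell
# document2images ships with Podcast Producer on Mac OS X Server
/usr/libexec/podcastproducer/document2images \
  --document /Users/cedge/Desktop/test.pdf \
  --xml /Users/cedge/Desktop/test.xml \
  --imagespath /Users/cedge/Desktop/tiffs
```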
It’s worth noting that the user invoking the command will need access to write to the directory that you’re dropping your images into, as well as the directory that the xml file will be written to (and of course, to read the pdf).
BRU Server Agent Config (UB) – A tool used to install the agent, which needs to be located on each machine that will be backed up (including the server if it has any data to back up)
BRU Server Config (UB) – Used to configure the server daemon, backup server configurations and set passwords to communicate with the server. Also used to set licensing information and perform scans for new tape drives and libraries.
BRU Server Console (UB) – Used to configure backup jobs, schedules, etc.
To get started, open the BRU Server Config application from the components that come with your software (or that you downloaded from the BRU website). First you will be asked to provide an administrative password to BRU. Provide the password and then click on Save.
Next, the server components will be copied to /usr/local/bru-server. The system will also perform a hardware scan of your server, looking for tape drives and libraries (you can always rerun this process later if need be).
Once the processes are complete the BRU Server Configuration Tool will open and you can configure the server. To do so, first click Start to start the daemon. If you need to restart it at a later date you can simply click on the Stop or Restart buttons here. Then, if, like most, you would like the server to start at boot, check the box for Server daemon starts at boot. Here, you can also use the Backup and Restore buttons to back up and restore server configurations, or the Modify button to enter a new password for the server.
You can also perform most of these options from the command line using the server command located in /usr/local/bru-server. For example, to stop the server, you would use the --kill option:
To then start the server, run it with no arguments:
Or to set the password, you would use (go figure) the --password option:
You can also perform some options not exposed in the Configuration Tool GUI, such as running it on a custom port using the --port option followed by the port number:
Finally, you can check the version and license information using the --version and --license options respectively.
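Assuming the server binary is invoked directly out of /usr/local/bru-server (the binary name and exact invocation here are assumptions based on the paths above; check the BRU documentation for your version), those commands might look like:

```shell
cd /usr/local/bru-server
./server --kill        # stop the server daemon
./server               # start the server (no arguments)
./server --password    # set a new administrative password
./server --port 9000   # run on a custom port (9000 is a made-up example)
./server --version     # show version information
./server --license     # show license information
```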
Once you are satisfied with your configuration of the server component, you can close the tool and move on to installing the Agent(s). Each machine that will get backed up will need an agent installed. Configure the options for the BRU Agent using the BRU Server Agent Config application. Simply open the application from your installer. On first open, the application will copy /usr/local/bru-server to your machine (if you installed the server it will just copy the agent portions of BRU), which contains the agent. You will also then see a b icon in the menu bar. Click on the b icon in the menu bar and then click on Agent Configuration to bring up a screen similar to the following.
Here, click on the Start button to configure the agent. You will typically want the agent to start automatically when you install a system, so click on the check box for Agent daemon starts automatically. You will then need to provide a server that the agent can communicate with. To do so, click on the plus sign (+) on the screen and then provide the server that the agent can communicate with and the credentials to do so. Once complete, you will then be able to see the client system in the Server Console.
You can also configure it from the command line fairly easily. To do so, run the agent, located in /usr/local/bru-server, along with the --config option:
The BRU Server Agent Configuration will then enter interactive mode and you will see any BRU Servers that the agent is configured to communicate with. Here, type N and you will be prompted for the hostname of your BRU Server. Provide the name or an IP address for the server and then hit the enter key. When prompted, provide a password to enter into the Console. The server will then be assigned a unique number; entering that number at the interactive prompt will remove the server again. Once the agent has been started, it can be stopped by running the agent command with the --kill option:
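Assuming the agent binary is simply called agent and lives in /usr/local/bru-server (an assumption based on the path above), the two invocations referenced here might look like:

```shell
cd /usr/local/bru-server
./agent --kill     # stop a running agent
./agent --config   # re-enter interactive configuration mode
```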
Note: For Windows, the configuration command line tool is located in C:\Program Files\BRU Server Agent Configuration.
Now that you have configured the agent and the server, it’s time to actually setup jobs and schedules. To get started, open the BRU Server Console application. The console components will then be copied into /usr/local/bru-server.
Click on OK and then authenticate to the server.
Once you have logged in, you will see the console. If the installation of the agents went properly, you should see any that you have installed as well.
Disk-to-Disk backups in BRU are mostly considered a staging area, where data is stored while waiting to be shuttled to tape. To set the staging area, click on the BRU Server Console menu and then click on Preferences…
In the Stage Path field, provide a path that the stage files will be stored in. You can also set the maximum age of the staging data and the number of jobs to be stored in the history. When you’re satisfied with your settings, click on the Save button.
Back at the Console screen, you will click on the plus sign (+) to add a new backup job, which will bring up the screen you see here.
The backup job will include the following options:
Job name: A name for the job.
Destination: Where the backup will be written to. You can use Stage Disk or choose a tape drive/library.
Backup Type: Your first job will need to be a full; subsequent jobs can be incremental or differential, which will require you to set a full job that you have created as the “Base Job”. An incremental will back up files that have been altered since the last incremental or full backup. A differential will back up all files altered since the last full, even if they were already backed up. Differentials will lead to faster restore times as you near the end of a backup cycle; however, they will usually take up considerably more space.
Base Job: The full backup job to base a differential or incremental backup job on.
Compression: Whether or not the software will attempt to compress data. Enabling compression makes backups slower, but the resulting archives take up less space.
Email: An address to send backup reports to for the given job.
Verify Backup: Performs a scan of backed up files to ensure they match the source. This will take longer than if you do not enable it but provides peace of mind/assurance/etc.
Eject Tape after job completes: Only used if you are using tape, usually not used if you are using tape libraries.
Enable archive encryption: Encrypts archives 😉
Once you have configured a job as you see fit, click on OK and you will be taken back to the BRU Server Console screen. For each job you will still need to configure a schedule for the job as well as what source directories/files to be backed up. To set the schedule, click on the job name to be scheduled and then click on the Schedule… button. At the Job Scheduler screen, set the frequency and starting times that your job should run at and then click on the Save button.
You will then need to configure the source directories for your backups. Back at the Console screen, click on the name of the job and then click on each directory to be backed up. Clicking on a directory will cycle through color codes. The colors indicate whether or not the directory will be backed up:
yellow indicates that part of a directory will be backed up
green indicates the entire directory will be backed up
red indicates that a directory will explicitly be skipped
When you are satisfied with your backup job, click Save. You will then configure an incremental or differential job for each base job and finally a job that is specifically for upstaging data to tape, completing the disk-to-disk-to-tape sequence. When you are finished configuring each of your jobs you can run them manually to test by clicking on the Run Now button while the job is selected in the console. When running, you can monitor each job using the Tools icon in the side bar and then the Job Monitor option in the Tools drop-down menu. To stop a job that is running, you can click on the Kill command here.
You can also run jobs from the command line, using the backup option for the bru-server.cmd command located in /usr/local/bru-server. The command can be run using the -j option (name of job), followed by the name of the job to be run, followed by the -t option (type of job), followed by the type of job being run (i.e. Full, Incremental or Differential), followed by -Z (enable compression) and -v (enable verification), followed by the paths (starting with server names) to be backed up in brackets. For example, to run our test job:
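Something along these lines; the server name mediaserver, the path and the bracket style are assumptions, so check the BRU documentation for the exact syntax on your version:

```shell
# Hypothetical: run the job named test as a full, compressed, verified backup
/usr/local/bru-server/bru-server.cmd backup -j "test" -t Full -Z -v [mediaserver:/Users/Shared]
```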
This allows you to somewhat seamlessly integrate the backup of files that are archived with Final Cut Server, by calling the backup command as a post-flight action for any automations kicked off by Final Cut Server. You can then restore files that are backed up using the bru-server.cmd command’s restore option. In order to use the restore option, you’ll need to know which archive the file is stored in. To find that, you will also need to script the search option (search for the appropriate file and then craft your restore to pull data back to the restore path for fcsvr_client, using the correct archive that the file is stored on). To search through the archives for the appropriate file:
search "my file.mov"
You can also provide archives as part of the search, but we likely wouldn’t be searching here if we knew which ones to use.
Note: The BRU commands are based on python. When the python environment on a machine has been customized the results for BRU can be unexpected.
There are a lot of versions of the popular perl scripting language out there and, depending on which version you wrote a script against, you might find that using a different version than the one that comes with the OS by default can have a drastic impact on the script. In Mac OS X you can change the default version of perl that the perl and a2p commands will use. Before doing so, you should check the version of perl being used by default, which can be done using the perl command, followed by the -v option:
By default, the OS currently uses version 5.10.0. To change the version, you would use the defaults command to change the com.apple.versioner.perl defaults domain. You will add a key called Version with a string of the version you would like to use. For example, to switch to 5.8.8:
defaults write com.apple.versioner.perl Version -string 5.8.8
To change it back to 5.10.0:
defaults write com.apple.versioner.perl Version -string 5.10.0
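To confirm the change, you can read the key back and run perl -v again to see which version now answers (the defaults read check is my own addition):

```shell
defaults read com.apple.versioner.perl Version
perl -v
```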
Final Cut Server has an option to archive and restore assets. When archiving an asset, the asset will be moved to a file system path that is represented by the device ID. The archival and restore can be done using the steps shown in this video:
The process of archival and restore can be kicked off from the command line, which will initiate the movement of the asset. To archive an asset, you will use the archive verb with the fcsvr_client tool. This will require you to provide the asset ID number along with the device that you will be archiving the asset to. For example, to archive an asset with an ID of 318 to a device with an ID of 8 you will use the following command:
fcsvr_client archive /asset/318 /dev/8
Once archived, the asset can be easily restored provided that it is still in the archive path that it was backed up to. So assuming that the asset is still on /dev/8, you could use the following command to restore the asset (the device path is implied, as it is tracked in the metadata that corresponds to the asset ID):
fcsvr_client restore /asset/318
When archiving and restoring, it is never a bad idea to log that the action was sent to the queue. For example, if the asset ID were in a variable called ASSET and the device ID in one called DEV, then you could use the following to log that the automation had been fired off:
fcsvr_client archive /asset/$ASSET /dev/$DEV
/usr/bin/logger "Asset $ASSET is being copied to device $DEV"
One of the tools in the iCal -> iCal Server troubleshooting toolbelt is to debug log HTTP connections. You can capture packets for port 8008 using tcpdump. In the following command, we’ll capture the packets over interface en0 for tcp port 8008 to a file called iCal.pcap:
tcpdump -w iCal.pcap -i en0 tcp port 8008
We’ll then attempt to create a calendar entry in iCal or simply log into the server through iCal. CalDAV traffic will occur and then you can stop the tcpdump. In order to then read the tcpdump:
tcpdump -nnr iCal.pcap
Another option that can help to correlate traffic you see in the pcap from tcpdump is to enable debug logging of HTTP traffic in iCal. To do so, we’ll use the defaults command to write a TRUE value into the LogHTTPActivity key of the com.apple.iCal defaults domain:
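Something like the following, run as the user that runs iCal (I’m using -bool here; writing the string TRUE should behave the same):

```shell
defaults write com.apple.iCal LogHTTPActivity -bool TRUE
```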
Given the output of LogHTTPActivity and the tcpdump to iCal.pcap, you can in most cases triangulate the source of many of the problems that you encounter with iCal Server. Whether iCal cannot traverse to a given directory of CalDAV data, cannot connect to the server, or there is another form of connectivity issue, much of the troubleshooting will start with looking at the traffic over port 8008.
When you’re integrating Final Cut Server with other products, you often find yourself writing scripts to perform various tasks. One of those tasks might be to create a new project, or a production as it’s called in Final Cut Server. Because a production can have a number of attributes, a great way to do this is to create a template production and then make copies of it (or clones) when you want to create subsequent projects. To do so, you’ll use the fcsvr_client command along with the clone verb. The --name option allows you to set the name of the production, followed by the unique ID of the production template that you manually created using the Final Cut Server application. Presuming we are creating a production called Emerald with a template of /project/298, we could use the following:
fcsvr_client clone --name Emerald /project/298
If we wanted to get the ID of this project, we would then use:
fcsvr_client search --crit Emerald /project
We could then go a step further and actually create an asset in this new project by using the --projaddr option for createasset. In the below example, we’ll presume that the new project ID was 299 and then create an asset called Emerald1.mov that is stored on a device with an ID of 5, as well as provide a description and a tag:
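With made-up values for the description and keyword fields (the custom metadata field names your installation uses may differ), that command might look like:

```shell
# Hypothetical example; 299 is the presumed production ID and 5 the device ID
fcsvr_client createasset --background --projaddr /project/299 pa_asset_media /dev/5/Emerald1.mov CUST_DESCRIPTION="Emerald raw footage" CUST_KEYWORDS="emerald"
```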
Now to throw it all together in a little script that could be kicked off from another application. In the below, we assume that it is a bash script that was handed a project name via $1, a device ID in $2 and a file name in $3:
fcsvr_client clone --name "$1" /project/298
MyProjectID=`fcsvr_client search --crit "$1" /project`
/usr/bin/logger "Production $MyProjectID with name of $1 created"
fcsvr_client createasset --background --projaddr /project/$MyProjectID pa_asset_media /dev/$2/$3 CUST_DESCRIPTION="Automatically uploaded file" CUST_KEYWORDS="movie, automated"
/usr/bin/logger "Asset $3 on device $2 created in production $1"
If we had just wanted to create the asset, we could have simply used the createasset line on its own, placing the Project ID in $1 by changing out $MyProjectID with $1. We could also use Transmogrifier to easily return the assetID once it has been created in Final Cut Server, allowing that to be returned to the application that might be calling this script. This allows you to integrate the asset and production creation part of Final Cut Server with other solutions, such as a PHP web upload application, FileMaker or even another Digital Asset Management solution.
As of version 8, Retrospect uses port 22024 when the Retrospect Console needs to communicate with the engine. It just so happens that this port can become unresponsive when the engine itself decides to stop working. Therefore, if you’re using Retrospect 8, you can run a port scan against port 22024 (i.e. stroke <IP_ADDRESS> 22024 22024) and then restart the engine if it goes unresponsive. To restart the engine, simply unload and then load com.retrospect.launchd.retroengine. For example:
/bin/launchctl unload /Library/LaunchDaemons/com.retrospect.launchd.retroengine.plist; /bin/launchctl load /Library/LaunchDaemons/com.retrospect.launchd.retroengine.plist
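The check-and-restart logic can be wrapped into a small watchdog script, suitable for running from cron or a LaunchDaemon. This is a sketch that assumes stroke is in your path and reports open ports with the word Open:

```shell
#!/bin/bash
# Restart the Retrospect engine if port 22024 stops answering
if ! stroke 127.0.0.1 22024 22024 | grep -q "Open"; then
  /bin/launchctl unload /Library/LaunchDaemons/com.retrospect.launchd.retroengine.plist
  /bin/launchctl load /Library/LaunchDaemons/com.retrospect.launchd.retroengine.plist
fi
```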
I have found that if you alter the nice value, the engine crashes less (not that I’m saying that it crashes a lot or is buggy, btw; I’ve just seen it in a few cases now). To do so, change the nice value in /Library/LaunchDaemons/com.retrospect.launchd.retroengine.plist from the default (0) to -10 (or even -20).
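One way to make that change from the command line is with PlistBuddy, assuming the plist already contains a Nice key (use Add :Nice integer -10 if it does not), then reload the daemon so the change takes effect:

```shell
sudo /usr/libexec/PlistBuddy -c "Set :Nice -10" /Library/LaunchDaemons/com.retrospect.launchd.retroengine.plist
sudo /bin/launchctl unload /Library/LaunchDaemons/com.retrospect.launchd.retroengine.plist
sudo /bin/launchctl load /Library/LaunchDaemons/com.retrospect.launchd.retroengine.plist
```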
Historically, there have been intermittent issues with the client software running. To determine if it’s running or stopped from within the host that the client is running on you can use the following (for versions 6 and below):
ps -cx | grep retroclient
Or you can use the following for version 8:
ps -cx | grep pitond
Or you can port scan port 497 for the client:
stroke <IP_ADDRESS> 497 497