krypted.com

Tiny Deathstars of Foulness

ServerBackup is a new command included in Lion Server, located at /usr/sbin/ServerBackup. The ServerBackup command is used to back up the settings for services running on a Lion Server. The command is pretty easy and straightforward to use, but does require that you be using Time Machine in order to actually run. In its most basic form, ServerBackup is invoked to run a backup using the backup command. Commands are prefixed with -cmd followed by the actual command; as you might be able to guess, the commandlet to fire off a backup is backup. The backup command requires a -source option, which will almost always be the root of the boot volume (/):

/usr/sbin/ServerBackup -cmd backup -source /

The data backed up begins in a .ServerBackups directory on the root of the host running Time Machine. Once the backup is complete, the data is moved over to the actual Time Machine volume, using a path of:

/Volumes/<TimeMachine_volume_name>/Backups.backupdb/<hostname>/<date>/<GUID>/<Source_Volume_Name>/.ServerBackups

The output of a backup should look similar to the following:
2012-02-01 10:05:17.888 ServerBackup[15716:107] Error encountered creating ServerMetaDataBackupFolder at path := /.ServerBackups!
*** nextPath := 40-openDirectory.plist
*** nextPath := 45-serverSettings.plist
*** nextPath := 46-postgresql.plist
*** nextPath := 55-sharePoints.plist
*** nextPath := 65-mailServer.plist
*** nextPath := 70-webServer.plist
2012-02-01 10:05:18.480 ServerBackup[15716:107] SRC := /etc/apache2/ DST := /.ServerBackups/webServer
Failed to copy /etc/apache2/ to /.ServerBackups/webServer/etc/apache2; ret -> 0
2012-02-01 10:05:18.483 ServerBackup[15716:107] SRC := /etc/certificates/ DST := /.ServerBackups/webServer
Failed to copy /etc/certificates/ to /.ServerBackups/webServer/etc/certificates; ret -> 0
*** nextPath := 75-iChatServer.plist
*** nextPath := com.apple.ServerBackup.plist
curServicePath := /.ServerBackups/openDirectory/openDirectory.browse.plist
WARNING: Service openDirectory folder does not exist for browsing.
curServicePath := /.ServerBackups/serverSettings/serverSettings.browse.plist
WARNING: Service serverSettings folder does not exist for browsing.
curServicePath := /.ServerBackups/postgresql/postgresql.browse.plist
WARNING: Service postgresql folder does not exist for browsing.
curServicePath := /.ServerBackups/sharePoints/sharePoints.browse.plist
WARNING: Service sharePoints folder does not exist for browsing.
curServicePath := /.ServerBackups/mailServer/mailServer.browse.plist
WARNING: Service mailServer folder does not exist for browsing.
curServicePath := /.ServerBackups/webServer/webServer.browse.plist
WARNING: Service webServer folder does not exist for browsing.
curServicePath := /.ServerBackups/iChatServer/iChatServer.browse.plist
WARNING: Service iChatServer folder does not exist for browsing.
There are usually a lot of warnings, as any given service might not be in use on the server. There is a postBackupComplete commandlet that is supposed to remove the .ServerBackups directory following the backup; however, the default behavior seems to be to remove the directory without requiring that option. You can then view the backup snapshots by path (they can also be viewed by cd'ing straight into them):

/usr/sbin/ServerBackup -cmd list

To delete a snapshot from the list shown (where <PATH> is a path from the output of list):

/usr/sbin/ServerBackup -cmd purgeSnapShot -path <PATH>

The backup files themselves are named for the service followed by a .conf extension; however, the data in the configuration files is just the output of serveradmin settings for the service, such as what you would get from the following:

serveradmin settings afp > afp.conf

For running services, there's also a .status file (personally, I'd prefer a .fullstatus file instead if I had my druthers). While all services are exported, and can be manually restored by flipping that > from the above command to a <, some services can also be restored using the services commandlet. To see a list of services that are backed up specifically and can be granularly restored as an option:

/usr/sbin/ServerBackup -cmd services

To restore:

/usr/sbin/ServerBackup -cmd restore -path /Volumes/VOLUMENAME/Backups.backupdb/HOSTNAME/SNAPSHOT -target /

To restore a specific service (for example, the iCal Server):

/usr/sbin/ServerBackup -cmd restoreService -path /Volumes/VOLUMENAME/Backups.backupdb/HOSTNAME/SNAPSHOT -target / -service <SERVICE>

Currently, ServerBackup is not included in the daily, weekly or monthly periodic scripts, and it does not back up actual data, just settings, so if you're going to rely on it, you might need to automate server settings backups as needed. The ServerBackup command does a few pretty cool things. However, there is a lot more work needed to get it to be holistic.
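Since those .conf files are just serveradmin settings output, you can reproduce them by hand in a few lines of shell. Here's a minimal sketch; the loop, the output directory and the reliance on serveradmin list are my own choices, not part of ServerBackup:

```shell
#!/bin/sh
# Sketch: export every service's settings the same way ServerBackup's
# .conf files do, one file per service. The output directory is arbitrary.

OUT="/tmp/server_settings"

dump_settings() {
  svc="$1"
  serveradmin settings "$svc" > "$OUT/${svc}.conf"
}

# Only run where serveradmin actually exists (i.e. on the server itself).
if command -v serveradmin >/dev/null 2>&1; then
  mkdir -p "$OUT"
  for svc in $(serveradmin list); do
    dump_settings "$svc"
  done
fi
```

Pair this with a cron or launchd job and you have a poor man's version of the settings side of ServerBackup, with output you can diff over time.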
We've been working on scripts for similar tasks for a long time. For more information on that, see sabackup.sourceforge.net (although we're likely to relocate it to GitHub soon). For more information on ServerBackup itself, see the help page (no man page as of yet):

/usr/sbin/ServerBackup -help

To see which version of ServerBackup is installed (not actually very helpful, but it can be used to programmatically verify that ServerBackup is the latest version):

/usr/sbin/ServerBackup -cmd version

Supposedly there is a prefs command, but I have yet to actually get it to do anything:

/usr/sbin/ServerBackup -cmd prefs

Finally, if you are scripting this stuff, don't forget quotes (as you might have a space in the hostname). Also, a quick sanity check to determine size and make sure there's available capacity uses the size commandlet, which outputs the space required for a ServerBackup backup:

/usr/sbin/ServerBackup -cmd size
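Putting the size commandlet and the quoting advice together, here is a hedged sketch of the kind of pre-flight wrapper you could schedule yourself, given that ServerBackup isn't in the periodic scripts. The awk parsing of the size output is an assumption; check the real output format on your server before relying on it:

```shell
#!/bin/sh
# Hypothetical pre-flight wrapper: compare the space ServerBackup says it
# needs against free space on the boot volume, then run the backup.
# ASSUMPTION: the first field of `ServerBackup -cmd size` is bytes.

enough_space() {
  # Succeed when available kilobytes cover the required kilobytes.
  required_kb="$1"
  available_kb="$2"
  [ "$available_kb" -ge "$required_kb" ]
}

if [ -x /usr/sbin/ServerBackup ]; then
  required_kb=$(/usr/sbin/ServerBackup -cmd size | awk '{print int($1 / 1024)}')
  available_kb=$(df -k / | awk 'NR==2 {print $4}')
  if enough_space "$required_kb" "$available_kb"; then
    /usr/sbin/ServerBackup -cmd backup -source "/"
  else
    echo "insufficient space for server settings backup" >&2
    exit 1
  fi
fi
```

Note the quoted "/" on the -source option; the same quoting habit saves you when a path or hostname contains a space.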

February 1st, 2012

Posted In: Mac OS X Server, Mac Security, Time Machine


Mac OS X Server 10.7, Lion Server, comes with a few substantial back-end changes. One of these is the move from SQLite3 to PostgreSQL for many of the back-end databases, including Wiki and Podcast Producer (collab), Webmail (roundcubemail), iCal Server and Address Book Server (caldav) and, as the back-end to the newest service in Lion Server, Profile Manager (device_management). As such, it's now important to be able to use PostgreSQL the way we once used SQLite3 when trying to augment the data that these databases contain, as there currently aren't a lot of options for editing this data (aside from manually, of course). Postgres has a number of commands that can be used to interact with databases. The most important is probably psql. Many of the other commands simply provide automated front ends to psql, and over time I've started using psql for most everything. For example, PostgreSQL comes with a command /usr/bin/createuser. However, as psql is usually more verbose with errors, I like to use psql for this instead. In Lion Server, the only user that can access the Postgres databases is _postgres, installed by default with Lion Server. Because a lot of commands require passwords and we might not always want to provide write access to the databases, we're going to create a new superuser, called krypted, with a password of daneel. To do so, we will have to use the _postgres user to invoke psql. Any time you want to invoke psql as a different user than the one you are currently logged in as, use the -U option. To define a database, use the -d option (device_management provides access to Profile Manager data, caldav to iCal Server data, roundcubemail to WebMail data and collab to Wiki data).
To string this together, for accessing the device_management database as _postgres:

psql -U _postgres -d device_management

To then create a new user called krypted with a password of daneel, we'll use the create option, defining a user as the type of object to create, followed by the user name, then with password followed by the password (single quoted) and then the createuser attribute, as follows:

device_management=# create user krypted with password 'daneel' createuser;

Now that there's a valid user, let's see what else we can do. To see all of the tables, use \d:

device_management=# \d

As you can tell, there are a bunch of them. Run the help command to see a list of SQL commands that can be run and \? for a list of psql options. To put some SQL commands into action, we're going to look at the tasks that have been performed by Profile Manager. These are stored in the tasks table (aptly named), so we're going to run the following SQL query (note that a semi-colon is required at the end of this thing):

device_management=# select * from "public"."tasks" limit 1000 offset 0;

Or, to make it a bit simpler if you don't have a lot of data in there yet:

device_management=# select * from "public"."tasks";

After seeing the output, you'll probably be a little appreciative of Apple's formatting. Next, let's look at dumping the databases. We're going to create a folder on the root of the volume called db_backups first:

sudo mkdir /db_backups

This is where these backups will end up getting stored. We'll continue using the _postgres user for now. To do our database dumps, we're going to use pg_dump, located in /usr/bin.
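If you'd rather run that tasks query from a script than from the interactive prompt, psql's -c flag takes the statement inline. A small sketch, where the wrapper function is my own convention and the query is the one above:

```shell
#!/bin/sh
# Run the Profile Manager tasks query non-interactively, e.g. from cron.

list_tasks() {
  psql -U _postgres -d device_management \
    -c 'select * from "public"."tasks" limit 1000 offset 0;'
}

# Only attempt it where psql actually exists:
command -v psql >/dev/null 2>&1 && list_tasks
```

Redirect the output to a file and you have a crude daily report of Profile Manager activity.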
First, we'll dump the device_management database (but first we'll stop the service, and afterwards we'll start it – all commands from here on out also assume you're sudo'd):

serveradmin stop devicemgr
pg_dump -U _postgres device_management -c -f /db_backups/device_management.sql
serveradmin start devicemgr

And the other three (stopping and starting each in the process):

serveradmin stop web
pg_dump -U _postgres roundcubemail -c -f /db_backups/roundcubemail.sql
serveradmin start web
serveradmin stop wiki
pg_dump -U _postgres collab -c -f /db_backups/collab.sql
serveradmin start wiki
serveradmin stop addressbook
serveradmin stop calendar
pg_dump -U _postgres caldav -c -f /db_backups/caldav.sql
serveradmin start addressbook
serveradmin start calendar

I haven't had any problems running the dumps with the services running, but better safe than sorry, I guess. I'd probably also add some logging and maybe dump the output of fullstatus for each service to try and track whether all is well with each. Any time a service didn't fire back up, I'd then build in a sanity check for that event. There's also a database for Postgres itself, so let's back that up as well while we're here:

pg_dump -U _postgres postgres -c -f /db_backups/postgres.sql

These can then be restored using psql with the -d option to define the database being restored into and the -f option to define the file being restored from. For example, to restore collab:

psql -U _postgres -d collab -f /db_backups/collab.sql

The databases are all dumped daily using pg_dumpall. These are stored in /var/pgsql, but that can be changed using serveradmin settings (for example, to move them to /var/pgsql1):

serveradmin settings postgres:dataDir = "/var/pgsql1"

If you mess up the Profile Manager database (before you put any real data into it) you can always use the /usr/share/devicemgr/backend/wipeDB.sh script to trash the database and start anew (although I'd just use a snapshot of a VM for all this and restore from that).
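The logging and fullstatus sanity check mentioned above might look something like this. The wrapper function, the log format and the grep for RUNNING are my own assumptions about serveradmin's output, not Apple tooling:

```shell
#!/bin/sh
# Sketch of the stop/dump/start cycle with basic logging and a post-start
# sanity check. Service names and dump paths mirror the commands above.

log() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') $*"
}

backup_db() {
  service="$1"; db="$2"
  serveradmin stop "$service"
  pg_dump -U _postgres "$db" -c -f "/db_backups/${db}.sql" || log "dump of $db failed"
  serveradmin start "$service"
  # ASSUMPTION: a healthy service's fullstatus output contains RUNNING.
  serveradmin fullstatus "$service" | grep -q RUNNING || log "$service did not come back up"
}

# Only run on an actual Lion Server box.
if command -v serveradmin >/dev/null 2>&1; then
  {
    backup_db devicemgr device_management
    backup_db web roundcubemail
    backup_db wiki collab
  } >> /var/log/pg_backups.log 2>&1
fi
```

The caldav dump would need a small variation, since two services (addressbook and calendar) share that database and both should be stopped around the dump.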
You can also connect to Postgres remotely, or locally through a network socket (common in Apache uses), by adding a listener. To do so, we'll need to restart the Postgres LaunchDaemon. First, back up the file, just in case:

cp org.postgresql.postgres.plist org.postgresql.postgres.plist.OLD_CSE

Then stop Postgres:

serveradmin stop postgres

Then edit the org.postgresql.postgres.plist file to change the following line:

listen_addresses=

To read:

listen_addresses=127.0.0.1

Then fire up Postgres again:

serveradmin start postgres

And now let's scan port 5432 (the default TCP and UDP port used for Postgres) on localhost:

/Applications/Utilities/Network\ Utility.app/Contents/Resources/stroke 127.0.0.1 5432 5432

We could have used another IP address for listen_addresses as well, but with that _postgres user not requiring a password, it didn't really seem prudent to do so. Once you've enabled a socket, you'll then be able to use one of the many GUI tools to manage Postgres. Navicat is available on the Mac App Store for $5 and PGnJ is a nice, easy-to-use, free one. There are tons of others, but I don't spend a lot of time in a SQL GUI and so don't need more than a cheap app will get me. One nice thing about most of these is that they help you to form SQL queries (or they help me, anyway). This can get really nice if you are, for example, trying to get some good reporting on Profile Manager (a feature it's a bit light on right now). Finally, don't do any of this stuff on a production box (except maybe if you want more than nightly backups) unless you think pretty hard about what you're doing and know the exact impact of doing something. If you were to edit the databases on a live boxen, then – given how all of the objects in those databases use GUIDs – you can safely assume you're probably going to break something, if not bring the whole house of cards tumbling down.
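If you'd rather not shell out to the stroke binary buried inside Network Utility, bash can do the same localhost port check with its /dev/tcp pseudo-device. A sketch – and note the assumption that your script runs under bash, not plain sh:

```shell
#!/bin/bash
# Check whether something is listening on a TCP port using bash's
# /dev/tcp redirection (a bash feature, not a real device file).

port_open() {
  host="$1"; port="$2"
  (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
}

if port_open 127.0.0.1 5432; then
  echo "postgres is listening on 5432"
else
  echo "nothing listening on 5432"
fi
```

The connect attempt runs in a subshell so the descriptor is closed automatically either way.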

January 4th, 2012

Posted In: iPhone, Mac OS X Server, Mac Security, Mass Deployment, SQL


Before I get started:
  • By remote, I mean from another machine – I sincerely hope that you will not be opening your Final Cut Server database to the WAN. So again, please be careful with this, as there is no security around the database; you will be limiting access via IP only for now.
  • This article lays the beginning framework for a series (no promises on when the next in the series will be posted) on clustering the stored role of Final Cut Server, which provides the database (back end functionality) of Final Cut Server.
  • All of this is done using built-in tools for Final Cut Server.
  • Don't do this unless you absolutely have to (i.e., you'll be clustering your databases using the instructions in this series).
  • To my knowledge, absolutely none of this is supported by Apple (AppleCare, etc.).
  • Don’t roll ANY of this into production without fully testing/vetting yourself.
In order to be able to connect to the database remotely, you first need to enable a listener for PostgreSQL.  In order to do so, open the /var/db/finalcutserver/data/postgresql.conf file using your favorite text editor.  Then uncomment the following line:
listen_addresses = '*'
Now, change the * to the IP address you'd like the daemon to listen on, unless you want it to run on all available IP addresses (or you just have a single IP).  Close the text editor, saving the file in the process.  Now, let's define which IP addresses can connect to the PostgreSQL database by opening /var/db/finalcutserver/data/pg_hba.conf in your favorite text editor and adding the following line (you can replace the CIDR address here with the one that you prefer to use) in the IPv4 local connections section:
host all all 192.168.55.0/24 trust
Now, this is the only real security you are going to have.  There is no signing or encryption involved with these packets, so do this on a trusted (out-of-band) interface, not on your production network.  Also, limit the CIDR address to allow for the absolute smallest set (if you're going to replicate the database between four hosts, consider using a /29; if there are only two, consider a /30). The preferable way to do something like this would actually be to use an SSH tunnel or simply SSH into the remote host.  It just turns out that when we do that, the replication doesn't work properly (so far).  I will continue to look for a better, more secure way, but for now, if you use an out-of-band network for security, then the lack of encryption is fairly well mitigated.  Again, I know that this is not how you would typically want to do it, but for reasons illustrated in future articles it is going to be the first step in the clustering process.  Also, you will want an out-of-band network anyway, given that you don't want database sync streams to interfere with Xsan metadata traffic (if your Final Cut Server data is stored on a SAN), nor with the bandwidth that end users have into your Final Cut Server environment via your non-Xsan, client-facing Ethernet connection. Once you have made all of the changes, you can reboot.  Once done, you can connect and test the connection using standard PostgreSQL tools, or just port scan the port defined (by the amazingly complicated port variable) in the postgresql.conf file. Now it's time to think about what exactly you're looking to get out of your clustered PostgreSQL database.  For example, in some cases you may be looking to load balance the back-end PostgreSQL functionality (aka stored).
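That connection test from a client on the trusted subnet can be sketched as below. The host IP is a placeholder for your Final Cut Server box, and the px database name and port 5433 are the Final Cut Server defaults described elsewhere on this site:

```shell
#!/bin/sh
# Hypothetical remote connection test for the Final Cut Server database.
# ASSUMPTIONS: the client has psql installed, and 192.168.55.10 stands in
# for your server's out-of-band IP.

check_remote_px() {
  host="$1"
  psql -h "$host" -p 5433 -d px -U postgres -c 'select 1;' >/dev/null 2>&1
}

# Usage (from a client on the trusted subnet):
#   check_remote_px 192.168.55.10 && echo connected \
#     || echo "check listen_addresses and pg_hba.conf"
```

A failure here usually means either listen_addresses wasn't changed, pg_hba.conf doesn't cover the client's address, or PostgreSQL wasn't restarted after the edits.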
Whereas in other environments you might actually be looking at simply providing high availability via point-in-time recovery methods (the server crashes and another pops online a couple of seconds later to keep users chugging along) – this is the concept of active-active vs. active-passive, respectively.  This series assumes you'll be using automated failover, although, provided I get around to it, I'll try and cover both by forking the series.  This series also assumes that all hosts are using identical mount points to access data and that you are using two different serial numbers with your Final Cut Servers.  Both Final Cut Server instances need to be using the same paths to access both data and the database.  This may be via NFS, symlinks, cvfs or what have you.  In cases where an operation will not function over a specific protocol/file system I will indicate that; however, when it comes to bottlenecks, I will rely on feedback and further experience to identify those.

May 9th, 2009

Posted In: Final Cut Server, Xsan


Daylite 3.9 is actually a fairly substantial update from 3.8. This mainly stems from the fact that 3.9 uses PostgreSQL rather than OpenBase, and it runs Postgres as a dedicated server component (not that this increases complexity too much, as it's going to discover those databases using Bonjour). This gives the application speed and the developers a number of new options they hadn't had before. The MarketCircle developers will likely be able to come to market with new changes faster, thus being able to make you more productive with your productivity app. Also expect more third-party developers. Why? Because PostgreSQL is way more popular than OpenBase, is flexible for exchanging data and allows a number of existing developers to integrate with Daylite. But more importantly, the short-term gain is raw, unfettered speed. Before you look to install Daylite 3.9, make sure all your boxen have at minimum 10.4.11 or 10.5.6. Also make sure they have a gig of RAM and that they're a 1GHz G4 or better. Finally, as with Workgroup Manager, 640x480 just isn't enough (I don't think it's enough to even load my web site without scrolling, but that's beside the point), so make sure you have 1024x768 or better. Because of the migration from OpenBase to PostgreSQL, there's a little work to be done in migrating the database. To get started, perform a final sync on your 3.8 users. Then disconnect them and disable synchronization, backing up your database when you are done. Those 3.8 users should not sync again; you can go ahead and upgrade them to 3.9 while the server is offline. Now install the 3.9 package and install your licenses, just as you would in Daylite 3.8 and below. Then go to the File menu (from within Daylite) and select Database, then Migrate Database. Then enter some admin credentials and click on Migrate. The database will then be migrated and the admin password reset. The application is snappier, both on a LAN and over the WAN.
If you're using Daylite 3.9 over a WAN (and you don't have a VPN), then one of the first things you'll look for is the TCP ports to open up: 6113 through 6116 for the server-side app. The new Daylite Touch will also need 6117. Daylite 3.9 also brings Daylite Touch into focus. Daylite Touch is the answer to the fact that people don't just want CRM or what have you on their desktops; they want it on the handheld as well, and Daylite Touch allows you to access that. More on Daylite Touch in later posts. There are a few other features to note as well (other than speed and handheld synchronization). Most of the new features revolve around being able to associate data, be it contacts, calendars or notes, with other data – thus providing a more robust object-oriented model for data management within the app. There are also some GUI enhancements to make it easier to find objects on the screen (mostly trying to unify the Daylite Touch interface with that of the fat client). For users sync'ing data, this update should improve the experience, although I haven't managed to verify that just yet. There are a number of minor bug fixes as well. All in all, 3.9 is a substantial upgrade. I would think that an upgrade where the back-end database is migrated to another solution, the server is split into its own component and handheld over-the-air sync is introduced would alone be worthy of a full version number. This really makes me take more and more notice of Daylite; they are just on the ball these days at MarketCircle, and I can say I am truly looking forward to seeing what 4.0 has in store for us. Hold-out users of Now who need a server-based solution finally have a good upgrade path, albeit one with a slightly different (and more robust) workflow.

March 31st, 2009

Posted In: Mac OS X


The database that stores the configuration information and assets that you are using with Final Cut Server is built on PostgreSQL.  The name of the PostgreSQL database is px.  The implementation of PostgreSQL that runs on Mac OS X for Final Cut Server uses port 5433 by default, although only through the localhost.  There are two sets of PostgreSQL binaries on a Final Cut Server.  The first is in the /Library/Application Support/Final Cut Server/Final Cut Server.bundle/Contents/PostgreSQL/bin directory; however, the tools here do not function.  To manage the database, use the PostgreSQL binaries located in the /System/Library/CoreServices/RemoteManagement/rmdb.bundle/bin directory.  The actual database information is stored in /var/db/FinalCutServer. Given the above information, to connect to the database you would use the following command:

psql -d px -h 127.0.0.1 -p 5433 -U postgres

You will now be located at a px prompt.  From here you can perform a variety of operations, including creating new tables, removing information, performing queries or simply altering fields within the database.  Most of these operations can be extremely dangerous.  Be very careful if you will be customizing this information, as you can VERY easily unlink critical pieces of information from one another and render your Final Cut Server useless.  Therefore, changes should only be tried on a test server prior to any augmentation of a production database, AND before changing anything, make sure to back up the Final Cut Server database. The pg_hba.conf file located in the /var/db/FinalCutServer/data directory handles all authentication to the Final Cut Server database.  Using this file, it is possible to allow users to connect to your database from another host.  If you are going to allow connections from other hosts, then you will likely want to create another user with a strong password and only allow that user.
Luckily, in the /Library/Application Support/Final Cut Server/Final Cut Server.bundle/Contents/PostgreSQL/bin/ directory there is a createuser shell script that will help you do this (although it won't do it for you).  These shell scripts have other uses as well.  For example, if you change your mind about the user that was created with the createuser script and wish to delete them at a later date, you can remove them using the dropuser shell script in the same location.  One warning: these scripts do not work out of the box, as mentioned.  You will need to customize them with the information we previously used for your server, but I mention them here as they can act as guides for your customization of the PostgreSQL environment.  When connecting to PostgreSQL you can also use a graphical application such as Navicat or PGnJ.  In this example, I'm going to use PGnJ to connect to the Final Cut Server px database.  To do so, first open PGnJ.  When prompted for the authentication information, enter it and click on the Connect button.  Remember, you can only connect through the server actually running Final Cut Server unless you alter the pg_hba.conf file to allow other hosts to connect – and if you do, you will likely want to identify which specific other hosts have access and how they authenticate to the database. Once you're connected, click on the disclosure triangle beside store and then click on the disclosure triangle beside tables.  Now you will see a listing of each of the tables that Final Cut Server uses to store the data you have put into it.  This data is logically structured in a collection of tables, all starting with the letters px, indicating Proximity (the original creators of Artbox, which was purchased by Apple and turned into Final Cut Server).  One example of something you can do using the tables is to change the wording for one of the screens within Final Cut Server.
We're going to go ahead and change the "This asset is linked to" text on the resources pane of an asset to make this feature a little more user friendly. First, find the pxmdgroup table within PGnJ and click on it.  Then, scroll down to the row for mdgroupid 1754.  Next, alter the text for the name field; in this case we are going to use "Productions that use this asset". Now you'll need to restart PostgreSQL, which can most easily be done by rebooting your computer.  Once it comes back online, you'll be able to view the resources for an asset and see that your changes have been made. This is only one example of the many things that you can do by augmenting the PostgreSQL database behind Final Cut Server.  You can also interact with it directly using shell scripts, further enhancing the workflow automation possible for your environment.
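For those who would rather skip the GUI, the same edit can be made from the shell via psql. A sketch only – mdgroupid 1754 and the connection details come from above, but back up px first and remember that none of this is supported:

```shell
#!/bin/sh
# The pxmdgroup label change as a single SQL statement against px.
# Run only on the Final Cut Server host itself (Postgres listens on
# localhost:5433 there), and only after backing up the database.

rename_group() {
  psql -d px -h 127.0.0.1 -p 5433 -U postgres <<'SQL'
UPDATE pxmdgroup
   SET name = 'Productions that use this asset'
 WHERE mdgroupid = 1754;
SQL
}

# Invoke on the server, then restart PostgreSQL (or reboot):
# rename_group
```

Wrapping the statement in a function like this makes it easy to script a batch of label changes and apply them in one pass.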

November 15th, 2008

Posted In: Final Cut Server
