Tiny Deathstars of Foulness

Microsoft Azure is Microsoft’s cloud services platform. Azure can host virtual machines and act as a location to store files. However, Azure can do much more as well: provide an Active Directory instance, provide SQL database access, work with hosted Visual Studio, host web sites or provide BizTalk services. All of these can be managed from the Azure web portal.


You can also manage Windows Azure from the command line on Linux, Windows or Mac. To download the command line tools, visit the Azure downloads page. Once downloaded, run the package installer.

When the package is finished installing, look in /usr/local/bin, where you’ll find the azure binary. Once installed, you’ll need to configure your account from the Azure site to work with your computer. To do so, log into the portal.


Once logged in, open Terminal and then use the azure command along with the account option and the download verb:

azure account download

This command downloads the .publishsettings file for the account you’re logged in as in your browser. Once downloaded, run azure with the account option and the import verb, dragging the .publishsettings file into the Terminal window to fill in the path:

azure account import /Users/krypted/Downloads/WindowsAzure-credentials.publishsettings

The import then completes and your account is usable with the azure command. Once imported, run azure with the account option and then storage list:

azure account storage list

You might not have any storage configured yet, but at this point you should see the following to indicate that the account is working:

info: No storage accounts defined
info: account storage list command OK

You can also run the azure command by itself to see some neat ASCII art (although the azure logo doesn’t really come through in this spiffy cut-and-paste job):

info: _ _____ _ ___ ___________________
info:        /_\  |__ / | | | _ \ __|
info: _ ___ / _ \__/ /| |_| |   / _|___ _ _
info: (___ /_/ \_\/___|\___/|_|_\___| _____)
info: (_______ _ _) _ ______ _)_ _
info: (______________ _ ) (___ _ _)
info: Windows Azure: Microsoft's Cloud Platform
info: Tool version 0.7.4
help: Display help for a given command
help: help [options] [command]
help: Open the portal in a browser
help: portal [options]
help: Commands:
help: account Commands to manage your account information and publish settings
help: config Commands to manage your local settings
help: hdinsight Commands to manage your HDInsight accounts
help: mobile Commands to manage your Mobile Services
help: network Commands to manage your Networks
help: sb Commands to manage your Service Bus configuration
help: service Commands to manage your Cloud Services
help: site Commands to manage your Web Sites
help: sql Commands to manage your SQL Server accounts
help: storage Commands to manage your Storage objects
help: vm Commands to manage your Virtual Machines
help: Options:
help: -h, --help output usage information
help: -v, --version output the application version

Provided the account is working, you can then use the account, config, hdinsight, mobile, network, sb, service, site, sql, storage or vm options. Each of these can be invoked along with a -h option to show a help page. For example, to see a help page for service:

azure service -h

You can spin up resources including sites, storage containers and even virtual machines (although you might need to create templates for VMs first). As an example, let’s create a new site using the git template:

azure site create --git

Overall, there are a lot of options available in the azure command line interface. The web interface is very simple, and the options in the command line interface mirror those in the web interface. Running, and therefore scripting around, these commands is straightforward. I wrote up some Amazon stuff previously, but the azure controls are really full featured, and I’m becoming a huge fan of the service itself the more I use it (which likely means I’ll post more articles on it soon).
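
Since the commands script so cleanly, here’s a quick sketch of looping the azure command to stand up a few sites at once. The site names are made up, and I’ve prefixed the binary with echo so it’s a dry run (drop the echo to actually create them):

```shell
#!/bin/sh
# Hypothetical sketch of scripting the azure CLI: create a handful of
# sites in one go. The site names are placeholders; the echo prefix
# makes this a dry run.
AZURE="echo azure"   # change to AZURE="azure" on a box with the CLI installed

for site in dev staging production; do
  # azure site create provisions a new Web Site; --git adds a git remote
  $AZURE site create --git "$site"
done
```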

December 2nd, 2013

Posted In: cloud, Network Infrastructure, SQL, Ubuntu, Unix, VMware, Windows Server


Corruption happens. Sometimes, it’s little things that cause problems. With MySQL, the mysql command line tool has long held the answer for easy corruption issues. There are a number of tools to repair corruption, but the place to start is the REPAIR command within that trusty mysql command line tool.

To start, let’s try a backup. In this case, I’m going to use a tool those of us who deal with media assets frequently tinker with, CatDV. I’m going to back up the databases with a simple mysqldump command, defining the user and then piping the data out to a backup file, which in this case is catdvbak on the desktop:

mysqldump -u catdvadmin -pcatdv catdv > ~/Desktop/catdvbak

If this fails due to corruption, then I personally like to stop my databases and back them up flat before I make any changes, which a repair command will of course do. Then, we’ll need to tap into mysql:

mysql -P 1099 -p

Then, we will be in an interactive mysql environment. Let’s just say the auditLog table in the catdv database is corrupt. First, select the database:

use catdv;

Then, repair that table:

repair table auditLog;

Note: You’ll need to quote things if the name of your table isn’t quite so simple and has special characters.

Then try re-running your backup if it didn’t complete, and you should be good to go! If the repair doesn’t go swimmingly, check out myisamchk for more detailed options.
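
If you want to script that backup-then-repair decision, here’s a rough sketch. The database and table names come from the CatDV example above; the actual mysqldump invocation is stubbed out with true so you can test the logic anywhere:

```shell
#!/bin/sh
# Hypothetical wrapper: attempt the mysqldump backup and, if it fails,
# print the repair statement to run in the mysql shell. DUMP_CMD is
# stubbed here; swap in the real mysqldump line from above.
DB="catdv"
TABLE="auditLog"
OUT="${OUT:-/tmp/catdvbak}"                  # the post uses ~/Desktop/catdvbak
DUMP_CMD="${DUMP_CMD:-true}"                 # e.g. "mysqldump -u catdvadmin -pcatdv catdv"

if $DUMP_CMD > "$OUT"; then
  echo "backup OK: $OUT"
else
  echo "backup failed; in mysql try: use $DB; repair table $TABLE;"
fi
```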

July 26th, 2013

Posted In: Mac OS X, Mac OS X Server, SQL, Xsan


Mac OS X Server 10.7, Lion Server, comes with a few substantial back-end changes. One of these is the move from SQLite3 to PostgreSQL for many of the back-end databases, including Wiki and Podcast Producer (collab), Webmail (roundcubemail), iCal Server and Address Book Server (caldav) and, as the back-end to the newest service in Lion Server, Profile Manager (device_management). As such, it’s now important to be able to use PostgreSQL the way we once used SQLite3 when trying to augment the data that these databases contain, as there currently aren’t a lot of options for editing this data (aside from manually, of course).

Postgres has a number of commands that can be used to interact with databases. The most important is probably psql. Many of the other commands simply provide automated options to psql, and over time I’ve started using psql for most everything. For example, PostgreSQL comes with a command, /usr/bin/createuser. However, as psql is usually more verbose with errors, I like to use it to do this. In Lion Server, the only user that can access the Postgres databases is _postgres, installed by default with Lion Server. Because a lot of commands require passwords and we might not always want to provide write access to the databases, we’re going to create a new superuser, called krypted, with a password of daneel.

To do so, we will have to use the _postgres user to invoke psql. Any time you want to invoke psql as a different user than the one you are currently logged in as, use the -U option. To define a database, use the -d option (device_management providing access to Profile Manager data, caldav to iCal Server data, roundcubemail to WebMail data and collab to Wiki data). To string this together, for accessing the device_management database as _postgres:

psql -U _postgres -d device_management

To then create a new user called krypted with a password of daneel, we’ll use the create option, defining a user as the type of object to create, followed by the user name, then with password followed by the password (single quoted) and then the createuser attribute, as follows:

device_management=# create user krypted with password 'daneel' createuser;

Now that there’s a valid user, let’s see what else we can do. To see all of the tables, use \d:

device_management=# \d

As you can tell, there are a bunch of them. Run help to see a list of SQL commands that can be run and \? for a list of psql options. To put some SQL commands into action, we’re going to look at the tasks that have been performed by Profile Manager. These are stored in the tasks table (aptly named), so we’re going to run the following SQL query (note the semi-colon required at the end of this thing):

device_management=# select * from "public"."tasks" limit 1000 offset 0 ;

Or to make it a bit simpler if you don’t have a lot of data in there yet:

device_management=# select * from "public"."tasks" ;
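
Either query can also be run non-interactively using psql’s -c option (and -t to trim the headers), which is handy for scripting reports. A sketch, echo-prefixed as a dry run:

```shell
#!/bin/sh
# Dry-run sketch: run the same tasks query as a one-liner. Remove the
# echo prefix to actually run it; -t gives tuples-only output, which is
# easier to parse in scripts.
PSQL="echo psql"   # change to PSQL="psql" on the server itself
QUERY='select * from "public"."tasks" limit 1000 offset 0;'

$PSQL -U _postgres -d device_management -t -c "$QUERY"
```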

After seeing the output, you’ll probably be a little appreciative of Apple’s formatting. Next, let’s look at dumping the databases. We’re going to create a folder on the root of the volume called db_backups first:

sudo mkdir /db_backups

This is where these backups will end up getting stored. We’ll continue using the _postgres user for now. To do our database dumps, we’re going to use pg_dump, located in /usr/bin. First, we’ll dump the device_management database (stopping the service first and starting it again after – all commands from here on out also assume you’re sudo’d):

serveradmin stop devicemgr
pg_dump -U _postgres device_management -c -f /db_backups/device_management.sql
serveradmin start devicemgr

And the other 3 (stopping and starting each in the process):

serveradmin stop web
pg_dump -U _postgres roundcubemail -c -f /db_backups/roundcubemail.sql
serveradmin start web
serveradmin stop wiki
pg_dump -U _postgres collab -c -f /db_backups/collab.sql
serveradmin start wiki
serveradmin stop addressbook
serveradmin stop calendar
pg_dump -U _postgres caldav -c -f /db_backups/caldav.sql
serveradmin start addressbook
serveradmin start calendar
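
That stop/dump/start pattern lends itself to a loop. Here’s a sketch using the same service-to-database pairs (caldav is left out since it needs two services stopped); serveradmin and pg_dump are echo-stubbed so you can dry-run the flow anywhere:

```shell
#!/bin/sh
# Dry-run loop over the service:database pairs used above. Each entry is
# "serviceName:databaseName"; remove the echo prefixes to run for real.
SA="echo serveradmin"
PGDUMP="echo pg_dump"
BACKUP_DIR="/db_backups"

for pair in devicemgr:device_management web:roundcubemail wiki:collab; do
  svc="${pair%%:*}"   # the serveradmin service name
  db="${pair#*:}"     # the postgres database it backs
  $SA stop "$svc"
  $PGDUMP -U _postgres "$db" -c -f "$BACKUP_DIR/$db.sql"
  $SA start "$svc"
done
```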

I haven’t had any problems running the dumps with the services running, but better safe than sorry, I guess. I’d probably also add some logging and maybe dump the output of a full status for each service to track whether all is well with each. Any time a service didn’t fire back up, I’d then build in a sanity check for that event. There’s also a database for postgres itself, so let’s back that up as well while we’re here:

pg_dump -U _postgres postgres -c -f /db_backups/postgres.sql

These can then be restored using psql with the -d option to define the database being restored into and the -f option to define the file being restored from. For example, to restore collab:

psql -U _postgres -d collab -f /db_backups/collab.sql

The databases are all dumped daily using pg_dumpall. These are stored in /var/pgsql but can be changed using serveradmin settings (for example, to move them to /var/pgsql1):

serveradmin settings postgres:dataDir = "/var/pgsql1"

If you mess up the Profile Manager database (before you put any real data into it) you can always use the /usr/share/devicemgr/backend/ script to trash the database and start anew (although I’d just use a snapshot of a VM for all this and restore from that).

You can also connect to Postgres remotely, or locally through a network socket (common in Apache uses) by adding a listener. To do so, we’ll need to restart the Postgres LaunchDaemon. First, back up the file, just in case:

cp org.postgresql.postgres.plist org.postgresql.postgres.plist.OLD_CSE

Then stop postgres:

serveradmin stop postgres

Then edit the org.postgresql.postgres.plist file to change the following line:

<string>listen_addresses=</string>

To read:

<string>listen_addresses=127.0.0.1</string>

Then fire up postgres again:

serveradmin start postgres

And now let’s scan port 5432 (the default TCP and UDP port used for postgres) for localhost:

/Applications/Utilities/Network Utility.app/Contents/Resources/stroke 127.0.0.1 5432 5432

We could have used another IP address for the listen_addresses as well, but with that _postgres user not requiring a password it didn’t really seem prudent to do so. Once you’ve enabled a socket, you’ll then be able to use one of the many GUI tools to manage postgres. Navicat is available on the Mac App Store for $5 and PGnJ is a nice, easy to use, free one. There are tons of others, but I don’t spend a lot of time in a SQL GUI and so don’t need more than a cheap app will get me. One nice thing about most of these is that they help you to form SQL queries (or they help me). This can get really nice if you are, for example, trying to get some good reporting on Profile Manager (a feature it’s a bit light on right now).

Finally, don’t do any of this stuff on a production box (except maybe the backups, if you want more than nightly) unless you’ve thought pretty hard about what you’re doing and know the exact impact of doing it. If you were to edit the databases on live boxen, then given how all of the objects in those databases use GUIDs, you can safely assume that you’re probably going to break something, if not bring the whole house of cards tumbling down.

January 4th, 2012

Posted In: iPhone, Mac OS X Server, Mac Security, Mass Deployment, SQL


Yesterday I did a quick review of the various cloud offerings from Amazon. Previous to that I had done a review of using S3, the Amazon storage service, with Mac OS X, primarily through the lens of using S3 as a destination for Final Cut Server archives. Today I’m going to go ahead and look at using EC2 from Mac OS X. To get started, first download the EC2 tools from Amazon.

Next, log into Amazon Web Services. If you don’t yet have a login you will obviously need to create one to proceed. Additionally, if you don’t yet have a private key you’ll need one of those too – in that case there will be a big green box to create it when you first log in. When the keys are created you can double-click on the x.509 certificate file to install it into Keychain. This key is a private key so make sure not to give it out. You can return to this screen later if you need to.

Next, go to the AWS Management Console. Because I don’t personally find the site terribly user friendly I like to keep the Management Console bookmarked. Once you have the Management Console open, click on Instances and then click on Launch Instance. You will then be greeted by a list of prebuilt virtual machines that you can use. Amazon has built Fedora and Windows for you, which will be listed under the QuickStart tab of the Launch Instances screen; however, you can also click on Community AMIs in order to use one that has been built and made available by others within the EC2 community. These include Debian, Ubuntu and CentOS (amongst others).

Once you have picked your poison, click on Select and you will then be prompted to create a key pair specifically for the instance. The reason for this is that you might want to hand the keys for a given instance to people who shouldn’t have access to all of the images in your account. You can skip this step or enter a name for the keypair and click on Create. Now click on Continue and you’ll be prompted to create a security group. A security group controls the ports that are opened to/from your virtual machine. For Windows you’ll pretty much always want RDC (3389) open, and for *nix, typically SSH. Amazon tries to make this easy and so pre-fills the form with common ports based on your use. Think of a security group like an Access Control List on a Cisco. You can reuse them across various instances. Next, click Continue.

Next, you’ll be asked to provide a name for the VM (aka AMI), a number of instances of the VM and whether the AMI is to be a smaller, standard item or whether it will be hit with high CPU utilization. You’ll also be able to select the security group to apply to the host based on the previous information. The name will be automatically filled in based on the template you chose to use; click on the Change button if you’d like to supply a new name.

Next, click Launch and the AMI will start to fire up, becoming an instance. Windows AMIs will take a little longer in my experience than Linux AMIs. While the instance is booting, it is worth mentioning that at this point you’ll notice the option to launch/create volumes and what Amazon calls Elastic IPs. Amazon doesn’t provide an IP for free, as you may have noticed when you accepted their terms of service. Therefore, if you are going to create an instance that will have static access over the WAN using a static IP, you will need to go ahead and assign an Elastic IP to it – unless, that is, you can communicate with the instance even if it has a dynamic IP (there are a ton of ways to do this). The volumes option allows you to build storage that is independent of the instance. This can be used to mount on multiple instances (although I haven’t found a way to do so concurrently) or to simply have storage independent of the instance so that you can easily move data.

Now click on Instances. Here, you’ll note that your newly created instance is listed. Click on it and then click on More Actions and select Get Password (where OS is the OS you chose to set up). Here, you’ll receive an option to decrypt the password using the private key. You can cat the .pem file that was downloaded when you set up the key and copy/paste the entire contents into the field. Once the field has been populated, click on the Decrypt button and you will see the Admin/root password for your new virtual host.

Next, click on Connect and you’ll find instructions to connect to your new instance (for Windows it will be a dynamic DNS entry to use RDC with). You can now login. Once you have connected it is as though you are in a typical VM environment. Next, you’ll want to take a look at the options for Bundle Tasks (if you’re using Windows), which allows you to duplicate an AMI into multiple instances. You’ll also want to look at Volumes, as mentioned previously and Snapshots, which can be used to back up the Volumes.

Overall, we were able to create a new instance of Fedora, Windows or Ubuntu (even ones tuned to be Active Directory domain controllers, LAMP hosts or SQL servers) faster than if we had installed from scratch, and without using any resources outside of Amazon to do so. Later, we’ll look at doing all of this from the command line. And don’t forget to stop your instance so that you don’t get billed for all that time that you’re not using it!
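
As a quick taste of that command line route, here’s roughly what launching and terminating an instance looks like with the EC2 API tools downloaded earlier. The AMI ID, keypair and instance ID are placeholders, and the echo keeps it a dry run:

```shell
#!/bin/sh
# Dry-run sketch using the classic EC2 API tools. ami-12345678, mykeypair
# and i-deadbeef are placeholders; remove the echo on a machine with the
# tools configured (EC2_PRIVATE_KEY, EC2_CERT, etc.).
EC2="echo"

$EC2 ec2-run-instances ami-12345678 -k mykeypair -t m1.small
# ...do some work, then terminate it so the meter stops running:
$EC2 ec2-terminate-instances i-deadbeef
```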

May 1st, 2009

Posted In: Active Directory, Articles and Books, Business, Consulting, Network Infrastructure, SQL, Ubuntu, Unix, VMware, Windows Server


Moodle is a popular Course Management System (CMS) that is typically deployed in a less-than-secure manner. This is an issue with any system, but in a number of schools it can cause some pretty serious problems given the penchant that students have to muck with things. In the broader technology certification world it can be an even bigger issue in that, well, we’re geeks…

So a few tips on securing Moodle:

  • Tie Moodle into Active Directory (from the Admin console, Users->Authentication->LDAP) or some other Directory Service.
  • From the Admin console, go to Administration->Security and make sure all of these settings match your security plan.
  • Automate security scanning.  For example, check for root kits using rkhunter and chkrootkit.  Also use your favorite virus scanner (which is likely a low priority but why not).  Also consider automating a checksum tool to let you know when something changes in certain parts of the file hierarchy, such as tripwire.
  • Centralize your logs and actually check them out.
  • Limit access: Use sandbox-exec to limit access to what the Apache and/or MySQL binaries can do if they get compromised. Also make sure that the actual POSIX permissions for the files are as low as possible.
  • Get an SSL Certificate. Even if it’s a self-signed cert (loginhttps=yes)…
  • Put a good backup in place. Both for the userland data and the MySQL databases.
  • Use good passwords.  I know this sounds dumb, but the default admin password (12345) is probably not a good one to be using…  Also, disabling guest access is usually a good idea.
  • Practice good general SQL security.  For example, if SQL and Moodle are running on the same host, don’t allow SQL to accept network connections, there’s no reason to do so…  If you must use tools like MySQLAdmin then put them in a realm and use the SSL cert from above.  Limit what MySQL users can do, etc.  More on MySQL security here.
  • About those network connections mentioned above, make sure only the ports that need to be allowed into the host running Moodle are allowed.  For example, don’t allow AFP if it’s not an AFP server (even if it’s tempting to leave it open just so you can quickly and easily upload content using Command-K).  Remember, Mac OS X password policies in OD and OS X Server ARE NOT enforced on admin accounts.
  • Occasionally check out the Moodle security rss feed.
  • Enable secure forms in Moodle.
  • And last but not least, build a good course template so your teachers/instructors will be able to practice good security for courses and exams without having to be CISSPs…
  • Oh, and if you’ll be doing mod development, check out Petr’s overview on Moodle Security, which goes into more detail on trusted vs. untrusted users, functions, etc.
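
The security scanning bullet above can be automated with a small script pointed at by cron or launchd. A sketch; the log path is made up, and tools that aren’t installed get flagged rather than run:

```shell
#!/bin/sh
# Hypothetical nightly scan wrapper for the rootkit checkers mentioned
# above. LOG is a made-up path; point a cron or launchd job at this.
LOG="${LOG:-/tmp/moodle-scan.log}"

for tool in rkhunter chkrootkit; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "running $tool" >> "$LOG"
    # invoke "$tool" here on the real host
  else
    echo "$tool not installed" >> "$LOG"
  fi
done
```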

Overall, we’re talking about adding an extra 3-4 hours of work more than likely.  But that’s probably well over 8 hours saved the first time an AP Computer Science student or a CISSP student ends up going through one of your courses.

February 9th, 2009

Posted In: Mac OS X Server, Mac Security, SQL


Is it possible that a client can build a Facebook app, deploy it and not tell anyone, yet get so much traffic on the app that we have to add a second server within 6 hours?  Yes.  Is it then possible to quickly prop up an Apache cluster, forklift the data in there, throw together a quick MySQL cluster and be off to the races within 4 more hours while the end users of the app are complaining?  Yes.  Why, because there’s money in it.  And where there’s money there’s will.  And where there’s will there’s a way!

June 10th, 2008

Posted In: SQL, Unix


MAMP provides a portable LAMP-style environment for the Mac.  Easy to move stuff around, easy to use MySQL and Apache.  And it can take your Apache 1.x environment in Tiger to an Apache 2.x environment.

December 27th, 2007

Posted In: Mac OS X, Mac OS X Server, Mac Security, sites, SQL


So there are actually 3 possible values for Boolean variables in many cases.  The first two are the traditional True/False aspects.  The third, though, is null.  So for example, rock, paper and scissors: rock can be true, paper can be false and scissors can be null.  Could help to cut way down on lookup times in SQL…

October 18th, 2007

Posted In: SQL


Edit the user record in the mysql.user table, giving it access to connect to the MySQL server from an external host, by running the following command at the MySQL interface:

grant all privileges on *.* to 'user'@'IPADDRESS' identified by 'password' with GRANT OPTION;

Specify the user, IPADDRESS and password you’d like to use in that command and you’re good to go.
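
You can also run the grant non-interactively with the mysql client’s -e option. A sketch with the same placeholder values, echo-prefixed as a dry run (note the straight quotes, which matter if you’re pasting from a web page):

```shell
#!/bin/sh
# Dry-run sketch of the grant above via mysql -e. USERNAME, HOST and
# PASSWORD are placeholders; remove the echo prefix to run for real.
MYSQL="echo mysql"   # change to MYSQL="mysql" to actually run it
USERNAME="user"
HOST="IPADDRESS"
PASSWORD="password"

STMT="grant all privileges on *.* to '$USERNAME'@'$HOST' identified by '$PASSWORD' with grant option;"
$MYSQL -u root -p -e "$STMT"
```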

July 23rd, 2007

Posted In: SQL


February 3rd, 2007

Posted In: SQL, Ubuntu, Unix

