Tiny Deathstars of Foulness

Setting up and installing WordPress is pretty straight forward. That’s not to say it’s not going to take a little work to go from 0 to 60 on a base Linux installation. But I’ll lay the work out for you so as not to be that tricky. Everything we’ll be doing will require elevated privileges, so sudo in front of each command or sudo bash before you get going. First up, install Apache, as you’ll need a web server. I think the base apache2 config is pretty straight forward out-of-the-box:
apt-get install apache2
During installation you will be asked to type y to continue. Do that and it will finish with no major issues. Next up, install MySQL, php5, php5-mysql and phpmyadmin. We can use apt-get to knock all this out at once:
apt-get install mysql-server-5.1 php5 php5-mysql phpmyadmin
Again, you will be asked to choose whether to proceed, type y and hit enter. The next few steps will change according to versions, but for now, you’ll then be asked for a password for the MySQL root user. Provide that password and then tab to the OK button. You’ll then be asked to select which web server you are using. Assuming you did the apache2 install previously, choose Apache and then tab to the OK dialog. Then you will be asked to provide the MySQL password. This will be the password you typed earlier. You’ll then be prompted for a phpmyadmin password, which will be a password to access phpmyadmin’s web interface. Once the installation is done, you should have a fully functional LAMP environment. I like to reboot and check syslog afterwards just to make sure that everything is in working order and not reporting any major malfunctions. Next up, we will need to create the MySQL user and database that WordPress will use. To do so, log into phpmyadmin using a URL that begins with http:// followed by the address of your server and finally the /phpmyadmin. For example, if your server is at then the address would be You will be asked to authenticate, and here you will want to use the password you provided during the phpmyadmin package installation. Once you have authenticated, click on the Privileges tab and then click on the Add a new user button. You will then be asked to provide a username and password for the user you are creating, define what addresses that user can log in from (if you have multiple front-end servers you probably aren’t using this post to install WordPress so you might as well limit it to localhost) and most importantly you have a radio button for “Create database with same name and grant all privileges”. If you use this option then both the user and the database will be created in one step, making life pretty easy. I used wordpress as my username in the example. Once you have all the services installed and the MySQL user and database setup, then you’re ready to install WordPress. I like to cd into /var/www and then wget the, which always has the latest version of WordPress:
Then you want to unzip that (the unzip command is built into Ubuntu 10):
This will extract the wordpress folder into /var/www. Then make sure your admin user has permission (mine is oddly enough called cedge):
chown -R cedge:users wordpress
Now cd into the wordpress directory:
cd wordpress
Make a copy of the main configuration template called wp-config.php:
cp wp-config-sample.php wp-config.php
And then let’s edit that new file (vi, nano, tapping directly into the Matrix, or whatever you like), looking for DB_NAME, DB_USER, DB_PASSWORD and DB_HOST. In these respective fields, put the name of the database (wordpress in this example), the username for administrative rights to the database (wordpress again in this example), the password for the database (whatever you provided in phpmyadmin’s web interface for your new user and the IP or hostname of the database server (let’s assume if the database and web servers are the same). Scroll down a little further until you see the Authentication Unique Keys: AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY and NONCE_KEY. You’ll want to visit the WordPress secret key generator at to get your keys. Then simply cut/copy/paste the whole section, commenting out the existing lines or paste the contents of each line over the line it is replacing. Once that is done save your changes to the file and exit your text editor. Now visit the address of the site followed by WordPress (ie – You’ll then be able to setup WordPress for the first time. At the first login, you will see a screen prompting you to define a title for the site (Your domain name is a pretty traditional title to use), the username you want to use to administer the site (ie – admin), the password (ie – according to the movie Hackers, god) and and administrative email address. Here, you can also choose whether you want the site to be crawled by search engines. Once you’re happy with your settings, click on the Install WordPress button down at the bottom of the page. Now you should be able to see your first post, create posts and use WordPress. That should have been pretty painless. If it were any more painless, then I fear the dribble that people would post… Anyway, if you want the webroot ( instead of to be WordPress, then you will also want to change the DocumentRoot setting in /var/www to point to the /var/www/wordpress folder in the /etc/apache2/sites-enabled/000-default file (or whichever site it is if you have multiple ones).

November 30th, 2010

Posted In: Mac OS X Server, Ubuntu, Unix, WordPress


The wget command is used to download files from the web and is one of the most useful commands around. But while it comes included with most distributions of Linux, it is not built into Mac OS X by default. Therefore, let’s look at installing wget. To get started, install the developer tools for Mac OS X so that you can get a working copy of a compiler (gcc). Once the developer tools have been installed, you’ll want to download the latest version of wget from gnu. To do so, either download it manually from or use the ftp command to do so for you:
Next, extract the tar file using the tar command:
tar -xvzf wget-latest.tar.gz
You will then have a directory called wget- followed by the version of wget you just downloaded (currently 1.12). Let’s cd into that directory:
cd wget-1.12
Then run the configure script:
Then make the installer:
Then run the installer (with elevated privileges:
make install
You will then have the wget command located in /usr/local/bin/wget. To use it, simply use wget, followed by the path to the file you’d like to download using the –tries option:
wget –tries=10
There are a lot of options for wget, but some that I use more than others include –user= and –password=, which allows you to authenticate to a host by specifying a username and a password (respectively of course) and –limit-rate, which funny enough, let’s you throttle the speeds of transfers so as not to saturate your bandwidth. I also frequently need to use the -r operator, which allows for recursive downloads and the -o operator which outputs to a log file. Overall wget is one of the most useful commands around, and hopefully after reading this you’ll download it and get used to using it (if you weren’t already).

November 29th, 2010

Posted In: Mac OS X

Tags: , , , , , , , ,

There are a number of ways that you can interact with Google Apps: there is the website, the new Google Cloud Connect and an API that allows you to integrate Google Apps with your own solutions. The API is available for python and java and can take some time to get used to, even though Google has done a good job with making it pretty straight forward (comparably). Therefore, there are a couple of tools that ease the learning curve a bit.

GoogleCL on Ubuntu

The first, and easiest is GoogleCL. GoogleCL is a command line version of Google Apps that will allow you to interact with YouTube, Picasa, Blogger and of course Google Docs. To use GoogleCL you’re going to need python-gdata. If you’re using Ubuntu, you would do an apt-get and install python-gdata:
apt-get install python-gdata
Once installed, you’ll want to then download the deb package from Google Code:
Once downloaded, install it using dpkg with the -i option (assuming you’re still using the same working directory:
dpkg -i googlecl_0.9.11-1_all.deb

GoogleCL on Mac OS X

GoogleCL is also available for the Mac. First, download the gdata-python-client from and then extract the file (ie – unzip gdata-2.0.13). Next, install it using Python (2.0.13 is the latest version) with your working directory set to the previously extracted folder:
python install
Next up, let’s grab GoogleCL from the GoogleCL Google Code page: wget Then hop into the newly extracted directory and run the python installer: python install

Using GoogleCL on Mac and Linux

Once GoogleCL has been installed, the use is the same between Mac OS X and Linux. Simply use the newly acquired google command (this is actually a Python front-end to the API at /usr/bin/google) followed by a service and then a verb. Verbs are based on services (not all services offer the same features and therefore do not have the same verbs). A list of services with their verbs includes the following. docs – Allows for interaction with Google Docs, with verbs that include the following:
  • edit – Allows you to indicate an application to use as an editor for the given document (ie – vi).
  • delete – Delete a document on Google Docs.
  • list – List documents on Google Docs.
  • upload – Uploads the specified document (options include title, folder and format of the document being uploaded).
  • get – Downloads the specified document in the format specified using the format option.
blogger – Manage content stored using the blogger service.
  • post – Allows you to post content (which is then known as blog).
  • tag – Requires a title (for blog entries) and the tags that you would like to use with the post in question.
  • list – Shows posts (can use blog entry, title and owner as a delimiter, useful when used w/ grep to constrain output).
  • delete – Removes a post specified.
picasa – Allows you to interact with the picasa service for posting and obtaining images used with Google Apps.
  • get – Download specified albums.
  • create – Create an album.
  • list – List images.
  • list-albums – List albums.
  • tag – Tag images
  • post – Add a photo to an album.
  • delete – Delete a photo or an album.
contacts – Manage contacts (given the lack of an edit option, use an add and then a delete to impart an edit).
  • list – Show contacts (can specify fields to constrain output).
  • list-groups – Show the groups for a user.
  • add – Add a contact.
  • add-groups – Create a group of contacts.
  • delete-groups – Remove a group of contacts
  • delete – Remove a single contact
calendar – Manage calendars.
  • add – Create a calendar entry
  • list – Show all events on a given calendar.
  • today – Show calendar events over the next 24 hour period.
  • delete – Remove calendar events.

Beyond GoogleCL

Let’s put this into perspective. Let’s say I have an application, and that application can run a simple shell command. Then, let’s say I create a calendar event in that application. The application could send a command to the shell with a variable. If I had calendar information to create such as “Meeting with KK tomorrow at 9am” then I could send a command as follows:
google calendar add “Meeting with KK tomorrow at 9am”
This would cause the event to appear on my calendar and sync to any devices that were then configured to work with my calendar. But, if I were to issue this command on the server-side then it would attempt to create all events for the same users, which is likely not very helpful for most organizations that have more than one calendar and/or user. As mentioned /usr/sbin/google is a python script. It makes use of python-gdata and provides a more direct access to the Google Apps API. As such, it allows for far more complex logic than the GoogleCL front-end does. The google script does give savvy developers a look at how Google intends for many of their methods to be used and even allows you to borrow a line or two of code here and there. Simple logic can be parlayed into code quickly using GoogleCL, but you will quickly outgrow what can be done with GoogleCL and move into using the API more directly if you have any projects of substance!

November 28th, 2010

Posted In: cloud, Mac OS X, Mac OS X Server, Ubuntu, Unix

Tags: , , , , , , , , ,

StorNext and Xsan go pretty well together. I wrote up an article going on two years ago for Xsanity on setting up RedHat clients for Xsan environments at But I didn’t go into much detail on troubleshooting. There isn’t a ton, beyond the traditional steps you take in Mac OS X, when troubleshooting Xsan clients as there isn’t a lot that can go wrong. But, let’s look at how I normally proceed when I only have one volume that will not mount. The first step is to stop and then start up cvfs. To stop cvfs, run the following command:
/etc/init.d/cvfs stop
To then start it back up:
/etc/init.d/cvfs start
At this point, a file will be written to with some typically detailed notes on why a volume didn’t mount at /usr/cvfs/debug/mount.VOLUME.out (where VOLUME is replaced by your volume name). If the file is empty then the volume didn’t even attempt to mount. Assuming that we are only looking at a mount (meaning cvadmin will show you the Xsan or StorNext volumes) then it could also mean that you can’t stat the volume. If the error specifically indicates that it cannot stat the volume then it is either that there is not a folder with the same name as the Xsan in /mnt (or wherever you are attempting to stat the volume to) or that the permissions on the folder on the local file system are bad. Changing these to 777 temporarily will likely resolve if it is permissions. Next, check cvadmin and verify that the volume is being hosted by a metadata controller that is accessible by the Redhat client. The server that is actually hosting the metadata for a given volume will have an * by the name of that server. Also verify that in /usr/cvfs/config/fsnameservers you see that server. Keep in mind that when you add and remove metadata controllers in Xsan that the fsnameservers file does not synchronize to non-Apple products (ie – StorNext) and that you will need to hand roll these changes. Also, keep in mind that per StorNext, the order of the objects within this file needs to be consistent across all clients no matter the platform. Next: consider licensing. Make sure that licensing files for each metadata controller are available to the clients. If a single license file is out-dated then even if you can get a volume to mount, failover will not be possible. Another possible (and likely on new volumes) candidate for why a given volume will not mount is that there are inaccessible LUNs. If this is the case then you will see errors that indicate that there is a “stripe group down”, as with Xsan. To isolate these, I usually compare the results of a cvlabel -l command with what I see in Xsan Admin for a given LUN. A little grep will make this process go by very quickly. If you have moved a LUN then I’ve had to do a full physical reboot to get the cvlabel cache to actually update following the move. Also, along this same line of troubleshooting, if you are using a version of StorNext that is a bit older, then you will want to check and verify that it supports LUNs greater than 2TB. This is an older issue, but some still run old software… That’s about all I have time for, but it’s very specific towards troubleshooting single volume mount issues in StorNext environments. While this was geared towards Apple Xsan environments with StorNext clients, all of the information is also pertinent to StorNext metadata controlling environments as well.

November 27th, 2010

Posted In: Mac OS X, Mac OS X Server, Xsan

Using the firewall in Ubuntu can be as easy or as hard as you want to make it. BSD variants all basically use the ipfw command whereas most of the rest of the *nix world will use netfilter. Netfilter has a number of front ends; the one that comes pre-installed in Ubuntu is ufw, short for ‘uncomplicated firewall’. Ufw is good for basic port management: allow and deny type of stuff. It’s not going to have the divert or throttling options. So let’s look at some basic incantations of ufw (you need to have elevated privileges to do all of this btw).

Initial Configuration

First you need to enable ufw, which is done using the ufw command (no need to apt-get this to install it or build it from source) followed by the enable option:
ufw enable
You can also use the disable option to turn the firewall back off:
ufw disable
And to see rules and the status of the firewall, use the status option:
ufw status

The ufw Configuration File

The ufw configuration file is /etc/default/ufw. Here, you can manage some basic options of ufw. These include:
  • IPV6 – Set to YES to enable
  • DEFAULT_INPUT_POLICY – Policy for how to handle incoming traffic not otherwise defined by a rule. Defaults at DROP but can be changed to ACCEPT or REJECT
  • DEFAULT_OUTPUT_POLICY – Same as above but for handling outgoing traffic not otherwise defined by a rule.
  • DEFAULT_FORWARD_POLICY – Same as above but for forwarding packets (routing).
  • DEFAULT_APPLICATION_POLICY – I’d just leave this as the default, SKIP.
  • MANAGE_BUILTINS – when set to yes, allows ufw to manage default iptables chains as well.
  • IPT_MODULES – An array of iptables modules that can be added
To restart ufw after you make changes to the configuration file, use the services command:
service ufw reload
service ufw restart

Creating Rules

The first thing most people will want to do is enable a port. And of the ports, 22 is going to be pretty common, since without it you can’t ssh back into the box. For this, you’ll use the allow option followed by the name of the service (application profile):
ufw allow ssh
You can use numbers instead (since ufw isn’t going to know every possible combination and you might be running some on custom ports):
ufw allow 22
You can also deny traffic using the same structure, just swapping allow with deny:
ufw deny http
Beyond a basic allow and deny, you can also specify what IP addresses are able to access each port. This is done using ufw followed by the proto option, which is the followed by the actual protocol (tcp vs udp, etc) which is then followed by the from option and then the source then the to option then the IP to accept traffic (or the any option for all IPs on your box) and finally the port option followed by the actual port. Sounds like a lot until you see it in action. Let’s say you actually want to allow traffic for port 10000 but only from In that case, your rule would be:
ufw allow proto tcp from to any port 10000
Or if you only wanted 10000 to be accessible on one IP of your system (theoretically you have two in this scenario) that has an address of
ufw allow proto tcp from to port 10000

Using ufw

Once you have your rules configured, you are invariably going to have to troubleshoot issues with the service. Obviously, start with log review to perform a hypothesis of what the problem is. To enable logging use the logging option and specify the on parameter for it:
ufw logging on
Once enabled I usually like to view both /var/log/messages and /var/log/syslog for entries:
cat /var/log/syslog | grep UFW ; cat /var/log/messages | grep UFW
One of the best troubleshooting tools to prove any hypothesis that has to do with a rule is to simply delete the rule. To delete the deny http rule that we made earlier, just use the ufw command along with the delete option specifying the deny 22 rule as the rule to remove:
ufw delete deny http
Additionally, just disabling ufw will usually tell you definitively whether you are looking at a problem with a rule, allowing you to later look into disabling each rule until you find the offending rule.


Ubuntu also comes with iptables by default. iptables is the ipchains replacement introduced a number of years ago and is much more complicated and therefore flexible than ufw, although using one does not mean you cannot use the other. To get started, let’s look at the rules:
iptables -L
You will then see all of the rules on your host. If you have been enabling rules with ufw these will be listed here. You can then configure practically anything for how each chain (a chain is a series of rules for handling packets) functions. You can still do basic tasks, such as enabling ssh, but iptables will need much more information about what specifically you are trying to do. For example, to accept incoming traffic you would need to define that the chain will add an input (by appending it to the chain using -A INPUT) for tcp packets (-p tcp) on port 22 (–dport ssh) and accepting those packets (-j ACCEPT):
iptables -A INPUT -p tcp –dport ssh -j ACCEPT
That is about as simple as iptables get. I’ll try and write up more on dealing with it later but for now you should have enough information to get a little wacky with some basic firewall functionality on Linux. Enjoy.

November 24th, 2010

Posted In: Ubuntu, Unix

Tags: , , , , , , , , , ,

NFS is an old standby in the *nix world. It seems that it’s about as old as the hills and while it can be cranky at times, it’s pretty easy to setup, manage and use. Once it’s configured, you use it in a similar fashion as you do in Mac OS X Server. The client configuration is identical. To get started, let’s install the nfs-kernel-server, nfs-common and portmap packages on our Ubuntu 10.04 box:
apt-get install nfs-kernel-server nfs-common portmap
Then let’s create a directory to share (aka export):
mkdir /Homes
Then we need to define the permissions for /Homes (ends up similar in functionality to the export to option in Server Admin for Mac OS X Server users):
chown nobody:nogroup /Homes
Now, let’s open up /etc/exports and allow access to Homes by configuring it as an export. To do so, paste this line in at the bottom:
/Homes (rw,sync,no_subtree_check)
In the above line, we’re defining the path to the directory, followed by the address(es) that access the export. This could just be one IP address, or it could be a range of IP addresses. The above CIDR allows all IP addresses from to to access the export. Now save and close the file and then run the exportfs command with the -a option (all) and you should be done with the server configuration portion:
exportfs -a
Next up, let’s port scan for nfs (port 2049) from Mac OS X using the stroke command:
/Applications/Utilities/Network 2049 2049
Now, we need to verify that Mac OS X clients. From a client that can access the NFS server, open Disk Utility from /Applications/Utilities. Then, click on the File menu and select NFS Mounts… to bring up the NFS Mounts screen. From the NFS Mounts screen, click on the plus sign (+) and you will see an overlay with fields for Remote NFS URL: and Mount Location:. The Remote NFS URL: field will be nfs:// followed by the name or IP of your server followed by the name of the mount you just created. The Mount Location is going to be where on the client computer that you would like the folder to be. For most scenarios, /Volumes/ followed by the name of the mount will suffice. You can see how these shake out in the following screen: Click on Verify if it looks right and provided that the file system can be properly mounted then you’ll receive a message saying such. Then click on Save and you’re done: you should be able to browse and interact with it as needed.

November 23rd, 2010

Posted In: Mac OS X Server, Mass Deployment, Ubuntu

Tags: , , , , , , , , , , , , ,

On Sunday, I mentioned making your forward and reverse DNS entries match up. But I didn’t really discuss what to do if they don’t. For those readers moving into Ubuntu from Mac OS X Server, you’ll note that at installation time, if the hostname doesn’t match the A record and PTR for your server then it will install DNS and make them match up. The reason for this is that host names are a critical aspect in how many of the network services that modern services run. If you don’t have DNS or if you want to fire up DNS in the same manner that Mac OS X Server does it then let’s look at doing so here. First up, let’s get the packages that we’ll need installed using apt-get, which includes bind9 and dnsutils:
apt-get install bind9 dnsutils
Once those are installed, let’s define our zone and reverse zone in /etc/bind/named.conf.local:
zone “” { type master; file “/etc/bind/zones/”; }; zone “” { type master; file “/etc/bind/zones/”; };
Note: If you’re cut/copy/pasting here, the double-quotes are going to need to get replaced with unformatted ones. If you have other forward or reverse zones then you will need to add them using the same format as above. Once you’re done, save the file. Next, let’s tell the server where to look when attempting to resolve names that it does not host. This information is stored in the options array in /etc/bind/named.conf.options. This is currently commented out (commented lines start with //) so let’s uncomment the forwarders section (by removing the // in front of the lines) and change the IP of that forwarder from to the IP address of your server. It should look similar to the following when complete:
forwarders { };
Next, we’re going to create our
mkdir /etc/bind/zones touch /etc/bind/zones/ touch /etc/bind/zones/
Now that we’ve created our files, let’s edit them. First, open /etc/bind/zones/ and look for all instances of, replacing them with the domain name that you would like to use. Also, look for all of the records and make sure that they match with the name and IP that you would like to use, creating new lines for each new record: IN SOA ( 2007031001 28800 3600 604800 38400 ) IN NS IN MX 10 www IN A home IN A mta IN A ubuntu08 IN A
Next, we’ll populate the reverse zone file. You’ll need to replace my instances with your own as in the previous section. Open /etc/bind/zones/ in your favorite text editor and edit away:
@ IN SOA ( 2007031001; 28800; 604800; 604800; 86400 ) IN NS
Next, we’ll restart the DNS services to accept these massive changes we’ve made:
/etc/init.d/bind9 restart
Next, edit the /etc/resolv.conf file to set the DNS server and (optional) search domain. Then change it to look something like the following:
search nameserver
Finally, you can use dig and nslookup to test the lookups and make sure they work. For example:

November 22nd, 2010

Posted In: Ubuntu, Unix

Tags: , , , , , , , , , ,

I’ve done a number of articles on using Ubuntu 10 as a server recently, but haven’t actually looked at doing the base installation of an Ubuntu 10 host. In this example, I’ll look at using Ubuntu 10.04 Desktop. In many of the previous examples I’ve been looking at Ubuntu 10.10 Server; the reason I’m using 10.04 Desktop here is because I believe there is a smaller learning curve and that inherently Mac OS X Systems Administrators who might be following this thread actually like a GUI. There are a number of aspects of this type of setup that are simply not GUI oriented; however, the base OS can easily be, so here goes. First up, download the installer of Ubuntu from Then, install Fusion. Once installed you’ll be prompted with the welcome screen. Next, use Command-N to create a new virtual machine, orclick on the File menu and then select the New menu item (first in the list). The New Virtual Machine Assistant will then open. Click on the button to Continue without disc. The Installation Media screen of the New Virtual Machine Assistant will be next. Here, click on the radio button for Use operating system installation disk image file. You will then be prompted to select an iso. Browse to the file that you downloaded from Ubuntu before you got started and then click on the Choose button in the lower right hand corner of the screen. The Operating System and version should be filled in by default. Provided they are correct, click on the Continue button to proceed. You will then be prompted for credentials that the virtual machine will give the guest operating system when it is installed. Here, type the administrative user name and password that you want to use. You can also choose whether or not you want to make the home folder you use in Mac OS X available to the virtual machine as well as what type of access the virtual machine has to that directory. When you’re satisfied with your settings, click on the Continue button. At the Finish screen of the New Virtual Machine Assistant, you will be able to review the settings that have been provided to the virtual machine. You can change these later if you see fit. For now, let’s click on the Finish button. Finally, choose where you want to install the virtual machine at. By default, the virtual machine will be placed in the Virtual Machines folder of your home directory. I usually like to move it to a Virtual Machines directory on the root of the volume that houses my Virtual Machines, but you can place yours wherever you like. When you’ve selected the folder that best fits your needs, click on the Save button. The virtual machine will then install. This process can take some time, so it’s probably a good chance to grab a bite. When it’s done, you’ll be at the login screen for Ubuntu. Enter the username and password that you provided earlier in the process and then click on the Log In button. Once you have logged in, let’s get the networking straight. In the menu at the top of the screen, click on Settings in the VMware toolbar and then click on Network. By default, the virtual machine will be sharing the network connection of the Mac. Click on the second radio button (Connect directly to the physical network) and then the indicator light for the interface will go red. Wait for the light to go green, indicating that it’s picked up the correct interface and then close the Settings. The IP will then need to be set for the guest OS. From Ubuntu, click on the System menu at the top of the screen and then click on Preferences and then Network Connections. Here, click on the Auth eth0 interface and then click on the Edit button. You should now see the Editing Auth eth0 screen. Here, click on the IPv4 Settings tab and then provide the Address, Subnet mask (Netmask) and Gateway for your environment. You should also take this opportunity to provide a DNS server. Click on Apply to commit your changes and then reboot the virtual machine so the new network settings are enforced. When Ubuntu comes back online, you should then be able to ping your router or some other device on your network. If you decided to use Ubuntu Server then you will need to go to /etc/network/interfaces and add some lines to bring up the interface using nano or vi, then set the IP to static and then provide your settings. They would appear as follows:
auto lo iface lo inet loopback iface eth0 inet static address netmask gateway
Note: Check out ‘man interfaces’ for more information on building out your interfaces file. You would also need to provide DNS information in your /etc/resolv.conf file:
nameserver nameserver nameserver
Note: Check out man resolv.conf for more information on the correct syntax and options if you need more that what we have provided here. As you can see, doing so in the GUI vs. the command line is almost identical in terms of the amount of time it takes. Next, check the hostname. For this, let’s use the terminal emulator (not as spiffy as the one in Mac OS X, but nice nonetheless). Click on the Applications menu, Accessories and then Terminal. As with Mac OS X Server the forward and reverse names should match. Provided they do, you’re ready to get some services installed; otherwise you will need to set the hostname to be the same as the DNS name. Assuming the DNS name is
To then make it persistent across a restart, check /etc/hostname and replace the hostname with whatever you see there. Once set, you should see the hostname at the login window. Finally, I ran into an instance a few years back where Debian (not Ubuntu but close enough) wouldn’t change the hostname even after I tweaked the /etc/hosts and /etc/hostname files. Very annoying. The only thing that would work was to do it using sysctl (continuing on with the same example):
Assuming that your Ubuntu box isn’t also acting as your DNS server, you will also need to check the DNS to make sure it’s correctly set. You can use nslookup for this:

November 21st, 2010

Posted In: Mac OS X, Ubuntu, Unix, VMware

Tags: , , , , , , , , , , , , ,

OK, so you don’t necessarily call rtsp on Ubuntu QuickTime Streaming Server. Instead, you call it Darwin Streaming Server (DSS). But the end result is basically what you have exposed in Mac OS X Server, but running on Linux. You don’t have the same functionality in Server Admin, but it does work. And the key to what it does is use the rtsp protocol to stream supported files from the server to clients. It is a little tougher than just clicking on the start button, but too much tougher provided you follow these directions (thanks to the good folks of the DSS list that I’ve been a member of for a few years for taking such good notes, making this much simpler to write when I just have to move from Ubuntu 7 to 10.04). To get started (most all of this is going to need sudo or su), let’s use wget to download all the files that we’re going to need (except 1):
wget wget wget
Now let’s extract the tar file:
tar -xvf DarwinStreamingSrvr6.0.3-Source.tar
Now let’s create our qtss user and group:
addgroup –system qtss adduser –system –no-create-home –ingroup qtss qtss
We’re going to need the build-essential package from apt-get, so let’s install that before moving on:
apt-get install build-essential
The base 6.0.3 installer was only built for Mac OS X, so let’s apply the patches we used wget to pull down:
patch -p0 < dss-6.0.3.patch patch -p0 < dss-hh-20080728-1.patch
Now let’s cd into the actual dss installer directory and then grab a patched installer file, get rid of the old Install script and then grab a new one:
cd DarwinStreamingSrvr6.0.3-Source mv Install Install.old wget
Then we’ll make the Install script executable and run the Buildit (no, not Configure) then Install scripts:
chmod +x Install ./Buildit ./Install
Finally, fire up the DSS:
Now you should be able to go to a standard Mac OS X client and run a port scan of the rtsp port, 554 using stroke (swap the IP here with whatever IP or hostname that you’re using):
/Applications/Utilities/Network 554 554
DSS installs some sample movies into /usr/local/movies. Provided that the port is open, let’s open Safari and provide the following link to see if one of the stock sample movies will open:
Provided that you see the sample movie from Apple then you can move the sample movies elsewhere and drop your own in here. You’ve now got a fully functional DSS. The DSS will stream .mov, .mp4 and .3gp files. If you enable the QTSSHttpFileModule you can also stream mp3 files. If you go into the /etc/streaming folder you will see a number of files that look similar to what you have been working with on Mac OS X Server (assuming you’ve been working with Mac OS X Server). In here, you’ll find the qtusers and qtgroups files for managing users and groups in rtsp as well as the streamingserver.xml file, which is where the modules are loaded and unloaded. In /var/streaming you’ll also find a directory called logs, which is interestingly enough where the logs reside and another directory called playlists, which is where you will drop playlists in the event that you decide to make your own radio station. My music tastes are bad enough where I’ve never really considered this, but feel free to get all WKRP in Cincinnati if you so choose, I promise not to judge (or maybe just a little)… You’ll also end up likely looking to embed these rtsp streams (that seems to be what everyone does). If so, get to know the XML structure:
<?xml version=”1.0″?> <?quicktime type=”application/x-quicktime-media-link”?> <embed src=”rtsp://″ autoplay=”true” />
Ultimately, building and using QuickTime Streaming on Mac OS X Server is far superior in a number of ways to doing so in Linux. For starters, the steps here are all done by clicking on a Start button in Mac OS X Server. But even further than that, updates are even more rare to DSS. If you’re in the rack density game, a number of Mac mini servers in the right sized rack might just get you more bang for your square inch!

November 20th, 2010

Posted In: Mac OS X Server, Ubuntu, Unix

Tags: , , , , , , , , , , ,

There are a number of different ways to join Linux systems into an Active Directory domain. One is to use winbind, a popular part of Samba often used for this purpose. However, having had success with the Likewise Open directory services plug-in for Mac I decided to give their Linux solution a shot as well. After all, it is free (as in beer). And I am glad I did (well, I wasn’t when I was using Ubuntu Server 10.10, but backing back down to 10.04 (which is LTS after all) made it all better. To get started, let’s run apt-get to grab and install the likewise-open package:
apt-get -y install likewise-open
During the installation of the package, you’ll be asked for the realm name that you will eventually be joining to. Use your Active Directory domain name for most environments when prompted. Once installed, it couldn’t be easier to bind, just use the domainjoin-cli command that came with the package and tell it to ‘join’, followed by the domain name and then the user name. For example, if I was using a user called cedge to join to a domain called, my command might look like this:
domainjoin-cli join cedge
And then you’ll want to restart the host, or re-run the likewise-open startup:
update-rc.d likewise-open defaults /etc/init.d/likewise-open start
And viola, you now have an Ubuntu box bound to AD.

November 19th, 2010

Posted In: Ubuntu, Unix

Tags: , , ,

Next Page »