I’m obviously enjoying using Amazon for a number of testing applications (in addition to of course buying books and light bulbs from them, and not one showed up broken). So far, I’ve done articles on getting started with Amazon ec2, using the command line with ec2, whitelisting an IP address, deploying ec2 en masse, and setting up a static IP for ec2. But the S3 articles have been sparse. So, now let’s look at using Amazon’s storage service (S3) from the command line. Funny enough, if you’re going to upload your own custom Amazon Machine Images (AMIs), you’ll need to leverage S3.
When you go to bundle an image, you will have a couple of commands to run. The ec2-bundle-vol command will build the bundle and ec2-upload-bundle will upload the bundle to Amazon using an S3 bucket.
The ec2-bundle-vol command can be run against the existing boot volume, or you can specify a different volume to create the AMI from using the -v flag. You will also need to specify a destination with the -d flag (the destination will usually be /mnt) and put your private key into the image with the -k flag (the .pem file from earlier articles). The size of the AMI is defined with the -s flag (in megabytes) and your ec2 user ID with the -u flag. Finally, if you like, you can exclude specific directories by passing a comma-separated list to the -e flag.
So, if you’re booted into a CentOS host that you want to use, the command would look something like this:
ec2-bundle-vol -d /mnt -k ~root/pk-MYSTRINGOFNUMBERS.pem -u 1234567890 -s 4096
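The -e flag described above didn’t make it into that example, so if you wanted to keep a couple of directories out of the image, say /mnt and /tmp (those paths are just examples, swap in whatever you don’t want bundled), you could tack it on like so:
ec2-bundle-vol -d /mnt -k ~root/pk-MYSTRINGOFNUMBERS.pem -u 1234567890 -s 4096 -e /mnt,/tmp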
This will create a bundle along with a manifest file (which takes the form of an XML file). Now, on S3, create a bucket, let’s just call it EC2, and then inside it create a directory called bundles for all of our ec2 bundles to reside in. The ec2-upload-bundle command is then used to upload the bundle to Amazon. Here, you’ll use the -b flag to define the name of the bucket that was just created and the -m flag to point at the XML manifest (which, by the way, tells the command where to look for all of the parts of the image). In the example below I used a user ID and password, but you could also use your AWS access key and secret access key with the -a and -s flags respectively. So an example of the command would be:
ec2-upload-bundle -b EC2 -m mybundlelocation/my.manifest.xml -u 1234567890 -p MYPASSWORD -d bundles
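If you’d rather go the access key route mentioned above, the same upload might look something like the following (MYACCESSKEY and MYSECRETACCESSKEY are of course placeholders for your own credentials):
ec2-upload-bundle -b EC2 -m mybundlelocation/my.manifest.xml -a MYACCESSKEY -s MYSECRETACCESSKEY -d bundles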
Now that the bundle is on s3, let’s go ahead and register it with ec2. To do so, we’ll use the ec2-register command followed by the s3 bucket name we uploaded the image to and then the relative path to the manifest:
ec2-register EC2/bundles/my.manifest.xml
The output of the above command will include a unique machine identifier, which we’ll call ami-id (almost as though we would use a $ami-id variable if we were scripting this). Now, you can run the ec2-run-instances command, specifying the unique identifier as follows (replacing ami-id with the actual ID):
ec2-run-instances ami-id
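And if you were scripting this, as hinted at above, a minimal sketch might look like the following. This assumes ec2-register prints the AMI ID as the second field of its output line, so adjust the awk bit if your version formats things differently:
# register the manifest, capture the AMI ID, then launch an instance from it
ami_id=$(ec2-register EC2/bundles/my.manifest.xml | awk '{print $2}')
ec2-run-instances "$ami_id"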
One way to leverage S3 is through the lens of ec2; another is by simply using S3 directly. I covered using S3 a little bit when I discussed using it as a location to store assets from Final Cut Server, but I will cover using it from the command line soon.