I’m obviously enjoying using Amazon for a number of testing applications (in addition, of course, to buying books and light bulbs from them, and not one showed up broken). So far, I’ve done articles on getting started with Amazon EC2, using the command line with EC2, whitelisting an IP address, deploying EC2 en masse, and setting up a static IP for EC2. But the S3 articles have been sparse. So now let’s look at using Amazon’s storage service (S3) from the command line. Funnily enough, if you’re going to upload your own custom Amazon Machine Images (AMIs), you’ll need to leverage S3.
When you go to bundle an image, you will have a couple of commands to run: ec2-bundle-vol will build the bundle, and ec2-upload-bundle will upload it to Amazon using an S3 bucket.
The ec2-bundle-vol command can be run against the existing boot volume, or you can specify a different volume to create an AMI from using the -v flag. You will also need to specify a destination using the -d flag (the destination will usually be /mnt) and put your private key into the image using the -k flag (the .pem file from earlier articles). The size of the AMI is defined with the -s flag (in megabytes) and the EC2 user ID with the -u flag, followed by the actual user ID. Finally, if you would like, you can exclude specified directories using the -e flag with a comma-separated list.
So, if you’re booted into a CentOS host that you want to use, the command would look something like this:
ec2-bundle-vol -d /mnt -k ~root/pk-MYSTRINGOFNUMBERS.pem -u 1234567890 -s 4096
This will create a bundle along with a manifest file (which will be in the form of an XML file). Now, on S3, create a bucket (let’s just call it EC2) and, in there, create a directory called bundles for all of our EC2 bundles to reside in. The ec2-upload-bundle command would then be used to upload the bundle to Amazon. Here, you’ll use the -b flag to define the name of the bucket that was just created and the -m flag to define the XML manifest file (which, by the way, tells the command where to look for all of the parts of the image). In this example I used a username and password, but you could also use your AWS access key and secret access key with the -a and -s flags, respectively. So an example of the command would be:
ec2-upload-bundle -b EC2 -m mybundlelocation/my.manifest.xml -u 1234567890 -p MYPASSWORD -d bundles
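If you’d rather authenticate with the AWS keys instead, a minimal sketch would look like the following (the key values here are placeholders):
ec2-upload-bundle -b EC2 -m mybundlelocation/my.manifest.xml -a MYACCESSKEY -s MYSECRETKEY -d bundles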
Now that the bundle is on S3, let’s go ahead and register it with EC2. To do so, we’ll use the ec2-register command followed by the S3 bucket name we uploaded the image to and then the relative path to the manifest:
ec2-register EC2/bundles/image.manifest.xml
The output of the above command will include a unique machine identifier; we’ll call it ami-id (almost as though we would use a $ami-id variable if we were scripting this). Now you can run the ec2-run-instances command, specifying the unique identifier as follows (replacing ami-id with the actual ID):
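ec2-run-instances ami-id
Once it fires up, ec2-describe-instances is a quick way to verify that the new instance is actually running.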
One way to leverage S3 is through the lens of EC2; another is by simply using S3 directly. I covered using S3 a little bit when I discussed using it as a location to store assets from Final Cut Server, but I will cover using it from the command line soon.
krypted May 20th, 2009
Posted In: Consulting, Network Infrastructure, sites, Unix, VMware
AMI, EC2, ec2-bundle-vol, ec2-register, ec2-run, ec2-upload-bundle, s3
Yesterday I did a quick review of the various cloud offerings from Amazon. Previous to that, I had done a review of using S3, the Amazon storage service, with Mac OS X, primarily through the lens of using S3 as a destination for Final Cut Server archives. Today I’m going to go ahead and look at using EC2 from Mac OS X. To get started, first download the EC2 tools from Amazon.
Next, log into Amazon Web Services. If you don’t yet have a login you will obviously need to create one to proceed. Additionally, if you don’t yet have a private key, you’ll need one of those too; in that case there will be a big green box to create it when you first log in. When the keys are created you can double-click on the x.509 certificate file to install it into Keychain. This is a private key, so make sure not to give it out. You can return to this screen later if you need to.
Next, go to the AWS Management Console. Because I don’t personally find the site terribly user-friendly, I like to keep the Management Console bookmarked. Once you have the Management Console open, click on Instances and then click on Launch Instance. You will then be greeted by a list of prebuilt virtual machines that you can use. Amazon has built Fedora and Windows images for you, which will be listed under the QuickStart tab of the Launch Instances screen; however, you can also click on Community AMIs to use one that has been built and made available by others within the EC2 community. These include Debian, Ubuntu and CentOS (amongst others).
Once you have picked your poison, click on Select and you will then be prompted to create a key pair specifically for the instance. The reason for this is that you might want to hand out access to a given instance to people who shouldn’t have access to all of the images in your account. You can skip this step, or enter a name for the key pair and click on Create. Now click on Continue and you’ll be prompted to create a security group. A security group controls the ports that are opened to/from your virtual machine; think of it like an Access Control List on a Cisco device, and note that you can reuse them across various instances. For Windows you’ll pretty much always want RDC (3389) open, and for *nix, typically SSH (22). Amazon tries to make this easy and pre-fills the form with common ports based on your use. Next, click Continue.
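As an aside, if you installed the EC2 command line tools from the earlier article, security groups can be managed from a shell as well; a rough sketch (the group name, ports and source address are just examples):
ec2-add-group testgroup -d "Test security group"
ec2-authorize testgroup -p 3389
ec2-authorize testgroup -p 22 -s 10.0.1.2/32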
Next, you’ll be asked to provide a name for the VM (aka AMI), the number of instances of the VM to launch, and whether the instance should be a smaller, standard type or one sized for high CPU utilization. You’ll also be able to select the security group to apply to the host based on the previous information. The name will be automatically filled in based on the template you chose, so you can click on the Change button if you’d like to supply a new name.
Next, click Launch and the AMI will start to fire up, becoming an instance. Windows AMIs take a little longer in my experience than Linux AMIs. While the instance is booting, it is worth mentioning that at this point you’ll notice the option to launch/create volumes and what Amazon calls Elastic IPs. Amazon doesn’t provide a static IP for free, as you may have noticed when you accepted their terms of service. Therefore, if you are going to create an instance that will have static access over the WAN using a static IP, you will need to go ahead and assign an Elastic IP to it. Unless, that is, you can communicate with the instance even if it has a dynamic IP (there are a ton of ways to do this). The volumes option allows you to build storage that is independent of the instance. This can be used to mount on multiple instances (although I haven’t found a way to do so concurrently) or to simply have storage independent of the instance so that you can easily move data.
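Elastic IPs and volumes can also be handled with the command line tools; a quick sketch (the instance ID, IP address, size and zone here are placeholders):
ec2-allocate-address
ec2-associate-address -i i-12345678 75.101.123.45
ec2-create-volume -s 10 -z us-east-1a
ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdf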
Now click on Instances. Here, you’ll note that your newly created instance is listed. Click on it, then click on More Actions and select Get Password (the exact menu wording will reflect the OS you chose to set up). Here, you’ll receive an option to decrypt the password using the private key. You can cat the .pem file that was downloaded when you set up the key and copy/paste the entire contents into the field. Once the field has been populated, click on the Decrypt button and you will see the Admin/root password for your new virtual host.
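For example, assuming the key pair file landed in your Downloads folder under the name mykeypair.pem:
cat ~/Downloads/mykeypair.pem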
Next, click on Connect and you’ll find instructions to connect to your new instance (for Windows it will be a dynamic DNS entry to use RDC with). You can now log in. Once you have connected, it is as though you are in a typical VM environment. Next, you’ll want to take a look at the options for Bundle Tasks (if you’re using Windows), which allow you to duplicate an AMI into multiple instances. You’ll also want to look at Volumes, as mentioned previously, and Snapshots, which can be used to back up the volumes.
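For a Linux instance, those instructions boil down to an ssh invocation along these lines (the key file name and hostname here are placeholders):
ssh -i mykeypair.pem root@ec2-75-101-123-45.compute-1.amazonaws.com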
Overall, we were able to create a new instance of Fedora, Windows or Ubuntu (even instances tuned to be Active Directory domain controllers, LAMP hosts or SQL Server hosts) faster than if we had installed from scratch, and without using any resources outside of Amazon to do so. Later, we’ll look at doing all of this from the command line. And don’t forget to stop your instance so that you don’t get billed for all that time that you’re not using it!
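With the command line tools, that last step would look something like this (with a placeholder instance ID):
ec2-terminate-instances i-12345678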
krypted May 1st, 2009
Posted In: Active Directory, Articles and Books, Business, Consulting, Network Infrastructure, SQL, Ubuntu, Unix, VMware, Windows Server
Amazon EC2, AMI, Connect, fedora, howto, Password, windows
There is a lot of talk about “the cloud” in the IT trade magazines and in general at IT shops around the globe. I’ve used Amazon S3 in production for some web assets, offsite virtual tape libraries (just a mounted location on S3) and a few other storage uses. I’m not going to say I love it for every use I’ve seen it put to, but it can definitely get the job done when used properly. I’m also not going to say that I love the speeds of S3 compared to local storage, but that’s kind of a given now, isn’t it? One of the more niche uses has been to integrate it into Apple’s Final Cut Server.
In addition to S3, I’ve experimented with CloudFront for web services (which seems a little more like Akamai than S3) and done a little testing of MapReduce for some log crunching; although the MapReduce testing has thus far been futile compared to just using EC2, it does provide an effective option if used properly. Overall, I like the way Amazon Machine Images (AMIs, essentially VMs) work, and I can’t complain about the command line environment they’ve built, which I have managed to script against fairly easily.
The biggest con thus far (IMHO) about S3 and EC2 is that you can’t test them out for free (or at least you couldn’t when I started testing them). I still get a bill for around 7 cents a month for some phantom storage I can’t track down on my personal S3 account, but it’s not enough for me to bother to call about. If you’re looking at Amazon for storage, I’d just make sure you’re using the right service. If you’re looking at them for compute firepower, then fire up a VM using EC2, read up on their CLI environment and enjoy. Beyond that, it’s just a matter of figuring out how to build out a scalable infrastructure using pretty much the same topology as if they were physical boxen.
I think the reason I’m not seeing a lot of people jumping on EC2 is the pricing. It’s practically free to test, but I think it’s one of those things where a developer has a new app they want to take to market and EC2 gives them a way to do that; then, when the developer looks at potentially paying 4x the intro amount in peak times for processing power (if a VM is always on, you would be going from $72 to $288 per month per VM, without factoring in data transfer to/from the VM at $0.10 to $0.17/GB), they get worried and just go back to whatever tried and true route they’ve always used. Or they think that it’s just going to do everything for them, and then are shocked to find that it’s just a VM and get turned off. With all of these services you have to be pretty careful with transfer rates, etc.
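The arithmetic behind those numbers, as a quick sanity check (the hourly rates are my assumptions based on 2009-era small-instance pricing):
echo "720 * 0.10" | bc   # roughly 720 hours/month at $0.10/hr = $72
echo "720 * 0.40" | bc   # at 4x that rate = $288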
I haven’t found a product to do this yet, but what I’d really like is something like vSphere/vCenter or MS VMM that could provision, move and manage VMs whether they sit on Amazon, a host OS in my garage, a bunch of ESX hosts in my office, or a customer’s office for that matter, and preferably with a cute, sexy meter to tell me how much I owe for my virtual sandboxes.
krypted April 30th, 2009
Posted In: Business, Consulting, VMware
Amazon CloudFront, Amazon EC2, amazon s3, AMI, Virtual Machines, virtualization