krypted.com

Tiny Deathstars of Foulness

Before I post the new stencil, let me just show you how it came to be (I needed to do something, which required me to do something else, which in turn caused me to need to create this). Anyway, here’s the stencil. It’s version .1, so don’t make fun: AWS.gstencil. To install the stencil, download it, extract it from the zip, and then open it. When prompted, click Move to move it to the Stencils directory. Reopen OmniGraffle and create a new object. Under the list of stencils, select AWS and you’ll see the objects on the right to drag into your doc. Good luck writing/documenting/flowcharting!

June 5th, 2014

Posted In: cloud, Network Infrastructure


Earlier we looked at using s3cmd to interact with the Amazon S3 storage cloud. Now we’re going to delve into using Another S3 Bash Interface. To get started, first download the scripts and then copy the hmac and s3 commands into the ec2 folder created in previous walkthroughs. To use the s3 script, you need to store your Amazon secret key in a text file and set two environment variables. The INSTALL file included with the package has all the details. The only tricky part I ran into (and, judging from the comments on Amazon, other people ran into as well) is how to create the secret key text file. Go into your environment variables in ~/.bash_profile and add S3_ACCESS_KEY_ID (your S3 access key ID from the AWS site) and S3_SECRET_ACCESS_KEY (the path of the file that holds your S3 secret key); I’ll show an example of both after we create that file. If the file that stores the key is called ~/SUPERSECRET, then to create it, copy the key to your clipboard from the AWS site and then run echo -n, sending the contents of the pasted line to the file:
echo -n MYPASTEDKEY > ~/SUPERSECRET
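For reference, a minimal sketch of those two additions to ~/.bash_profile might look like the following (the access key ID here is a made-up placeholder, the secret key path matches the example file above, and you’ll want to open a new shell or source the profile afterward):
# placeholder values; substitute your own access key ID
export S3_ACCESS_KEY_ID=AKIAEXAMPLEKEYID
export S3_SECRET_ACCESS_KEY=~/SUPERSECRET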
The -n switch in that echo command tells echo not to include a trailing newline character, which results in a text file of exactly 40 bytes. Once I got the key file created correctly, the s3 script started working, and I was able to upload, download, and list objects in S3. Next, we’ll list your buckets:
s3 buckets
Then we’ll list the contents of a bucket called images:
s3 list images
Next, we’ll upload a file called emerald.png from the desktop of our computer:
s3 put images emerald.png ~/Desktop/emerald.png
Now let’s try and get the same file and just leave it somewhere else so we can compare the two:
s3 get images emerald.png ~/Documents/emerald.png
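If you want to double-check that the two copies actually match (my own quick sanity check, not part of the toolkit), compare their checksums; on a Mac that’s md5, on Linux it’s md5sum:
md5 ~/Desktop/emerald.png ~/Documents/emerald.png
The two hashes should be identical if the round trip worked.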
Now let’s get rid of the file:
s3 rm images emerald.png
And then to just remove all the files from the bucket:
s3 rmrf images
If you notice, this toolkit is very similar to the s3cmd kit that we looked at earlier. It’s a little more limited, but I thought it might come with less of a learning curve or be easier to script against, depending on what you need; a quick sketch of what that scripting could look like follows.
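As a hypothetical example (my own sketch, assuming the put syntax shown above), you could loop through a folder of images and push each one into the images bucket:
# upload every PNG on the desktop to the images bucket
for f in ~/Desktop/*.png; do
  s3 put images "$(basename "$f")" "$f"
done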

May 21st, 2009

Posted In: Consulting, Network Infrastructure, sites, Unix, VMware


I’m obviously enjoying using Amazon for a number of testing applications (in addition, of course, to buying books and light bulbs from them, and not one showed up broken). So far, I’ve done articles on getting started with Amazon EC2, using the command line with EC2, whitelisting an IP address, deploying EC2 en masse, and setting up a static IP for EC2. But the S3 articles have been sparse. So, now let’s look at using Amazon’s storage service (S3) from the command line. Funny enough, if you’re going to upload your own custom Amazon Machine Images (AMIs) you’ll need to leverage S3. When you go to bundle an image, you will have a couple of commands to run. The ec2-bundle-vol command builds a bundle, and ec2-upload-bundle uploads the bundle to Amazon using an S3 bucket. The ec2-bundle-vol command can be run against the existing boot volume, or you can specify a different volume to create an AMI from using the -v flag. You will also need to specify a destination using the -d flag (the destination will usually be /mnt), and you will need to put your private key into the image using the -k flag (the .pem files from earlier articles). Also, the size of the AMI is defined with the -s flag (in megabytes), and the EC2 user ID is defined using the -u flag followed by the actual user ID. Finally, if you would like, you can choose to exclude specified directories (using a comma-separated list) with the -e flag. So if you’re booted to a CentOS host that you want to use, the command would look something like this:
ec2-bundle-vol -d /mnt -k ~root/pk-MYSTRINGOFNUMBERS.pem -u 1234567890 -s 4096
This will create a bundle along with a manifest file (which takes the form of an XML file). Now, on S3, create a bucket; let’s just call it EC2, and then in there, let’s create a directory called bundles for all of our EC2 bundles to reside in. The ec2-upload-bundle command is then used to upload the bundle to Amazon. Here, you’ll use the -b flag to define the name of the bucket that was just created and the -m flag to define the XML manifest file (which, by the way, tells the command where to look for all of the parts of the image). Here I used a username and password, but you could also use your AWS access key and secret access key with the -a and -s flags, respectively. So an example of the command would be:
ec2-upload-bundle -b EC2 -m mybundlelocation/my.manifest.xml -u 1234567890 -p MYPASSWORD -d bundles
Now that the bundle is on s3, let’s go ahead and register it with ec2.  To do so, we’ll use the ec2-register command followed by the s3 bucket name we uploaded the image to and then the relative path to the manifest:
ec2-register EC2/bundles/image.manifest.xml
The output of the above command will include a unique machine identifier; we’ll call it ami-id (almost as though we would use an $ami-id variable if we were scripting this). Now you can run the ec2-run-instances command, specifying the unique identifier as follows (replacing ami-id with the actual ID):
ec2-run-instances ami-id
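Since that $ami-id comment above all but begs for scripting, here’s a minimal sketch of gluing the register and run steps together (my own example; it assumes the register output is still the classic two-field IMAGE ami-xxxxxxxx line, and since a shell variable can’t contain a hyphen, the variable becomes ami_id):
# capture the AMI ID from ec2-register, then launch an instance with it
ami_id=$(ec2-register EC2/bundles/image.manifest.xml | awk '{print $2}')
ec2-run-instances "$ami_id"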
One way to leverage S3 is through the lens of EC2; another is by simply using S3 directly. I covered using S3 a little bit when I discussed using it as a location to store assets from Final Cut Server, but I will cover using it from the command line soon.

May 20th, 2009

Posted In: Consulting, Network Infrastructure, sites, Unix, VMware


Render farms, cluster nodes and other types of distributed computing often require a lot of machines that don’t have a lot of stuff running on them and are only needed at certain times. Such is the life of a compute cluster, which is what EC2 is there for. Because cluster nodes are so homogeneous by nature, you can deploy them en masse. Picking up where I left off with deploying EC2 via the command line, we’re going to look at spinning up, let’s say, 100 virtual machines with the large designation (from a pricing standpoint). As with the previous example, we’re going to use ami-767676 as the AMI name (although you’ll more than likely want to choose an image of your own using the ec2-describe-images command) and use a predefined key called my-keypair to access these hosts. The command would then be:
ec2-run-instances -n 100 -t m1.large -k my-keypair ami-767676
The -n flag told ec2-run-instances how many instances to fire up and the -t flag told it what type of instances. We can go a step further and start to automate the setup of software on these instances. When you use ec2-run-instances you can specify a file to be copied to the instances; for example, -f MYSCRIPT.zip would copy a zipped script to them. You can also prebuild security groups and then use the -g flag to assign all of the instances you create to a security group that provides you with SSH access. Once you have a key, a local script and SSH access to the hosts, it is straightforward to loop through the instances, initiating the script (a rough sketch of such a loop is below). When you run ec2-run-instances you will get a line similar to the following for each instance:
INSTANCE i-767676 ami-767676 pending
The second position here (i-767676 in the example) is the name of the instance, which can be obtained by running the output through awk '{printf $2}' to grab only the names. For some of mine, I use a script stored within the cluster and not made available outside, and have the script update itself using a method I described last week. There are tons of other ways of doing that kind of thing as well. Overall, the ability to provision servers for compute power has never been easier. No wait times for massive quantities of iron to roll in on pallets and then having the n00b rack mount it all. Now, just pay Amazon Web Services by the hour if/when you need a little firepower. Finally, it is worth noting that if you have a need where Amazon has provided an existing API to hook into EC2 (e.g., SOAP), then that’s often a far more elegant (and therefore efficient) use of resources. Before you begin provisioning en masse, make sure it’s the right use for the right situation!
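For what it’s worth, here’s a rough sketch of that loop (my own example, not Amazon’s; it assumes the instances have finished booting, that ec2-describe-instances reports the public hostname in the fourth field of its INSTANCE lines, and that my-keypair.pem and MYSCRIPT.sh are the key and script mentioned above):
# copy the script to each running instance and execute it over SSH
for host in $(ec2-describe-instances | awk '/^INSTANCE/ && /running/ {print $4}'); do
  scp -i my-keypair.pem MYSCRIPT.sh root@"$host":/tmp/
  ssh -i my-keypair.pem root@"$host" 'bash /tmp/MYSCRIPT.sh'
done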

May 4th, 2009

Posted In: Business, Mass Deployment, Network Infrastructure, Unix, VMware
