Category Archives: cloud

cloud personal Product Management

When Product Management Meets Social Justice

In technology, we often find things that, as developers, engineers and yes, even product managers, we think are just plain cool. In agile development, we create epics, where we lay out customer stories and tie them into a set of features; however, while we’re working towards our goals, we often stumble into those technical places where we discover we can do something super cool. And we sometimes want to weave those into our stories as product features simply because we want to make stuff we’re technically proud of. But should we?

Too often we don’t consider the social ramifications of features. Time and time again we hear stories of what seemed like a cool feature getting abused. When we’re creating software, we think of the art. We want to change the world after reading too much Guy Kawasaki. We want to build, sometimes just for the sake of building. And sometimes we come to a place where we think we just have to add something to a product. Then we stop and think about it, and we find ourselves torn about whether that feature should go back to the obscure place we found it. In times like that, when we’re torn about what to do, we have to remember that “we are the good people” and do what’s right.

That is all.

cloud Network Infrastructure

New AWS OmniGraffle Stencil

Before I post the new stencil, let me just show you how it came to be: I needed to do something, which required me to do something else, which in turn caused me to need to create this.


Anyway, here’s the stencil. It’s version 0.1, so don’t make fun: AWS.gstencil.

To install the stencil, download it, extract it from the zip and open it. When prompted, click Move to move it to the Stencils directory.

Reopen OmniGraffle and create a new object. Under the list of stencils, select AWS and you’ll see the objects on the right to drag into your doc.


Good luck writing/documenting/flowcharting!

cloud Mac OS X Server

Finding The Data You Need

Finding the data you need is tough, especially in environments with files in Google Docs, Dropbox, Box, wikis, file servers, portals and any other place that makes it hard to aggregate exactly what you need.

cloud Mac Security Network Infrastructure

Configure Syslog Options on a Meraki

Meraki has a syslog option. To configure a Meraki to push logs to a syslog server, open your Meraki Dashboard and click on a device. From there, click on “Alerts & administration”.


At the “Alerts & administration” page, scroll down to the Logging section. Click the “Add a syslog server” link and type your syslog server’s name or IP address. Put the port number into the Port field. Then choose which types of events to export. These can be Event Log, Flows or URLs, where:

  • Event Log: The messages from the dashboard under Monitor > Event log.
  • Flows: Inbound and outbound traffic flows generate syslog messages that include the source and destination addresses and port numbers.
  • URL: HTTP GET requests generate syslog entries.

Note that you can direct each type of traffic to a different syslog server.
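Once saved, it’s worth confirming that events actually arrive. As a minimal sketch, assuming you pointed the Meraki at a syslog box listening on the common UDP port 514 (swap in whatever port you entered above), you can watch for the traffic on the server itself:

sudo tcpdump -i any udp port 514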

Active Directory cloud Consulting iPhone Kerio Mac OS X Mac OS X Server Mac Security Mass Deployment Microsoft Exchange Server Network Infrastructure Windows Server

Dig TTL While Preparing For A Migration

Any time we migrate data from one IP address to another, where a DNS record points users at that data, we need to keep the amount of time it takes to repoint the record to a minimum. To see the TTL of a given record, let’s run dig using +trace, +nocmd to turn off showing the version and query options, +noall to turn off display flags, +answer to still show the answer section of the response and, most importantly for these purposes, +ttlid to toggle showing the TTL on. Here, we’ll use these to look up the TTL for the www.krypted.com A record:

dig +trace +nocmd +noall +answer +ttlid a www.krypted.com

The output follows the CNAME (as many a www record happens to be) to the A record and shows the TTL value (3600) for each:

www.krypted.com. 3600 IN CNAME krypted.com.
krypted.com. 3600 IN A 199.19.85.14
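If you just want the number itself, say to confirm a TTL actually dropped before cutting over, awk can pull the second column from the same answer section. A quick sketch against the record above (dropping +trace so only the resolver’s answer comes back):

dig +nocmd +noall +answer +ttlid a www.krypted.com | awk '{print $2}'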

We can also look up the MX record using the same structure, just swapping the a for an mx and the FQDN for the domain name itself:

dig +trace +nocmd +noall +answer +ttlid mx krypted.com

The response is a similar output, showing the TTL (3600) for each MX record:

krypted.com. 3600 IN MX 0 smtp.secureserver.net.
krypted.com. 3600 IN MX 10 mailstore1.secureserver.net.

cloud Mac OS X

One Liner To Install gcloud for Managing App Engine Instances

I had previously been using the gcutil command. But I cheated a little on the one-liner promise to get the new tool, gcloud, installed:

curl https://dl.google.com/dl/cloudsdk/release/install_google_cloud_sdk.bash | bash ; unzip google-cloud-sdk.zip ; ./google-cloud-sdk/install.sh

The installation shell script is interactive and will ask if you want to update your bash profile. Once it has run, quit your terminal app; the new session will let you log into App Engine using the gcloud command followed by auth and then login:

gcloud auth login

Provided you’re logged into Google using your default browser, you’ll then be prompted to accept the federation. Click Accept.


The gcloud command can then be used to check your account name:

gcloud config list

To set a project as active so you can manage it, use the set option (or unset to stop managing it):

gcloud config set project kryptedmuncas

You can then use the components, sql or interactive verbs to connect to and manage instances. Each of these commands interfaces with the API, so if you ever find that you’ve exceeded what this simple command provides, you can always hit the API directly as well. I found that interactive was my favorite, as I could figure out what limitations I had there and then work out how to accomplish the same tasks with commands.
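To pull those together, here’s a minimal sketch of scripting the commands above. The project ID is just the one from the example (swap in your own), and this assumes gcloud is already in your path after the installer updated your bash profile:

#!/bin/bash
# Sketch: authenticate, point gcloud at a project, then confirm the result.
PROJECT="kryptedmuncas" # placeholder; use your own project ID
gcloud auth login
gcloud config set project "$PROJECT"
gcloud config list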

cloud Network Infrastructure SQL Ubuntu Unix VMware Windows Server

Scripting Azure On A Mac

Microsoft Azure is Microsoft’s cloud services platform. Azure can host virtual machines and act as a location to store files. However, Azure can do much more as well: it can provide an Active Directory instance, SQL database access, hosted Visual Studio, web site hosting and BizTalk services. All of these can be managed at https://manage.windowsazure.com.


You can also manage Windows Azure from the command line on Linux, Windows or Mac. To download command line tools, visit http://www.windowsazure.com/en-us/downloads/#cmd-line-tools. Once downloaded, run the package installer.

When the package is finished installing, visit /usr/local/bin, where you’ll find the azure binary. Once installed, you’ll need to configure your account from the windowsazure.com site to work with your computer. To do so, log into the windowsazure.com portal.


Once logged in, open Terminal and then use the azure command along with the account option and the download verb:

azure account download

This command downloads the .publishsettings file for the account you’re logged in as in your browser. Once downloaded, run azure with the account option and the import verb, passing the path to your .publishsettings file from https://manage.windowsazure.com/publishsettings/index?client=xplat:

azure account import /Users/krypted/Downloads/WindowsAzure-credentials.publishsettings

The account import then completes and your account is ready to use with azure. Once imported, run azure with the account option and then storage list:

azure account storage list

You might not have any storage configured yet, but at this point you should see the following to indicate that the account is working:

info: No storage accounts defined
info: account storage list command OK

You can also run the azure command by itself to see some neat ASCII art (although the azure logo doesn’t really come through in this spiffy cut-and-paste job):

info: _ _____ _ ___ ___________________
info:        /_\  |__ / | | | _ \ __|
info: _ ___ / _ \__/ /| |_| |   / _|___ _ _
info: (___ /_/ \_\/___|\___/|_|_\___| _____)
info: (_______ _ _) _ ______ _)_ _
info: (______________ _ ) (___ _ _)
info:
info: Windows Azure: Microsoft's Cloud Platform
info:
info: Tool version 0.7.4
help:
help: Display help for a given command
help:   help [options] [command]
help:
help: Open the portal in a browser
help:   portal [options]
help:
help: Commands:
help:   account    Commands to manage your account information and publish settings
help:   config     Commands to manage your local settings
help:   hdinsight  Commands to manage your HDInsight accounts
help:   mobile     Commands to manage your Mobile Services
help:   network    Commands to manage your Networks
help:   sb         Commands to manage your Service Bus configuration
help:   service    Commands to manage your Cloud Services
help:   site       Commands to manage your Web Sites
help:   sql        Commands to manage your SQL Server accounts
help:   storage    Commands to manage your Storage objects
help:   vm         Commands to manage your Virtual Machines
help:
help: Options:
help:   -h, --help     output usage information
help:   -v, --version  output the application version

Provided the account is working, you can then use the account, config, hdinsight, mobile, network, sb, service, site, sql, storage or vm options. Each of these can be invoked along with a -h option to show a help page. For example, to see a help page for service:

azure service -h

You can spin up resources including sites, storage containers and even virtual machines (although you might need to create templates for VMs first). As an example, let’s create a new site using the git template:

azure site create --git

Overall, there are a lot of options available in the azure command line interface. The web interface is very simple, with options in the command line interface mirroring the options in the web interface. Running and therefore scripting around these commands is straightforward. I wrote up some Amazon stuff previously at http://krypted.com/commands/amazon-s3cmd-commands, but the azure controls are really full-featured and I’m becoming a huge fan of the service itself the more I use it (which likely means I’ll post more articles on it soon).
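As a quick example of that scripting, here’s a minimal sketch that branches on the storage output shown earlier. The grep string is just the “No storage accounts defined” info line from above, so treat this as illustrative rather than a robust check:

#!/bin/bash
# Sketch: branch on the azure CLI's text output.
# The grep string matches the "No storage accounts defined" line shown above.
if azure account storage list 2>&1 | grep -q "No storage accounts defined"; then
  echo "No storage accounts configured yet"
else
  azure account storage list
fi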

cloud Ubuntu Unix

25 Helpful Chrome OS Shell (crosh) Commands

To open Crosh:

Control-Alt-T

Find commands:

help

Find debugging commands:

help_advanced

To switch to a more bash-like command prompt:

shell

To see the version of Chrome OS running on your Chromebook:

sudo /opt/google/chrome/chrome --version

To show the operating system name:

uname -a

If the operating system is a bit old, update it using the update_engine_client command:

update_engine_client -update

To see the BIOS of your Chromebook, open up a command prompt (Control-Alt-T) and use the following command:

sudo /usr/sbin/chromeos-firmwareupdate -V

Look for (or grep for) the BIOS version in the output.

To record some sound from the microphone, use the sound command:

sound record NUMBEROFSECONDS

To see the Vital Product Data, or configuration information such as time zone, UUID, IMEI, model, region, language, keyboard layout and serial number:

sudo dump_vpd_log --full --stdout

Or to be specific about what you’re looking for, grep for it:

sudo dump_vpd_log --full --stdout | grep "serial_number"

To capture some logs for debugging, use systrace:

sudo systrace

To manage the mouse and keyboard acceleration and autorepeat options, use the xset command:

xset m

Trace a network path (like traceroute or tracert):

tracepath www.google.com

To run standard network diagnostics:

network_diag

To capture some packets while troubleshooting network connections, use the packet_capture command:

packet_capture

Check the type, version, etc on your touchpad:

tpcontrol status

You can also debug network connections by logging data going through the wifi, cellular or ethernet interface using the network_logging command. To do so for a normal 802.11 connection:

network_logging wifi

To configure WPA information:

wpa_cli

Accept an SSL Cert by using the enterprise_ca_approve command:

enterprise_ca_approve --allow-self-signed https://entca.krypted.com

Many standard Linux commands work as well, including route, mount, cat, cp, chmod, reboot, echo, tr, cut, mkdir, sed, if/then, ls, cd, pwd, su, sudo, etc. To see IP address information:

sudo ifconfig eth0

To see all of the running processes:

top

To see a user’s uid and gid, use id:

id

To ping Google:

ping www.google.com

To connect to another system, you can use ssh and there’s an ssh_forget_host command to clear a given host from your hosts list.

To see a list of the commands you’ve run:

shell_history

Finally, to close the command prompt:

exit

cloud

Factory Reset (Powerwash) Chromebooks

If you ever lose track of the password on your Chromebook, find that the Chromebook is running oddly or want to sell it, you can remove your Google account and re-add it. The easiest way to do this is a feature called Powerwash. To pull it up, open Settings and then click on Advanced Settings. There, you’ll see the Powerwash button. Click it and you’ll remove all of the user accounts on the device, basically performing a factory reset.

Powerwash can also be run by clicking Restart while holding down Control-Alt-Shift-R at the login screen. This brings up a Powerwash prompt where you simply need to click Powerwash to remove your data. The first time you log in once the Powerwash process is complete, your apps and data start to sync back to the Chromebook.

cloud FileMaker Mac OS X Mac OS X Server Mac Security Mass Deployment Network Infrastructure Time Machine Xsan

Obtain Information From Watchman Monitoring Using a Script

Watchman Monitoring is a tool used to monitor computers. I’ve noticed recently that there’s a lot of traffic on the Watchman Monitoring email list that shows people want a great little (and by little I mean inexpensive from a compute-time standpoint) monitoring tool to become an RMM (Remote Management and Monitoring) tool. The difference here is in “Management.” Many of us actually don’t want a monitoring tool to become a management tool unless we are very deliberate about what we do with it. For example, consider the script that gets handed a machine name of ‘rm -Rf /’ because some ironic hipster of a user decided to name their hard drive that, well, because they can. That script was just supposed to run a fix permissions, but that ironic jackass of a user in his v-neck with his funny hat and unkempt beard has just accidentally cross-site-scripted himself, he’s now crying out of his otherwise brusque no-lens-having glasses, and you’re now liable for his data loss because you didn’t sanitize that computer name variable before you sent it to some script.

Since we don’t want the scurrilous attention of hipsters everywhere throwing caustic gazes at us, we’ll all continue using a standard patch management system like Casper, Absolute, Munki, FileWave, etc. Still, many organizations can get value out of using Watchman Monitoring (and tools like it) to trigger scripted events in their environment.

Now, before I do this, I want to make something clear: I’m just showing a very basic thing here. I’m assuming that people would build some middleware around something a little more complicated than curl, but given that this is a quick and dirty article, curl’s all I’m using for examples. I’m also not giving up my API key, as that would be silly. Therefore, if I were using a script, I’d have two variables in here. The first would be $MACHINEID, the client/computer ID you would see in Watchman when looking at an actual computer.


The second variable is my API token. This is a special ID that our friends at Watchman provide. Unless you’re very serious about building some scripts or middleware right now, rather than bug them for it, give it a little while and it will be available in your portal. I’ve used $APITOKEN as my variable for it.

The API, like many these days, is JSON. It doesn’t send entire databases or even query results, but instead an expression of each variable. So, to see all of the available variables for our machine ID, we’re going to use curl (I like to add -i to see my headers) and do the following lookup:

curl -i "https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN"

This is going to spit out a bunch of information delimited with commas, where each variable and then the contents of that variable are stored in quoted text. To pull out what I want, I’m simply going to awk for a given position (using the comma as my delimiter instead of the default space). In this case, machine name is what I’m after:

curl -i "https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN" | awk -F"," '{ print $4}'

And there you go. It’s that easy. Great work by the Watchman team in making such an easy-to-use and standards-compliant API. Because of how common JSON is, I think integrating a number of other tools with this (kinda’ like the opposite of the Bomgar implementation they already have) is very straightforward and should allow for serious automation for those out there asking for it. For example, it would be very easy to take output like the following and weaponize it to clear caches before anyone has to bug you:

"plugin_id":1237,"plugin_name":"Check Root Capacity","service_exit_details":"[2013-07-01] WARNING: 92% (276GB of 297GB) exceeds the 90% usage threshold set on the root volume by about 8 GB."
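And if counting comma positions ever gets fragile (it will the moment a value grows an extra comma), you can pretty-print the JSON and read fields by name instead of position. Here’s a minimal sketch, assuming the python that ships with OS X:

curl -s "https://318.monitoringclient.com/clients/$MACHINEID.json?auth_token=$APITOKEN" | python -m json.tool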

Overall, I love it when I have one more toy to play with. You can automatically inject information into asset management systems, trigger events in other systems and, if need be, allow the disillusioned youth the ability to erase their own hard drives!