krypted.com

Tiny Deathstars of Foulness

The title of this one ended up a bit more FUDy than I’d prefer, but the content’s mostly what I provided.

With the rise of SMB-friendly backup solutions like CrashPlan, Carbonite, Mozy, and Backblaze, small businesses will choose to back up their systems with alternatives to expensive tape libraries, software to drive those libraries, and countless hours spent restoring files. As more cloud-based security attacks happen, businesses will realize that having a solid backup is one of the most important aspects to device security.

Read more: http://www.virtual-strategy.com/2016/02/03/executive-viewpoint-2016-prediction-bushel-major-security-breaches-will-change-how-small-?page=0,1

Oh, and in case anyone (Mosen/Dials) is bothered by the fact that I’m reblogging articles I do above and beyond what I do on Krypted, it’s my way of keeping track of all my other writings. And no, while I do write for Huff Post now, I don’t smoke weed (like ever). But thanks for your perspectives, I’ll try and up my game since you feel my contributions to the community were not enough while I was writing three books on Mac management (two are now shipping, the third will be shortly)… 😉

Oh, and I was serious about doing a long-term podcast. Now that my after-hours schedule is freeing up, I’m game to get on that! <3

February 5th, 2016

Posted In: Articles and Books, Bushel


Our friends at VMware continue to outdo themselves. The latest release of Fusion works so well with Windows Server 2012 that even I can’t screw it up. To create a virtual machine, simply open VMware Fusion and click New from the File menu.

Click “Choose a disc or disc image.”


Select your Windows Server 2012 .iso and click Open (if you have actual optical media, Fusion will have skipped this step and sensed your installation media automatically). Click Continue back at the New Virtual Machine Assistant screen.


Click Continue when the Assistant properly shows the operating system and version.


Enter a username, password, and serial number for Windows Server if you want Fusion’s Easy Install to create the account and complete the installation automatically. If not, uncheck Easy Install (but seriously, who doesn’t like easy?). Also, choose the version of Windows Server to install (note that there’s no GUI with the Core options). Click Continue.


At the Finish screen, you can click Customize Settings if you would like to give the new virtual machine more memory or disk. Otherwise, just click Finish.


When prompted, choose where the new virtual machine will live and click Save. The VM then boots to the “Setup is starting” screen, where you’ll be prompted again for a Core vs. a GUI install (I know, you picked that earlier). Choose the GUI option and click Next.


When the setup is complete, log in, run Windows Update, and you’re done!
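If you’d rather script this step than click through it, Fusion also ships with a command line tool called vmrun that can start the new VM once it exists. A minimal sketch, assuming the Fusion 6 location for vmrun and a made-up .vmx path (both vary by install):

VMRUN="/Applications/VMware Fusion.app/Contents/Library/vmrun"   # path has moved between Fusion releases
"$VMRUN" list                                                    # show currently running VMs
"$VMRUN" -T fusion start "$HOME/Documents/Virtual Machines.localized/Windows Server 2012.vmwarevm/Windows Server 2012.vmx" nogui

The nogui option boots the VM headless, which is handy if the server is just going to sit there serving.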

April 7th, 2014

Posted In: Mac OS X, VMware, Windows Server, Windows XP


When you’re moving virtual machines around, you’ll frequently use a tool such as vMotion. But what happens when you’re trying to load new virtual machines into VMware from the .vmdks on a client system, or trying to archive a virtual machine that isn’t actually destined for another host? You can use NFS or SSH to access an ESX host, but there’s an even simpler way: the Datastore Browser.

To use the Datastore Browser, first log in with the vSphere Client. If you’ll be archiving a virtual machine, stop it first. Then click the virtual machine in the sidebar and click Summary to see the resources available to the VM. Under Storage, right-click the datastore that houses the virtual machine and click Datastore Browser.


The Datastore Browser opens. From here, you can browse assets, including the .vmdk files, .vmx files, and logs. Click one (or shift-click several) and then click the download button in the toolbar (or, with nothing selected, click the upload button if you’d like to push something to the datastore).


The download option brings up a file browser so you can choose where to drop your assets off. Once done, you can deprovision storage or simply delete assets as needed.
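And if you’d rather go the SSH route mentioned earlier, the same files live under /vmfs/volumes once SSH (Tech Support Mode) is enabled on the host. A rough sketch, with a made-up host name, datastore, and VM folder:

# pull a whole VM folder off the datastore for archiving
scp -r root@esx01.example.com:/vmfs/volumes/datastore1/myvm ./vm-archive/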

January 27th, 2014

Posted In: VMware


OpenSolaris 2009.06 is the next generation of OpenSolaris, the open source Solaris that has become the testing ground for new features bound for Sun’s popular Solaris operating system. The latest version of OpenSolaris sports a number of new features that environments both large and small are sure to find interesting, most of which have to do with more streamlined ways of managing disk, network, and other resources – both in virtualization environments and in the operating system itself.

First up is package management (using the tool appropriately called Package Manager). It’s now easier to install software managed and compiled by the OpenSolaris community. The packaging environment can access different repositories, which gives administrators the ability to try packages developed by different groups or individuals in the event that one doesn’t work. Packages installed from one repository might act differently than those installed from another, and some of the more popular packages show up in multiple repositories, but overall it’s nice to have more choices. While the package management is better, it’s still not where the Ubuntu community is.
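If you’d rather skip the Package Manager GUI, the same IPS plumbing is exposed through the pkg command. A quick sketch; the publisher URL is the contrib repository as I recall it, and the package name is just an example, so double-check both:

pkg publisher                                                      # list the repositories you pull from
pkg set-publisher -O http://pkg.opensolaris.org/contrib contrib    # add another repository
pkg search -r elisa                                                # search across repositories
pkg install SUNWgnome-media-center                                 # install a package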

OpenSolaris also now sports a new web project called SourceJuicer, which is meant to help automate the IPS build process: it collects packages, allows for easier contribution, and includes bug reporting. While SourceJuicer gives hope for the future, it still isn’t where it will need to be for wider OpenSolaris adoption.

OpenOffice.org performed beautifully without any complications, and OpenSolaris found all of the drivers for my test systems automatically (a Dell Precision T7500 and an older whitebox custom built by yours truly). Firefox was pre-loaded and zippy as always. Elisa is nice. It’s no Plex, but once I got SUNW-gnome-media-center running it was sleek, and within a couple of minutes I was watching an old episode of Systm, albeit with choppy video because of a crappy video card in my testing servers.

Once you have all of your software installed, OpenSolaris has a number of features meant to make it easier to manage your system. Chief amongst these is Crossbow, which helps you manage network resources intelligently. This includes the ability to assign two virtual machines to the same NIC without wasting resources by doing so. I was also able to easily create virtual switches, virtual routers, and containers. The VLANs worked very well, and while my testing did not net any quantifiable improvement in speed, I don’t doubt that with newer NICs I would realize some of the other benefits of the Crossbow system. Overall, the new network management is smooth, but it lacks some of the virtualization integration features that might convince me to use VirtualBox on OpenSolaris vs. using VMware.
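To give a flavor of what that looks like from the shell, Crossbow’s virtual switches and NICs are driven by dladm. This is roughly the shape of what I ran, though the stub, VNIC, and physical NIC names here are just the ones from my lab:

dladm create-etherstub stub0              # a virtual switch with no physical NIC behind it
dladm create-vnic -l stub0 vnic0          # a virtual NIC for the first VM
dladm create-vnic -l stub0 vnic1          # a virtual NIC for the second VM
dladm create-vnic -l e1000g0 -v 2 vnic2   # a VNIC tagged onto VLAN 2 of a physical NIC
dladm show-vnic                           # confirm what you've built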

As a filer, OpenSolaris is also improved. I was able to use the ACLs to granularly define access to shares and to limit access to the SMB server by IP and subnet, which, granted, I’ll likely not use, but it’s a nice feature to have. I set up the new HA (High Availability) clustering options and had an Active/Passive pair of servers up and running in minutes. I was able to unplug the network interface on one machine and continue working without any connection issues from Windows 7 or Mac OS X. However, I wasn’t able to then move to an Active/Active pair, but that could definitely be my own fault…

I found Time Slider, OpenSolaris’ competitor to Time Machine, lacking in sleekness compared to Time Machine, but it has a number of features, such as snapshots, that make it worthwhile, especially when running on a file server. Having said that, I couldn’t quickly find a way to expire snapshots based on disk capacity, something that seems all too commonly overlooked. I may figure out how to do so in the future (or someone may comment on this article and explain how, hint, hint). Or I might just write a script to do it for me…
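Since I threatened to write that script, here’s the rough shape it would take: a sketch that assumes Time Slider’s zfs-auto-snap snapshot naming and simply destroys the oldest automatic snapshots once the pool crosses a capacity threshold (the pool name and threshold here are mine):

#!/bin/sh
POOL=rpool
LIMIT=80    # start pruning once the pool is more than 80% full
while [ "$(zpool list -H -o capacity $POOL | tr -d '%')" -gt "$LIMIT" ]; do
  # oldest auto-snapshot first, courtesy of sorting on creation time
  OLDEST=$(zfs list -H -t snapshot -o name -s creation | grep zfs-auto-snap | head -1)
  [ -z "$OLDEST" ] && break    # nothing left to prune
  zfs destroy "$OLDEST"
done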

I am still wrangling with getting OpenSolaris installed on a Sun Ultra 27 workstation sitting in my lab; it’s included in the hardware compatibility list, but the installer doesn’t seem to complete properly. When I get around to figuring out why, I’ll try to post an explanation. I was also unable to install VMware Tools on a virtual machine running OpenSolaris, but VirtualBox performed beautifully both as the host and the guest. I also had a bear of a time with iSCSI, but from reading through the forums, more iSCSI support will be released pretty soon.

Overall, I found the latest OpenSolaris to be a welcome addition to my lab – next up, getting all my OpenStorage moved over to the new rev…

July 25th, 2009

Posted In: Ubuntu, Unix


There is a lot of talk about “the cloud” in the IT trade magazines and in general at IT shops around the globe. I’ve used Amazon S3 in production for some web assets, offsite virtual tape libraries (just a mounted location on S3), and a few other storage uses. I’m not going to say I love it for every use I’ve seen it put to, but it can definitely get the job done when used properly. I’m also not going to say that I love the speeds of S3 compared to local storage, but that’s kind of a given now, isn’t it… One of the more niche uses has been to integrate it into Apple’s Final Cut Server.

In addition to S3, I’ve experimented with CloudFront for web services (which seems a little more like Akamai than S3) and done a little testing of MapReduce for some log crunching – although the MapReduce testing has thus far been futile compared to just using EC2, it does provide an effective option if used properly. Overall, I like the way Amazon Machine Images (AMIs – effectively VM templates) work, and I can’t complain about the command line environment they’ve built, which I have managed to script against fairly easily.
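For a sense of that command line environment, the EC2 API tools cover the basic instance lifecycle. A sketch; the AMI and instance IDs below are placeholders:

ec2-describe-images -o amazon               # browse Amazon's public AMIs
ec2-run-instances ami-xxxxxxxx -t m1.small -k my-keypair
ec2-describe-instances                      # grab the public DNS name once it's running
ec2-terminate-instances i-xxxxxxxx          # stop the meter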

The biggest con thus far (IMHO) with S3 and EC2 is that you can’t test them out for free (or at least you couldn’t when I started testing them). I still get a bill for around 7 cents a month for some phantom storage I can’t track down on my personal S3 account, but it’s not enough for me to bother to call about… If you’re looking at Amazon for storage, I’d just make sure you’re using the right service. If you’re looking at them for compute firepower, then fire up a VM using EC2, read up on their CLI environment, and enjoy. Beyond that, it’s just a matter of figuring out how to build out a scalable infrastructure using pretty much the same topology as if they were physical boxen.

I think the reason I’m not seeing a lot of people jumping on EC2 is the pricing. It’s cheap to test, and a developer with a new app to take to market sees EC2 as a way to do that. But when the developer looks at potentially paying 4x the intro amount for processing power in peak times (if a VM is always on, you’d go from $72 to $288 per month per VM, without factoring in data transfer to/from the VM at $0.10 to $0.17 per GB), they get worried and just fall back to whatever tried and true route they’ve always used to take it to market. Or they think it’s going to do everything for them, are shocked to discover it’s just a VM, and get turned off… With all of these services you have to be pretty careful with transfer rates, etc.
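For reference, the arithmetic behind those numbers: a small instance at $0.10 per hour running around the clock is 24 hours × 30 days = 720 hours, or about $72 per month; at four times that rate ($0.40 per hour), the same always-on VM runs about $288 per month, before any data transfer charges.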

I haven’t found a product that does this yet, but what I’d really like is something like vSphere/vCenter or Microsoft’s VMM that could provision, move, and manage VMs whether they sit on Amazon, on a host OS in my garage, or on a bunch of ESX hosts in my office – or in a customer’s office, for that matter – preferably with a cute meter to tell me how much I owe for my virtual sandboxes.

April 30th, 2009

Posted In: Business, Consulting, VMware


A few days ago I noticed a post in Tim O’Reilly’s Twitter feed asking whether it would matter if people ran a Mac or a PC once everyone had migrated to the cloud. Well, there are a few things about Mac OS X that make it fairly difficult to run in a cloud environment:

  • EFI – Mac OS X doesn’t use a BIOS like most operating systems do. This makes the bootup process fairly difficult in a distributed computing environment where the guest OS would be OS X and the host OS would be something else.
  • Lack of Firepower – I love the Xserve. I always have. They’re some of the most beautiful rack-mount servers you can get. But even an octo-core Xserve is gonna choke if you throw too many VMs at it. If I were architecting a large, distributed computing environment, I would want some blades, an IBM Shark, etc. Having said this, Xgrid could pose an interesting option if VMware or Parallels were to allow distributed processing through it.
  • Licensing – If you read your EULA, Mac OS X Server is the only version licensed for a cloud type of environment. This allowance was only recently introduced, and it has left Mac OS X without Xen or other open source alternatives in the virtualization space.
Having said all of this, Mac OS X is a wonderful system. There is a lot it has to offer, and I, as much as anyone, would like to see it capable of utilizing services like Amazon S3, but I would be on the lookout for some other strategic moves rather than a full-blown Mac OS X capable of running independently in a cloud environment. For example:
  • Mac Backups to the Cloud – Time Machine, BakBone, Atempo, Retrospect, etc. I cannot imagine that one of them won’t be able to back up to Google or Amazon S3 at some point in the near future. GUI-level support needs to be there for it to gain wide-scale adoption with the Mac user base (like using Backup.app to back up to MobileMe, but with enough capacity to back up an Xsan and enough bandwidth to do full backups in less than 72 hours).
  • Xgrid – There needs to be some kind of port of Xgrid to Amazon EC2, or support from render farm companies for EC2 or some other cloud/grid computing platform.
  • Apple – The pro apps will need to support SaaS, software plus services, etc. Many Apple users are leveraging Google Apps, but once it comes from Apple it will be legitimate.
So look for it. You’ll notice the companies that are really leveraging trends in IT as they come to market with products that allow the Mac to leverage the cloud. If Apple makes a push toward this, you’ll see more wide-scale adoption, but don’t expect much and you won’t risk getting too let down.

August 20th, 2008

Posted In: Mac OS X, Mac OS X Server, MobileMe, Time Machine


Want to play with Virtual Machines but can’t get ahold of VMware or Parallels?  Then check out Q.  It’s small, quick and works like a charm:

http://www.kju-app.org/

May 10th, 2008

Posted In: Mac OS X, VMware



VMware and Parallels let you run Windows applications on Mac OS X by running them inside a full Windows operating system. But what if you don’t want to buy a whole operating system, install it, support it, etc.? Well, there’s another tool that may work for you. It’s called CrossOver and can be found at:

http://www.codeweavers.com/products/cxmac/

But it doesn’t work with just any old application.  The compatibility matrix can be found here:

http://www.codeweavers.com/compatibility/browse/name/

In my testing it also didn’t work with all of the applications listed on the compatibility matrix, or it worked but certain features didn’t. So make sure to thoroughly test the applications you plan to use with it prior to purchasing.

February 17th, 2007

Posted In: Mac OS X, VMware


It starts innocently enough: you run a few virtual machines (VMs) for testing. Then you slowly get hooked. Next you put your VMs into production. Then you start buying only servers with a hypervisor built in. Then you start looking at SANs to leverage virtual clusters and virtualize your volumes. The next thing you know, you’re looking on vmware.com for a new virtual dog and banging your head on the keyboard when you can’t find one. At this point there are 8 VMs running on every laptop and handheld in your organization, sometimes 3 different applications running VMs on one host, and often not a single backup of any of them.

You see, there’s a lot of upside to going virtual, but once you start down that path it’s hard to turn back, nor would you want to for the most part. But many organizations make the plan to virtualize without factoring in the full cost of doing so. As a virtualization initiative gathers steam, you start to hit the pain points everyone encounters along the way: virtual sprawl, lack of backups, the single point of failure that a host operating system represents, and so on.

So what do you do? Well, take a step back and think about what you’re doing. Put a plan in place. Consider storage a good starting point, as it’s often the single largest capital investment you’ll make on any virtualization project. Consider performance, both for the hardware that will run the VMs and the storage they will sit on. Consider throughput, I/O, and what is best left unvirtualized. Then bust out your Visio or OmniGraffle and make a really nice network topology diagram that lays out where all of your pain points are. Then take that diagram and the costs you’ve figured out in your virtualization plan and find someone to help you make sure it makes sound financial sense. Then go to it!

But the problem we see is that too often the plan never gets a peer review from those in other parts of the organization. And this is critical, because that review leads to buy-in far beyond your pay grade. Sure, it’s fine to install VMware Server, Xen, XenSource, or Parallels Server and run a 1-to-1 deployment of VMs in your environment, but if you want all the bells and whistles you’ll eventually need – ESX, clustered CPU and RAM, snapshots, SAN-based VMs – then make sure you understand the long-term implications in terms of cost and support before you go down that path.

February 13th, 2007

Posted In: Business
