Caching In – the magic behind vSphere’s CPU scheduler

One of the most important objectives of virtualizing a new or existing infrastructure is efficiency…both operational and financial. Virtualization wouldn’t be where it is today without a means of getting the most bang for the buck and clearly demonstrating the value-add of system consolidation — whether within your labs, server rooms, datacenters, or across the entire enterprise. To justify virtualization projects of any significance, you have to hit your leadership where it hurts (tickles)…the corporate wallet. What better way to do that than consistently reducing the acquisition and operational costs of your project?

Of course I’m not suggesting you’ll be swimming in cash (or a nice bonus) the moment you deploy your first hypervisor, although it is a first step in the right direction. Witnessing a rack of 10 x 2U servers reduced to a single host (I’m being conservative), while centralizing management and often increasing performance, is nothing short of wonderful. How about 100 of those same servers into a single rack? 100 loaded racks into 10? Enough said. VMware’s value proposition is very clear in this arena. In keeping with my promise of no sales pitches, I’ll spare you the ROI/TCO chatter. Just consider this – the cost of maintaining 100 legacy servers is drastically greater than that of acquiring 10 brand new uber-hosts sporting the latest chipsets, energy efficiency, memory/CPU capacity, and all the necessary vSphere licensing.

0 to Cloud in 6 Posts, Part 1: defining the cloud

Post 1 of 6:  insert definition here – defining the cloud

If I had a dime for every time I found myself defining the “Cloud,” I would have collected a small fortune. Okay, maybe not a fortune, but somewhere in the area of $70 (after taxes). But if I were required to use an identical definition each time…well, I’d be broke. This is because the Cloud has many different meanings, often depending on who’s asking. The ultimate goal of cloud computing is fairly well understood, and it looks something like this: providing infrastructure as a service — from somewhere…anywhere…it doesn’t matter where — and delivering it seamlessly, using proven industry standards, across the ether to some (any) end node. Whether it be an application, an operating or development environment, or a desktop, the idea is to provide some calculated level of compute capability to the downstream workloads or users who demand it…as they demand it.
Was that clear enough? I’m on month seven here at VMware and have been trying to wrap my mind around cloud computing at massive scale. I have access to some of the most innovative people and technology in this arena, and yet that’s the best definition of the Cloud I can offer.