Post 2 of 6: Getting Started – defining success criteria

Next topic in this series is one that can make or break your journey to the cloud – defining what you will consider a “great success!” when all is said and done. Keeping that definition in view is essential to keeping things on track throughout the journey. You’ll need to set some realistic goals and objectives, get management buy-in, line up IT/engineering resources, and maybe even secure financial commitment ahead of time (especially if funding is usually a challenge). Determining where you are today vs. where you need to be can not only accelerate the process, but will also set some ground rules for everyone involved. Remember, this is a journey – it’ll take some time and commitment, but you should never lose sight of the objectives.
Virtualize: So, what are your objectives? I’ll assume they are business-driven, a directive of sorts, or perhaps you’re at the starting line and need something to pitch because you know how the business will benefit from a cloud model. We’ll knock out the must-haves: reduce cost and overhead, be ‘green’, take control (of your infrastructure), simplify and centrally manage IT, and improve usability. For some of you the primary objective is to be/stay competitive. There is one word that sums all of this up – efficiency! At the end of the day it’s all about efficiency, and this should be number one on your list of objectives. If your infrastructure is more than 50% virtualized, you’ve got the right idea. If you are under 50% or starting from scratch, you’ve got some additional work to do…more on how to get there later in this series (hint: say no to physical provisioning). IT efficiency is key to success and happens to be the number one business driver behind virtualization and cloud computing. It’s also the best way to get management buy-in (the ROI pitch certainly helps accelerate adoption). And, of course, there’s everyone’s favorite initiative – Green IT. Virtualization and consolidation are key to Green IT. It doesn’t take a lot of math to determine the energy savings you will realize when you go from 1,000 servers to 100. The green priority can vary from being a driver of virtualization to simply a nice side effect; in terms of cloud, it falls under the overwhelming need to be efficient. From this point forward any new workloads should be virtualized…and you should have a plan in place by now for any existing physical workloads – we’ll call those “legacy” for the sake of making a point.
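The 1,000-to-100 consolidation math really is quick. Here’s a back-of-the-envelope sketch in Python – the per-server wattage and energy price are illustrative assumptions, not measurements, and cooling overhead is ignored:

```python
# Rough consolidation savings estimate. All figures below are
# illustrative assumptions, not measured values.
WATTS_PER_SERVER = 400   # assumed average draw per physical server
HOURS_PER_YEAR = 24 * 365
COST_PER_KWH = 0.10      # assumed energy price, USD

def annual_energy_cost(server_count: int) -> float:
    """Annual power cost for a fleet of servers (excludes cooling)."""
    kwh = server_count * WATTS_PER_SERVER * HOURS_PER_YEAR / 1000
    return kwh * COST_PER_KWH

before = annual_energy_cost(1000)
after = annual_energy_cost(100)
print(f"before: ${before:,.0f}/yr  after: ${after:,.0f}/yr  "
      f"saved: ${before - after:,.0f}/yr")
```

Even with conservative numbers, a 10:1 consolidation ratio cuts the power bill by an order of magnitude – that’s the ROI pitch in three lines of arithmetic.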
Standardize: Next on the list is Standardization – standardizing reduces overall fragmentation, management burden, and complexity, and ensures your workloads will play nice between internal and public clouds or cloud providers. In the context of cloud, the standardization we’re mostly interested in occurs above the host hardware abstraction layer. Virtualizing your infrastructure adds a bit of vendor agnosticism – compute, storage, and network are simply resources up for grabs (a controlled grab, of course). Although it is important to consider deploying a (standardized) repeatable physical architecture for ease of scalability, the focus here is to standardize the services, processes, and policies within the cloud. Services will include Infrastructure (IaaS: vCloud API, OVF), Cloud Application Platform (PaaS: vFabric), Desktop (DaaS: View, ThinApp), and Software (SaaS: Zimbra, Directory Services), among others. Process standards include the delivery and provisioning mechanisms and even the workloads themselves, such as using a standard set of “master” templates. We’ll touch on the need for these services to be made available through self-service catalogs, auto-provisioned, and centrally managed later. Standardizing ensures your cloud will remain manageable as it scales and helps you keep control of variations that would otherwise have to be maintained individually.
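To make the “master template” idea concrete, here’s a minimal sketch of what a standardized catalog entry might look like. The field names and sizes are invented for illustration – this is not a vCloud or OVF schema:

```python
from dataclasses import dataclass, field

# Hypothetical "master" template record. Every workload is stamped from
# one of these, so variations stay controlled and maintainable.
@dataclass
class MasterTemplate:
    name: str
    os: str
    vcpus: int
    memory_gb: int
    disk_gb: int
    tags: list = field(default_factory=list)  # e.g. hardening/backup policy

# A tiny standard catalog -- entries are made up for the example.
CATALOG = [
    MasterTemplate("web-base", "Linux", 2, 4, 40, ["hardened", "patched-monthly"]),
    MasterTemplate("db-base", "Linux", 4, 16, 200, ["hardened", "backup-tier-1"]),
]

def find_template(name: str) -> MasterTemplate:
    """Look up a template by its catalog name."""
    return next(t for t in CATALOG if t.name == name)
```

The point isn’t the data structure – it’s that every deployable workload traces back to a small, named, centrally maintained set.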
Automate: So what’s next? What good is an ultra-efficient and standardized virtual architecture if you haven’t adopted a modern means of provisioning the workloads running on it? Once built, you want to enable authorized end users to self-provision needed workloads based on a predetermined set of criteria, authority, and service-level requirements. The reaction I normally get when I stress the importance of end user enablement is, “Why would I let end users provision their own workloads?” Understand that the end user can be anyone from a customer to development engineers to support personnel. Ultimately, the decision of who has access to which resources is up to the business and system administrators. Each use case will require an assessment to determine the proper access controls…but I digress.
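That access-control assessment boils down to a policy check before anything gets provisioned. A minimal sketch, assuming made-up roles, template names, and quotas:

```python
# Hypothetical policy table: which roles may self-provision which
# templates, and how many VMs each may own. All values are invented.
POLICY = {
    "developer": {"templates": {"web-base", "db-base"}, "max_vms": 5},
    "support":   {"templates": {"web-base"},            "max_vms": 2},
}

def may_provision(role: str, template: str, current_vms: int) -> bool:
    """Gate a self-service request against the predetermined policy."""
    rule = POLICY.get(role)
    if rule is None:
        return False  # unknown roles get nothing
    return template in rule["templates"] and current_vms < rule["max_vms"]

print(may_provision("developer", "db-base", 3))  # True
print(may_provision("support", "db-base", 0))    # False
```

The business decides what goes in the table; the portal just enforces it – which is why letting end users provision their own workloads is less scary than it sounds.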
Automation is alive and well in your virtualized infrastructure today. With the vSphere hypervisor (ESX/ESXi) under vCenter management, chances are you have already taken advantage of automation tools such as DRS, HA, template provisioning, cloning, host profiles, etc. All these tools provide the ability to be more hands-off than ever before. But we’re talking cloud here – you need to automate as much as possible to increase productivity, reduce time to deployment, and continue to drive down the costs of operations and maintenance. Taking automation to the cloud will include delivering an interactive user interface, providing point-and-click access to pre-built workloads (OSes, applications, vApps), and delivering all these workloads in a service catalog offering. Once built based on the needs of the business, the service catalog provides a one-stop shop for application provisioning and deployment across however many organizations you need to support. Once provisioned, these applications will need to find a home and have access to the appropriate compute, network, and storage resources – perhaps based on a set of predefined SLAs or security requirements. Automating the placement of workloads across varying tiers of resources (resource pools) ensures your customers get the level of service they are paying for. Utilizing vSphere virtualization and cloud-centric tools, such as vCloud Director, will help you achieve new levels of hands-off automation that will continue to drive efficiency up and costs down.
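SLA-driven placement like this can be sketched as a first-fit search over tiered resource pools. The tier names, pool capacities, and ranking scheme below are all assumptions for illustration, not how vCloud Director actually does it:

```python
# Tiers ordered best -> worst; a workload may land in its requested
# tier or better. Pools and capacities are invented for the example.
TIERS = ["gold", "silver", "bronze"]

pools = [
    {"name": "gold-pool",   "tier": "gold",   "free_ghz": 10},
    {"name": "silver-pool", "tier": "silver", "free_ghz": 40},
]

def place(workload_sla: str, needed_ghz: float):
    """Return the first pool at or above the SLA tier with enough headroom."""
    max_rank = TIERS.index(workload_sla)
    for pool in pools:
        if TIERS.index(pool["tier"]) <= max_rank and pool["free_ghz"] >= needed_ghz:
            pool["free_ghz"] -= needed_ghz  # reserve the capacity
            return pool["name"]
    return None  # no pool satisfies the SLA -- escalate or queue
```

The real machinery is far richer (affinity, storage, network), but the principle is the same: the SLA, not an administrator, decides where the workload lands.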
Manage: Management is about taking the business processes that may already be in place in a typical datacenter and adapting them to the cloud. It will be more important than ever to understand what your workloads are doing, what your capacity trends and needs are, how resources are being consumed, and who is consuming them. Implementing capacity management tools will ensure you stay one step ahead of resource requirements by providing detailed resource consumption information and trend data. Tools such as VMware CapacityIQ will also help you understand where you may have to make some adjustments, plan next year’s budget (my favorite), or determine what taking on new business means to your infrastructure.
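The core of that trend analysis can be approximated with a simple linear fit: project utilization forward and see when you hit a threshold. A toy sketch with made-up monthly samples (the real tools do far more, of course):

```python
def linear_fit(ys):
    """Least-squares slope/intercept for y sampled at x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Invented sample: % cluster CPU consumed, one reading per month.
util = [52, 55, 59, 61, 66, 68]
slope, intercept = linear_fit(util)

# Months (from month 0) until the cluster crosses an 80% ceiling.
months_to_80 = (80 - intercept) / slope
print(f"growing ~{slope:.1f}%/month; ~{months_to_80:.1f} months to 80%")
```

That single projected number – months of runway left – is exactly the kind of thing that turns a hardware request into a budget line item the CIO will approve.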
A private cloud may include many different sub-organizations (business units), which can be managed as a single enterprise entity or independently based on the needs of the business. This is common in large enterprises with many sub-orgs that haven’t been able to figure out how to function as one. But that’s okay – consolidating IT services doesn’t mean consolidating business functions if that isn’t in the books. The use of virtual organizations and logical segregation of resources will help deliver IT as a service across all BUs without compromising data integrity. The goal here is to consolidate resources but maintain each sub-organization as its own entity – with its own SLAs, access controls, and processes. This is all accomplished by using built-in security capabilities and policies for secure multi-tenancy, resource pools for resource guarantees, SLA definitions, etc. But that’s only half the battle – in order to support multiple organizations, which may have varying SLA and resource requirements, you need to understand the costs associated with each. Take advantage of chargeback tools, such as VMware’s vCenter Chargeback, to pinpoint the costs associated with resource guarantees or levels of service. As the private cloud provider, you may offer multiple levels of service with different cost structures. You can then charge for those SLAs or, at the very least, report back to the CIO what each org is consuming at varying levels of granularity. And there’s a side effect – attaching a dollar amount also helps prevent VM sprawl and unnecessary consumption of resources. Having the right management tools on hand will ensure you have a grasp on what your cloud is doing, will help you determine how and when to scale, and helps provide all the appropriate documentation and trend analysis needed to justify growth.
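At its simplest, chargeback is reserved resources multiplied by a per-tier rate card. A minimal sketch – the tiers, rates, and usage figures below are invented for illustration, not vCenter Chargeback output:

```python
# Hypothetical monthly rate card, priced by SLA tier. Gold costs more
# because it carries stronger resource guarantees.
RATES = {
    "gold":   {"vcpu": 30.0, "gb_ram": 10.0},
    "silver": {"vcpu": 15.0, "gb_ram": 5.0},
}

# Made-up reserved resources per organization.
usage = [
    {"org": "engineering", "tier": "gold",   "vcpu": 40, "gb_ram": 160},
    {"org": "marketing",   "tier": "silver", "vcpu": 10, "gb_ram": 32},
]

def monthly_bill(entry: dict) -> float:
    """Cost of one org's reservations at its tier's rates."""
    rate = RATES[entry["tier"]]
    return entry["vcpu"] * rate["vcpu"] + entry["gb_ram"] * rate["gb_ram"]

for e in usage:
    print(f'{e["org"]}: ${monthly_bill(e):,.2f}/month')
```

Even if no money actually changes hands internally, putting a dollar figure next to each org’s reservations is what curbs VM sprawl – nobody hoards resources that show up on their bill.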
Optimize: And now for the final success criterion – Optimization. By now it seems you should be optimal, right? Well, you’re close, but you may have a little more to do depending on your objectives and scope. So far we have focused on the success criteria associated with a single cloud delivered from a single datacenter. Optimizing sets you up for delivering services across multiple clouds, sourced from beyond the boundaries of a single datacenter, as if they were one. If delivering service levels to your customer means you need to take advantage of resources in your Washington datacenter, you should certainly be free to do that while your Miami datacenter is servicing other workloads and customers. You should also be able to move those workloads to another private cloud or even a public cloud provider as you see fit. The end user needs access to workloads and resources – who cares where those workloads are physically located? Optimizing the cloud helps control SLAs, costs, security, and performance levels. Whether or not you’ll be delivering cloud services from multiple datacenters, optimizing the cloud ensures you can get there when you need to.
In this Series:
1 – insert definition here – defining the cloud
2 – Getting Started – defining success criteria
6 – Get a Grip! – managing your cloud
++++
@virtualjad