One of my favorite things to do is whiteboard. In my line of work, the whiteboard allows me to tell a story…one that can be broad in coverage, yet tuned on-the-fly to best align with the needs of the audience. It started as a “cloud” whiteboard back when vCloud Director (vCD) was released and the first vCloud Suite offering was announced. The first storylines were all about VMware’s cloud and management framework and leveraging vCD to align with a set of industry-accepted characteristics that defined “cloud”. There have been several iterations over time as new technologies (and acquisitions) came to fruition, with an evolving storyline to highlight modern challenges and the transformative nature of the Software-Defined Datacenter.

The whiteboard has been delivered on your standard everyday office whiteboard, table-tops, glass walls, flip charts, notepads, napkins, and electronically via PowerPoint, iPad, and digital sketch pads. Regardless of delivery medium, I have found the whiteboard to be the most effective means of articulating the often-confusing details and associated benefits of the Software-Defined Datacenter at any level of depth…and without yawn-generating, ADD-invoking death by PowerPoint.

My most recent iteration of the SDDC whiteboard doubles as field and partner enablement, so I had to put a little more thought into the storyline to ensure it closely resembles how customers have typically leveraged vSphere, NSX, VSAN, and the vRealize Suite to evolve their existing datacenters and quickly realize the benefits of the SDDC. I also wanted to experiment with some new tools to make it fun…I used VideoScribe for the whiteboard animations and ScreenFlow for cleanup and the audio track.

So, without further ado, I present to you my latest VMware SDDC Technical Whiteboard.

 

As I mentioned earlier, this version is being used for enablement, so I’ve also provided the storyline script below for your reading (and learning) pleasure…

It started with virtualization

VMware’s software-defined datacenter strategy is designed to help customers modernize their infrastructure by paving a path to move beyond basic (compute) virtualization and into a world where manual processes are replaced with analytics, policy-driven automation, and innovative tools to help make sense of it all.

Before we jump into how we get there, let’s review where most of our customers are today…

#vsphere

It all started with VMware leading the charge, inventing x86 virtualization a couple of decades ago. With virtualization, we abstracted and pooled the underlying infrastructure and delivered these resources through resource pools, datastores, and virtual switches…all managed with vCenter Server.

vSphere transformed the datacenter landscape and led to the levels of virtualization we enjoy today. Early virtualization was — and still is — an incredible CAPEX/OPEX story. Each time a physical server was virtualized, businesses were able to enjoy immediate returns on that investment, which were measured by an essential metric — the cost per application. Today, vSphere is the foundation of the majority of virtualized workloads around the world. This is still where the vast majority of our customers are — basic virtualization, minimal automation and many manual processes.

While VMware continues to heavily invest and innovate in this arena, the need to further drive down the cost per application requires evolving how apps and services are built, delivered, and managed.

#public_cloud

On the other side of the equation are the Public Cloud providers, such as vCloud Air, IBM, and AWS. The public cloud offers a compelling story for organizations that need the ability to scale beyond available private resources, either permanently or as needed to support scale-out and elastic apps. Some orgs are no longer interested in being in the infrastructure business at all and will leverage the seemingly infinite supply of compute resources available in the public cloud. There are many viable use cases for migrating apps to the public cloud, but an even more compelling strategy is to blur the lines between Private and Public resources to provide the benefits of public cloud without losing control of proven business processes. We’ll get back to that later.

#cloud_fabric

While we’ve got an eye on the public cloud, there is tremendous value in transforming existing / private infrastructures to align with the characteristics (and benefits) of a public cloud. To do this, we must further abstract the underlying resource providers, regardless of platform, provider, and location, and build and deliver a heterogeneous Cloud Fabric. The cloud fabric is a logical construct that aggregates resources into consumable buckets and provides the ability to assign admins to manage them. The fabric is backed by any combination of resource providers and service tiers. In this case we have some on-prem vSphere as well as public cloud resources. Once allocated, these buckets, or VDCs, are elastic and can be expanded or contracted per the needs of the business.
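To make the fabric concept a bit more concrete, here is a minimal, purely illustrative sketch of an elastic resource bucket backed by heterogeneous providers. The class and attribute names (`VirtualDatacenter`, `ResourceProvider`, and so on) are hypothetical and do not correspond to any vCloud or vRealize API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResourceProvider:
    """A backing source of capacity: an on-prem vSphere cluster, vCloud Air, AWS, etc."""
    name: str
    cpu_ghz: float
    memory_gb: float

@dataclass
class VirtualDatacenter:
    """An elastic, consumable bucket of capacity carved out of the cloud fabric."""
    name: str
    providers: List[ResourceProvider] = field(default_factory=list)

    def expand(self, provider: ResourceProvider) -> None:
        # Elasticity: grow the VDC by attaching another resource provider.
        self.providers.append(provider)

    def total_memory_gb(self) -> float:
        return sum(p.memory_gb for p in self.providers)

# One logical bucket, backed by a mix of on-prem and public cloud resources
gold_vdc = VirtualDatacenter("gold-tier")
gold_vdc.expand(ResourceProvider("onprem-vsphere-cluster01", cpu_ghz=400.0, memory_gb=2048.0))
gold_vdc.expand(ResourceProvider("vcloud-air-vdc01", cpu_ghz=200.0, memory_gb=1024.0))
print(gold_vdc.total_memory_gb())  # 3072.0
```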

#tenancy

Next up is tenancy…a set of policies and logic that determines how users consume the cloud fabric, regardless of the source of the resources that make up the fabric, and which services will be available to any number of users or groups. We create user and consumption policies, sub-allocate available fabric resources, and entitle applications and services to the consumers per the business requirements. The consumers in this case can be functional groups, business units, or, in the “IT automating IT” use case, the IT staff themselves.
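Here is an equally simple sketch of what those tenancy policies boil down to: a sub-allocated quota plus a set of entitled services per consumer, checked at request time. All names and numbers are made up for illustration; this is not how vRA or vCD actually model tenants.

```python
# Hypothetical tenants with quotas sub-allocated from the fabric and entitled services
tenants = {
    "finance-bu": {
        "quota_gb": 512,
        "entitled_services": {"oracle-db", "3-tier-web-app"},
    },
    "it-ops": {  # the "IT automating IT" case: IT staff are the consumers
        "quota_gb": 2048,
        "entitled_services": {"nsx-edge", "k8s-cluster", "oracle-db"},
    },
}

def can_request(tenant: str, service: str, requested_gb: int, used_gb: int) -> bool:
    """Consumption policy: the service must be entitled and the quota must not be exceeded."""
    t = tenants[tenant]
    return service in t["entitled_services"] and used_gb + requested_gb <= t["quota_gb"]

print(can_request("finance-bu", "oracle-db", requested_gb=64, used_gb=400))  # True
print(can_request("finance-bu", "k8s-cluster", requested_gb=16, used_gb=0))  # False: not entitled
```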

#intelligent_ops

At this point we’ve abstracted and combined hybrid resources through the cloud fabric and defined consumption policies that align with business requirements. As far as I can tell, the cloud is optimized for peak performance and delivers high levels of assurance. Apps and VMs are running optimally and I have high confidence in reporting all of this to my leadership. Well, actually…I have no clue what’s happening holistically. Sure, I can dig into vCenter to gain some visibility into individual VMs, but we’re running a diverse, heterogeneous, and hybrid cloud here. We need visibility into all aspects of an application’s service dependencies…throughout its lifecycle. And for that, we’ve got Intelligent Operations (aka vROps).

First and foremost, Intelligent Ops delivers analytics: algorithms that are constantly churning to determine the overall health, performance, and relative capacity of the software-defined datacenter. We care about what is about to happen, not so much what just happened. Traditional monitoring tools that alert me when my business-critical Oracle RAC cluster has just failed do me no good; I need to know something is about to fail…along with some prescriptive analysis behind it. At a high level, we’ll focus on Health, Risk, and Efficiency…

  • Health – the Health metric provides visibility into the overall health of all systems, answers the question “how am I doing?”, and clearly identifies potential issues that may need immediate attention. Whether it’s an application starved for resources, network or storage IO issues, or an interface on a physical host in a specific cluster causing intermittent connectivity…Health quickly identifies the root cause and provides a path to remediation to ensure we can maintain SLAs.
  • Risk – Risk indicates potential problems that might eventually degrade the performance of the SDDC, such as calculated resource shortfalls or capacity ceilings. The cloud model helps provide the perception of unlimited resources, so you can imagine the business backlash of running out of them. Risk models expected constraints based on the rate of resource consumption (e.g. app provisioning) vs. available resources and provides guidance on what’s needed to address them. For example, Risk will report that, based on deploying an average of 100 machines/week, you will run out of memory resources in 3 weeks (see the sketch after this list). You can then model a solution to determine the appropriate response, such as adding more memory or hosts to the mix. But before you run off and throw more hardware at the problem, there may be an opportunity to recoup resources from machines that are underutilizing them. For that, we look at the Efficiency metrics.
  • Efficiency – Efficiency helps identify opportunities to optimize resource allocations based on the actual needs of each workload. So rather than running out and buying more hardware, I can pull a detailed report that identifies where CPU, memory, and storage resources are being wasted per machine. For example, out of the 5k machines in my environment, here’s a list of the top 200 suspected resource hogs…with sizing recommendations, based on the intimate knowledge ops has of each machine throughout its lifecycle. As a result, you may be able to put off investing in new hardware, further reducing that key business metric: the cost per application.
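The sketch below shows the flavor of the Risk and Efficiency calculations described above: a capacity runway based on the current provisioning rate, and a reclamation pass over oversized machines. The VM counts, memory figures, and 50% threshold are invented for illustration; vROps uses far richer analytics than this.

```python
# Risk: how long until capacity runs out at the current consumption rate?
AVG_MEMORY_PER_VM_GB = 8
provisioning_rate_vms_per_week = 100      # the 100 machines/week example from above
free_memory_gb = 2400.0                   # remaining capacity across the fabric (hypothetical)

def weeks_of_runway(free_gb: float, vms_per_week: int, gb_per_vm: float) -> float:
    return free_gb / (vms_per_week * gb_per_vm)

print(weeks_of_runway(free_memory_gb, provisioning_rate_vms_per_week, AVG_MEMORY_PER_VM_GB))
# -> 3.0 weeks of memory left at 100 VMs/week

# Efficiency: flag machines whose observed peak demand is well below their allocation
vms = [
    {"name": "app-web-012", "allocated_gb": 32, "peak_demand_gb": 6},
    {"name": "app-db-003",  "allocated_gb": 64, "peak_demand_gb": 58},
]
oversized = [vm for vm in vms if vm["peak_demand_gb"] < 0.5 * vm["allocated_gb"]]
reclaimable_gb = sum(vm["allocated_gb"] - vm["peak_demand_gb"] for vm in oversized)
print([vm["name"] for vm in oversized], reclaimable_gb)  # ['app-web-012'] 26
```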

Lastly, another critical component of intelligent ops is visibility into the thousands of logs being generated by the SDDC stack and the broader ecosystem. The log analysis engine, Log Insight, delivers highly scalable log management with intuitive dashboards and analytics to gain visibility into the inner workings of the entire stack. The result is deep operational visibility, faster troubleshooting, and a more holistic approach to cloud ops.

#it_automation

At the top of the stack is the cloud management function that brings it all together…IT Automation, brought to you by vRealize Automation. Without automation, a software-defined datacenter is just a datacenter, so you can imagine the important role vRA plays in the stack. For starters, vRA is the primary user interface for policy-based consumption of the SDDC. It is the unified service design and delivery engine that allows IT to rapidly build and deliver traditional apps, cloud services, and cloud-native / hybrid applications through a common service catalog, all while aligning with existing business processes, including governance and ITSM integration.

#extensibility

One of vRA’s most powerful features is delivered via its extensibility framework, which provides integration and orchestration of the broader SDDC stack and supporting ecosystem of hardware and tools to deliver app-centric everything. Key to delivering these capabilities is vRealize Orchestrator, a workflow engine that provides a library of native and 3rd-party plugins that extend the entire datacenter. These technologies are designed to augment time-consuming and expensive human interaction: the manual processes that drive down efficiencies and increase costs each time an application is provisioned. Extensibility is available throughout the lifecycle of an application or service, for example, invoking an external process as a “Day 2” action.
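As a rough mental model of that extensibility, the sketch below wires hypothetical external processes to lifecycle events of a machine. It is illustrative only and does not reflect the actual vRO plugin or vRA event broker interfaces.

```python
from typing import Callable, Dict, List

# Hypothetical lifecycle-hook registry, illustrative only
hooks: Dict[str, List[Callable[[dict], None]]] = {}

def on(event: str):
    """Register an external process (CMDB update, IPAM call, ticketing) against a lifecycle event."""
    def register(fn: Callable[[dict], None]):
        hooks.setdefault(event, []).append(fn)
        return fn
    return register

def emit(event: str, payload: dict) -> None:
    for fn in hooks.get(event, []):
        fn(payload)

@on("machine.provisioned")
def register_in_cmdb(payload: dict) -> None:
    print(f"CMDB record created for {payload['name']}")

@on("machine.decommissioned")  # a "Day 2"-style action firing at end of life
def release_ip(payload: dict) -> None:
    print(f"IP released for {payload['name']}")

emit("machine.provisioned", {"name": "app-web-013"})
emit("machine.decommissioned", {"name": "app-web-001"})
```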

#costing

And, finally, the business needs to make sense of the SDDC and understand the bottom line: how does all of this impact cost? As I mentioned previously, a key metric we use to understand the cost impact is the total cost per application. But it is also important to get cost visibility into all aspects of the SDDC. For that, we leverage vRealize Business. vRB provides a granular cost breakdown per component or for the entire operation…and everything in between. Using this information, we can now make cost-driven and actionable business decisions, such as where apps should be deployed and how resources should be consumed. We also gain visibility into the cost associated with wasted resources and can take action to reclaim them.
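The arithmetic behind the headline metric is simple enough to sketch. The dollar figures and app count below are entirely made up; vRB derives the real inputs from the environment rather than from constants.

```python
# Hypothetical monthly figures for a single environment
monthly_infrastructure_cost = 180_000.0   # amortized hardware, licenses, facilities
monthly_operations_cost = 60_000.0        # staff, tooling, support
deployed_applications = 400

cost_per_application = (monthly_infrastructure_cost + monthly_operations_cost) / deployed_applications
print(f"${cost_per_application:,.2f} per app per month")  # $600.00 per app per month

# Cost of waste: idle capacity that could be reclaimed instead of buying more hardware
wasted_capacity_fraction = 0.15
print(f"${monthly_infrastructure_cost * wasted_capacity_fraction:,.0f}/month tied up in reclaimable resources")
# -> $27,000/month tied up in reclaimable resources
```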

For the consumer, this is all accessible via the unified service catalog. Once authenticated, only applications and services specifically entitled to that user are available for consumption. Existing items are also available for lifecycle actions — including any custom actions — but only if they have been entitled.

#nsx

Let’s shift back to infrastructure for a moment. Perhaps one of the most compelling technologies in VMware’s SDDC is NSX. NSX has redefined networking and security for the software-defined world. But don’t mistake NSX for just another SDN solution…it is a Network Virtualization platform that overlays and abstracts the underlying network infrastructure to break the boundaries of traditional network services, much like vSphere did for servers…and VSAN for storage. NSX unleashes applications from their physical boundaries and can wrap each application with dedicated networking and security services, regardless of the physical underpinnings.

As if that’s not awesome enough, VMware’s SDDC takes things up a notch — the combination of vRealize Automation and NSX provides the ability to build and provision application-centric networking and security policies that are bound to the lifecycle of any given app. vRA integrates natively with NSX and injects automation into the mix, providing dynamic services such as on-demand routed networks, on-demand NAT, on-demand load balancers and even on-demand security groups…that’s in addition to a standard consumption model where all NSX-backed services are available for consumption by applications. Apps and networks come together in vRA’s converged blueprint designer using natural drag-n-drop motions. Blueprinting enables the build once, deploy many approach, which drastically reduces overhead associated with repetitive processes. And with a click of a single checkbox, vRA and NSX can automatically deploy app isolation zones around each deployment and, with the help of NSX policies, deploy micro-segmented applications.

Perhaps just as important is the fact that these networking and security services are bound to the application’s lifecycle. That means once the application is decommissioned, the networks, security groups, load balancers, and firewall policies bound to the app are also decommissioned, closing a well-known gap that would otherwise increase security risk.
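To illustrate the idea of app-centric networking bound to the application’s lifecycle, here is a conceptual sketch of a blueprint that declares its own on-demand network and security policy, and a teardown that removes everything the deployment created. The structure is invented for this example and is not the actual vRA converged blueprint schema or the NSX API.

```python
# Conceptual blueprint: app tiers plus the on-demand network and security services they need
blueprint = {
    "name": "3-tier-web-app",
    "components": {
        "web": {"count": 2, "image": "web-template", "networks": ["app-net"]},
        "db":  {"count": 1, "image": "db-template",  "networks": ["app-net"]},
    },
    "networks": {
        "app-net": {"type": "on-demand-routed", "load_balancer": True},
    },
    "security": {
        "app_isolation": True,  # the single-checkbox isolation zone
        "micro_segmentation_policy": "web-to-db-3306-only",
    },
}

def deploy(bp: dict, deployment_id: str) -> dict:
    """Everything instantiated here is tagged with the deployment, so teardown can find it."""
    owned = [f"{deployment_id}-{net}" for net in bp["networks"]] + [f"{deployment_id}-security-group"]
    return {"id": deployment_id, "owns": owned}

def decommission(deployment: dict) -> None:
    # Networks, security groups, and load balancers created for the app die with it
    print("destroying:", deployment["owns"])

d = deploy(blueprint, "dep-0042")
decommission(d)  # destroying: ['dep-0042-app-net', 'dep-0042-security-group']
```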

#nsx_ops

And finally, one of the missing pieces of the SDDC portfolio, specifically around NSX, was analysis of the underlying network so that we could intelligently approach incorporating NSX services. Recently, VMware acquired a company called Arkin (now vRealize Network Insight) to fill this gap. vRNI provides visibility across virtual and physical networks and uses analytics to provide guidance for optimizing network performance and availability. Based on the collected analysis, vRNI provides planning and recommendations for implementing micro-segmentation and ensuring existing policies are optimal for scale.

#benefits

So, let’s review the benefits of VMware’s SDDC:

  • Drive efficiency through heavy use of automation, capacity management, and end-to-end analytics
  • Provide a greater level of control through policy-based management and service delivery
  • Incorporate security – statically or on-demand – at every layer of the SDDC, starting with the application itself
  • And, finally, reduce overall cost with a big focus on the cost per application…and deliver the reports to prove it!

#closing

And with that, I’ll leave you with our favorite motto: ANY APP, ANY DEVICE, ANY CLOUD…a transformative concept that only VMware SDDC can deliver.

 

+++++
@vitualjad