Network virtualization is by no means a new concept for VMware. Think about it for a moment — wherever vSphere (or any other VMware Type 1 or Type 2 hypervisor) has been implemented, a virtual switch exists and connects guest VMs to the physical world. That’s more than 500,000 customers globally, millions of vSphere hosts, and many more millions of virtual network ports backed by a standard (vSwitch) or distributed virtual switch (dvSwitch). In fact, if you count the network ports provisioned by vSphere and logically assigned to VM NICs, one can argue that VMware is one of the top datalink providers on earth. Okay, perhaps that’s a stretch, but you get my point! VMware virtual networks have existed just about as long as VMware itself. And since the very beginning, there has been no shortage of innovation. The vSwitch has evolved in many ways, leading to new technologies, increased scope and scale, distributed architectures, open protocol support, ecosystem integration, and massive adoption. Over the years VMware has continued to introduce new networking technologies through organic maturity and strategic acquisition — ESXi platform security, dvSwitch (and associated services), vShield, vCloud Networking and Security (vCNS), etc. — and leveraged 3rd-party integration into partner solutions, such as Cisco’s Nexus 1000v (a solution brought to market through tight collaboration between VMware and Cisco). The bottom line is that VMware is no novice when it comes to networking, so it should have been no surprise when its ambitions to continue to evolve in this realm became evident.
And then Nicira happened…
In 2012, VMware acquired Nicira, which sent a few shockwaves across the industry. Nicira was a relatively small startup with its own great ambitions, which had embraced the concept of the Software-Defined Network (SDN) ahead of many and delivered a platform that was popular amongst large ISPs, web-scale enterprises, and the open community. But most importantly, Nicira’s vision for the next-generation datacenter aligned magnificently with VMware’s own Software-Defined Datacenter (SDDC) strategy. That is, networks of tomorrow should be decoupled from hardware, quickly and easily reproducible, and totally automated. Oh, and this should all be done in a software control plane. This was Nicira’s Network Virtualization Platform (NVP). Although Nicira was doing great things on its own, the acquisition by VMware thrust it into the limelight and poured massive investment into its growth and development, all while accelerating VMware’s strategy. Together, the possibilities are endless (and a significant threat to traditional network vendors). Integrating NVP into VMware’s portfolio took some time, but in mid-2013 NSX, VMware’s network virtualization platform, emerged.
NSX is available in two flavors: NSX for vSphere (NSX-V) and NSX Multi-Hypervisor (NSX-MH). NSX-V is a VMware-centric solution that is designed to leverage the vSphere hypervisor and integrate directly into the kernel. The tight integration gives any new or existing vSphere customer immediate access to advanced networking capabilities directly from the vSphere Web Client. Once deployed, the solution provides network admins the ability to provision and manage enterprise-class network services across the VMware environment (I’ll cover the components and services later). As you’d expect, the massive vSphere install base makes NSX-V the most commonly deployed solution.
|NSX-V Logical View
NSX-MH is…well, for heterogeneous / multi-hypervisor deployments. The target environments for NSX-MH include ISPs, OpenStack environments, and large enterprises needing to support heterogeneous platforms.
|NSX Architecture – Multi-Hypervisor / OpenStack
There are several differences between the two solutions, as they are designed to integrate with and take advantage of core functionality of the supported platforms wherever and whenever possible. For example, NSX-V heavily leverages the dvSwitch throughout its architecture, while NSX-MH leverages the Open vSwitch (OVS) in a heterogeneous environment for all supported hypervisors (except vSphere).
For the sake of this post, I will be focusing mostly on the NSX-V platform (specifically the current 6.1 version), simply referred to as NSX from this point on.
NSX Components Overview
NSX is a robust network virtualization platform, not a single tool or piece of software. From a deployment perspective, it is similar to a traditional physical network architecture, which can comprise several components, layers, and moving parts (and a plethora of vendors). But that’s where the similarities stop, and NSX begins — that is, NSX abstracts the Control and Data planes and manages the now-logical network services through software and a common UI (a.k.a. the Management plane). This is NSX’s greatest value prop.
The Management Plane includes the primary user interfaces into NSX. In the case of NSX-V, these include:
|NSX Manager UI
- NSX Manager: provides a centralized management interface for configuring and provisioning your logical network architecture. The manager is responsible for deploying and configuring the distributed routing and firewall kernel modules into each of the ESXi hosts and installing the VXLAN logic during host preparation. It also configures the controller cluster and manages control plane security for all communications to the various components. NSX Manager is downloaded as a single .OVA (virtual appliance) and deployed to your vSphere environment. After deployment and a few basic configuration steps, NSX Manager is connected to vCenter Server and (optionally) Lookup service, for seamless integration into vSphere. From that point on, all management is done through the vSphere Web Client. You’ll come back to this UI if you need to reboot, change or troubleshoot vCenter registration, upgrade the NSX code, backup/restore the active configuration, or use the log dumps for troubleshooting (e.g. to share with VMware support if needed).
|NSX Management in the vSphere Web Client
- vSphere Web Client: once the NSX Manager UI is used to register NSX with vCenter, all additional configuration, host prep, solution deployments, and overall management is done through the vSphere Web Client. Users are assigned role-based permissions to provide various levels of management access to the registered NSX Manager. The permissions determine what a particular user will be able to do from the vSphere Web Client. One important note to consider…giving administrative access to any NSX function is optional. In other words, not all vCenter admins are NSX admins (nor should they be). This is a point of contention that I see a lot of organizations struggling through. A new breed of “superuser” is emerging, but that is not necessarily a requirement. For the traditional network admins, role-based access is provided to allow them to use the vSphere Web Client to view, manage, and operate the virtual network. Moving on…
- Message Bus Agent: Securely manages communications from the Management Plane to the underlying Hypervisors.
The Control Plane provides direct management of the logical networks and devices provisioned by NSX. Control plane functions include:
- NSX Controller: the NSX Controller is the conduit through which all VXLAN and Logical Routing configuration is pushed to the ESXi hosts (from the NSX Manager or otherwise). Controller VAs are initially deployed by the NSX Manager when invoked in the vSphere Web Client. Controllers are highly scalable and provide native HA. They are deployed in odd-numbered clusters — starting with three nodes as a best practice — and are increased in pairs (again, best practice is to keep odd numbers) depending on scale, utilization, etc. Data and active workloads are “sliced” (or striped) across all controllers in a given cluster primarily for resiliency and performance. Slicing is similar to data striping in RAID algorithms but at the data object level. And similar to RAID, slicing will redistribute data in case of a controller outage.
- Logical Router Control VM: The Logical Router Control VM manages all control plane functions for the Distributed Logical Router. This is just the control mechanism…Logical Routing is a data plane function that lives inside ESXi. The Logical Router Control VM manages services such as L2 Bridging and Dynamic Routing protocol support.
- User World Agent (UWA): the UWA is a secure communications client that facilitates and protects all the traffic between the NSX Controller(s) and the majority of the kernel modules loaded into ESXi using the control plane protocol (all but the distributed firewall traffic, which is a direct communication between the NSX Manager and the DFW kernel module). The UWA does its thing in the background and is not something that is user-configured or managed.
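To make the controller “slicing” idea above more concrete, here is a minimal Python sketch of how logical objects (e.g. VXLAN segments) could be hashed across an odd-numbered controller cluster and redistributed after a node failure. This illustrates the concept only — the hash function, object names, and controller names are mine, not NSX’s actual internal algorithm.

```python
import zlib

def assign_slices(objects, controllers):
    """Map each logical object to one controller via a stable hash."""
    assignment = {}
    for obj in objects:
        idx = zlib.crc32(obj.encode()) % len(controllers)
        assignment[obj] = controllers[idx]
    return assignment

# A handful of hypothetical VXLAN segments and a 3-node cluster.
segments = ["vxlan-%d" % i for i in range(5000, 5010)]
cluster = ["controller-1", "controller-2", "controller-3"]

before = assign_slices(segments, cluster)

# Simulate a controller outage: the surviving nodes re-absorb its slices,
# much like a RAID rebuild redistributes data after a disk failure.
surviving = [c for c in cluster if c != "controller-2"]
after = assign_slices(segments, surviving)

assert all(owner in surviving for owner in after.values())
```

The point of the odd-numbered cluster is majority-based consensus: with three (or five, or seven) nodes, a clean majority can always be established to elect slice owners.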
- NSX vSwitch: The NSX vSwitch is the combination of vSphere’s distributed virtual switch (dvSwitch) plus NSX hypervisor modules that delivers a variety of services across the data plane, including VXLAN, Distributed Firewall, and the Distributed Logical Router. Together, the NSX vSwitch and associated services below facilitate the L2 switching across the entire transport zone, traversing hosts, clusters and L3 boundaries where supported.
- VXLAN: VXLAN, or Virtual Extensible Local Area Network, provides cloud scale and breaks the boundaries of traditional VLANs. It’s an overlay technology that encapsulates Ethernet frames in UDP packets. VXLAN provides the ability to extend L2 networks over L3 boundaries, providing the ability to use and consume capacity across clusters and even datacenters. VXLAN’s minimum MTU requirement is 1600 bytes, and it does not tolerate fragmentation very well (especially at scale). The MTU requirement is end-to-end, requiring all in-line physical and logical components to be set accordingly for proper function. However, the guest VMs are oblivious to the transport network (and VXLAN in general) and do not need to be changed from the default setting of 1500 at the OS level. As of version 6, NSX supports three replication modes for broadcast traffic: Multicast, Unicast, or Hybrid mode. The right one to use will depend on use cases and network topology. VXLAN tunnel traffic is encapsulated and decapsulated by an associated VTEP kernel interface.
|VXLAN Frame Details, 1600 bytes
- Distributed Firewall (DFW): Take a traditional firewall, extract it from the bare-metal box it’s installed on and slap those services into a VM, then inject them into every layer of the virtualized infrastructure — cluster, host, kernel — protecting data and workloads in ways you couldn’t effectively or efficiently do with a physical firewall. NSX DFW maintains a “trust nobody” posture and delivers end-to-end firewall and security enforcement from a single point of management. With integration into vSphere and its unique positioning inside the hypervisor, DFW can enforce security rules on named objects, so it protects workloads regardless of their physical location or IP address.
- Distributed Logical Router (DLR): The DLR allows networks that are backed by either VXLAN logical switches or VLANs to be connected at L3 within the hypervisor. It performs much like a traditional router, except it adds a significant advantage: it exists on every ESXi host configured with NSX. This brings a great amount of scalability, traffic insight, and performance. Each DLR is configured with one or more Logical Interfaces (LIFs), which are assigned IP addresses and behave as the gateway for each associated network. LIF information is propagated to each host, including a per-LIF ARP table. Currently, NSX supports OSPF and BGP dynamic routing protocols in addition to static routing.
- Edge Services Gateway: The Edge Services Gateway provides additional L3-L7 services in a scalable virtual appliance, including interface-based firewall, NAT, Load Balancing, VPN (IPSEC, SSL, L2VPN), DHCP services, and DNS relay capability. It also provides the north-south routing for external networks in the datacenter…useful when you need connectivity from virtual to physical networks. The Edge gateway (VA) can be deployed in a highly available configuration in four different memory/CPU sizes (from compact to X-Large) based on service and scalability/throughput requirements.
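The VXLAN MTU requirement mentioned above comes down to simple arithmetic: the encapsulation wraps the full guest Ethernet frame in new outer headers. The header sizes below follow the VXLAN spec (RFC 7348); the 1600-byte figure is VMware’s recommended setting with headroom, not the bare minimum.

```python
# Why the transport network needs a larger MTU for VXLAN:
# the guest's full 1514-byte frame becomes the payload of a new
# VXLAN/UDP/IP packet on the transport network.

INNER_MTU  = 1500  # guest VMs keep their default MTU
INNER_ETH  = 14    # encapsulated guest Ethernet header
VXLAN_HDR  = 8     # VXLAN header carrying the 24-bit segment ID (VNI)
OUTER_UDP  = 8     # outer UDP header
OUTER_IPV4 = 20    # outer IPv4 header

overhead = INNER_ETH + VXLAN_HDR + OUTER_UDP + OUTER_IPV4
minimum_transport_mtu = INNER_MTU + overhead

print("encapsulation overhead:", overhead, "bytes")        # 50
print("minimum transport MTU: ", minimum_transport_mtu)    # 1550
```

The commonly cited 50-byte overhead yields a hard floor of 1550; setting the end-to-end transport MTU to 1600 leaves room for extras such as an outer 802.1Q tag without risking fragmentation.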
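The DFW’s ability to enforce rules on named objects rather than IP addresses can be sketched as follows. This is a conceptual illustration only — the security-group names, rule table, and matching logic are hypothetical, not the DFW’s actual rule engine.

```python
# Conceptual sketch: firewall rules keyed on logical objects
# (security groups) rather than IPs, so the rule follows the workload
# even when its address changes.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    security_group: str
    ip: str  # can change without invalidating any rule

RULES = [
    # (source group, destination group, port, action)
    ("web-tier", "app-tier", 8443, "allow"),
    ("any",      "app-tier", None, "deny"),  # default deny into app tier
]

def evaluate(src: VM, dst: VM, port: int) -> str:
    """First-match-wins evaluation, like a classic firewall rule table."""
    for s_grp, d_grp, r_port, action in RULES:
        if (s_grp in ("any", src.security_group)
                and d_grp in ("any", dst.security_group)
                and r_port in (None, port)):
            return action
    return "deny"  # implicit default deny

web = VM("web01", "web-tier", "10.0.1.15")
app = VM("app01", "app-tier", "10.0.2.20")

print(evaluate(web, app, 8443))  # allow
web.ip = "10.0.9.99"             # re-IP the workload: rule still applies
print(evaluate(web, app, 8443))  # allow
```

The takeaway is the decoupling: because the match criteria are logical objects, nothing about the rule has to change when a VM moves hosts or gets a new address.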
NSX is positioned to transform today’s network topology much like vSphere transformed compute — besides turning the industry on its head, NSX is redefining how networks are designed, provisioned, and managed in the software-defined datacenter. And just as compute hardware has become a commodity in the world of hypervisors, NSX promises to break the legacy bind between hardware and associated network services.
I’ll wrap it up here for now. The next post in this series will focus on how all these pieces fit together, the traditional services and infrastructure they augment, and a selection of use cases (vRA!) that demonstrate the power of NSX and network virtualization up the management stack.