Earlier this month I hosted "vRA 6.2 Install and Config Live!", an open-invite social event dubbed "vRA Live" (#vralive). To my surprise, I had 185 RSVPs, with more than 100 people — VMware partners, customers, and several of my peers — attending the 4 1/2+ hour online session. Although I tried to focus on the fundamentals of deploying vRA and associated services, the online Q&A and dialog provided by the expert panel added several examples, lessons learned, and plenty of colorful commentary. I couldn't be more pleased with the turnout and hope to get the next session(s) queued up very soon!…
Network virtualization is by no means a new concept for VMware. Think about it for a moment — wherever vSphere (or any other VMware T1 or T2 hypervisor) has been implemented, a virtual switch exists and connects guest VMs to the physical world. That's more than 500,000 customers globally, millions of vSphere hosts, and many more millions of virtual network ports backed by a standard (vSwitch) or distributed virtual switch (dvSwitch). In fact, if you count the network ports provisioned by vSphere and logically assigned to VM NICs, one can argue that VMware is one of the top data-link providers on earth. Okay, perhaps that's a stretch, but you get my point! VMware virtual networks have existed just about as long as VMware itself. And since the very beginning, there has been no shortage of innovation. The vSwitch has evolved in many ways, leading to new technologies, increased scope and scale, distributed architectures, open protocol support, ecosystem integration, and massive adoption. Over the years VMware has continued to introduce new networking technologies through organic maturity and strategic acquisition — ESXi platform security, dvSwitch (and associated services), vShield, vCloud Networking and Security (vCNS), etc. — and leveraged third-party integration with partner solutions, such as Cisco's Nexus 1000v (a solution brought to market by tight collaboration between VMware and Cisco).…
Thanks to all who have shown interest in this event. I was expecting 50 RSVPs…currently at 128! That just about guarantees this will be a fun (and informative) event. I have put together the following agenda based on feedback from the sign-up survey.
The primary objective is to install, configure, and demonstrate vRA 6.2 from scratch. For this, I will follow the install and configure workflow I previously covered in my vCAC 6.0 POC and Detailed Implementation Guide. Although vRA 6.2 provides additional capabilities and a more streamlined installation, many of the concepts are the same.…
There are many hidden gems in the vSphere Web Client that are intended to make managing the environment much more efficient. This is one of my favorites. You can quickly schedule any of the supported tasks, either as a one-time run or on a repeating schedule.
On a Windows machine, hold the CTRL key *after* right-clicking on a VM object. Continue to hold the key until you select the [supported] task to schedule. On an OS X machine, I noticed the schedule icon will appear while pressing CTRL, but it only functions with COMMAND instead (a bug will be filed).
Using the custom property "VMware.VirtualCenter.Folder" allows you to save provisioned machines to a folder other than the default "VRM" folder that is automatically created. Better yet, this custom property can be added at the Blueprint or Business Group level, resulting in per-Blueprint or per-BG provisioning to a given folder.
You can also get a little more sophisticated and create a drop-down (or free text) field to allow users to select (or manually type) the desired folder by using a combination of this property and some Property Dictionary wizardry.
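To make the idea concrete, here is a minimal Python sketch of what that combination looks like as data: a Blueprint carrying the custom property, and a Property Dictionary drop-down constraining the user's choice. Only the property name "VMware.VirtualCenter.Folder" comes from vRA; the folder names and helper functions are illustrative assumptions, not a vRA API.

```python
# Sketch (illustrative, not a vRA API): modeling the custom property
# and a Property Dictionary drop-down as plain data structures.

def blueprint_properties(folder):
    """Return the custom-property map a Blueprint would carry."""
    return {"VMware.VirtualCenter.Folder": folder}

# A drop-down in the Property Dictionary is backed by a static list of
# allowed values; these folder names are made up for the example.
ALLOWED_FOLDERS = ["Production", "Development", "Sandbox"]

def validate_folder_choice(choice):
    """Reject folders outside the drop-down's value list."""
    if choice not in ALLOWED_FOLDERS:
        raise ValueError(f"{choice!r} is not an allowed folder")
    return blueprint_properties(choice)
```

Free-text mode is the same idea without the `ALLOWED_FOLDERS` check — the user's typed value flows straight into the property.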
A Storage Reservation Policy is created in Infrastructure -> Reservations -> Reservation Policies within vCAC. You can create any number of Storage Reservation Policies and assign them to an accessible Storage Path (one that is accessible to the Business Group's Resource Reservation). SRPs are assigned per storage volume, meaning you can assign different volumes (VMDKs) to different policies for multi-tiering within an application.
– Step 1: Ensure all desired Storage Paths are enabled in the Reservation
– Step 2: Create the Storage Reservation Policy
– Step 3: Edit the Cluster Configuration (Infrastructure -> Compute Resources -> Compute Resources -> Configuration tab) to assign a Storage Reservation Policy to each Storage Path
– Step 4: Enable the SRP by editing the Blueprint’s storage volumes. You can also select “Allow user to see and change storage reservation policies” to allow users to change this setting during provisioning.
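The end result of the steps above can be sketched as a simple mapping: each SRP fronts one or more Storage Paths, and each Blueprint volume names the SRP it must land on. The policy, datastore, and volume names below are illustrative assumptions — only the per-volume SRP concept comes from vCAC.

```python
# Sketch (illustrative names): SRPs mapped to the Storage Paths they
# were assigned in the Cluster Configuration (Step 3).
SRP_TO_PATHS = {
    "Gold":   ["SSD-Datastore-01"],
    "Silver": ["SAS-Datastore-01", "SAS-Datastore-02"],
}

# A multi-tier Blueprint (Step 4): OS disk on Gold, data disk on Silver.
VOLUMES = [
    {"label": "os-disk",   "size_gb": 40,  "srp": "Gold"},
    {"label": "data-disk", "size_gb": 200, "srp": "Silver"},
]

def place_volume(volume):
    """Pick the first Storage Path allowed by the volume's SRP."""
    paths = SRP_TO_PATHS.get(volume["srp"])
    if not paths:
        raise ValueError(f"No storage path backs SRP {volume['srp']!r}")
    return paths[0]
```

The key point the sketch captures: placement is decided per volume, not per machine, which is what makes multi-tier applications possible.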
For starters, be sure to configure a vCO Endpoint (Infrastructure tab -> Endpoints -> Endpoints). In a POC or small environment you can point to the embedded vCO instance that ships with the vCAC VA. Otherwise, point to an external vCO instance (note: if using an external instance, be sure to install the NSX 6.1 vCO plugin first).
Once the vCO Endpoint is configured, it's time to add NSX support to the vSphere (vCenter) Endpoint. In the vSphere (vCenter) Endpoint configuration, check the "Specify manager for network and security platform" box and enter the appropriate address and credentials for NSX. Be sure the account used has admin permissions (you can use the default admin account, or any account that has been added as an NSX Admin user).
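Conceptually, the endpoint now carries two sets of settings: the vCenter address it always had, plus the NSX manager details gated behind that checkbox. A minimal sketch of that shape, with made-up field and host names (this is not the vRA data model, just an illustration of the validation the UI enforces):

```python
# Sketch (illustrative field names): a vSphere Endpoint with optional
# NSX manager settings, mirroring the "Specify manager for network and
# security platform" checkbox.

def build_vsphere_endpoint(vc_address, nsx_address=None,
                           nsx_user=None, nsx_password=None):
    endpoint = {"address": vc_address,
                "specify_manager_for_network_and_security": False}
    if nsx_address:
        # Enabling NSX support requires an address plus admin credentials.
        if not (nsx_user and nsx_password):
            raise ValueError("NSX manager requires admin credentials")
        endpoint.update({
            "specify_manager_for_network_and_security": True,
            "nsx_address": nsx_address,
            "nsx_user": nsx_user,
            "nsx_password": nsx_password,
        })
    return endpoint
```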
Snapshots are configured per-Blueprint in the Actions tab (this is not a typical Entitlement like most other actions). The UI allows you to specify whether or not to allow users to take and delete snapshots for machines provisioned from the blueprint. To add a bit more control, you can use the "Snapshot.Policy.Limit" and "Snapshot.Policy.AgeLimit" custom properties.
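To illustrate what those two properties govern, here is a small Python sketch. The property names "Snapshot.Policy.Limit" and "Snapshot.Policy.AgeLimit" come from vRA; the enforcement logic, defaults, and the assumption that the age limit is expressed in days are my illustrative model, not vRA's actual implementation.

```python
import datetime

# Sketch (illustrative model): how the two snapshot custom properties
# could be enforced. Limit = max snapshots per machine; AgeLimit = max
# snapshot age in days (an assumption for this example).

def can_take_snapshot(existing_snapshots, properties):
    """Allow a new snapshot only while under the configured limit."""
    limit = int(properties.get("Snapshot.Policy.Limit", 1))
    return len(existing_snapshots) < limit

def expired_snapshots(existing_snapshots, properties, today):
    """Return snapshots older than the configured age limit (days)."""
    age_limit = int(properties.get("Snapshot.Policy.AgeLimit", 30))
    cutoff = today - datetime.timedelta(days=age_limit)
    return [s for s in existing_snapshots if s["created"] < cutoff]
```

Setting these on the Blueprint (alongside the Actions-tab checkbox) gives you a hard count cap and a basis for aging out old snapshots, rather than leaving snapshot hygiene entirely to users.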