vRA 7.x focuses heavily on the user experience (UX), starting with one of the most critical tasks (deploying the solution) and following with the second most critical: configuring it. Delivering on the promise of a more streamlined deployment experience, vRA 7 made a significant UX leap with the debut of a wizard-driven, fully automated installation of the entire platform, complete with automated initial configuration. And all of this comes in a significantly reduced deployment architecture.

The overall footprint of vRA has been drastically reduced. A typical highly available 6.x implementation required at least eight VAs to cover just the core services (not including the IaaS/Windows components and the external App Services VA). In contrast, vRA 7's deployment architecture brings that down to a single pair of VAs for the core services. Once deployed, just two load-balanced VAs deliver vRA's framework services, Identity Manager (SSO/vIDM), vPostgres DB, vRO, and RabbitMQ, all clustered and configurable behind a single load balancer VIP and a single SSL certificate. All that goodness, now down to two VAs, and all done automatically during deployment.

While the IaaS (.NET) components remain external, several services have moved to the VA(s). This will continue over time as more and more services make it over, eventually eliminating the Windows dependencies altogether. So, for now, and in the spirit of UX, it's all about making even those components a seamless part of the deployment.

High-Level Overview

  • Production deployments of vRealize Automation (vRA) should be configured for high availability (HA)
  • The vRA Deployment Wizard offers two installation types: Minimal (staging / POC) and Enterprise (distributed / HA); choose Enterprise for production-ready deployments, per the Reference Architecture
  • Enterprise deployments require external load balancing services to support high availability and load distribution for several vRA services
  • VMware validates (and documents) distributed deployments with F5 and NSX load balancers
  • This document provides a sample configuration of a vRealize Automation 7.2 Distributed HA Deployment Architecture using VMware NSX for load balancing

Implementation Overview

To set the stage, here’s a high-level view of the vRA nodes that will be deployed in this exercise. While a vRA POC can typically be done with two nodes (a vRA VA plus an IaaS node on Windows), a distributed deployment can scale anywhere from four nodes (the minimum) to a dozen or more, depending on the expected scale, which is driven primarily by user access and concurrent operations. We will be deploying six (6) nodes in total – two (2) vRA appliances and four (4) Windows machines to support vRA’s IaaS services. This lands somewhere between a “small” and a “medium” enterprise deployment and is a good starting point for balancing scale and supportability.

A pair of VAs will provide all the core vRA services. vRA 7.2 now supports (and recommends as a best practice) embedded vRO and vPostgres DB instances; these services are automatically configured and clustered at deployment time. vIDM is also automatically configured across the two VAs, but a couple of post-install configuration steps are needed to provide highly available access controls.

Rather than installing the required Distributed Execution Managers (DEMs) and Endpoint Agents on dedicated hosts, I’m opting to co-locate them on the IaaS servers: DEMs on the Web servers and Agents on the Manager servers. This is a supported configuration and works well until additional resources are needed, at which point moving these services to dedicated hosts is a straightforward process.

Virtual Machines

Name                        IP Address  Description
fde-vrava01.mgmt.local                  vRealize Automation VA 01
fde-vrava02.mgmt.local                  vRealize Automation VA 02
fde-vraiaas01.mgmt.local                vRA IaaS Services 1 (Web / DEM01)
fde-vraiaas02.mgmt.local                vRA IaaS Services 2 (Web / DEM02)
fde-vraiaas03.mgmt.local                vRA IaaS Services 3 (Mgr / Agent)
fde-vraiaas04.mgmt.local                vRA IaaS Services 4 (Mgr / Agent)
fde-vraiaas05.mgmt.local                vRA IaaS Services 5 (optional)
fde-vraiaas06.mgmt.local                vRA IaaS Services 6 (optional)
vrademo.mgmt.local                      vRA VA VIP
vrademoweb.mgmt.local                   vRA IaaS Web VIP
vrademomgr.mgmt.local                   vRA IaaS Manager VIP
fde-vrb01.mgmt.local                    vRealize Business for Cloud (vRBC) VA

NSX Components

fde-nsxmgr01.mgmt.local                 NSX Manager
fde-nsxesg01.mgmt.local                 NSX Edge Services Gateway
fde-nsxesg02.mgmt.local                 NSX Edge Services Gateway
fde-nsxdlr01.mgmt.local                 NSX Distributed Logical Router
fde-nsxdlr02.mgmt.local                 NSX Distributed Logical Router
fde-nsx-ctrl-01                         NSX Controller 01
fde-nsx-ctrl-02                         NSX Controller 02
fde-nsx-ctrl-03                         NSX Controller 03

Shared Services

msbu-vc-demo.mgmt.local                 vCenter Server, Demo
fde-sql01.mgmt.local                    Dedicated SQL instance for vRA IaaS
mgmt-w-ad1.mgmt.local                   Active Directory / DNS
mgmt-w-ad2.mgmt.local                   Active Directory / DNS
fde-adfs01.mgmt.local                   Windows 2008 R2 ADFS (mgmt.local)
t-win2k12-lab                           Windows 2012 Template
t-centos7-lab                           CentOS 7 x64 Template
t-ubuntu-14-04-3                        Ubuntu 14.04.3 x64 Template
t-photon-1.0-ga                         PhotonOS 1.0 Template

High-Level Deployment Architecture

vRA HA Deployment on NSX using Inline Load Balancing

  • 2 x vRealize Automation Virtual Appliances (VAs)
  • 4 x vRealize Automation IaaS Hosts (Windows OS)
  • NSX Edge Service Gateway (ESG) provides all load balancing
  • vRA components and Virtual Servers (VIPs) are on the same network


User Session Traffic, Inline Load Balancing

  • User connects to VIP (FQDN) of VA nodes
  • User traffic terminates at ESG
  • ESG references load balancing policy (round-robin) to determine destination node
  • ESG maintains user session with vRA appliances
  • User session does not hit IaaS nodes directly
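The round-robin policy with session persistence described above can be illustrated with a small sketch. This is a toy model for clarity only (the class name and client IPs are hypothetical); NSX implements this in the ESG data plane:

```python
from itertools import cycle

class RoundRobinLB:
    """Toy model of a round-robin policy with source-based session persistence."""

    def __init__(self, nodes):
        self._ring = cycle(nodes)   # endless round-robin over backend nodes
        self._sessions = {}         # client IP -> pinned backend node

    def route(self, client_ip):
        # Existing sessions stay pinned to the same vRA appliance;
        # new clients get the next node in round-robin order.
        if client_ip not in self._sessions:
            self._sessions[client_ip] = next(self._ring)
        return self._sessions[client_ip]

lb = RoundRobinLB(["fde-vrava01", "fde-vrava02"])
print(lb.route("10.0.0.5"))   # first client  -> fde-vrava01
print(lb.route("10.0.0.6"))   # second client -> fde-vrava02
print(lb.route("10.0.0.5"))   # same client stays on fde-vrava01
```

The key behavior to note is the second lookup for the same client: persistence keeps a user's session on one appliance, which is why the user traffic terminates at the ESG rather than bouncing between VAs.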


vRA System Traffic

  • vRA VAs communicate with the IaaS nodes (Web, Manager) using the VIP addresses
  • vRA VAs communicate with DEMs and Agents directly (not load balanced)

vRA Deployment Check List

There are a handful of expected external dependencies needed ahead of the deployment: Active Directory, rock-solid DNS, and Microsoft SQL Server are prerequisites. We’ll also need a vCenter Server to deploy the nodes to and, eventually, to use as a resource endpoint for machine provisioning. Finally, NSX Manager should be deployed and configured per best practices.

Document Review


  • All vSphere hosts configured with NTP, time in sync
  • Use NTP for all nodes or sync with hosts
  • NSX manager deployed and registered with vCenter per best practice
  • Download the latest vRA 7.2 OVAs
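The "time in sync" items above matter because clock skew between nodes breaks SSO/SAML token validation. As a sketch, a node's clock can be spot-checked against an NTP source with a minimal SNTP query (the server name and skew threshold below are placeholder assumptions, not from this document):

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def build_sntp_request():
    # First byte: LI=0, Version=3, Mode=3 (client); remaining 47 bytes zeroed.
    return b"\x1b" + 47 * b"\x00"

def parse_sntp_reply(data):
    # The transmit timestamp's seconds field lives at bytes 40-43 (big-endian).
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

def clock_skew_ok(server="pool.ntp.org", max_skew_seconds=60):
    """Return True if the local clock is within max_skew_seconds of the server."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(build_sntp_request(), (server, 123))
        data, _ = s.recvfrom(48)
    return abs(parse_sntp_reply(data) - time.time()) < max_skew_seconds
```

In practice, simply pointing every ESXi host and every node at the same NTP source (as the checklist says) avoids the problem; a probe like this is only useful for confirming it worked.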


  • Use FQDNs everywhere
  • A (host) records for all vRA nodes (VA + IaaS)
  • A (host) record for all Load Balancer VIPs
  • Create CNAMEs for VIPs (optional, but recommended)
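On a Windows DNS server, the A records above can be scripted with `dnscmd`. A sketch that generates the commands for a few of the names used in this document (the IP addresses are placeholders; substitute your own):

```python
ZONE = "mgmt.local"

# Hostname -> placeholder IP; replace with your environment's real addresses.
A_RECORDS = {
    "fde-vrava01": "192.168.10.11",
    "fde-vrava02": "192.168.10.12",
    "vrademo":     "192.168.10.20",  # vRA VA VIP
    "vrademoweb":  "192.168.10.21",  # IaaS Web VIP
    "vrademomgr":  "192.168.10.22",  # IaaS Manager VIP
}

def dnscmd_lines(records, zone=ZONE):
    # dnscmd /recordadd <zone> <name> A <ip>
    return [f"dnscmd /recordadd {zone} {name} A {ip}" for name, ip in records.items()]

for line in dnscmd_lines(A_RECORDS):
    print(line)
```

Generating the commands rather than clicking through the DNS console makes it easy to review the full record set for typos before anything is created, which pays off given how sensitive vRA is to DNS mistakes.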

External Dependencies

  • Active Directory
  • Microsoft SQL Server (IaaS DB will be automatically created)
  • DNS

Service Accounts

  • Dedicated vRA service account (vrasrvc@mgmt.local used throughout this document)
  • Active Directory
  • Credentials for SQL (see required permissions in install doc)
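As a sketch, the SQL-side setup for the service account might look like the T-SQL generated below. The role shown (`sysadmin`) is an assumption for the automatic-database-creation case; confirm the exact permissions against the installation guide before running anything like this:

```python
def sql_grant(account, role="sysadmin"):
    """Generate T-SQL for a Windows-authenticated login plus a server role grant.

    The role default is an assumption; check the vRA install doc for the
    minimum permissions your scenario actually requires.
    """
    return (
        f"CREATE LOGIN [{account}] FROM WINDOWS;\n"
        f"ALTER SERVER ROLE [{role}] ADD MEMBER [{account}];"
    )

print(sql_grant("MGMT\\vrasrvc"))
```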

SSL Certs

  • Self-signed certs can be generated during deployment; CA-signed certs are recommended for production deployments
  • Replacing certs (to CA signed) post-deployment is supported but can be complex; best practice is to have them available during deployment
  • Review the certificate requirements in the installation guide (p. 59+)
  • Signed Cert Request Tool: http://kb.vmware.com/kb/2107816
  • Troubleshooting Certs: http://kb.vmware.com/kb/2106583
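Because the VA pair sits behind a single VIP and a single certificate, that cert needs subjectAltName entries for the VIP FQDN and both node FQDNs. A sketch that emits an OpenSSL `req` config for this (hostnames taken from the tables above; add your subject fields as needed):

```python
def openssl_san_config(common_name, sans):
    """Build an OpenSSL req config with a CN and DNS subjectAltName entries."""
    alt = ", ".join(f"DNS:{name}" for name in sans)
    return "\n".join([
        "[req]",
        "distinguished_name = dn",
        "req_extensions = v3_req",
        "prompt = no",
        "[dn]",
        f"CN = {common_name}",
        "[v3_req]",
        f"subjectAltName = {alt}",
    ])

print(openssl_san_config(
    "vrademo.mgmt.local",
    ["vrademo.mgmt.local", "fde-vrava01.mgmt.local", "fde-vrava02.mgmt.local"],
))
```

Feeding the resulting file to `openssl req -new -newkey rsa:2048 -nodes -keyout vra.key -out vra.csr -config <file>` produces a CSR to submit to your CA; having the signed cert in hand before deployment avoids the post-install replacement process mentioned above.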


  • Have a handle on your goals and objectives, and stick to them
  • Review all prerequisites (per the documentation), remediate any potential issues prior to starting
  • While the installation wizard will fix most missing prerequisites, it’s a good idea (and time saver) to review and remediate IaaS prerequisites ahead of time

Next Step: 02 – Deploy and Configure NSX