Building a VMware vCloud Whitebox Lab

Part 1 – bill of materials

I’ve always believed that the best way to learn and truly immerse yourself in a new technology is to build it, break it, and build it again…then repeat steps 2 and 3 until you can do it in your sleep.  Although this doesn’t apply to all technologies, it certainly holds true for most of VMware’s portfolio.  I’d like to share the ins and outs of my own VMware solutions lab – EZLAB 2.0 – and walk you through what you’ll need to get started on one of your own.  Let’s dive right in…
Choosing the right hardware and software components that will form the foundation of your lab – compute, network, storage, core, and shared services – is the first step.  I have gone through several iterations of my lab as it and the underlying technology have evolved over time, with each upgrade having me dump a bit more cash into the core components.  I tend to skimp on some things while investing a bit more in the components that affect scale and performance and provide a means to overcommit resources as much as possible.  RAM and shared storage (NAS) certainly fall into that category.  More on that later.  As for software, I rely almost exclusively on extended evals, freeware, or subscription licenses (e.g. MSDN).  Just about any VMware product can be downloaded and evaluated for free from VMware’s website.

Hardware BOM
As the title of this article suggests, this is a “whitebox” environment, meaning the underlying hardware is largely home-built using readily available components from my favorite resellers.  This not only helps you control costs, but also allows you to hand-pick the components.  The beauty of building a VMware lab is the flexibility you’ll gain in terms of the hardware needed to support it.  Since vSphere is the core component of any VMware environment, anything you run on top of it will be abstracted from the bare-metal hardware.  This makes building your own hosts (servers) a viable option without sacrificing performance or flexibility.  There are plenty of resources out there that will help you choose hardware that is compatible – although not necessarily supported – with vSphere.  Simply Google “vmware vsphere whitebox” and you’ll receive no less than 30,300 results (I just tried) – but chances are you won’t need more than the first page of results (actually, you won’t need more than this post!).  But rather than do all that, I’m going to share what has worked for me…
Compute (for 2 x vSphere Hosts): as I mentioned earlier, I chose to beef up components that affect overall performance of the environment – namely, RAM.  Even though this is just a lab, I don’t have the patience to deal with performance issues or twiddle my thumbs while I wait for vCenter to boot.  Plus, I will be building nested ESXi hosts (vESXi) for my cloud resource cluster and want to be able to provide plenty of resources to those virtual hosts – 32GB per host seems to be a sweet spot and drastically improves overall performance/stability of my lab.
What | Qty x Description | Why
Barebones system (incl. case, power supply, motherboard) | 2 x Shuttle SH67H3 XPC Barebone – Intel H67 chipset, Socket H2 (LGA1155) for Intel Core i7/i5/i3, Intel HD Graphics 2000/3000 (integrated in the processor), 1 x HDMI


I used to build everything from scratch but have found these Shuttle barebones systems to be a quick-n-easy way to get going, especially now that they support up to 32GB of RAM.  And the price is right.  Plus, vSphere installs perfectly and even recognizes the onboard Realtek NIC.
Memory | 2 x CORSAIR XMS 32GB (4 x 8GB) 240-Pin DDR3 1600 (PC3 12800) Desktop Memory, Model CMX32GX3M4A1600C11


You can get away with cutting memory in half (16GB), but why? – memory is one of those critical components that will help you achieve a high VM-to-host ratio.  32GB/host is wonderful.  Definitely worth the investment.
CPU | 2 x Intel Core i5-2320 Sandy Bridge 3.0GHz (3.3GHz Turbo Boost), LGA 1155, 95W quad-core desktop processor, Intel HD Graphics 2000, Model BX80623I52320


Great CPU at a decent price.  Even with only 4 cores per CPU (x 2 hosts), I have yet to hit any vCPU contention.
Local Disk [optional] | 2 x Crucial M4 CT064M4SSD2 2.5″ 64GB SATA III MLC Internal Solid State Drive (SSD)


I used 64GB SSDs for booting ESXi because they’re cheap, fast, quiet, and allow me to take advantage of vSphere’s host cache capability.  You can skip local drives altogether and boot from USB or use AutoDeploy.
Optical [optional] | 2 x SONY Black 18X DVD-ROM / 48X CD-ROM SATA Drive, Model DDU1681S-0B (OEM)


DVD-ROMs are so last decade, but for some reason I always add them to my hosts.  This is almost exclusively for installing vSphere from CD – you can also install from USB drives or even use AutoDeploy if you’re brave.  Call me traditional.
NIC | 2 x Intel PRO/1000 PT Dual Port Server Adapter


The Shuttle’s motherboard comes with an onboard Realtek NIC – I dedicate that to management, and the PRO/1000 PT’s 2 additional interfaces to IP storage and “Production” traffic (on a dvSwitch).  It’s a bit costly, but you can go with a cheaper option – just make sure it will be recognized by vSphere.
Total Cost, Compute: $901 x 2 = $1,802 total
* Make it cheaper: cut memory down to 16GB per host and use a single-port add-on NIC (vs. the 2-port).  This will likely impact scale and network performance, but will shave at least $400 off the bill.  Make sure you understand the impact of reducing network ports – you’ll either have to collapse data and storage networks on a standard vSwitch or sacrifice using a dvSwitch (w/dedicated uplink) while keeping storage independent.  We’ll cover these options in the setup (next post).
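Before committing to the cheaper 16GB option, it’s worth running the RAM numbers for the nested-ESXi setup described above.  Here’s a rough back-of-the-napkin sketch – the per-nested-host allocation is my own assumption for illustration, so tune it to your workloads:

```python
# Rough RAM budget for the compute layer (illustrative numbers).
# The per-nested-host allocation below is an assumption -- adjust
# it to whatever you plan to give your virtual hosts.

PHYSICAL_HOSTS = 2
NESTED_ESXI_COUNT = 4     # virtual hosts for the cloud resource cluster
RAM_PER_NESTED_GB = 6     # assumed allocation per nested ESXi VM

def remaining_ram_gb(ram_per_host_gb=32):
    """RAM left over for management VMs after carving out the nested hosts."""
    total = PHYSICAL_HOSTS * ram_per_host_gb
    nested = NESTED_ESXI_COUNT * RAM_PER_NESTED_GB
    return total - nested

print(remaining_ram_gb())    # 64 - 24 = 40 GB left at 32GB/host
print(remaining_ram_gb(16))  # 32 - 24 =  8 GB left at 16GB/host -- tight!
```

At 16GB/host, only ~8GB is left for all the management VMs once the nested hosts are carved out – which is exactly why 32GB/host feels like the sweet spot.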
Network: you have several options here – I dedicate separate switches for data and storage traffic to ensure we’re not saturating the lines or overwhelming these SOHO (unmanaged) switches.  Feel free to consolidate networks, especially if your switch supports VLANs (managed).  Just keep in mind the needed port count – each host will have 2 data uplinks and 1 storage uplink, plus requirements from all your other network-attached devices.  I have a Brocade FCX6124 that would rival some production networks…bit of an overkill here though.
Switch(es) | 2 x Cisco / Linksys 8-port 100/1000 switch


Dedicate one switch to all data traffic – mgmt, VM traffic, vMotion, etc. – and the other exclusively to IP storage connectivity.  Keeps things nice and clean on the cheap.
Cabling | 16 x Belkin 6FT Cat-6 (550MHz) cables


Spend an extra dollar or so and upgrade to Cat-6 cables for optimal throughput on 1Gbps networks.  I swapped out all my cables after tracking an I/O issue down to a bad Cat5e cable.  I was thoroughly annoyed that day.  I now test all my cables before using them.
Total Cost, Networking: $210 total
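If you’re wondering whether two 8-port switches are really enough, a quick port-count sanity check settles it.  The “other devices” count below is a placeholder assumption – swap in your own gear:

```python
# Port-count sanity check for the two 8-port switches, per the
# uplink layout described above.  OTHER_DEVICES is a placeholder --
# count your own workstations, router uplinks, printers, etc.

HOSTS = 2
DATA_UPLINKS_PER_HOST = 2     # mgmt + "Production" traffic
STORAGE_UPLINKS_PER_HOST = 1  # dedicated IP-storage uplink
NAS_PORTS = 2                 # DS1512+ has 2 x 1Gbps interfaces (bondable)
OTHER_DEVICES = 2             # assumed: workstation + router uplink

def ports_needed():
    """Ports required on each dedicated switch."""
    return {
        "data_switch": HOSTS * DATA_UPLINKS_PER_HOST + OTHER_DEVICES,
        "storage_switch": HOSTS * STORAGE_UPLINKS_PER_HOST + NAS_PORTS,
    }

print(ports_needed())  # {'data_switch': 6, 'storage_switch': 4}
```

Both counts land comfortably under 8 ports, with room to grow.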
Storage: this is a critical one…I spent a little more than I had originally planned on my storage infrastructure.  I wanted to make sure I had the performance to support all the I/O expected from each host while still supporting vMotion/Storage vMotion requirements, multi-tiering, and advanced storage functionality.  Local storage was a no-no (that goes for local-shared as well) – I wanted zero host dependency and the least amount of overhead.  I spent some time reviewing different platforms, options, and stats, including home-built solutions like OpenFiler and FreeNAS, virtual storage appliances, and 2 or 3 off-the-shelf solutions.  I ended up with a Synology NAS and have been thoroughly pleased so far.
NAS (main unit) | Synology DS1512+ High Performance NAS Server – 5 bays, scales up to 15 drives with expansion units, 2 x 1Gbps Ethernet (bondable)


Not a cheap option, but this NAS provided everything I was looking for – multi-protocol support (CIFS, NFS, iSCSI, FTP, etc), great performance, lots of bells-n-whistles in the form of add-on apps, and an intuitive UI.  Each DSM software update has brought with it new features and capabilities.  The 5 built-in drive bays are filled with lovely solid-state drives.
NAS (expansion unit)* | Synology 5-bay expansion unit


The expansion unit adds 5 x 1TB SATA drives as a lower-tier storage option and is linked to the main unit via a dedicated 3Gbps eSATA interface.
SATA Drives | 5 x 1TB Western Digital Black, 7200RPM, 64MB cache

$99 ea

Cheap, plentiful storage capacity – you can probably get 2TB drives for right around the same price.  These drives fill the Synology expansion unit in a RAID-5 config.  They serve up 2 x 1TB iSCSI datastores, NFS mounts for shared data, ISOs, etc., and provide a means of backing up the environment.  Roughly 400 IOPS of throughput.
SSD Drives* | 5 x 256GB Crucial M4 SSDs

$249 ea – down about 50% from less than a year ago!

I splurged.  These drives fill the 5 built-in bays of the main unit in a RAID-5 config and provide a sick amount of I/O (all things considered) for all core VMs in the lab – that is, every VM/app that is not disposable.  I started with 2 drives and built up to 5 over several months to spread out the cost.  The Synology grew the volume beautifully as I added each drive.  Even when loaded (30+ powered-on VMs), a Windows Server 2k8 VM boots to the login screen in under 15 seconds.  I’ve seen up to 3,000 IOPS…1,800 steady.  Memory swapping barely affects the environment.
Total Cost, Storage: $3,009 total
* Make it cheaper: cut out the SSDs (and expansion unit) to save a ton of cash – but understand the trade-offs: memory swap landing on SATA, an overall I/O performance hit, etc.  Or, better yet, find a happy medium and still reduce the overall investment.
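To put that trade-off in perspective, here’s some back-of-the-envelope RAID-5 math for the two tiers.  The per-drive IOPS figure is a rough rule of thumb I’m assuming for 7200RPM SATA, not a benchmark:

```python
# Back-of-the-envelope RAID-5 math for the two storage tiers.
# The per-drive IOPS figure is an assumed rule of thumb, not a benchmark.

def raid5_usable_tb(drives, size_tb):
    """RAID-5 reserves one drive's worth of capacity for parity."""
    return (drives - 1) * size_tb

def raid5_write_iops(drives, iops_per_drive, write_penalty=4):
    """Each RAID-5 write costs ~4 backend I/Os (read data, read parity,
    write data, write parity)."""
    return drives * iops_per_drive // write_penalty

# SATA tier: 5 x 1TB WD Black (assuming ~80 IOPS per 7200RPM drive)
print(raid5_usable_tb(5, 1))      # 4 TB usable
print(raid5_write_iops(5, 80))    # ~100 effective write IOPS

# SSD tier: 5 x 256GB Crucial M4
print(raid5_usable_tb(5, 0.256))  # ~1 TB usable
```

That ~4x RAID-5 write penalty is exactly why memory swap landing on the SATA tier hurts so much – and why the SSD tier earns its keep.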
Software BOM
Another bonus of all the software components living in VMs is that I haven’t had to rebuild my core services – AD, DBs, DNS, etc – in several years, even as I’ve completely ripped-and-replaced various components of the whitebox lab infrastructure.  Since VMware’s Cloud Infrastructure Suite 5.1 is just around the corner, I’m using all the latest-n-greatest VMware builds for my lab upgrade, including some beta and RC bits.
Shared VMs / Services: These VMs provide basic domain services (auth, DHCP, DNS, etc).  There are also a couple of OS templates for future use.  I won’t be covering the install/config of these services – I’m hoping you’ve got that covered at this point.  Note that several of the VMware products you’ll be building depend on some of these services – but you’ll need to have vCenter up and running before you can build your templates for rapid deployment.  We’ll cover the order of deployment in the next post, “the setup”.
Primary Microsoft Active Directory (+add-ons) | Domain Services, DHCP, DNS, Account Management – required by the majority of VMware products in some shape or form (many support other LDAP options as well).  These components all run on a single Windows 2008 R2 VM.
Secondary Microsoft Active Directory (+add-ons) | Same as above (minus DHCP).  Provides domain service redundancy.  Be sure to use a DRS rule to keep these two VMs on separate hosts when possible.
Microsoft SQL Server 2008 | Microsoft SQL Server installed on a Windows 2008 R2 VM.  This is the core DB server for anything that requires an external database.
Windows 2008 R2 “golden image” | Windows template for building out the Windows-based servers.  Once your vCenter is online, build this VM with a base copy of Windows 2k8 R2 and ensure it’s up to date with patches and such.  Save as a template.
RHEL 5.6 x64 “golden image” | Red Hat Enterprise Linux template for building out the RHEL-based servers.
CentOS x64 “golden image” | CentOS template for building out the Linux-based servers.
VMware Solutions: here is a list of products/solutions running in EZLAB 2.0 – what you deploy is up to you.  As I mentioned previously, you can get eval licenses for the majority of these products (assuming you don’t already own them).  My lab utilizes the latest beta code for several products.  Don’t worry if you’re not participating in any of these betas – config and setup will be similar with the current GA versions of each product (I will provide an update at GA).  Head over to VMware’s download portal and start downloading…
What | Where | Why
VMware vSphere (ESXi) 5.x | Installed on bare metal (host) | World’s greatest hypervisor – installed on the 2 hosts we built, plus on 4 virtual (nested) ESXi hosts.
VMware vCenter Server 5.x
– vSphere Web Client
– vSphere SSO Server
Appliance (VCVA) | vCenter is the management core of the VMware stack – I use the virtual appliance for quick setup.  You can optionally use the Windows-installed version if preferred.
VMware vCloud Director 5.x | 1 x RHEL 5.5 x64 VM | vCloud Director delivers the cloud framework, multi-tenancy, IT abstractions, and cloud management functionality.
VMware vCloud Connector 5.x | Appliance | Connect multiple clouds and/or vSphere environments (public or private) and manage them through a single pane of glass.
VMware vShield Edge 5.x | Appliance | Integrated, dynamic cloud security, managed by the vShield Manager.
VMware vFabric App Director 1.0 | Appliance | Build multi-tiered applications on demand and deploy them to vCloud.  App Director is based on app blueprints and helps orchestrate the rapid build/deployment of complex applications.
VMware vCenter Orchestrator 5.x | Appliance | Build workflows to automate repetitive tasks across the infrastructure.  vCO is used in the EZLAB as a key automation, orchestration, and integration component.
VMware View 5.1
– View Manager
– View Security Server (opt.)
– View Composer
3 x Windows 2008 R2 VMs | All management desktops – providing access to the EZLAB environment – are brokered by the View Connection Server.  This makes delivering VDI and providing external access a piece of cake without sacrificing performance.
VMware vCenter Operations Manager 5.6 beta
– Analytics VM
vApp Appliance (2 VMs total) | The entire environment is watched and managed – from a Health, Risk, and Efficiency perspective – using vCOps.  With vCOps I have real-time access to stats like performance, capacity, and overall health across EZLAB.
VMware vCenter Chargeback 2.0 | Appliance | Cost visibility and chargeback for the entire cloud.  Although I don’t provide external services, vCB is used to demo this capability.
VMware vCenter Infrastructure Navigator (VIN) 1.2 beta | Appliance | Understand relationships and dependencies between VMs across the cloud and use this info to make decisions on failover, migration, etc.
VMware Zimbra 8.0 beta | Appliance | Email and collaboration server – provides all internal messaging services for EZLAB.  I’ll eventually expand the role of Zimbra to take advantage of several of the new capabilities of Zimbra 8.0.
The Tool Belt: these tools of the trade belong in everyone’s virtual tool belt.  They will make your life much easier (or are otherwise required).  Depending on your platform (Mac vs. Win), some may or may not apply.
Windows PowerShell
VMware vSphere Client
VMware vSphere PowerCLI 5
FileZilla Client
Terminal Services Client
RealVNC Viewer

Okay, that should be plenty to get you started.  Feel free to share any feedback as you build out your lab.


  1. Hi Jad,

    when will you be posting the setup?

    thanks.


  2. It's coming! (I know, I've been saying that for a while now.)  But really…it's coming.

  3. karima

    thanks mr jad el zein

  4. Hi Jad,

    Would you recommend installing the vRA management components – the vRA appliance, Identity Server, IaaS, Postgres database, and clustered MS SQL databases – on different networks instead of the same network as the HA management cluster?  If not, why?  Thank you.

  5. Good evening,
    My name is Mike and I just came across this post while researching “white box” build guides to build a dedicated server for labbing at home, whether it be toward MCSA, VMware, etc.  I realize this post is dated now, but my question is: if I were to simply buy all these parts, would they still hold up today for my needs?
    Please advise & assist.
    Regards, Mike
