One of the great capabilities of VMware Cloud Director is the ability to build nested labs for development, testing, education or demonstration purposes. VMworld Hands-on Labs have used this capability for years. Recently I have received numerous questions on how to build such a lab, so I want to summarize all the important information here.
Requirements
- self-contained lab running in a VCD cloud environment with functionality equivalent to a physical lab
- rapid deployment of multiple instances that do not interfere with each other
- reset, capture and clone capabilities
High Level Design
Each lab runs in a single routed vApp that is connected to the Org VDC Edge Gateway via a vApp Edge backed by a Tier-1 Gateway. Inside the vApp there is usually a single transit network to which all the VMs are connected. Nested VLANs can be used, and internal routing is provided by a vPod Router, which is essentially a router VM (e.g. VyOS or similar). The lab is accessed from a ControlCenter Windows VM that acts as a jumpbox you connect to via RDP; it can also have additional roles (DNS, AD, …).
The vApp Edge provides NAT from an Org VDC network IP to the vPod Router, which then NATs the RDP traffic to the ControlCenter.
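Schematically, the traffic flow (using the example addressing from the rest of this article) looks roughly like this:

Org VDC network (external IP, e.g. 10.100.11.3)
        |
vApp Edge (NAT + firewall)
        |
Transit vApp network 192.168.0.0/24
        |
vPod Router (VyOS, 192.168.0.2) – RDP port forward, masquerade
        |
internal lab subnet(s), e.g. 192.168.110.0/24
        |
ControlCenter (192.168.110.10) and other lab VMs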

Provider Design
We assume a recent VCD version is used, with NSX-T providing the networking functionality.
The Org VDC needs to have an Edge Cluster assigned in order to deploy vApp Edges. The Org VDC network must be overlay-backed; it cannot be VLAN-backed.
If we plan to use the vPod Router's own DHCP server and nested ESXi workloads, a specific NSX Segment Profile Template must be set up. In NSX, a MAC Discovery segment profile must be created with MAC Change and MAC Learning enabled – this is needed because VMs or VMkernel ports on nested ESXi hosts will have different MAC addresses than the ESXi NICs. A Segment Security segment profile must be created with DHCP Server Block disabled (it is enabled by default).
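The two profiles are typically created in the NSX Manager UI. For reference, a rough sketch of the same configuration via the NSX Policy API – assuming the mac-discovery-profiles and segment-security-profiles endpoints and field names, with <nsx-manager> as a placeholder – could look like this:

# MAC Discovery profile with MAC Change and MAC Learning enabled
curl -k -u admin -X PUT https://<nsx-manager>/policy/api/v1/infra/mac-discovery-profiles/lab-mac-discovery -H "Content-Type: application/json" -d '{"display_name": "lab-mac-discovery", "mac_change_enabled": true, "mac_learning_enabled": true}'
# Segment Security profile with DHCP Server Block disabled
curl -k -u admin -X PUT https://<nsx-manager>/policy/api/v1/infra/segment-security-profiles/lab-segment-security -H "Content-Type: application/json" -d '{"display_name": "lab-segment-security", "dhcp_server_block_enabled": false}'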

Then, in VCD under Resources > Infrastructure Resources > NSX-T, we can create a Segment Profile Template utilizing the two segment profiles above.

Then we can assign this Segment Profile Template to our Org VDC for vApp networks.

Finally, to enable rapid provisioning of the labs, we can enable Fast Provisioning under Policies > Storage for the Org VDC. All vApps will then be deployed as linked clones from the catalog templates, which takes seconds as opposed to minutes for full clones.

Lab Design
Lab design is fully within the scope of the tenant, who can create labs as they wish – what follows is my recommended approach.
Create a vApp with a single routed vApp network called Transit, with the IP subnet 192.168.0.0/24 and an IP pool. This network will be connected to the Org VDC network. If you plan on using VLANs, enable Allow Guest VLAN.

Deploy the vPod Router VM. I prefer to use VyOS. Use at least two network interfaces, where the first one needs to use IP allocation from the Transit IP Pool; set the others to DHCP. The IP Pool allocation mode is necessary to allow NAT from the vApp Edge later on. The DHCP setting essentially tells VCD not to use its IPAM – we will set static IPs internally in the vPodRouter.


Guest customization should generally be disabled on all lab VMs; you will need to configure IPs manually from within each VM. For the vPodRouter, the initial configuration looks like this:
configure
set interfaces ethernet eth0 address 192.168.0.2/24
set protocols static route 0.0.0.0/0 next-hop 192.168.0.1
set service ssh listen-address 192.168.0.2
commit
save
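After committing you can optionally sanity-check the result from the VyOS operational mode (or prefix the commands with run while still in configuration mode):
show interfaces
show ip route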
Now we can configure NAT to the vPodRouter on the vApp Edge. At the vApp level go to Networks and click on the Transit network. In Services, configure the Firewall to allow incoming traffic, make sure NAT is enabled, and create a NAT rule to the first vNIC of the vPodRouter.

You should now be able to SSH to your lab via the external IP of the vApp Edge NAT rule (10.100.11.3 in this example).
Note that the above approach consumes two IPs from the Org VDC network IP Pool: one for the vApp Edge and one for the vPodRouter NAT (1:1 SNAT/DNAT or reflexive – this can be set in the Org VDC Network Policies). The alternative approach is to change the NAT type from IP Translation to Port Forwarding and forward only specific ports (22, 3389) to the vPodRouter – then only a single IP (the Router External IP) is used from the Org VDC network IP Pool.

Now you should configure the internal networking of the vApp – set up the other vPodRouter interfaces, create virtual interfaces if VLANs are to be used, and configure NTP and optionally DHCP.
For example, we will set up the 192.168.110.0/24 subnet on the second interface:
configure
set interfaces ethernet eth1 address 192.168.110.1/24
set interfaces ethernet eth1 mtu 1700
commit
save
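The VLAN sub-interfaces, NTP and the optional DHCP scope mentioned above are configured in the same way. A sketch follows – the VLAN ID 110, the 192.168.111.0/24 subnet and the DHCP range are just examples, and the exact NTP and DHCP syntax differs slightly between VyOS releases:

configure
# VLAN sub-interface on eth1 (only needed when nested VLANs are used)
set interfaces ethernet eth1 vif 110 address 192.168.111.1/24
# NTP for the lab VMs
set service ntp server 0.pool.ntp.org
set service ntp allow-client address 192.168.0.0/16
# optional DHCP scope on the secondary subnet
set service dhcp-server shared-network-name LAB subnet 192.168.110.0/24 default-router 192.168.110.1
set service dhcp-server shared-network-name LAB subnet 192.168.110.0/24 range 0 start 192.168.110.100
set service dhcp-server shared-network-name LAB subnet 192.168.110.0/24 range 0 stop 192.168.110.199
commit
save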
Once this is done we can deploy the ControlCenter – usually a Windows Server VM with AD, CA, DNS and other roles and with the RDP service enabled. You connect it to the Transit network but configure its IP from the secondary subnet, e.g. 192.168.110.10. Set the VCD IP allocation to DHCP and, as mentioned, disable guest customization and set the IP internally inside the VM.
Lastly, we configure NAT of the RDP traffic on the vPodRouter and masquerading so all lab VMs can access the internet.
configure
set nat destination rule 10 description "Portforward RDP to ControlCenter"
set nat destination rule 10 destination port 3389
set nat destination rule 10 inbound-interface name eth0
set nat destination rule 10 protocol tcp
set nat destination rule 10 translation address 192.168.110.10
set nat source rule 100 outbound-interface name eth0
set nat source rule 100 translation address masquerade
commit
save
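The NAT rules can be verified from the operational mode:
show nat destination rules
show nat source rules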
You should now be able to RDP to the ControlCenter VM via the same IP that was used for SSH to the vPodRouter.
Capture this basic vApp to a catalog with the Make identical copy option.

This concludes the basic lab framework. The next step is to deploy additional VMs and appliances based on the lab needs.
Important Tips
Here is a collection of helpful tips on lab setups:
- Do not use .local as your domain name suffix (e.g. controlcenter.corp.local). The reason is that some DNS clients do not send queries for the .local domain to an external server, so your DNS resolution will not work correctly.
- When deploying nested ESXi hosts, make sure none of the vmk NICs matches the MAC address of the ESXi vmnics. By default vmk0 uses the vmnic1 MAC address. This will break NSX MAC learning if you have multiple NICs on the ESXi host and the teaming policy decides to use another vmnic. The easiest approach is to deploy and configure ESXi and then reset the MAC addresses in VCD.

- If you need storage for your nested ESXi hosts, use either vSAN or a TrueNAS or similar storage appliance.
- When using nested ESXi hosts you have the option to deploy other lab VMs either nested on those hosts or as regular VMs inside the vApp. The first option is easier, as you do not need to modify the OVAs to work under VCD, but the performance will be much slower and your nested ESXi VMs need enough compute/storage. For example, I prefer to deploy NSX Edge Nodes as vApp VMs and not nested – but then you cannot use the NSX UI deployment workflow.
- Some OVA appliances must be modified in order to deploy them to VCD, as not all features vCenter supports are available in VCD. For example, ExtraConfig properties or DeploymentOptions must be removed or edited. Extract the OVA, edit the OVF, and upload to the catalog only the OVF and vmdk files (see the sketch after this list).
- When deploying NSX, make sure the MTU is set correctly on the vPodRouter. As you are running on a physical overlay with potentially nested VLANs, set it conservatively to allow for the additional overhead (e.g. 1700).
- Appliance networking is usually configured via Guest Properties, so make sure you configure them before powering on the VM. Always disable guest customization.

- Set the vApp Start and Stop Order to start lab VMs in the correct order and account for the time they need to start properly.
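Regarding the OVA modification tip above: an OVA is simply a tar archive, so a minimal sketch of the workflow on a Linux workstation (the appliance file name is only an example) could be:

# an OVA is a tar archive – extract it
tar xvf some-appliance.ova
# edit the OVF descriptor, e.g. remove ExtraConfig entries or unsupported DeploymentOptions
vi some-appliance.ovf
# upload only the edited .ovf and the .vmdk files to the VCD catalog; leave out the .mf manifest, as its checksums no longer match the edited OVF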
