
Quick Guide to VCF Automation for VCD Administrators


VMware Cloud Foundation 9 (VCF 9) has been released and with it comes a brand new Cloud Management Platform – VCF Automation (VCFA) – which supersedes both Aria Automation and VMware Cloud Director (VCD). This blog post is intended for those who know VCD quite well and want to understand how VCFA is similar or different, to help them quickly orient themselves in the new direction.

It should be emphasized that VCFA is a new solution and not just a rebranding of an old one. However, it reuses a lot of components from its predecessors. The provider part of VCFA, called Tenant Manager, is based on VCD code and its UI and APIs will be familiar to VCD admins, while the tenant part inherits a lot from Aria Automation and, especially to VCD end-users, will look brand new.

Deployment and Architecture

VCFA is generally deployed from VCF Operations Fleet Management (the former Aria Suite Lifecycle Manager, now embedded in VCF Operations).

Fleet Management UI

Note that it is also possible to deploy VCFA via the VCF Installer as a Day-0 action together with the whole VCF bring-up, but that way you will have less flexibility in the networking topology.

VCFA consists of one or more VMSP (VMware Management Services Platform) appliances that run containerized services in a Kubernetes cluster. The appliance is quite big (24 vCPUs, 96 GB RAM) because it combines all the services that were previously running in separate appliances. In the prelude namespace we can find the PostgreSQL DB, Tenant Manager, Orchestrator, RabbitMQ, Automation provisioning, Blueprinting and Catalog services, and Cloud Consumption Services. Other namespaces contain platform-common logging and backup services.

VCFA pods running in namespace prelude
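For the curious, the services can be inspected directly. A minimal sketch with the official Python kubernetes client, assuming you have access to a kubeconfig for the appliance's embedded Kubernetes cluster (the path below is an assumption, adjust to your environment):

```python
# Sketch: list the VCFA services running in the "prelude" namespace.
# The kubeconfig path on the VMSP appliance is assumed here.
from kubernetes import client, config

config.load_kube_config(config_file="/etc/kubernetes/admin.conf")  # assumed path
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="prelude").items:
    print(f"{pod.metadata.name:60} {pod.status.phase}")
```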

The small deployment type consists of a single appliance, while medium and large scale out to three appliances with an internal or external load balancer. The appliance has only a single network interface and can be connected to the default VLAN-backed VCF VM management network, or optionally to other DVPGs or NSX segments. The AVN (application virtual network) deployment is optional but still recommended for production scenarios as you get additional security and resilience options.

Recommended VCFA deployment in an overlay segment

Backup and restore is managed from the Fleet Management UI into an external SFTP target and can be scheduled with a specific retention policy.

Backup configuration

Multitenancy

As in VCD, VCFA is fully multitenant; however, the tenancy is pushed down all the way to the infrastructure layer and is not maintained only at the CMP level.

Network Multitenancy

For NSX, each tenant (org) has their own NSX Project (which was optional in VCD). Overlay networking for workload isolation is essential. Each tenant gets a dedicated Transit Gateway (similar to a Tier-0 VRF GW). The Transit Gateway connects to a single Provider Gateway (Tier-0), which provides external connectivity via an associated public IP Space with individual IPs or whole blocks (prefixes) managed via quotas. The Provider Gateway is usually shared, but if a tenant needs specific (e.g. MPLS) external networking, then a tenant-dedicated Provider Gateway will have to be deployed. The tenant or provider can create one or more VPCs – Layer 3 routing constructs consisting of a VPC router (Tier-1 GW) with subnets connected to it. The subnets can have internal IPs (from the private VPC block) with automatic SNAT configured for egress communication, or can have public IPs from the public IP Space (fully routed). VPCs are connected upstream to the tenant Transit GW.

VPCs and IP Spaces
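To make the relationships easier to hold in your head, here is a purely conceptual sketch of how the pieces nest – plain Python dataclasses with illustrative names and addresses, not a VCFA or NSX API:

```python
# Conceptual model only - names, fields and addresses are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subnet:
    cidr: str
    public: bool = False          # public = routed IPs from the IP Space, otherwise SNAT egress

@dataclass
class VPC:                        # Layer 3 construct, backed by a Tier-1 GW (VPC router)
    name: str
    private_block: str
    subnets: List[Subnet] = field(default_factory=list)

@dataclass
class TransitGateway:             # per-tenant, similar to a Tier-0 VRF GW
    tenant: str
    vpcs: List[VPC] = field(default_factory=list)

@dataclass
class ProviderGateway:            # Tier-0, usually shared; owns the public IP Space
    name: str
    public_ip_space: str
    transit_gateways: List[TransitGateway] = field(default_factory=list)

# One shared Provider GW, one Transit GW per tenant, VPCs hang off the Transit GW.
pgw = ProviderGateway("shared-t0", "203.0.113.0/24")
acme = TransitGateway("acme", vpcs=[VPC("acme-vpc-1", "10.10.0.0/16",
                                        [Subnet("10.10.1.0/24"),             # SNAT egress
                                         Subnet("203.0.113.16/28", True)])])  # fully routed
pgw.transit_gateways.append(acme)
```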

The VCFA networking concepts are quite similar to the VCD ones: a VPC is similar to a VCD Datacenter Group, the VPC GW to an Org VDC GW, and the Transit GW to a dedicated Provider GW.

The main differences are:

  • tenants can create their own VPCs
  • SNAT and route advertisement are automatically configured; tenants cannot manage their own NAT services
  • no gateway nor distributed firewall currently
  • no VLAN networks nor fully isolated (disconnected) networks
  • Transit GWs currently connect upstream to only a single Provider GW

Compute Multitenancy

For compute, tenancy is handled via Supervisor Namespaces. Where VCD would talk to vCenter (VC) via a single service account and create vSphere Resource Pools for tenant Org VDCs, in VCFA each workload vCenter Server will have one or more vSphere Supervisors. VCFA then talks to the Supervisor endpoints, which manage tenant namespaces and the objects inside them. From the compute perspective, a Supervisor manages vSphere Zones, which are mapped to vSphere Clusters (currently 1:1). Namespaces are backed by individual Resource Pools in designated clusters (based on vSphere Zone selection).

Tenant VM workloads inside the namespace are managed by the Supervisor VM Service. Tenants can also deploy their own TKG Clusters (Kubernetes), storage volumes or custom objects such as DSM databases. The management of the namespace is done within the tenant context, which is proxied all the way from the tenant UI (CCS), API, VCF CLI or kubectl through a proxy running on the Tenant Manager to the Supervisor endpoint. This means the objects retain their ownership all the way through. Needless to say, the objects can no longer be managed via vCenter.

Comparison of VCFA and VCD managed VMs in vCenter Inventory Hierarchy
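Because the namespace is consumed through the Supervisor's Kubernetes API, a tenant VM can be created declaratively once kubectl (or any Kubernetes client) is pointed at the proxied endpoint. A rough sketch with the Python kubernetes client follows; the VM Service CRD group is vmoperator.vmware.com, but the exact API version and spec field names depend on the Supervisor release, so treat the v1alpha1-style spec below (and all names in it) as assumptions to verify against your environment:

```python
# Sketch: create a VM through the Supervisor VM Service in a tenant namespace.
# CRD version and spec fields (className, imageName, storageClass) are assumed
# from the v1alpha1 schema - verify against your Supervisor release.
from kubernetes import client, config

config.load_kube_config()                      # context proxied via the Tenant Manager
api = client.CustomObjectsApi()

vm = {
    "apiVersion": "vmoperator.vmware.com/v1alpha1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "acme-namespace-01"},
    "spec": {
        "className": "best-effort-small",      # VM class offered in the namespace
        "imageName": "ubuntu-22.04",           # image published via the Content Library
        "storageClass": "vsan-default",        # storage policy surfaced as a storage class
        "powerState": "poweredOn",
    },
}

api.create_namespaced_custom_object(
    group="vmoperator.vmware.com", version="v1alpha1",
    namespace="acme-namespace-01", plural="virtualmachines", body=vm)
```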

For networking, the namespace uses one of the available VPCs. For storage, the regular vSphere storage policies are mapped as storage classes, which is quite equivalent to what we are used to in VCD.

Identity Management

Each tenant organization can have its own Identity Provider; the supported types are the same as in VCD (LDAP, SAML and OIDC/OAuth). Local accounts are not supported (only a single one can be created for initial access). An LDAP endpoint needs to be accessible from the VCFA appliances. There are two levels of Role Based Access Control. The provider can manage Org level rights the same way as in VCD via global roles and rights bundles. However, there is one big difference in how VCD and VCFA handle object level access: while in VCD it would be at the vApp or Catalog level, in VCFA it is at the Project level.

The Project is a completely new concept in VCFA and should not be confused with NSX Projects. An Org administrator can create Projects and assign org users (or groups of users) to them with project level roles. So the same user can have admin rights in one project and just basic user rights in another. This is not possible in VCD, where a user has the same role in the whole Organization. There are multiple hardcoded project roles mapped to the most common use cases. The user will have access to all objects inside the project, even those that the user has not created. Namespaces are created inside a project. VPCs are orthogonal to Projects (no relationship). This new approach might be better than the VCD way in some scenarios but will not suit others, so it requires some planning on the tenant side on how to define Projects and user mapping.

Resource Management

VCFA introduces some new hierarchy constructs for resource management, so let's try to describe them and map them to VCD. In order to deploy VCFA, we need to have a VCF instance with a management cluster. This is where Fleet Management will be deployed (VCF Operations + Fleet Management appliance). Fleet Management can manage multiple VCF instances, where each instance consists of one management domain and additional workload domains. Each VCF instance has its own SDDC Manager. All these VCF instances together are called a Fleet.

In general it is expected to deploy a single VCFA instance in a Fleet. You can deploy additional ones, but these will not be fully integrated with VCF Ops. So we can say that a Fleet with a single VCFA is similar to a VCD site with multiple availability zones (individual VCF instances or workload domains). Note that federating multiple VCFA instances (similar to VCD multisite) is not yet possible.

Inside a VCFA instance we define Regions, and here comes the first clash of terminology. In VCD a Region would usually map to a site (Europe, US East, US West, Asia), while in VCFA a Region is mapped to a single NSX Domain with a collection of Supervisors. So this would usually be one or more workload domains from a single VCF instance. If US East is the Fleet, then the Regions can be us-east-a and us-east-b – essentially availability zones, each with an L2 networking domain. So we can say Regions are very similar to VCD Provider VDCs.

Terminology Mapping
Region Creation

Inside a Region we have one or more Supervisors, where each Supervisor manages vSphere Zones mapped (currently 1:1) to vSphere Clusters.

When you want to give a particular tenant / Organization access to resources, you need to select which Region and which Zones within the Region the Organization can consume resources from, what the quota is (CPU/RAM limit), and how much CPU/RAM can be reserved. Note that this must be done with Zone granularity (we do not have the concept of PVDC elasticity as in VCD). While this looks similar to Flex Org VDC allocation, the main difference is that nothing is really consumed until a Namespace is created by the tenant, carving out resources from the quota. This is somewhat similar to the Provider VDC Grants that were introduced in the three-tier VCD model in VCD 10.6, where a subprovider would receive a quota and carve it out to their sub-tenants in the form of Org VDCs.

Region Quota Creation
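The allocation model is easy to illustrate with a few lines of Python. This is a conceptual sketch of the bookkeeping only (the numbers, units and names are made up), showing that the region quota is only consumed as namespaces are created:

```python
# Conceptual sketch of quota bookkeeping - not a VCFA API.
region_quota = {"cpu_mhz": 100_000, "memory_gb": 512}   # granted to the Organization per Zone
namespaces = [
    {"name": "dev",  "cpu_mhz": 20_000, "memory_gb": 128},
    {"name": "prod", "cpu_mhz": 40_000, "memory_gb": 256},
]

used_cpu = sum(ns["cpu_mhz"] for ns in namespaces)
used_mem = sum(ns["memory_gb"] for ns in namespaces)

print(f"CPU:    {used_cpu}/{region_quota['cpu_mhz']} MHz used")
print(f"Memory: {used_mem}/{region_quota['memory_gb']} GB used")
# A new namespace is admitted only if it still fits within the remaining quota.
```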

Tenants can create an arbitrary number of Namespaces as long as they fit within the allocated quota. Namespaces are sized based on Namespace Classes (similar to Org VDC templates) or with an arbitrary size. Creation of a Namespace thus involves selection of a Region, one or more Zones and a single VPC that was created inside that Region.

Namespace Creation

It is hard to map the Namespace construct to anything in VCD – it is something between an Org VDC and a vApp.

Catalog

Catalog handling is quite different in VCFA from VCD. In VCD a Catalog would contain vApps, which would exist as VMs in an always powered-off state in the Org VDC tree structure. ISO images could be stored as well. The VCFA Catalog equivalent is a Content Library, which maps to a vCenter Content Library and contains OVAs and ISOs stored as files on the storage assigned to the Content Library. This means that deployment of a VM from a catalog is always an OVA-to-VM deployment and not the simple and fast VC cloning operation used in VCD. Also note that existing VMs cannot be captured into a Content Library.

Extensibility

The provider side inherits a mechanism similar to VCD extensibility via plugins and extensions. The tenant side does not support that mechanism yet and instead relies on Aria Automation orchestration. However, we can already extend VCFA with DSM to provide Database-as-a-Service or Object Storage; Private AI Foundation and other extensions like encryption or 3rd party backup are on the roadmap.

Data Services Extension in VCFA

