VMware vCloud Foundation 4.2.1 Step by Step Phase1 – Preparation

Finally, after a year and a half, I got a chance to deploy the latest version of vCloud Foundation, 4.2.1. It has been successfully deployed and tested. I have written a couple of blogs on an earlier version (4.0); you can find them here.

https://virtualrove.com/vcf/

Let’s have a look at the Cloud Foundation 4.2.1 Bill of Materials (BOM).

Software Component | Version | Date | Build Number
Cloud Builder VM | 4.2.1 | 25-May-21 | 18016307
SDDC Manager | 4.2.1 | 25-May-21 | 18016307
VMware vCenter Server Appliance | 7.0.1.00301 | 25-May-21 | 17956102
VMware ESXi | 7.0 Update 1d | 4-Feb-21 | 17551050*
VMware NSX-T Data Center | 3.1.2 | 17-Apr-21 | 17883596
VMware vRealize Suite Lifecycle Manager | 8.2 Patch 2 | 4-Feb-21 | 17513665
Workspace ONE Access | 3.3.4 | 4-Feb-21 | 17498518
vRealize Automation | 8.2 | 6-Oct-20 | 16980951
vRealize Log Insight | 8.2 | 6-Oct-20 | 16957702
vRealize Operations Manager | 8.2 | 6-Oct-20 | 16949153

It’s always a good idea to check the release notes of the product before you design and deploy. You can find the release notes here: https://docs.vmware.com/en/VMware-Cloud-Foundation/4.2.1/rn/VMware-Cloud-Foundation-421-Release-Notes.html

Let’s discuss and understand the installation flow:

Configure the TOR for the networks that are used by VCF. In our case, we have a VyOS router.
Deploy a Cloud Builder VM on a standalone source ESXi host or vCenter.
Install and configure 4 ESXi servers as per the prerequisites.
Fill in the Deployment Parameters Excel sheet carefully.
Upload the Deployment Parameters Excel sheet to Cloud Builder.
Resolve the issues/warnings shown on the validation page of Cloud Builder.
Start the deployment.
Post deployment, you will have vCenter, 4 ESXi servers, an NSX-T environment & SDDC Manager deployed.
Additionally, you can deploy a VI workload domain using SDDC Manager. This will allow you to deploy Kubernetes clusters.
Also, vRealize Suite & Workspace ONE can be deployed using SDDC Manager.
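As a side note, the same bring-up that the Excel sheet drives can also be done against the Cloud Builder REST API with a JSON deployment spec. A hedged sketch only — the endpoints are from the VCF 4.x API documentation, while the hostname matches this lab and the spec filename is a hypothetical placeholder:

```shell
# Sketch: bring-up via the Cloud Builder API instead of the UI upload.
# vcf-ems-spec.json is a hypothetical JSON equivalent of the parameter sheet;
# curl will prompt for the admin password.

# 1. Validate the deployment spec (mirrors the UI validation page)
curl -k -u admin -H "Content-Type: application/json" \
  -d @vcf-ems-spec.json https://cb.virtaulrove.local/v1/sddcs/validations

# 2. Start the bring-up once validation passes
curl -k -u admin -H "Content-Type: application/json" \
  -d @vcf-ems-spec.json https://cb.virtaulrove.local/v1/sddcs
```

We will stick to the Excel-sheet workflow in this series; the API route is handy for repeated lab rebuilds.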

You definitely need a huge amount of compute resources to deploy this solution.
This entire solution was installed on a single ESXi server with the following configuration:

Dell PowerEdge R630
2 X Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
256 GB Memory
4 TB SSD

Let’s prepare the infra for VMware vCloud Foundation.

I will refer to my physical ESXi server as the base ESXi in this blog.
So, here are the base ESXi and the VMs installed on it:

dc.virtaulrove.local – Domain Controller & DNS server for the environment.
VyOS – This virtual router will act as the TOR for the VCF environment.
jumpbox.virtaulrove.local – Used to connect to the environment.
ESXi01 to ESXi04 – These will be the target ESXi hosts for our VCF deployment.
cb.virtaulrove.local – Cloud Builder VM to deploy VCF.

Here is a look at the TOR and interfaces configured…

Follow my blog here to configure the VyOS TOR.

Network requirements: the management domain networks must be in place on the physical switch (TOR). Jumbo frames (MTU 9000) are recommended on all VLANs; the minimum supported MTU is 1600.

And a VLAN 1634 for host TEPs, which is already configured on the TOR at eth3.
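Once the VLANs are up, jumbo frames can be verified end to end with vmkping from an ESXi host. The payload size to pass is the MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header); the vmk interface and gateway IP below are assumptions for this lab:

```shell
# Payload size to pass to "vmkping -s" for a given MTU:
# MTU minus 20 (IP header) minus 8 (ICMP header).
mtu_payload() { echo $(( $1 - 28 )); }

echo "MTU 9000 -> vmkping payload $(mtu_payload 9000)"   # 8972
echo "MTU 1600 -> vmkping payload $(mtu_payload 1600)"   # 1572

# On an ESXi host (vmk0 and the gateway IP are lab-specific assumptions):
#   vmkping -I vmk0 -d -s 8972 192.168.31.1
```

The -d flag sets the don't-fragment bit, so the ping only succeeds if the whole path really carries the jumbo frame.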

The following DNS records need to be in place before we start the installation.
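Each record is worth spot-checking from the jumpbox before bring-up, since Cloud Builder validation fails on missing forward or reverse lookups. The hostnames below are assumptions based on this lab's naming; substitute your own records:

```shell
# Hypothetical record names for this lab -- adjust to your own DNS zone.
vcf_hosts="esxi01 esxi02 esxi03 esxi04 cb sddc-manager vcenter nsx01"

# Print one nslookup per record; run the printed lines from the jumpbox.
for h in $vcf_hosts; do
  echo "nslookup ${h}.virtaulrove.local"
done
```

Running the same names through reverse lookups (nslookup against the IP) catches missing PTR records, which are just as mandatory.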

With all these things in place, our first step is to deploy the 4 target ESXi servers. Download the correct supported ESXi version ISO from VMware Downloads.

VMware ESXi | 7.0 Update 1d | 4-Feb-21 | 17551050*

If you check VMware downloads page, this version is not available for download.

The release notes say to create a custom image for deployment. However, there is another way to get this version of the ESXi image: download the Cloud Builder image from the VMware portal and install it. We will keep the ESXi installation on hold for now.

We start the Cloud Builder deployment once this 19 GB OVA file is downloaded.

Cloud Builder Deployment:

Cloud Builder is an appliance provided by VMware to build a VCF environment on the target ESXi hosts. It is a one-time-use VM and can be powered off after the successful deployment of the VCF management domain; afterwards, SDDC Manager is used to manage additional VI domains. I will deploy this appliance in VLAN 1631 so that it has access to the DC and all our target ESXi servers.

The deployment is straightforward, like any other OVA deployment. Make sure you choose the right passwords while deploying the OVA. The admin & root passwords must be a minimum of 8 characters and include at least one uppercase letter, one lowercase letter, one digit, and one special character. If a password does not meet these requirements, the deployment fails and you have to redeploy the OVA.
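The stated policy can be sanity-checked before you type it into the OVA wizard. A minimal sketch of just the rules listed above — the appliance may enforce additional checks (for example, rejecting dictionary words):

```shell
# Returns 0 if the candidate password meets the documented CB policy:
# >= 8 chars, with at least one uppercase, lowercase, digit and special char.
check_cb_password() {
  p="$1"
  [ "${#p}" -ge 8 ] || return 1
  case "$p" in *[A-Z]*) ;; *) return 1 ;; esac
  case "$p" in *[a-z]*) ;; *) return 1 ;; esac
  case "$p" in *[0-9]*) ;; *) return 1 ;; esac
  case "$p" in *[!A-Za-z0-9]*) ;; *) return 1 ;; esac
  return 0
}

check_cb_password 'VMware123!' && echo "policy OK"       # prints "policy OK"
check_cb_password 'vmware123'  || echo "policy not met"  # prints "policy not met"
```

Thirty seconds here saves the hour it takes to redeploy the appliance after a rejected password.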

Once the deployment is complete, connect to CB using WinSCP and navigate to:

/mnt/iso/sddc-foundation-bundle-4.2.1.0-18016307/esx_iso/

You should see an ESXi image at this path.

Click on Download to copy this image locally; we will use it to deploy our 4 target ESXi servers.

The next step is to create 4 new VMs on the base physical ESXi. These will be our nested ESXi hosts, where the VCF environment will be installed. All ESXi hosts should have an identical configuration. I have the following configuration in my lab:

vCPU: 12
2 Sockets, 6 cores each.
CPU hot plug: Enabled
Hardware Virtualization: Enabled

Memory: 56 GB

HDD1: Thick – ESXi OS installation
HDD2: Thin – vSAN cache tier
HDD3: Thin – vSAN capacity tier
HDD4: Thin – vSAN capacity tier

And 2 network cards attached to Trun_4095, a trunk port group tagged with VLAN 4095. This allows the ESXi host to communicate with all networks on the TOR.

Map the ISO to the CD drive and start the installation.

I am not going to show the ESXi installation steps, since most of you know them already. Let’s look at the custom settings after the installation.

In the DCUI, the VLAN setting should be set to 1631.

Crosscheck the DNS and IP settings on the ESXi host.

And finally, make sure that ‘Test Management Network’ in the DCUI shows OK for all tests.

Repeat this for all 4 ESXi hosts.

I have all my 4 target ESXi servers ready. Let’s look at the ESXi configuration that has to be in place before we can use them for the VCF deployment.

All ESXi hosts must have the ‘VM Network’ and ‘Management Network’ port groups configured with VLAN ID 1631.
The NTP server address should be configured on all ESXi hosts.
The SSH & NTP services must be enabled, with the startup policy set to ‘Start and stop with host’.
All additional disks must be present on each ESXi host as SSDs, ready for vSAN configuration. You can check it here.
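The SSH and NTP items can also be handled from the host console instead of the UI. The function below simply prints the commands I would run on each host — the NTP server IP is this lab's DC (an assumption), and `esxcli system ntp set` is the ESXi 7.x syntax, so verify it against your build:

```shell
# Emit the per-host preparation commands; run the printed lines on each
# nested ESXi host over SSH or the console.
# 192.168.31.10 = this lab's DC/NTP server, an assumption.
host_prep_cmds() {
  cat <<'EOF'
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
esxcli system ntp set --server=192.168.31.10 --enabled=true
EOF
}
host_prep_cmds
```

`vim-cmd hostsvc/enable_ssh` sets the SSH policy so the service comes up with the host, and `start_ssh` starts it immediately.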

If your base ESXi has HDDs and not SSDs, you can use the following commands to mark those HDDs as SSDs.

You can either connect to the DC and use PuTTY to reach the ESXi host, or open the ESXi console, and run these commands.

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T1:L0 -o enable_ssd
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T2:L0 -o enable_ssd
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T3:L0 -o enable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T3:L0

Once done, run ‘esxcli storage core device list’ and verify that the disks now show as SSD instead of HDD.
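To check a single device rather than scrolling through the full list, the output can be filtered (device ID per this lab; adjust for your hosts):

```shell
# Run on the nested ESXi host; expect "Is SSD: true" after the reclaim above.
esxcli storage core device list -d mpx.vmhba1:C0:T1:L0 | grep "Is SSD"
```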

Well, that should complete all the prerequisites for the target ESXi hosts.

So far, we have completed the configuration of the Domain Controller, the VyOS router, the 4 nested target ESXi hosts, and the Cloud Builder OVA deployment. The following VMs have been created on my physical ESXi host.

I will see you in the next post, where we will talk about the “Deployment Parameters” Excel sheet in detail.

Thank you.

