VMware vCenter Error – ‘no healthy upstream’


You might encounter the ‘no healthy upstream’ error message on a newly installed vCenter. It is caused by some unexpected parameters during the vCenter deployment. I was not able to find the exact root cause for this error; however, I knew the resolution from a discussion with technical experts a while back.

To start with, here is how it looks when you try to access the web client.

You will still be able to access the vCenter Server Appliance Management Interface (VAMI) at port 5480. Check the services here…

All services show as healthy. In fact, the summary page shows the health status as Good.

Everything looks fine, but you cannot access the web client. I tried restarting the vCenter Server multiple times with no luck, and also restarted all services from the management interface. Nothing worked.
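For reference, the same checks and restarts can also be done from an SSH session on the appliance (a minimal sketch, assuming shell access to the VCSA is enabled):

service-control --status --all    # list the state of all vCenter services
service-control --stop --all      # stop all services
service-control --start --all     # start them again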

The solution is to change the network settings from the vCenter Server Appliance Management Interface.

Click on Networking and expand nic0.
Notice that the IP address shows as DHCP even though it was assigned as static.

Click on Edit in the top right corner to edit the network settings, and select your NIC.

Expand nic0 here and notice that IPv4 shows as automatic.
Change this to manual.

Provide credentials on the next page.

Acknowledge the change. Take a backup of vCenter if necessary. It also recommends unregistering extensions before you save.

Also, check the next steps after settings are saved successfully.

 

Click on Finish and you should see the progress.

Access the web client once this is finished. You should be able to get in now.

Go back to the vCenter Server Appliance Management Interface to verify. You should see the IP as static.

The issue has been resolved. This is most likely caused by some unexpected parameters during the vCenter deployment, since the error does not appear for every deployment. Anyway, I wanted to write a small blog on it to help anyone who runs into this error.
Thank you for reading.

Are you looking for a lab to practice VMware products? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notifications about my new blogs.

VMware vCloud Foundation 4.2.1 Step by Step Phase3 – Deployment

Welcome back. We covered the background work as well as the deployment parameter sheet in earlier posts. If you missed them, you can find them here…

VMware vCloud Foundation 4.2.1 Step by Step Phase1 – Preparation
VMware vCloud Foundation 4.2.1 Step by Step Phase2 – Cloud Builder & Deployment Parameters

It’s time to start the actual deployment. We will resolve issues as we move along.
Let’s upload the “Deployment Parameter” sheet to Cloud Builder and begin the deployment.

Upload the file and click Next. I got an error here.

Bad Request: Invalid input
DNS Domain must match

It turned out to be an extra space in the DNS Zone Name field.

This was corrected. I updated the sheet and clicked Next.

All good. Validation process started.

To understand and troubleshoot the issues / failures that we might face while deploying VCF, keep an eye on the vcf-bringup.log file. It is located at ‘/opt/vmware/bringup/logs/’ on the Cloud Builder appliance. This file gives you a live update of the deployment and shows any errors that caused it to fail. Use ‘tail -f vcf-bringup.log’ to get the latest updates on the deployment. See the screenshot below.
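If you only want to watch for problems, the log can be filtered as well (a small sketch; adjust the pattern to taste):

tail -f /opt/vmware/bringup/logs/vcf-bringup.log | grep -iE 'error|fail|warn'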

Let’s continue with the deployment…

Next Error.

“Error connecting to ESXi host esxi01. SSL Certificate common name doesn’t match ESXi FQDN”

Look at the “vcf-bringup.log” file.

This happens because the ESXi certificate is generated right after installation, with the default host name, and it is not regenerated when we rename the host later. You can check the host name in the certificate: log in to the ESXi host > Manage > Security & Users > Certificates.

You can see here that even though the host name at the top shows “esxi01.virtualrove.local”, the CN in the certificate is still “localhost.localdomain”. We must change this to continue.

SSH to the ESXi server and run the following commands to change the host name and FQDN and to generate new certificates.

esxcli system hostname set -H=esxi03                     # set the short host name
esxcli system hostname set -f=esxi03.virtualrove.local   # set the FQDN
cd /etc/vmware/ssl
/sbin/generate-certificates                              # regenerate the self-signed certificates with the new FQDN
/etc/init.d/hostd restart && /etc/init.d/vpxa restart    # restart the management agents
reboot

You need to do this on every host, substituting each host’s own name in the commands.
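If you prefer to do it in one pass, a rough sketch like the following can be run from any machine with SSH access to the hosts (it assumes root SSH is enabled on each ESXi and will prompt for each password):

for h in esxi01 esxi02 esxi03 esxi04; do
  ssh root@$h.virtualrove.local "esxcli system hostname set -H=$h; esxcli system hostname set -f=$h.virtualrove.local; cd /etc/vmware/ssl; /sbin/generate-certificates; /etc/init.d/hostd restart; /etc/init.d/vpxa restart; reboot"
done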

Verify the host name in the certificate once the server boots up.

Next, hit Retry on Cloud Builder, and we should be good.

I am not sure why this showed up; I was able to reach these IPs from the Cloud Builder VM.

 

Anyway, this was only a warning, and it can be ignored.

The next one was related to the host TEP and edge TEP networks.

VM Kernel ping from IP ‘172.27.13.2’ (‘NSXT_EDGE_TEP’) from host ‘esxi01.virtualrove.local’ to IP ” (‘NSXT_HOST_OVERLAY’) on host ‘esxi02.virtualrove.local’ failed
VM Kernel ping from IP ” (‘NSXT_HOST_OVERLAY’) from host ‘esxi01.virtualrove.local’ to IP ‘172.27.13.3’ (‘NSXT_EDGE_TEP’) on host ‘esxi02.virtualrove.local’ failed
VM Kernel ping from IP ” (‘NSXT_HOST_OVERLAY’) from host ‘esxi02.virtualrove.local’ to IP ‘172.27.13.2’ (‘NSXT_EDGE_TEP’) on host ‘esxi01.virtualrove.local’ failed
VM Kernel ping from IP ‘172.27.13.3’ (‘NSXT_EDGE_TEP’) from host ‘esxi02.virtualrove.local’ to IP ” (‘NSXT_HOST_OVERLAY’) on host ‘esxi01.virtualrove.local’ failed

VM Kernel ping from IP ‘172.27.13.2’ (‘NSXT_EDGE_TEP’) from host ‘esxi01.virtualrove.local’ to IP ‘169.254.50.254’ (‘NSXT_HOST_OVERLAY’) on host ‘esxi03.virtualrove.local’ failed
VM Kernel ping from IP ” (‘NSXT_HOST_OVERLAY’) from host ‘esxi01.virtualrove.local’ to IP ‘172.27.13.4’ (‘NSXT_EDGE_TEP’) on host ‘esxi03.virtualrove.local’ failed
VM Kernel ping from IP ‘169.254.50.254’ (‘NSXT_HOST_OVERLAY’) from host ‘esxi03.virtualrove.local’ to IP ‘172.27.13.2’ (‘NSXT_EDGE_TEP’) on host ‘esxi01.virtualrove.local’ failed
VM Kernel ping from IP ‘172.27.13.4’ (‘NSXT_EDGE_TEP’) from host ‘esxi03.virtualrove.local’ to IP ” (‘NSXT_HOST_OVERLAY’) on host ‘esxi01.virtualrove.local’ failed

First of all, the APIPA address 169.254.x.x puzzled me. We had specified VLAN 1634 for the Host TEP, so it should have picked up an IP address from 172.16.34.x. That VLAN was already in place on the TOR and I was able to ping its gateway from CB, so I took a chance and ignored it since it was a warning.

Next, I got warnings for NTP.

Host cb.virtualrove.local is not currently synchronising time with NTP Server dc.virtualrove.local
NTP Server 172.16.31.110 and host cb.virtualrove.local time drift is not below 30 seconds
Host esxi01.virtualrove.local is not currently synchronising time with NTP Server dc.virtualrove.local

For the ESXi hosts, restarting the ntpd service resolved the issue.
For CB, I had to sync the time manually.
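On an ESXi host, the restart can be done from an SSH session (a small sketch):

/etc/init.d/ntpd restart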

Steps to manually sync NTP on the Cloud Builder appliance:

ntpq -p                         # check the current peers and offset
systemctl stop ntpd.service     # stop the NTP daemon before a manual sync
ntpdate 172.16.31.110           # force a one-time sync against the NTP server
ntpdate 172.16.31.110           # wait a minute and run it again
systemctl start ntpd.service    # start the NTP daemon again
systemctl restart ntpd.service  # restart once more to be sure it starts cleanly
ntpq -p                         # check the peers and offset again

Verify the offset again; it should be close to 0.

Next, I locked out the root account of the Cloud Builder VM due to multiple logon failures. 😊

This is common, since the passwords are complex and sometimes you have to type them manually on the console, and on top of that, Linux does not echo what you are typing.
Anyway, resetting the root account password for Photon OS is a standard process, and the same applies to vCenter. Check the small write-up on it at the link below.

Next, back on CB, click on “Acknowledge” if you want to ignore the warnings.

Next, you will get this window once you resolve all errors.

Click on “Deploy SDDC”.

Important note: once you click on “Deploy SDDC”, the bring-up process first builds vSAN on the first ESXi server in the list and then deploys vCenter on that host. If bring-up fails for any reason and you find that one of the parameters in the Excel sheet is incorrect, changing a parameter that has already been uploaded to CB is a tedious job: you have to use the jsongenerator commands to replace the existing spec in CB. I have not come across such a scenario yet, but there is a good write-up on it from a good friend of mine.

Retry Failed Bringup with Modified Input Spec in VCF

So, make sure to fill in all the details correctly in the “Deployment Parameter” sheet. 😊

Let the game begin…

Again, keep an eye on the vcf-bringup.log file at ‘/opt/vmware/bringup/logs/’ on Cloud Builder. Use ‘tail -f vcf-bringup.log’ to get the latest updates on the deployment.

The installation starts. Good luck. Be prepared to see unexpected errors, and don’t lose hope, as there may be several errors before the deployment completes. Mine took a week to deploy the first time I did it.

Bring-up process started. All looks good here. Status as “Success”. Let’s keep watching.

All looks good here. Up to this point, vCenter was in place and the first NSX-T OVA was being deployed.

Looks great.

A glance at the NSX-T environment.

Note that the host TEP IPs are from VLAN 1634, even though the CB validation stage was picking up APIPA addresses.

NSX-T was fine. The bring-up then moved on to the SDDC Manager deployment.

Woo, bring-up moved to the post-deployment tasks.

It moved on to AVN (Application Virtual Networking). I was expecting some errors here.

Failed.

“A problem has occurred on the server. Please retry or contact the service provider and provide the reference token. Unable to create logical tier-1 gateway (0)”

This was an easy one: vcf-bringup.log showed that it was due to a missing DNS record for the edge VM. I created the DNS record and hit Retry.

Next one,

“Failed to validate BGP Neighbor Peering Status for edge node 172.16.31.125”

Let’s look at the log file.

Time to check NSX-T env.

The Tier-0 gateway interfaces look good as per our deployment parameters.

However, BGP Neighbors are down.

This was expected, since we hadn’t done the BGP configuration on the TOR (VyOS) yet. Let’s get into VyOS and run some commands.

set protocols bgp 65001 parameters router-id 172.27.11.253
This command specifies the router ID. If the router ID is not specified, the highest interface IP address is used.

set protocols bgp 65001 neighbor 172.27.11.2 update-source eth4
This specifies the source to use for the BGP session to this neighbor; it may be given either as an IPv4 address directly or as an interface name.

set protocols bgp 65001 neighbor 172.27.11.2 remote-as ‘65003’
This command creates a new neighbor with the specified remote AS. The neighbor address can be an IPv4 address, an IPv6 address, or an interface to use for the connection. The command is applicable to both peers and peer groups.

set protocols bgp 65001 neighbor 172.27.11.3 remote-as ‘65003’
set protocols bgp 65001 neighbor 172.27.11.2 password VMw@re1!
set protocols bgp 65001 neighbor 172.27.11.3 password VMw@re1!

commit
save

The TOR configuration is done for VLAN 2711. Let’s refresh and check the BGP status in NSX-T.

Looks good.

The same configuration has to be performed for the second VLAN. I am using the same VyOS for both VLANs since it is a lab environment. Usually, you will have two TORs, with a BGP peering VLAN configured on each for redundancy.

set protocols bgp 65001 parameters router-id 172.27.12.253
set protocols bgp 65001 neighbor 172.27.12.2 update-source eth5
set protocols bgp 65001 neighbor 172.27.12.2 remote-as ‘65003’
set protocols bgp 65001 neighbor 172.27.12.3 remote-as ‘65003’
set protocols bgp 65001 neighbor 172.27.12.2 password VMw@re1!
set protocols bgp 65001 neighbor 172.27.12.3 password VMw@re1!
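Once this second set of commands is committed and saved, the sessions can also be checked from the VyOS side (a quick sketch using op-mode commands, run outside configure mode):

show ip bgp summary
show ip bgp neighbors 172.27.12.2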

Both BGP Neighbors are successful.

Hit Retry on CB and it should pass that phase.

Next Error on Cloud Builder: ‘Failed to validate BGP route distribution.’

Log File.

At this stage, routing has been configured in your NSX-T environment, both edges have been deployed, and BGP peering is up. If you check the BGP peer information on the edges as well as on the VyOS router, it will show ‘Established’, and the routes from the NSX-T environment even appear on your VyOS router. That means route redistribution from NSX to VyOS works fine, and this error means that no routes are being advertised from VyOS (TOR) to the NSX environment. Let’s get into VyOS and run some commands.

set protocols bgp 65001 address-family ipv4-unicast network 172.16.31.0/24
set protocols bgp 65001 address-family ipv4-unicast network 172.16.32.0/24
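After a commit and save, we can confirm from VyOS that these networks are now advertised to the edge peers (a sketch; 172.27.11.2 is one of the peer addresses configured earlier):

show ip bgp neighbors 172.27.11.2 advertised-routes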

Retry on CB and you should be good.

Everything went smoothly after this. SDDC was deployed successfully.

That was fun. We have successfully deployed vCloud Foundation version 4.2.1 including AVN (Application Virtual Networking).

Time to verify and check the components that have been installed.

SDDC Manager.

Segments in NSX-T that were specified in the deployment parameter sheet.

Verify on the TOR (VyOS) that you see these segments as BGP-advertised networks.

I added a test segment called “virtaulrove_overlay_172.16.50.0” in NSX-T to check whether the newly created network gets advertised to the TOR.

All looks good. I see the new segment subnet populated on TOR.

Let’s do some testing. As you can see above, the new segment subnets are being learned from 172.27.11.2; this interface is configured on the edge01 VM. Check it here.

We will take down the edge01 VM to see whether route learning fails over to edge02.

Go to the Edge nodes in NSX-T and select “Enter NSX Maintenance Mode” for the edge01 VM.

Edge01, Tunnels & Status down.

Notice that the gateway address has failed over to 172.27.11.3.

All Fine, All Good. 😊

There are multiple tests that can be performed to check if the deployed environment is redundant at every level.

Additionally, you can use the command ‘systemctl restart vcf-bringup’ on Cloud Builder to pause the deployment when required.

For example, in my case the NSX-T Manager was taking a long time to deploy, and because of a timeout on Cloud Builder, it kept cancelling the deployment, assuming a failure. So I paused the deployment after the NSX-T OVA job was triggered from CB and hit ‘Retry’ once NSX had been deployed successfully in vCenter. It picked up from that point and moved on.

I hope you enjoyed reading the post. It’s time for you to get started and deploy VCF. See you in future posts, and feel free to comment below if you face any issues when you deploy your VCF environment.

Are you looking for a lab to practice VMware products? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notifications about my new blogs.

VMware vCloud Foundation 4.2.1 Step by Step Phase2 – Cloud Builder & Deployment Parameters

We have prepared the environment for the VCF deployment. It’s time to move to CB and discuss the “Deployment Parameters” Excel sheet in detail. You can find my earlier blog here.

Login to Cloud Builder VM and start the deployment process.

Select “vCloud Foundation” here.

The other option, “Dell EMC VxRail”, is to be used when your physical hardware vendor is Dell.

VxRail is a hyper-converged appliance: a single device that includes compute, storage, networking, and virtualization resources. It comes with a pre-configured vCenter and ESXi servers. There is then a manual process to convert this embedded vCenter into a user-managed vCenter, and that is when this option is used. If possible, I will write a small blog on it too.

Read all prereqs on this page and make sure to fulfill them before you proceed.

Click on “Download” here to get the “Deployment Parameter” excel sheet.

Let’s dig into this sheet and talk in detail about all the parameters here.

The “Prerequisites Checklist” sheet from the deployment parameter workbook. Check all line items one by one and select “Verified” in the status column. This does not affect the deployment; it is just for your reference.

“Management Workloads” sheet.

Place your license keys here.

This sheet also has a compute resource calculator for the management workload domain. Have a look and try to fit your requirements accordingly.

“Users and Groups”: define all passwords here. Pay attention to the NSX-T passwords, as validation fails if they do not match the password policy.

Moving on to next sheet “Hosts and Networks”.

A couple of things to discuss here.

The DHCP requirement for the NSX-T Host TEP is optional now; the addresses can be defined manually with static IP pools here. However, if you select No, the DHCP option is still valid.

Moving on to the “vSphere Distributed Switch Profile” section in this sheet: it has three profiles. Earlier VCF versions had only one option, deploying with 2 pNICs. Due to high customer demand to deploy with 4 pNICs, these profiles were introduced. Let’s talk about them.

Profile-1

This profile deploys a single vDS with 2 or 4 uplinks. All network traffic flows through the NICs assigned to this vDS. Define the name and pNICs at rows 17 and 18 respectively.

Profile-2

This one deploys two vDS. You can see that the first vDS carries management traffic and the other one is for NSX. Each vDS can have 2 or 4 pNICs.

Profile-3

This one also deploys two vDS, except that vSAN traffic is segregated instead of NSX traffic as in the previous case.

Select the profile as per your business requirement and move to next step.

Next – “Deploy Parameters”

Define all parameters here carefully. If something is not valid, the cell turns red. I have selected the VCSA size as Small, since we are just testing the product.

Move to the NSX-T section and have a look at AVN (Application Virtual Networking). If you select Yes here, then you must specify the BGP peering information and uplink configuration. If you select No, no BGP peering is done.

TOR1 & TOR2 IPs are the interfaces configured on your VyOS. Make sure to create those interfaces. We will see this in detail when we reach that point in the deployment phase.

We are all set to upload this “Deployment Parameter” sheet to Cloud Builder and begin the deployment. That is all for this blog. We will do the actual deployment in the next blog.

Are you looking for a lab to practice VMware products? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notifications about my new blogs.

VMware vCloud Foundation 4.2.1 Step by Step Phase1 – Preparation

Finally, after a year and a half, I got a chance to deploy the latest version of vCloud Foundation, 4.2.1. It has been successfully deployed and tested. I have written a couple of blogs on an earlier version (4.0); you can find them here.

https://virtualrove.com/vcf/

Let’s have a look at the Cloud Foundation 4.2.1 Bill of Materials (BOM).

Software Component | Version | Date | Build Number
Cloud Builder VM | 4.2.1 | 25-May-21 | 18016307
SDDC Manager | 4.2.1 | 25-May-21 | 18016307
VMware vCenter Server Appliance | 7.0.1.00301 | 25-May-21 | 17956102
VMware ESXi | 7.0 Update 1d | 4-Feb-21 | 17551050*
VMware NSX-T Data Center | 3.1.2 | 17-Apr-21 | 17883596
VMware vRealize Suite Lifecycle Manager | 8.2 Patch 2 | 4-Feb-21 | 17513665
Workspace ONE Access | 3.3.4 | 4-Feb-21 | 17498518
vRealize Automation | 8.2 | 6-Oct-20 | 16980951
vRealize Log Insight | 8.2 | 6-Oct-20 | 16957702
vRealize Operations Manager | 8.2 | 6-Oct-20 | 16949153

It’s always a good idea to check release notes of the product before you design & deploy. You can find the release notes here. https://docs.vmware.com/en/VMware-Cloud-Foundation/4.2.1/rn/VMware-Cloud-Foundation-421-Release-Notes.html

Let’s discuss and understand the installation flow,

Configure TOR for the networks that are being used by VCF. In our case, we have VyOS router.
Deploy a Cloud Builder VM on a standalone source ESXi host or vCenter.
Install and configure 4 ESXi servers as per the prerequisites.
Fill the Deployment Parameters excel sheet carefully.
Upload “Deployment Parameter” excel sheet to Cloud Builder.
Resolve the issues / warning shown on the validation page of CB.
Start the deployment.
Post deployment, you will have a vCenter, 4 ESXi servers, NSX-T env & SDDC manager deployed.
Additionally, you can deploy VI workload domain using SDDC manager. This will allow you to deploy Kubernetes cluster.
Also, vRealize Suite & Workspace ONE can be deployed using SDDC manager.

You definitely need a huge amount of compute resources to deploy this solution.
This entire solution was installed on a single ESXi server. Following is the configuration of the server.

Dell PowerEdge R630
2 X Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
256 GB Memory
4 TB SSD

Let’s prepare the infra for VMware vCloud Foundation.

I will refer to my physical ESXi server as the base ESXi in this blog.
So, here are my base ESXi and the VMs installed on it.

dc.virtualrove.local – the domain controller & DNS server in the environment.
VyOS – this virtual router will act as the TOR for the VCF environment.
jumpbox.virtualrove.local – used to connect to the environment.
ESXi01 to ESXi04 – the target ESXi hosts for our VCF deployment.
cb.virtualrove.local – the Cloud Builder VM used to deploy VCF.

Here is a look at the TOR and interfaces configured…

Follow my blog here to configure the VyOS TOR.

Network requirements: the management domain networks must be in place on the physical switch (TOR). Jumbo frames (MTU 9000) are recommended on all VLANs, with a minimum MTU of 1600.

And VLAN 1634 for the host TEPs, which is already configured on the TOR at eth3.

The following DNS records must be in place before we start the installation.
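A quick sanity check of the records from the jump box before starting the bring-up (a sketch; 172.16.31.110 is the DC/DNS server in this lab):

nslookup esxi01.virtualrove.local 172.16.31.110
nslookup cb.virtualrove.local 172.16.31.110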

With all these things in place, our first step is to deploy the 4 target ESXi servers. Download the correct supported ESXi version ISO from VMware downloads.

VMware ESXi | 7.0 Update 1d | 4-Feb-21 | 17551050*

If you check the VMware downloads page, this version is not available for download.

The release notes say to create a custom image for the deployment. However, there is another way to get this version of the ESXi image: let’s download the Cloud Builder image from the VMware portal and install it. We will keep the ESXi installation on hold for now.

We start the Cloud Builder deployment once the 19 GB OVA file is downloaded.

Cloud Builder Deployment:

Cloud Builder is an appliance provided by VMware to build a VCF environment on the target ESXi hosts. It is a one-time-use VM and can be powered off after the successful deployment of the VCF management domain. After deployment, we will use SDDC Manager for managing additional VI domains. I will be deploying this appliance in VLAN 1631 so that it has access to the DC and all our target ESXi servers.

The deployment is straightforward, like any other OVA deployment. Make sure you choose the right password while deploying the OVA. The admin & root passwords must be a minimum of 8 characters and include at least one uppercase letter, one lowercase letter, one digit, and one special character. If this requirement is not met, the deployment will fail, which means re-deploying the OVA.

Once the deployment is complete, connect to CB using WinSCP and navigate to:

/mnt/iso/sddc-foundation-bundle-4.2.1.0-18016307/esx_iso/

You should see an ESXi image at this path.
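Alternatively, the image can be pulled with plain scp (a sketch from a Linux or macOS workstation, assuming SSH access to the Cloud Builder appliance is allowed):

scp root@cb.virtualrove.local:/mnt/iso/sddc-foundation-bundle-4.2.1.0-18016307/esx_iso/*.iso .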

Click on Download; we will use this image to deploy our 4 target ESXi servers.

The next step is to create 4 new VMs on the base physical ESXi. These will be our nested ESXi hosts where the VCF environment will be installed. All ESXi hosts should have an identical configuration. I have the following configuration in my lab.

vCPU: 12
2 Sockets, 6 cores each.
CPU hot plug: Enabled
Hardware Virtualization: Enabled

Memory: 56 GB

HDD1: Thick: ESXi OS installation
HDD2: Thin VSAN Cache Tier
HDD3: Thin VSAN Capacity Tier
HDD4: Thin VSAN Capacity Tier

And 2 network cards attached to Trun_4095. This allows the ESXi host to communicate with all networks on the TOR.

Map the ISO to CD drive and start the installation.

I am not going to show the ESXi installation steps, since most of you know them already. Let’s look at the custom settings after the installation.

DCUI VLAN settings should be set to 1631.

Cross-check the DNS and IP settings on the ESXi host.

And finally, make sure that the ‘Test Management Network’ on DCUI shows OK for all tests.

Repeat this for all 4 ESXi hosts.

I have all my 4 target ESXi servers ready. Let’s look at the ESXi configuration that has to be in place before we can use them for the VCF deployment.

All ESXi hosts must have the ‘VM Network’ and ‘Management Network’ port groups configured with VLAN ID 1631.
The NTP server address should be in place on all ESXi hosts.
The SSH & NTP services must be enabled, with the policy set to ‘Start and stop with host’.
All additional disks must be present on each ESXi host as SSDs, ready for the vSAN configuration. You can check it here.

If your base ESXi has HDDs and not SSDs, you can use the following commands to mark those HDDs as SSDs.

You can either connect to the DC and use PuTTY to reach the ESXi host, or open the ESXi console, and run these commands.

# Add a SATP claim rule marking each additional disk as SSD, then reclaim the device so the rule takes effect.
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T1:L0 -o enable_ssd
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T2:L0 -o enable_ssd
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T3:L0 -o enable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T3:L0

Once done, run the ‘esxcli storage core device list’ command and verify that you see SSD instead of HDD.
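To check a single device without scrolling through the whole output, something like this works too (a small sketch using the same device IDs as above):

esxcli storage core device list -d mpx.vmhba1:C0:T1:L0 | grep -i ssd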

Well, that should complete all the prerequisites for the target ESXi hosts.

So far, we have completed the configuration of the domain controller and the VyOS router, the 4 nested target ESXi hosts, and the Cloud Builder OVA deployment. The following VMs have been created on my physical ESXi host.

I will see you in the next post, where we talk about the “Deployment Parameters” Excel sheet in detail.

Thank you.

Are you looking for a lab to practice VMware products? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notifications about my new blogs.

Lab-as-a-Service (LaaS)

To become an expert in VMware virtualization, reading blogs is definitely a good way to enhance your knowledge. However, you will not get a real feel for it unless and until you try it out yourself. You get a real production feel when you do hands-on work and resolve those unexpected issues by yourself. VMware Workstation is a great way to install, configure, and try new products and experiment with labs, as long as you have a good amount of hardware (i.e. memory, storage & CPU).

The huge amount of compute resources (i.e. memory, storage & CPU) needed for VMware labs is one of the biggest barriers and sometimes becomes an obstacle for an individual. To resolve this, we have put together complete lab solutions for users who want to learn VMware virtualization and explore it by implementing it. We have a large pool of compute resources available for rent, which can be used for any kind of lab. You will be provided with the requested amount of memory, storage & CPUs to do the labs, accessible from anywhere, at any time. Additionally, we will assist you in setting up the lab.

At a very minimal cost, we provide lab setup, assistance & 24*7 support. Our labs can accommodate the following VMware products, which can be deployed and tested multiple times for real-world experience. Here is the list of labs, followed by the certification you will be able to work towards.

Labs (all latest versions of the products) | VMware Certification
VMware vSphere – Install, Configure, Manage (including vSAN) | VMware Certified Professional – Data Center Virtualization (VCP-DCV)
VMware NSX-T Data Center: Install, Configure, Manage | VMware Certified Professional – Network Virtualization (VCP-NV)
VMware vRealize Automation: Install, Configure, Manage | VMware Certified Professional – Cloud Management and Automation (VCP-CMA)
VMware Horizon: Install, Configure, Manage | VMware Certified Professional – Desktop and Mobility (VCP-DTM)
VMware vSphere – Deploy Lab | VMware Certified Advanced Professional – Data Center Virtualization Deploy (VCAP-DCV / VCIX-DCV)
VMware NSX – Deploy Lab | VMware Certified Advanced Professional – Network Virtualization Deploy (VCAP-NV / VCIX-NV)

Our labs are not limited to the products above. They are equipped with enough capacity to help you run PoCs for the following VMware products before you implement them in your production environment. All required assistance & guidance will be provided to set up these labs.

VMware Validated Design (VVD)
VMware vCloud Foundation (VCF)
VMware vRealize Orchestrator (VRO)
VMware vRealize Operations Manager (vROPS)
VMware vRealize Log Insight (vRLI)
VMware vRealize Business (vRB)
VMware Site Recovery Manager (SRM)

We have our own SOPs (Standard Operating Procedures) to build the environment. Connect with us to get lab experience as if you were building a customer production environment.
For a demo, we provide 2 hours of lab access to new users at no cost.

Connect with us through the contact form using below link OR send an email to contact@virtualrove.com

Click Here to send an enquiry.

NSX-T 3.0 – Reverse Migration of VMkernel to Port Group

In this post, we will talk about the reverse migration of a VMkernel adapter from NSX-T back to a vCenter port group. If you did not get a chance to look at my previous article on migrating a VMkernel adapter from vCenter to NSX-T, here is the link.

There can be multiple reasons for removing VMkernel ports from NSX-T. Here are some…
The third-party application that had a vmk on a vCenter port group does not behave as expected,
The application itself (which uses the vmk) is no longer needed,
You want to uninstall NSX-T from one of the hosts for any reason; you first have to move the vmks off it, or the appropriate “Network Mappings for Uninstall” has to be in place before you move on.

Note: Uninstalling NSX-T Data Center from an ESXi host is disruptive if the physical interfaces or VMkernel interfaces are connected to N-VDS.

Here is one more important scenario mentioned in the VMware docs (copied from the VMware site):

Transport node configuration on a node cannot be overriden if underlying segments or VMs are connected to that transport node. For example, consider a two ESXi host cluster, where host-1 is configured as transport-node-1, but host-2 is unprepared. Segments and VMs are connected to transport-node-1. After preparing host-1 as a transport node (associated to transport-zone-1), if you apply a transport node profile to that cluster (associated to transport-zone-2), then NSX-T does not override the transport node configuration with the transport node profile configuration. To successfully override configuration on host-1, power off the VMs and disconnect the segment before applying the transport node profile to associate host-1 to transport-zone-2 and disassociate it from transport-zone-1.

With that, let’s get started.

In the previous post, I explained the process of migrating a vmk from vCenter to NSX-T. Let’s get started with reverting it back.
Verify the vmk location. It is on the NSX-T logical switch “VLAN-1650”, and the switch name is ‘data-nvds’.

Back to NSX-T > System> Nodes> Select appropriate node> Action> ‘Migrate ESX Vmkernel and Physical Adaptors’

Select ‘Migrate to port group’ in this wizard.

Direction: Migrate to Port Groups
N-VDS: Select the target switch from where you want to remove vmk port.
Select the VMkernel Adapter (vmk3) and manually type the port group name ‘vDS-Test-1650’

Map the appropriate physical NICs and uplinks in this wizard.
Note: mapping physical NICs here does not mean that they will be removed from the N-VDS.

Save.

Verify that the vmk3 is back to the VDS.

Test the connectivity to the vmk from the ESXi host, and we are done with the reverse migration of the VMkernel port to the port group. That’s it for this post. I will come back soon with new content in my next blog.

Cheers..!!!

Are you looking for a lab to practice VMware products? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notifications about my new blogs.

NSX-T 3.0 – VMkernel Migration to an N-VDS Switch

Most customers are moving to NSX-T environments. A common use case / question is: what happens to existing VMkernel adapters, or how does VMkernel migration work in NSX-T? One of my recent customers had a similar use case: they had a backup application running in the VMware vSphere environment which used 2 vmks, and the plan was to migrate all networks into NSX-T (Overlay or VLAN). There are any number of things to consider before planning such a migration. First, we got email confirmation from the application vendor on the application's compatibility with NSX-T. It was also important to get confirmation from the vendor that the backup application would still behave as expected and would be able to back up VMs connected to overlay segments.


Note: some third-party applications do not support or understand opaque networks. (For vCenter, all networks that have been created in NSX are opaque networks.)


In my case, the customer had to upgrade the backup application to the vendor-suggested version to make it compatible with NSX-T and able to back up VMs connected to NSX-T (Overlay & VLAN) networks.


Some additional points…
Please keep in mind that we are talking about third-party application VMkernel adapters and NOT the vMotion, management, or vSAN vmks. The migration process will not always be the way it is described in this article. It completely depends on the customer's environment, at what point you are planning this migration, and for which vmks. A shared compute, edge & management cluster with only 2 pNICs that is not on vSphere 7.0 will need proper planning and a migration methodology. A greenfield environment gives you the flexibility to migrate VMkernel adapters using network mappings while configuring host transport nodes, whereas a brownfield environment will eat your head. So plan and prepare wisely before you propose your plan to the customer.

Following is my lab setup for this post.
NSX-T 3.0 installed and configured.
Four hosts cluster prepared and configured for NSX-T. It is a shared cluster for all components.
Physical Adaptors – vmnic0, vmnic1 connected to vDS on vCenter. And vmnic2, vmnic3 connected to nvds in nsx-t.
BGP routing is in place.
Edge VM’s uplinks have been configured and connected to logical segments.
Port group name ‘vDS-Test-1650’ with vlan id 1650 is in place. This port group has VMkernel Adaptor 3 (vmk3) and it has been configured on all hosts in the cluster.
‘Test-10’ VM connected to ‘vDS-Test-1650’ for testing connectivity.

Here is the plan.
Create a VLAN-based logical segment in NSX-T for the 1650 network (VLAN-1650 LS).
Move the ‘Test-10’ VM from the ‘vDS-Test-1650’ port group to the ‘VLAN-1650 LS’ logical segment.
Migrate VMkernel adapter 3 (vmk3) from the port group to the logical segment.
Test connectivity from the test VM to the vmk IP after the migration.
Revert the configuration.

With that lets get started…

‘vDS-Test-1650’ port group on distributed switch.

‘Test-10’ VM connected to ‘vDS-Test-1650’

Verify the connectivity to ‘172.16.31.110’ (DC in my env) from Test-10 VM.

ESXi01 has vmk3 created with network label as vDS-Test-1650 port group.

Similar configuration on other hosts.


Time to create the VLAN-based logical segment in NSX-T.
Log in to the NSX-T VIP> Networking> Segments> Add Segment
Name: VLAN-1650
TZ: Shared VLAN TZ
VLAN: 1650

The VLAN-based logical segment is ready, and we can move the VMs into it.
Test-10 VM> Edit Settings> Change the network to the newly created logical segment.

The Test-10 VM now sits on the VLAN-based logical segment in NSX-T. Test the connectivity to the DC again.

Let’s move the VMkernel adapter from the vCenter port group to the NSX-T logical switch.
System> Fabric> Nodes> Host Transport Nodes> select the first ESXi host and click on Action> ‘Migrate ESX VMkernel and Physical Adapters’

Select the appropriate N-VDS to migrate to.
Select the VMkernel adapter that you plan to migrate into NSX-T.

And then the destination Logical switch that we created earlier.

Next > select the physical adapters in the N-VDS.
Note: these are the vmnics already assigned to the N-VDS, not new ones.

Select physical nics and appropriate uplinks and SAVE.

You get a warning at this stage. Continue.

Once it is successful, verify it in vCenter.
Notice that vmk3 is now sitting on the “data-nvds” instead of the “DATA-VDS”.
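The same can be confirmed from the host itself (a small sketch from an SSH session on the ESXi host; the output lists each vmk along with the portset/switch it is attached to):

esxcli network ip interface list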

Testing connectivity to the VMkernel adapter (172.16.50.101) from the VM.

All good. We have successfully migrated the VMkernel adapter (vmk3) to NSX-T. There may be situations where you want to revert the configuration if the expected results fail after the vmk migration. I will cover the reverse migration in my next blog.

I hope this blog has valuable information. See you all in the next post.

Are you looking for a lab to practice VMware products? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notifications about my new blogs.

NSX-T 3.0 – Load Balancer Concept & Configuration

It’s been a while since I wrote my last blog on NSX-T. Recently, I had several discussions with one of my customers about setting up an NSX-T logical load balancer, so I wanted to write a small blog with a generic example. This will give you a basic understanding of the NSX-T load balancer and how it is set up.

Let’s cover some theory first.

The NSX-T Data Center logical load balancer offers high-availability service for applications and distributes the network traffic load among multiple servers. The load balancer distributes incoming service requests evenly among multiple servers. You can map a virtual IP address to a set of pool servers for load balancing. The load balancer accepts TCP, UDP, HTTP, or HTTPS requests on the virtual IP address and decides which pool server to use.

Some key points to keep in mind before we proceed.

  • Logical load balancer is supported only on the tier-1 gateway.
  • One load balancer can be attached only to a tier-1 gateway.
  • Load balancer includes virtual servers, server pools, and health checks monitors. It can host single or multiple virtual servers.
  • NSX-T LB supports Layer 4 (TCP,UDP) as well as Layer 7 (HTTP,HTTPS).
  • Using a small NSX Edge node to run a small load balancer is not recommended in a production environment.
  • The VIP (Virtual IP) for the server pool can be placed in any subnet.

Load balancers can be deployed in either inline or one-arm mode.

Inline Topology

In the inline mode, the load balancer is in the traffic path between the client and the server. Clients and servers must not be connected to the same tier-1 logical router. LB-SNAT is not required in this case.

One-Arm Topology

In one-arm mode, the load balancer is not in the traffic path between the client and the server. In this mode, the client and the server can be anywhere. LB-SNAT is always required in this case.

Health check monitors are another area of discussion. They are used to test whether each server is correctly running the application; you can add health check monitors that check the health status of each server.

Let’s get started with setting up the simple example of NSX-T Logical Load Balancer.

Here is the background of the lab. I have an NSX-T environment already running in the lab. For demo purposes, I have already done the following configuration.

A new NSX-T logical segment called ‘LB_1680’ (subnet: 172.16.80.253/24)
Installed and configured 2 test web servers (OS: CentOS 7 with the web server role and a sample HTML file; a quick setup sketch follows below)
Connected the 2 new web servers to the LB_1680 segment
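For reference, a minimal sketch of how such a test web server can be prepared on CentOS 7 (run as root; the index text is just an example so the two servers are distinguishable):

yum install -y httpd
echo "Web Server 1 - 172.16.80.10" > /var/www/html/index.html
systemctl enable httpd
systemctl start httpd
firewall-cmd --permanent --add-service=http && firewall-cmd --reload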

Verify that you can access the web servers and that the web page is displayed.

1st Web Server. (172.16.80.10)

2nd Web Server. (172.16.80.11)

That was all the background work. Let’s start configuring the NSX-T logical load balancer.

We have to configure the server pool first and then move on to the rest of the configuration.

Login to NSX-T and navigate to Networking> Load Balancing> Server Pools> Add Server Pool

Name: WebServerPool
Algorithm: Round Robin (to distribute the load among pool members)
SNAT Translation Mode: Automap (leave it at the default)

Next, click on Select Members> Add Members & enter the information for the 1st web server.

Follow the same procedure again for the 2nd web server.

Click on Apply and Save.

Make sure that the status is Success.

Next, Click on Virtual Server and ADD L7 HTTP

Name: WebVirtualServer

IP: 192.168.10.15 (this IP can be in any subnet; we will use this IP address to access the web servers)
Port: 80
Server Pool: WebServerPool (Select the pool that you created in earlier step)

Save & Make sure that the status is Success.

Let’s move to Load Balancer tab and click on Add Load Balancer.

Name: Web-LB
Size: Small (note the sizing information at this point)
Attachment: Select your existing Tier-1 gateway.

Click on Save and then click on NO to complete the configuration.

Now, we have to attach this Load Balancer to Virtual Server that we created in earlier step.

Go back to ‘Virtual Servers’ and click on Edit.

Under the LB, select the LB that we just created and Save.

Make sure that the status is Success for LB, Virtual Server & Server Pools.

That’s It. We are done with the configuration of NSX-T Load Balancer. Its time to test it.

Try to access the VIP (192.168.10.15). This IP should load the web page from either the Web-1 or the Web-2 server.

The VIP is hitting my 2nd web server. Try to refresh the page.

A couple of refreshes will route the traffic to the other (1st) web server. You might have to try a different browser or press Ctrl+F5 to force-refresh the page.
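Browser caching can hide the round-robin behaviour, so a quick check with curl from any Linux box that can reach the VIP is handy (a sketch; the responses should alternate between the two index pages):

for i in 1 2 3 4; do curl -s http://192.168.10.15; echo; done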

Hurray…!! We have just configured NSX-T LB.

This is how my network topology looks. Web-LB is configured on the tier-1 gateway.

Remember, there is much more to it when it comes to a customer production environment. We must take several other things into consideration (health monitors, SNAT, LB rules, etc.), and it is not as easy as it sounds. This blog was written to give you a basic understanding of the NSX-T LB.

I hope this blog has valuable information. See you all in the next post.

Are you looking for a lab to practice VMware products? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notifications about my new blogs.

VMware vRealize Automation 8.1 – Part7 User Permissions, Roles & Branding

vRealize Automation uses VMware Workspace ONE Access, the VMware-supplied identity management application, to import and manage users and groups. After users and groups are imported or created, you can manage the role assignments for single-tenant deployments using the Identity & Access Management page. This blog focuses on user permissions and the role that has to be assigned for a user to request an item from the catalog.

VMware vRealize Automation 8.1 – Part1: Cloud Assembly & Service Broker
VMware vRealize Automation 8.1 – Part2: Cloud Accounts, Projects & Cloud Zones
VMware vRealize Automation 8.1 – Part3: Flavor Mapping & Image Mapping
VMware vRealize Automation 8.1 – Part4: Network Profiles
VMware vRealize Automation 8.1 – Part5: Blueprints
VMware vRealize Automation 8.1 – Part6: Content & Catalog
VMware vRealize Automation 8.1 – Part7: User Permissions, Roles & Branding

We have already integrated our Active Directory with vIDM, and a user named ‘Broker’ was created. Refer to my earlier blog here.

https://virtualrove.com/2020/07/11/vmware-vrlcm-8-1-part3-identity-manager-ad-integration/

We will assign permissions to the ‘Broker’ user account so that ‘Broker’ can request catalog items from vRA.

Log into vRA> Identity & Access Management> Check the box for ‘Broker’ user under Active Users.

You will see all the users from our Active Directory here, since we have integrated vIDM with vRA.

Edit Roles
Assign Org Role: Org Member
Assign Service Role: Service Broker
With Role: Service Broker User

Save.

This configuration gives the ‘Broker’ user access only to the ‘Service Broker’ page and the ability to request items from the catalog.

Log out and log back into vRA as the Broker user.

Notice that the only service available is ‘Service Broker’

Click on it and request a catalog item.

Notice that the ‘Requestor’ name is ‘Broker’.

The ‘Broker’ user has access to request an item.

That was a simple example of assigning user permissions; likewise, you can define who can do what and which services should be available to a particular user.

Please check detailed documentation on user roles in vRA here on VMware Official Site.

https://docs.vmware.com/en/vRealize-Automation/8.1/Administering/GUID-F94CB09A-DD93-4571-9D39-7FC1E6FA68CF.html

We now move to ‘Branding’ part to give nice look to your vRA portal.

vRA allows you to do custom branding for each tenant. You can define the logo and colors of your web page. By default, I see the following branding before I apply my own.

After custom branding, I see it like this.

I added a company logo, text color, background color & product name.

Log into vRA with the IDM user. Click on the ‘Branding’ tab and define the parameters.

Apply.

It was that simple to brand the vRA portal. 😊

With that, we have come to the end of this series. It’s always fun working on vRA; I have worked with it since version 6.x. The end results are always satisfying, and it simplifies your daily tasks. See you in the next post.

Are you looking for a lab to practice VMware products? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notifications about my new blogs.