NSX 4.0 Series Part5-Migrate workload from VDS To NSX

Welcome back readers.

Please find the links below for all posts in this series.

NSX 4.0 Series Part1-NSX Manager Installation
NSX 4.0 Series Part2-Add a Compute Manager & Configure the NSX VIP
NSX 4.0 Series Part3-Create Transport Zones & Uplink Profiles
NSX 4.0 Series Part4-Prepare Host Transport Nodes
NSX 4.0 Series Part5-Migrate workload from VDS To NSX

Our NSX env is fully functional, and we are ready to migrate workloads from the vCenter VDS to NSX.

It’s always a good practice to verify the NSX env before we start working on it.

Login to NSX VIP and look for Alarms,

Check the cluster status,

Then check the host transport nodes and confirm that the host status shows Up,
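If you prefer to check the same thing from the command line, the cluster and transport-node health is also exposed through the NSX REST API. A minimal sketch, assuming admin credentials and placeholder values for the VIP FQDN and node UUID:

# Overall management cluster status (-k skips certificate verification in a lab)
curl -k -u admin 'https://<nsx-vip>/api/v1/cluster/status'

# List transport nodes, then check the realized state of a specific node by its UUID
curl -k -u admin 'https://<nsx-vip>/api/v1/transport-nodes'
curl -k -u admin 'https://<nsx-vip>/api/v1/transport-nodes/<node-uuid>/state'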

For testing purposes, I have created three Windows VMs. Each VM connects to a different port group on the vCenter VDS. We will move these VMs from the vCenter VDS to NSX-managed segments.

Following are the test VMs with their respective VDS port groups. I have named the VMs after their port groups.

Next, we need to create Segments in the NSX env. A segment is nothing but the NSX equivalent of a vCenter port group.

Let’s have a look at the types of Segments.

VLAN-Backed Segments: For this type, you define a VLAN ID for the segment; however, you also have to make sure that this VLAN exists on your physical top-of-rack switch.

Overlay-Backed Segments: This type of segment can be configured without any changes on the physical infrastructure. It is attached to an overlay transport zone, and traffic is carried by tunnels between the hosts.

As stated earlier, we will be focusing only on VLAN-backed segments in this blog post. Visit the following blog if you are looking for overlay-backed segments.

Login to NSX and navigate to Networking> Segments,

Oops, I haven’t added a license yet. If you do not have a license key, please refer to my following blog to get the evaluation licenses.

Add the license key here,

System> Licenses,

Next, we move on to creating VLAN-backed segments in NSX. You can create VLAN-backed segments for all networks that exist on your ToR (top-of-rack) switches. For this demo, I will be using the Management-1631, vMotion-1632 and VSAN-1633 networks.

In my lab env, the following networks are pre-created on the ToR.

Login to NSX VIP> Networking> Segments> Add Segment

Name: VR-Prod-Mgmnt-1631
Transport Zone: VirtualRove-VLAN-TZ (this is the transport zone our ESXi host transport nodes are attached to)
VLAN: 1631

SAVE

Verify that the Segment status is Success.
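For reference, the same segment can be created or verified through the NSX Policy API. A minimal sketch with placeholder values for the VIP FQDN and the transport zone UUID; the PATCH is idempotent on the segment ID:

# Create/update the VLAN-backed segment via the Policy API
curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
  -d '{
        "display_name": "VR-Prod-Mgmnt-1631",
        "vlan_ids": ["1631"],
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-uuid>"
      }' \
  'https://<nsx-vip>/policy/api/v1/infra/segments/VR-Prod-Mgmnt-1631'

# Read it back to confirm
curl -k -u admin 'https://<nsx-vip>/policy/api/v1/infra/segments/VR-Prod-Mgmnt-1631'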

Once the segment is created in NSX, go back to vCenter and verify that you can see the newly created segment. You will see the letter “N” on all NSX-created segments.

Click on the newly created Segment.

Note that the Summary section shows more information about the segment.

We will now move a VM called “app-172.16.31.185” from VDS to NSX.

Source VDS portgroup is “vDS-Management-1631”
Destination NSX Segment is “VR-Prod-Mgmnt-1631”

Verify that it is connected to the VDS port group.

Login to the VM and start a ping to its gateway IP.
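On a Windows VM, a continuous ping keeps running across the migration; the gateway address below is a placeholder:

ping -t <gateway-ip>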

Login to vCenter> Networking view> right-click the source port group>

And select “Migrate VMs to Another Network”.

In the migration wizard, select the newly created NSX VLAN-backed segment as the destination network,

Select the VM that needs to be migrated into the NSX env,

Review and Finish,

Monitor the ping command to see if there are any drops.

All looks good. No ping drops, and I can still ping the VM IP from other machines in the network.

We have successfully migrated a VM into the NSX env.
Verify the network name in VM settings,

Click on the NSX segment in vCenter and verify that you see the VM,

You can also verify the same from NSX side,
Login to NSX> Inventory> Virtual Machines> Click on View Details for the VM that we just migrated,

You will see port information in the details section,

You will not see port information for the DB VM, since it has not been migrated yet.

The remaining VMs have been moved into the NSX env. The Ports column shows “1” for all segments.

We see all three NSX segments in the vCenter networking view,

A simple cross-subnet ping test, from App to DB,

Well, all looks good. Our workload has been successfully migrated into the NSX env.

So, what is the use case here…?
Why would a customer configure only VLAN-backed segments…?
Why No overlay…?
Why No T1, T0 and Edge…?

You will surely understand this in my next blog. Stay tuned. 😊
I hope that this blog series has valuable information.

Are you looking for a lab to practice VMware products…? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notification on my new blogs.

NSX 4.0 Series Part4-Prepare Host Transport Nodes

In the previous blogpost, we discussed Transport Zones & Uplink Profiles. Please find the links below for all posts in this series.

NSX 4.0 Series Part1-NSX Manager Installation
NSX 4.0 Series Part2-Add a Compute Manager & Configure the NSX VIP
NSX 4.0 Series Part3-Create Transport Zones & Uplink Profiles
NSX 4.0 Series Part4-Prepare Host Transport Nodes
NSX 4.0 Series Part5-Migrate workload from VDS To NSX

In this blog post, I will configure the host transport nodes for NSX. In this process, NSX VIBs (also referred to as kernel modules) are installed on the ESXi nodes via NSX Manager. You can see the installed VIBs on an ESXi host by running the following command.

Open a PuTTY session to one of the ESXi hosts and run this command,

esxcli software vib list

Filter for the NSX VIBs by running the following command,

esxcli software vib list | grep nsx

We don’t see any since we have not configured this host for NSX yet. Let’s revisit this after the NSX installation.

Note: Preparing an ESXi host for NSX does not require a host reboot.

Before we prep an ESXi host for NSX, check the name of the VDS,
vCenter> Click on ESXi host> Configure> Virtual Switches,

Note the VDS name. We will revisit here after NSX vibs installation.
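You can also read the VDS name directly from the ESXi shell; a quick check using a standard esxcli command:

# Lists the distributed switches this host participates in, with their names and uplinks
esxcli network vswitch dvs vmware list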

Login to NSX VIP & navigate to System >Nodes >Host Transport Nodes.
Change the “Managed by” drop-down to vCenter. Notice that the ‘NSX Configuration’ column shows ‘Not Configured’.

Select the first host & click on ‘Configure NSX’

Next,

Mode: Standard
Name: Select the appropriate VDS from vCenter
Transport Zone: Select VLAN TZ that we created earlier.
Uplink Profile: VR-UplinkProf-01

Scroll down to Teaming policy uplink mapping,

Select Uplink1 & Uplink2 respectively.

Here, you are mapping the vCenter VDS uplinks to the NSX uplink profile uplinks.

Click Finish to begin the installation.

Monitor the progress.

I got an error message here,

Failed to install software on host. Failed to install software on host. esxi127.virtualrove.local : java.rmi.RemoteException: [InstallationError] Failed to create ramdisk stagebootbank: Errors: No space left on device cause = (272, ‘Cannot reserve 272 MB of memory for ramdisk stagebootbank’) Please refer to the log file for more details.

Not sure why this came up; I have enough compute resources in that cluster. I clicked on “Resolve”,

And it was a success. 😊

Next, I see another error.

“The controller has lost connectivity.”

I clicked on “SYNC” here and it was all good.

The first ESXi node has been configured and is ready for NSX. Verify the NSX version and node status.
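On the host itself, the NSX CLI is now available as well. A quick sanity check, with the caveat that the exact command set varies by NSX version:

# Enter the NSX CLI installed on the prepared ESXi host, then check version and manager connectivity
nsxcli
get version
get managers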

Go back to vCenter> ESXi Host> Configure> Virtual Switches,

We now see “NSX Switch” added as a prefix to the VDS name.

Let’s re-run the command,

esxcli software vib list | grep nsx

We now see all the NSX VIBs installed on this host.
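If you just want a quick count rather than the full list, something like this works from the ESXi shell:

esxcli software vib list | grep -i nsx | wc -l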

Let’s move on to the next ESXi node and configure it in the same way.

All 3 ESXi hosts have been configured for NSX.

That’s all for this post.

I hope that the blog has valuable information. See you all in the next post.

Are you looking for a lab to practice VMware products…? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notification on my new blogs.

NSX 4.0 Series Part3-Create Transport Zones & Uplink Profiles

In the previous blog post, we discussed the Compute Manager and the NSX VIP. Please find the links below for all posts in this series.

NSX 4.0 Series Part1-NSX Manager Installation
NSX 4.0 Series Part2-Add a Compute Manager & Configure the NSX VIP
NSX 4.0 Series Part3-Create Transport Zones & Uplink Profiles
NSX 4.0 Series Part4-Prepare Host Transport Nodes
NSX 4.0 Series Part5-Migrate workload from VDS To NSX

This post will focus on Transport Zones & Uplink Profiles.

It is very important to understand transport zones and uplink profiles before configuring the NSX env.

Transport Zone:

Hypervisor hosts that are added to the NSX env, as well as Edge VMs, are called transport nodes, and these transport nodes need to be part of a transport zone to see particular networks. A transport zone is a collection of transport nodes that defines the maximum span of the logical switches (segments) attached to it. It represents a set of similarly provisioned hypervisors and the logical switches that connect VMs on those hypervisors; these nodes are registered with the NSX management plane and have the NSX modules installed. For a hypervisor host or NSX Edge to be part of the NSX overlay, it must be added to an NSX transport zone.

There are two types of transport zones: Overlay and VLAN. I have already written a blog on previous versions of NSX and explained transport zones here…

There is already a lot of information on the web regarding this topic. You can also find VMware official documentation here…

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/installation/GUID-F739DC79-4358-49F4-9C58-812475F33A66.html

In this blog post, we will focus only on VLAN-backed segments. No overlay, no Edge, no BGP/OSPF routing.

Visit my NSX-T 3.0 blogpost series below if you are looking to configure Overlay, Edge and BGP Routing.

Let’s get the VLAN-backed env in place. It’s simple and easy to understand. Here is a small design that explains what we are trying to accomplish here…

Time to configure VLAN Transport Zone,

Login to NSX VIP and navigate to System> Transport Zones> ADD Zone

Enter the name and select VLAN under traffic type,

Verify that the TZ is created,
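You can also confirm the transport zone over the API; a minimal sketch with a placeholder for the VIP FQDN:

# List all transport zones and look for the newly created VLAN TZ
curl -k -u admin 'https://<nsx-vip>/api/v1/transport-zones'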

Time to configure Uplink Profile,

An uplink profile defines how you want network traffic to leave the NSX env. It also helps keep the network adapter configuration consistent across transport nodes.

Navigate to System > Fabric> Profiles > Uplink Profile,

> Add Profile,

Enter the name and description. Leave the LAGs section for now; I will write another small blog explaining LAG configuration in the NSX env. Scroll down to Teamings,

Set the default policy to “Load Balance Source”.
Type “U1,U2” in the Active Uplinks field. The actual names do not matter here; you can type any comma-separated names.
The Transport VLAN value remains 0 in our case.

Teaming Policy Options:

Failover Order: An active uplink is specified along with an optional list of standby uplinks. If the active uplink fails, the next uplink in the standby list replaces it. No actual load balancing is performed with this option.

Load Balance Source: A list of active uplinks is specified. When you configure a transport node, you can pin each interface of the transport node to one active uplink. This configuration allows several active uplinks to be used at the same time.

A teaming policy defines how the C-VDS (Converged VDS) uses its uplinks for redundancy and traffic load balancing. Wait, what is a C-VDS now…?

N-VDS (NSX-Managed VDS): In earlier versions (prior to 3.0), NSX used to install an additional NSX-managed distributed switch: one VDS (or VSS) for vSphere traffic and one N-VDS for NSX-T traffic. So, technically speaking, you needed two additional pNICs for the additional N-VDS switch.

C-VDS (Converged VDS): NSX now uses the existing VDS for NSX traffic. However, the C-VDS option is only available when you use NSX-T 3.0 or later with vSphere 7 and VDS version 7.0. You do not need additional pNICs in this case.

We are done with the Uplink Profile configuration. More information on Uplink Profiles can be found here,

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/installation/GUID-50FDFDFB-F660-4269-9503-39AE2BBA95B4.html

Check to make sure that the Uplink Profile has been created.
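For reference, an equivalent uplink profile can also be created through the manager API. A sketch only, with placeholder credentials and FQDN; the field values mirror what we selected in the UI:

# Create an uplink profile with two active uplinks and the Load Balance Source teaming policy
curl -k -u admin -X POST -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "VR-UplinkProf-01",
        "teaming": {
          "policy": "LOADBALANCE_SRCID",
          "active_list": [
            {"uplink_name": "U1", "uplink_type": "PNIC"},
            {"uplink_name": "U2", "uplink_type": "PNIC"}
          ]
        },
        "transport_vlan": 0
      }' \
  'https://<nsx-vip>/api/v1/host-switch-profiles'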

That’s all for this post. We are all set to prepare the ESXi host transport nodes. I hope that the blog has valuable information. See you all in the next post.

Are you looking for a lab to practice VMware products…? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notification on my new blogs.

NSX 4.0 Series Part2-Add a Compute Manager & Configure the NSX VIP

In the previous blogpost, we discussed NSX Manager installation and its requirements. Please find the links below for all posts in this series.

NSX 4.0 Series Part1-NSX Manager Installation
NSX 4.0 Series Part2-Add a Compute Manager & Configure the NSX VIP
NSX 4.0 Series Part3-Create Transport Zones & Uplink Profiles
NSX 4.0 Series Part4-Prepare Host Transport Nodes
NSX 4.0 Series Part5-Migrate workload from VDS To NSX

What is a Compute Manager…?

In simple words, vCenter is called a Compute Manager in NSX terms.

NSX uses compute resources from the added compute managers to deploy management components. At the same time, it also fetches vSphere cluster information from vCenter / Compute Manager.

Let’s add a compute manager in our env.

Login to the first NSX Manager and navigate to System > Fabric > Compute Managers >

Click on “Add Compute Manager”.

Fill in the required information here and click on Add.

You will be prompted to add the vCenter thumbprint. Click Add and Save. Here is how it will look after adding the compute manager.

Make sure that the compute manager is showing as Registered and its status is Up.
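The registration can also be confirmed over the API; a minimal sketch with a placeholder for the manager FQDN (the VIP does not exist yet at this point):

# List registered compute managers; the output includes the registration and connection status
curl -k -u admin 'https://<nsx-manager>/api/v1/fabric/compute-managers'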

Important Note:

You cannot remove a compute manager once you have prepared ESXi host transport nodes and Edge nodes. You need to manually uninstall NSX from all ESXi host transport nodes and remove any Edge nodes, as well as other components deployed from NSX, before you try to remove a compute manager.

Next, we add two more NSX Managers to the env to form an NSX cluster.

Why 3 NSX Managers…?

It’s the generic “cluster” definition in the IT world: every cluster has a quorum, and an NSX cluster requires one too. This means two of its three members should be up at any given time for the NSX env to function properly.

So,
One NSX Manager is a single point of failure.
Two NSX Managers cannot fulfill the quorum requirement, since the generic definition always calls for an additional witness node in a two-node cluster configuration.
Three NSX Managers are the perfect number to form a cluster.

Hence, three NSX Managers. Place them on three different physical ESXi nodes and configure anti-affinity rules so that two managers never end up on a single ESXi node.
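If you script the vSphere side with govc, a DRS anti-affinity rule for the three managers can be created roughly like this; the cluster and VM names below are placeholders:

# Keep the three NSX Manager VMs on separate ESXi hosts
govc cluster.rule.create -name nsx-mgr-anti-affinity -enable -anti-affinity \
  -cluster <cluster-name> nsx-mgr-01 nsx-mgr-02 nsx-mgr-03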

Since it’s a lab env, we will not deploy the remaining two appliances. However, here is the link if you want to deploy two more appliances.

Next, we configure the VIP (Virtual IP) for NSX Cluster.

What is a VIP…?

An NSX cluster VIP is a virtual IP that gets assigned to the cluster. The VIP redirects all requests to the leader node of the cluster.

When we create an NSX cluster (usually three nodes), one of the nodes gets elected as the leader node. Any API or UI request coming in from clients is directed to the leader node. If the leader node goes down for any reason, the cluster mechanism automatically elects a new leader, and all requests sent to the VIP are forwarded to the new leader. You also have to make sure that all NSX Managers are deployed in the same subnet.

Login to the first NSX Manager and navigate to System> Appliances…

I only have a single NSX manager in my lab environment. Make sure that the health of the Cluster is “Stable”.

Click on “SET VIRTUAL IP”

Enter the IP address and hit save.

It may take several minutes to add the VIP, and the NSX GUI might not be accessible for a couple of minutes until all required services are up and running.

I see that the VIP has been configured and is assigned to the available NSX Manager in the cluster.

I have also created a DNS record for the VIP. Going forward, we will be using this FQDN to access the NSX UI.
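The VIP can also be read back from the API; a minimal sketch run against any manager node:

# Show the virtual IP currently configured on the management cluster
curl -k -u admin 'https://<nsx-manager>/api/v1/cluster/api-virtual-ip'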

Let’s login to NSX using VIP and make sure that the state is “Stable”.

All looks good. Let’s move to the next step to prepare Host Transport Nodes for NSX.

That’s all for this post.
I hope that the blog has valuable information. See you all in the next post.

Leave your email address in the box below to receive notification on my new blogs.

Are you looking for a lab to practice VMware products…? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

NSX 4.0 Series Part1-NSX Manager Installation

No more NSX-V & NSX-T…!!!

It’s only NSX from version 4 onwards.

This series of NSX 4.0 includes following parts.

NSX 4.0 Series Part1-NSX Manager Installation
NSX 4.0 Series Part2-Add a Compute Manager & Configure the NSX VIP
NSX 4.0 Series Part3-Create Transport Zones & Uplink Profiles
NSX 4.0 Series Part4-Prepare Host Transport Nodes
NSX 4.0 Series Part5-Migrate workload from VDS To NSX

It’s been a while since I wrote my last blog post, and finally I found something interesting to write about. Even though this blog series starts with NSX installation and configuration, I am sure you will find something more interesting as I write more. We will be focusing on NSX Security, DFW and Micro-Segmentation in upcoming blogs.

Let’s get started…

First things first: release notes. It’s always good to read the release notes for any newly launched product. Here is the official VMware link for NSX 4.0…

https://docs.vmware.com/en/VMware-NSX/4.0/rn/vmware-nsx-4001-release-notes/index.html

It’s a major release focusing on NSX networking, security and services. As mentioned in the release notes,

Some of the major enhancements are the following:

  •  IPv6 external-facing Management Plane introduces support for IPv6 communication from external systems with the NSX management cluster (Local Manager only).
  •  Block Malicious IPs in Distributed Firewall is a new capability that allows the ability to block traffic to and from Malicious IPs.

What interests me most is the following:

Distributed Firewall

  • Block Malicious IPs in Distributed Firewall is a new capability that allows the ability to block traffic to and from Malicious IPs. This is achieved by ingesting a feed of Malicious IPs provided by VMware Contexa. This feed is automatically updated multiple times a day so that the environment is protected with the latest malicious IPs. For existing environments, the feature will need to be turned on explicitly. For new environments, the feature will be default enabled.
  • NSX Distributed Firewall has now added support for these following versions for physical servers: RHEL 8.2, 8.4, Ubuntu 20.04, CentOS 8.2, 8.4.

In this series of blogs, I will be focusing on the NSX 4.0 env setup and later on NSX networking security, Distributed Firewall and Micro-Segmentation, and I will also touch on IDS/IPS.

With that, let’s get started with the NSX 4.0 Manager installation. Deploying the NSX Manager is a simple and straightforward process, just like in previous versions. Download the OVA file from the link below.

Make sure to download the first OVA file from the link.

If you do not have access to download the file, you will see the following message…

You either are not entitled or do not have permissions to download this product.
Check with your My VMware Super User, Procurement Contact or Administrator.

If you recently purchased this product through VMware Store or through a third-party, try downloading later.

Not to worry. Follow this link to get the trial version along with the trial license key…

Once you have downloaded the OVA file, log in to your vCenter and import it. Before we import the OVA, let me show you my existing env.

I have one vCenter with three vSAN-enabled ESXi hosts.

Each host has three disks, and all disks have been claimed for the vSAN cluster.

Here is the total compute capacity for this lab,

A VDS has been configured in the env, and it has two physical uplinks,

Each host has three VMkernel adapters:
Management – VLAN 1631
vMotion – VLAN 1632
VSAN – VLAN 1633
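You can confirm the same from the ESXi shell; a quick check:

# Lists the VMkernel interfaces on the host along with their IPv4 addresses
esxcli network ip interface ipv4 get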

I know that this is basic and anyone reading an NSX blog will definitely know it. However, believe me, I have seen people from a physical networking background working on NSX DFW and LB without a complete understanding of the base vSphere setup. I have a detailed blog on how to configure the base vSphere setup here,

Anyway, let’s get going with the NSX Manager installation.

Let’s have a look at the NSX Manager compute capacity requirements. The thin virtual disk size is 3.8 GB and the thick virtual disk size is 300 GB.

Import the NSX ova,

Map the NSX ova,

Fill out / select the appropriate options in the wizard and hit continue. Make sure to create a DNS record for the first NSX Manager.

Fill out the required parameters in the wizard. And we should be good to click on finish.

If any of the required parameters are not as expected, the OVA will get deployed with the hostname set to hostname.local. In this case, you must redeploy the OVA.
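If you prefer the command line, ovftool can deploy the same OVA. This is a rough sketch only: the OVA file name, property names and target path below are assumptions, so probe the OVA first (running ovftool against the OVA with no target prints the supported properties, networks and deployment sizes) and adjust accordingly:

# Probe the OVA to list its properties and deployment options
ovftool nsx-unified-appliance.ova

# Deploy with explicit properties (all values below are illustrative placeholders)
ovftool --acceptAllEulas --allowExtraConfig --powerOn \
  --name=nsx-mgr-01 --deploymentOption=medium \
  --datastore=<datastore> --network=<mgmt-portgroup> \
  --prop:nsx_hostname=<nsx-manager-fqdn> \
  --prop:nsx_ip_0=<ip> --prop:nsx_netmask_0=<netmask> --prop:nsx_gateway_0=<gateway> \
  --prop:nsx_dns1_0=<dns> --prop:nsx_ntp_0=<ntp> \
  --prop:nsx_passwd_0=<admin-password> --prop:nsx_cli_passwd_0=<cli-password> \
  nsx-unified-appliance.ova 'vi://administrator@vsphere.local@<vcenter>/<datacenter>/host/<cluster>'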

Power on the NSX VM after the deployment. Wait about 10 minutes for all services to start, and then log in.

Once logged in, click on the question mark in the top-right corner > About to check the version.

Review the dashboard,

Click on System> Appliances and make sure the cluster status is Stable.
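The same health check is available from the NSX Manager appliance CLI; SSH to the manager as admin and run:

# Shows the overall management cluster status and the status of its services
get cluster status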

Since it’s a lab env, we will not deploy the remaining appliances. However, here is the link if you want to deploy two more appliances,

Before you deploy two more appliances, you need to add a Compute Manager. We will see the steps to add a Compute Manager in my next blog.

That’s all for this post.
I hope that the blog has valuable information. See you all in the next post.

Are you looking for a lab to practice VMware products…? If yes, then click here to know more about our Lab-as-a-Service (LaaS).

Leave your email address in the box below to receive notification on my new blogs.