Lab Overview - HOL-2126-01-NET - VMware NSX-T - Getting Started
Note: It will take more than 90 minutes to complete this lab. The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.
The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.
Lab Module List:
Lab Captains:
This lab manual can be downloaded from the Hands-on Labs Document site found here:
This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
Disclaimer: For over a decade, we have collaborated with Intel® to deliver innovative solutions that enable IT to continually transform their data centers. We have incorporated Intel® product and technology information within this lab to help users understand the benefits of how both hardware and software technology matter when trying to deploy in VMware’s ecosystem. We believe that this collaboration will have tremendous benefits for our customers.
Disclaimer: Due to the nature of the Hands-on Labs environment, you may see vSphere or NSX related alarms. These alarms are due to the limited resources in the lab environment and can be safely ignored.
During this module, you will input text into the Main Console. Besides typing it in directly, there are two very helpful methods of entering data which make it easier to enter complex text.
You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.
You can also use the Online International Keyboard found in the Main Console.
When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs take advantage of this benefit, allowing us to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Without full access to the Internet, this automated process fails and you see this watermark.
This cosmetic issue has no effect on your lab.
Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
Module 1 - Introduction to VMware NSX (15 minutes)
In this module, you will be introduced to NSX Data Center, its capabilities, and the components that make up the solution.
NOTE: Only focus on given VMs in this topology. Lab has additional VMs for other modules.
This diagram shows the topology that will be used for this lab. The lab includes four (4) ESXi hosts grouped as follows:
In addition, this lab includes a standalone KVM host to demonstrate the multi-hypervisor capabilities provided by NSX:
Finally, this lab includes four (4) VM form factor NSX Edge Nodes, configured as follows:
This diagram shows the logical network topology that will be used in this lab. The lab includes a single Tier-0 Gateway that has four (4) total connected interfaces, configured as follows:
VMware NSX Data Center provides an agile software-defined infrastructure to build cloud-native application environments. NSX Data Center is focused on providing networking, security, automation, and operational simplicity for emerging application frameworks and architectures that have heterogeneous endpoint environments and technology stacks. NSX Data Center supports cloud-native applications, bare metal workloads, multi-hypervisor environments, public clouds, and multi-cloud environments.
In much the same way that server virtualization programmatically creates, snapshots, deletes and restores software-based virtual machines (VMs), NSX network virtualization programmatically creates, snapshots, deletes, and restores software-based virtual networks.
The perfect complement to NSX are Intel® Xeon® Scalable processors. The 2nd generation Intel® Xeon® Scalable processor incorporates advanced compute cores, a new memory hierarchy, connectivity, and acceleration designed to provide high performance and infrastructure efficiency across a wide range of network-intensive workloads. This processor platform delivers up to 1.58X performance improvement over the previous generation of Intel® Xeon® Scalable processors for network workloads, and supports up to five times more virtual network function (VNF) capacity when complemented with Intel® Quick Assist Technology and the Intel® Ethernet 800 Series Ethernet controllers.
NSX works by implementing three separate but integrated planes: management, control, and data. The three planes are implemented as a set of processes, modules, and agents residing on three types of nodes: manager, controller, and transport.
Data Plane
The data plane performs stateless forwarding/transformation of packets based on tables populated by the control plane, reports topology information to the control plane and maintains packet-level statistics.
This differs from the state managed by the control plane, such as MAC-to-IP tunnel mappings: the state managed by the control plane is about how to forward packets, whereas the state managed by the data plane is limited to how to manipulate the payload.
The Data Plane Development Kit (DPDK) is a set of data plane libraries and network interface controller drivers for fast packet processing. Starting with NSX-T Data Center 2.2, DPDK uses a set of optimizations with the Intel® Xeon® Scalable processor family to help improve the packet processing speed.
Compared to the standard way of packet processing, DPDK helps decrease the processor cost and increase the number of packets processed per second. In NSX-T, it is used in two ways: first, in a dedicated general-purpose network appliance called the Bare Metal Edge Node, and second, at the host level in Enhanced Datapath mode, which optimizes host throughput for specific use cases.
Control Plane
Computes all ephemeral runtime state based on the configuration from the management plane, disseminates topology information reported by the data plane elements, and pushes the stateless configuration to forwarding engines.
The set of objects that the control plane deals with includes VIFs, logical networks, logical ports, logical routers, IP addresses, and so on. The control plane is split into two parts in NSX:
Management Plane
The management plane provides a single entry point using either a REST API or the NSX user interface to the system, maintains user configuration, and performs operational tasks on all management, control, and data plane nodes in the system. The management plane is the sole source-of-truth for the configured (logical) system, as managed by the user via configuration.
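As a quick illustration of this REST entry point, a read-only call like the one below returns the overall state of the NSX management cluster. This is a sketch only: the manager FQDN and the admin credentials shown are assumptions based on this lab's naming, and the command assumes curl is available on the machine you run it from.

# Query the NSX management cluster status over the REST API (read-only)
curl -k -u admin:'VMware1!' https://nsxmgr-01a.corp.local/api/v1/cluster/status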
NSX Manager
NSX Manager provides the graphical user interface (GUI) and the REST APIs for creating, configuring, and monitoring NSX components, such as controllers, segments, and edge nodes.
NSX Manager is the management plane for the NSX ecosystem. NSX Manager provides an aggregated system view and is the centralized network management component of NSX. It provides a method for monitoring and troubleshooting workloads attached to virtual networks created by NSX. The NSX Controller function has been integrated into the NSX Manager appliance in a fully active, clustered configuration. It provides configuration and orchestration of:
NSX Controller
NSX Controller is an advanced distributed state management system that controls virtual networks and overlay transport tunnels. The NSX Controller function operates as a separate process within the NSX Manager cluster.
Traffic doesn't pass through the controller; instead, the controller is responsible for providing configuration to other NSX components, such as logical switches, logical routers, and edge configuration. Stability and reliability of data transport are central concerns in networking.
Open a browser by double clicking the Google Chrome icon on the desktop.
If you see this pop-up, click Cancel
Unlike NSX-V, which uses a vSphere Web Client plugin for administering NSX, NSX-T leverages a separate web interface for administration.
Log in to the NSX-T web interface using the following steps:
1. In the User name field type nsx-admin@corp.local
2. In the Password field type VMware1!
3. Click LOG IN (Note: You may need to click LOG IN twice)
Transport Zones
A transport zone controls which hosts a logical switch can reach. It can span one or more host clusters; the hosts within those clusters are known as transport nodes.
If two transport nodes are in the same transport zone, VMs hosted on those transport nodes can be attached to the NSX logical switch segments that are also in that transport zone. If VMs are attached to switches that are in different transport zones, the VMs cannot communicate with each other.
A Transport Zone defines a collection of hosts that can communicate with each other across a physical network infrastructure. VM communication between different hosts within the same TZ happens over one or more interfaces defined as a Tunnel End Point (TEP). VM communication to a physical network happens using logical routers and not TEP.
Host Transport Node
A node (ESXi, KVM, bare metal, etc.) can serve as a transport node if it contains at least one host switch (N-VDS). When creating a host transport node and adding it to a transport zone, NSX installs a host switch on the host. The host switch is used for attaching VMs to NSX logical switch segments and for creating NSX logical router uplinks and downlinks. It is possible to configure multiple transport zones using the same host switch.
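If you would like to see this from the host side, the checks below are a minimal sketch for an ESXi transport node: the vmkernel interface name (vmk10) and the vxlan netstack name are assumptions that can differ between deployments, and <remote-tep-ip> is a placeholder for another transport node's TEP address.

# List vmkernel interfaces and their IPv4 addresses; the NSX TEP appears as an additional vmk
esxcli network ip interface ipv4 get

# Test overlay reachability to another TEP over the NSX netstack (interface/netstack names assumed)
vmkping ++netstack=vxlan -I vmk10 <remote-tep-ip>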
Verify that the operational status of both the controller and the manager is UP.
Verify that the following two ESXi hosts are listed as Host Transport Nodes.
Edge Transport Node
An NSX Edge Transport Node can be a physical or virtual form factor. NSX Edge Node provides routing services and connectivity to networks that are external to the NSX deployment.
When virtual machine workloads residing on different NSX segments communicate with one another through a T1, the distributed router (DR) function is used to route the traffic in a distributed, optimized fashion.
However, when virtual machine workloads need to communicate with devices outside of the NSX environment, the service router (SR), which is hosted on an NSX Edge Node, is used.
Verify that the following four edge nodes are listed as Edge Transport Nodes.
NSX Gateways
Note that a two-tiered routing topology is not mandatory. A single Tier-0 topology can be implemented. In that case, Layer 2 segments are connected directly to the T0 layer, and a Tier-1 router is not configured.
An NSX gateway (T0 or T1) is composed of up to two components. Each gateway can have one distributed router (DR) and, optionally, one or more service routers (SR).
The Management Plane (MP) allocates a VNI and creates a transit segment, then configures a port on both the SR and DR, connecting them to the transit segment. The MP then automatically allocates unique IP addresses for both the SR and DR.
Verify the following Tier-0 Gateway exists with a Green Success Status
One of the new features in NSX 3.0 is LDAP integration: you can now configure LDAP users for NSX unified appliance management.
Congratulations on completing Module 1! Next, we will enable routing between the different logical switches.
Please proceed to any module below which interests you the most.
To end your lab click on the END button.
Module 2 - Host and Edge Transport Node Preparation (30 minutes)
The goal of this lab is to explore host and edge transport node preparation in NSX
In this section we will review the NSX Manager and explore the various components that comprise the fabric. We will then validate that the host and edge transport nodes are connected to the NSX fabric.
Unlike NSX-V, which uses a vSphere Web Client plugin for administering NSX, NSX-T leverages a separate web interface for administration.
*We recommend completing Module 1, Introduction to NSX Data Center, to become familiar with navigating the NSX UI.
Log in to the NSX web interface using the following steps:
We will now review the state of the NSX fabric.
The Nodes section, under Fabric, is where Host and Edge Transport Nodes are defined in the NSX fabric.
Host Transport Nodes are any hypervisor hosts that will participate in NSX overlay functions. NSX includes support for both vSphere/ESXi and KVM hypervisors. This lab is preconfigured with a single, standalone KVM host that is participating in the NSX fabric, as well as a single vCenter server.
Hypervisors can be added individually by selecting None: Standalone Hosts from the Managed by list, while vCenter servers and their associated inventory can be added by selecting the vCenter option.
When viewing Host Transport Nodes that are Managed by vCenter, the interface displays a list of all clusters in vCenter inventory. In this lab, there are two vCenter clusters: RegionA01-MGMT and RegionA01-COMP01.
Expanding the list of hosts in both clusters reveals that there are two hosts in each cluster.
Observe that the hosts in the RegionA01-COMP01 cluster show an NSX Configuration status of Success, while the hosts in RegionA01-MGMT are Not Configured. This means that hosts in the RegionA01-COMP01 cluster can participate in NSX overlay networking and security, while hosts in RegionA01-MGMT cannot.
In NSX the Edge Transport Node contains its own TEP within the Edge, and it does not require the hypervisor to perform encapsulation and decapsulation functions on its behalf. When an encapsulated packet is destined for an Edge, it is delivered in its encapsulated form directly to the Edge Node via its TEP address. This allows for greater portability of the Edge Node since it no longer has dependencies on the underlying kernel services of the host.
Do you know why the hosts in the RegionA01-MGMT cluster are not configured with NSX kernel modules?
The only VM workloads in the RegionA01-MGMT cluster are Edge Transport Nodes. Since these Edge Nodes natively perform their own encapsulation/decapsulation and each has its own TEP, there is no need to configure the hosts in the RegionA01-MGMT cluster with NSX kernel modules.
Uplink Profiles are assigned to Transport Nodes (Host and/or Edge) in the NSX environment and define the configuration of the physical NICs that will be used.
Enhanced Network Stack (ENS - also appears as Enhanced Data Path) is a networking stack mode which provides superior network performance when configured and enabled for optimization of Intel® NICs. It is primarily utilized in NFV workloads, which require the performance benefits this mode provides.
ENS utilizes the DPDK Poll Mode driver model to significantly improve packet rate and latency for small message sizes.
Observe the following configuration of the nsx-default-uplink-hostswitch-profile Uplink Profile:
This profile states that two uplinks will be configured in a failover configuration. Traffic will normally utilize uplink-1, and will traverse uplink-2 in the event of a failure of uplink-1.
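For reference, the same failover teaming can be expressed through the NSX API. The sketch below assumes the management-plane host-switch-profiles endpoint; the manager FQDN, admin credentials, and profile name shown are assumptions, and this is not a step in this lab.

# Sketch: create an uplink profile with failover-order teaming (uplink-1 active, uplink-2 standby)
curl -k -u admin:'VMware1!' -X POST https://nsxmgr-01a.corp.local/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "example-failover-uplink-profile",
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list":  [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ],
          "standby_list": [ { "uplink_name": "uplink-2", "uplink_type": "PNIC" } ]
        }
      }'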
Input the following details in the new Uplink Profile
1. Hover over the Active Uplinks field under the Teaming configuration and click on the pencil icon to edit the Teaming Policy Uplinks with the following parameters:
1. Hover over the Standby Uplinks field under the Teaming configuration and click on the pencil icon to edit the Teaming Policy Uplinks with the following parameters:
2. Click Add to complete the uplink profile creation
A Transport Zone defines the scope of where an NSX segment can exist within the fabric. For example, a dedicated DMZ cluster may contain a DMZ transport zone. Any segments created in this DMZ transport zone could then only be used by VM workloads in the DMZ cluster.
There are two types of Transport Zones in NSX, Overlay and VLAN:
Second generation Intel® Xeon® Scalable processors (Intel® C620 Series Chipsets) enhance NSX and reduce overhead for near-native I/O performance with SR-IOV. 10/40Gb Intel® Ethernet Network Adapters enable logical networks that allow VMs to communicate across subnets while reducing configuration and management requirements and increasing network responsiveness and flexibility.
This information is useful for seeing where a given Transport Zone is being used.
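The same inventory can also be pulled over the REST API. The read-only call below is a sketch (the manager FQDN and credentials are assumptions, as before) that lists every Transport Zone along with its type and host switch name.

# List all Transport Zones (Overlay and VLAN) defined in the NSX fabric
curl -k -u admin:'VMware1!' https://nsxmgr-01a.corp.local/api/v1/transport-zones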
Observe the following configuration of the TZ-Overlay Transport Zone:
1. Click Add to create a new Transport Zone
Now it's time to review how uplink profiles and transport zones are combined to configure Host Transport Nodes in NSX. There are two ways that this can be done:
Now, let's explore Host Transport Node configuration.
Observe the following details in the Edit Transport Node Profile dialog:
This profile states that Transport Zones, TZ-Overlay-1 & TZ-VLAN-1 will be associated with hosts in this profile. Their connectivity to the physical network will use the ESXI-Region01a-COMP01-loadbalanced-active-active-profile. Finally, when a TEP is provisioned on each host, it will assign an IP address from the region-a-tep-pool range of IP addresses.
We will just input the values to learn how to create a Transport Node Profile, but we will not save and create it
Observe that the RegionA01-COMP01 cluster is configured to use the ESXi-transport-node-profile Transport Node Profile. All hosts in this cluster will inherit the configuration that was defined in the profile.
A single, standalone KVM host has been provisioned as part of this lab and has been configured to participate in the NSX fabric.
Observe the following details in the Host Details tab of the Edit Transport Node dialog:
Observe the following details in the Configure NSX tab of the Edit Transport Node dialog:
This profile states that a single Transport Zone, TZ-Overlay-1, will be associated with the KVM Transport Node host. Its connectivity to the physical network will use the KVM-Region01a-single-nic-profile. Finally, when a TEP is provisioned on this host, it will assign an IP address from the region-a-tep-pool range of IP addresses.
We will now login to host kvm-01a and verify that the KVM hypervisor is running the web-03a.corp.local virtual machine. This workload has already been added to the NSX inventory, and will be used later in this lab.
virsh list
ifconfig nsx-vtep0.0
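For reference, the checks above can be run together from a shell on kvm-01a; the lines beginning with # are explanatory comments. The TEP interface name nsx-vtep0.0 matches this lab, and the libvirt domain name web-03a is an assumption that may differ slightly on your pod.

# List the virtual machines running under libvirt on this KVM host (expect web-03a)
virsh list

# (Optional) Show the virtual network interfaces attached to the web-03a domain
virsh domiflist web-03a

# Show the NSX tunnel endpoint (TEP) interface and its assigned IP address
ifconfig nsx-vtep0.0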
Similar to the way a Host Transport Node is configured, an Uplink Profile and one or more Transport Zones are used to define an Edge Transport Node in NSX. Edge Transport Nodes perform an important function in the NSX fabric. They host the Service Routers (SRs) that are used by Tier-0 and Tier-1 Gateways to perform stateful services such as NAT or load balancing. Most importantly, they host the Tier-0 Service Router that provides route peering between NSX overlay networking and the physical routed environment.
In this lab, there are four total Edge Transport Nodes, configured in two fault-tolerant clusters of two nodes each.
Observe the following details in the General tab of the Edit Edge Transport Node Profile dialog:
Observe the following details in the N-VDS-1 tab of the Edit Edge Transport Node dialog:
This profile states that this Edge Node will host two Transport Zones, TZ-Overlay-1 & TZ-Uplink. One transport zone will be used for route peering with the physical network (TZ-Uplink), while the other transport zone will be used for overlay network services (TZ-Overlay-1). Their connectivity to the physical network will use the nsx-edge-single-nic-uplink-profile.
Finally, when a TEP is provisioned on the TZ-Overlay-1 transport zone, it will assign an IP address from the region-a-tep-pool range of IP addresses. No TEP will be provisioned on the VLAN transport zone, so the option is disabled.
2. Click CANCEL to return to the list of Edge Transport Nodes
As we reviewed, there are four Edge Transport Nodes defined in the NSX fabric. For fault tolerance, they should be grouped into two Edge Clusters of two nodes each. In the next task, you will be creating the second edge cluster.
Observe the following details in the Edit Edge Cluster dialog:
Congratulations on completing Module 2! Next, we will enable routing between the different logical switches.
Please proceed to any module below which interests you the most.
To end your lab click on the END button.
Module 3 - Logical Switching (15 minutes)
The goal of this lab is to explore Logical Switching in NSX.
Now that you have reviewed the NSX components and verified that everything is operating correctly, we will create a logical network segment and connect an existing workload to it.
A Segment is a Layer 2 overlay network that provides an isolated broadcast domain.
From the NSX Manager web interface:
Note that there are a number of preexisting Segments, including LS-web; LS-app; and LS-db. These are used in other lab modules to host a sample three-tiered application.
Enter the following details in the Add Segment dialog:
Observe that our new Segment, LS-new, is now visible in the list of Segments. During the Segment's creation, we connected it to the T0 router Tier-0-gateway-01 and assigned IP address 172.16.60.1/24 to this T0 interface.
We will now connect to vCenter to attach a sample VM workload to our new Segment.
Once logged in, navigate to Hosts and Clusters:
Confirm that LS-new is now selected.
Note the IP address for server web-04a. This is the same IP subnet we used when creating our new Segment earlier in this module.
Note: If you previously closed NSX Manager or it has timed out, click the nsxmgr-01a shortcut under RegionA folder in the toolbar and enter the following to login:
We will now view the Segment Ports that are configured for our example VM, allowing it to use network overlay services. In addition to creating the LS-new Segment and configuring web-04a to utilize it, a sample three-tiered app has been configured and included with this lab. Segment LS-web has been preconfigured as its web tier, and servers web-01a; web-02a; and web-03a have been connected to it. We will test network connectivity to VMs on these Segments later in this module.
In the list of displayed ports, verify that web-04a is configured to use Segment LS-new and that its status is Up.
Now that we have created LS-new and configured server web-04a to utilize it, we will test connectivity.
In the Command Prompt window, perform a ping test of web-04a by entering the following:
ping 172.16.60.14
The ping test should return successful replies from web-04a. You can also test pinging the web servers listed below on Segment LS-web.
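For example, from the same Command Prompt you can ping the LS-web servers by name (assuming the lab DNS resolves these names, as it does for web-01a later in this lab):

ping web-01a.corp.local
ping web-02a.corp.local
ping web-03a.corp.local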
A segment profile is a configuration template that can be applied to a segment port or a segment as a whole. When it’s applied to a segment, it is applied to all the segment ports of the segment but can still be overridden by a segment port specific configuration.
Several default profiles (read-only) for different use-cases are preconfigured.
The following default Segment Profiles are available:
We will now take a look at the default-segment-security-profile
Congratulations on completing Module 3! Next, we will enable routing between the different logical switches.
Please proceed to any module below which interests you the most.
To end your lab click on the END button.
Module 4 - Logical Routing (30 minutes)
The goal of this lab is to demonstrate the Logical Routing capabilities of NSX
The lab environment we are building currently includes a Tier-0 Gateway which connects to outside networks. In this module, we will build a Tier-1 Gateway that will handle routing between a sample three-tiered application's network segments, and move those segments from the existing Tier-0 Gateway to the newly created Tier-1 Gateway.
In this lesson we will explore Logical Routing of East-West traffic in NSX.
If you do not already have an active session, login to the NSX web interface using the following steps:
NSX includes two tiers of routing: Tier-0 (T0) and Tier-1 (T1). An NSX deployment will typically consist of at least one T0 Gateway that includes Uplink connections to the physical network. This lab has been preconfigured with a three-tiered application spanning three NSX Segments: LS-web, LS-app and LS-db. These Segments have been connected to an existing T0 Gateway.
A Tier-0 Gateway can provide routing to multiple Tier-1 Gateways, allowing multiple isolated tenant environments to exist behind a single Tier-0 Gateway. In this module, we will examine the existing T0 Gateway, then create a new T1 Gateway and migrate the existing three-tiered app Segments over to it.
From the NSX Manager web interface:
The list of Linked Segments shows all Segments connected to this T0 Gateway. Observe that LS-web, LS-app and LS-db are currently connected to the existing Tier-0-gateway-01 router. The displayed segments may be different, depending on how many previous modules have been completed.
In this step we will create a new Tier-1 Gateway. We will migrate the existing three-tier app Segments to this Gateway, enabling East-West routing between the app tiers to occur within this new T1 Gateway.
Now that the Tier-1 Gateway has been created, we need NSX to advertise the Segments behind it to the physical network. When LS-web, LS-app and LS-db are directly connected to Tier-0-gateway-01, they are advertised via BGP. Once we move these Segments to our new Tier-1 Gateway, they will no longer be directly connected to the existing T0 and will therefore no longer be advertised. We will configure the new T1 Gateway to advertise its Connected Segments to its T0 gateway, allowing the networks to be advertised via BGP and making them reachable from outside of the NSX environment.
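For reference, the setting the UI exposes here corresponds to the Tier-1 Gateway's route advertisement types in the policy API. The call below is a sketch only: the gateway ID, manager FQDN, and credentials are assumptions, and in this lab the change is made through the UI.

# Sketch: advertise all connected segments from the Tier-1 Gateway via the policy API
curl -k -u admin:'VMware1!' -X PATCH \
  https://nsxmgr-01a.corp.local/policy/api/v1/infra/tier-1s/Tier-1-gateway-01 \
  -H 'Content-Type: application/json' \
  -d '{ "route_advertisement_types": [ "TIER1_CONNECTED" ] }'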
Verify that the new T1 Gateway Tier-1-gateway-01 has been created. Confirm that it is linked to T0 Gateway Tier-0-gateway-01, has 0 Linked Segments, and has a Status of Success.
Now that we have migrated our Segments from the existing Tier-0 Gateway to our new Tier-1 Gateway, we will test connectivity.
From the Command Prompt, verify you are able to ping the gateway IP addresses of the Segments we have connected to T1 Gateway Tier-1-gateway-01.
ping -n 2 172.16.10.1
ping -n 2 172.16.20.1
ping -n 2 172.16.30.1
We will now login to VM web-01a and verify that we can reach the other VMs that comprise our sample three-tiered application.
From the SSH session, verify you are able to ping the IP addresses of the app and db tier VMs that are now routing through Tier-1-gateway-01.
ping -c 2 app-01a.corp.local
ping -c 2 db-01a.corp.local
In this lesson we will explore Logical Routing of North-South traffic in NSX.
In NSX, the Edge Node provides computational power and North-South routing services to the NSX fabric. Edge Nodes are appliances with pools of capacity that can host routing uplinks as well as non-distributed, stateful services. Edge Nodes can be clustered for scalability and high availability, running in both active-active and active-standby configurations.
We will review the Edge Node and Edge Cluster configurations. We will then review the North-South connectivity provided to the existing Tier-0 Gateway by the Edge Cluster.
Basic configuration information can be viewed from the list of Edge Nodes, including its TEP address and Edge Cluster configuration. Observe the following:
Note: You can view any truncated field, such as Edge; Edge Cluster; or TEP IP Addresses by hovering the mouse pointer over the field. A tooltip will appear with the full value.
When used for North-South routing, an NSX Edge Transport Node will be configured with two Transport Zones:
We will now review the Transport Zone configuration of the Edge Transport Node.
Observe that nsx-edge-01 is configured with two Transport Zones: TZ-Uplink and TZ-Overlay-1. These two Transport Zones are used for the Northbound (VLAN) and Southbound (Overlay) interfaces of the Tier-0 gateway, respectively.
We will now review the configuration of the existing Tier-0 (T0) Gateway Tier-0-gateway-01. This T0 Gateway router is configured to use the Uplink connections provided by Edge Cluster edge-cluster-01, which is comprised of Edge Nodes nsx-edge-01 and nsx-edge-02.
Review the Uplink-1 interface configured on Tier-0-gateway-01. This is the North-South Uplink interface the T0 uses to peer with the external routed environment. Observe the following settings:
From this screen we can determine that the T0 Gateway has a single Uplink interface that uses IP address 192.168.100.3. This Uplink is hosted on the Edge Node nsx-edge-01 and is currently Up.
Border Gateway Protocol (BGP) is a communication protocol used by routers to exchange route information. When two or more routers are configured in this way and communicate with one another, they are called neighbors. We will now review Tier-0-gateway-01's BGP configuration.
Tier-0-gateway-01 is configured with one BGP neighbor. Review the settings for the 192.168.100.1 neighbor:
In this instance, Tier-0-gateway-01 is peering with a router at IP address 192.168.100.1 using BGP AS number 65002. Its status is currently Up, indicated as Success.
Introduced with NSX-T 3.0 is a new Network Topology visualization view. This view displays a graphical representation of the NSX environment, including Tier-0 and Tier-1 Gateways, Segments, and their connectivity to one another. As we observed in the previous steps:
NOTE: If previous modules in this lab have not been completed, the Network Topology view may differ from the one above.
tracert -d 172.16.10.11
Observe that:
We will now test connectivity to our sample three-tiered application.
Verify that access to our three-tiered web app is working.
In this lesson we will configure High Availability (HA) for Tier-0 and Tier-1 Gateways in NSX.
Recall from earlier in this module that Edge Transport Nodes provide computational power and North-South routing services to the NSX fabric. Tier-0 and Tier-1 Gateways can provision stateful services, such as NAT or Load Balancing, that are hosted on an Edge Transport Node.
In the event of a power or hardware failure, the loss of an Edge Node could occur. In this instance, any services hosted on that Edge Node would be lost as well. For this reason, Edge Transport Nodes are grouped into an Edge Cluster in NSX. Edge Clusters provide fault tolerance and resilience that can withstand individual failures within the cluster.
Observe that Edge Cluster edge-cluster-01 consists of two individual Edge Transport Nodes: nsx-edge-01 and nsx-edge-02.
We will now modify the existing Tier-0 Gateway to leverage NSX's Edge Clustering capabilities.
Observe that our existing Tier-0 Gateway is configured for an HA Mode of Active Active. This allows the use of multiple Edge Nodes in the Edge Cluster simultaneously. Also note that the Tier-0 Gateway is configured to use Edge Cluster edge-cluster-01.
Observe that the existing Uplink-1 interface is running on Edge Node nsx-edge-01 and is configured for IP address 192.168.100.3. If a failure were to occur on this Edge Node, North-South connectivity to the NSX environment would be lost. We will now add a second Uplink interface to the Tier-0 Gateway that leverages nsx-edge-02, the second Edge Node in Edge Cluster edge-cluster-01.
Confirm that our Tier-0 Gateway now has two interfaces: Uplink-1 and Uplink-2. Interface Uplink-2 exists on Edge Node nsx-edge-02 with IP address 192.168.100.4/24.
We will now configure BGP to advertise from the second interface that we defined on nsx-edge-02. This will allow BGP on the Tier-0 Gateway to establish peering from the interfaces on both nsx-edge-01 and nsx-edge-02. During normal operation, both Edges will be considered viable paths into and out of the NSX environment. In the event that an Edge Transport Node fails, its BGP neighbor state will be lost and its path information will be removed from the BGP routing table. Traffic will continue to flow through the remaining Edge Transport Node. Upon recovery of the lost Edge Transport Node, its BGP state will be reestablished and its path information will be added back to the BGP routing table automatically.
In addition to adding the secondary interface's IP address as a BGP Source, we also modified the BGP Hold Down and Keep Alive timers to 15 and 5 seconds, respectively. We lowered these values in order to speed up BGP reconvergence. This will be useful when we test Edge Node failure later in this module.
Log in to nsx-edge-02 with the following credentials:
Once you are authenticated to the Edge Node, maximize the PuTTY window for better visibility.
get logical-routers
Note the VRF number of the Logical Router SR-Tier-0-gateway-01.
NOTE: The VRF number of SR-Tier-0-gateway-01 may differ from the screenshot.
vrf 2
get bgp neighbor summary
Verify the neighbor relationship with 192.168.100.1 is showing a state of Estab (Established).
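Putting the sequence together, the NSX CLI session on the edge looks like the following; the text after each # is an annotation and is not typed as part of the command, and the VRF number 2 is this lab's example and may differ as noted above.

get logical-routers           # list the logical router instances and note the VRF of SR-Tier-0-gateway-01
vrf 2                         # enter that VRF context (use the VRF number from the previous output)
get bgp neighbor summary      # confirm the 192.168.100.1 neighbor shows a state of Estab
exit                          # leave the VRF context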
As you can see, we now have two edge nodes that have established connections with our external router, providing redundant North-South routing to the NSX environment.
In this lesson we will test Equal Cost Multi-Pathing (ECMP) by simulating an Edge Node failure.
We will now start a ping session to one of the sample web servers located on NSX Segment LS-web. Once this has been done, we will shut down nsx-edge-01, simulating a failure of an Edge Node. We should then observe BGP detecting the loss of connectivity to the Edge Node, routing all traffic through nsx-edge-02.
ping web-01a.corp.local
You should observe ping replies from the web server.
By configuring the second interface on nsx-edge-02, we now have two fault tolerant paths into the NSX environment. We will perform a traceroute before simulating an Edge Node failure, observing that traffic will fail over to the secondary interface of the Tier-0 Gateway.
tracert -d web-01a.corp.local
The first hop is the IP address of the vPod router (the gateway of your admin desktop). Observe the second hop, 192.168.100.3, which is the IP address of the Tier-0 Gateway interface on nsx-edge-01. Traffic is then routed to the Tier-1 gateway via the unadvertised network in hop three, and finally delivered to web-01a at 172.16.10.11 in the fourth and final hop.
NOTE: Because both paths are equally valid, your traceroute may traverse the Tier-0 interface on nsx-edge-02 instead of nsx-edge-01. If this is the case, your second route hop will display 192.168.100.4 instead of 192.168.100.3. If this occurs, please substitute nsx-edge-02 in the following steps to test fault tolerance.
We will now connect to vCenter and simulate a failure by powering down the Edge Node your trace route utilized. The loss of this Edge Node will cause all traffic to route through the remaining Edge Node.
Once logged in, navigate to Hosts and Clusters:
NOTE: Because both paths are equally valid, the output of the traceroute you performed earlier may traverse the Tier-0 interface on either nsx-edge-01 or nsx-edge-02. If your second route hop in the earlier traceroute displays 192.168.100.3, please power off nsx-edge-01. Likewise, if the second route hop in the earlier traceroute displays 192.168.100.4, please power off nsx-edge-02 to test fault tolerance.
Return to the Command Prompt window and perform the following:
ping web-01a.corp.local
Ping requests should time out for approximately 15 seconds before BGP reconverges and removes the failed path through nsx-edge-01. The amount of time required is determined by the BGP timers, which we changed to specify a Keep Alive time of 5 seconds and a Hold-Down time of 15 seconds.
Once 15 seconds have elapsed, repeat the ping test and verify reconvergence.
ping web-01a.corp.local
You should observe that connectivity has been restored and ping replies are being received for server web-01a. If this is not the case, please wait a moment and try again.
Since BGP has reconverged, its path to the Tier-0 Gateway should now be through its interface on nsx-edge-02 (or nsx-edge-01, as noted earlier).
tracert -d web-01a.corp.local
The first hop is the IP address of the vPod router (the gateway of your admin desktop). The second hop is now 192.168.100.4 (or 192.168.100.3), the IP address of the Tier-0 Gateway interface on nsx-edge-02 (or nsx-edge-01). Traffic is then routed to the Tier-1 gateway via the unadvertised network in hop three, and finally delivered to web-01a at 172.16.10.11 in the fourth and final hop.
Now that we have tested fault tolerance on the Edge Node, we will return the Edge Node VM to a Powered On state. Return to the vSphere Client then perform the following.
Congratulations on completing Module 4.
Please proceed to any module below which interests you the most.
To end your lab click on the END button.
Module 5 - Distributed Firewall and Tools (30 minutes)
The goal of this module is to demonstrate how the distributed firewall (DFW) and operational tools within NSX function and are configured. The Distributed Firewall in NSX-T 3.0 is installed by default with a final Allow all rule. This means that all traffic is permitted and micro-segmentation is "off". To enable micro-segmentation, you need to change the last rule from Allow to Deny. In this module we will execute the following operations:
DFW Section:
Tools Section:
This diagram illustrates what virtual machines make up our 3 Tier Web App for testing
This diagram illustrates the port requirements for our 3 Tier Web App
In this chapter, we will review and configure the Distributed Firewall of NSX-T.
By default, the NSX DFW is configured with a final Allow All rule. This means that all traffic is allowed, and in order to "enable micro-segmentation" or block all traffic, we will need to change the final rule to Deny. Before we change the final rule, let's verify that our pre-created 3 Tier App works as expected.
NOTE: Only focus on given VMs for the 3-tier app. Lab has additional VMs for other modules.
Note: Feel free to test Web-02a and Web-03a shortcuts to verify they work as well.
Now that we have verified our 3 Tier App works, let's change our final rule to Deny All.
Note: This login utilizes the new Active Directory integration in version 3.0.
Note: Once we publish this change, explicit allow rules must be created for communication to be allowed in the environment. Let's verify our 3 Tier App is being blocked.
Note: It may take up to 20 seconds for the page to time out; you can also verify that web-02a and web-03a cannot be accessed. Now that we know the app cannot be reached, let's enable the preconfigured rules and test again.
Now that the allow rules are enabled, let's test the 3 Tier App connectivity and verify the rules are applied.
We have now enabled the DFW within NSX-T and enabled the preconfigured 3 Tier App rules. Next, we will verify through logging that the rules work as expected.
Note: You can also explore each individual rule to see the log label set for each.
To ensure you are viewing the page shown, you may need to select Dashboard 1 under My Dashboards in the left side panel.
We will now delete and reconfigure the rules and groups to take a more detailed look at how they are configured. If you would like to skip this configuration you can jump ahead to the tools section or the next module.
Do this for two of the three 3-tier groups (app_servers and db_servers)
Note: The 3-tier-web-servers group cannot be deleted because it is in use by another rule; we will reuse this group.
Note: This is just one way of creating a group of virtual machines; the groups you previously deleted utilized tags instead of static members.
As a reminder, here are the port requirements for the 3 Tier App to function. Next, let's go to the Distributed Firewall section to create the rules.
Client Access Rule settings:
Name: Client Access
Sources: Any
Destinations: 3-tier-web-servers
Services: HTTPS
Action: Allow
Web to App Access Rule settings:
Name: Web to App Access
Sources: 3-tier-web-servers
Destinations: 3-tier-app-servers
Services: TCP_8443
Action: Allow
App to DB Access Rule settings:
Name: App to DB Access
Sources: 3-tier-app-servers
Destinations: 3-tier-db-servers
Services: MySQL TCP 3306
Action: Allow
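The same rules can also be expressed through the NSX policy API. The sketch below creates the Web to App Access rule; the security policy ID, rule ID, group paths, and service path shown are placeholders and assumptions rather than values from this lab, and in this module the rules are created through the UI.

# Sketch: create the "Web to App Access" rule under an existing DFW security policy
curl -k -u admin:'VMware1!' -X PATCH \
  https://nsxmgr-01a.corp.local/policy/api/v1/infra/domains/default/security-policies/3-tier-app-policy/rules/web-to-app \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "Web to App Access",
        "source_groups":      [ "/infra/domains/default/groups/3-tier-web-servers" ],
        "destination_groups": [ "/infra/domains/default/groups/3-tier-app-servers" ],
        "services":           [ "/infra/services/TCP_8443" ],
        "action": "ALLOW"
      }'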
Congratulations, you have successfully configured micro-segmentation rules for a 3 Tier App!
NSX provides a variety of tools to make operations and troubleshooting easier. Traceflow, IPFIX, Port Mirroring and Consolidated Capacity can be found under the Plan & Troubleshoot section. In this chapter, we will explore Traceflow as it relates to the 3-tier app DFW rules.
Here is a list of the tools and their function.
Before we explore Traceflow, let's verify our 3-tier app rules are set as expected and the final rule in the DFW is set to deny.
Traceflow traces the transport node-level path of a packet injected at a logical port. Traceflow supports L2 and L3 and is supported across ESXi, KVM, and Edge nodes (including NAT). Using details provided by the user, a Traceflow packet is created and inserted at the source logical port. As this packet travels from source to destination, all the logical network entities along the path report their observations. These observations are consolidated and shown in the UI.
Once the trace has completed, NSX-T Traceflow will provide a map and detailed information describing the packet's path.
This concludes the tools section of this module; feel free to view flows between different VMs and ports within the environment.
Congratulations on completing Module 5! Next we will explore NSX services.
Please proceed to any module below which interests you the most.
To end your lab click on the END button.
Module 6 - NSX Services (15 minutes)
The goal of this lab is to explore some of the various services available in NSX. In this module you will complete the following tasks:
NSX Edge Nodes are service appliances with pools of capacity, dedicated to running network and security services in the NSX fabric that cannot be distributed to the hypervisors. Edge Nodes are used to provide routed connectivity between the overlay and the physical infrastructure via the Service Router (SR) component of the Tier-0 Gateway, and can also provide additional centralized, non-distributed services such as load balancing, NAT and VPN. Services provided by the NSX Edge Transport Node include:
As soon as one of these services is configured or an external interface is defined on a Tier-0 or Tier-1 gateway, a Service Router (SR) is instantiated on the selected Edge node. The Edge node is also a transport node in NSX, hosting its own TEP address. This allows it to communicate with other nodes in the overlay network. NSX Edge Transport Nodes are typically configured for one or more Overlay Transport Zones, and will also be connected to one or more VLAN transport zones when used for North-South (Uplink) connectivity.
Beginning with the NSX-T Data Center 3.0 release, support for the Intel® QuickAssist Technology (QAT) is provided on bare metal servers. Intel® QAT provides the hardware acceleration for various cryptography operations, such as IPSec VPN bulk cryptography, offloading the function from the Intel® Xeon® Scalable processor.
The QAT feature is enabled by default if the NSX Edge is deployed on a bare metal server with an Intel® QuickAssist PCIe card that is based on the installed C62x chipset (Intel® QuickAssist Adapter 8960 or 8970). The single root I/O virtualization (SR-IOV) interface must be enabled in the BIOS firmware.
NSX Edge Node is available for deployment in either a virtual machine (VM) or bare metal form factor. When deployed as a VM, the Edge Node benefits from native vSphere features such as Distributed Resource Scheduler (DRS) and vMotion. Deploying the Edge Node on bare metal allows direct access to the device's hardware resources, providing increased performance and lower latency than the VM form factor.
2nd Gen Intel® Xeon® Scalable processors feature Intel® Virtualization Technology (Intel® VT), built into and enhanced across five successive generations of Intel® Xeon® processors, enabling live migration of VMs across Intel Xeon processor generations.
Consider the network bandwidth requirements within your data center when planning vMotion. A 10 GbE NIC can vMotion up to 8 VMs simultaneously.
Enter the following login credentials for the NSX Manager.
When deploying centralized services in the NSX fabric, the instance of that service is provisioned and realized on an NSX Edge Node. If the Edge Node hosting this service were to experience a failure, any services running on the Edge Node would also fail as a result. To prevent a failure from impacting these services, Edge Nodes are grouped into logical objects called Edge Clusters.
An Edge Cluster is a group of one or more Edge Nodes that specifies the fault domain for services and how they should be recovered. Your lab is provisioned with four Edge Transport Nodes: nsx-edge-01, nsx-edge-02, nsx-edge-03 and nsx-edge-04. We will now review the existing Edge Cluster configuration and create one additional cluster.
Observe that one Edge Cluster is currently configured: edge-cluster-01. We will review its configuration before creating a second cluster.
Observe that as stated above, there are four Edge Nodes. Edge Cluster edge-cluster-01 is configured to use Edge Nodes nsx-edge-01 and nsx-edge-02, indicated in the Selected column. nsx-edge-03 and nsx-edge-04 are displayed as Available and are not part of this Edge Cluster.
We will now define a new Edge Cluster that will be used for the services we configure in this module.
Confirm that Edge Cluster edge-cluster-02 was created successfully, and that it is comprised of 2 Edge Transport Nodes.
We will now create a load balancer in NSX. In this section of the module you will execute the following tasks:
In order to create a load balancer, we need a Tier-1 Gateway deployed to at least one Edge Node.
Note: Tier-1 gateways used for Load balancing services must be placed on edge nodes of medium or large size
Since we rely on the Tier-0 Gateway to re-distribute the routes from the Tier-1 to the physical fabric, we also need to allow the Load Balancer and SNAT routes to be re-distributed at the Tier-0 level as well. This has already been configured in your lab, but would typically need to be enabled. We will now review the route redistribution configuration.
Now that we have the Tier-1 and routing requirements set up, let's create our Load Balancer.
Let's create a health monitor for our 3 Tier App.
In this step we will create a new Server Pool. A Server Pool is a list of the systems the load balancer will monitor and deliver traffic to for a given Virtual Server.
We will now select a Health Monitor for this Pool. Health Monitors define how the load balancer will check the pool members to determine their ability to accept incoming connections.
The last step is to define a Virtual Server. The Virtual Server has an IP address that accepts incoming connections and routes them to a pool member. How a pool member is chosen gets specified during configuration, and can be based on a number of factors including availability; load; and number of connections.
Do the following to configure the virtual server settings
Our new load balancer configuration is now complete. We have configured a Virtual Server on port 443 with an IP address of 172.16.10.10, sending traffic to the web servers in Server Pool Web-servers-01. This server pool has two IP addresses, 172.16.10.11 and 172.16.10.12, corresponding to web-01a and web-02a respectively. The Load Balancer is monitoring the health and availability of the web application by connecting to the pool members every five seconds with the URL /cgi-bin/app.py and expecting a response that contains the text "Customer Database".
You should now see the test web application and the Customer Database information.
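As an alternative to the browser, the same check can be made from a command line. This is a sketch that assumes curl is available on the machine you run it from; the Virtual Server address and URL are the ones configured above.

# Request the application page through the Virtual Server; the response should contain "Customer Database"
curl -k https://172.16.10.10/cgi-bin/app.py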
We will now log into the vCenter web client so we can manually fail one of the web servers and test fault tolerance. If you have an existing tab with the vCenter client, click to select it. Otherwise perform the following to open a new tab and connect to vCenter.
Recall the web server that served your request when we recently connected to the Webapp Virtual Server. We will now power that server off and verify that traffic gets directed to the remaining web server.
Before completing this section of the module, we will return the web server we had powered off to an operational state.
We will now explore IP Address Management (IPAM) via the DHCP Server component of NSX. In this section of the module you will execute the following tasks:
The first step in configuring our DHCP Server is to define it in the NSX Policy Manager interface.
The next step in configuring our DHCP Server is to attach it to a Gateway. In this exercise we will attach it to the existing T0 Gateway Tier-0-Gateway-01.
The final step in configuring DHCP services is to attach our new DHCP Server to the desired Segment. In this example we will use the existing Segment LS-web.
We will now log into the vCenter web client so we can open a console to one of the web servers and reconfigure its IP address. If you have an existing tab with the vCenter client, click to select it. Otherwise perform the following to open a new tab and connect to vCenter.
We will now reconfigure server web-01a to obtain a DHCP address from the NSX DHCP Server we just configured.
We will now reconfigure the server to request a DHCP address.
/root/2126-01/dhcp.sh
The web server will reboot and obtain an IP address via DHCP.
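To confirm the lease from the web server's console, a check such as the one below can be used; the interface name eth0 is an assumption and may be different on this VM.

# Show the interface address after the reboot; it should now come from the NSX DHCP range configured above
ip addr show eth0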
We will now return the web server to using its previously configured static IP address.
/root/2126-01/static.sh
The web server will reboot with its original IP address of 172.16.10.11.
Congratulations on completing Module 6.
Please proceed to any module below which interests you the most.
To end your lab click on the END button.
Module 7 - NSX Operations Overview (15 minutes)
The goal of this lab is to explore basic topology management, flow tracing, and operational functions.
NSX operations:
This diagram illustrates the virtual machines that make up our 3 Tier Web App we will be exploring.
NSX provides a variety of tools to make operations and troubleshooting easier. In this module we will explore some of these tools.
Note: This login utilizes the new Active Directory integration in version 3.0.
In this view we can see the relationship between the configured Tier-0 and Tier-1 gateways, logical segments, and connected virtual machines.
In this view we can see the name and details of the Tier-0 gateway.
Take a moment to explore the topology: hover over items to see more detail, and click on groups of VMs to expand and collapse the view.
Once you are done exploring, click Security -> Security overview -> Configuration to return to this view.
Once you are done exploring, click Inventory -> Inventory overview -> Configuration to return to this view.
Once you are done exploring, click System -> System overview -> Configuration to return to this view.
Flow tracing is the last tool we will look at in this module. This is a repeat of the tools section in Module 5 - Distributed Firewall and Tools; if you have already completed Module 5, you can skip this section and end your lab. The flow tracing tool tests data plane connectivity between two VMs by injecting traffic.
Congratulations on completing Module 7.
Please proceed to any module below which interests you the most.
Now that you’ve completed this lab, try testing your skills with VMware Odyssey, our newest Hands-on Labs gamification program. We have taken Hands-on Labs to the next level by adding gamification elements to the labs you know and love. Experience the fully automated VMware Odyssey as you race against the clock to complete tasks and reach the highest ranking on the leaderboard. Try the Network Security Odyssey lab.
To end your lab click on the END button.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-2126-01-NET
Version: 20201208-143034