VMware Hands-on Labs - HOL-2026-01-NET


HOL-2026-01-NET - VMware NSX-T Data Center - Getting Started

Lab Guidance


Note: This lab will take more than 90 minutes to complete. Expect to finish only 2-3 of the modules during your session.  The modules are independent of each other, so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Lab Module List:

 Lab Captains:

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved, so all of your work must be done during the lab session.  However, you can click EXTEND to increase your time.  At a VMware event, you can extend your lab time twice, for up to 30 additional minutes; each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes; each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. In addition to typing it directly, there are two helpful methods of entering data that make it easier to enter complex values.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs utilize this benefit, allowing us to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all of its startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Module 1 - Introduction to NSX-T (15 minutes)

Introduction to NSX-T Data Center


In this module you will be introduced to NSX-T Data Center, its capabilities, and the components that make up the solution.


 

Lab Topology

 

This diagram shows the topology that will be used for this lab. The lab includes four (4) ESXi hosts grouped as follows:

In addition, this lab includes a standalone KVM host to demonstrate the multi-hypervisor capabilities provided by NSX-T:

Finally, this lab includes four (4) VM form factor NSX Edge Nodes, configured as follows:

 

 

NSX Logical Lab Topology

 

This diagram shows the logical network topology that will be used in this lab. The lab includes a single Tier-0 Gateway that has four (4) total connected interfaces, configured as follows:

 

 

What is NSX-T Data Center?

VMware NSX-T Data Center ("NSX-T") is designed to address emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks. In addition to vSphere, these environments may also include KVM-based hypervisors, containers and public clouds. NSX-T allows IT and development teams to choose the technologies best suited for their particular applications. NSX-T  is also designed for management, operations and consumption by development organizations – in addition to IT.

In much the same way that server virtualization programmatically creates, snapshots, deletes and restores software-based virtual machines (VMs), NSX-T network virtualization programmatically creates, snapshots, deletes, and restores software-based virtual networks.

With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2 through Layer 7 networking services (for example, switching, routing, access control, firewalling, QoS) in software. As a result, these services can be programmatically assembled in any combination to produce unique, isolated virtual networks in a matter of seconds.

NSX-T works by implementing three separate but integrated planes: management, control and data. The three planes are implemented as a set of processes, modules and agents residing on three types of nodes: manager, controller and transport.

 

 

NSX-T Components (Part 1)

 

Data Plane

The data plane performs stateless forwarding and transformation of packets based on tables populated by the control plane, reports topology information to the control plane, and maintains packet-level statistics.

The data plane is the source of truth for the physical topology and status of the components, for example, VIF location, tunnel status, and so on. If you are dealing with moving packets from one place to another, you are in the data plane. The data plane also maintains the status of, and handles failover between, multiple links/tunnels. Per-packet performance is paramount, with very strict latency and jitter requirements. The data plane is not necessarily fully contained in the kernel, drivers, userspace, or even specific userspace processes. The data plane is constrained to totally stateless forwarding based on tables/rules populated by the control plane.

The data plane may also have components that maintain some amount of state for features such as TCP termination. This is different from the control plane managed state, such as MAC:IP tunnel mappings, because the state managed by the control plane is about how to forward packets, whereas the state managed by the data plane is limited to how to manipulate payload.

Control Plane

The control plane computes all ephemeral runtime state based on configuration from the management plane, disseminates topology information reported by the data plane elements, and pushes stateless configuration to forwarding engines.

The control plane is sometimes described as the signaling for the network. If you are dealing with processing messages in order to maintain the data plane in the presence of static user configuration, you are in the control plane (for example, responding to a vMotion of a virtual machine (VM) is a control plane responsibility, but connecting the VM to the logical network is a management plane responsibility). Often the control plane acts as a reflector for topological information from the data plane elements to one another, for example, MAC/tunnel mappings for TEPs. In other cases, the control plane acts on data received from some data plane elements to (re)configure other data plane elements, such as using VIF locators to compute and establish the correct subset mesh of tunnels.

The set of objects that the control plane deals with include VIFs, logical networks, logical ports, logical routers, IP addresses, and so on.

The control plane is split into two parts in NSX-T, the central control plane (CCP), which runs on the NSX Manager/Controller cluster, and the local control plane (LCP), which runs on the transport nodes, adjacent to the data plane it controls. The Central Control Plane computes an ephemeral runtime state based on configuration from the management plane, and disseminates information reported by the data plane elements via the local control plane. The Local Control Plane monitors local link status, computes ephemeral runtime state based on updates from data plane and CCP, and pushes stateless configuration to forwarding engines. The LCP shares fate with the data plane element which hosts it.

Management Plane

The management plane provides a single API entry point to the system, maintains user configuration, handles user queries, and performs operational tasks on all management, control and data plane nodes in the system.

The management plane is responsible for any function related to querying, modifying or persisting user configuration, while the control plane is responsible for dissemination of that configuration down to the correct subset of data plane elements. The result is that data may belong to multiple planes, depending on which stage of existence it is in. The management plane also handles querying recent status and statistics from the control plane, and sometimes directly from the data plane.

The management plane is the sole source-of-truth for the configured (logical) system, as managed by the user via configuration. Changes are made using either a REST API or the NSX-T user interface.
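
As a concrete example of this single API entry point, the same configuration shown in the user interface can be read with any REST client. The following is a minimal sketch, not part of the lab exercises: it assumes the lab's admin credentials and an NSX Manager reachable at nsxmgr-01a.corp.local (an illustrative hostname; substitute your Manager's address). The -k flag skips certificate validation and is appropriate only in a lab.

# List the transport zones known to the management plane (lab-only sketch)
curl -k -u 'admin:VMware1!VMware1!' \
  https://nsxmgr-01a.corp.local/api/v1/transport-zones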

NSX also contains a management plane agent (MPA) that runs on all cluster and transport nodes. Example use cases are bootstrapping configurations such as central management node address(es) and credentials, packages, statistics and status. The MPA generally runs independently of the control and data planes and can be restarted independently if necessary; however, there are scenarios where state is shared because they run on the same host. The MPA is both locally and remotely accessible. The MPA runs on transport nodes, control nodes and management nodes for node management. The MPA may also perform data plane related tasks on transport nodes.

Tasks that operate on the management plane include:

  • Persisting and modifying user configuration
  • Handling user queries
  • Performing operational tasks on management, control and data plane nodes

 

 

NSX-T Components (Part 2)

 

NSX Manager

NSX Manager provides the graphical user interface (GUI) and the REST APIs for creating, configuring and monitoring NSX-T components, such as controllers, segments and edge nodes.

NSX Manager is the management plane for the NSX-T ecosystem. NSX Manager provides an aggregated system view and is the centralized network management component of NSX-T. It provides a method for monitoring and troubleshooting workloads attached to virtual networks created by NSX-T. It provides configuration and orchestration of:

  • Logical networking components, such as logical switching and routing
  • Networking and Edge services
  • Security services and distributed firewall

NSX Manager allows seamless orchestration of both built-in and external services. All security services, whether built-in or 3rd party, are deployed and configured by the NSX-T management plane. The management plane provides a single window for viewing services availability. It also facilitates policy based service chaining, context sharing and inter-service event handling. This simplifies the auditing of the security posture, streamlining application of identity-based controls (for example, active directory and mobility profiles).

NSX Manager also provides REST API entry-points to automate consumption. This flexible architecture allows for automation of all configuration and monitoring aspects via any cloud management platform, security vendor platform or automation framework.

The NSX-T Management Plane Agent (MPA) is an NSX Manager component that lives on every node (hypervisor). The MPA is in charge of enacting the desired state of the system, and for communicating non-flow-controlling (NFC) messages such as configuration, statistics, status and real time data between transport nodes and the management plane.

In earlier versions of NSX, the Manager function was provided by a standalone VM appliance. As of NSX-T 2.4, the functions of the NSX Manager have been integrated into the NSX Controllers in a fully active, clustered configuration. This provides the benefit of maintaining fewer infrastructure VMs, while also providing scalability and resilience across NSX Manager nodes.

NSX Controller

NSX Controller is an advanced distributed state management system that controls virtual networks and overlay transport tunnels.

NSX Controller is deployed as a cluster of highly available virtual appliances that are responsible for the programmatic deployment of virtual networks across the entire NSX-T architecture. The NSX-T Central Control Plane (CCP) is logically separated from all data plane traffic, meaning any failure in the control plane does not affect existing data plane operations. Traffic doesn’t pass through the controller; instead the controller is responsible for providing configuration to other NSX Controller components such as the logical switches, logical routers and edge configuration. Stability and reliability of data transport are central concerns in networking. The NSX Controller function operates as a separate process within the NSX Manager cluster.

N-VDS Switches

The N-VDS switch provides the capability in the NSX platform to create isolated logical L2 networks with the same flexibility and agility that exists for virtual machines.

A cloud deployment for a virtual data center has a variety of applications across multiple tenants. These applications and tenants require isolation from each other for security, fault isolation, and to avoid overlapping IP addressing issues. Endpoints, both virtual and physical, can connect to logical segments and establish connectivity independently from their physical location in the data center network. This is enabled through the decoupling of network infrastructure from logical networking (i.e., separation of underlay network from overlay network) provided by NSX-T network virtualization.

A logical switch provides a representation of Layer 2 switched connectivity across many hosts with Layer 3 IP reachability between them. If you plan to restrict logical networks to a limited set of hosts or you have custom connectivity requirements, you may find it necessary to create additional logical switches.

 

 

NSX-T Components (Part 3)

 

Gateway Routers

NSX-T Gateway routers provide both North-South and East-West connectivity, enabling tenants to access public networks, as well as connectivity between different networks within the same tenants.

A Gateway router is a configured partition of a traditional network hardware router, commonly referred to as virtual routing and forwarding (VRF). It replicates the hardware's functionality, creating multiple routing domains within a single router. Gateway routers perform a subset of the tasks that can be handled by the physical router, and each can contain multiple routing instances and routing tables. Using Gateway routers can be an effective way to maximize router usage, because a set of Gateway routers within a single physical router can perform the operations previously performed by several pieces of equipment.

NSX-T supports a two-tier logical router topology: the top-tier Gateway router, called a Tier-0 (T0) and the secondary-tier Gateway router called a Tier-1 (T1). This structure gives both provider administrator and tenant administrators complete control over their services and policies. Administrators control and configure T0 routing and services, while tenant administrators control and configure T1. The north edge of a T0 interfaces with the physical network, and is where dynamic routing protocols can be configured to exchange routing information with physical routers. The south edge of T0 connects to one or more T1 routing layer(s) and receives routing information from them. To optimize resource usage, the T0 layer does not push all routes coming from the physical network towards T1, but does provide default route information.

Southbound, the T1 router hosts logical switch Segment interfaces as defined by the tenant administrators and provides one-hop routing capability between them. For Tier-1 attached subnets to be reachable from the physical network, route redistribution towards the Tier-0 layer must be enabled. However, this is not performed by a traditional routing protocol (such as OSPF or BGP) running between the Tier-1 and Tier-0 layers. Inter-tier routes are passed directly to the appropriate routers using the NSX-T control plane.

Note that a two-tiered routing topology is not mandatory. If there is no need for provider/tenant isolation, a single Tier-0 topology can be implemented. In this scenario, Layer 2 segments are connected directly to the T0 layer, and a Tier-1 router is not configured.

A Gateway router consists of up to two components: a distributed router (DR) and, optionally, one or more service routers (SR).

The DR is kernel based and spans hypervisors, providing local routing functions to the VMs that are connected to it, and also exists in any edge nodes the logical router is bound to. Functionally, the DR is responsible for one-hop distributed routing between logical switches and/or Gateway routers connected to this logical router, and functions similarly to the distributed logical router (DLR) in earlier versions of NSX.

The SR is responsible for delivering services that are not currently implemented in a distributed fashion, such as stateful NAT, load balancing, DHCP or VPN services. Service Routers are deployed on the Edge node cluster that is selected when the T0/T1 router is initially configured.

To reiterate, a Gateway router in NSX-T always has an associated DR, regardless of whether it is deployed as a T0 or a T1. It will also have an associated SR created if either of the following is true:

  • The Gateway router is a Tier-0 with Uplinks to the physical network configured
  • The Gateway router is configured with centralized (non-distributed) services, such as NAT, load balancing, DHCP or VPN

The NSX-T management plane (MP) automatically creates the structure connecting the service router to the distributed router. The MP allocates a VNI and creates a transit segment, then configures a port on both the SR and DR, connecting them to the transit segment. The MP then automatically allocates unique IP addresses for both the SR and DR.

NSX Edge Node

NSX Edge Node provides routing services and connectivity to networks that are external to the NSX-T deployment.

When virtual machine workloads residing on different NSX segments communicate with one another through a T1, the distributed router (DR) function is used to route the traffic in a distributed, optimized fashion.

However, when virtual machine workloads need to communicate with devices outside of the NSX environment, the service router (SR), which is hosted on an NSX Edge Node, is used. If stateful services are required - for example, network address translation - the SR will also perform this function and must receive the traffic as well, whether the stateful service is associated with a T0 or a T1 router.

Common deployments of NSX Edge Node include DMZ and multi-tenant Cloud environments, where the NSX Edge Node creates virtual boundaries for each tenant through the use of service routers. 

Transport Zones

A transport zone controls which hosts a logical switch can reach. It can span one or more host clusters. Transport zones dictate which hosts and, therefore, which VMs can participate in the use of a particular network.

A Transport Zone defines a collection of hosts that can communicate with each other across a physical network infrastructure. This communication happens over one or more interfaces defined as a Tunnel End Point (TEP).

If two transport nodes are in the same transport zone, VMs hosted on those transport nodes can be attached to the NSX-T logical switch segments that are also in that transport zone. This attachment makes it possible for the VMs to communicate with each other, assuming the VMs otherwise have Layer 2/Layer 3 reachability. If VMs are attached to switches that are in different transport zones, the VMs cannot communicate with each other. Transport zones do not replace Layer 2/Layer 3 reachability requirements, but they place a limit on reachability.

A node can serve as a transport node if it contains at least one hostswitch. When creating a host transport node and adding it to a transport zone, NSX-T installs a hostswitch on the host. The hostswitch is used for attaching VMs to NSX-T logical switch segments and for creating NSX-T gateway router uplinks and downlinks.

In previous versions of NSX, a hostswitch could host a single transport zone; configuring multiple transport zones required multiple hostswitches on the node. However, as of NSX-T 2.4 it is possible to configure multiple transport zones using the same hostswitch.

 

 

Glossary of Components

The common NSX-T concepts that are used in the documentation and user interface.

Control Plane

Computes runtime state based on configuration from the management plane. Control plane disseminates topology information reported by the data plane elements and pushes stateless configuration to forwarding engines.

Data Plane

Performs stateless forwarding or transformation of packets based on tables populated by the control plane. Data plane reports topology information to the control plane and maintains packet level statistics.

External Network

A physical network or VLAN not managed by NSX-T. You can link your logical network or overlay network to an external network through an NSX Edge. For example, a physical network in a customer data center or a VLAN in a physical environment.

Host Transport Node

Hypervisor node that has been registered with the NSX-T management plane and has NSX-T modules installed. For a hypervisor host to be part of the NSX-T overlay, it must be added to the NSX-T fabric.

Edge Transport Node

Edge node that has been registered with the NSX-T management plane. The Edge Transport Node hosts the NSX Service Routers (SR) that are associated with Tier-0 and Tier-1 routers, including Uplink connectivity to External Networks as well as stateful services such as NAT.

Profile

Represents a specific configuration that can be associated with an NSX Edge cluster. For example, the fabric profile might contain the tunneling properties for dead peer detection.

Gateway Router

NSX-T routing entity that provides distributed East-West routing. A gateway router also links a Tier-1 router with a Tier-0 router.

Logical Router Port

Logical network port which can attach to either a logical switch segment port or a physical network uplink port. Logical Router Ports are also used to connect the LR to SR services such as Network Address Translation (NAT), load balancing, gateway firewall and VPN.

Segment / Logical Switch

Segments, called logical switches in previous versions of NSX, are API entities that provide virtual Layer 2 switching for both VM and router interfaces. A segment gives tenant network administrators the logical equivalent of a physical Layer 2 switch, allowing a group of VMs to communicate on a common broadcast domain. A segment is a logical entity that exists independent of the underlying infrastructure and spans many hypervisors. It provides network connectivity to VMs regardless of their physical location, allowing them to migrate between locations without requiring any reconfiguration.

In a multi-tenant cloud, many segments can exist side-by-side on the same hypervisor hardware, with each Layer 2 segment isolated from the others. Segments can be connected using gateway routers, and gateway routers can provide uplink ports connected to the external physical network.

Logical Switch Port

Logical switch attachment point to establish a connection to a virtual machine network interface or a logical router interface. The logical switch port reports applied switching profile, port state, and link status.

Management Plane

Provides single API entry point to the system, persists user configuration, handles user queries, and performs operational tasks on all of the management, control and data plane nodes in the system. Management plane is also responsible for querying, modifying and persisting user configuration.

NSX Controller Cluster

Deployed as a cluster of highly available virtual appliances that are responsible for the programmatic deployment of virtual networks across the entire NSX-T architecture. NSX Manager and NSX Controller services both exist in the NSX Controller Cluster.

NSX Edge Cluster

Collection of NSX Edge node appliances that are logically grouped for high-availability monitoring.

NSX Edge Node

Component that provides computational power to deliver IP routing and IP services functions. Service Routers (SR), used for Uplink connectivity and stateful services, are provisioned on Edge node appliances.

NSX-T Hostswitch or KVM Open vSwitch (OVS)

Software that runs on the hypervisor and provides physical traffic forwarding. The hostswitch or OVS is invisible to the tenant network administrator and provides the underlying forwarding service that each logical switch relies on. To achieve network virtualization, a network controller must configure the hypervisor hostswitches with network flow tables that form the logical broadcast domains the tenant administrators defined when they created and configured their logical switches.

Each logical broadcast domain is implemented by tunneling VM-to-VM and VM-to-logical router traffic, using the tunnel encapsulation protocol Geneve. The network controller has a global view of the data center and ensures that the hypervisor hostswitch flow tables are updated as VMs are created, moved or removed.

NSX Manager

Management function that exists as a component of the NSX Manager Cluster. In prior versions of NSX, the NSX Manager was a dedicated virtual appliance. As of NSX-T 2.4, the NSX Manager function and Controller Cluster functions are consolidated into a single cluster called the NSX Manager Cluster.

Open vSwitch (OVS)

Open source software switch that acts as a hypervisor hostswitch within XenServer, Xen, KVM and other Linux-based hypervisors. NSX Edge switching components are based on OVS.

Overlay Logical Network

Logical network implemented using Layer 2-in-Layer 3 tunneling such that the topology seen by VMs is decoupled from that of the physical network.

Physical Interface (pNIC)

Network interface on a physical server that a hypervisor is installed on.

Tier-0 (T0) Logical Router

The provider gateway router, also known as the Tier-0 gateway router, interfaces with the physical network. The Tier-0 gateway router is a top-tier router and can be configured as an active-active or active-standby cluster of service routers. The gateway router runs BGP and peers with physical routers via the service router. In active-standby mode, the gateway router can also provide stateful services.

Tier-1 (T1) Gateway Router

The Tier-1 gateway router is the second-tier router that connects to one Tier-0 gateway router for northbound connectivity, and to one or more overlay networks for southbound connectivity. A Tier-1 gateway router can also be configured as an active-standby cluster of service routers when it is configured to provide stateful services.

Transport Zone

Collection of transport nodes that defines the maximum span of logical switches. A transport zone represents a set of similarly provisioned hypervisors, and the logical switches that connect VMs on those hypervisors.

VM Interface (vNIC)

Network interface on a virtual machine that provides connectivity between the virtual guest operating system and the standard vSwitch or vSphere Distributed Switch. The vNIC can be attached to a logical port. You can identify a vNIC based on its Unique ID (UUID). The vNIC is equivalent to a network interface card (NIC) on a physical machine.

TEP

Tunnel End Point. Tunnel endpoints enable hypervisor hosts to participate in an NSX-T network overlay. The NSX-T overlay deploys a Layer 2 network over an existing physical network fabric by encapsulating frames inside of packets, and transferring the encapsulated packets over the underlying transport network. The underlying transport network can consist of either Layer 2 or Layer 3 networks. The TEP is the connection point at which encapsulation and decapsulation takes place.

Virtual Network Identifier (VNI)

The network identifier associated with a given logical switch. As Layer 2 segments are created in NSX, an associated VNI is allocated. This VNI is used in the encapsulated overlay packet, and facilitates Layer 2 separation.

 

Module 2 - Host Preparation and Logical Switching (45 minutes)

Host Preparation and Logical Switching - Module Overview


The goal of this lab is to explore Logical Switching in NSX-T.


Preparing Host and Edge Transport Nodes for NSX-T


In this section we will review the NSX Manager and explore the various components that comprise the fabric. We will then add a KVM host to the NSX fabric and validate that it is connected.


 

Launch Google Chrome

 

 

 

Open NSX-T Web Interface

 

Unlike NSX-V, which uses a vSphere Web Client plugin for administering NSX, NSX-T leverages a separate web interface for administration.

 

 

Login to the NSX-T Web Interface

 

Login to the NSX-T web interface using the following steps:

  1. In the User name field type admin
  2. In the Password field type VMware1!VMware1!
  3. Click LOG IN

 

 

NSX Home page

 

The NSX home page provides a view of the components that you can configure within NSX. NSX-T 2.4 includes a new Policy Manager user interface (UI) that simplifies configuration tasks. Policy Manager is a  component of the NSX Manager that utilizes a declarative policy engine to enact configuration state within the NSX environment.

  1. Click on the Dashboard selector to reveal the dropdown
  2. Click System

 

 

NSX System Dashboard

 

The NSX System Dashboard provides you an overall view of the NSX components and the health of each one.

  1. Click System in the top level navigation menu

 

 

NSX System Overview

 

The System tab in the NSX Policy Manager is where initial configuration steps are performed. This includes defining Edge and Transport Nodes in the Fabric tab, integrating Policy Manager with Active Directory, defining NSX administrative users, performing NSX upgrades and scheduling backups.

The top level System menu defaults to the Overview tab on the left. The Overview tab allows you to check the health of the NSX Management Cluster, including CPU, RAM and interface information.

NOTE: This Hands On Lab has been configured with a single Management Node. This is not a supported configuration and has been done to conserve resources within the lab environment. A supported deployment of NSX-T will consist of three clustered, fault-tolerant NSX Management Node appliances.

New in NSX-T 2.4, the NSX Manager, Controller and new Policy Manager functions have been consolidated into a single clustered set of three Management Nodes. As a result, the administrative user interface is accessible from any of the three Management Nodes. A shared Virtual IP can be defined on this page, which would then be reachable across all Management Nodes.
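
The same cluster health shown on the Overview tab can also be queried from the REST API. A minimal sketch, again assuming the lab credentials and the illustrative Manager hostname nsxmgr-01a.corp.local:

# Query overall management cluster status (lab-only: -k skips cert checks)
curl -k -u 'admin:VMware1!VMware1!' \
  https://nsxmgr-01a.corp.local/api/v1/cluster/status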

 

 

Navigate to Nodes in the NSX-T UI

 

We will now review the state of the NSX fabric.

  1. Click Fabric in the menu on the left side of the NSX-T System user interface
  2. Click Nodes to view the nodes currently defined in NSX

 

 

View Host Transport Node Configuration

 

The Nodes section, under Fabric, is where Host and Edge Transport Nodes are defined in the NSX fabric.

Host Transport Nodes are any hypervisor hosts that will participate in NSX overlay functions. NSX-T includes support for both vSphere/ESXi and KVM hypervisors. This lab is preconfigured with a single, standalone KVM host that is participating in the NSX fabric, as well as a single vCenter server. A single NSX-T deployment can manage up to 16 vCenter servers.

Hypervisors can be added individually by selecting None: Standalone Hosts from the Managed by list, while vCenter servers and their associated inventory can be added by selecting the vCenter option.

  1. Click the dropdown next to Managed by
  2. Click vCenter from the list of available options

 

 

 

View Uplink Profile Configuration

Uplink Profiles are assigned to Transport Nodes in the NSX-T environment, and define the configuration of the physical NICs that will be used.

  1. On the left side of the NSX-T user interface, click Profiles
  2. Uplink Profiles should be selected by default. If it is not, click to select it
  3. Click nsx-default-uplink-hostswitch-profile
    • NOTE: Click on the name of the Uplink Profile, not the checkbox to its left

 

 

View Overlay Transport Zone Configuration

 

A Transport Zone defines the scope of where an NSX segment can exist within the fabric. For example, a dedicated DMZ cluster may contain a DMZ transport zone. Any segments created in this DMZ transport zone could then only be used by VM workloads in the DMZ cluster.

There are two types of Transport Zone in NSX-T, Overlay and VLAN:

  • Overlay Transport Zone: Used by Host and Edge Transport Nodes for Geneve-encapsulated overlay networking between TEPs
  • VLAN Transport Zone: Used for traditional VLAN-backed segments, such as the Uplinks that connect Edge Nodes to the physical network

  1. On the left side of the NSX-T user interface, click Transport Zones
  2. Click TZ-Overlay
    • NOTE: Click on the name of the Transport Zone, not the checkbox to its left

 

 

 

Revisiting Host Transport Node Configuration

 

Now it's time to review how uplink profiles and transport zones are combined to configure Host Transport Nodes in NSX-T. There are two ways that this can be done: each host can be configured individually, or a Transport Node Profile can be created and applied to an entire vSphere cluster, automatically preparing every host in that cluster. This lab uses a Transport Node Profile:

  1. On the left side of the NSX-T user interface, click Profiles
  2. Click Transport Node Profiles
  3. Click the checkbox to the left of ESXi-transport-node-profile to select it
  4. Click Edit to review the configuration

 

 

View Edge Transport Node Configuration

 

Similar to the way a Host Transport Node is configured, an Uplink Profile and one or more Transport Zones are used to define an Edge Transport Node in NSX-T. Edge Transport Nodes perform an important function in the NSX fabric. They host the Service Routers (SRs) that are used by Tier-0 and Tier-1 Gateways to perform stateful services such as NAT or load balancing. Most importantly, they host the Tier-0 Service Router that provides route peering between NSX overlay networking and the physical routed environment.

In this lab, there are four total Edge Transport Nodes, configured in two fault-tolerant clusters of two nodes each.

  1. Click Edge Transport Nodes
  2. Click the checkbox to the left of nsx-edge-01 to select it
  3. Click Edit to review the configuration

 

Observe the following details in the General tab of the Edit Edge Transport Node dialog:

  1. Click the N-VDS tab to view the N-VDS configuration

 

Observe the following details in the N-VDS tab of the Edit Edge Transport Node dialog:

This configuration states that this Edge Node will host two Transport Zones, TZ-Overlay and TZ-VLAN. One transport zone will be used for route peering with the physical network (TZ-VLAN), while the other will be used for overlay network services. The Edge Node's connectivity to the physical network will use the nsx-edge-single-nic-uplink-profile-large-mtu Uplink Profile. Finally, when a TEP is provisioned on the TZ-Overlay transport zone, it will be assigned an IP address from the TEP-ESXi-Pool range of IP addresses. No TEP will be provisioned on the VLAN transport zone, so the option is disabled.

  1. Click CANCEL to return to the list of Edge Transport Nodes

 

 

View Edge Cluster Configuration

 

As we reviewed, there are four Edge Transport Nodes defined in the NSX fabric. For fault tolerance, these edges have been configured in two clusters of two nodes each.

  1. Click Edge Clusters
  2. Click the checkbox to the left of edge-cluster-01 to select it
  3. Click Edit to review the configuration

 

Observe the following details in the Edit Edge Cluster dialog:

  1. Click CANCEL to return to the list of Edge Clusters

 

 

Log into kvm-01a

 

We will now log in to host kvm-01a and verify that the KVM hypervisor is running the web-03a.corp.local virtual machine. This workload has already been added to the NSX inventory, and will be used later in this lab.

  1. Click the PuTTY icon in the taskbar. This will launch the PuTTY terminal client

 

  1. Scroll through the list of Saved Sessions until kvm-01a.corp.local is visible
  2. Click kvm-01a.corp.local to highlight it
  3. Click Load to load the saved session
  4. Click Open to launch the SSH session

 

  1. Enter virsh list to view the virtual machine workloads currently running on this KVM host and confirm that VM web-03a is running
virsh list
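
Optionally, you can also list the network interfaces attached to the running VM with the standard libvirt command below, using the domain name exactly as it appears in the virsh list output. On an NSX-prepared KVM host, this shows the bridge or switch each vNIC is attached to:

virsh domiflist web-03a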

 

 

  1. Enter ifconfig nsx-vtep0.0 at the command line on kvm-01a to confirm that the TEP interface has been created with an IP address of 192.168.130.61 and an MTU of 1600.
ifconfig nsx-vtep0.0
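
The 1600-byte MTU is required because Geneve encapsulation adds overhead to every frame. As a rough worked example for a 1500-byte inner payload:

  1500 (inner payload)
+   14 (inner Ethernet header)
+    8 (Geneve base header)
+    8 (outer UDP header)
+   20 (outer IPv4 header)
------
  1550 bytes, so an MTU of 1600 leaves headroom for optional Geneve metadata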

 

Logical Switching in NSX-T


Now that you have reviewed the NSX-T components and verified that everything is operating correctly, we will create a logical network segment and connect an existing workload to it.


 

Create a new Logical Switch Segment

 

A Segment, called a Logical Switch in earlier versions of NSX, is a Layer 2 overlay network that provides an isolated broadcast domain. To create a new Segment in NSX-T, please perform the following.

From the NSX Manager web interface:

  1. Click Networking in the top level navigation menu
  2. Click Segments in the menu on the left side of the NSX-T Networking user interface
  3. Click ADD SEGMENT

Note that there are a number of preexisting Segments, including LS-web, LS-app and LS-db. These are used in other lab modules to host a sample three-tiered application.

 

 

Add Segment

 

Enter the following details in the Add Segment dialog:

  1. Segment Name: LS-new
  2. Uplink & Type: Tier-0-gateway-01
  3. Transport Zone: TZ-OVERLAY
  4. Click Set Subnets to bring up the Set Subnets dialog

 

 

Set Subnets

 

Enter the following details in the Set Subnets dialog:

  1. Click ADD SUBNET
  2. Gateway: 172.16.60.1/24
  3. Click ADD
  4. Click APPLY

 

 

Add Segment (cont)

 

Upon returning to the ADD SEGMENT dialog, confirm that Subnets now displays 1, then complete configuration of the new Segment:

  1. Click SAVE

 

 

Complete and Verify Segment Creation

 

  1. Click NO to complete and exit the ADD SEGMENT dialog

Observe that our new Segment, LS-new, is now visible in the list of Segments. During the Segment's creation, we connected it to the T0 router Tier-0-gateway-01 and assigned IP address 172.16.60.1/24 to this T0 interface.
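
For reference, the same Segment could have been created programmatically through the declarative Policy REST API that backs this UI. The following is a hedged sketch, not the procedure used by this lab: it assumes the lab admin credentials, the illustrative Manager hostname nsxmgr-01a.corp.local, object IDs that match the display names shown in the UI, and requires the Overlay transport zone's UUID to be looked up first.

# Create (or update) Segment LS-new via the Policy API
curl -k -u 'admin:VMware1!VMware1!' \
  -X PATCH 'https://nsxmgr-01a.corp.local/policy/api/v1/infra/segments/LS-new' \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "LS-new",
        "connectivity_path": "/infra/tier-0s/Tier-0-gateway-01",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<TZ-Overlay-UUID>",
        "subnets": [ { "gateway_address": "172.16.60.1/24" } ]
      }'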

 

 

Attach Application VM to the new Segment

 

We will now connect to vCenter to attach a sample VM workload to our new Segment.

  1. Click on the new tab icon
  2. Click on the RegionA folder
  3. Click on RegionA vSphere Client (HTML)

 

 

Login to vCenter

 

  1. Click the checkbox for Use Windows session authentication
  2. Click Login

 

 

Find VM web-04a

 

Once logged in, navigate to Hosts and Clusters:

  1. Click on the Menu button
  2. Click on Hosts and Clusters

 

 

Power on VM web-04a

 

  1. Click the Power on icon

 

 

View IP Address for VM web-04a

 

Note the IP address for server web-04a. This is the same IP subnet we used when creating our new Segment earlier in this module.

  1. Periodically click the refresh button until the IP address for web-04a is visible. This may take one to two minutes

 

 

Return to NSX Manager

 

  1. Click to return to the NSX Manager tab

Note: If you previously closed NSX Manager or it has timed out, click the VMware NSX | Login shortcut in the toolbar and log in with user name admin and password VMware1!VMware1!

 

 

View Segment Configuration: LS-new

 

We will now view the Segment Ports that are configured for our example VM, allowing it to use network overlay services. In addition to creating the LS-new Segment and configuring web-04a to utilize it, a sample three-tiered app has been configured and included with this lab. Segment LS-web has been preconfigured as its web tier, and servers web-01a, web-02a and web-03a have been connected to it. We will test network connectivity to VMs on these Segments later in this module.

  1. Click Networking in the top level navigation menu
  2. Click Segments in the menu on the left side of the NSX-T Networking user interface
  3. If necessary, scroll through the list of Segments until LS-new is visible
  4. Click the arrow to the left of LS-new to expand it
  5. Click the arrow to the left of PORTS to expand it
  6. It may take one to two minutes before the correct number of segment ports are displayed. You may need to periodically click the REFRESH button until the interface shows 1 Segment Port
  7. Click the 1 link to the right of Segment Ports to display the Segment Ports dialog

 

 

View Segment Configuration: LS-web

 

We will now view the Segment Ports that have been preconfigured for our sample web tier VMs, allowing them to use network overlay services.

  1. If necessary, scroll through the list of Segments until LS-web is visible
  2. Click the arrow to the left of LS-web to expand it
  3. Click the arrow to the left of PORTS to expand it
  4. Click the 3 link to the right of Segment Ports to display the Segment Ports dialog

 

 

Verify Connectivity to Web VMs

 

Now that we have created LS-new and configured server web-04a to utilize it, we will test connectivity.

  1. Click the Command Prompt icon in the taskbar to launch a Windows Command Prompt
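
From the Command Prompt, ping the web servers. Server web-01a's address, 172.16.10.11, is also used later in this manual; for web-04a, substitute the address displayed in vCenter in the earlier step (shown here as a placeholder):

ping 172.16.10.11
ping <web-04a-address-from-vCenter>

You should observe ping replies from both servers.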

 

Module 2 Conclusion


Congratulations on completing Module 2! Next, we will enable routing between the different logical segments.

Please proceed to any module below which interests you the most.


 

How to End Lab

 

To end your lab click on the END button.  

 

Module 3 - Logical Routing (60 minutes)

Logical Routing - Module Overview


The goal of this lab is to demonstrate the Logical Routing capabilities of NSX-T.

  • We will configure Logical Routing of East-West traffic
  • We will configure Logical Routing of North-South traffic
  • We will configure High Availability (HA)
  • We will configure Equal Cost Multi-Pathing (ECMP)

 

Logical Routing Topology

 

The lab environment we are building currently includes a Tier-0 Gateway which connects to outside networks. In this module, we will build a Tier-1 Gateway that will handle routing between a sample three-tiered application's network segments, and move those segments from the existing Tier-0 Gateway to the newly created Tier-1 Gateway.

 

Logical Routing of East-West Traffic


In this lesson we will explore Logical Routing of East-West traffic in NSX-T.


 

Launch Google Chrome

 

  • If not continuing from a previous module, open a browser by double clicking the Google Chrome icon on the desktop.

 

 

Open NSX-T Web Interface

 

Unlike NSX-V, which uses a vSphere Web Client plugin for administering NSX, NSX-T leverages a separate web interface for administration.

  • If not already open, launch the NSX-T web interface by clicking on the VMware NSX | Login bookmark in the bookmark toolbar of Google Chrome.

 

 

Login to the NSX-T Web Interface

 

If you do not already have an active session, login to the NSX-T web interface using the following steps:

  1. In the User name field type admin
  2. In the Password field type VMware1!VMware1!
  3. Click LOG IN

 

 

View Existing Tier-0 Gateway

 

NSX-T includes two tiers of routing: Tier-0 (T0) and Tier-1 (T1). An NSX-T deployment will typically consist of at least one T0 Gateway that includes Uplink connections to the physical network. This lab has been preconfigured with a three-tiered application spanning three NSX Segments: LS-web, LS-app and LS-db. These Segments have been connected to an existing T0 Gateway.

A Tier-0 Gateway can provide routing to multiple Tier-1 Gateways, allowing multiple isolated tenant environments to exist behind a single Tier-0 Gateway. In this module, we will examine the existing T0 Gateway, then create a new T1 Gateway and migrate the existing three-tiered app Segments over to it.

From the NSX-T Manager web interface:

  1. Click Networking in the top level navigation menu
  2. Click Tier-0 Gateways in the menu on the left side of the NSX-T Networking user interface
  3. Click the 3 link under Linked Segments for T0 Tier-0-gateway-01 to display the Linked Segments dialog. This number may be different if you have not completed the previous modules.

 

 

Create New Tier-1 Gateway

 

In this step we will create a new Tier-1 Gateway. We will migrate the existing three-tier app Segments to this Gateway, enabling East-West routing between the app tiers to occur within this new T1 Gateway.

  1. Click Tier-1 Gateways in the menu on the left side of the NSX-T Networking user interface
  2. Click ADD TIER-1 GATEWAY

 

 

Add Tier-1 Gateway

 

  1. In the Tier-1 Gateway Name field, enter Tier-1-gateway-01
  2. In the Linked Tier-0 Gateway field, select Tier-0-gateway-01
  3. Click SAVE
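
For reference, an equivalent Tier-1 Gateway could be declared through the Policy REST API. A minimal sketch, assuming the lab credentials, the illustrative Manager hostname nsxmgr-01a.corp.local, and object IDs matching the display names:

# Create Tier-1 Gateway Tier-1-gateway-01 linked to the existing Tier-0
curl -k -u 'admin:VMware1!VMware1!' \
  -X PATCH 'https://nsxmgr-01a.corp.local/policy/api/v1/infra/tier-1s/Tier-1-gateway-01' \
  -H 'Content-Type: application/json' \
  -d '{ "display_name": "Tier-1-gateway-01",
        "tier0_path": "/infra/tier-0s/Tier-0-gateway-01" }'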

 

 

Verify Creation of New Tier-1 Gateway

 

Verify that the new T1 Gateway Tier-1-gateway-01 has been created. Confirm that it is linked to T0 Gateway Tier-0-gateway-01, has 0 Linked Segments, and has a Status of Up.

 

 

Connect Segment LS-web to Tier-1 Gateway

 

  1. Click Segments in the menu on the left side of the NSX-T Networking user interface
  2. Click the More Options icon to the left of LS-web to display its Options menu, then click Edit
  3. In the Uplink & Type field, change the connected Gateway from Tier-0-gateway-01 to Tier-1-gateway-01
  4. Click SAVE

 

 

Connect Segment LS-app to Tier-1 Gateway

 

  1. Click the More Options icon to the left of LS-app to display its Options menu, then click Edit
  2. As with LS-web, change the connected Gateway to Tier-1-gateway-01, then click SAVE

 

 

Connect Segment LS-db to Tier-1 Gateway

 

  1. Click the More Options icon to the left of LS-db to display its Options menu, then click Edit
  2. As with LS-web, change the connected Gateway to Tier-1-gateway-01, then click SAVE

 

 

Verify Connectivity From Admin Desktop

Now that we have migrated our Segments from the existing Tier-0 Gateway to our new Tier-1 Gateway, we will test connectivity.

 

 

Verify East-West Connectivity From web-01a

 

We will now log in to VM web-01a and verify that we can reach the other VMs that comprise our sample three-tiered application.

  1. Click the PuTTY icon in the taskbar. This will launch the PuTTY terminal client

 

  1. Scroll through the list of Saved Sessions until web-01a.corp.local is visible
  2. Click web-01a.corp.local to highlight it
  3. Click Load to load the saved session
  4. Click Open to launch the SSH session
  • If prompted, click Yes to accept the server's host key
  • If not automatically logged in, use username vmware and password VMware1! to log into web-01a
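
Once logged in, ping the other tiers of the sample application to verify East-West reachability through the new Tier-1 Gateway. The app and db server names below are assumptions based on the lab's naming convention; substitute the names or addresses used in your lab if they differ:

ping -c 3 app-01a.corp.local
ping -c 3 db-01a.corp.local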

 

Logical Routing of North-South Traffic


In this lesson we will explore Logical Routing of North-South traffic in NSX-T.


 

Review Edge Transport Node

 

In NSX-T, the Edge Node provides computational power and North-South routing services to the NSX fabric. Edge Nodes are appliances with pools of capacity that can host routing uplinks as well as non-distributed, stateful services. Edge Nodes can be clustered for scalability and high availability, running in both active-active and active-standby configurations.

When used for North-South routing, an NSX Edge Node will be configured with two Transport Zones:

  • VLAN Transport Zone: Used to connect to the physical network fabric and peer with the physical core router
  • Overlay Transport Zone: Used to connect to the NSX overlay network and communicate with other Edge and Host Transport Nodes in the NSX fabric

We will review the Edge Node and Edge Cluster configurations. We will then review the North-South connectivity provided to the existing Tier-0 Gateway by the Edge Cluster.

  1. Click System in the top level navigation menu
  2. Click Fabric in the menu on the left side of the NSX-T System user interface
  3. Click Nodes
  4. Click Edge Transport Nodes

Basic configuration information can be viewed from the list of Edge Nodes, including Transport Zone and Edge Cluster configuration. Observe the following:

  • Each Edge Node is configured with two Transport Zones: TZ-Overlay and TZ-VLAN
  • Edge Nodes nsx-edge-01 and nsx-edge-02 are members of Edge Cluster edge-cluster-01
  • Edge Nodes nsx-edge-03 and nsx-edge-04 are members of Edge Cluster edge-cluster-02

Note: You can view any truncated field, such as Edge, Transport Zone or Edge Cluster, by hovering the mouse pointer over the field. A tooltip will appear with the full value.

 

 

View Tier-0 Gateway

 

We will now review the configuration of the existing Tier-0 (T0) Gateway Tier-0-gateway-01. This T0 Gateway router is configured to use the Uplink connections provided by Edge Cluster edge-cluster-01, which is comprised of Edge Nodes nsx-edge-01 and nsx-edge-02.

  1. Click Networking in the top level navigation menu
  2. Click Tier-0 Gateways in the menu on the left side of the NSX-T Networking user interface
  3. Click the arrow to the left of Tier-0-gateway-01 to expand it
  4. Click the arrow to the left of INTERFACES to expand it
  5. Click the 1 link to the right of External and Service Interfaces to display the Interfaces dialog

 

 

View Tier-0 Gateway BGP Configuration

 

Border Gateway Protocol (BGP) is a communication protocol used by routers to exchange route information. When two or more routers are configured in this way and communicate with one another, they are called neighbors. We will now review Tier-0-gateway-01's BGP configuration.

  1. Click the arrow to the left of BGP to expand it
  2. Click the 2 link to the right of BGP Neighbors to display the BGP Neighbors dialog

 

 

View Tier-0 Gateway Connections

 

Now that we have viewed the configuration between the Tier-0 Gateway and its Uplink, we will explore the connection between Tier-0 and Tier-1 Gateways.

In the previous section, we created a Tier-1 Gateway and connected our sample three-tiered application Segments. While configuring this Tier-1 Gateway, we chose Tier-0-gateway-01 as its Linked Tier-0 Gateway. When this was done, NSX automatically allocated address space and assigned IP addresses to connect the two Gateways. This was done transparently and without any explicit configuration.

In this step, we will review the auto-created subnet connecting the two Gateways.

  1. Click Advanced Networking & Security in the top level navigation menu
  2. Click Routers in the menu on the left side of the NSX-T Advanced user interface
  3. Click the Tier-0-gateway-01 link

 

 

View Tier-1 Gateway Connections

We will now explore the Tier-1 side of the connection between Gateways.

 

 

Open Command Prompt

 

  1. Click the Start Menu icon in the Windows Task Bar
  2. Click Command Prompt

 

 

Trace Route Path from Main Console to Virtual Machines

 

  1. Trace route path from your Admin Console to web server web-01a on Segment LS-web
tracert -d 172.16.10.11

Observe that:

  • The first hop is the physical gateway (192.168.110.1)
  • The physical router then routes the packet to the Uplink interface of Tier-0-gateway-01 (192.168.100.3)
  • Tier-0-gateway-01 routes the packet to its connected interface of Tier-1-gateway-01. This is the autoconfigured subnet we reviewed above (100.64.176.1)
  • Finally, the packet is delivered to server web-01a on NSX Segment LS-web (172.16.10.11)

 

 

Test Connectivity to Sample Web App

 

We will now test connectivity to our sample three-tiered application.

  1. Open a new Tab
  2. Click on the 3 Tier App bookmark
  3. Select Web-01a (Running on esx-01a)

 

 

Web Site is Working

 

Verify that access to our three-tiered web app is working.

 

High Availability (HA)


In this lesson we will configure High Availability (HA) for Tier-0 and Tier-1 Gateways in NSX-T.


 

Review Edge Cluster Configuration

 

  1. Click System in the top level navigation menu
  2. Click Fabric in the menu on the left side of the NSX-T System user interface
  3. Click Nodes
  4. Click Edge Clusters
  5. Click the 2 link for edge-cluster-01's Edge Transport Nodes

Recall from earlier in this module that Edge Transport Nodes provide computational power and North-South routing services to the NSX fabric. Tier-0 and Tier-1 Gateways can provision stateful services, such as NAT or Load Balancing, that are hosted on an Edge Transport Node.

In the event of a power or hardware failure, the loss of an Edge Node could occur. In this instance, any services hosted on that Edge Node would be lost as well. For this reason, Edge Transport Nodes are grouped into an Edge Cluster in NSX-T. Edge Clusters provide fault tolerance and resilience that can withstand individual failures within the cluster.

 

 

Modify Existing Tier-0 Gateway

 

We will now modify the existing Tier-0 Gateway to leverage NSX-T's Edge Clustering capabilities.

  1. Click Networking in the top level navigation menu
  2. Click Tier-0 Gateways in the menu on the left side of the NSX-T Networking user interface
  3. Click the More Options icon to the left of Tier-0-gateway-01 to display its Options menu, then click Edit

 

 

 

Observe that our existing Tier-0 Gateway is configured for an HA Mode of Active Active. This allows the use of multiple Edge Nodes in the Edge Cluster simultaneously. Also note that the Tier-0 Gateway is configured to use Edge Cluster edge-cluster-01.

  1. Click the arrow to the left of INTERFACES to expand it
  2. Click the 1 link to the right of External and Service Interfaces to display the Set Interfaces dialog

 

 

 

  1. Click the arrow to the left of Uplink-A to expand it

Observe that the existing Uplink-A interface is running on Edge Node nsx-edge-01 and is configured for IP address 192.168.100.3. If a failure were to occur on this Edge Node, North-South connectivity to the NSX environment would be lost. We will now add a second Uplink interface to the Tier-0 Gateway that leverages nsx-edge-02, the second Edge Node in Edge Cluster edge-cluster-01.

  1. Click ADD INTERFACE
  2. Create a second interface named Uplink-B on Edge Node nsx-edge-02 with IP address 192.168.100.4/24, mirroring the remaining settings shown for Uplink-A
  3. Click SAVE

 

 

 

  1. Click the arrow to the left of Uplink-B to expand it

Confirm that our Tier-0 Gateway now has two interfaces: Uplink-A and Uplink-B. Interface Uplink-B exists on Edge Node nsx-edge-02 with IP address 192.168.100.4/24.

  1. Click CLOSE

 

 

Open PuTTY

 

  1. Click the Start Menu icon in the Windows Task Bar
  2. Click PuTTY

 

 

Connect to nsx-edge-02a

 

  1. Scroll through the list of Saved Sessions until nsx-edge-02.corp.local is visible
  2. Click nsx-edge-02.corp.local to highlight it
  3. Click Load to load the saved session
  4. Click Open to launch the SSH session

 

 

Log in to nsx-edge-02a.corp.local

 

Log in to nsx-edge-02a.corp.local with the following credentials:

  1. User name: admin (if not pre-filled by the saved session)
  2. Password: VMware1!VMware1!

Once you are authenticated to the Edge Node, maximize the PuTTY window for better visibility.

  1. Click the Windows Maximize icon

 

 

List Logical Routers Connected to nsx-edge-02a.corp.local

 

  1. Get a list of Logical Routers connected to nsx-edge-02a by running the following command:
get logical-routers

Note the VRF number of the Logical Router SR-Tier-0-gateway-01.

NOTE: The VRF number of SR-Tier-0-gateway-01 may differ from the screenshot.

 

 

Verify BGP Neighbor Relationship with Upstream Router

 

  1. Enter the VRF routing context on the Edge Node by entering the following command (NOTE: Replace "6" in the command below with the VRF number found in the previous step):
vrf 6
  2. Get the BGP neighbor status by running the following command:
get bgp neighbor summary

Verify the neighbor relationship with 192.168.100.1 is showing a state of Estab (Established).
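
Optionally, while still in the VRF context, you can inspect the routing table of the Service Router with the standard Edge CLI command below. Routes learned via BGP from the physical router are flagged as such in the output:

get route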

 

 

HA Confirmed

As you can see, we now have two edge nodes that have established connections with our external router, providing redundant North-South routing to the NSX environment.

 

Equal Cost Multi-Pathing (ECMP)


In this lesson we will test Equal Cost Multi-Pathing (ECMP) by simulating an Edge Node failure.


 

Open PuTTY

 

  1. Click the Start Menu icon in the Windows Task Bar
  2. Click PuTTY

 

 

Open Command Prompt

 

We will now start a ping session to one of the sample web servers located on NSX Segment LS-web. Once this has been done, we will shut down nsx-edge-01, simulating a failure of an Edge Node. We should then observe BGP detecting the loss of connectivity to the Edge Node, routing all traffic through nsx-edge-02.

  1. Click the Start Menu icon in the Windows Task Bar
  2. Click Command Prompt

 

 

Ping VM web-01a (Before Failure)

 

  1. Ping web-01a.corp.local on NSX Segment LS-web:
ping web-01a.corp.local

You should observe ping replies from the web server.

 

 

Power Off Edge Node nsx-edge-01

 

We will now connect to vCenter and simulate a failure by powering down Edge Node nsx-edge-01. The loss of this Edge Node will cause all traffic to route through the remaining Edge Node, nsx-edge-02.

  1. Click on the new tab icon
  2. Click on the RegionA folder
  3. Click on RegionA vSphere Client (HTML)
  4. Log in using Use Windows session authentication
  5. Right click on nsx-edge-01, then click Power, then Power Off (these steps mirror the power-on steps at the end of this lesson)

 

 

Ping VM web-01a (During Failure)

 

Return to the Command Prompt window and perform the following:

  1. Ping web-01a.corp.local:
ping web-01a.corp.local

Ping requests should time out for approximately 3 minutes. This occurs because default BGP timers specify a Keep Alive time of 60 seconds and a Hold-Down time of 180 seconds. These settings can be adjusted in a production environment.
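As a rough worked example: with the default Hold-Down time of 180 seconds, the upstream router may wait up to three minutes of missed keepalives before declaring the neighbor dead and withdrawing its routes, which matches the outage window observed here. Lowering the per-neighbor timers on the Tier-0 Gateway (for example, a Keep Alive of 10 seconds and a Hold-Down of 30 seconds) would shrink that worst-case window to about 30 seconds, and mechanisms such as BFD can bring failure detection down to seconds.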

 

 

Ping VM web-01a (After Reconvergence)

 

Once three minutes have elapsed, repeat the ping test and verify reconvergence.

  1. Ping web-01a.corp.local:
ping web-01a.corp.local

You should observe that connectivity has been restored and ping replies are being received for server web-01a. If this is not the case, please wait a moment and try again.

 

 

Power On Edge Node nsx-edge-01

 

Now that we have tested fault tolerance on the Edge Node, we will return nsx-edge-01 to a Powered On state. Return to the vSphere Client, then perform the following.

  1. Right click on nsx-edge-01
  2. Click Power
  3. Click Power On

 

Module 3 Conclusion



 

You've finished Module 3

Congratulations on completing Module 3.

Please proceed to any module below which interests you the most.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 4 - NAT and DHCP (30 minutes)

NAT and DHCP - Module 4 Overview



 

NAT and DHCP with NSX-T

The goal of this lab is to demonstrate the Network Address Translation (NAT) capabilities and Dynamic Host Configuration Protocol (DHCP) server in NSX-T.

Module 4 Lessons:

  • Create a new Tier-1 Gateway, specify an Edge Cluster for it, and configure NAT on it.
  • Create a new Logical Segment, configure a DHCP server on it, and test DHCP.

 

Configuring Network Address Translation (NAT)


In this chapter, we will create a new Tier-1 Gateway, then enable and configure NAT on it. This NAT configuration will allow access to an internal web server through a translated (NAT) address.

 

If you still have your browser open to the NSX Manager, please skip the following steps. Otherwise, perform the following steps.

  1. Click on the Google Chrome link on the desktop

 

Open a New tab in Chrome

 

  1. Click to open a new tab

 

 

Select Favorite for NSX Manager

 

  1. Click on the VMware NSX | Login link in the bookmark bar

 

 

NSX Manager Login

 

Enter the following login credentials for the NSX Manager.

  1. User name: admin
  2. Password: VMware1!VMware1!
  3. Click on Login

 

 

Navigate to the Tier-1 Gateway section in the Simplified UI

 

  1. Click on the Networking Section
  2. Click on Tier-1 Gateways

 

 

Create a New T1 Gateway

 

 

  1. Click on Add Tier-1 Gateway
  2. Name the new Gateway Tier-1-Gateway-02
  3. Select Tier-0-gateway-01 as the Tier-0 Gateway to link to
  4. Select edge-cluster-02 to run the services on
  5. Click Save to save the settings

 

 

Change focus to the Segments section

 

  1. Click No to end the Tier-1 Gateway configuration
  2. Click Segments to enter the Logical Segments configuration

 

 

Edit logical segment LS-web

 

  1. Click the three dots next to Logical Segment LS-web

 

 

Edit the LS-web segment

 

  1. Click Edit

 

 

Move Logical segment LS-web to new T-1 Gateway

 

  1. Select the new Tier-1-Gateway-02 object you created in the drop down list
  2. Click Save

 

 

Confirm your new changes

 

You have moved a logical segment from one Gateway to another.

  1. Use the scroll bar to scroll to the bottom
  2. Verify your changes were saved
  3. Click Close Editing

 

 

Navigate to NAT rule creation on new Tier-1 Gateway

 

 

  1. Click on NAT
  2. Select Tier-1-Gateway-02 from the drop down list

 

 

Add DNAT rule

 

Configure your DNAT rule as follows:

  1. Click on Add NAT Rule
  2. Name the rule web03a-DNAT
  3. Select DNAT action
  4. Input 80.80.80.1 as the destination IP
  5. Input 172.16.10.13 as the Translated IP
  6. Click Save

 

 

Add SNAT rule

 

Configure your SNAT rule as follows:

  1. Click on Add NAT Rule
  2. Name the rule web03a-SNAT
  3. Select SNAT action
  4. Input 172.16.10.13 as the Source IP
  5. Input 80.80.80.1 as the Translated IP
  6. Click Save

 

 

Review NAT rules

 

Verify your rules are configured properly
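Taken together, the two rules give web-03a a consistent external identity of 80.80.80.1: the DNAT rule translates inbound traffic destined to 80.80.80.1 to the server's real address 172.16.10.13, while the SNAT rule translates traffic originated by 172.16.10.13 so that it leaves the environment sourced from 80.80.80.1.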

 

 

Navigate back to the Tier-1 Gateway settings

 

  1. Click on Tier-1-Gateways
  2. Click the three dots next to Tier-1-Gateway-02

 

 

Edit the Tier-1 gateway

 

  1. Click Edit

 

 

Change the route Advertisement

 

For the Tier-0 Gateway to advertise the new NAT IP address and the gateway IP on the 172.16.10.0/24 subnet via BGP, we need to change the Route Advertisement settings on the Tier-1 Gateway.

  1. Expand Route Advertisement
  2. Enable All connected Segments & Service Ports
  3. Enable All NAT IP's
  4. Click Save

 

 

 

Review changes

 

  1. Use the scroll bar on the right hand side to scroll to the bottom
  2. Verify changes were saved
  3. Click Close Editing

 

 

Testing the NAT

 

Now, let's test whether the NAT rules are working.

  1. Click on the Windows Start icon
  2. Click on Command Prompt

 

 

Ping Test

 

  1. Perform the following ping test by typing this in the Command Prompt window:
ping 80.80.80.1

You should receive successful ping replies from 80.80.80.1.
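Optionally, you can extend the test beyond ICMP. Assuming the demo web page is served over HTTPS (as in the load balancing module), browsing to https://80.80.80.1 from the Main Console should display the same page that the Web-03a bookmark serves, since the DNAT rule is not limited to a specific port.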

Congratulations!  You have successfully set up DNAT and SNAT services for web-03a.corp.local.

 

Configuring Dynamic Host Configuration Protocol (DHCP)



 

Getting started with DHCP

 

If you still have your browser open to the NSX Manager, please skip the following steps. Otherwise, perform the following steps.

  1. Click on the Google Chrome link on the desktop

 

 

Open a New tab in Chrome

 

  1. Click to open a new tab

 

 

Select Favorite for NSX Manager

 

  1. Click on the VMware NSX | Login link in the bookmark bar

 

 

NSX Manager Login

 

Enter the following login credentials for the NSX Manager.

  1. User name: admin
  2. Password: VMware1!VMware1!
  3. Click on Login

 

 

Navigate to the DHCP section in the Simplified UI

 

  1. Click on Networking on the top bar
  2. Click on IP Address Management
  3. Click on DHCP

 

 

DHCP Server Creation and Settings

 

Add the DHCP server and input the following settings:

  1. Click ADD SERVER
  2. Choose DHCP Server from the drop down list
  3. Name the server DHCP-01
  4. Set the IP address as 172.16.150.1/24
  5. Choose edge-cluster-02 from the drop down
  6. Click SAVE

 

 

DHCP server created

 

Now that your new DHCP server has been created, we need to configure the Tier-0 Gateway to use it.

 

 

Navigate back to edit the T-0 Gateway

 

 

 

Edit the T-0 gateway

  1. Click Tier-0 Gateways
  2. Click the three dots next to Tier-0-gateway-01

 

  1. Click Edit

 

 

Attach the DHCP Server to the T-0 Gateway

 

  1. Click on the No IP Allocation link

 

 

Select the DHCP server

 

  1. Select DHCP Local Server from the Type drop down box
  2. Select DHCP-01 from the DHCP Server drop down box
  3. Click Save

 

 

Verify DHCP server was connected

 

  1. Verify IP Address Management displays "Local | 1 Servers"
  2. Click Save

 

 

Confirm changes and close editing

 

  1. Confirm the changes were saved
  2. Click Close Editing

 

 

Navigate to Segments section in the Simplified UI

 

  1. Click on Networking
  2. Click on Segments

 

 

Create new Logical Segment

 

  1. Click on Add Segment on the top left hand side
  2. Type LS-DHCP-web for the Segment Name
  3. Choose Tier-0-gateway-01 from the drop down under Uplink & Type
  4. Choose TZ-Overlay | Overlay as the Transport Zone
  5. Click Set Subnets

 

 

Set Logical Segment Subnet and IP

 

 

  1. Click ADD SUBNET
  2. Input 172.16.50.1/24 as the Gateway IP
  3. Add DHCP range 172.16.50.101-172.16.50.110
  4. Click the new DHCP range entry to accept it
  5. Click Add
  6. Once the Subnet is added, click Apply
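A quick sanity check on these values: 172.16.50.1/24 becomes the segment's gateway address, and the range 172.16.50.101-172.16.50.110 provides ten leasable addresses. Since web-04a will be the only DHCP client on this segment, it should receive the first address in the range, 172.16.50.101, which is the address we will ping later in this lesson.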

 

 

Save the new segment

 

  1. Now that you are done with the Logical Segment configuration, click Save on the bottom left

 

 

 

Launch a new tab for vCenter

 

  1. Click to open a new tab
  2. Click on RegionA folder
  3. Click HTML5 Client

 

 

Login to vCenter

 

  1. Select the "Use Windows session authentication" checkbox
  2. Click Login

 

 

Edit settings of web-04a

 

  1. Change focus to Hosts and Clusters
  2. Select web-04a
  3. Click on Actions
  4. Select Edit Settings

 

 

Edit network adapters

 

  1. Un-select Connect at power on for Network adapter 1
  2. Select Connect at power on for Network adapter 2
  3. Click the drop down arrow for Network adapter 2 and choose Browse

 

 

Select LS-DHCP-web logical segment

 

  1. Select LS-DHCP-web logical segment
  2. Click OK

 

 

Save the new settings

 

  1. Verify the settings below and click OK
  • Network adapter 1 = disconnected
  • Network adapter 2 = LS-DHCP-web
  • Network adapter 2 = connected

 

 

Power on Web-04a

 

Now that you are back on the VM Summary page, let's power on the VM.  Note: If this VM was already powered on from a previous module, select Restart Guest OS instead of Power On.

  1. Select web-04a
  2. Click on Actions
  3. Select Power
  4. Click on Power On

 

 

Launch Remote Console

 

  1. Click on Launch Remote Console to connect to the VM

 

 

Select VMRC

 

  1. If prompted choose VMRC
  2. Click OK

 

 

Login to web-04a

 

We will now log in to the VM and verify that DHCP works.

  1. Login name: root
  2. Password: VMware1!

 

 

Verify DHCP address
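The screenshot shows the address the VM received. To confirm it from the VM console yourself (assuming Network adapter 2 maps to eth1 inside the guest), run:

ifconfig eth1

The inet address shown should fall within the DHCP range 172.16.50.101-172.16.50.110 configured earlier.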

 

 

Shut down eth0

 

In order to keep eth0 from conflicting with our next test, we need to shut it down.

  1. Type the following command to shut eth0 down:
ifconfig eth0 down
  1. To verify that it worked, type:
ifconfig

You should no longer see eth0 in the list.
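Note: ifconfig with no flags lists only interfaces that are up. If you want to confirm that eth0 still exists but is down, ifconfig -a will still list it, without the UP flag.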

 

 

Launch a command prompt window

 

Note: You may need to press Ctrl + Alt to exit the console screen

  1. Click the windows icon
  2. Select Command Prompt

 

 

Verify web-04a communication via ping

 

  1. To verify communication, type:
ping 172.16.50.101

You should receive replies from 172.16.50.101, the address web-04a obtained from the DHCP range.

Congratulations! You have configured and tested DHCP!

 

Module 4 Conclusion



 

You've finished Module 4

Congratulations on completing Module 4.  In this module you learned about how NAT and DHCP can be deployed in NSX-T.

Please proceed to any module below which interests you the most.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 5 - Load Balancing (15 minutes)

Load Balancing Module Overview


Load Balancing Module Overview

The goal of this lab is to explore load balancing in NSX-T.  In this module you will execute the following tasks:

  • Create a new load balancer using the new Simplified UI
  • Create health checks for HTTPS services on the web-01a and web-02a web servers
  • Create a VIP to load balance web server traffic across two separate web servers
  • Test the load balancing services


 

Lab topology:

 

 

Configuring Load Balancing


 

If you still have your browser open to the NSX Manager, please skip the following steps. Otherwise, perform the following steps.

  1. Click on the Google Chrome link on the desktop

 

Open a New tab in Chrome

 

  1. Click to open a new tab

 

 

Select Favorite for NSX Manager

 

  1. Click on the VMware NSX | Login link in the bookmark bar

 

 

NSX Manager Login

 

Enter the following login credentials for the NSX Manager.

  1. User name: admin
  2. Password: VMware1!VMware1!
  3. Click on Login

 

 

Navigate to the Tier-1 Gateways section

 

In order to create a load balancer, we need a Tier-1 Gateway deployed to an Edge Cluster.

  1. Click on Networking
  2. Click on Tier-1 Gateways

 

 

Create a new Tier-1 Gateway

 

  1. Click ADD TIER-1 GATEWAY
  2. Name the new Gateway Tier-1-LB-01
  3. Select Tier-0-gateway-01
  4. Select edge-cluster-02 (Note: Tier-1 Gateways used for load balancing services must be placed on Edge Nodes of medium or large size)
  5. Expand Route Advertisement

 

 

Set Route Advertisement settings

 

Since this Tier-1 Gateway is going to be used for load balancing, and we will also be using NAT, we need to enable NAT and VIP route advertisement on the Tier-1 Gateway.

  1. Use the scroll bar to scroll down to the bottom
  2. Enable advertisement of All NAT IP's, All LB VIP Routes, and All LB SNAT IP Routes
  3. Click the Save button

 

 

 

 

 

Click no to continue

 

  1. Click No to continue

 

 

Navigate to the Tier-0 Gateways section

 

  1. Click on Networking
  2. Click Tier-0 Gateways

 

 

Edit Tier-0-gateway-01

 

  1. Click the three dots next to Tier-0-gateway-01
  2. Click on edit

 

 

Edit route redistribution on the T-0

 

  1. Expand the Route Re-distribution section
  2. Click the number next to Route Re-Distribution

 

 

Set Route Re-distribution

 

Since we rely on the Tier-0 Gateway to re-distribute the routes from the Tier-1, we also need to allow the LB and NAT routes to be re-distributed at the Tier-0 level.

  1. Check the LB VIP and LB SNAT IP check boxes
  2. Click Apply

 

 

Save the settings

 

  1. Click the Save button

 

 

Close editing

 

  1. Click Close Editing

 

 

Navigate to the Load Balancing Section in the Simplified UI

 

  1. Click on Networking
  2. Click on Load Balancing

 

 

Create new Load Balancer

 

Now that we have the Tier-1 and routing requirements set up, let's create our load balancer.

  1. Click on ADD LOAD BALANCER
  2. Name the Load balancer LB-01
  3. Select Small for the size
  4. Select our previously created Tier-1-LB-01 gateway
  5. Click SAVE

 

 

Click no to continue

 

  1. Click NO to continue

 

 

Create Health Monitor

 

Let's create a health monitor for our 3 Tier App.

  1. Select MONITORS
  2. Click ADD ACTIVE MONITOR
  3. Choose HTTPS

 

 

Set monitor parameters

 

  1. Name the monitor Webapp-monitor
  2. Input 443 for the port
  3. Click Configure next to HTTP Request

 

 

Set HTTP Request configuration

 

  1. Input /cgi-bin/app.py as the HTTP Request URL
  2. Click on HTTP Response Configuration

 

 

Set HTTP Response configuration

 

  1. Add Customer Database as the HTTP Response Body
  2. Click Apply

 

 

Save the new monitor settings

 

  1. Click Save to save the new monitor settings
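To see what this monitor effectively does, you can reproduce its probe by hand. A minimal sketch, assuming curl is available on the Main Console (-k skips certificate validation, in case the lab web servers use self-signed certificates):

curl -k https://172.16.10.11/cgi-bin/app.py

A pool member is marked UP only if the HTTPS request to port 443 for /cgi-bin/app.py succeeds and the response body contains the string Customer Database.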

 

 

Create a new Server pool

 

Let's create a new server pool

  1. Click SERVER POOLS
  2. Click ADD SERVER POOL
  3. Type Web-servers-01 for the pool name
  4. Choose the Webapp-monitor we created as the monitor
  5. Leave the SNAT translation mode at the default Automap
  6. Click Select Members

 

 

Add pool members

 

 

  1. Click ADD Member and fill out the following details
  2. Name: web-01a
  3. IP: 172.16.10.11
  4. Port: 443
  5. Click Add

 

 

Repeat the process to add a second web server

 

  1. Click ADD Member and fill out the following details
  2. Name: web-02a
  3. IP: 172.16.10.12
  4. Port: 443
  5. Click Add
  6. Click Apply

 

 

Save the new server pool

 

  1. Click Save

 

 

Create virtual server

 

 

  1. Click VIRTUAL SERVERS
  2. Click Add Virtual Server
  3. Choose L4 TCP from the drop down.

 

 

Configure the Virtual Server

 

Do the following to configure the virtual server settings

  1. Type Webapp-VIP for the virtual server name
  2. Set the IP address to 172.16.10.10 (This IP has a DNS record of webapp.corp.local)
  3. Type port 443 in for the port
  4. Be sure to Click Add Items to save the 443 port
  5. Select LB-01 we created earlier for the Load Balancer
  6. Select server pool Web-servers-01 we created earlier
  7. Click Save
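To summarize how this L4 TCP virtual server behaves: a client connects to 172.16.10.10:443 (webapp.corp.local), LB-01 selects a healthy member from the Web-servers-01 pool, and, because the pool uses the Automap SNAT mode, the load balancer replaces the client's source IP with its own interface address so that return traffic from the web server flows back through the load balancer.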

 

 

 

Verify Virtual Server comes up

 

 

  1. Click on the refresh button a couple of times
  2. Verify the Virtual Server comes UP (Note: the UP status may take 30 seconds to a minute)

 

 

 

Test the new VIP

 

  1. Open a new tab in Chrome
  2. Choose the Webapp shortcut from the drop down

 

 

Verify our VIP works

 

You will now see our test web application and the Customer Database information.

  1. Note down which web server you were connected to

 

 

Launch a new tab for vCenter

 

Let's log in to the vSphere Client so we can manually fail one of the web servers.

  1. Click to open a new tab
  2. Click on RegionA folder
  3. Click HTML5 Client

 

 

Login to vCenter

 

  1. Select the "Use Windows session authentication" checkbox
  2. Click Login

 

 

Select the Web server you connected to

 

The new load balancer will most likely have connected you to web-02a, but it may have connected you to web-01a.  Let's power off whichever one you were connected to.

  1. Select the web server you noted down in the previous step.
  2. Click on Actions

 

 

Power the web server off

 

  1. Select Power from the Actions drop down list
  2. Power off the virtual machine

 

 

Verify the Web Application

 

  1. Change focus back to the Web application Chrome TAB
  2. Click refresh
  3. Verify you are now connected to the remaining web server pool member (if you powered off web-02a, this should be web-01a)

 

Module 5 Conclusion


Congratulations on completing Module 5.

Please proceed to any module below which interests you the most.


 

How to End Lab

 

To end your lab click on the END button.  

 

Module 6 - Distributed Firewall and Tools (15 minutes)

Distributed Firewall and Tools - Module Overview



 

DFW and Tools

The goal of this module is to demonstrate how the distributed firewall (DFW) and operational tools within NSX-T function and are configured.  The Distributed Firewall in NSX-T 2.4 is installed by default with a Connectivity Strategy of "Blacklist".  This means that all traffic is permitted and micro-segmentation is "off".  In this module we will execute the following operations:

 

 

HOL-2026 Logical Diagram

This diagram illustrates what virtual machines make up our 3 Tier Web App for testing

 

 

 

3 Tier Web App ports

This diagram illustrates the port requirements for our 3 Tier Web App

 

 

Distributed Firewall



 

Distributed Firewall in NSX-T

In this Chapter we will review and configure the Distributed Firewall of NSX-T

By default the NSX-T Connectivity Strategy is Blacklist. This means that all traffic is allowed, and "Deny" firewall rules need to be created in order to block traffic.  Let's verify our pre-created 3 Tier App works as expected.

 

 

Connect to 3 Tier Web App

 

  1. Double click on the Chrome icon on the desktop

 

  1. Click on the 3 Tier App bookmark bar folder
  2. Click on the Web-01a shortcut

 

 

Verify connectivity to Web-01a

 

  1. Verify you have successfully connected to Web-01a and it has retrieved data from the App server

Note: Feel free to test Web-02a and Web-03a shortcuts to verify they work as well.  

Now that we have verified our 3 Tier App works let's change the Connectivity Strategy to Whitelist

 

 

Login to NSX-T Manager

 

 

  1. Click to open a new tab
  2. Click NSX-T shortcut to launch log in page

 

  1. Input admin for the user name and VMware1!VMware1! for the password
  2. Click Log in

 

 

Navigate to the DFW management page

 

  1. Click on Security
  2. Click on Distributed Firewall
  3. Click on Blacklist

 

  1. Click the radio button next to Whitelist to change the Connectivity Strategy
  2. Click Save to save the configuration

Now that the Connectivity Strategy is Whitelist, explicit allow rules must be created for communication to be allowed in the environment.  Let's verify our 3 Tier App is being blocked.

 

 

Verify 3 Tier App traffic is blocked

 

  1. Switch back to the first Chrome tab
  2. Click the 3 Tier App bookmark folder
  3. Click the Web-01a shortcut
  4. Verify the App can no longer be accessed

Note:  It may take up to 20 seconds for the page to time out; you can also verify web-02a and web-03a cannot be accessed.  Now that we know the app cannot be reached, let's enable the preconfigured rules and test again.

 

 

Switch back to the NSX-T management tab

 

  1. Click the NSX-T management tab

 

 

Explore the preconfigured 3 Tier App rules

 

  1. Verify you are in the application section of the DFW
  2. Expand the 3 Tier App section
  3. Review the preconfigured rules required for the 3 Tier App to function

 

 

Enable the preconfigured DFW rules

 

  1. Enable each preconfigured rule by clicking the enable / disable slider to the right of the rule
  2. Click Publish to save the settings

Now that the allow rules are enabled, let's test the 3 Tier App connectivity.

 

 

Test 3 Tier App connectivity

 

  1. Switch back to the first Chrome tab
  2. Click the 3 Tier App bookmark folder
  3. Click the Web-01a shortcut
  4. Verify the App can be accessed again

We have now enabled the DFW within NSX-T and proven that the preconfigured 3 Tier App rules work as expected.  We will now delete and recreate the rules and groups to take a more detailed look at how they are configured.  If you would like to skip this configuration, you can jump ahead to the next module.

 

 

Delete the preconfigured 3 Tier App policy

 

  1. Switch back to the NSX-T Chrome tab
  2. Click the 3 dots next to the 3 Tier App policy
  3. Click Delete policy

 

 

Publish the changes
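
  1. Click Publish to save the changes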

 

 

Navigate to the Groups screen in the Inventory
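
  1. Click Inventory
  2. Click Groups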

 

 

Delete the preconfigured groups

 

  1. Delete the app_servers, db_servers, and web_servers groups by following the next step for each

 

 

  1. Click the three dots
  2. Click Delete
  3. Click Delete to confirm

Do this for all three groups (app_servers, db_servers, and web_servers)

 

 

Refresh the screen

 

  1. Click refresh
  2. Verify the groups were deleted

 

 

Create Web Servers group

 

  1. Click Add Group
  2. Input web_servers
  3. Click Set Members

 

 

Select Web Server group members

 

  1. Click on Members
  2. Select Category Virtual Machine from the drop down
  3. Scroll to the bottom of the list
  4. Check box all 4 Web Servers
  5. Click Apply

Note: This is just one way of creating a group of virtual machines; the groups you previously deleted utilized tags instead of static members.

 

 

  1. Click Save to save the group

 

 

Create App Servers group

 

  1. Click Add Group
  2. Input app_servers
  3. Click Set Members

 

 

Select App Server group members

 

  1. Click on Members
  2. Select Category Virtual Machine from the drop down
  3. Check box app-01a
  4. Click Apply

Note: This is just one way of creating a group of virtual machines; the groups you previously deleted utilized tags instead of static members.

 

 

  1. Click Save to save the group

 

 

Create DB Servers group

 

  1. Click Add Group
  2. Input db_servers
  3. Click Set Members

 

 

Select DB Server group members

 

  1. Click on Members
  2. Select Category Virtual Machine from the drop down
  3. Check box db-01a
  4. Click Apply

Note: This is just one way of creating a group of virtual machines; the groups you previously deleted utilized tags instead of static members.

 

 

  1. Click Save to save the group

 

 

Verify your new groups have been created

 

  1. Verify your three groups were created
  2. Optional: You can click each group's View Members link to verify the correct VMs are included.

 

 

3 Tier App port requirements

 

As a reminder, here are the port requirements for the 3 Tier App to function.  Next, let's go to the Distributed Firewall section to create the rules.

 

 

Navigate to the DFW section

 

  1. Click Security
  2. Click Distributed Firewall
  3. Click Application

 

 

Create a new Security Policy

 

  1. Click Add Policy
  2. Type 3 Tier App in the text field

 

 

Add Client Access rule

 

  1. Click the three dots next to the 3 Tier App Policy
  2. Click Add Rule

 

  1. Click on the name field and name the rule Client Access
  2. Leave the Source as Any
  3. Click on the pencil icon under Destinations

 

  1. Check the check box next to web_servers group you created earlier
  2. Click Apply to save the Destination

 

  1. Click the pencil icon under Services

 

  1. Type HTTPS in the search box to find the HTTPS service
  2. Check the check box next to the HTTPS service
  3. Click Apply

 

 

Verify and publish the Client Access Rule

 

  1. Verify the Client Access rule is configured as follows
  2. Click Publish

Client Access Rule settings:

Name:  Client Access

Sources: Any

Destinations: web_servers

Services: HTTPS

Action: Allow

 

 

Create Web to App access rule

 

  1. Use the same process to create the Web to App Access rule as you did for the Client Access rule
  2. Click Publish

Web to App Access Rule settings:

Name:  Web to App Access

Sources: web_servers

Destinations: app_servers

Services: TCP_8443

Action: Allow

 

 

Create App to DB access rule

 

  1. Use the same process to create the App to DB Access rule as you did for the Client Access rule
  2. Click Publish

App to DB Access Rule settings:

Name:  App to DB Access

Sources: app_servers

Destinations: db_servers

Services: HTTP

Action: Allow
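A note on what happens to everything else: because the Connectivity Strategy was set to Whitelist earlier in this module, any traffic that does not match one of these three allow rules is dropped by the default rule. That is why, for example, the web servers cannot reach the database tier directly; only the App to DB path over HTTP is permitted.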

 

 

Test the 3 Tier App

 

  1. Switch back to the 3 Tier App Chrome tab
  2. Click the 3 Tier App bookmark folder
  3. Click web-01a shortcut
  4. Verify the 3 Tier App functions properly

Congratulations! You have successfully configured micro-segmentation rules for a 3 Tier App!

 

Module 6 Conclusion



 

You've finished Module 6

Congratulations on completing Module 6.

Please proceed to any module below which interests you the most.

 

 

Test Your Skills!

 

Now that you’ve completed this lab, try testing your skills with VMware Odyssey, our newest Hands-on Labs gamification program. We have taken Hands-on Labs to the next level by adding gamification elements to the labs you know and love. Experience the fully automated VMware Odyssey as you race against the clock to complete tasks and reach the highest ranking on the leaderboard. Try the Network Security Odyssey lab

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 7 - NSX-T and vRealize Network Insight

Introduction


VMware vRealize Network Insight helps customers build an optimized, highly available and secure network infrastructure across multi-cloud environments. It accelerates micro-segmentation deployment, minimizes business risk during application migration and enables customers to confidently manage and scale NSX deployments. 

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

*** SPECIAL NOTE ***    The simulation you are about to do consists of three parts:

Part 1 - Building Microsegmentation with vRealize Network Insight

Part 2 - NSX Events and Alerting with vRealize Network Insight

Part 3 - Monitoring NSX-T with vRealize Network Insight

Click on each link to open the interactive simulation. To continue to the next part of the simulation, you will need to click on "Return to the Lab" in the upper right of the screen.


Module 7 Conclusion



 

You've finished Module 7

Congratulations on completing Module 7.

Please proceed to any module below which interests you the most.

 

 

Test Your Skills!

 

Now that you’ve completed this lab, try testing your skills with VMware Odyssey, our newest Hands-on Labs gamification program. We have taken Hands-on Labs to the next level by adding gamification elements to the labs you know and love. Experience the fully automated VMware Odyssey as you race against the clock to complete tasks and reach the highest ranking on the leaderboard. Try the NSX-T Odyssey lab

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-2026-01-NET

Version: 20200602-171106