VMware Hands-on Labs - HOL-1903-01-NET


Lab Overview - HOL-1903-01-NET - Getting Started with VMware NSX Data Center

Lab Guidance


Note: It may take more than 90 minutes to complete this lab. You should expect to finish only 2-3 of the modules during your time. The modules are independent of each other, so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

NSX Data Center for vSphere is VMware's network virtualization platform for the Software-Defined Data Center (SDDC), delivering networking and security features entirely in software, abstracted from the underlying physical infrastructure.

In this lab, you will be introduced to the core capabilities of VMware NSX Data Center in a vSphere environment. You will gain hands-on experience with Logical Switching, Distributed Logical Routing, Dynamic Routing and Logical Network Services.

Lab Module List:

Lab Captains:

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console. The Lab Manual is on the tab to the right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer. The lab cannot be saved, so all of your work must be done during the lab session. However, you can click the EXTEND button to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes; each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes; each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides typing it in directly, there are two helpful methods of entry that make it easier to work with complex data.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Click once in active console window

 

In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.

  1. Click once in the active console window.
  2. Click on the Shift key.

 

 

Click on the @ key

 

  1. Click on the "@ key".

Notice the @ sign entered in the active console window.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved to and run on any platform. The Hands-on Labs take advantage of this, allowing us to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check that your lab has finished all of its startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

 

Allow vmware-cip-launcher.exe

 

On occasion, the lab may be provisioned with Chrome settings reset to their default value. If this occurs, you may receive the dialog prompt shown above. Please perform the following steps to allow the launcher to run with the vSphere Web Client (Flash):

  1. Click to select Always open these types of links in the associated app.
  2. Click to select Open vmware-cip-launcher.exe.

You may then proceed with the rest of the lab normally.

 

 

Minimize Recent Tasks and Recent Objects in vSphere Web Client

 

Due to the screen resolution of the Hands-On Labs desktop, some components of the NSX user interface may appear constrained or missing during this lab. In order to maximize usable screen space, it is recommended to minimize the Recent Objects and Recent Tasks panels in the vSphere Web Client (Flash). To do this, please complete the following:

  1. Click the pin icon in the top right of the Recent Objects panel.
  2. Click the pin icon in the top right of the Recent Tasks panel.

 

Module 1 - NSX Manager Installation and Configuration (15 minutes)

Introduction


VMware NSX Data Center is the leading network virtualization platform, bringing the operational model of virtual machines to the network. Just as server virtualization provides extensible control of virtual machines running on a pool of server hardware, network virtualization with NSX Data Center provides a centralized API to provision and configure virtual network services that run on a single physical network.

Logical networks decouple virtual machine connectivity and network services from the physical network, providing customers with the flexibility to place or migrate virtual machines anywhere in the data center while still supporting layer-2 / layer-3 connectivity and layer 4-7 network services.

In this module, we will use an Interactive Simulation to focus on how to perform the actual deployment of NSX Data Center in your environment. In the lab environment, the actual deployment has already been completed for you.

In the Interactive Simulation, you will see how to:

  1. Deploy the NSX Manager appliance and integrate it with vCenter.
  2. Configure logging and backups for NSX Manager.
  3. Deploy the NSX Controller cluster and install the host kernel modules (VIBs).
  4. Configure VXLAN: VTEP deployment, Segment ID pool creation and Transport Zone creation.
 

NSX Components

 

Note that a cloud management platform (CMP) is not a component of NSX, but NSX provides integration into virtually any CMP via the REST API and out-of-the-box integration with VMware CMPs.

The primary components of NSX are broken down into three categories:

Management Plane - The NSX management plane is built by the NSX Manager, the centralized network management component of NSX. It provides the single point of configuration and REST API entry-points.

Control Plane - The NSX control plane runs in the NSX Controller cluster. The NSX Controller is an advanced distributed state management system that provides control plane functions for NSX logical switching and routing functions. It is the central control point for all logical switches within a network and maintains information about all hosts, logical switches (VXLANs), and distributed logical routers.

Data Plane - The NSX data plane consists of the NSX vSwitch, which is based on the vSphere Distributed Switch (VDS) with additional components to enable services. NSX kernel modules, userspace agents, configuration files, and install scripts are packaged in VIBs and run within the hypervisor kernel to provide services such as distributed routing and logical firewall and to enable VXLAN bridging capabilities.

 

Hands-on Labs Interactive Simulation: NSX Installation and Configuration - Part 1


This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

*** SPECIAL NOTE ***    The simulation you are about to do is comprised of two parts. The first part will finish at the end of NSX Manager configuration. To continue to the second half of the simulation you will need to click on "Return to the Lab" in the upper right of the screen. The manual will also outline the steps at the conclusion of the NSX Manager configuration.

  1. Click here to open the interactive simulation. It will open in a new browser window or tab.
  2. When finished, click the “Return to the lab” link to continue with this lab.

 


Hands-on Labs Interactive Simulation: NSX Installation and Configuration - Part 2


This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

  1. Click here to open the interactive simulation. It will open in a new browser window or tab.
  2. When finished, click the “Return to the lab” link to continue with this lab.

 


Module 1 Conclusion


In this module, we showed the simplicity with which NSX can be installed and configured to provide layer two through seven services within software.

We covered the installation and configuration of the NSX Manager appliance which included deployment, integrating with vCenter and configuring logging and backups. We then covered the deployment of NSX Controllers and installation of the VMware Infrastructure Bundles (VIBs), which are kernel modules pushed down to the hypervisor to provide NSX services. Finally, we showed the automated deployment of VXLAN Tunnel Endpoints (VTEPs), creation of a VXLAN Network Identifier (VNI) pool and the creation of a Transport Zone.


 

You have completed Module 1

Congratulations on completing Module 1.

If you are looking for additional information on deploying NSX, please review the NSX 6.4 Documentation Center via the URL below:

Proceed to any of the following modules:

Lab Module List:

Lab Captains:

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 2 - Logical Switching (30 minutes)

Logical Switching - Module Overview


In this module, we will explore the following components of VMware NSX:


Logical Switching


In this section, we will be doing the following:

  1. Confirm the configuration readiness of the hosts.
  2. Confirm logical network preparation.
  3. Create a new logical switch.
  4. Attach the logical switch to the NSX Edge Services Gateway.
  5. Add VMs to the logical switch.
  6. Test connectivity between VMs.

 

Access vSphere Web Client (Flash)

 

  1. Bring up the vSphere Web Client (Flash) via the icon on the desktop labeled Google Chrome.

 

 

Login to the vSphere Web Client (Flash)

 

If you are not already logged into the vSphere Web Client:

(The home page should be the vSphere Web Client. If not, click on the vSphere Web Client (Flash) icon in Google Chrome.)

  1. Type in administrator@vsphere.local into User name
  2. Type in VMware1! into Password
  3. Click Login

 

 

Navigate to Networking & Security in vSphere Web Client

 

  1. Click Home icon.
  2. Click Networking & Security.

 

 

View the deployed components

 

  1. Click Installation and Upgrade.
  2. Click Host Preparation.
  3. Click to select a cluster from the list (RegionA01-COMP01 in this example) to view information about the NSX state of the hosts in that cluster.

You will see that the network virtualization components, also called the data plane components, are installed on the hosts in our clusters. These components include the following: Hypervisor level kernel modules for Port Security, VXLAN, Distributed Firewall and Distributed Routing.

The firewall and VXLAN functions are configured and enabled on each cluster after the installation of the network virtualization components, while the distributed routing module is enabled once an NSX Edge logical router control VM has been deployed.

 

 

The topology after the host is prepared with data path components

 

 

 

View the VTEP configuration

 

  1. In the list of hosts displayed, scroll right so the VIEW DETAILS link is visible.
  2. Click VIEW DETAILS to view information about that host's VTEP kernel port and IP address.

VXLAN configuration can be broken down into three important steps:

  1. Configure VXLAN Tunnel End Point (VTEP) interfaces on each host.
  2. Configure the Segment ID pool (and, if required, a multicast address range).
  3. Define a Transport Zone that determines the span of the logical switches.

As shown in the diagram, the hosts have been configured with VXLAN Tunnel End Point (VTEP) interfaces. The environment uses the 192.168.130.0/24 subnet for the VTEP pool.
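As a rough illustration (not the actual NSX implementation), the idea of carving VTEP addresses out of an IP pool such as 192.168.130.0/24 can be sketched in a few lines of Python; the host names below are hypothetical:

```python
import ipaddress

def build_vtep_pool(cidr, count=None):
    """Return usable VTEP addresses from a subnet, in order."""
    hosts = list(ipaddress.ip_network(cidr).hosts())
    return hosts if count is None else hosts[:count]

# Hand one VTEP IP to each prepared host (host names are illustrative).
hosts = ["esx-01a", "esx-02a", "esx-03a", "esx-04a"]
pool = build_vtep_pool("192.168.130.0/24", count=len(hosts))
vteps = {h: str(ip) for h, ip in zip(hosts, pool)}
print(vteps)  # esx-01a -> 192.168.130.1, and so on
```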

 

 

Segment ID and Multicast Group Address Configuration

One of the key challenges with VXLAN deployment in the past was that multicast protocol support was required from physical network devices. This challenge is addressed in the NSX platform by providing a controller-based VXLAN implementation, which removes the need to configure multicast in the physical network. NSX provides three options for Broadcast, Unknown unicast and Multicast (BUM) traffic: Multicast, Unicast and Hybrid. This option is defined globally as part of the transport zone, but can be explicitly defined per logical switch as well.

The three modes of replication available in NSX are as follows:

  1. Multicast - NSX relies on the physical network's Layer 2/Layer 3 multicast capability to replicate BUM traffic.
  2. Unicast - The source hosts replicate BUM traffic themselves; no multicast configuration is required in the physical network.
  3. Hybrid - Replication within a VTEP segment is offloaded to Layer 2 multicast on the physical switch, while unicast is used between segments.
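The fan-out behavior of unicast mode can be sketched as follows. This is an illustrative model only, not NSX's implementation: the source VTEP sends one copy to each VTEP on its local segment, plus a single copy to a proxy VTEP (UTEP) in each remote segment, which re-replicates locally. The VTEP addresses are invented for the example:

```python
def unicast_replication_targets(source_vtep, vteps_by_segment):
    """Return the VTEPs that receive a copy of a BUM frame in unicast mode.

    Local segment: every other VTEP gets its own copy.
    Remote segments: one proxy VTEP (UTEP) per segment re-replicates locally.
    """
    src_segment = next(seg for seg, vteps in vteps_by_segment.items()
                       if source_vtep in vteps)
    targets = [v for v in vteps_by_segment[src_segment] if v != source_vtep]
    for seg, vteps in vteps_by_segment.items():
        if seg != src_segment and vteps:
            targets.append(vteps[0])  # first VTEP acts as the UTEP proxy
    return targets

segments = {
    "192.168.130.0/24": ["192.168.130.51", "192.168.130.52"],
    "192.168.230.0/24": ["192.168.230.51", "192.168.230.52"],
}
print(unicast_replication_targets("192.168.130.51", segments))
```

Note that only three copies are sent for four VTEPs: the remote segment's second VTEP is reached by the UTEP, not by the source host.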

 

 

View Segment ID Configuration

 

  1. Click on Logical Network Settings.
  2. Note the Segment ID Pool assigned to the environment. As logical switches are created in NSX, the next unused Segment ID is allocated and assigned to each new logical switch.
  3. Note that the Multicast addresses field is blank. As previously mentioned, this is because the default mode of the lab environment is Unicast and therefore has no multicast requirements.
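The allocation behavior described in step 2 can be sketched as follows. This is an illustrative model only; the 5000-5999 range mirrors the style of Segment ID pool used in labs like this one, but your environment's range may differ:

```python
def next_segment_id(pool_start, pool_end, allocated):
    """Return the next unused Segment ID (VNI) from the pool, as happens
    when a new logical switch is created (illustrative sketch)."""
    for vni in range(pool_start, pool_end + 1):
        if vni not in allocated:
            return vni
    raise RuntimeError("Segment ID pool exhausted")

# Three logical switches already exist, so the next switch gets 5003.
allocated = {5000, 5001, 5002}
print(next_segment_id(5000, 5999, allocated))  # 5003
```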

 

 

View Transport Zones

 

  1. Click Transport Zones.
  2. Select the RegionA0_Global_TZ Transport Zone by clicking the radio button.

After viewing the various NSX components and VXLAN configuration, we will now create an NSX logical switch. The NSX logical switch defines a logical broadcast domain, or network segment, to which an application or virtual machine can be logically connected. An NSX logical switch provides a Layer 2 broadcast domain, similar to a VLAN, but without the physical network configuration typically associated with a VLAN.
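A quick bit of arithmetic shows one reason logical switches scale beyond VLANs: a VLAN ID is a 12-bit value, while a VXLAN Network Identifier (VNI) is 24 bits wide:

```python
# A VLAN ID is 12 bits (0 and 4095 reserved); a VNI is 24 bits.
vlan_ids = 2**12 - 2     # 4094 usable VLANs on a physical network
vni_ids = 2**24          # ~16.7 million possible VXLAN segments
print(vlan_ids, vni_ids)  # 4094 16777216
```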

 

 

View Logical Switches

 

  1. Click Logical Switches on the left hand side.

Notice that there are a number of logical switches already defined in this lab. These have been pre-populated and will assist in completing the various modules provided by the lab. In a new NSX deployment, the list of logical switches would be empty.

The next step will be to create a new logical switch. Once this switch has been created, we will migrate existing VMs to the newly created network and provide them with connectivity to the NSX environment.

 

 

Create a new Logical Switch

 

  1. Click on the Green Plus icon to create a new Logical Switch.
  2. Name the Logical Switch: Prod_Logical_Switch.
  3. Confirm that RegionA0_Global_TZ is selected as the Transport Zone.
  4. Confirm that Unicast is selected as the Replication mode.
  5. Confirm the Enable IP Discovery box is checked. IP Discovery enables ARP Suppression and is explained below.
  6. Click OK.

Selecting "Enable IP Discovery" activates ARP (Address Resolution Protocol) suppression. ARP is used to determine the destination MAC (Media Access Control) address for an IP address by sending a broadcast on a layer 2 segment. When a VM (Virtual Machine) sends an ARP request, the ESXi host running the NSX Virtual Switch intercepts it and queries the NSX Controller, which maintains an ARP table. If the NSX Controller has the requested entry in its ARP table, it returns the entry to the host, which in turn responds to the VM, suppressing the broadcast.
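The suppression logic amounts to a lookup with a broadcast fallback. The sketch below is an illustrative model only, and the ARP table entry shown is hypothetical:

```python
def resolve_arp(controller_arp_table, requested_ip):
    """Sketch of ARP suppression: the host consults the controller's ARP
    table first and only floods the broadcast on a miss (illustrative)."""
    mac = controller_arp_table.get(requested_ip)
    if mac is not None:
        return ("unicast-reply", mac)  # host answers the VM directly
    return ("flood", None)             # fall back to broadcasting the request

arp_table = {"172.16.40.12": "00:50:56:ae:12:34"}  # hypothetical entry
print(resolve_arp(arp_table, "172.16.40.12"))  # suppressed, answered locally
print(resolve_arp(arp_table, "172.16.40.99"))  # unknown, must be flooded
```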

 

 

Attach the new Logical Switch to the NSX Edge Services Gateway for external access

 

  1. Click to select the newly created Prod_Logical_Switch.
  2. Click the Actions menu.
  3. Click Connect Edge.

 

 

Connect the Logical Switch to the NSX Edge

 

NSX Edge can be installed as a Logical (Distributed) Router or as an Edge Services Gateway.

We will cover more details on the NSX Edge and routing in subsequent modules.

For now, we will connect our logical switch to the NSX Edge Services Gateway, Perimeter-Gateway-01. This will provide connectivity between VMs that are connected to the logical switch and the rest of the environment.

  1. Click the radio button to the left of Perimeter-Gateway-01 to select it.
  2. Click Next.

 

 

Attach Logical Switch to NSX Edge

 

  1. Click the radio button to the left of vnic7 to select it.
  2. Click Next.

 

 

Name the interface

 

  1. Enter Prod_Interface for Name.
  2. Select Connected.
  3. Click the Green Plus icon to configure IP address and subnet information for this interface.

 

 

Assign an IP Address to the Interface

 

  1. Enter 172.16.40.1 as the Primary IP Address (Leave the Secondary IP Address blank).
  2. Enter 24 for the Subnet Prefix Length.
  3. Verify your settings are correct and click Next.

 

 

Complete the Interface editing process

 

  1. Click Finish.

 

 

Attach web-03a and web-04a to the newly created Prod_Logical_Switch

 

  1. Click to select the newly created Prod_Logical_Switch.
  2. Click the Actions menu.
  3. Click Add VM.

 

 

Add virtual machines to Logical Switch

 

  1. Search for VMs with "web" in their names.
  2. Select web-03a.corp.local and web-04a.corp.local.
  3. Click the right arrow to add the selected VMs to this logical switch.
  4. Click Next.

 

 

Select virtual machines' vNIC to add to Logical Switch

 

  1. Select the vNICs of the two web VMs.
  2. Click Next.

 

 

Complete addition of VMs to Logical Switch

 

  1. Click Finish.

 

 

The topology after Prod_Logical_Switch is connected to the NSX Edge Services Gateway

 

You have now configured a new logical switch and provided it with connectivity to the external network via the Perimeter-Gateway-01 Edge Gateway. You have also added two virtual machines to the new logical switch.

 

 

Test connectivity between web-03a and web-04a

Now we will test the connectivity between web-03a and web-04a.

 

 

Go to Hosts and Clusters

 

  1. Click Home icon.
  2. Click Hosts and Clusters.

 

 

Expand the Clusters

 

Expand the RegionA01-COMP01 and RegionA01-COMP02 clusters. You should see that the two VMs, web-03a.corp.local and web-04a.corp.local, are on two different compute clusters. Note that these two VMs were added to the logical switch in the previous steps.

 

 

Open PuTTY

 

  1. Click the Windows Start button.
  2. Click the PuTTY application icon from the Start Menu.

You are connecting from the Main Console, which is in the 192.168.110.0/24 subnet. The traffic will pass through the Perimeter-Gateway-01 NSX Edge, and then to the web server.

 

 

Open SSH session to web-03a

 

  1. Scroll through the list of Saved Sessions until web-03a.corp.local is visible.
  2. Click web-03a.corp.local to select it.
  3. Click Load to retrieve the session information.
  4. Click Open to start a PuTTY session to the VM.

 

 

Login to the VM

 

Note: If you encounter difficulties connecting to web-03a.corp.local, please review your previous steps and verify that they have been completed correctly.

 

 

Ping web server web-04a.corp.local

 

Type ping -c 2 web-04a to send two pings instead of a continuous ping.

ping -c 2 web-04a

Note: web-04a.corp.local has an IP address of 172.16.40.12. If required, you can also ping by IP address.

If you see DUP! packets, this is due to the nature of VMware's nested lab environment. This will not happen in a production environment.

Do not close your PuTTY session. Minimize the window for later use.

 

Scalability and Availability


In this section, we will take a look at the scalability and availability of NSX controllers. The NSX controller cluster is the control plane component responsible for managing the switching and routing modules across hypervisors. The NSX controller cluster consists of three NSX controller nodes that each manage specific logical objects. The use of an NSX controller cluster for the management of VXLAN based logical switches eliminates the need for multicast support from the physical network infrastructure.

For resiliency and performance, production deployments must use an NSX Controller cluster comprised of three NSX Controller nodes. The NSX Controller cluster represents a scale-out distributed system, where each NSX Controller node is assigned a set of roles. The assigned roles define the types of tasks that the NSX Controller node can implement. The current supported configuration allows for fully active load sharing as well as redundancy.

To improve the scalability of the NSX architecture, a “slicing” mechanism is utilized to ensure that all NSX controller nodes can be active at any given time.

If one or more NSX Controllers fail, data plane (VM) traffic will not be affected. Traffic will continue to flow, as the logical network information has already been pushed down to the logical switches (the data plane). However, you will not be able to make changes (add/move/change logical networks) without the control plane (NSX Controller cluster).

In addition, NSX now includes Controller Disconnected Operation (CDO) capabilities. CDO mode creates a special logical switch that all hosts join. This adds an additional layer of redundancy to data plane connectivity when controllers may not be accessible to hosts in the NSX environment.  More information about CDO mode can be found via a link at this module's conclusion.
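The "slicing" idea can be sketched as assigning each logical object to exactly one controller node so that all nodes are active simultaneously. The modulo assignment below is purely illustrative, not the actual algorithm NSX uses:

```python
def slice_assignments(object_ids, controllers):
    """Illustrative sketch of controller 'slicing': each object (e.g. a
    logical switch VNI) is owned by exactly one node, so all nodes stay
    active and share the workload."""
    return {obj: controllers[obj % len(controllers)] for obj in object_ids}

controllers = ["controller-1", "controller-2", "controller-3"]
vnis = [5000, 5001, 5002, 5003, 5004, 5005]
owners = slice_assignments(vnis, controllers)
print(owners)  # each controller owns a slice of the logical switches
```

If a node fails, only its slice needs to be reassigned to the surviving nodes, which is what makes the scale-out model resilient.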


 

NSX Controller Scalability and Availability

 

  1. Click Home icon.
  2. Click Networking & Security.

 

Module 2 Conclusion


In this module, we demonstrated the following benefits of the NSX platform:

  1. Network agility, including easy provisioning and configuring of logical switches to interface with virtual machines and external networks.
  2. Scalability of the NSX architecture, such as a transport zone spanning multiple clusters or the NSX controller cluster's ability to provide networking services without reconfiguring the physical network.

 

You have completed Module 2

Congratulations on completing Module 2.

If you are looking for additional information on deploying NSX, please review the NSX 6.4 Documentation Center via the URL below:

Additional information about NSX Controller Disconnected Operation (CDO) mode is available at the following link:

Proceed to any of the following modules:

Lab Module List:

Lab Captains:

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 3 - Logical Routing (60 minutes)

Logical Routing - Module Overview


In the previous module, we experienced the ease and convenience of creating isolated logical switches/networks with just a few clicks. To provide communication across these isolated layer 2 networks, routing support is essential. In the NSX platform, the Distributed Logical Router allows you to route traffic between logical switches entirely in the hypervisor. By incorporating this logical routing component, NSX can reproduce complex routing topologies in the logical space. For example, a three-tier application will be connected to three logical switches, and routing between the tiers will be handled by this Distributed Logical Router.

This module will help us understand some of the routing capabilities supported in the NSX platform, and how to utilize them while deploying a three-tier application.

In this module, we will be doing the following:


 

Special Instructions for CLI Commands

 

Many of the modules will have you enter Command Line Interface (CLI) commands.  There are two ways to send CLI commands to the lab.

First to send a CLI command to the lab console:

  1. Highlight the CLI command in the manual and use Control+c to copy to clipboard.
  2. Click on the console menu item SEND TEXT.
  3. Press Control+v to paste from the clipboard to the window.
  4. Click the SEND button.

Second, a text file (README.txt) has been placed on the desktop of the environment, allowing you to easily copy and paste complex commands or passwords into the associated utilities (CMD, PuTTY, console, etc.). This is also helpful for keyboard layouts that do not provide certain characters used in the lab.

 

Dynamic and Distributed Routing


A distributed logical router (DLR) is a virtual appliance that contains the routing control plane, while distributing the data plane in kernel modules to each hypervisor host. The DLR control plane function relies on the NSX controller cluster to push routing updates to the kernel modules. This allows for optimized east/west routing within each local hypervisor, removing the need to hairpin traffic through a single point on the network.

You will first explore the configuration of distributed routing, to see the benefits of performing routing at the kernel level.


 

A Look at the Current Topology and Packet Flow

 

The above picture shows this lab's environment where both Application VM and Database VM reside on the same physical host. The red arrows show the traffic flow between the two VMs.

  1. The traffic leaves the Application VM and reaches the host.
  2. As the Application VM and Database VM are not on the same network subnet, the traffic will need to be sent to a layer 3 device. The NSX Edge, Perimeter Gateway, resides in the Management Cluster and functions as the layer 3 gateway. The traffic is sent to the host on which the Perimeter Gateway resides.
  3. The traffic reaches the Perimeter Gateway.
  4. The Perimeter Gateway routes the packet and sends it back to the host on the destination network.
  5. The routed traffic is sent back to the host on which the Database VM resides.
  6. The traffic is delivered to the Database VM by the host.

At the end of this lab, we will review the traffic flow diagram after distributed routing has been configured. This will help to understand the positive impact distributed routing has on network traffic.
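The six-step hairpin above can be contrasted with the distributed case in a small sketch. This is an illustrative model only, and the host names are hypothetical; the point is that the DLR kernel module routes on the source host itself instead of detouring through the host running the Edge:

```python
def centralized_path(src_host, edge_host, dst_host):
    """Traffic hairpins through the host running the Edge gateway."""
    return [src_host, edge_host, dst_host]

def distributed_path(src_host, dst_host):
    """The DLR kernel module routes on the source host itself."""
    return [src_host] if src_host == dst_host else [src_host, dst_host]

# Both VMs on the same compute host, Edge on a management host:
print(centralized_path("esx-comp-01", "esx-mgmt-01", "esx-comp-01"))  # 3 traversals
print(distributed_path("esx-comp-01", "esx-comp-01"))                 # never leaves the host
```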

 

 

Access vSphere Web Client (Flash)

 

  1. Bring up the vSphere Web Client (Flash) via the icon on the desktop labeled Google Chrome.

 

 

Login to the vSphere Web Client (Flash)

 

If you are not already logged into the vSphere Web Client:

(The home page should be the vSphere Web Client.  If not, Click on the vSphere Web (Flash) icon in Google Chrome.)

  1. Type in administrator@vsphere.local into User name
  2. Type in VMware1! into Password
  3. Click Login

 

 

Confirm Three Tier Application Functionality

 

  1. Open a new browser tab.
  2. Click the Customer DB App bookmark.

 

Before you start configuring your application for Distributed Routing, let's confirm that the three-tier application is working correctly. The three tiers of the application (web, app and database) are located on different logical switches, with an NSX Edge Services Gateway providing routing between them.

 

 

Removal of the App and DB Interfaces from the Perimeter Edge

 

As you saw in the earlier topology, the three tiers of the application reside on tier-specific logical switches that are routed by the Perimeter Gateway (NSX ESG). We are going to update this topology by removing the App and DB interfaces from the Perimeter Gateway. After deleting these interfaces, we will move them to the Distributed Router (NSX DLR). To save time, a Distributed Router has already been deployed for you.

  1. Click vSphere Web Client browser tab.
  2. Click Home icon.
  3. Click Networking & Security.

 

 

Add App and DB Interfaces to the Distributed Router

 

We will begin configuring Distributed Routing by adding the App and DB interfaces to the Distributed Router (NSX Edge).

  1. Double-click Distributed-Router-01.

 

 

Configure Dynamic Routing on the Distributed Router

 

Return to the vSphere Web Client browser tab.

  1. Click Routing.
  2. Click Global Configuration.
  3. Click Edit to change Dynamic Routing Configuration.

 

 

Edit Dynamic Routing Configuration

 

  1. Select the IP address of the Uplink interface as the default Router ID. In this case, the Uplink interface is Transit_Network_01 and the IP address is 192.168.5.2.
  2. Click OK

Note: The router ID is a 32-bit identifier written in the form of an IP address. It is important in the operation of OSPF, as it identifies the router within an autonomous system. In our lab scenario, we use a router ID that is the same as the IP address of the uplink interface on the NSX Edge. The screen will return to the Global Configuration section with the option to Publish Changes.

Confirm that the Router ID field displays the IP address associated with the Transit_Network_01 interface: 192.168.5.2. If the configuration change did not apply successfully, this value will remain blank. If this occurs, repeat steps 1 and 2 above to reapply the Router ID.
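The note above can be verified with a couple of lines of Python, since a router ID is just a 32-bit number conventionally written in dotted-decimal form:

```python
import ipaddress

# A router ID is a 32-bit number; writing it as an interface IP such as
# 192.168.5.2 gives each router a unique, recognizable identity.
rid = int(ipaddress.ip_address("192.168.5.2"))
print(rid)                        # 3232236802 (fits in 32 bits)
print(ipaddress.ip_address(rid))  # 192.168.5.2
```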

 

 

Configure OSPF Specific Parameters

 

We will be using OSPF as our dynamic routing protocol between Perimeter-Gateway-01 and Distributed-Router-01. This will allow the two routers to exchange information about their known routes.

  1. Click OSPF.
  2. Click Edit to change OSPF Configuration. This will open the OSPF Configuration dialog box.

 

 

Configure OSPF Routing on the Perimeter Edge

 

Next, we will configure dynamic routing on Perimeter-Gateway-01 (NSX Edge) to restore connectivity to our three-tier application.

  1. Click Back repeatedly until you have returned to the NSX Edges summary page with the list of Edges.

 

 

Review New Topology

 

The new topology shows route peering between Distributed Router and Perimeter Gateway (NSX Edge). Routes to any network connected to the Distributed Router will be passed via OSPF to the Perimeter Gateway (NSX Edge). In addition, we also pass all routes from the Perimeter Gateway to the physical network's vPod Router via BGP.

The next section will cover this in more detail.

 

 

Verify Communication to the Three-Tiered App

 

Routing information is now being exchanged between the Distributed Router and the Perimeter Gateway. Once routing between the two NSX Edges has been established, connectivity to the three-tier web application will be restored. Let's verify that routing is functional by accessing the three-tier web application.

  1. Click on the HOL - Customer Database browser tab (this tab was opened in a previous step). The page may display a 504 Gateway Time-out message from the previous attempt.
  2. Click Reload.

Note: It may take a moment for route propagation to occur, as the lab is a nested environment.

 

 

Dynamic and Distributed Routing Completed

In this section, we have successfully configured dynamic and distributed routing. In the next section, we will review centralized routing with the Perimeter Gateway (NSX Edge).

 

Centralized Routing


In this section, we will look at various elements to see how routing is configured northbound from the NSX Edge Services Gateway (ESG). This includes how dynamic routing is controlled, updated, and propagated throughout the NSX environment. We will verify that routes are exchanged between the NSX perimeter ESG appliance and the virtual router appliance (vPod Router) that runs and routes the entire lab.

Special Note: On the desktop, you will find a file named README.txt. It contains the CLI commands needed in the lab exercises. If you have trouble typing them, you can copy and paste them into the PuTTY sessions. If you see a number in curly brackets, such as "{1}", look for the corresponding CLI command for this module in the text file.


 

Current Lab Topology

 

The above diagram shows the current topology, where OSPF is used to exchange routes between the Perimeter Gateway and the Distributed Router. We can also see the northbound link from the Perimeter Gateway to the vPod Router; these two routers exchange routing information via BGP.

 

 

Look at OSPF Routing in Perimeter Gateway

First, we will confirm the Web App is functional; then we will log into the NSX Perimeter Gateway to view OSPF neighbors and see existing route information. This will show how the Perimeter Gateway is learning routes not only from the Distributed Router, but also from the vPod Router that runs the entire lab.

 

 

Confirm Three Tier Application Functionality

 

  1. Open a new browser tab.
  2. Click Customer DB App bookmark.

 

 

View BGP Neighbors

 

Let's look at the BGP neighbors of Perimeter-Gateway-01.

  1. Enter show ip bgp neighbors.
show ip bgp neighbors

 

 

Reviewing Displayed BGP Neighbor Information

 

Let's review the information on BGP neighbors.

  1. BGP neighbor is 192.168.100.1 - This is the router ID of the vPod Router, the NSX environment's upstream neighbor.
  2. Remote AS 65002 - This is the autonomous system number of the vPod Router's external network.
  3. BGP state = Established, up - This means the BGP neighbor adjacency is complete and the BGP routers will send update packets to exchange routing information.

 

 

Review Routes on Perimeter Edge and their Origin

 

Observe the available routes on Perimeter-Gateway-01.

  1. Enter show ip route.
show ip route

 

 

Controlling BGP Route Distribution

There may be a scenario where you want BGP routes distributed only within the virtual environment and not advertised to the physical world. Route distribution can be easily controlled and filtered from within the NSX Edge configuration.
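As an illustration of this kind of outbound filtering, the sketch below (plain Python, not NSX Edge configuration) removes a transit prefix from the set of routes before they would be advertised. The prefixes shown are examples only:

```python
import ipaddress

def filter_advertised(routes, denied_prefixes):
    """Return only routes NOT covered by a denied prefix, mimicking an
    outbound route filter on an edge router (illustration only)."""
    denied = [ipaddress.ip_network(p) for p in denied_prefixes]
    return [r for r in routes
            if not any(ipaddress.ip_network(r).subnet_of(d) for d in denied)]

# Example: keep the transit network internal, advertise workload subnets.
routes = ["172.16.10.0/24", "172.16.20.0/24", "192.168.5.0/29"]
print(filter_advertised(routes, ["192.168.5.0/24"]))
# → ['172.16.10.0/24', '172.16.20.0/24']
```

In the real Edge configuration, the equivalent effect is achieved with route redistribution rules and prefix filters rather than code.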

 

ECMP and High Availability


In this section we will add a second Perimeter Gateway to the network, then use ECMP (Equal Cost Multipath Routing) to scale out Edge capacity and availability.  With NSX we are able to perform an in-place addition of an Edge device and enable ECMP.

ECMP is a routing feature that allows packet forwarding to occur over multiple, redundant paths. These paths can be added statically or as a result of metric calculations by dynamic routing protocols like OSPF or BGP. Once a next hop is selected for a particular source and destination IP address pair, the route cache stores the selected path. All packets for that flow go to the selected next hop. The Distributed Logical Router uses an XOR algorithm to determine the next hop from a list of possible ECMP paths. This algorithm uses the source and destination IP addresses of the outgoing packet as sources of entropy.
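The path-selection behavior described above can be sketched in a few lines of Python. This is a simplified illustration of XOR-based flow hashing, not the actual NSX kernel implementation, and the edge addresses are assumptions:

```python
import ipaddress

def select_next_hop(src_ip, dst_ip, next_hops):
    """Pick a next hop for a flow by XOR-hashing the source and
    destination IP addresses (simplified ECMP illustration)."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    # The same src/dst pair always hashes to the same index, so every
    # packet of a flow follows the same path (no packet reordering).
    return next_hops[(src ^ dst) % len(next_hops)]

# Two Perimeter Gateways as equal-cost next hops (addresses assumed).
edges = ["192.168.5.3", "192.168.5.4"]
path = select_next_hop("172.16.10.11", "192.168.110.10", edges)
assert path in edges
```

Because the hash depends only on the address pair, different flows spread across the available edges while each individual flow stays on one path.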

In this module we will configure a new Perimeter Gateway, and establish an ECMP cluster between the Perimeter Gateways and the Distributed Logical Router to leverage increased capacity and availability.  We will test availability by shutting down one of the Perimeter Gateways and watching the resulting routing paths.


 

Navigate to NSX in vSphere Web Client

 

  1. Click vSphere Web Client browser tab.
  2. Click Home icon.
  3. Click Networking & Security.

 

 

Modify the Perimeter Gateway Edge

 

We first need to modify the existing Perimeter Gateway NSX Edge to remove the secondary IP address:

  1. Click NSX Edges.
  2. Double-click Perimeter-Gateway-01.

 

 

 

  1. Click Manage.
  2. Click Settings.
  3. Click Interfaces.
  4. Click to select vNIC 0.
  5. Click the Edit pencil.

 

 

Remove the Secondary IP Address

 

  1. Click on the Edit pencil.
  2. Click on the Cross icon to delete the Secondary IP Addresses.

We will temporarily use this secondary IP address in the following steps for our new Perimeter Gateway.

 

 

Confirm the Change

 

  1. Click OK.

Note: If the OK and Cancel buttons are not visible, you may need to click and drag the Edit NSX Edge Interface dialog window. This is due to the limited screen resolution available in the lab.

 

 

Go back to the NSX Edges

 

  1. Click Back repeatedly until you have returned to the NSX Edges summary page with the list of Edges.

 

 

Add Additional Perimeter Gateway Edge

 

 

 

Select and Name Edge

 

  1. Select Edge Services Gateway for Install Type.
  2. Enter Perimeter-Gateway-02 for Name.
  3. Click Next.

 

 

Set Password

 

  1. Enter VMware1!VMware1! for Password.
  2. Enter VMware1!VMware1! for Confirm password.
  3. Check Enable SSH access.
  4. Click Next.

Note: NSX Edge passwords must be complex and at least 12 characters long.

 

 

Add Edge Appliance

 

  1. Click Green Plus icon. The Add NSX Edge Appliance dialog box will appear.
  2. Select RegionA01-MGMT01 for Cluster/Resource Pool.
  3. Select RegionA01-ISCSI01-COMP01 for Datastore.
  4. Select esx-04a.corp.local for Host.
  5. Click OK.

 

 

Continue Deployment

 

  1. Click Next.

 

 

 

  1. Click Green Plus icon. This will add the first interface.

 

 

Select Switch Connected To

 

We have to pick the northbound switch interface (a distributed port group) for this Perimeter Gateway.

  1. Click the Select link, to the right of Connected To.
  2. Click Distributed Virtual Port Group.
  3. Click the radio button to the left of Uplink-RegionA01-vDS-MGMT to select it.
  4. Click OK.

 

 

Name and Add IP

 

  1. Enter Uplink as Name.
  2. Select Uplink as Type.
  3. Click Green Plus icon.
  4. Enter 192.168.100.4 as Primary IP Address.
  5. Enter 24 as Subnet Prefix Length.
  6. Click OK.

 

 

Add Edge Transit Interface

 

  1. Click Green Plus icon. This will add the second interface.

 

 

Select Switch Connected To

 

We have to pick the internal transit interface (a VXLAN-backed Logical Switch) for this Perimeter Gateway.

  1. Click Select under Connected To.
  2. Click Logical Switch.
  3. Click the radio button to the left of Transit_Network_01 (5005) to select it.
  4. Click OK.

 

 

Name and Add IP

 

  1. Enter Transit_Network_01 as Name.
  2. Select Internal as Type.
  3. Click Green Plus icon.
  4. Enter 192.168.5.4 as Primary IP Address.
  5. Enter 29 as Subnet Prefix Length.  Please ensure the correct Subnet Prefix Length (29) is provided or the lab will not function.
  6. Click OK.

 

 

Continue Deployment

 

Ensure the IP Addresses and Subnet Prefix Length information match the values listed in the graphic above.

  1. Click Next.

 

 

Remove Default Gateway

 

We are removing the default gateway because routing information will be learned via OSPF.

  1. Uncheck Configure Default Gateway.
  2. Click Next.

 

 

Default Firewall Settings

 

  1. Check Configure Firewall default policy.
  2. Select Accept as the Default Traffic Policy.
  3. Click Next.

 

 

Finalize Deployment

 

  1. Click Finish. This will start the deployment.

 

 

Edge Deploying

 

It will take a couple of minutes for the NSX Edge to deploy.

  1. The NSX Edges section will show that there is 1 Installing while Perimeter-Gateway-02 is being deployed.
  2. The status for Perimeter-Gateway-02 will indicate that it is Busy. This means the deployment is in process.
  3. Click the refresh icon in the vSphere Web Client to update the deployment status of Perimeter-Gateway-02.

Once the status for Perimeter-Gateway-02 indicates that it is Deployed, we can continue to the next step.

 

 

Configure Routing on New Edge

 

We will need to configure OSPF on Perimeter-Gateway-02 (NSX Edge) before ECMP can be enabled.

  1. Double-click Perimeter-Gateway-02.

Note: If the full name of the gateway is not visible, hovering the mouse pointer over the name will provide a tooltip.

 

 

Enable ECMP

 

We are now going to enable ECMP on the Distributed Router and the Perimeter Gateways.

  1. Click Back repeatedly until you have returned to the NSX Edges summary page with the list of Edges.

 

 

Topology Overview

 

At this stage, this is the topology of the lab.  This includes the new Perimeter Gateway that has been added, routing configured, and ECMP turned on.

 

 

Verify ECMP Functionality from Distributed Router

 

Let's now access the distributed router to ensure that OSPF is communicating and ECMP is functioning.

  1. Click Home icon.
  2. Click VMs and Templates.

 

 

Verify ECMP Functionality from vPod Router

 

Note: To release your cursor from the window, press Ctrl+Alt keys.

Now we will look at ECMP from the vPod Router, which simulates a physical router in your network.

  1. Click the PuTTY icon on the taskbar.

 

 

Shutdown Perimeter Gateway 01

 

We will simulate a node going offline by shutting down Perimeter-Gateway-01.

Return to vSphere Web Client browser tab.

  1. Expand RegionA01.
  2. Right-click Perimeter-Gateway-01-0.
  3. Click Power.
  4. Click Shut Down Guest OS.

 

 

Test High Availability with ECMP

 

With ECMP, BGP, and OSPF in the environment, we are able to dynamically change routes in the event of a failure in a particular path. We will now simulate one of the paths going down and route redistribution occurring.

  1. Click the Command Prompt icon on the taskbar.

 

 

Access Distributed Router VM Console

 

  1. Click Distributed-01-0 browser tab.

When the VM console launches in the browser tab, it will appear as a black screen. Click inside the black screen and press Enter a few times to make the VM console appear from the screensaver.

 

 

Power Up Perimeter Gateway 01

 

Return to vSphere Web Client browser tab.

  1. Expand RegionA01.
  2. Right-click Perimeter-Gateway-01-0.
  3. Click Power.
  4. Click Power On.

 

 

Return to Ping Test

 

 

 

Access Distributed Router VM Console

 

  1. Click the Distributed-01-0 browser tab.

When the VM console launches in the browser tab, it will appear as a black screen. Click inside the black screen and press Enter a few times to make the VM console appear from the screensaver.

 

Prior to moving to Module 4 - Please complete the following cleanup steps


If you plan to continue to any other module in this lab after completing Module 3, you must complete the following steps or the lab will not function properly going forward.


 

Delete Second Perimeter Edge Device

 

Return to vSphere Web Client browser tab.

  1. Click Home icon.
  2. Click Networking & Security.

 

 

Delete Perimeter-Gateway-02

 

We need to delete the Perimeter-Gateway-02 that we created.

  1. Click NSX Edges.
  2. Click to select Perimeter-Gateway-02.
  3. Click the Red X to delete Perimeter-Gateway-02.

 

 

Confirm Delete

 

  1. Click Yes.

 

 

Disable ECMP on DLR and Perimeter Gateway-01

 

  1. Double-click Distributed-Router-01.

 

 

Disable ECMP on Distributed Router

 

  1. Click Manage.
  2. Click Routing.
  3. Click Global Configuration.
  4. Click Stop.

 

 

Publish Change

 

  1. Click Publish Changes to push the configuration change.

 

 

Return to Edge Devices

 

  1. Click Back repeatedly until you have returned to the NSX Edges summary page with the list of Edges.

 

 

Access Perimeter Gateway 01

 

  1. Double-click Perimeter-Gateway-01.

 

 

Disable ECMP on Perimeter Gateway 01

 

  1. Click Manage.
  2. Click Routing.
  3. Click Global Configuration.
  4. Click Stop.

 

 

Publish Change

 

  1. Click Publish Changes to push the configuration change.

 

 

Enable Firewall on Perimeter Gateway 01

 

  1. Click Manage.
  2. Click Firewall.
  3. Click Start.

 

 

Publish Change

 

  1. Click Publish Changes to update the configuration on Perimeter-Gateway-01 (NSX Edge).

 

Module 3 Conclusion


In this module, we covered the routing capabilities of the NSX Distributed Logical Router (DLR) and Edge Services Gateway (ESG) by performing the following tasks:

  1. Migrated Logical Switches from Edge Services Gateway (ESG) to the Distributed Logical Router (DLR).
  2. Configured dynamic routing between ESG and DLR via OSPF.
  3. Reviewed the centralized routing capabilities of ESG, including dynamic route peering.
  4. Demonstrated scalability and availability of ESG by deploying a second ESG and establishing route peering between them via equal-cost multi-path (ECMP) routing configuration.
  5. Removed ESG2 and ECMP route configurations.

 

You have completed Module 3

Congratulations on completing Module 3.

If you are looking for additional information on deploying NSX, please review the NSX 6.4 Documentation Center via the URL below:

Proceed to any of the following modules:

Lab Module List:

Lab Captains:

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 4 - Edge Services Gateway (60 minutes)

Edge Services Gateway - Module Overview


The NSX Edge Services Gateway (ESG) provides network edge security and routing services into and out of the virtualized environment. As demonstrated in previous modules, the ESG has a number of options for passing north-south traffic in a scalable, resilient manner.

The NSX Distributed Logical Router (DLR) provides kernel-based, optimized East-West routing within the NSX environment. Distributed routing allows workloads that reside within the NSX environment to communicate directly with one another, removing the need to hairpin traffic through a traditional routing interface.

In addition to its routing capabilities, the NSX Edge Services Gateway also offers many advanced gateway services. These include DHCP, VPN, NAT, Load Balancing, and a traditional Layer 3 Firewall. These capabilities can be leveraged simply by enabling them within the ESG's configuration.

We will use this module to explore some of these services offered by the ESG. By the end of this module, you will have had an opportunity to do the following:

A full description of these and other Edge Services Gateway features can be found in a link at the end of this module.


Deploy Edge Services Gateway for Load Balancing


The NSX Edge Services Gateway includes load balancing functionality. Implementing a load balancer provides many advantages, including efficient resource utilization, scalability and resiliency at the application level. This can result in shorter response times for applications, the ability to scale an application beyond the capabilities of a single server, and can also be used to reduce load on a backend server through use of HTTPS offload.

The Load Balancing service is capable of load balancing TCP or UDP at Layer 4, and HTTP or HTTPS at Layer 7.

In this section, we will deploy and configure a new NSX Edge Appliance as a "One-Armed" Load Balancer.


 

Validate Lab is Ready

 

Validation checks ensure all components of the lab are correctly deployed; once validation is complete, the status will be updated to Green/Ready. It is possible for a lab deployment to fail due to environment resource constraints.

 

 

Gain screen space by collapsing the right Task Pane

 

Clicking on the Push-Pins will allow task panes to collapse and provide more viewing space to the main pane.  You can also collapse the left-hand pane to gain the maximum space.

 

 

Navigate to Networking & Security in vSphere Web Client

 

  1. Click Home icon.
  2. Click Networking & Security.

 

 

Creating a New Edge Services Gateway

 

We will configure the Load Balancing service on a new Edge Services Gateway. To begin the deployment of a new Edge Services Gateway, please perform the following:

  1. Click NSX Edges.
  2. Click the Green Plus icon.

 

 

Defining Name and Type

 

  1. Enter OneArm-LoadBalancer as the Name.
  2. Click Next.

 

 

Configuring Admin Account

 

  1. Enter VMware1!VMware1! as Password.
  2. Enter VMware1!VMware1! for Confirm password.
  3. Check Enable SSH access.
  4. Click Next.

Note: NSX Edge passwords must be complex and at least 12 characters long.

 

 

Defining Edge Size and VM placement

 

There are four different appliance sizes for the Edge Services Gateway. The specifications are as follows:

We will be selecting a Compact-sized Edge for this new Edge Services Gateway, but note that Edge Services Gateways can also be reconfigured to any other size after deployment. To continue with the new Edge Services Gateway creation:

  1. Click Green Plus icon. This will open the Add NSX Edge Appliances pop-up window.

 

 

Cluster/Datastore placement

 

  1. Select RegionA01-MGMT01 as Cluster/Resource Pool.
  2. Select RegionA01-ISCSI01-COMP01 as Datastore.
  3. Select esx-05a.corp.local as Host.
  4. Click OK.

 

 

Configure Deployment

 

  1. Click Next.

 

 

Placing a new network interface on the NSX Edge

 

Since this is a one-armed load balancer, it will only need one network interface.

  1. Click the Green Plus icon.

 

 

Configuring the new network interface for the NSX Edge

 

We will be configuring the first network interface for this new NSX Edge.  

  1. Enter WebNetwork as the Name.
  2. Select Uplink as the Type.
  3. Click Select.

 

 

Selecting Network for New Edge Interface

 

The one-armed load balancer's interface will reside on the same network as the two web servers that it will provide Load Balancing services for.

  1. Click Logical Switch.
  2. Click the radio button to the left of Web_Tier_Logical_Switch (5006) to select it.
  3. Click OK.

 

 

Configuring Subnets

 

  1. Click Green Plus icon. This allows you to configure the IP address of this interface.

 

 

Configure IP Address and Subnet

 

To add a new IP address to this interface:

  1. Enter 172.16.10.10 as the Primary IP Address.
  2. Enter 24 as the Subnet Prefix Length.
  3. Click OK.

 

 

Confirm List of Interfaces

 

Ensure the IP Address and Subnet Prefix Length information match the information in the picture above.

  1. Click Next.

 

 

Configuring the Default Gateway

 

  1. Enter 172.16.10.1 as the Gateway IP.
  2. Click Next.

 

 

Configuring Firewall and HA options

 

  1. Check Configure Firewall default policy.
  2. Select Accept as the Default Traffic Policy.
  3. Click Next.

 

 

Review of Overall Configuration and Complete

 

  1. Click Finish to begin the deployment.

 

 

Monitoring Deployment

 

It may take a few moments for the NSX Edge to deploy.

  1. The NSX Edges section will show that there is 1 Installing while OneArm-LoadBalancer is being deployed.
  2. The status for OneArm-LoadBalancer will indicate that it is Busy. This means the deployment is in process.
  3. Click the refresh icon on the vSphere Web Client to update the deployment status of OneArm-LoadBalancer.

Once the status for OneArm-LoadBalancer indicates that it is Deployed you can proceed to the next step.

 

Configure Edge Services Gateway for Load Balancer


Now that the Edge Services Gateway is deployed, we will configure load balancing services.


 

Configure Load Balancer Service

 

The diagram above depicts the final topology for the load balancer provided by the new NSX Edge Services Gateway (ESG) OneArm-LoadBalancer. Once configured, this ESG will reside on the existing Web_Tier_Logical_Switch. Gateway connectivity to the Logical Switch is provided by the existing Perimeter-Gateway-01 ESG.

The load balancing service of this ESG will accept incoming client connections on a Virtual Server IP address of 172.16.10.10. Upon receiving a new, inbound connection request, the load balancer will associate the request with one internal server from a list of predefined Pool Members. In this example, there will be two pool members: web-01a.corp.local (172.16.10.11) and web-02a.corp.local (172.16.10.12).

This allows the load balancer to provide a single endpoint for inbound clients, while transparently distributing those connections across multiple internal web servers. If a web server fails or otherwise becomes unavailable, the load balancer detects this and removes the server from its list of active pool members. Service Monitors are used to periodically check the health of all pool members, and once the failed server returns to service, it will be reinstated as an active pool member.
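The pool behavior described above can be sketched as follows. This is a minimal illustration of round-robin selection over healthy members, not the NSX load balancer's actual code:

```python
class Pool:
    """Minimal load-balancer pool sketch: round-robin over members
    that the service monitor currently reports as UP."""
    def __init__(self, members):
        self.status = {m: "UP" for m in members}
        self._next = 0

    def mark(self, member, state):
        """Called when a health check passes ("UP") or fails ("DOWN")."""
        self.status[member] = state

    def pick(self):
        """Return the next healthy member for a new inbound connection."""
        up = [m for m in self.status if self.status[m] == "UP"]
        if not up:
            raise RuntimeError("no pool members available")
        member = up[self._next % len(up)]
        self._next += 1
        return member

pool = Pool(["web-01a", "web-02a"])
print(pool.pick(), pool.pick())   # alternates between the two members
pool.mark("web-01a", "DOWN")      # monitor detects a failed health check
print(pool.pick(), pool.pick())   # all traffic now goes to web-02a
```

When the monitor later reports web-01a healthy again, marking it "UP" puts it straight back into the rotation.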

 

 

Configure Load Balancer Feature on OneArm-Load Balancer

 

  1. Double-click OneArm-LoadBalancer.

 

 

Navigate to New NSX Edge

 

  1. Click Manage.
  2. Click Load Balancer.
  3. Click Global Configuration.
  4. Click Edit to change Load Balancer global configuration.

 

 

Edit Load Balancer Global Configuration

 

To enable the Load Balancer service:

  1. Check Enable Load Balancer.
  2. Click OK.

 

 

Creating a New Application Profile

 

An Application Profile defines how a particular type of network traffic should be handled. These profiles are applied to a virtual server (VIP), which handles traffic based on the values specified in the Application Profile.

Utilizing profiles can make traffic-management tasks less error-prone and more efficient.  

  1. Click Application Profiles.
  2. Click Green Plus icon. This will open the New Profile window.

 

 

Configuring a New Application Profile HTTPS

 

For the new Application Profile, configure the following:

  1. Enter OneArmWeb-01 as Name.
  2. Select HTTPS as Type.
  3. Check Enable SSL Passthrough. This configures the load balancing service to pass HTTPS connections through the load balancer uninspected, with SSL terminating on the pool member.
  4. Click OK.

 

 

Define Custom HTTPS Service Monitor

 

Monitors ensure that the pool members serving a virtual server are up and functioning. The default HTTPS monitor will use a basic HTTP GET request ("GET /"). We will define a custom monitor that will perform a health check for an application specific URL. This will verify that the web server is responding to connections, and will also check for the proper functioning of our application.

  1. Click Service Monitoring.
  2. Click the Green Plus to define a new Monitor.
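The monitor's decision logic can be sketched as below. The `fetch` callable is a hypothetical stand-in for the HTTPS GET the monitor performs; the real monitor also applies check intervals, timeouts, and retry counts:

```python
def check_member(fetch, url="/cgi-bin/app.py"):
    """One health-check decision: UP only if the application URL
    answers with HTTP 200 (sketch of the monitor's logic)."""
    try:
        status = fetch(url)   # perform the HTTPS GET, return status code
    except OSError:
        return "DOWN"         # connection failure (Layer 4) counts as DOWN
    return "UP" if status == 200 else "DOWN"

# A default "GET /" monitor can pass even when the application itself is
# broken; probing the application URL verifies the app end to end.
assert check_member(lambda u: 200) == "UP"
assert check_member(lambda u: 500) == "DOWN"
```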

 

 

Define the New Monitor

 

Enter the following information to define the new monitor:

  1. Enter custom_https_monitor for Name.
  2. Select HTTPS for Type.
  3. Enter /cgi-bin/app.py for URL.
  4. Click OK.

 

 

Create New Pool

 

A Pool is a group of servers representing the nodes that traffic is load balanced across. We will be adding the two web servers, web-01a and web-02a, to a new pool. To create the new pool:

  1. Click Pools.
  2. Click Green Plus icon. This will open the New Pool pop-up window.

 

 

Configuring New Pool

 

For the settings on this new Pool, configure the following:

  1. Enter Web-Tier-Pool-01 as the Name.
  2. Select default_https_monitor as the Monitors.
  3. Click Green Plus icon.

 

 

Add members to the pool

 

  1. Enter web-01a as the Name.
  2. Enter 172.16.10.11 as the IP Address / VC Container.
  3. Enter 443 for the Port.
  4. Enter 443 for the Monitor Port.
  5. Click OK.

Repeat the process above to add the second pool member, web-02a, with IP Address 172.16.10.12, Port 443, and Monitor Port 443.

 

 

Save Pool Settings

 

  1. Click OK.

 

 

Create New Virtual Server

 

A Virtual Server is the entity that accepts incoming connections on the "front end" of a load balanced configuration. User traffic is directed to the IP address of the virtual server and is then redistributed to pool members on the "back end" of the load balancer. To configure a new Virtual Server on this Edge Services Gateway and complete the load balancing configuration, perform the following:

  1. Click Virtual Servers
  2. Click Green Plus icon. This will open the New Virtual Server pop-up window.

 

 

Configure New Virtual Server

 

Please configure the following options for this new Virtual Server:

  1. Enter Web-Tier-VIP-01 as the Name.
  2. Enter 172.16.10.10 as the IP Address.
  3. Select HTTPS as the Protocol.
  4. Select Web-Tier-Pool-01.
  5. Click OK.

 

Edge Services Gateway Load Balancer - Verify Configuration


Now that we have configured the load balancing services, we will verify the configuration.


 

Test Access to Virtual Server

 

  1. Open a new browser tab.
  2. Click to expand the list of bookmarks.
  3. Click to select the 1-Arm LB Customer DB bookmark.
  4. Click on Advanced.

 

 

Ignore SSL error

 

  1. Click on Proceed to 172.16.10.10 (unsafe).

 

 

Test Access to Virtual Server

 

You should now be able to access the one-armed Load Balancer.

  1. Click the refresh icon. This will allow you to see the load balancer distribute connections across both pool members.

Note: Due to browser caching in Chrome, subsequent refreshes may not appear to utilize both servers.

 

 

Show Pool Statistics

 

Return to vSphere Web Client browser tab.

To see the status of the individual pool members:

  1. Click on Pools.
  2. Click Show Pool Statistics.
  3. Click on "pool-1". We will see each member's current status.
  4. Close the window by clicking the X.

 

 

Monitor (Health Check) Response Enhancement

 

To aid troubleshooting, the NSX Load Balancer's "show ...pool" commands yield detailed information about pool member failures. We will create two different failures and examine the response using show commands on the Edge Gateway OneArm-LoadBalancer-0.

  1. Enter loadbalancer in the search box. The search box is located at the top right corner of vSphere Web Client.
  2. Click on "OneArm-LoadBalancer-0".

 

 

Open Load Balancer Console

 

  1. Click on Summary.
  2. Click on VM console.

 

 

Login to OneArm-LoadBalancer-0

 

  1. Login as admin.
  2. Enter VMware1!VMware1! as password.

 

 

Examine pool status before failure

 

  1. Enter show service loadbalancer pool.
show service loadbalancer pool

Note: The pool members web-01a and web-02a both show a status of "UP".

 

 

Start PuTTY

 

  1. Click on PuTTY in the taskbar.

 

 

SSH to web-01a.corp.local

 

  1. Scroll down to web-01a.corp.local.
  2. Select web-01a.corp.local.
  3. Click Load.
  4. Click Open.

 

 

Stop Nginx Service

 

We will stop the web service (NGINX) to simulate the first failure condition.

  1. Enter systemctl stop nginx.
systemctl stop nginx

 

 

Loadbalancer console

 

  1. Enter show service loadbalancer pool.
show service loadbalancer pool

Because the service is down, the failure detail shows the Load Balancer's Health Monitor process was unable to establish an SSL session.

 

 

Start NGINX Service

 

Switch back to the PuTTY SSH session for web-01a.

  1. Enter systemctl start nginx.
systemctl start nginx

 

 

Shutdown web-01a

 

Return to vSphere Web Client browser tab.

  1. Enter web-01a in the search box. The search box is located at the top right corner of vSphere Web Client.
  2. Click on web-01a.

 

 

Power off web-01a

 

  1. Click Actions.
  2. Click Power.
  3. Click Power Off.
  4. Click Yes to confirm.

 

 

Check the Pool status

 

  1. Enter show service loadbalancer pool.
show service loadbalancer pool

Because the VM is down, the failure detail shows that the Health Monitor could not establish a Layer 4 connection, as opposed to the Layer 7 (SSL) failure seen in the previous step.

 

 

Power web-01a on

 

Return to vSphere Web Client browser tab.

  1. Click Actions.
  2. Click Power.
  3. Click Power On.

 

 

Conclusion

In this lab, we have deployed and configured a new Edge Services Gateway and enabled load balancing services for the 1-Arm LB Customer DB application.

This concludes the Edge Services Gateway Load Balancer lesson. Next, we will learn more about the Edge Services Gateway Firewall.

 

Edge Services Gateway Firewall


The NSX Edge Firewall monitors North-South traffic to provide perimeter security capabilities. This is in contrast to NSX Distributed Firewall, where policy is applied at the virtual NIC of every VM.

The Edge Firewall helps you meet key perimeter security requirements, such as building DMZs based on IP/VLAN constructs, tenant-to-tenant isolation in multi-tenant virtual data centers, and providing traditional, routed firewall enforcement to physical devices where Distributed Firewall is not an option.


 

Working with NSX Edge Firewall Rules

Both the Edge Services Gateway and the Logical Router contain a tab for firewall configuration, although there are significant differences in where the policy is applied. Firewall rules applied to a Logical Router protect only the Logical Router control virtual machine; they are not part of the data plane. To protect data plane traffic, use Logical (Distributed) Firewall rules for East-West protection at the virtual NIC, or NSX Edge Services Gateway rules for North-South protection and for routing between physical VLAN-backed port groups.

When rules are created in the NSX Firewall user interface that are applicable to an NSX Edge Gateway, they are displayed on the Edge in read-only mode. When rules exist in multiple locations, they are displayed and enforced in the following order:

  1. User-defined rules from the Firewall user interface (Read Only).
  2. Auto-configured rules (automatically created rules that enable control traffic for Edge services).
  3. User-defined rules on NSX Edge Firewall user interface.
  4. Default rule.
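The first-match evaluation across these rule tables can be sketched as follows. This is an illustration of the precedence order above, not the NSX data path, and the example rules are hypothetical:

```python
def evaluate(packet, rule_tables):
    """First-match firewall evaluation: tables are consulted in the
    precedence order listed above; the first matching rule wins."""
    for table in rule_tables:          # tables already sorted by priority
        for match, action in table:
            if match(packet):
                return action
    return "deny"                      # nothing matched: implicit deny

# Rules as (predicate, action) pairs; an earlier table overrides later ones.
ui_rules   = [(lambda p: p["dst"] == "172.16.20.11", "deny")]   # from DFW UI
edge_rules = [(lambda p: True, "accept")]                       # on the Edge
default    = [(lambda p: True, "deny")]                         # default rule

assert evaluate({"dst": "172.16.20.11"}, [ui_rules, edge_rules, default]) == "deny"
assert evaluate({"dst": "172.16.10.11"}, [ui_rules, edge_rules, default]) == "accept"
```

The key point the sketch captures is that a rule pushed from the central Firewall UI is matched before any rule defined locally on the Edge.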

 

 

Open Network & Security

 

  1. Click Home icon.
  2. Click Networking & Security.

 

 

Open an NSX Edge

 

  1. Click NSX Edges.
  2. Double-click Perimeter-Gateway-01.

 

 

Open Manage Tab

 

  1. Click Manage.
  2. Click Firewall.
  3. Click to select Default Rule.
  4. Click the pencil icon under the Action column.
  5. Select Deny in the Action field.

Note: NSX provides three available actions for a firewall rule: Accept, Deny, and Reject.

 

 

Publish Changes

 

We will not be making permanent changes to the Edge Services Gateway Firewall setting.

  1. Click Revert to roll back changes.

 

 

Adding Edge Services Gateway Firewall Rule

 

Now that we are familiar with editing an existing Edge Services Gateway firewall rule, we will add a new edge firewall rule to block the Main Console's access to the Customer DB Application.

  1. Click Green Plus icon to add a new firewall rule.
  2. Hover mouse over the upper right corner of the Name column and click the pencil icon.
  3. Enter Main Console FW Rule as the Rule Name.
  4. Click OK.

 

 

Specify Source

 

Hover mouse in the upper right corner of the Source column and click Pencil icon.

  1. Click Object Type drop down menu and select IP Sets.
  2. Click New IP Set... hyperlink.
  3. Enter Main Console as the Name.
  4. Enter 192.168.110.10 as the IP address.
  5. Click OK.

 

 

Specify Source

 

  1. Select IP Sets from the Object Type list.
  2. Click to select Main Console from the list of Available Objects.
  3. Click the right arrow. This will move the object to the list of Selected Objects.
  4. Confirm Main Console is in the list of Selected Objects and click OK.

 

 

Specify Destination

 

Hover mouse in the upper right corner of the Destination column and click Pencil icon.

  1. Select Logical Switch from the Object Type list.
  2. Click to select Web_Tier_Logical_Switch from the list of Available Objects.
  3. Click the right arrow. This will move the object to the list of Selected Objects.
  4. Confirm Web_Tier_Logical_Switch is in the list of Selected Objects and click OK.

 

 

Configure Action

 

  1. Click the pencil icon under the Action column.
  2. Select Reject in the Action field.
  3. Click OK.

Note: The reason Reject was chosen instead of Deny was to expedite the failure of the web server in the following steps. If Deny is selected, the flow is dropped and will eventually time out. Because Reject is chosen above, an ICMP message is sent to the Main Console when a connection attempt is made, immediately informing the operating system that the connection failed. It is recommended to use Deny as a general security best practice.
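The client-side difference between the three actions can be illustrated with a small sketch (the timings are placeholders, not measured values):

```python
def attempt_connect(action, timeout=30):
    """Client's view of the three firewall actions (illustration):
    Accept completes, Reject fails immediately (an ICMP/TCP error is
    returned), Deny drops packets silently until the client times out."""
    if action == "accept":
        return ("connected", 0)
    if action == "reject":
        return ("refused", 0)        # error delivered right away
    return ("timed out", timeout)    # deny: no response at all

# Reject surfaces the failure immediately; Deny makes the client wait.
assert attempt_connect("reject")[1] < attempt_connect("deny")[1]
```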

 

 

Publish Changes

 

  1. Click Publish Changes to update the configuration on Perimeter-Gateway-01 (NSX Edge).

 

 

Test New Firewall Rule

 

Now that we have configured a new FW rule that will block the Control Center from accessing the Web Tier logical switch, let's run a quick test:

  1. Open a new browser tab.
  2. Click the Customer DB App bookmark.

Verify the Main Console cannot access the Customer DB App. You should see a browser page stating that the web site cannot be reached. Now, let's modify the firewall rule to allow the Main Console to access the Customer DB App.

 

 

Change the Main Console FW Rule to Accept

 

Return to vSphere Web Client browser tab.

  1. Click the pencil icon under the Action column of the Main Console FW Rule.
  2. Select Accept in the Action field.
  3. Click OK.

 

 

 

Publish Changes

 

  1. Click Publish Changes to update the configuration on Perimeter-Gateway-01 (NSX Edge).

 

 

Confirm Access to Customer DB App

 

Return to Customer DB App browser tab.

  1. Click Refresh icon.

Since the Main Console FW rule has been changed to "Accept", the Main Console can now access the Customer DB App.

 

 

Delete Main Console FW Rule

 

  1. Click to select the Main Console FW Rule.
  2. Click the Red X to delete the selected firewall rule.
  3. Click OK to confirm.

 

 

Publish Changes

 

  1. Click Publish Changes to update the configuration on Perimeter-Gateway-01 (NSX Edge).

 

 

Conclusion

In this lab, we learned to modify an existing Edge Services Gateway Firewall rule, and to configure a new Edge Services Gateway Firewall rule that blocks external access to the Customer DB App.

This concludes the Edge Services Gateway Firewall lesson. Next, we will learn more about how the Edge Services Gateway manages DHCP services.

 

DHCP Relay


In a network with only a single network segment, DHCP clients communicate directly with their DHCP server. DHCP servers can also provide IP addresses for multiple networks, including networks that are not on the same segment as the server itself. However, due to the broadcast nature of DHCP, a server serving IP addresses for ranges outside its local network is unable to communicate directly with requesting clients.

In these situations, a DHCP Relay agent is used to relay the DHCP requests broadcast by clients, forwarding each broadcast to a designated DHCP server as a unicast packet. The DHCP server selects a DHCP scope based on the address range from which the unicast originates. The DHCP response is returned to the relay address, which then rebroadcasts it on the original network to the client.
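The scope-selection step described above can be sketched as follows. The relay agent's address (the `giaddr` field in the forwarded request) tells the server which subnet the client lives on; the subnets and pools below are illustrative only.

```python
import ipaddress

# Hypothetical DHCP scopes keyed by subnet. The relay agent address
# (giaddr) placed in the forwarded request tells the server which
# scope to allocate from. Subnets and pools are made up for this sketch.
SCOPES = {
    ipaddress.ip_network("172.16.50.0/24"): ("172.16.50.100", "172.16.50.200"),
    ipaddress.ip_network("192.168.110.0/24"): ("192.168.110.100", "192.168.110.150"),
}

def select_scope(giaddr):
    """Pick the scope whose subnet contains the relay agent's address."""
    addr = ipaddress.ip_address(giaddr)
    for subnet, pool in SCOPES.items():
        if addr in subnet:
            return subnet, pool
    return None

# A request relayed by the gateway interface at 172.16.50.1 is served
# from the 172.16.50.0/24 scope.
subnet, pool = select_scope("172.16.50.1")
print(subnet)  # 172.16.50.0/24
```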

Areas to be covered in this lab:

In this lab, the following items have been preconfigured:


 

Lab Topology

 

This diagram illustrates the topology that will be created and used in this lab module.

 

 

Access NSX Through the vSphere Web Client

 

  1. Click Home icon.
  2. Click Networking & Security.

 

 

Create New Logical Switch

 

We must first create a new Logical Switch that will carry our new 172.16.50.0/24 network.

  1. Click Logical Switches.
  2. Click the Green Plus icon to create a new Logical Switch.

 

 

Connect Logical Switch to Perimeter Gateway

 

We will now attach the logical switch to an interface on the Perimeter Gateway.  This interface will be the default gateway for the 172.16.50.0/24 network with an address of 172.16.50.1.

  1. Click NSX Edges.
  2. Double-click Perimeter-Gateway-01.

 

 

Configure DHCP Relay

 

Staying inside the Perimeter Gateway, we must now complete the global DHCP Relay configuration.

  1. Click Manage.
  2. Click DHCP.
  3. Click Relay.
  4. Click Edit.

 

 

Create Blank VM for PXE Boot

 

We will now create a blank VM that will PXE boot from the DHCP server we are relaying to.

  1. Click Home icon.
  2. Click Hosts and Clusters.

 

 

Access Newly Created VM

 

Next we will open a console to this VM and watch it boot from the PXE image. It receives this information via the remote DHCP server we configured earlier.

  1. Click PXE VM.
  2. Click Summary.
  3. Click VM Console.

 

 

Verify DHCP Lease

 

While we wait for the VM to boot, we can verify the address used in the DHCP Leases.

  1. Go to the desktop of the Main Console and double-click the DHCP icon.

 

 

Access Booted VM

 

  1. Click PXE VM browser tab.

 

 

Verify Address and Connectivity

 

The widget in the upper-right corner of the VM will show statistics, along with the IP of the VM. This should match the IP shown in DHCP earlier.

 

 

Conclusion

In this section, we completed the creation of a new network segment, then relayed the DHCP requests from that network to an external DHCP server. In doing so, we were able to access additional boot options of this external DHCP server and PXE boot into a Linux OS.

Next, we will explore Edge Services Gateway L2VPN services.

 

Configuring L2VPN


In this section, we will be utilizing the L2VPN capabilities of the NSX Edge Gateway to extend an L2 boundary between two separate vSphere clusters. To demonstrate this capability, we will deploy an NSX Edge L2VPN Server on the RegionA01-MGMT01 cluster and an NSX Edge L2VPN Client on the RegionA01-COMP01 cluster, and finally test the tunnel status to verify a successful configuration.


 

Opening Google Chrome and Navigating to the vSphere Web Client

 

  1. Open the Google Chrome web browser from the desktop (if not already open).

 

 

Navigate to Networking & Security Section of the vSphere Web Client

 

  1. Click Home icon.
  2. Click Networking & Security.

 

 

Creating the NSX Edge Gateway for the L2VPN-Server

 

To create the L2VPN Server service, we must first deploy an NSX Edge Gateway for the service to run on.  

  1. Click on NSX Edges.
  2. Click on Green Plus icon.

 

 

Configuring a new NSX Edge Gateway: L2VPN-Server

 

  1. Enter L2VPN-Server for Name.
  2. Click Next.

 

 

Configure Settings for New NSX Edge Gateway: L2VPN-Server

 

  1. Enter VMware1!VMware1! for Password.
  2. Enter VMware1!VMware1! for Confirm password.
  3. Check Enable SSH access.
  4. Click Next.

 

 

Preparing L2VPN-Server NSX Edge for L2VPN Connections

Before we configure the newly deployed NSX Edge for L2VPN connections, we need to complete the following steps:

  1. Adding a Trunk Interface to the L2VPN-Server Edge Gateway.
  2. Adding a Sub Interface to the L2VPN-Server Edge Gateway.
  3. Configuring dynamic routing (OSPF) on the L2VPN-Server Edge Gateway.

 

 

Setting the Router ID for this NSX Edge

 

Next, we will be configuring dynamic routing on this Edge Gateway.

  1. Click Routing.
  2. Click Global Configuration.
  3. Click Edit to change Dynamic Routing Configuration.

 

 

Configuring OSPF on the L2VPN-Server NSX Edge

 

  1. Click OSPF.
  2. Click Green Plus icon under Area to Interface Mapping.

 

 

Enable OSPF Route Redistribution

 

  1. Click Route Redistribution.
  2. Click Edit to change Route Redistribution Status.
  3. Check OSPF.
  4. Click OK.

 

 

Configuring L2VPN Service on L2VPN-Server NSX Edge

The 172.16.10.1 address belongs to the L2VPN-Server Edge Gateway and routes are being distributed dynamically via OSPF. Next, we will configure the L2VPN service on this Edge Gateway so that the Edge acts as "Server" in the L2VPN connection.

 

 

Deploying the L2VPN-Client NSX Edge Gateway

Now that the server side of the L2VPN is configured, we will deploy a new NSX Edge Gateway to act as the L2VPN client. Before deploying the NSX Edge Gateway L2VPN Client, we need to configure the Uplink and Trunk distributed port groups on the distributed virtual switch.

 

 

Configuring the L2VPN-Client NSX Edge Gateway

 

  1. Double-click L2VPN-Client.

 

Native Bridging


NSX provides in-kernel software L2 Bridging capabilities, allowing organizations to seamlessly connect traditional workloads running on legacy VLANs to virtualized networks using VXLAN. L2 Bridging is commonly used in brownfield environments to simplify the introduction of logical networks, or when physical systems require L2 connectivity to virtual machines operating on an NSX Logical Switch.
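At its core, an L2 bridge forwards Ethernet frames between the VLAN side and the VXLAN side by learning which segment each MAC address lives on. A minimal sketch of that MAC-learning behavior (the MAC addresses and segment names are made up for illustration, not taken from the lab):

```python
# Minimal sketch of the MAC learning a bridge performs: the source MAC
# of each frame is recorded against the segment it arrived on, so later
# frames addressed to that MAC can be forwarded to the correct side
# (VLAN or VXLAN logical switch). Unknown destinations are flooded.

class L2Bridge:
    def __init__(self):
        self.mac_table = {}  # MAC address -> segment name

    def forward(self, frame):
        """Learn the source MAC, then return the outbound segment."""
        self.mac_table[frame["src"]] = frame["segment"]
        return self.mac_table.get(frame["dst"], "flood")

bridge = L2Bridge()
# A VM on the VLAN side sends a broadcast; the bridge learns its MAC.
bridge.forward({"src": "00:50:56:aa:00:01",
                "dst": "ff:ff:ff:ff:ff:ff",
                "segment": "VLAN-101"})
# A VM on the logical-switch side replies to the learned MAC.
out = bridge.forward({"src": "00:50:56:aa:00:02",
                      "dst": "00:50:56:aa:00:01",
                      "segment": "Web-Tier-01"})
print(out)  # VLAN-101
```

The reply is forwarded directly to the VLAN segment because the destination MAC was already learned; had it not been, the frame would be flooded to all segments.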

This module will guide us through the configuration of an L2 Bridging instance between a traditional VLAN and an NSX Logical Switch.


 

Special Instructions for CLI Commands

 

Many of the modules will have you enter Command Line Interface (CLI) commands.  There are two ways to send CLI commands to the lab.

First, to send a CLI command to the lab console:

  1. Highlight the CLI command in the manual and use Control+c to copy to clipboard.
  2. Click on the console menu item SEND TEXT.
  3. Press Control+v to paste from the clipboard to the window.
  4. Click the SEND button.

Second, a text file (README.txt) has been placed on the desktop of the environment, allowing you to easily copy and paste complex commands or passwords into the associated utilities (CMD, PuTTY, console, etc.). This file is especially useful for keyboard layouts that do not provide certain characters required by the lab.


 

 

Access vSphere Web Client

 

 

 

Verify Initial Configuration

 

We will verify the initial configuration as shown in the picture above. The lab environment comes with a Port Group on the Management & Edge cluster, named "Bridged-Net-RegionA01-vDS-MGMT". The web server VMs, web-01a and web-02a, are attached to the Web-Tier-01 Logical Switch. The Web-Tier-01 Logical Switch is isolated from the Bridged-Net.

 

 

Migrate Web-01a to RegionA01-MGMT01 Cluster

 

  1. Click Home icon.
  2. Click VMs and Templates.

 

 

View Connected VMs

 

  1. Click Home icon
  2. Click Networking.

 

 

Migrate Web_Tier_Logical_Switch to Distributed Logical Router

 

  1. Click Home icon.
  2. Click Networking & Security.

 

 

Configure NSX L2 Bridging

 

We will enable NSX L2 Bridging between VLAN 101 and the Web-Tier-01 Logical Switch, allowing web-01a.corp.local to communicate with the rest of the network. This configuration will enable both an L2 Bridge and a Distributed Logical Router interface on the same Web-Tier-01 Logical Switch.

 

 

Verify L2 Bridging

NSX L2 Bridging has now been configured. Next, you will verify L2 connectivity between the web-01a VM that is attached to VLAN 101, and the web-02a VM that is attached to the Web_Tier_Logical_Switch.

 

 

L2 Bridging Module Cleanup

If you want to proceed with other modules in this Hands-On Lab, make sure to follow the steps below to remove L2 Bridging, as the example configuration used in this specific scenario could conflict with other sections, such as L2VPN.

 

 

Migrate Web-01a back to RegionA01-COMP01 Cluster

 

  1. Click Home icon.
  2. Click VMs and Templates.

 

Module 4 Conclusion


In this module, we explored advanced features of the NSX Edge Services Gateway:

  1. Deployed a new Edge Services Gateway (ESG) and configured it as a one-armed load balancer.
  2. Modified and created firewall rules on the existing ESG.
  3. Configured DHCP Relay via the ESG.
  4. Configured L2VPN via the ESG.

 

You have completed Module 4

Congratulations on completing Module 4.

If you are looking for additional information on deploying NSX, please review the NSX 6.4 Documentation Center via the URL below:

Proceed to any of the following modules:

Lab Module List:

Lab Captains:

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-1903-01-NET

Version: 20181105-164904