Lab Overview - HOL-SDC-1625 - VMware NSX Advanced
Welcome to the VMware NSX Advanced (HOL-SDC-1625) Hands-On Lab!
This lab demonstrates many of the newer and advanced features provided by VMware NSX for vSphere. The first three modules cover the new Cross-vCenter capabilities introduced in NSX 6.2; Modules 4 and 5 describe the L2 Bridging and L2VPN capabilities of NSX, respectively. Module 6 provides insight into how to manage NSX using the new Central CLI, and Module 7 shows how to programmatically control NSX through its RESTful API. Some of these modules require a significant amount of time: do not expect to complete the entire lab in a single session.
For the best learning experience, if you do not have any previous hands-on experience with NSX, we suggest starting with the "VMware NSX Introduction" lab (HOL-SDC-1603), as it provides more foundational content.
Thank you and enjoy the labs!
Lab Module List:
Each module can be run independently of the others. However, because Modules 2 and 3 depend on the Multi-vCenter configuration performed in Module 1, a Fast Forward script is provided and must be run if you plan to take Module 2 or 3 without having completed Module 1. Additional details are provided in the manual sections for Modules 2 and 3.
Lab Captains:
This lab manual can be downloaded from the Hands-on Labs Document site found here:
http://docs.hol.vmware.com/HOL-2016
This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
Many of the modules will have you enter Command Line Interface (CLI) commands. There are two ways to send CLI commands to the lab.
First, to send a CLI command to the lab console:
Second, a text file (README.txt) has been placed on the desktop of the environment providing you with all the user accounts and passwords for the environment.
When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Without full access to the Internet, this automated process fails and you see this watermark.
This cosmetic issue has no effect on your lab.
The Hands-On Labs environment should automatically adapt the screen resolution to the size of the browser window, to provide an optimal experience.
You can, however, manually modify the resolution from the Control Panel to fit your screen.
This session may contain product features that are currently under development.
This session/overview of the new technology represents no commitment from VMware to deliver these features in any generally available product.
Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.
Technical feasibility and market demand will affect final delivery.
Pricing and packaging for any new technologies or features discussed or presented have not been determined.
This lab consists of two separate sites (named Site A and Site B), each with its own vCenter, dedicated vSphere 6.0 hosts, storage, and network subnets. The vCenters share a common Platform Services Controller (PSC), which allows common Identity Management, Licensing, and Application Interaction across sites.
Site A consists of two clusters:
The two clusters share a common Management subnet (192.168.110.0/24), as well as a vMotion (10.10.30.0/24) and an IP storage network (10.10.20.0/24).
Networks dedicated to NSX VTEPs (VXLAN Tunnel End Points) are separate: the "Management & Edge Cluster" uses the 192.168.120.0/24 subnet, while "Compute Cluster A" uses 192.168.130.0/24. The VTEP subnets are routed together by an external router (vpodrouter).
Site B consists of a single cluster:
Site B has its own Management subnet (192.168.210.0/24), as well as separate vMotion (10.20.30.0/24), IP storage (10.20.20.0/24) and VTEP (192.168.230.0/24) networks.
Management, vMotion and VTEP networks are routed across the different sites to allow common management, Cross-vCenter vMotion and network extensibility (VXLAN).
All VTEP networks have a 1600 byte MTU configured, to allow VXLAN encapsulation to occur.
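If VXLAN connectivity ever needs to be checked from a host, the VTEP configuration and MTU can be verified from an ESXi shell session. The following is a minimal sketch only; the destination address 192.168.230.51 is just an example of a remote VTEP and should be replaced with one of the VTEP addresses shown in the diagram in the next step.

esxcli network ip interface list --netstack=vxlan

vmkping ++netstack=vxlan -d -s 1572 192.168.230.51

The first command lists the vmkernel interfaces bound to the VXLAN TCP/IP stack (the VTEPs). The second sends a ping with the don't-fragment bit set and a 1572 byte payload (1572 bytes plus 28 bytes of headers equals 1600), confirming that the full VXLAN MTU is carried end to end between VTEPs.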
The picture shows the vSphere hosts and how VMs are placed across the different clusters, as well as the Distributed Virtual Switches (DVS). VTEP IP addresses associated to the hosts are also displayed.
"Management & Edge Cluster" and "Compute Cluster A" are both managed by vCenter Server A (vcsa-01a.corp.local); "Compute Cluster B" is managed by vCenter Server B (vcsa-01b.corp.local). Both vCenters are connected to a common Platform Services Controller (psc-01a.corp.local) that resides on the management network of Site A.
NSX Transport Zone (TZ) configuration in the lab consists of the following:
Virtual machine web-04a resides on the "Management & Edge Cluster" and is attached to a VLAN-backed Port Group on VLAN 101; it will be used to demonstrate the L2 Bridging capabilities of NSX in Module 4.
When first accessing the lab, the topology shown in the picture is already implemented. Three NSX Logical Switches are configured (Web-Tier-01, App-Tier-01, DB-Tier-01) and a 3-Tier Web Application is configured, with a Load Balancer configured in One-Arm mode on the web tier. The Logical Switches are attached to a Distributed Logical Router (DLR), which is in turn attached to an NSX Edge Services Gateway (ESG). OSPF Dynamic Routing guarantees route exchange information between the DLR and the ESG, while BGP is used for routing between the ESG and the external router.
A second ESG is already configured on the Site A Uplink subnet and will be used during the module exercises.
Upon completion of Modules 1, 2, 3, and 4, the topology in the picture will be configured in the lab, as follows:
The L2VPN topology created in Module 5 is not shown.
Module 1 - Cross-vCenter NSX Networking
NSX 6.2 allows you to manage NSX environments containing multiple vCenter servers from a single primary NSX Manager.
Cross-vCenter NSX environments have many advantages:
In a cross-vCenter NSX environment, you can have multiple vCenter Servers, each of which must be paired with its own NSX Manager. One NSX Manager is assigned the role of primary NSX Manager, and the others are assigned the role of secondary NSX Manager.
The primary NSX Manager is used to deploy a universal controller cluster that provides the control plane for the cross-vCenter NSX environment. The secondary NSX Managers do not have their own controller clusters.
The primary NSX Manager can create universal objects, such as universal logical switches. These objects are replicated to the secondary NSX Managers by the NSX Replicator Service. You can view these objects from the secondary NSX Managers, but you cannot edit them there. You must use the primary NSX Manager to manage universal objects. The primary NSX Manager can be used to configure any of the secondary NSX Managers in the environment.
On both primary and secondary NSX Managers, you can create objects that are local to that specific vCenter NSX environment, such as logical switches, and logical (distributed) routers. They will exist only within the vCenter NSX environment in which they were created. They will not be visible on the other NSX Managers in the cross-vCenter NSX environment.
NSX Managers can be assigned the standalone role. This is equivalent to pre-NSX 6.2 environments with a single NSX Manager and single vCenter.
The primary NSX Manager can create universal objects that are replicated to secondary NSX Managers.
There is only one primary NSX Manager in a Cross-vCenter NSX environment.
The primary NSX Manager is used to deploy a universal controller cluster that provides the control plane for the multi-vCenter NSX environment. The secondary NSX Managers do not have their own controller clusters.
The primary NSX Manager can:
*****This is a cross-site lab so always pay attention to which site you are working and performing tasks!*****
Note: If the "Use Windows Session Authentication" is not available, User name: CORP\Administrator with a Password: VMware1! can be used to login.
Notice a new column has been added for Role. You will see 192.168.110.15 is now Primary and 192.168.210.15 is Standalone.
You will see a new zone scope of Universal type named Universal-Transport-Zone.
After a primary Multi-vCenter NSX Manager has been created, secondary NSX Managers can be added. Secondary NSX Managers are synchronized with, and use, the universal controller cluster that is configured and deployed by the primary NSX Manager. There can be up to seven secondary NSX Managers in a Multi-vCenter environment.
You now have two NSX Managers configured to work as one in a primary and secondary role.
Universal logical switches allow layer 2 networks to span multiple sites without having to purchase any additional physical network devices.
When you create a logical switch, if you select a universal transport zone, you create a universal logical switch. This switch is available on all clusters in the universal transport zone. The universal transport zone can include clusters in any vCenter in the multi-vCenter NSX environment.
There can be only one universal transport zone in a multi-vCenter NSX environment.
You must use a universal logical router to route between universal logical switches. If you need to route between a universal logical switch and a logical switch, you must use an Edge Services Gateway.
Both primary and secondary NSX Managers can have logical switches that are local to the environment in which they are created.
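For reference, a universal logical switch can also be created programmatically through the NSX REST API against the primary NSX Manager (the API is covered in Module 7). The sketch below is a non-authoritative example only: the universal transport zone scope ID (universalvdnscope-1) and the switch name are placeholder assumptions and must be replaced with the values from your environment.

curl -k -u admin:VMware1! -H "Content-Type: application/xml" -X POST https://nsxmgr-01a.corp.local/api/2.0/vdn/scopes/universalvdnscope-1/virtualwires -d "<virtualWireCreateSpec><name>ULS-Example-Tier</name><tenantId>virtual wire tenant</tenantId></virtualWireCreateSpec>"

Because the request is sent to the primary NSX Manager and targets a universal transport zone, the resulting logical switch is replicated to the secondary NSX Managers just like one created through the vSphere Web Client.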
When you are adding logical switches, it is important to keep in mind the particular topology that you are building. For example, the following simple topology shows two logical switches connected to a single distributed logical router (DLR). In this diagram, each logical switch is connected to a single VM. The two VMs can be on different hosts or the same host, in different host clusters or in the same host cluster. If a DLR does not separate the VMs, the underlying IP addresses configured on the VMs can be in the same subnet. If a DLR does separate them, the IP addresses on the VMs must be in different subnets (as shown in the example).
Following the procedure in the previous step, create Web, App, and DB tier universal logical switches.
Segment ID numbers for the logical switches will be assigned in the order of their creation.
Now that the environment contains multiple universal logical switches, the operation of those logical switches will be verified both within a vCenter, and across vCenters.
Attaching a VM in vCenter B to the Web Tier Universal logical switch will allow a test of connectivity across vCenter domains.
1. In the open console window, log in with the credentials root / VMware1!
2. Enter the following command to test cross-vCenter connectivity on the ULS-Web-Tier-02 universal logical switch: ping -c 3 172.17.10.12
ping -c 3 172.17.10.12
You are on web-03a with IP Address of 172.17.10.11.
(Remember to use the SEND TEXT option.)
This demonstrates connectivity between two VMs on the same universal logical switch managed by different vCenters.
Routing provides the necessary forwarding information between Layer 2 broadcast domains, thereby allowing you to decrease the size of Layer 2 broadcast domains and improve network efficiency and scale.
NSX extends this intelligence to where the workloads reside for East-West routing. This allows more direct VM-to-VM communication without the cost or latency of additional hops. At the same time, NSX logical routers provide North-South connectivity, thereby enabling tenants to access public networks.
To verify the VMs attached to a given logical switch, you have two options:
Note: This view is site specific. You will only see the objects connected to the logical switch within the site selected in the previous step.
Universal Logical (Distributed) Routers offer centralized administration and a routing configuration that can be customized at the universal logical router, cluster, or host level.
When you create a universal logical router you must choose whether to enable local egress, as this cannot be changed after creation. Local egress allows you to control what routes are provided to ESXi hosts based on the locale ID. If you do not enable local egress the locale ID is ignored and all ESXi hosts connected to the universal logical router will receive the same routes. Whether or not to enable local egress in a cross-vCenter NSX environment is a design consideration, but it is not required for all cross-vCenter NSX configurations.
Note: The option labeled 'Enable Local Egress' will be discussed at the end of the Cross-vCenter Logical Routing section. Please note its existence for now.
Note: Do not configure an IP address on the HA interface.
In this step, routing will be configured on the Universal Logical (Distributed) Router which was just created.
Double-click the new Universal Distributed Router
Verify that the OSPF process is permitted to learn any Connected prefixes as shown above.
Now that a logical routing topology has been established, connectivity between VMs attached to different logical switches must be verified.
Click inside the console and press Enter.
1. In the open console window, log in with the credentials root / VMware1!
2. Enter the following command to test cross-vCenter connectivity to a VM on the App tier universal logical switch: ping -c 3 172.17.20.11 (Remember to use the SEND TEXT option.)
ping -c 3 172.17.20.11
This demonstrates connectivity between two VMs on different universal logical switches, managed by different vCenters.
Enter the following command to test cross-vCenter connectivity to a VM on the DB tier universal logical switch: ping -c 3 172.17.30.11
ping -c 3 172.17.30.11
This confirms connectivity to the DB tier across vCenter environments.
In this step, OSPF routing will be configured on Perimeter-Gateway-02 in order to peer with Universal-Distributed-Router.
In the 'NSX Edges' view, double-click on 'Perimeter-Gateway-02'.
Publish Changes.
As Area 0 is already configured on the Perimeter Gateway, configuration can proceed directly to mapping an interface to that area.
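Once the OSPF configuration has been published, the adjacency between Perimeter-Gateway-02 and the Universal-Distributed-Router can optionally be checked from the NSX Manager Central CLI (introduced in Module 6). This is a sketch only; the edge ID edge-5 is an assumption based on the IDs used later in this lab and may differ in your environment.

show edge edge-5 ip ospf neighbor

A neighbor entry in FULL state for the Universal Distributed Logical Router's uplink address indicates that the peering is established.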
In order to enable a three-tier application across the cross-vCenter infrastructure, a one-armed load balancer will be connected to the universal logical web tier. The functionality of the application will then be verified.
To access the interface configuration of the edge:
Output in the format shown in the screenshot should be displayed.
Please note the IP address and hostname of the currently accessed web server from within the load balancer pool.
To further test the application, hold Shift while clicking the 'Refresh' button; this bypasses the cached entry and creates a new session to the other web server, as the pool uses the round-robin algorithm.
All sites in a multi-site cross-vCenter NSX environment can use the same physical routers for egress traffic. However, if egress routes need to be customized, the local egress feature must be enabled when the universal logical router is created. This allows you to customize routes at the universal logical router, cluster, or host level.
This example of a cross-vCenter NSX environment in multiple sites has local egress enabled. The edge services gateways (ESGs) in each site have a default route that sends traffic out through that site's physical routers. The universal logical router is configured with two appliances, one in each site. The appliances learn routes from their site's ESGs. The learned routes are sent to the universal controller cluster. Because local egress is enabled, the locale ID for that site is associated with those routes. The universal controller cluster sends routes with matching locale IDs to the hosts. Routes learned on the site A appliance are sent to the hosts in site A, and routes learned on the site B appliance are sent to the hosts in site B.
When you create a universal logical router you must choose whether to enable local egress, as this cannot be changed after creation. Local egress allows you to control what routes are provided to ESXi hosts based on the locale ID. If you do not enable local egress the locale ID is ignored and all ESXi hosts connected to the universal logical router will receive the same routes.
Each NSX Manager is assigned a Locale ID, which is set to the NSX Manager UUID by default. Using a site-specific uplink, each site can have a local routing configuration. This allows NSX 6.2 to support up to eight sites with local egress.
You can override the locale ID at the following levels:
Module 2 - Cross-vCenter NSX Security
Note: If you have not completed Cross-vCenter NSX Networking (module 1), execute the 'FastForward.ps1' script on the control center desktop by double-clicking on it.
Distributed Firewall in a cross-vCenter NSX environment allows centralized management of rules that apply to all vCenter Servers in your environment. It supports cross-vCenter vMotion which enables you to move workloads or virtual machines from one vCenter Server to another and seamlessly extends your software defined datacenter security.
As your datacenter needs scale out, the existing vCenter Server may not scale to the same level. This may require you to move a set of applications to newer hosts that are managed by a different vCenter Server. Or you may need to move applications from staging to production in an environment where staging servers are managed by one vCenter Server and production servers are managed by a different vCenter Server. Distributed Firewall supports these cross-vCenter vMotion scenarios by replicating firewall policies that you define for the primary NSX Manager on up to seven secondary NSX Managers.
Once you designate an NSX Manager as the primary Manager, you can create universal rules within a universal section. These rules are replicated on all secondary NSX Managers in your environment. Rules in other sections remain local to the appropriate NSX Manager.
The following Distributed Firewall features are not supported in a cross-vCenter NSX environment:
Service Composer is not supported in a cross-vCenter NSX environment, so you cannot use it to create distributed firewall rules in the universal section.
The following objects can be created and used as rule semantics within Distributed Firewall Rules in the universal section.
In this section, multiple universal IP set objects will be created to serve as the basis of cross-vCenter Distributed Firewall Rules.
Each module in this Hands-On Lab 1625 is self-contained, but Module 3 requires the successful completion of the previous Modules 1 and 2. Given the time constraints of the Hands-On Lab, we have provided a "Fast Forward" PowerShell script which automates the configuration steps in Modules 1 and 2 so that you can continue with Module 3 standalone.
If you are starting Module 3 without having successfully completed Modules 1 and 2, please execute the FastForward.ps1 PowerShell script located on the Windows desktop.
The script itself is very simple and does minimal validation, as its primary focus is to complete the Fast Forward activities. Running the script multiple times will generate errors because duplicate objects already exist, and may create multiple Universal Logical Switches, but the lab will not "break". Future versions of this script will improve error checking and rely on API response codes instead of Sleep timers.
Open a web browser from the lab desktop, and open the bookmark labeled 'Site A Web Client'.
Create the following New IP Sets.
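For reference, universal IP sets can also be created through the NSX REST API against the primary NSX Manager. The example below is a sketch only: the object name is a placeholder, the 172.17.10.0/24 value is taken from the universal web tier subnet used in this lab, and the universalroot-0 scope is what makes the object universal rather than local.

curl -k -u admin:VMware1! -H "Content-Type: application/xml" -X POST https://nsxmgr-01a.corp.local/api/2.0/services/ipset/universalroot-0 -d "<ipset><name>Universal_Web_Tier_Example</name><value>172.17.10.0/24</value></ipset>"

For this exercise, however, create the IP sets through the vSphere Web Client as shown in the following steps.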
It is now time to create Universal Firewall Rules based on the newly created Universal IP Set objects.
1. In the open console window, log in with the credentials root / VMware1!
2. Enter the following command to test cross-vCenter connectivity on the ULS-Web-Tier-02 universal logical switch: ping -c 3 172.17.10.12 (Remember to use the SEND TEXT option.)
ping -c 3 172.17.10.12
This demonstrates that previously functional connectivity is being blocked by the Web MicroSegmentation rule.
In this section, additional Universal Firewall Rules will be configured based on Universal Security Group objects.
It is now time to create additional Universal Firewall Rules based on Universal Security Group objects.
Create the final Universal Firewall Rule in this environment. The rule is shown in the screenshot above.
Expand the 'Default Section Layer3' firewall section at the bottom of the frame.
1. In the open console window, log in with the credentials root / VMware1!
2. Enter the following commands to test cross-vCenter connectivity between the VMs attached to the Universal Logical Switches:
ping -c 3 172.17.10.12
ping -c 3 172.17.20.11
ping -c 3 172.17.30.11
This demonstrates that previously functional connectivity is being blocked by the Universal Distributed Firewall Rules.
Please note the IP address and hostname of the currently accessed web server from within the load balancer pool.
3. Hold Shift while clicking the 'Refresh' button to bypass the cached entry and create a new session to the other web server, as the pool uses the round-robin algorithm.
This concludes the cross-vCenter Security module.
First you will set the Default Rule back to Allow on the Primary NSX Manager.
Remember that rule synchronization does not apply to the Default Section.
Module 3 - Local and Universal Design Considerations
Module 3 will review routing considerations between networks connected via a Local Distributed Logical Router and networks connected via a Universal Distributed Logical Router.
Currently, the lab is configured so that the two 3-Tier Web Applications must traverse their individual North/South Edge Services Gateways to reach each other, which forces all traffic across the common VLAN connecting the North/South Perimeter Edge Services Gateways to the core network.
In this lab, we will perform the following activities:
Notes:
ESG = Edge Services Gateway. Used as Perimeter Routers and Load Balancers in this lab module.
DLR = Distributed Logical Routers. Used as kernel based routers in this lab module. Two types of DLR - A Local DLR (LDLR) is used within a single vCenter environment, and a Universal DLR (UDLR) is shared across multiple vCenter environments. We will use both a Local and Universal DLR in this lab module.
The 3-Tier Local Web Application is configured across a single data center with local NSX network objects. Web access is via the OneArm-LoadBalancer-01, configured with the domain name https://webapp.corp.local, resolving to IP address 172.16.10.10.
The network topology for the 3-Tier Universal Web Application is shown. A Universal Distributed Logical Router connected to Universal Logical Switches provides L2 adjacency to Virtual Machines across two data centers.
Ingress and Egress are via the single North/South Edge Services Gateway named Perimeter-Gateway-02 located in Data Center Site A Management & Edge Cluster.
Web access is via the OneArm-LoadBalancer-02, configured with the domain name https://webapp-universal.corp.local, resolving to IP address 172.17.10.10.
As shown, routing between the two 3-Tier applications is through their respective Perimeter-Gateways via the core VLAN.
Validation checks ensure all components of the lab are correctly deployed and once validation is complete, status will be updated to Green/Ready. It is possible to have a Lab deployment fail due to environment resource constraints.
Each module in this Hands-On Lab 1625 is self-contained, but Module 3 requires the successful completion of the previous Module 1. Given the time constraints of the Hands-On Lab, we have provided a "Fast Forward" PowerShell script which automates the configuration steps in Module 1 so that you can continue with Module 3 standalone.
If you are starting Module 3 without having successfully completed Module 1, please execute the FastForward.ps1 PowerShell script located on the Windows desktop.
The script itself is very simple and does minimal validation, as its primary focus is to complete the Fast Forward activities. Running the script multiple times will generate errors because duplicate objects already exist, and may create multiple Universal Logical Switches, but the lab will not "break". Future versions of this script will improve error checking and rely on API response codes instead of Sleep timers.
In this exercise, we will validate that the Local Web Application is functioning and correctly load balancing between the two web server VMs. If the application does not function, this indicates that a previous exercise misconfigured the lab or that the lab did not start correctly. This application is configured to work from the initial lab deployment.
The IP subnet of 172.16.10.X represents the Web-Tier-01 Logical Switch connected to the Local-Distributed-Router, with a gateway address of 172.16.10.1. The Edge Services Gateway Load Balancer is on 172.16.10.10 and is configured to load balance between VMs web-01a and web-02a.
The Universal Web Application is a 3-Tier application connected to Universal Logical Switches and a Universal Logical Router with VM's split between two data centers.
In this exercise, we will validate the Universal Web Application is functioning and correctly load balancing between two web servers.
Note: If the application does not function, this indicates a failure for the Lab to deploy, the previous Module 1 steps were not performed correctly, or the PowerShell Fast Forward script did not complete correctly. Redeploying the lab and using the Fast Forward script is likely the fastest path to resolution.
If your attempt to bring up the web page fails shortly after the Fast Forward PowerShell script has completed, Universal Distributed Logical Router and associated Edge appliance deployment may not have completed populating the routing table. Wait a minute and try again.
The IP subnet of 172.17.10.X represents the ULS-Web-Tier-02 Logical Switch connected to the Universal-Distributed-Router, with a gateway address of 172.17.10.1. The Edge Services Gateway Load Balancer is on 172.17.10.10 and is configured to load balance between VMs web-03a and web-01b.
This demonstrates that the application and universal network objects are correctly configured and functioning, together with an NSX Edge Services Gateway configured to provide load balancing for the application web tier.
In this step, we will use the NSX Manager CLI interface to validate the routing configuration on Edge Perimeter-Gateway-01 and Perimeter-Gateway-02. Perimeter-Gateway-01 will only know about the routes associated with the Local Distributed Logical Router (172.16.X.X) and Perimeter-Gateway-02 will only know about the routes associated with the Universal Distributed Logical Router (172.17.X.X).
show edge all
The active Edges in the NSX environment are displayed.
show edge edge-3 ip route
The routing table from edge-3 Perimeter-Gateway-01 is displayed. Note the routes from the Local Distributed Logical Router (172.16.X.X) are displayed only. Access to the networks connected to the Universal Distributed Logical Router (172.17.X.X) are via the core network via the default route 0.0.0.0/0.
show edge edge-5 ip route
The routing table from edge-5 Perimeter-Gateway-02 is displayed. Note the routes from the Universal Distributed Logical Router (172.17.X.X) are displayed only. Access to the networks connected to the Local Distributed Logical Router (172.16.X.X) are via the core network via the default route 0.0.0.0/0.
Equal Cost Multi Pathing (ECMP) is now going to be enabled across the two Perimeter Edge Services Gateways and the Local and Universal Distributed Logical Routers, in preparation for the multipathing changes that follow. We will start with ESG Perimeter-Gateway-01.
Optionally, click on the Use Windows session authentication box then click Login. You will be logged on to the Site A vSphere Web Client using the credentials of the Control Center Windows machine. (Corp\Administrator).
The menu options are very slightly different between Local Distributed Logical Routers and Edge Services Gateways so we will step through the sequence. Feel free to click ahead.
The menu options are only very slightly different between Local Distributed Logical Routers and Universal Distributed Logical Routers, so we will step through the sequence.
Note that the ID of the edge may not match the screen image, because a unique ID is created when the edge is deployed.
Although ECMP is now enabled on the Local and Universal DLRs and the two Perimeter ESGs supporting them, we have not yet introduced multiple paths into the environment, so there is no change to the current routing configuration. The environment is now prepared for equal cost multipathing.
This is one example of an ECMP network topology. There are others.
The ESG interface configuration screen needs the full screen to see the buttons at the bottom of the window. Maximizing the browser window will allow you to drag the screen so that you can click on the OK button.
The ESG interface configuration screen needs the full screen to see the buttons at the bottom of the window. Dragging the Edit Interface window will allow you to drag the screen so that you can click on the OK button.
This successfully added a new interface to the ESG Perimeter-Gateway-01. Note that on the 192.168.5.8/29 network, the existing Perimeter-Gateway-02 is 192.168.5.9/29 and the UDLR (Universal Distributed Logical Router) uses 192.168.5.10-11/29.
Towards the bottom of the screen you have the Area to Interface Mapping.
In the next step, the "Recent Tasks" window has been closed.
OSPF Dynamic Routing has now been configured on the new interface.
The ESG interface configuration screen needs the full screen to see the buttons at the bottom of the window. Dragging the Edit Interface window will allow you to drag the screen so that you can click on the OK button.
This successfully added a new interface to the ESG Perimeter-Gateway-02. Note that on the 192.168.5.0/29 network, the existing Perimeter-Gateway-01 is 192.168.5.1/29 and the LDLR (Local Distributed Logical Router) uses 192.168.5.2-3/29.
Towards the bottom of the screen you have the Area to Interface Mapping.
In the next step, the "Recent Tasks" window has been closed.
OSPF Dynamic Routing has now been configured on the new interface.
The Local Distributed Logical Router is connected to both Perimeter Edges via its Local Logical Switch Transit-Network-01 subnet 192.168.5.0/29.
The Universal Distributed Logical Router is connected to both Perimeter Edges via its Universal Logical Switch ULS-Transit-Network-02 subnet 192.168.5.8/29.
All routing between the Local DLR and Universal DLR is via the Perimeter Edges and not the core network.
The following exercises will require you to enter commands and/or configuration. The text boxes within the lab manual, such as the example below, combined with the SEND TEXT function allow you to copy and paste commands and configuration into the lab.
As an example only, the steps to use the SEND TEXT function are as follows:
ls -l | grep test
In this step, we will use the NSX Manager CLI interface to validate the routing configuration on Edges Perimeter-Gateway-01 and Perimeter-Gateway-02.
show edge all
The active Edges in the NSX environment are displayed.
show edge edge-3 ip route
The routing table from edge-3 Perimeter-Gateway-01 is displayed. Note the routes from the Local Distributed Logical Router (172.16.X.X) and the Universal Distributed Logical Router (172.17.X.X) are displayed showing connectivity from this Edge to both the Local and Universal Distributed Logical Routers.
show edge edge-5 ip route
The routing table from edge-5 Perimeter-Gateway-02 is displayed. Note the routes from the Local Distributed Logical Router (172.16.X.X) and the Universal Distributed Logical Router (172.17.X.X) are displayed showing connectivity from this Edge to both Local and Universal Distributed Logical Routers.
With the changes to the Perimeter Gateways, we now have routing between the Local Distributed Logical Router and Universal Distributed Logical Router via the Perimeter Gateways using Equal Cost Multi Pathing without requiring traffic to traverse the core network.
Universal and Local Distributed Logical Routers are a powerful capability of NSX allowing for Layer 2 and Layer 3 network extensibility across multiple vCenter environments. Combined with other VMware technologies, such as vSphere Site Recovery Manager, multisite BC/DR solutions and VM mobility are easily accomplished with NSX for network virtualization.
Module 3 will review routing considerations between networks connected via a Local Distributed Logical Router and networks connected via a Universal Distributed Logical Router.
Currently, the lab is configured so that the two 3-Tier Web Applications must traverse their individual North/South Edge Services Gateways to reach each other, which forces all traffic across the common VLAN connecting the North/South Perimeter Edge Services Gateways to the core network.
In this lab, we will perform the following activities:
Notes:
ESG = Edge Services Gateway. Used as Perimeter Routers and Load Balancers in this lab module.
DLR = Distributed Logical Routers. Used as kernel based routers in this lab module. Two types of DLR - A Local DLR (LDLR) is used within a single vCenter environment, and a Universal DLR (UDLR) is shared across multiple vCenter environments. We will use both a Local and Universal DLR in this lab module.
The 3-Tier Local Web Application is configured across a single data center with local NSX network objects. Web access is via the OneArm-LoadBalancer-01, configured with the domain name https://webapp.corp.local, resolving to IP address 172.16.10.10.
The network topology for the 3-Tier Universal Web Application is shown. A Universal Distributed Logical Router connected to Universal Logical Switches provides L2 adjacency to Virtual Machines across two data centers.
Ingress and Egress are via the single North/South Edge Services Gateway named Perimeter-Gateway-02 located in Data Center Site A Management & Edge Cluster.
Web access is via the OneArm-LoadBalancer-02, configured with the domain name https://webapp-universal.corp.local, resolving to IP address 172.17.10.10.
Validation checks ensure all components of the lab are correctly deployed and once validation is complete, status will be updated to Green/Ready. It is possible to have a Lab deployment fail due to environment resource constraints.
In this exercise, we will validate that the Local Web Application is functioning and correctly load balancing between the two web server VMs. If the application does not function, this indicates that a previous exercise misconfigured the lab or that the lab did not start correctly. This application is configured to work from the initial lab deployment.
The IP subnet of 172.16.10.X represents the Web-Tier-01 Logical Switch connected to the Local-Distributed-Router, with a gateway address of 172.16.10.1. The Edge Services Gateway Load Balancer is on 172.16.10.10 and is configured to load balance between VMs web-01a and web-02a.
The Universal Web Application is a 3-Tier application connected to Universal Logical Switches and a Universal Logical Router with VM's split between two data centers.
In this exercise, we will validate the Universal Web Application is functioning and correctly load balancing between two web servers.
If the application does not function, this indicates that the lab failed to deploy, that the previous exercises were not performed correctly, or that the PowerShell Fast Forward script did not complete correctly.
If your attempt to bring up the web page fails shortly after the Fast Forward PowerShell script has completed, Universal Distributed Logical Router and associated Edge appliance deployment may not have completed populating the routing table. Wait a minute and try again.
The IP subnet of 172.17.10.X represents the ULS-Web-Tier-02 Logical Switch connected to the Universal-Distributed-Router, with a gateway address of 172.17.10.1. The Edge Services Gateway Load Balancer is on 172.17.10.10 and is configured to load balance between VMs web-03a and web-01b.
This demonstrates that the application and universal network objects are correctly configured and functioning, together with an NSX Edge Services Gateway configured to provide load balancing for the application web tier.
A new Universal Logical Switch is required to support the migration of VMs from a Local Logical Switch
Optionally, click on the Use Windows session authentication box then click Login. You will be logged on to the Site A vSphere Web Client using the credentials of the Control Center Windows machine. (Corp\Administrator).
A new Universal Logical Switch will be created.
The app-01a VM is now disconnected from the App-Tier-01 Logical Switch and connected to the new ULS-App-Tier-01 Universal Logical Switch.
We will now remove the existing Gateway address for the App-Tier network from the Local DLR.
Note that the ID of the edge may not match the screen image, because a unique ID is created when the edge is deployed.
This will create a new interface in the Universal Distributed Logical Router connected to the Universal Logical Switch.
In this exercise, we will validate that the Local Web Application is functioning and correctly load balancing between the two web server VMs. If the application does not function, this indicates that a previous exercise misconfigured the lab or that the lab did not start correctly. This application is configured to work from the initial lab deployment.
The IP subnet of 172.16.10.X represents the Web-Tier-01 Logical Switch connected to the Local-Distributed-Router, with a gateway address of 172.16.10.1. The Edge Services Gateway Load Balancer is on 172.16.10.10 and is configured to load balance between VMs web-01a and web-02a.
Local Logical Switches are connected to Local Distributed Logical Routers and Universal Logical Switches are connected to Universal Distributed Logical Routers.
The migration of VMs between Local and Universal Logical Switches is a straightforward step-by-step process, performed via the vSphere Web UI or the NSX REST API.
Module 4 - NSX L2 Bridging
NSX provides in-kernel software L2 Bridging capabilities that allow organizations to seamlessly connect traditional workloads and legacy VLANs to virtualized networks. L2 Bridging is widely used in brownfield environments to simplify the introduction of logical networks, as well as in other scenarios that involve physical systems requiring L2 connectivity to virtual machines.
This module will guide you through the configuration of an L2 Bridging instance between a traditional VLAN and an NSX Logical Switch.
In NSX-V 6.2 this function has been enhanced, by allowing bridged Logical Switches to be connected to Distributed Logical Routers. This operation was not permitted in previous versions of NSX.
The picture above shows the L2 Bridging enhancements provided in NSX 6.2:
You will now configure NSX L2 Bridging with NSX 6.2 in the newly supported configuration.
You will now access the vSphere Web Client with a browser (either Mozilla Firefox or Google Chrome) from the ControlCenter, in order to do the necessary configurations for this module.
You can now verify the initial configuration. The environment comes with a Port Group on the Management & Edge cluster, named "vds-mgt_Bridge Network", configured on VLAN ID 101. A web server VM, named "web-04a", is attached to this Port Group which, at the moment, is isolated from the network. The picture shows the topology.
Verify that the Port Group is configured on physical VLAN ID 101.
Once the console window is open, click in the middle of the screen and press any key to dismiss the screen saver.
ping -c 3 172.16.10.1
Wait until the ping times out: you have verified that the VM is isolated, as there are no other devices on VLAN 101 and the L2 Bridging is not configured yet.
You will now enable NSX L2 Bridging between VLAN 101 and the Web-Tier-01 Logical Switch, so that VM "web-04a" will be able to communicate with the rest of the network. With NSX-V 6.2 it is now possible to have an L2 Bridge and a Distributed Logical Router connected to the same Logical Switch. This represents an important enhancement, as it simplifies the integration of NSX in brownfield environments as well as the migration from legacy to virtual networking.
Do not close the "web-04a" console tab as you will need it later.
NSX Bridging is implemented in the hypervisor and is configured in the Logical Router configuration page. You will configure L2 Bridging on the Logical Router that is already deployed in the environment ("Local-Distributed-Router").
Verify the published configuration. You will notice the "Routing Enabled" message: it means that this L2 Bridge is also connected to a Distributed Logical Router, which is an enhancement in NSX-V 6.2.
NSX L2 Bridging has been configured. You will now verify L2 connectivity between the "web-04a" VM, attached on VLAN 101, and the machines connected to the "Web-Tier-01" Logical Switch. You will also reconfigure the NSX Load Balancer to add "web-04a" to the pool.
ping -c 3 172.16.10.1
The ping is now successful: you have verified connectivity between a VM attached on VLAN 101 and the Distributed Logical Router that is the default gateway of the network, through a L2 Bridge provided by NSX!
Note: you might experience "duplicate" pings during this test (responses appearing as DUPs): this is due to the nature of the Hands-On Labs environment and is not going to happen in a real scenario.
You will now add "web-04a" to the existing NSX Load Balancer pool.
You'll notice that there are currently 2 web servers in the pool: web-01a (172.16.10.11) and web-02a (172.16.10.12).
In the dialog box, type the new member information as follows:
You have now verified that "web-04a" is accessible through the Load Balancer, and is also able to communicate to the application server ("app-01a", 172.16.20.11) on another subnet via the NSX Distributed Logical Router (otherwise the application wouldn't work).
Now that network configuration is verified, you can configure the Distributed Firewall (DFW) to enable Microsegmentation on the web tier network for the bridged VM as well.
At the moment, VLAN 101 is bridged with "Web-Tier-01" Logical Switch, so that the VMs "web-04a" (172.16.10.14) , "web-01a" (172.16.10.11) and "web-02a" (172.16.10.12) share the same L2 segment.
NSX Distributed Firewall filters traffic at the virtual machine network interface card (vNIC) level, and thus applies to VMs attached to both Logical Switches and Distributed Port Groups.
The current DFW configuration has a "block" rule that inhibits communications between web-01a and web-02a. You will add web-04a to a Security Group and verify that Microsegmentation can be achieved on VMs connected through L2 Bridging as well.
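As a side note, the firewall filters actually programmed for these VMs can be inspected from the NSX Manager Central CLI (covered in Module 6) without logging in to the ESXi host. The commands below are a sketch only; the host and VM IDs are hypothetical placeholders, and the real IDs can be retrieved with the Central CLI commands shown in Module 6.

show dfw host host-32 summarize-dvfilter
show dfw vm vm-216

The first command summarizes the dvfilters applied on the selected host, and the second shows the filters and rule sets attached to the vNICs of the selected VM.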
Initially, there is no policy that restricts "web-04a" communication.
ping -c 3 web-01a
The ping is successful, meaning that the Distributed Firewall allows the communication.
Note: you might experience "duplicate" pings during this test (responses appearing as DUPs): this is due to the nature of the Hands-On Labs environment and is not going to happen in a real scenario.
You will notice there is a "Web Tier Micro-segmentation" rule that blocks intra-group traffic within the "Local_Web_Tier" Security Group.
As previously observed, this rule does not prevent "web-04a" from pinging "web-01a", as the former VM is not in the "Local_Web_Tier" Security Group.
Verify that "web-04a" is now part of the Security Group and close the popup window
Now verify that the Micro-segmentation rule is enforced on "web-04a" as well.
ping -c 3 web-01a
The ping is not successful, meaning that the Distributed Firewall is correctly blocking the communication. Microsegmentation is enabled on all web VMs, either connected to a Distributed Port Group on a VLAN, or to a Logical Switch, even when L2 Bridging is enabled between the two.
If you want to proceed with the other modules of this Hands-On Lab, make sure to follow these steps to disable the L2 Bridging, as the example configuration created in this specific environment could conflict with other sections, such as L2VPN.
You will see only the "Bridge-01" instance that you created before, which is highlighted by default.
Verify that the Bridge instance has been deleted.
Congratulations, you have successfully completed the NSX L2 Bridging module!
Module 5 - NSX L2VPN
In this module you will be utilizing the L2VPN capabilities of the NSX Edge Gateway to extend a L2 boundary between two separate vSphere sites. To demonstrate a use case based on this capability, you will be placing a VM located on Site B of your lab environment within the pool of load balanced servers that are balanced by the Edge Gateway "OneArm-LoadBalancer-01." The topology diagram above gives a simplified picture of the environment you will end up with at the end of this section.
Move to the next step to continue and validate whether your lab environment is ready.
Validation checks ensure all components of the lab are correctly deployed and once validation is complete, status will be updated to Green/Ready. It is possible to have a Lab deployment fail due to environment resource constraints.
Given the amount of data being shown within the vSphere Web Client, one might find it advantageous to slightly lower the zoom of the Google Chrome Browser in the Control Center to increase the viewing area within the vSphere Web Client. To decrease the Zoom:
You will be logged on to the Site A vSphere Web Client using the credentials of the Control Center Windows machine. (Corp\Administrator).
To create the L2VPN Server service, you must first deploy an NSX Edge Gateway for that service to run on.
The New NSX Edge wizard will appear, with the first section "Name and Description" displayed. Enter in the following values corresponding to the following numbers. Leave the other fields blank or at their default values.
In the Settings section of the New NSX Edge Wizard, perform the following actions:
In the "Add NSX Edge Appliance" modal popup window that appears, enter in the following values:
Ensure that the Logical Switch tab is selected, and perform the following actions:
Before continuing, review the following settings:
For the Firewall and HA section, configure the following properties:
Before you configure the newly deployed NSX Edge for L2VPN connections, a number of preparatory steps will need to be taken first, including:
1.) Removing the Logical Interface (LIF) from the DLR "Local-Distributed-Router" that represents the IP address of 172.16.10.1.
2.) Adding a Trunk Interface to the L2VPN-Server Edge Gateway.
3.) Adding a Sub Interface to the L2VPN-Server Edge Gateway.
4.) Configuring dynamic routing (OSPF) on the L2VPN-Server Edge Gateway.
The next set of steps will have you walking through these four actions. Continue to the next step to begin with removing the LIF from the existing "Local-Distributed-Router"
Navigate to the list of NSX Edges again - and make sure the dropdown list next to NSX Manager at the top is set to "192.168.110.15 (Role: Primary)".
To remove the LIF representing the IP address of "172.16.10.1," perform the following actions:
Now that the LIF is removed, you will be configuring a Trunk Interface on the L2VPN-Server Edge Gateway, and a Sub Interface on that Trunk interface to take the place of the LIF you just removed.
In the Edit NSX Edge Interface window that comes up, enter the following values:
In the "Connect NSX Edge to a Network" popup, perform the following actions:
In the "Add Sub Interface" popup, enter in the following values.
Next, you will be configuring dynamic routing on this Edge Gateway.
Stay within the Routing sub-tab, and
In the popup for "New Area to Interface Mapping," configure the following values:
In the "New Redistribution criteria" popup, configure the following values:
Once complete, all pre-requisites have been performed to continue on with configuring the L2VPN service on this Edge Gateway.
Now that the 172.16.10.1 address belongs to the L2VPN-Server Edge Gateway, and it is now distributing its routes dynamically via OSPF, you will begin to configure the L2VPN service on this Edge Gateway so that the Edge acts as "Server" in the L2VPN.
In the L2 VPN server settings, configure the following values:
In the "Select Object" popup, perform the following actions:
That concludes the configuration for the L2 VPN. Next, you will be deploying another new NSX Edge Gateway on the vSphere installation on Site B, which will act as the L2 VPN client.
Now that the server side of the L2VPN is configured, you will move on to deploying another NSX Edge Gateway to act as the L2 VPN client.
For the options here, select the following values:
For the Settings section, configure the following values:
In the "Add NSX Edge Appliance" popup, configure the following values:
For the parameters on this interface, enter in the following values:
In the "Connect NSX Edge to a Network" popup, perform the following actions:
In the Default Gateway Settings section, configure the following values:
In the "Firewall and HA" section, perform the following actions:
Just like with the L2VPN-Server Edge Gateway, there is also a need to add a Trunk interface to this Edge. To bring up the configuration window for the new interface, perform the following actions:
In the Edit NSX Edge Interface popup, enter the following values:
Configure the Sub Interface with the following values:
To begin configuring the L2VPN client, perform the following actions:
For the client settings, enter in the following values:
To add a new Sub Interface to the L2 VPN service, perform the following actions:
To enable the L2VPN Client service, click the Enable button here as shown in the screenshot.
Now that the L2VPN is up, it's time to add the web-02b VM to the Load Balancer pool on the OneArm-LoadBalancer-01 Edge Gateway on Site A.
ping -c 3 172.16.10.10
ping -c 3 172.16.10.11
ping -c 3 172.16.10.1
Once you're done, go back to the list of NSX Edges within the Networking & Security section of the vSphere Web Client.
In the Edit Pool window, you are going to simply add one more member for the load balancing pool - web-02b which is located at the IP address of 172.16.10.13.
Enter the following values in the "New Member" window:
You may need to click the refresh button a couple of times, but you should eventually see that the page is also being served by the VM web-02b on IP address 172.16.10.13, successfully demonstrating the addition of a new Load Balanced Pool member from another site via a L2VPN connection!
For the next section, you'll be deploying a standalone Edge Gateway specifically for the purpose of acting as a L2VPN client. This is to simulate the use case of connecting a L2VPN to a remote site that has a vSphere instance that is not managed by NSX. That means you'll need to remove your L2VPN-Client NSX Edge Gateway.
In the previous section, you deployed two separate NSX Edge Gateways - one on Site A that acted as the L2VPN server (aptly named "L2VPN-Server"), and one on Site B that acted as the L2VPN client. The goal of that exercise was to show how one could extend an L2 domain across two separate sites, demonstrated by including a VM on Site B in the Load Balancer pool of a Load Balancer service hosted by an Edge Gateway on Site A.
At the end of that section, you were told to delete the L2VPN-Client Edge Gateway. In this section, you will be simulating the same use case as the previous section, but this time the assumption is that Site B does not have NSX deployed, so instead you will deploy what is called a "Standalone Edge." This is an OVF file that represents an Edge Gateway, but has the specific purpose of acting as an L2VPN client to be deployed on a vSphere instance that is not managed by NSX. By the end of this section, you will have replicated the same service functionality as the previous section, restoring L2 connectivity between both sites and the three web VMs shown in the topology above.
Next, perform the following actions:
In the window that comes up, perform the following actions:
In the "Review details" section, perform the following actions:
In the "Select name and folder" section of the OVF deployment wizard, perform the following actions:
In the "Select storage" section of the OVF deployment wizard, perform the following actions:
At this point, you'll start to notice that this is starting to look like the configuration of how you configured the NSX Edge Gateway earlier to act as the L2VPN client, and that's a very astute observation. Here you will be configuring which of the two Standalone Edge network interfaces acts as the Public connectivity interface, and which acts as the Trunk interface. Perform the following actions:
*Note: In a production environment, one would set these three passwords to be something different, but please keep in mind that they need to be at least twelve characters in length.
Next, you will be configuring the credentials for the Standalone Edge as follows:
Configure the Uplink Interface as follows:
For the L2VPN settings, enter in the following values:
*Note - You will be changing the VLAN of the VM Network port group on Site B to VLAN 103 later in this section. The number 1 in parentheses represents the Tunnel ID of 1.
For the Sub Interfaces settings, enter in the following values:
In the "Ready to complete" section of the OVF deployment wizard, perform the following actions:
Monitor the progress of the OVF being deployed. Once it has finished and powered on, continue to the next step.
When utilizing an Edge Gateway deployed directly from NSX that acts as the L2VPN client, the steps below aren't needed, but because we're simulating a remote site that is *not* managed by NSX, one will need to modify some settings on the Distributed Virtual Switch, as well as the Trunk port group on Site B. Continue to the next step to begin that process.
Open the PuTTY application, and perform the following actions in the PuTTY configuration window:
Part of getting the Trunk port group on the Distributed Virtual Switch ready is obtaining the DVPort ID for the interface of the Standalone Edge that is on the Trunk port group. The interface will always be the "eth1" interface, and you can find the corresponding DVPort ID for that interface by running the following command after connecting via SSH to ESXi host "esx-01b.corp.local."
Remember that you can use the SEND TEXT option to send CLI commands to the console.
esxcfg-vswitch -l
Make a note of that DVPort ID for the next step. In the screenshot above it's ID 65, but it may be different for your lab once you reach this step.
To enable the sink port on the Trunk port group, run the following command in the same SSH session you have open for esx-01b.corp.local:
net-dvs --enableSink 1 -p <DVPortID> vds-site-b
Where <DVPortID> is listed, replace it with the DVPort ID obtained from the previous step. For example, based on the screenshot from the previous step, the command would be:
net-dvs --enableSink 1 -p 65 vds-site-b
To verify the sink port setting, run:
net-dvs -l | grep "port\ [0-9]\|SINK\|com.vmware.common.alias"
This should return output similar to the screenshot shown here. You may now close this SSH session and continue to the next step.
To edit the VLAN of the VM Network Port Group...
That concludes the changes needed for the distributed port groups to support the Standalone Edge's L2VPN client. At this point, connectivity should be restored between the VM "web-02b" and the other VMs on the 172.16.10.0/24 network that are on Site A. Continue to the next step to confirm that connectivity.
ping 172.16.10.13
If the pings return, then connectivity should be restored, and you can now open up a web browser and attempt to see if the VM "web-02b" is still reachable from the Load Balancer configured in the previous section.
After opening the Google Chrome web browser, perform the following actions:
Congratulations! You have completed the module for the L2VPN service in this lab. Next up - you will be learning about the new centralized CLI commands available from the NSX Manager.
Module 6 - NSX Operations Central CLI
In this module, you will be exploring a terrific new feature to assist in operational activities when working with NSX for vSphere, the new NSX Operations Central CLI.
In previous versions, if an administrator wanted to gain details on constructs such as the NSX Edge Gateways (as well as the services running on them), Distributed Logical Routers, and Logical Switches, they would require console access to one or more of the following:
In NSX for vSphere version 6.2, one can simply access the console of the NSX Manager to gather such details, rather than jumping between multiple console/SSH sessions. This gives administrators a more streamlined path to operational data and can help speed up troubleshooting in an environment where NSX for vSphere is deployed.
Before going into more detail about what those new commands are, and the types of scenarios you will be going through in this module, please proceed to the next step where you will be performing a number of setup steps to get logged into the primary NSX Manager.
Validation checks ensure that all components of the lab are correctly deployed; once validation is complete, the status will be updated to Green/Ready. It is possible for a lab deployment to fail due to environment resource constraints.
You will be performing a number of commands on the primary NSX Manager (nsxmgr-01a.corp.local) via the Windows SSH client PuTTY.
Within the PuTTY Configuration window, to load the connection details for the primary NSX Manager (nsxmgr-01a.corp.local)...
This will open a new SSH session with the admin user to the primary NSX Manager. In the window that comes up, you will be prompted for the password of the admin user. Enter in the password VMware1! and press the enter key to continue.
Before continuing, it is worth taking a look at a list of the available commands in the NSX Manager in NSX for vSphere version 6.2. To do so, type in the following command at the command prompt:
list
As you can see, there are quite a few new options available for obtaining information from the NSX Manager CLI, including options that used to require an administrator to gain console access to individual NSX Edge Gateways, ESXi Hosts, or NSX Controllers. Specifically, you'll be looking at the commands for dealing with the following situations:
Please leave this PuTTY window open, and proceed to the next step, where you will start with taking a look at some commands that will help out in troubleshooting VXLAN connectivity.
Note: All commands mentioned in this module are written in their entirety in the file "README.txt" on the Control Center desktop, in case you would like to simply copy and paste the commands.
In the previous versions of NSX for vSphere, it was necessary to SSH/console into the NSX Manager, ESXi hosts, and NSX Controllers to get troubleshooting information for constructs like:
In this section, you will be exposed to a number of new commands available from the NSX Manager CLI to allow an administrator to gather details regarding the above in one place. Continue to the next step to start gathering details about the vSphere Clusters, ESXi Hosts, and Virtual Machines on matters pertaining to network virtualization.
Note: You may find it useful to increase the width of the PuTTY window, or maximize the size of the window before proceeding as some of the output from the following commands may run over more than one line, which may make it difficult to read.
For a majority of the commands covered in this module, it's required to know a specific Host-ID (ESXi Host), Domain-ID (vSphere Cluster), VM-ID (Virtual Machine), and/or vNIC-ID (Virtual NIC on Virtual Machine). The following commands will walk you through how to obtain those values. The specific IDs will be mentioned in the later sections of this module, but this is how one would normally find those identifiers.
To list all clusters, for NSX Manager-01a found in Site A, run the following command:
show cluster all
The output received should be similar to the screenshot above, with columns for:
Let us find out more information about "Compute Cluster A," so continue to the next step to find out how to retrieve a list of hosts for that cluster.
To list the set of hosts in a vSphere cluster, run the following command:
show cluster domain-c33
You will remember that "domain-c33" was the value under the "Cluster-ID" column for the previous command run, so that's what will be passed into this command to get a list of hosts in the "Compute Cluster A" vSphere cluster. The output received should look like what's shown in the screenshot, with columns for:
Perhaps, in a troubleshooting scenario, you find that a VM lacking connectivity happens to be on a host that isn't fully prepared for NSX and does not show a status of "Ready" for its network virtualization components. Now you know a quick way to query the hosts and find out, without having to log into the vSphere Web Client.
You will find out more about the VMs on the two hosts in this cluster in the following step.
To list the Virtual Machines (VMs) on each of the hosts, run the following two commands:
show host host-28
show host host-32
The values "host-28" and "host-32" refer to the Host IDs of both esx01a.corp.local and esx02a.corp.local, respectively. These values will be passed into the "show host" command to retrieve the following values about the VMs running on these hosts:
Going forward, you will be singling out the virtual machine "app-01a", so the VM ID "vm-217" will be used quite often for other commands used in this module. To find out more about the vNIC details of "app-01a," proceed to the next step.
Enumerating the vNIC details of a VM can be performed with the following command:
show vm vm-217
Here, we are passing the VM ID of "vm-217" which is referring to the VM "app-01a." The output received should be similar to the screenshot shown, with the following properties:
Next, you will gather more details regarding that specific vNIC.
To obtain further details about the vNIC of a VM, use the command "show vnic <vNIC ID>." For the vNIC of the "app-01a" VM, enter in the following command:
show vnic 502e8300-5e65-8aa6-8593-49472d923190.000
In the results, you will notice the following details being returned:
Now that you have been introduced to some preliminary commands to gather information such as the VM ID, vNIC ID, Host ID, and Cluster ID, you will be using those to dive into some additional details about the Logical Switches themselves in the next section.
Next up, you'll be going through the set of options available in the "show logical-switch" command. Specifically, you'll be going through the following scenarios:
Continue to the next step to learn about how to list all logical switches within the new NSX Manager Central CLI.
To list all the Logical Switches under the management of this NSX Manager, run the following command:
show logical-switch list all
If the command has been run successfully, you should see results similar to the screenshot here. The values you should observe returned from this command are:
Next, you'll be obtaining some additional details on logical switches specific to an ESXi host.
To list the verbose details of all logical switches on a host, run the following command (you may replace host-28 with host-32 to view details for the other host):
show logical-switch host host-28 verbose
This will display quite a lot of information regarding the logical switches running on the host, and you will need to use the Space key to fully scroll through all of it before getting the command prompt to come back up again. Some details worth noting here are:
Next, you will observe some statistics pertaining to logical switches on a particular ESXi host.
To obtain statistics regarding logical-switches on a host, run the following command:
show logical-switch host host-28 statistics
The value "host-28" here can be replaced with "host-32" to obtain statistics for the second host. In the case of a lack of L2 traffic between VMs on the same Logical Switch on different ESXi hosts, the counters here would be of use to watch for troubleshooting purposes.
If you wanted to know what hosts have VMs on a specific VXLAN Network Identifier (VNI), you can run the following command:
show logical-switch list vni 5002 host
Here, we're displaying all ESXi hosts that currently have VMs on a specific VNI. The details shown are:
1.) ID - This is specifically referring to the Host ID, a unique identifier for the ESXi host.
2.) HostName - The fully qualified domain name for the ESXi host.
3.) VdsName - The Distributed Virtual Switch that the ESXi host resides on with the associated logical switch/VNI.
To determine if a particular VNI has VTEP/MAC Table information on a NSX Controller, perform the following command:
show logical-switch controller controller-2 vni 5002 vtep
It's possible that this command may not return any details, so if that occurs, replace the "controller-2" in the command above with either "controller-1" or "controller-3". The IP addresses shown here refer to the VTEP IP addresses on the ESXi hosts.
To list ARP Table entries for a specific VNI on a NSX Controller, run the following command:
show logical-switch controller controller-2 vni 5002 arp
As before, it's possible that this command may not return any details, so in the event of that occurring, try replacing "controller-2" in the command above with either "controller-1" or "controller-3". The IP address shown here belongs to a VM.
To display the MAC Address/VTEP table for a given VNI on a NSX Controller, input the following command:
show logical-switch controller controller-2 vni 5002 mac
If no results return, try again with one of the other two NSX Controllers (controller-1 and controller-3). You may also notice that the MAC address shown here belongs to the IP address 172.16.20.11, as per the previous command.
To display a number of statistics pertaining to the VNI on a NSX Controller, run the following command:
show logical-switch controller controller-2 vni 5002 statistics
Try swapping out "controller-2" in the command above with either "controller-1" or "controller-3".
To show details pertaining to joined VNIs for a specific host on a NSX Controller, provide the command:
show logical-switch controller controller-2 host 192.168.110.51 joined-vnis
show logical-switch controller controller-2 host 192.168.110.52 joined-vnis
In this command, "controller-2" can be replaced with either "controller-1" or "controller-3", and the host IP referenced here is for ESXi host's VTEP IP address. In the screenshot above, the VTEP IP addresses for hosts "esx-01a.corp.local" (192.168.110.51) and "esx-02a.corp.local" (192.168.110.52) are provided.
That only covers some of the new commands available regarding Logical Switches. In the next part, you'll be exploring what's available for troubleshooting Logical Routers.
Next, you will learn about some of the commands available to troubleshoot the distributed logical routers you have deployed in your NSX for vSphere environment. Before we jump into the commands themselves, take a look at the next page to get a refresher on some of the unique identifiers that will be utilized for some of these commands, and what they refer to. Feel free to come back to that page during the rest of the steps in this section if you want.
As with the logical switch commands, there are a number of unique identifiers to be referenced when running some of the new logical-router commands in the NSX Manager CLI. These are provided here for ease of reference:
vSphere Clusters
Management & Edge Cluster : domain-c41
Compute Cluster A : domain-c33
ESXi Hosts
esx-01a.corp.local : host-28
esx-02a.corp.local : host-32
Virtual Machines
web-01a: vm-216
web-02a: vm-219
web-03a: vm-223
app-01a: vm-217
app-02a: vm-264
db-01a: vm-218
db-02a: vm-266
Now that you have gotten a refresher on some of the identifiers for the objects you will be looking at, move on to the next step to learn how to obtain a list of all deployed distributed logical routers.
To list all distributed logical routers, run the following command:
show logical-router list all
The following values will be returned:
Note - The router with the ID of 30d40 will only appear if you have gone through the steps in the previous modules to create a Universal Distributed Logical Router. If you have not, that is ok, and you may continue with the lab.
To display which ESXi hosts a specific Distributed Logical Router exists on, enter the following command:
show logical-router list dlr 1388 host
This returns the unique ID and the "friendly" host name of each ESXi host where the given DLR is present.
To display physical network connection information for a given ESXi host where a distributed logical router is deployed, run the following commands:
show logical-router host host-28 connection
show logical-router host host-32 connection
This will display information regarding the Distributed Virtual Switch that the logical routers on the ESXi host utilize, as well as the number of logical interfaces (LIFs).
Additionally, information regarding the uplinks used by the DVS, and statistics regarding the number of packets dropped, replaced, and skipped are shown.
A good quick way to obtain useful information regarding a DLR on a specific ESXi host can be to run the following command (shown here for two hosts):
show logical-router host host-28 dlr 1388 verbose
show logical-router host host-32 dlr 1388 verbose
Details such as the name of the DLR, the DLR ID, the number of LIFs and routes, the type of DLR (global or universal), and the NSX Controller owning the slice pertaining to this DLR are shown.
Another useful command which can display the ARP table for a given DLR on an ESXi host is the following (once again, shown for two different hosts):
show logical-router host host-28 dlr 1388 arp
show logical-router host host-32 dlr 1388 arp
One will notice that some of the IP addresses appear on both hosts; that is because these are the interfaces of the logical routers themselves (which exist on all hosts).
It is very useful to be able to obtain routing table information for a given logical router, and rather than SSH'ing to every host to collect it, one can run the following commands within the NSX Manager:
show logical-router host host-28 dlr 1388 route
show logical-router host host-32 dlr 1388 route
To find out what the routing table entries for a given DLR are on the controller that owns that DLR's slice, run the following command:
show logical-router controller controller-2 dlr 1388 route
The earlier verbose command for a Distributed Logical Router can be used to identify the controller that should be referenced in the command above. If the command comes back with no results, try replacing "controller-2" with "controller-1" or "controller-3".
To display the Logical Interface (LIF) configuration present on a controller for a given DLR, run the following command:
show logical-router controller controller-3 dlr 1388 interface
The IP address of the LIF will be shown, as well as the unique ID for the LIF (see the column "Id"). If the command does not return anything, try replacing "controller-3" with "controller-1" or "controller-2".
Rather than running the aforementioned verbose command, one can obtain a smaller subset of information for a given DLR on a specific controller by running the following command (shown below for DLR-ID 1388):
show logical-router controller controller-3 dlr 1388 brief
Information shown includes:
Next up, you will learn about one of the biggest time savers the Central CLI offers - the ability to obtain information about NSX Edge Gateways, all without having to enable SSH on an Edge (or console into one)!
Remember when one had to enable SSH or utilize a VMRC session to get access to an Edge Gateway and obtain details about the services running on it?
Those days are gone, welcome to the age of the Central CLI!
This section will go over some of the *many* options available to the NSX Administrator with regards to the Edge Gateways. Provided in the next step are a set of Reference IDs for the vSphere clusters, ESXi hosts, and Virtual Machines in the environment for easy reference. After that, you will jump right in to going over some of the Edge Gateway related commands available in the Central CLI located on the NSX Manager.
These are provided here for ease of reference:
vSphere Clusters
Management & Edge Cluster : domain-c41
Compute Cluster A : domain-c33
ESXi Hosts
esx-01a.corp.local : host-28
esx-02a.corp.local : host-32
Virtual Machines
web-01a: vm-216
web-02a: vm-219
web-03a: vm-223
app-01a: vm-217
app-02a: vm-264
db-01a: vm-218
db-02a: vm-266
As mentioned before, feel free to come back to the Reference-IDs section if you happen to forget how to obtain any of the unique identifiers for objects like clusters, hosts, or VMs. Next up, you'll learn how to obtain a list of all deployed NSX Edge Gateways in an environment.
To list all deployed NSX Edge Gateways for a given NSX Manager (this will *not* show Edge Gateways across multiple sites), run the following command:
show edge all
The values returned refer to the following:
Now that you're able to identify the list of NSX Edge Gateways, as well as their unique identifier (the "Edge ID" value), continue to the next step to learn how to show more details about a specific Edge Gateway.
To get a quick view of what services are enabled on an Edge Gateway, as well as some other miscellaneous information, input the following command (edge-4, the OneArm-LoadBalancer-01 Edge is used here):
show edge edge-4
Returned is the following data...
Given that one can tell the Load Balancer service is running from the above command, let's take a look at some more details about how the load balancer is configured.
To show the Load Balancer configuration of an Edge Gateway (if the service is configured), run the following command:
show edge edge-4 configuration loadbalancer
What will be returned is a JSON message detailing the Load Balancer configuration. What if you wanted to see whether there have been any errors on the load balancing service? In the next step, you will find out how to obtain that information.
To retrieve the list of errors for the Load Balancer service on the OneArm-LoadBalancer-01 Edge Gateway, run the following command:
show edge edge-4 service loadbalancer error
The total number of L7 Request/Response errors seen by the load balancer service will be displayed. In a troubleshooting scenario, it may also be worth looking at details of the flows running on an Edge Gateway. Continue to the next step to learn how.
To display the current flowtable of a given NSX Edge Gateway (edge-4 is used in this example), enter the following command:
show edge edge-4 flowtable
Returned will be the total number of flows currently active on the Edge Gateway, showing details such as the protocol in use, the source/destination IP addresses, the source/destination ports, and the number of packets and bytes seen for the particular flow.
Displaying dynamic routing information quickly for any Edge Gateway is a huge benefit of the new Central CLI, and in this example you'll be taking a look at the OSPF details for the Edge Gateway "Perimeter-Gateway-01". To do so, enter the command:
show edge edge-3 ip ospf
In addition to the ability to display dynamic routing details, one can also show the routing table of a given edge. Continue to the next step to learn how to do so.
To retrieve the routing table of a particular Edge Gateway, type the following command in (once again, the Edge Gateway "Perimeter-Gateway-01" is used here):
show edge edge-3 ip route
This command will show all static and dynamically obtained routes for the Edge Gateway.
In case you also want to obtain more details regarding the firewall on the Edge Gateway, such as the top flows matching any particular firewall rule, the following command can be utilized:
show edge edge-4 firewall flows topN 10
In the example within the screenshot, you'll notice flows matching Firewall rules "131074," as well as "131073."
Next, you'll learn about some of the Central CLI commands available for obtaining information about the Distributed Firewall.
In this last section of the module for exploring the Central CLI, you'll be looking at some of the commands available to deal with the Distributed Firewall (DFW). As in previous sections, the next step will include a list of Reference IDs for objects such as vSphere clusters, ESXi Hosts, and VMs for quick reference.
These are provided here for ease of reference:
vSphere Clusters
Management & Edge Cluster : domain-c41
Compute Cluster A : domain-c33
ESXi Hosts
esx-01a.corp.local : host-28
esx-02a.corp.local : host-32
Virtual Machines
web-01a: vm-216
web-02a: vm-219
web-03a: vm-223
app-01a: vm-217
app-02a: vm-264
db-01a: vm-218
db-02a: vm-266
Feel free to come back to these Reference IDs if you need to quickly look up any of the unique identifiers used in the upcoming steps. Next, you will learn how to show the current status of the Distributed Firewall on all vSphere clusters managed by a particular NSX Manager.
To show the current status of the Distributed Firewall on all clusters, run the following command:
show dfw cluster all
This will provide the following details:
In case there is a particular host in a cluster where you are noticing issues pertaining to the DFW, the next command may come in handy.
To show the installation status of the Distributed Firewall service on all hosts for a given vSphere cluster, issue the following command (Compute Cluster A is used below):
show dfw cluster domain-c33
This will provide the following details:
To drill down even further to the Virtual Machine level, continue on to the next step.
To enumerate the specific DFW filters on a specific Virtual Machine (VM), as well as the vNICs on that VM, run the following command (app-01a is used here as an example):
show dfw vm vm-217
Worth mentioning are the following returned details:
You can take the values from the list of filters and vNICs to obtain more details pertaining to the Distributed Firewall and how it's applied to a VM. Continue to the next step to learn how.
To display the firewall rules attached to a specific filter on a Virtual Machine's vNIC, run the following command (the vNIC for app-01a is used in this example):
show dfw host host-32 filter nic-37844-eth0-vmware-sfw.2 rules
What will be output are the actual rules for the specific filter passed with the command. It's worth noting the rule numbers "rule 1001-rule 1008" as they'll be referenced in the next command, which will help show some statistics for each of the rules.
To show usage statistics for a given DFW filter, perform the following command (once again, the filter applied to the vNIC for app-01a is used here):
show dfw host host-32 filter nic-37844-eth0-vmware-sfw.2 stats
Displayed will be the number of times a particular DFW rule has been utilized for the given vNIC's filter passed to the command. To obtain further details regarding each flow that matches a given filter, continue to the last step of this section.
To display flow details for a given DFW filter, use the following command:
show dfw host host-32 filter nic-37844-eth0-vmware-sfw.2 flows
In the example above, no flows are currently present for this filter, but as you can see, counts are shown for active (L3/L4), active (L2), and dropped (L2, L3, L4) flows.
That concludes the section of this Hands-on Lab covering the new commands available for troubleshooting purposes on the NSX Manager CLI. Next up, learn about the RESTful API that's available with NSX for vSphere, and some of the actions one can perform with it!
Module 7 - NSX Automation
NSX is designed from the ground up with a RESTful API. The NSX REST API can be used both to configure NSX and to provision NSX logical network services. The NSX REST API can be called directly or indirectly from various programming languages. Many orchestration and automation tools, such as vRealize Automation via vRealize Orchestrator, can call the NSX REST API to perform Layer 2 through Layer 7 network orchestration and automation.
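To make that concrete, here is a minimal sketch (not part of the lab exercises) of calling the NSX REST API directly from a scripting language. Python's requests library is used; the controller query URI is the one used in Exercise 1 later in this module, and the admin credentials and the certificate handling (verify=False) are assumptions based on this lab environment.

import requests

# Query the NSX Manager REST API directly from Python (lab credentials assumed).
resp = requests.get("https://nsxmgr-01a.corp.local/api/2.0/vdn/controller",
                    auth=("admin", "VMware1!"),
                    verify=False)  # the lab NSX Manager presents a self-signed certificate
print(resp.status_code)  # 200 indicates the request succeeded
print(resp.text)         # XML describing the deployed NSX Controllers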
To demonstrate NSX RESTful API calls, you will be using the RESTClient extension to the Mozilla Firefox browser. RESTClient is a debugger for RESTful Web Services and is a useful tool when working with various REST API's.
A REpresentational State Transfer (REST) API defines a set of simple principles which are loosely followed by most API implementations.
REST leverages the strength of HTTP to send data (Headers and Bodies) between Clients and Servers.
The terms Uniform Resource Locator (URL) and Uniform Resource Identifier (URI) are often used interchangeably when working with REST. Although the Mozilla Firefox RESTClient uses a field called URL, it is in fact a URI field.
Resources (building blocks) are linked together by embedded hyperlinks in HTML documents or URI references, and resources can expose the state via representations containing both metadata (such as size, media type, or character set) and content (binary image or text document).
REST Clients specify the desired interaction (HTTP request message as defined by RFC 2616). Each HTTP method has specific, well-defined semantics within the context of a REST API’s resource model:
The REST APIs use an HTTP response message to inform clients of their request's result (as defined in RFC 2616). Five categories are defined:
When using RESTful APIs to provision NSX services, the following request headers are important:
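As an illustrative sketch only (the exact header values you will use are shown in the RESTClient screenshots), the two headers most commonly needed for the NSX API, Authorization and Content-Type, could be built by hand in Python as follows; the credentials are the lab defaults, and application/xml is assumed because the NSX API exchanges XML bodies:

import base64

# Basic authentication header: base64 encoding of "username:password"
token = base64.b64encode(b"admin:VMware1!").decode("ascii")

request_headers = {
    "Authorization": "Basic " + token,
    "Content-Type": "application/xml",  # NSX API request bodies are XML
}
print(request_headers)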
Mozilla Firefox provides an extension called RESTClient. This extension has been installed into Mozilla Firefox within the lab environment, so no download or configuration is required. The RESTClient is a useful tool for testing any RESTful API.
HTTP Request Headers are required to ensure correct use of the NSX API
If you accidentally enter the Header details incorrectly, Headers can be deleted by clicking on the "x" next to the Header entry. In this example, Content-Type: application/json was incorrectly entered.
As an example only, bad headers can be deleted by clicking on the "x".
The following exercises will require you to enter commands and/or configuration. The text boxes within the lab manual, such as the example below, combined with the SEND TEXT function, allow you to copy and paste commands and configuration into the lab.
As an example only, the steps to use the SEND TEXT function are as follows:
ls -l | grep test
In NSX REST API Exercise 1, we will query the current controller configuration to learn the controllers' running IDs. This will allow you to then perform additional NSX REST API requests against a configured ID, and demonstrates that no prior knowledge of the controller configuration is needed. Programmatically, performing various queries against a running NSX system allows information to be gathered so that changes can then be made.
https://nsxmgr-01a.corp.local/api/2.0/vdn/controller
Of importance is the id of each controller identified by the XML <id>controller name</id>. In this environment, the three controllers are known as controller-1, controller-2, controller-3, but this could be different. Programming languages can parse the XML response and pass specific fields to subsequent calls enabling complex actions to be performed via API.
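As a hedged illustration of that idea, the sketch below (Python, using requests plus the xml.etree standard library; the element name <id> is taken from the response described above) retrieves the controller list and collects the controller ids for use in later calls:

import xml.etree.ElementTree as ET
import requests

resp = requests.get("https://nsxmgr-01a.corp.local/api/2.0/vdn/controller",
                    auth=("admin", "VMware1!"), verify=False)

# Collect every <id> value in the response; in this lab these are the controller ids.
root = ET.fromstring(resp.content)
controller_ids = [elem.text for elem in root.iter("id")]
print(controller_ids)  # e.g. ['controller-1', 'controller-2', 'controller-3']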
In NSX REST API Exercise 2, we will assume there is existing NSX Controller Syslog configuration on controller-1 and proceed to delete it by performing the following actions via the NSX REST API.
https://nsxmgr-01a.corp.local/api/2.0/vdn/controller/controller-1/syslog
The NSX API DELETE will not generate any additional data so the Response Body (Highlight) tab will be empty.
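For reference, an equivalent DELETE call could be made from Python (a sketch under the same lab assumptions as before); a successful delete returns an HTTP 2xx status and, as noted above, an empty body:

import requests

url = "https://nsxmgr-01a.corp.local/api/2.0/vdn/controller/controller-1/syslog"
resp = requests.delete(url, auth=("admin", "VMware1!"), verify=False)
print(resp.status_code)  # expect a 2xx status; the response body is empty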
In NSX REST API Exercise 3, we will now query the current Syslog configuration of controller-1. Given that we have just deleted the Syslog configuration, we would expect to receive a response indicating there is no configuration.
You have confirmed there is no Syslog configuration, which is the expected response given that the Syslog configuration was deleted in the previous exercise.
In the previous exercise, we validated that no Syslog configuration exists for controller-1. In NSX REST API Exercise 4, we will add Syslog configuration to controller-1. Note that no change to the URL or Headers is required, as we are working with the same REST object.
<controllerSyslogServer>
<syslogServer>192.168.110.24</syslogServer>
<port>514</port>
<protocol>UDP</protocol>
<level>INFO</level>
</controllerSyslogServer>
The NSX API POST request will not generate any additional data so the Response Body (Highlight) tab will be empty.
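The same POST could be issued from Python, as in this sketch (lab credentials and self-signed certificate handling assumed); the XML body is exactly the controllerSyslogServer document shown above, and the Content-Type header tells the API it is XML:

import requests

syslog_xml = """<controllerSyslogServer>
  <syslogServer>192.168.110.24</syslogServer>
  <port>514</port>
  <protocol>UDP</protocol>
  <level>INFO</level>
</controllerSyslogServer>"""

url = "https://nsxmgr-01a.corp.local/api/2.0/vdn/controller/controller-1/syslog"
resp = requests.post(url,
                     data=syslog_xml,
                     headers={"Content-Type": "application/xml"},
                     auth=("admin", "VMware1!"),
                     verify=False)
print(resp.status_code)  # a 2xx status indicates the configuration was accepted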
In NSX REST API Exercise 5, we will now query the current Syslog configuration of controller-1. The previous exercise configured Syslog, so this query should return the current configuration. Note that no change to the URL or Headers is required, as we continue to work with the same REST object.
You have now successfully used the NSX API to change the NSX configuration:
NSX network constructs including Switches, Routers, and Firewall rules can be created with the NSX REST API. Automation software such as vRealize Automation combined with vRealize Orchestrator will leverage the NSX REST API to create network objects in Multi Machine application blueprints. NSX REST API Exercise 6 will demonstrate the creation of a new Logical Switch.
Logical Switches are subordinate to an NSX Transport Zone, so the first step before a switch can be created is to identify the Transport Zone, or "vdnscope". Once we know the vdnscope id, we can use it in the request to create a Logical Switch.
https://nsxmgr-01a.corp.local/api/2.0/vdn/scopes
You have identified that the id of the Global Transport Zone is vdnscope-1. With this information, you can now create Logical Switches using this id. This demonstrates how the configuration of the transport zones can be discovered so that new logical switches can be programmatically added to the correct transport zone as part of an orchestration workflow.
https://nsxmgr-01a.corp.local/api/2.0/vdn/scopes/vdnscope-1/virtualwires
<virtualWireCreateSpec>
<name>Test-Logical-Switch-01</name>
<description>Created by REST API</description>
<tenantId>virtual wire tenant</tenantId>
<controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
The NSX API POST request will also return the id of the new logical switch in the response body.
You have identified the id of the new Logical Switch. With this id, you can programmatically connect the new Logical Switch to a Distributed Router and/or VMs. Note: the virtualwire id above may be different in your lab.
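Pulling the last two exercises together, the sketch below shows how an orchestration script might chain the calls: query the transport zones, pull out the vdnscope id, and then POST the virtualWireCreateSpec shown above. The element name objectId used when parsing the scopes response is an assumption about that payload; adjust it to match what the GET returns in your lab.

import xml.etree.ElementTree as ET
import requests

base = "https://nsxmgr-01a.corp.local/api/2.0/vdn"
auth = ("admin", "VMware1!")

# Step 1: discover the transport zone (vdnscope) id.
scopes = requests.get(base + "/scopes", auth=auth, verify=False)
# Element name "objectId" is assumed; expected value is e.g. "vdnscope-1".
scope_id = ET.fromstring(scopes.content).find(".//objectId").text

# Step 2: create the logical switch inside that transport zone.
spec = """<virtualWireCreateSpec>
  <name>Test-Logical-Switch-01</name>
  <description>Created by REST API</description>
  <tenantId>virtual wire tenant</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>"""

resp = requests.post(base + "/scopes/" + scope_id + "/virtualwires",
                     data=spec,
                     headers={"Content-Type": "application/xml"},
                     auth=auth,
                     verify=False)
print(resp.text)  # the id of the new logical switch (a virtualwire-XX value)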
Logical Switches created by API are no different from Logical Switches created using the vCenter Web UI. We will now review the vCenter Web UI and validate that we can see the new Logical Switch.
The NSX REST API is a full-function API that allows all capabilities of NSX to be consumed programmatically. The NSX REST API is commonly used with orchestration engines such as vRealize Automation via vRealize Orchestrator to provision network services as part of a multi-tier application blueprint. With the NSX Manager DNS/IP address and credentials, the complete network topology of an NSX-vSphere environment can be learned via the NSX REST API.
As part of the NSX Documentation library, the REST API is fully documented allowing any aspect of NSX to be orchestrated and automated.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-SDC-1625
Version: 20150914-105335