VMware Hands-on Labs - HOL-SDC-1319


Lab Overview HOL-SDC-1319

Introduction


Please Read:

Many of the modules will have you enter Command Line Interface (CLI) commands. A text file has been placed on the desktop of the environment so that you can easily copy and paste complex commands or passwords into the associated utility (CMD, Putty, console, etc.). This is also helpful for keyboard layouts that do not provide certain characters used in the commands.

The text file is named kb-input.txt and is divided into numbered module sections. Every CLI command in this manual has an associated number; the same number marks the command in the file so you can copy and paste it.

Note: This lab may take more than 90 minutes to complete. We recommend completing Modules 1, 2, and 3 in your first sitting; the remaining modules can be completed in a second sitting. Modules 5, 6, and 7 depend on Modules 2 and 3 being completed, so we have created a script that completes Modules 2 and 3 for you, allowing you to advance when you log in for the second time. Details on running the script are provided in Module 5.

Thank you and enjoy the labs!

Virtualization reaches its full potential when all data center resources -- including networking and network services -- are virtualized. Today, each virtual machine's IP addressing, L2/L3 connectivity, and associated network services (e.g., NAT, security, and QoS policies) are defined in network hardware, tying that virtual machine to its physical location in the data center.

Network virtualization overcomes this limitation by allowing virtual machines to connect to logical networks rather than attaching directly to physical networking hardware.

In the lab we will demonstrate how virtual machines on multiple Hypervisors can be connected to logical networks using the NSX platform. Once the virtual machines are connected to logical networks they become completely mobile and decoupled from the underlying network infrastructure. The NSX platform also pins various network services directly to the virtual machine ports, thus allowing those services to move along with the virtual machines.

In this lab you will also get a preview of the new NSX vSwitch for ESXi.

Lab Module List:

Module 1: Gives an overview of the lab and explains its various components (30 Minutes)

Module 2: Gives detailed instructions on creating logical switches and attaching VMs to them. (30 Minutes)

Module 3: Covers the NSX L3 Gateway service, which is used to access the workloads attached to logical networks. (30 Minutes)

Module 4: Covers the NSX L2 Gateway service, which is used to bridge workloads on logical networks to workloads on traditional VLAN-backed networks. (30 Minutes)

Module 5: Focuses on security for virtual machines attached to logical networks. (30 Minutes)

Module 6: Focuses on the NSX API (30 Minutes)

Module 7: Troubleshooting NSX (45 Minutes)

 

Lab Captains: Ray Budavari, Ben Lin, and Amit Kumar Agrawal


 

NSX

 

NSX is a network virtualization platform that enables programmatic control of networking capabilities in cloud data centers. Just as server virtualization provides flexible control of virtual machines running on a pool of server hardware, network virtualization with NSX provides a centralized API to provision and configure many isolated logical networks that run on a single physical network.

Logical networks decouple virtual machine connectivity and network services from the physical network, giving cloud providers and enterprises the flexibility to place or migrate virtual machines on any hypervisor anywhere in the data center (or spanning multiple data centers) while still supporting layer-2 / layer-3 connectivity and layer 4-7 network services.

 

Lab Scenario


ABC Medical is a multi-national medical technology company headquartered in San Francisco, CA. They plan to expand their existing datacenter by acquiring more space from their hosting provider. This new infrastructure will host new SaaS applications and existing applications that will be migrated at a later date.

A new web-based SaaS application will be the first to be deployed in the new environment. This is a three-tier application with web servers, application servers, and database servers. The requirements for this application are:

We will use NSX to improve the speed and agility of deploying networking and security.


 

Lab Goals

Module 1 - Review the configured NSX environment and associated components.

Module 2 - Create Logical Switches for web, application, and database workloads. Configure Logical Ports to connect virtual machines to Logical Switches. Verify connectivity between virtual machines across heterogeneous hypervisors in different L2 segments.

Module 3 - Create a distributed L3 Gateway Service to route between Logical Switches. Validate routing between Logical Switches.

Module 4 - Create an L2 Gateway Service connected to a physical network to enable a P2V migration of an existing physical server.

Module 5 - Secure logical networks with Security ACLs, Security Profiles, and Port Isolation.

Module 6 - Use the NSX API inspector to browse the API and provision virtual networks

Module 7 - Troubleshoot NSX environment using available tools

 

 

Lab Components

 

NSX Controller Cluster nodes:

The NSX Controller Cluster is a distributed system that accepts logical network configuration instructions, calculates the required flow entries, and inserts flow entries into virtual switches running on the transport nodes (hypervisor switches and NSX appliances).

NSX Service Node:

NSX Service Nodes assist with the packet replication required for broadcast/multicast and unknown unicast flooding in overlay logical networks. The Controller Cluster manages all Service Node appliances as a resource pool, automatically spreading traffic across the available Service Nodes and masking individual node failures.

NSX Gateway nodes and Gateway Services:

An NSX Gateway Service consists of one or more NSX Gateway nodes that attach a logical network to a physical network not managed by NSX. Each Gateway Service can operate as an L2 Gateway Service sending traffic to a physical L2 segment, or as an L3 Gateway Service mapped to a physical router port.

Open vSwitch for KVM:

Open vSwitch is an open source virtual switch that enables network automation through programmatic extension, while still supporting standard management interfaces and protocols.

NSX vSwitch for ESXi:

A new virtual switch providing kernel level integration for VMware vSphere and managed by the NSX Controller Cluster.

 

 

Lab Architecture

 

In this lab, a routed transport network is used to demonstrate how virtual networks can span across L3 boundaries.

Please take some time to understand the network topology presented.

In the physical network topology (shown in the Grey box) there are multiple routed networks. The vPod router handles routing between the different networks.

The virtual network topology has been created by using NSX Network Virtualization Platform.

Virtual Machines

 

Module 1 - NSX Components

NSX Components


VMware NSX is a platform for network virtualization that exposes a complete suite of logical networking elements and services (logical switches, routers, firewalls, etc.) with isolation and multi-tenancy through programmable APIs.

The VMware NSX platform consists of five basic components: Controller Cluster, Hypervisor vSwitches, Service Nodes, Gateways, and NSX Manager.


 

NSX Controller

The NSX Controller Cluster is the focal point, a cluster of x86 systems that manages transport nodes, holds a global view of the network, and exposes a web services API.

The Controller Cluster maintains the entire state of the network and enforces consistency between the logical network view (defined by the NSX API) and the transport network view (implemented by remote virtual switches).

Features:

NOTE: The installation and configuration of NSX components has already been completed. Your task is to verify each component status.

 

 

Login to Controller

 

Launch the Putty client, select nsx-ctl-01a from the saved sessions, and click Open.

Login credentials: admin/VMware1!

Note: The login credentials are also listed in the file called kb-input.txt on the desktop.

 

 

View Controller interfaces

 

View the network interfaces of the NSX Controller. There is a single interface connected to the Management network. The controller does not have a connection to the Transport network; all communication with the hypervisors goes over the Management network. {1}

# show network interfaces

 

 

View controller roles

 

Each Controller Node is assigned a set of roles that define the type of tasks the node can implement. By default, each Controller Node is assigned all roles. Controller Nodes can perform the following roles:

Type the following to view the control cluster roles. {2}

# show control-cluster management-address

# show control-cluster roles

 

 

View cluster status

 

View the nodes that are part of the controller cluster {3}

# show control-cluster startup-nodes

In this lab, there is a single controller node. For production, the control cluster requires at least three controller nodes to provide high availability. Persistent data is replicated across multiple Controller nodes to prevent data loss.
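The three-node minimum follows from standard majority-quorum arithmetic. As a rough sketch (generic quorum reasoning assumed to apply here, not a figure taken from this lab manual), a cluster of n nodes can lose floor((n-1)/2) nodes while a majority survives:

```python
# Majority-quorum sizing sketch: how many controller-node failures a
# cluster of a given size tolerates while a strict majority remains.
# (Generic quorum arithmetic, assumed to apply to the NSX Controller
# Cluster; verify against your platform's documentation.)

def tolerated_failures(cluster_size: int) -> int:
    """Number of nodes that can fail while more than half remain alive."""
    return (cluster_size - 1) // 2

for n in (1, 3, 5):
    print(f"{n} controllers -> survives {tolerated_failures(n)} failure(s)")
```

This is why a single-node cluster (as in this lab) has no fault tolerance, while three nodes tolerate one failure and five tolerate two.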

Next, view the controller cluster status: {3}

# show control-cluster status

The cluster is up with all roles enabled and activated.

Close the Putty session before proceeding further.

 

 

NSX Service Node

The Service Node offloads the task of packet replication from the hypervisors participating in the transport zone. This includes:

 

 

 

Login to Service Node

 

Launch the Putty client and SSH to the Service Node (nsx-sn-01a).

Use credentials: admin/VMware1!

 

 

View Service Node interfaces

 

For network interface information, type: {4}

# show network interfaces

breth0 is connected to the management network (192.168.110.0/24).

breth1 is connected to the transport network (192.168.150.0/24).

The service node has a tunnel to each hypervisor in the transport zone for offloading packet replication.

 

 

View cluster connection

 

Validate the connection to the controller cluster by typing: {5}

# show switch managers

Connections established between the controller cluster and the Service Node Open vSwitch are displayed.

Close the Putty session before proceeding further.

 

 

NSX Gateway

An NSX Gateway is a physical x86 appliance that connects logical networks to the data center’s physical network or to physical applications. Logical network traffic is tunneled to the NSX Gateway, which decapsulates the traffic and sends it to a directly attached physical network.

There are several deployment options. A Gateway can provide L3 access to workloads connected to logical networks via a physical upstream router that is connected to the Internet, or the Gateway can reside in a remote customer premises, enabling a cloud customer to seamlessly link (L2 bridging) their physical and cloud networks.

 

 

Login to Gateway

 

Launch two instances of the Putty client and SSH to the Gateway nodes (nsx-gw-01a) and (nsx-gw-02a)

Use credentials: admin/VMware1!

 

 

View Gateway interfaces

 

View the network interfaces of the Gateway nodes. {6}

# show network interfaces

breth0 is connected to the management network (192.168.110.0/24).

breth1 is connected to the transport network (192.168.150.0/24).

 

 

View cluster connection

 

To view the connection between the gateway and the controller cluster: {7}

# show switch managers

 

Close the Putty session before proceeding further.

 

ESXi and NSX vSwitch


NSX introduces kernel-level virtual switch integration for VMware vSphere with the NSX vSwitch. The installation and configuration of the NSX vSwitch has already been performed in the lab.


 

Login to NSX Manager

 

Launch the Chrome web browser (this provides the best experience for NSX Manager and the vSphere Web Client). The default home page is the NSX Manager login screen.

Enter credentials: admin/VMware1!

Note: The login credentials are also listed in the file called kb-input.txt on the desktop.

 

 

View ESXi configuration

 

From the NSX Manager dashboard, under Hypervisor Software Version Summary, click on the number next to ESXi

 

 

List ESXi hosts

 

The status of both ESXi servers is listed. Click on one of the ESXi hosts to get more details.

 

 

View ESXi details

 

View the properties and inspect the bridges configured on the ESXi host. Notice that the system type is listed as "ESXi", indicating direct integration with ESXi through the NSX vSwitch.

 

 

NSX vSwitch

 

The NSX vSwitch is a next-generation virtual switch for VMware vSphere designed to be remotely programmed by the NSX Controller Cluster. Installed on the ESXi kernel, the NSX vSwitch is the best option for performance, integration and supportability in an NSX environment.

The Controller Cluster populates the NSX vSwitch with two types of state information:

Logical Network State

Logical Transport State

In the diagram above, the virtual machines connected to the NVS integration bridge belong to two Logical Switches. The integration bridge is a special bridge (switch) created on each hypervisor. All virtual machines on each hypervisor connect to the integration bridge, which is managed by the NSX Controller Cluster.

 

 

Login to vSphere Client

 

Launch a new tab from the Chrome web browser and click on the vSphere Web Client bookmark.

Enter credentials: root/VMware1!

Note: The login credentials are also listed in the file called kb-input.txt on the desktop.

 

 

List vCenter objects

 

Click on vCenter in the left navigation bar.

 

 

List Hosts

 

Click on Hosts in the left navigation bar.

 

 

Select Host

 

Click on esx-01a.corp.local

 

 

View VMkernel adapters

 

1. Click on Manage

2. Click on Networking

3. Click on VMkernel adapters

vmk3 is used by the NSX vSwitch and connected to the Transport network (192.168.150.0/24).

 

 

View Physical adapters

 

Click on Physical adapters

vmnic1 is the uplink for the nsx-vswitch.

 

 

View TCP/IP stacks

 

Click on TCP/IP configuration

NSX vSwitch is using a separate TCP/IP Stack - nsxTcpipStack

 

 

View ESXi virtual machines

View the virtual machines which will be attached to logical switches.

 

 

List vCenter objects

 

Click on vCenter in the left navigation bar

 

 

List Virtual Machines

 

Next, click on Virtual Machines in the left navigation bar

 

 

View VM power state

 

Verify that the following virtual machines are powered on:

The lgcy-sv-01a virtual machine will be powered on in a later module.

 

KVM and Open vSwitch


Kernel-based Virtual Machine (KVM) provides hardware virtualization for the Linux kernel.

Open vSwitch is an open source software switch designed for use as a vSwitch in virtualized server environments. Open vSwitch is available for any Linux-based virtualization platform and has been part of the mainline Linux kernel since version 3.3.


 

View Open vSwitch configuration

 

NOTE: For this lab, the Open vSwitch on the KVM hypervisor has already been configured to be managed by the NSX Controller. Please verify the configuration.

Launch the Putty client and open a connection to the KVM host (kvm-01b).

Enter credentials: nsx-admin/VMware1!

Note: The login credentials are also listed in the file called kb-input.txt on the desktop.  

 

 

View network configuration

 

Validate transport network interface IP address and routes to various networks. {8}

# ifconfig eth2

# route -n

10.20.20.0/24 is the KVM storage network

192.168.150.0/24 is the transport network for ESXi hosts

192.168.210.0/24 is the KVM management network

192.168.250.0/24 is the KVM Transport network
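To reason about the route table output, you can check which of the listed subnets contains a given destination address, such as an ESXi host's transport address. The sketch below is a generic illustration using the subnets shown above; it does not query the host's actual routing table:

```python
import ipaddress

# Subnets reported by `route -n` on the KVM host (from the lab output above).
networks = {
    "KVM storage":    ipaddress.ip_network("10.20.20.0/24"),
    "ESXi transport": ipaddress.ip_network("192.168.150.0/24"),
    "KVM management": ipaddress.ip_network("192.168.210.0/24"),
    "KVM transport":  ipaddress.ip_network("192.168.250.0/24"),
}

def matching_network(addr):
    """Return the name of the subnet containing addr, or None."""
    ip = ipaddress.ip_address(addr)
    for name, net in networks.items():
        if ip in net:
            return name
    return None

# esx-01a's transport interface falls in the ESXi transport network:
print(matching_network("192.168.150.51"))  # prints: ESXi transport
```

A destination outside all four prefixes (e.g. an Internet address) would match none of them and be forwarded via the default route instead.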

 

 

 

 

Verify connectivity to ESXi hosts

 

Validate connectivity to ESXi servers. {9}

# ping -c 3 192.168.150.51

# ping -c 3 192.168.150.52

 

 

Verify Open vSwitch configuration

 

Change to root user (password: VMware1!)

{10}

# sudo bash --login

Review the contents of the openvswitch folder and existing configuration

# ls -al /etc/openvswitch

# ovs-vsctl show

You should see a connection to the controller cluster and an integration bridge (br-int).

 

 

View Tenant virtual machines

Libvirt is an open source API and management tool for platform virtualization. It is used in this lab to manage the KVM virtual machines.

 

 

List virtual machines

 

Validate KVM domains (virtual machines) defined on the KVM host {10}

# virsh list --all

Close the Putty session before proceeding further.

 

NSX Manager


NSX Manager is a web-based graphical interface built using the NSX API.

Primary uses:

Not intended for:

For this lab, NSX Manager is used to provide a greater understanding of how logical networking components are configured and operated. Actual deployments leverage a cloud management platform (e.g., vCloud Automation Center or OpenStack) to automate the provisioning of virtual networks via the NSX API.
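As an illustration of what API-driven provisioning looks like, the sketch below builds the JSON request body for creating a logical switch bound to a transport zone, in the style of the NSX/NVP REST API. The endpoint path, field names, and helper function are assumptions for illustration, not taken from this lab; consult the NSX API reference for the authoritative schema.

```python
import json

# Hypothetical sketch of provisioning a logical switch through the
# NSX (NVP-style) REST API. Payload fields and the resource path are
# illustrative assumptions.

API_ENDPOINT = "/ws.v1/lswitch"  # assumed NVP-style resource path

def make_lswitch_payload(name, zone_uuid, transport_type="stt"):
    """Build the request body for a logical switch bound to a transport zone."""
    return {
        "display_name": name,
        "transport_zones": [
            {"zone_uuid": zone_uuid, "transport_type": transport_type}
        ],
    }

payload = make_lswitch_payload("Web-Tier",
                               "11111111-2222-3333-4444-555555555555")
print(json.dumps(payload, indent=2))
# An HTTP client would POST this body to https://<nsx-controller>/ws.v1/lswitch
```

A cloud management platform issues requests like this programmatically, which is what makes network provisioning automatable at scale.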


 

Login to NSX Manager

 

Launch the Putty client and SSH to the NSX Manager (nsx-mgr-01a)

Enter credentials: admin/VMware1!

Note: The login credentials are also listed in the file called kb-input.txt on the desktop.

 

 

View network interfaces

 

Verify the configured interfaces. {11}

# show network interfaces

There is a single configured bridge, breth0, connected to the management network.

Close the Putty session before proceeding further.

 

 

Login to NSX Manager

 

Launch a web browser. The default home page is the NSX Manager login screen.

Enter credentials: admin/admin

 

 

View Dashboard

 

The NSX Manager “Dashboard” tab provides a summarized view of:

Take a moment to inspect all the presented information, then select Controller Cluster from the top menu bar.

 

 

View Controller Cluster

 

This page provides details on the controller cluster configuration. In standard environments, three to five controllers are deployed for availability and fault tolerance.

NOTE: If NSX Manager does not show any information (a blank page), resize the browser window to a larger size; this will fix the issue.

 

 

View Network Components

 

Click on Network Components in the top menu. This provides a view of all the configured transport elements. Ensure that all components show an Admin Status of Enabled and a Connected status of Yes.

Clicking on any component provides a detailed view including component properties, status, statistics, transport connectors, physical interfaces, logical switch ports, etc.

 

 

View Transport Zone

 

Physical network connectivity between transport nodes is modeled in the API as a transport zone. A transport zone corresponds to a physical network used to send data traffic between OVS devices.

From the Network Components view, Click on Global-TZ under Transport Zone to view details.

 

 

End of Module 1

This concludes the walk through and review of the following components:

 

Module 2 - Logical Switching

Lab Topology


In Module 2, create logical switches for Web-Tier, App-Tier, and DB-Tier. Attach virtual machines to ports on each logical switch and verify connectivity. The logical switches span compute domains across a routed transport network.


 

Logical View

 

The target state logical view is shown above.

 

Lab Fast Forward


Important Note on the Fast-Forward-NSX Script:

Modules 5, 6 and 7 of this lab depend on modules 2 and 3 to be completed.

If you complete Modules 2 and 3 now but return in a later session to finish the remaining modules, you will need to run a Python script called "fast-forward-nsx". The script completes Modules 2 and 3 for you so that you can proceed with the remaining modules. Note that Module 4 does not depend on Modules 2 and 3.

Details on how to run this script are described in Module 5.

 


Create Logical Switches


A Logical Switch is an abstraction that implements L2 Ethernet services similar to those of a physical Ethernet switch.

In this module, we'll create logical switches for web, application, and database tiers.

NOTE: For the lab we are manually creating logical entities to provide the context around how NSX wires up each component. Typically a cloud management platform (CMP) is used to automate the provisioning of networks via the NSX API. For more information on CMP integration, visit the VMware booth.


 

Logical Switch Topology

 

 

 

Create Web Logical Switch

 

From the Dashboard, click the Add button next to Switches.

 

 

Logical Switch name

 

In the dialog box:

Set the Display Name for the logical switch to 'Web-Tier'

Click Next

 

 

Switch Properties

 

Leave Port Isolation Enabled unchecked

Use default setting for Replication Mode (Service Nodes)

Click Next

 

 

Add Binding

 

Next, add a binding that specifies the encapsulation and transport zone associated with the logical switch

Click on Add Binding

 

 

 

Create Transport Zone Binding

 

A transport zone represents a physical transport network in your datacenter. Defining multiple transport zones can be useful if your data center uses multiple physical transport networks that have different performance or security characteristics. The transport zone allows the Controller Cluster to understand what transport connectors can communicate directly when implementing a logical switch.

The transport zone binding connects a logical switch to the transport network that will carry its traffic. The binding specifies the transport zone and the transport type.

Select the Transport Type and the Transport Zone

Click OK to create the binding

STT (Stateless Transport Tunneling) is a tunnel encapsulation protocol that enables overlay networks, similar to VXLAN. STT utilizes a TCP-like header inside the IP header to leverage TSO (TCP Segmentation Offload) on physical NICs for increased performance.
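Encapsulation overhead is why overlay transport networks typically need a raised MTU. The arithmetic below is a back-of-the-envelope sketch; the individual header sizes are assumptions based on the STT draft specification, not figures from this lab, so verify them against your NSX version before sizing MTUs.

```python
# Back-of-the-envelope STT encapsulation overhead (all sizes in bytes).
# Header sizes are assumptions drawn from the STT draft specification.
OUTER_ETHERNET = 14   # outer Ethernet header (no VLAN tag)
OUTER_IP       = 20   # outer IPv4 header, no options
TCP_LIKE       = 20   # STT's TCP-like header (standard TCP header format)
STT_FRAME      = 18   # STT frame header

# Bytes added on the wire around each encapsulated VM frame:
overhead = OUTER_ETHERNET + OUTER_IP + TCP_LIKE + STT_FRAME
print(f"Per-packet STT encapsulation overhead: {overhead} bytes")
# The physical transport network's MTU must be raised by at least this
# much if VM-facing interfaces are to keep a standard 1500-byte MTU.
```

TSO support on the physical NIC is what lets STT amortize this overhead across large segments, which is the performance advantage the paragraph above refers to.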

 

 

Verify Transport Zone Binding

 

View transport zone binding information and click Save. (skip the Logical Router step for now)

 

 

Create App / DB Switches

Follow the same procedure to create Logical Switches for App and DB tiers.

App-Tier Logical Switch:

DB-Tier Logical Switch:

 

 

Verify Logical Switches

 

In the Dashboard, the Summary of Logical Components lists the logical component types as well as the number of registered and active components.

Click on the number next to Switches.

Confirm that Fabric status is Up for all three logical switches.

 

 

Verify connectivity between web servers

Prior to connecting servers to the Web-Tier logical switch, verify that they do not have connectivity.

 

 

Login to vSphere Web Client

 

From the web browser click on the vSphere Web Client bookmark.

Enter credentials: root/VMware1!

 

 

Open console for web-sv-01a

 

Right click on the web-sv-01a virtual machine and select Open Console.

The console may take a few moments to open. Click within the console and press ENTER repeatedly to bring up the login prompt.

 

 

Ping web servers

 

Login to web-sv-01a using credentials root/VMware1!

Ping web-sv-02a (on KVM host) {10}

# ping -c 3 172.16.10.12

Ping web-sv-03a (on KVM host)

# ping -c 3 172.16.10.13

Ping lb-sv-01a (on the same esx-01a host)

# ping -c 3 172.16.10.10

All ping attempts should fail since the virtual machines are not connected to the logical switch yet.

 

Create Logical Ports


Logical networks are exposed using the logical switch and logical router entities, and each logical switch or logical router includes one or more logical ports. Logical ports can implement security and QoS policies, and expose port counters for metering or debugging.

Each logical port includes an Attachment that describes either the VM interface or physical network that acts as a source/sink of traffic sent in and out of that logical port.

NOTE: For the lab we are manually creating logical entities to provide the context around how NSX wires up each component. Typically a cloud management platform (CMP) is used to automate the provisioning of networks via the NSX API. For more information on CMP integration, visit the VMware booth.
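In API terms, the GUI steps that follow amount to two objects: a logical switch port, plus a VIF attachment binding the VM's virtual interface to that port. The sketch below is a hypothetical illustration of those payloads; the field names, attachment type string, and resource paths are assumptions, not taken from this lab manual.

```python
import json

# Hypothetical sketch of the two API objects behind "attach a VM to a
# logical switch": a logical switch port and a VIF attachment. Field
# names and paths are illustrative assumptions.

def make_lport_payload(name, admin_status_enabled=True):
    """Request body for creating a logical switch port."""
    return {
        "display_name": name,
        "admin_status_enabled": admin_status_enabled,
    }

def make_vif_attachment(vif_uuid):
    """Request body binding a VM interface (VIF) to a logical port."""
    return {"type": "VifAttachment", "vif_uuid": vif_uuid}

lport = make_lport_payload("web-sv-01a")
attachment = make_vif_attachment("aaaa-bbbb-cccc")  # placeholder VIF id
# A client would POST lport to .../lswitch/<lswitch-uuid>/lport, then
# PUT the attachment to .../lport/<lport-uuid>/attachment.
print(json.dumps({"lport": lport, "attachment": attachment}, indent=2))
```

Creating the port and setting its attachment as two separate operations is what lets an attachment later move with the VM while the port's policies stay put.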


 

Attach web-sv-01a VM to the Web-Tier Logical Switch

 

From the dashboard, click the Add button next to Switch Ports

 

 

Select Logical Switch

 

From the drop down menu, choose Web-Tier logical switch.

Click Next.

 

 

Logical Port name

 

In the dialog box:

Set the Display Name for the logical switch port to 'web-sv-01a'

Click Next

 

 

Switch Port Properties

 

Leave Port Number and Logical Queue UUID blank

Leave Admin Status Enabled checked

Click Next

 

 

Mirror Targets

 

Do not add a mirror target. Click Next.

 

 

Attachment

 

Click on the Attachment Type drop down menu and select VIF. A VIF Attachment connects the VM’s virtual interface (VIF) to a logical switch port.

Click on the Hypervisor drop down menu and select esx-01a

Click on the VIF drop down menu and select the MAC ending in 28:c3

Click Save & View to finish

 

 

 

Verify switch configuration

 

Once you hit Save & View in the above step, the Logical Switch Port details are displayed.

Ensure that the Fabric, Admin and Link status are up.

 

 

Attach web-sv-02a VM to the Web-Tier Logical Switch

 

Follow the same procedure as before to attach the web-sv-02a VM on the KVM host to the Web-Tier logical switch.

Click on Dashboard, then under the Summary of Logical Components section click Add next to Switch Ports.

web-sv-02a Logical Switch Port

 

 

Attach web-sv-03a VM to the Web-Tier Logical Switch

 

Follow the same procedure as before to attach the web-sv-03a VM on the KVM host to the Web-Tier logical switch.

Click on Dashboard, then under the Summary of Logical Components section click Add next to Switch Ports.

web-sv-03a Logical Switch Port

 

 

Attach app-sv-01a VM to the App-Tier Logical Switch

 

Follow the same procedure as before to attach the app-sv-01a VM on the esx-01a host to the App-Tier logical switch.

Click on Dashboard, then under the Summary of Logical Components section click Add next to Switch Ports.

app-sv-01a Logical Switch Port

 

 

Attach db-sv-01a VM to the DB-Tier Logical Switch

 

Follow the same procedure as before to attach the db-sv-01a VM on the esx-02a host to the DB-Tier logical switch.

Click on Dashboard, then under the Summary of Logical Components section click Add next to Switch Ports.

db-sv-01a Logical Switch Port

 

 

Attach lb-sv-01a VM to the Web-Tier Logical Switch

 

Follow the same procedure as before to attach the load balancer lb-sv-01a VM on the esx-01a host to the Web-Tier logical switch.

Click on Dashboard, then under the Summary of Logical Components section click Add next to Switch Ports.

lb-sv-01a Logical Switch Port

 

 

Verify Logical Switch Port Status

 

In the Dashboard, the Summary of Logical Components section lists the logical component types as well as the number of registered and active components.

Click on the number next to Switch Ports to view the list of logical switch ports.

Confirm that the Link and Fabric status are Up for all six logical switch ports.

 

 

Verify connectivity between web servers

Now the web, application, and database virtual machines are connected to the corresponding logical switches. The virtual machines reside on a mix of hypervisors (ESXi, KVM) located in different L2 segments.

Verify that the load balancer and web servers have connectivity since they are connected to the same logical switch.

 

 

Login to vSphere Web Client

 

From the web browser click on the vSphere Web Client bookmark.

Enter credentials: root/VMware1!

 

 

Open console for web-sv-01a

 

Right click on the web-sv-01a virtual machine and select Open Console.

Click within the console and press ENTER or CTRL+ALT+DELETE to bring up the login prompt.

 

 

Ping web servers

 

Login to web-sv-01a using credentials root/VMware1!

{11}

Ping web-sv-02a

# ping -c 3 172.16.10.12

Ping web-sv-03a

# ping -c 3 172.16.10.13

Ping lb-sv-01a

# ping -c 3 172.16.10.10

The ping attempts succeed because all three virtual machines are now connected to the same logical switch. You have created a logical network that spans between two separate layer 2 segments and different hypervisors!

Note: You may see DUP ping packets because this lab is hosted in a nested environment with promiscuous mode enabled.

 

 

Verify ARP cache

 

View the ARP cache to validate that web-sv-01a has entries for the other web server virtual machines {12}

# arp -n

 

 

Ping other servers

 

Ping app-sv-01a

# ping -c 3 172.16.20.11

Ping db-sv-01a

# ping -c 3 172.16.30.11

The pings fail since there is no routing configured between the logical switches.

 

 

Access web application

 

From the web browser click on the NSX Web Application bookmark. The NSX Web Application fails to load since there is no connection from the desktop to the web servers.

 

Module 3 - Logical Routing

Lab Topology


In Module 3, enable routing by creating an L3 Gateway Service and a Logical Router. This allows for routing between logical switches and routing between the desktop and the logical networks through the L3 Gateway Service.


 

Logical View

 

The target state logical view is shown above.

 

Lab Fast Forward


Important Note on the Fast-Forward-NSX Script:

Modules 5, 6 and 7 of this lab depend on modules 2 and 3 to be completed.

If you complete Modules 2 and 3 now but return in a later session to finish the remaining modules, you will need to run a Python script called "fast-forward-nsx". The script completes Modules 2 and 3 for you so that you can proceed with the remaining modules. Note that Module 4 does not depend on Modules 2 and 3.

Details on how to run this script are described in Module 5.


Create L3 Gateway Service


An L3 Gateway Service lets you connect logical router ports to physical networks via interfaces on NSX Gateway nodes. Multiple Gateways can be attached to an L3 Gateway Service to provide increased scalability and availability to the logical routers that rely on it.

To provide routing between the web, application, database, and physical networks:


 

Create L3 Gateway Service

 

From the dashboard, click Add next to Gateway Services.

 

 

Service Type

 

Select L3 Gateway Service as the Gateway Service Type

Click Next.

 

 

Service Name

 

Enter L3GWService for the Display Name.

Click Next.

 

 

Transport Nodes

 

Click on Add Gateway

 

 

Add Gateway

 

Select nsx-gw-01a

For Device ID choose breth0

Click OK

 

 

View Gateway details

 

Verify the Gateway and click Save

 

 

Create Logical Router

 

A Logical Router is an abstraction that provides a standard IPv4 service model for layer-3 packet forwarding.

Logical routers are often configured as layer-3 gateways to external, physical networks. Optionally, on each logical router you may configure destination NAT (DNAT) rules to alter the destination address of packets and/or source NAT (SNAT) rules to alter the source address of packets.

From the Dashboard, click on Add next to Routers

 

 

Display Name

 

Enter Distributed-Router as the Display Name.

Click Next.

 

 

Properties

 

Change Routing Type to Single Default Route

Enter 192.168.130.2 as the default gateway IP address

Click Next.

 

 

Distribution

 

Select Distributed Logical Router. This provides one-hop routing of VM-to-VM traffic among virtual machines connected over the logical router.

Click Next.

 

 

L3 Gateway Service

 

Select the L3 Gateway Service created previously (L3GWService)

Choose Advanced under Logical Router Port

Click on Update

 

 

Configure Logical Router Port

 

Enter L3Uplink for Logical Router Port

Click Next.

 

 

Properties

 

Click on Add IP Address

 

 

Create IP Address Prefix

 

Enter the IP Address Prefix: 192.168.130.10/24 (this prefix is in the VM vDSwitch Network)

Click OK
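As a sanity check, Python's standard ipaddress module can confirm that the uplink address entered here sits in the same /24 as the default gateway configured earlier (192.168.130.2). A minimal sketch using the values from this lab:

```python
import ipaddress

# Uplink port address/prefix from this step, and the default gateway
# configured on the logical router (values from the lab manual).
uplink = ipaddress.ip_interface("192.168.130.10/24")
gateway = ipaddress.ip_address("192.168.130.2")

# The gateway must be reachable on the uplink's directly connected network.
assert gateway in uplink.network
print(uplink.network)  # 192.168.130.0/24
```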

 

 

View details

 

Click OK to go back to the L3 Router wizard.

Click Save to finish.

 

 

Create Logical Router Patch Port

 

From the Dashboard, click on Add next to Router Ports

 

 

Select Logical Router

 

Select the Logical Router created in the previous step (Distributed-Router)

Click Next

 

 

Display Name

 

Enter the name for the patch port (Web-GW)

This patch port will be the gateway for the Web-Tier Logical Switch.

 

 

Properties

 

Click on Add IP Address

 

 

Create IP Address Prefix

 

Enter the IP Address Prefix: 172.16.10.1/24

This is the default gateway of the Web-Tier logical network.

Click OK, then click Next

 

 

Attachment Type

 

Leave attachment type set to None. This creates an empty router port that will be used by the Logical Switch.

Click Save

 

 

Create Patch Ports for App and DB

Repeat the steps to create patch ports for App-Tier and DB-Tier networks. Use the following parameters:

App-Tier

DB-Tier

 

 

View router ports

 

From the Dashboard, under the Summary of Logical Components section, click on the number next to Router Ports. The router ports created for Web/App/DB have a link status of Unknown since we have not yet connected Logical Switches to the Router Patch Ports.

 

 

Attach Logical Switches to Patch Port

The final step is to connect the logical switches to the logical router patch ports.

 

 

Create Logical Switch Port

 

From the dashboard, click on Add next to Switch Ports

 

 

Select Logical Switch

 

From the drop down, select the Web-Tier Logical Switch

Click Next

 

 

Display Name

 

Enter Web-Patch as the Display Name

Click Next

 

 

Properties

 

Leave properties at defaults

Click Next

 

 

Mirror Targets

 

Do not add a mirror target

Click Next

 

 

Attachment Type

 

Click on the Attachment Type drop down and select Patch to Logical Router Port

Click on the Logical Router drop down and select Distributed-Router

Click on the Peer Port UUID drop down and select Web-GW

Click Save

 

 

Attach App and DB to patch ports

Repeat the steps to connect App-Tier and DB-Tier Logical Switches to previously created Logical Router patch ports.

Use the following parameters:

App-Tier

DB-Tier

 

 

View Switch Ports

 

At this point, the logical switches are connected to the logical router.

From the Dashboard, under the Summary of Logical Components click on the number next to Switch Ports. All switch ports should show Up for Link and Fabric states.

 

 

View Router Ports

 

From the dashboard, click on the number next to Router Ports. All router ports should now show Up for Link and Fabric states.

 

 

Test Connectivity

Now the web, application, and database logical switches are connected to the logical router. The virtual machines reside on a mix of hypervisors (ESXi, KVM) located in different L2 segments.

Verify that the routing works between all the logical switches.

 

 

Login to vSphere Web Client

 

From the web browser click on the vSphere Web Client bookmark.

Enter credentials: root/VMware1!

 

 

Open console for web-sv-01a

 

Right click on the web-sv-01a virtual machine and select Open Console.

Click within the console and hit ENTER to bring up the login prompt.

 

 

Ping gateway / App / DB servers

 

Login to web-sv-01a using credentials root/VMware1!

Ping web default gateway {13}

# ping -c 3 172.16.10.1

Ping App and DB default gateways

# ping -c 3 172.16.20.1

# ping -c 3 172.16.30.1

Ping App and DB servers

# ping -c 3 172.16.20.11

# ping -c 3 172.16.30.11

The ping attempts succeed because the logical switches are now routed through the logical router.

 

 

Access web application

 

Since the logical router has an uplink to the physical gateway, the web application is now reachable from the desktop.

From the web browser click on the NSX Web Application bookmark to load the web application.

 

Module 4 - Physical to Logical

Lab Topology


In this module, the legacy application on the physical network will be migrated to a logical network. Through an L2 Gateway Service, the logical network will be bridged to the physical network, ensuring that the application remains accessible to users. The IP address of the legacy machine will not be changed.


 

Logical View

 

The target state logical view is shown above.

 

Migrate Legacy Server


ABC Medical is in the process of migrating existing legacy applications to the next-generation cloud infrastructure for improved performance and availability. To simplify the migration process, L2 bridging will be configured so that IP addressing does not have to change.

The following steps are needed:

The goal is to seamlessly migrate the application into the logical space without having to re-IP the virtual machine.

 


 

Review Legacy Application

 

Open a new tab in the Chrome web browser and click the NSX Legacy Application bookmark.

 

 

View Legacy Application

 

The page for the NSX Legacy Test Application appears.

 

 

Decommission Server

 

The legacy server (lgcy-sv-01a) is hosted on kvm-01b. We need to decommission the legacy server before proceeding with the migration.

Launch putty and connect to kvm-01b

Use credentials: nsx-admin/VMware1!

 

 

List virtual machines

 

View all virtual machines on kvm-01b {14}

# virsh list --all

lgcy-sv-01a is the legacy server.

 

 

Power off lgcy-sv-01a

 

Power off lgcy-sv-01a {14}

# virsh shutdown lgcy-sv-01a

# virsh list --all

 

 

 

Test Connectivity

 

Return to the Chrome web browser and click the NSX Legacy Application bookmark.

Note: It may take a few seconds for the page to refresh.

 

 

View Legacy Application

 

Since the Legacy Application has been decommissioned, the web page will return an error.

 

 

New Legacy Application

After decommissioning the legacy application, it has been converted and uploaded to the next-generation cloud environment.

Login to the vSphere Web Client

Credentials: root/VMware1!

 

 

Power on new Legacy Application

 

Navigate to the list of Virtual Machines.

Right click on lgcy-sv-01a and select Power On.

 

Create Legacy Logical Switch


To provide connectivity for the new Legacy Application, create a logical switch and a logical port connection.

From the browser, load the NSX Manager interface.


 

Create Logical Switch

 

From the dashboard, click the Add button next to Switches

 

 

Logical Switch name

 

In the dialog box:

Set the Display Name for the logical switch to 'Legacy'

Click Next

 

 

Switch Properties

 

Leave Port Isolation Enabled unchecked

Use default for Replication Mode

Click Next

 

 

Add Binding

 

Next, add a binding that specifies the encapsulation and transport zone associated with the logical switch

Click on Add Binding

 

 

Create Transport Zone Binding

 

Select Transport Type and Transport Zone type

Click OK

 

 

Verify Transport Zone Binding

 

View transport zone binding information and click Save. (skip the Logical Router step for now)

 

 

Create Legacy Logical Port

 

From the dashboard, click the Add button next to Switch Ports

 

 

Select Logical Switch

 

From the drop down menu, choose Legacy logical switch.

Click Next.

 

 

Logical Port name

 

In the dialog box:

Set the Display Name for the logical switch to 'lgcy-sv-01a'

Click Next

 

 

Switch Port Properties

 

Leave Port Number and Logical Queue UUID blank

Leave Admin Status Enabled checked

Click Next

 

 

Mirror Targets

 

Do not add a mirror target. Click Next.

 

 

Attachment

 

Click on the Attachment Type drop down menu and select VIF. A VIF Attachment connects the VM’s virtual interface (VIF) to a logical switch port.

Click on the Hypervisor drop down menu and select esx-02a

Click on the VIF drop down menu and select the MAC ending in eb:34

Click Save to finish

 

Create L2 Gateway Service


An L2 Gateway Service lets you connect logical switch ports to physical network interfaces exposed via an NSX Gateway. For each such interface, the Gateway exposes a bridge-id (for example, breth0 for physical interface eth0). Multiple Gateways can be attached to the same L2 Gateway Service for increased scalability and redundancy.
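The bridge-id naming convention described above (breth0 for eth0) can be sketched as a trivial helper. This is an illustration of the naming convention only, not NSX code:

```python
def bridge_id(interface: str) -> str:
    """Derive the bridge-id an NSX Gateway exposes for a physical interface."""
    return "br" + interface

# The Gateway's eth0 interface appears as bridge-id breth0:
assert bridge_id("eth0") == "breth0"
assert bridge_id("eth1") == "breth1"
```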

Here we will create an L2 Gateway Service, add a Gateway node, then connect the Legacy Logical Switch to the L2 Gateway Service.


 

Create L2 Gateway Service

 

From the dashboard, click Add next to Gateway Services.

 

 

Service Type

 

Select L2 Gateway Service as the Gateway Service Type

Click Next

 

 

Service Name

 

Enter L2GWService for the name

Click Next

 

 

Transport Nodes

 

Click on Add Gateway

 

 

Add Gateway

 

Select nsx-gw-02a

For Device ID choose breth0

Click OK

 

 

View Gateway details

 

Verify the Gateway and click Save

 

 

Create Logical Port for Gateway Service

 

From the dashboard, click the Add button next to Switch Ports

 

 

Select Logical Switch

 

From the drop down menu, choose Legacy logical switch.

Click Next.

 

 

Logical Port name

 

In the dialog box:

Set the Display Name for the logical switch to 'L2GWS'

Click Next

 

 

Switch Port Properties

 

Leave Port Number and Logical Queue UUID blank

Leave Admin Status Enabled checked

Click Next

 

 

Mirror Targets

 

Do not add a mirror target. Click Next.

 

 

Attachment

 

Configure the following:

Attachment Type: L2 Gateway

L2 Gateway Service: L2GWService

Leave VLAN blank

Click Save to finish

 

 

Test Connectivity

 

Return to the web browser and click the NSX Legacy Application bookmark.

Note: It may take a few seconds for the page to refresh.

 

 

View Legacy Application

 

With the L2 Gateway Service bridging between the Legacy Logical Switch and the physical network, we can now access the migrated legacy application hosted in the next-generation environment. No IP addresses were changed in the migration of the application.

One of the key benefits of moving the application to a virtual network is the ability to apply performance and security controls through NSX. These topics will be explored in the next module.

NOTE: In NSX Manager the Legacy Logical Switch and L2GWS logical port may show link down and fabric down. If L2 bridging is working, do not worry about the switch and port statuses.

 

Module 5 – Security

Lab Fast Forward Script


As previously mentioned, Modules 5, 6, and 7 of this lab depend on Modules 2 and 3 being completed.

If you are returning to this lab or have decided to skip Modules 2 and 3, you can run the Python script "fast-forward-nsx.py". This script will complete Modules 2 and 3 for you so that you can proceed with the remaining modules.

Details on how to run this script are clearly articulated below.


 

Access the KVM Hypervisor

 

Using the putty tool on the control center desktop, access the kvm-01b hypervisor.

 

 

Login to KVM Hypervisor

 

login: nsx-admin

password: VMware1!

 

 

Run the script

 

{15} Run the executable python script called fast-forward-nsx.py

Command: python ./fast-forward-nsx.py create

The last line of the script's output will read "NSX environment configured up to the end of Module 3, you can now continue with your lab"

You are now ready to proceed with the remaining modules.

 

 

Verify the script worked

 

Access the Chrome browser on the control center desktop.

Click the NSX Manager-Login bookmark.

login: admin

password: admin

Click on Dashboard

You should see that the Logical and Transport components are already configured.

 

Port Isolation


ABC Medical would like the ability to segment the Web Servers so they cannot communicate with each other.


 

Overview

 

Port Isolation configures a Logical Switch in isolated mode, similar to Private VLAN functionality on a physical switch. Logical Port to Logical Port communication is blocked on the isolated Logical Switch. Only traffic entering or leaving the logical switch through an L2/L3 Gateway Service is allowed.

Primary use cases include shared-services networks, such as Internet access or L4-7 services networks, where many virtual machines share one VLAN/subnet.
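The forwarding behavior on an isolated logical switch can be summarized in a small sketch. This is a conceptual model only, not NSX code; the port names are hypothetical:

```python
# Conceptual model of Port Isolation on a logical switch: port-to-port
# traffic is dropped; only traffic entering or leaving through an
# L2/L3 Gateway Service port is forwarded.

GATEWAY_PORTS = {"l3-uplink"}  # ports backed by an L2/L3 Gateway Service

def isolated_switch_forwards(src_port: str, dst_port: str) -> bool:
    """Return True if the isolated switch forwards src_port -> dst_port."""
    return src_port in GATEWAY_PORTS or dst_port in GATEWAY_PORTS

assert not isolated_switch_forwards("web-sv-01a", "web-sv-02a")  # VM-to-VM blocked
assert isolated_switch_forwards("web-sv-01a", "l3-uplink")       # via gateway OK
```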

 

 

Enable Port Isolation for Web-Tier

 

To segment the web servers, enable port isolation for the Web-Tier logical switch.

From the Dashboard, click on the number next to Switches.

 

 

 

Select Logical Switch

 

Click on the settings gear next to the Web-Tier Logical Switch and select Edit configuration.

 

 

Enable Port Isolation

 

Go to the Properties tab and select Port Isolation Enabled.

Click Save.

 

 

Verify configuration

 

View Logical Switches. The Web-Tier logical switch now has Port Isolation enabled.

 

 

Verify Port Isolation

Once Port Isolation is enabled, virtual machines on the Web-Tier logical switch will no longer be able to communicate with each other.

Use the vSphere Web Client to verify lack of connectivity.

 

 

Login to vSphere Web Client

 

From the web browser click on the vSphere Web Client bookmark.

Enter credentials: root/VMware1!

 

 

Open console for web-sv-01a

 

Right click on the web-sv-01a virtual machine and select Open Console.

Click within the console and hit ENTER to bring up the login prompt.

 

 

Ping web servers

 

Login to web-sv-01a using credentials root/VMware1!

{16}

Ping lb-sv-01a

# ping -c 3 172.16.10.10

Ping web-sv-02a

# ping -c 3 172.16.10.12

Ping web-sv-03a

# ping -c 3 172.16.10.13

The ping attempts fail because Port Isolation is enabled on the logical switch.

Port Isolation does not apply to resources bridged to the logical switch through an L2 Gateway. In that scenario, virtual machines would still be able to communicate with resources connected to the physical network.

 

 

Disable Port Isolation

Revert the changes made to the Web-Tier Logical Switch.

 

 

Select Logical Switch

 

Click on the settings gear next to Web-Tier and select Edit configuration.

 

 

Disable Port Isolation

 

In the Edit wizard, go to the Properties tab and deselect Port Isolation Enabled.

Click Save.

 

 

Verify configuration

 

View Logical Switches. The Web-Tier logical switch has Port Isolation disabled.

 

Access Control Lists


After configuring the logical entities to provide connectivity to the web application, secure the environment so only required communication is allowed.

Application security requirements are:

Access Control Lists (ACLs) provide L3/L4-aware distributed firewall services that filter traffic on a per-logical-port basis and support filtering traffic into and out of networks.
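Conceptually, each ACL is an ordered rule list evaluated top to bottom, with the first match winning and unmatched traffic denied. A minimal sketch of that evaluation (illustrative only; the rule fields are assumptions, not the NSX rule schema):

```python
# First-match ACL evaluation sketch. Each rule matches on protocol and
# destination port; "*" is a wildcard. Traffic not matched by any rule
# is denied -- the usual default for a security ACL.

def acl_decision(rules, protocol, dst_port):
    for action, rule_proto, rule_port in rules:   # top-to-bottom order matters
        if rule_proto in ("*", protocol) and rule_port in ("*", dst_port):
            return action
    return "deny"

# Rules mirroring the Web-Tier requirement: only inbound TCP 80 and 443.
web_acl = [
    ("allow", "tcp", 80),
    ("allow", "tcp", 443),
]

assert acl_decision(web_acl, "tcp", 443) == "allow"
assert acl_decision(web_acl, "tcp", 22) == "deny"   # SSH hits the default deny
```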


 

Web-Tier Security ACL

 

Only allow inbound TCP 80 & 443 to the Web-Tier Logical Switch.

From the dashboard, click the Add button next to ACLs

 

 

Display Name

 

Set the Display Name to Web-ACL

Click Next

 

 

Egress

 

Egress refers to rules applying to traffic egressing from the Logical Router Port to the virtual machine.

Click on Add Egress Rule

 

 

Add Egress Rules

 

Add four egress rules to the Security ACL. Rules are applied from top to bottom.

Rule number: 1

Rule number: 2

Rule number: 3

Rule number: 4

Click Next

 

 

Ingress Rules

 

Ingress refers to rules applying to traffic ingressing to the Logical Router Port from the virtual machine.

Do not add any Ingress Rules

Click Save

 

 

Edit Logical Router Port

 

From the Dashboard, click on the number next to Router Ports.

Edit the Web-GW Logical Router Port and add the Web-ACL.

 

 

Add ACL

 

Under Access Control, select Web-ACL from the dropdown

Click Save

 

 

App-Tier Security ACL

 

Only allow TCP 8443 from the Web-Tier to the App-Tier.

From the dashboard, click the Add button next to ACLs

 

 

Display Name

 

Set the Display Name to App-ACL

Click Next

 

 

Egress

 

Egress refers to rules applying to traffic egressing from the Logical Router Port to the virtual machine.

Click on Add Egress Rule

 

 

Add Egress Rules

 

Add three egress rules:

Rule number: 1

Rule number: 2

Rule number: 3

Click Next

 

 

Ingress Rules

 

Ingress refers to rules applying to traffic ingressing to the Logical Router Port from the virtual machine.

Do not add any Ingress Rules

Click Save

 

 

Edit Logical Router Port

 

Edit the App-GW Logical Router Port and add the App-ACL.

 

 

Add ACL

 

Under Access Control, select App-ACL from the dropdown

Click Save

 

 

DB-Tier Security ACL

 

Only allow TCP 3306 from the App-Tier to the DB-Tier

From the dashboard, click the Add button next to ACLs

 

 

Display Name

 

Set the Display Name to DB-ACL

Click Next

 

 

Egress

 

Egress refers to rules applying to traffic egressing from the Logical Router Port to the virtual machine.

Click on Add Egress Rule

 

 

Add Egress Rules

 

Add two egress rules:

Rule number: 1

Rule number: 2

Click Next

 

 

Ingress Rules

 

Ingress refers to rules applying to traffic ingressing to the Logical Router Port from the virtual machine.

Do not add any Ingress Rules

Click Save

 

 

Edit Logical Router Port

 

Edit each DB-GW Logical Router Port and add the DB-ACL

 

 

Add ACL

 

Under Access Control, select DB-ACL from the drop down

Click Save

 

 

Test Connectivity

Now that security policies are in place, verify that the web application is still accessible while the communication restrictions between application tiers are enforced.

 

 

Ping virtual machines from desktop

 

Open a command prompt and ping the web (172.16.10.10-13), application (172.16.20.11), and database servers (172.16.30.11). {16}-{17}

All pings will fail.

Close the command prompt

 

 

Open console for web-sv-01a

 

Login to the vSphere Web Client.

Right click on the web-sv-01a virtual machine and select Open Console.

Click within the console and hit ENTER to bring up the login prompt.

 

 

Ping servers

 

Login to web-sv-01a using credentials root/VMware1!

{17}

Ping app-sv-01a

# ping -c 3 172.16.20.11

Ping db-sv-01a

# ping -c 3 172.16.30.11

The ACL blocks communication from the web servers to the other servers. Open up consoles to app-sv-01a and db-sv-01a and run ping tests.

 

 

Access web application

 

The final test is to see if we can still access the web application. Click on the NSX Web Application bookmark to load the web application.

Since the required application ports are allowed between logical networks, the application loads properly.

 

Port Security


Port security provides a mechanism to whitelist a set of MAC and IP addresses pairs whose packets can travel through a given logical switch port. Each entry in the whitelist is called an allowed address pair. Optionally, an allowed address pair may consist of an allowed MAC address only, without a corresponding IP address.

For any port on which port security is active, traffic is filtered as follows.

• On logical port ingress, a packet is dropped unless its source MAC and source IP match an allowed address pair.

• On logical port egress, a packet is dropped unless its destination MAC and destination IP match an allowed address pair.
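The filtering rules above amount to a whitelist lookup per direction. A minimal sketch of the lookup (illustrative only, not NSX code; the MAC address is hypothetical):

```python
# Port security sketch: a packet passes only if its (MAC, IP) pair -- source
# addresses on ingress, destination addresses on egress -- matches an allowed
# address pair. A pair with ip=None allows any IP for that MAC.

ALLOWED_PAIRS = [("00:50:56:00:eb:34", "172.16.10.11")]  # hypothetical MAC

def pair_allowed(mac, ip):
    return any(mac == m and (i is None or ip == i) for m, i in ALLOWED_PAIRS)

# A packet using the configured pair is forwarded:
assert pair_allowed("00:50:56:00:eb:34", "172.16.10.11")
# The same MAC with a changed IP address is dropped:
assert not pair_allowed("00:50:56:00:eb:34", "172.16.10.23")
```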

 


 

Configure Port Security

 

Configure address-pairs on web-sv-01a, which is connected to the Web-Tier logical switch.

From the Dashboard, click on the number next to Switch Ports.

 

 

Edit Logical Switch Port

 

Click on the gear next to web-sv-01a and select Edit

 

 

Add Address Pair

 

Under Port Security, click on Add Address Pair

 

 

Create Address Pair

 

Click on Insert Attached Mac; this populates the MAC address of the virtual machine connected to the port.

Enter IP address 172.16.10.11; this is the IP address of the web-sv-01a virtual machine.

Click OK then click Save

 

 

Verify connectivity between web servers

Verify that the load balancer and web servers lose connectivity to web-sv-01a when its IP address is changed.

 

 

Login to vSphere Web Client

 

From the web browser click on the vSphere Web Client bookmark.

Enter credentials: root/VMware1!

 

 

Open console for web-sv-01a

 

Right click on the web-sv-01a virtual machine and select Open Console.

Click within the console and hit ENTER to bring up the login prompt.

 

 

Ping web servers

 

Login to web-sv-01a using credentials root/VMware1!

Ping web-sv-02a

# ping -c 3 172.16.10.12

Ping web-sv-03a

# ping -c 3 172.16.10.13

Ping lb-sv-01a

# ping -c 3 172.16.10.10

The ping attempts succeed.

 

 

Change IP address

 

Verify the IP address of the web-sv-01a {18}

# ifconfig

Change the IP address of the web-sv-01a

# ifconfig eth0 172.16.10.23 netmask 255.255.255.0 up

 

 

 

Ping web servers

 

Ping web-sv-02a

# ping -c 3 172.16.10.12

Ping web-sv-03a

# ping -c 3 172.16.10.13

The pings fail with Destination Host Unreachable.

 

 

Revert IP Address

 

Revert the IP address of the web-sv-01a {19}

# ifconfig eth0 172.16.10.11 netmask 255.255.255.0 up

Ping web-sv-02a

# ping -c 3 172.16.10.12

Since the IP / MAC matches the configured address pair, the pings are successful.

 

Module 6 - NSX API

Using the API Inspector


One of the developers has requested access to web-sv-03a in an isolated environment to perform testing. ABC Medical is currently working on integrating their Cloud Management Platform with the NSX API.

For now, use the API Inspector to make API calls that migrate web-sv-03a to a new logical switch.


 

Important Note: Lab Fast Forward Script

Modules 2 and 3 must be completed before you continue with this module. We have created a Python script, "fast-forward-nsx.py", to complete those modules for you so that you can proceed with the lab. Please refer to the Lab Fast Forward Script section in Module 5 for detailed instructions on running the script.

 

 

NSX API

In previous modules we created and configured network elements through NSX Manager to provide context. In real-world deployments, a Cloud Management Platform integrates with NSX to automate the provisioning of network components.

The NSX API is a RESTful JSON API. Each API call is an HTTP request and response pair, with request and response data objects formatted as JSON text. In a RESTful API, API calls are operations on objects, with each object being represented by a unique URL. Different HTTP methods correspond to different operations on these objects. Developers can use a variety of programming languages to communicate with the NSX API. Any language with libraries for HTTP requests and JSON parsing will work.
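For example, deleting a logical switch port is just an HTTP DELETE against that object's URL. A sketch of how a client might assemble such a request (the URL pattern follows the ws.v1 convention used elsewhere in this lab; treat it as an assumption and confirm against the API Documentation page):

```python
def delete_lport_request(lswitch_uuid: str, lport_uuid: str) -> dict:
    """Build a description of the HTTP request that deletes a logical
    switch port.

    The URL pattern is assumed from the ws.v1 API convention -- verify it
    against the API Documentation before use.
    """
    return {
        "method": "DELETE",
        "url": f"/ws.v1/lswitch/{lswitch_uuid}/lport/{lport_uuid}",
        "headers": {"Content-Type": "application/json"},
    }

# Placeholder UUIDs; a real call uses the UUIDs shown in NSX Manager.
req = delete_lport_request("aaaa-1111", "bbbb-2222")
print(req["url"])  # /ws.v1/lswitch/aaaa-1111/lport/bbbb-2222
```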

 

 

Documentation

 

Access API Documentation by going to Tools & Troubleshooting and clicking on API Documentation

 

 

The API Inspector

 

The API Inspector provides access to a searchable list of all API calls to simplify integration with the NSX API. Clicking on an individual API method name shows an auto-generated web form, based on the JSON schema for that call, that lets you exercise the API call and see the raw output.

The API Inspector helps developers understand the exact code required to perform a function and speeds up integration efforts.

Mouse over Tools & Troubleshooting and then click on API Inspector

 

 

Detach web-sv-03a

Before web-sv-03a can be attached to another logical switch, delete the logical port attachment.

 

 

Delete Logical Port

 

In the API Inspector search box, type "delete logical" to filter the list of API calls.

Select Delete Logical Switch Port

 

 

Enter parameters

 

The API Inspector prompts for all the required URI tokens associated with the API call.

For Delete Logical Switch Port, enter the following:

When all required tokens are provided, a green checkbox appears next to Forms Valid.

Click on Submit Request

 

 

View Results

 

The results of the DELETE request are displayed.

 

 

Verify Deletion

 

From the Dashboard, in the Summary of Logical Components section, click on the number next to Switch Ports. You should not see the web-sv-03a switch port in the list.

The port mapped to the web-sv-03a VM has been deleted, and the VM can now be attached to any other logical switch.

 

 

Create Logical Switch

Create a new logical switch called Test using the NSX API.

 

 

Create Test Logical Switch

 

In the API Inspector search box, type "create logical" to filter the list of API calls.

Select Create a Logical Switch

 

 

Enter parameters

 

Enter the following:

(click on the + next to Transport Zone to show more fields)

Click Submit Request

 

 

View Results

 

The request generates the following response from the NSX Controller.

 

 

Create Logical Switch Port

Create a Logical Switch Port on the Test Logical Switch

 

 

Create Logical Switch Port

 

In the API Inspector search box, type "create logical" to filter the list of API calls.

Select Create a Logical Switch Port

 

 

Enter parameters

 

Enter the following:

Click Submit Request

 

 

View Results

 

The request generates the following response from the NSX Controller.

 

 

Update Logical Switch Port Attachment

The final step is to attach web-sv-03a to the previously created Logical Switch Port.

 

 

Update Logical Switch Port Attachment

 

In the API Inspector search box, type "attachment" to filter the list of API calls.

Select Update Logical Switch Port Attachment

 

 

Enter parameters

 

Enter the following:

(Click on the Request Body drop down and choose VifAttachment)

Click Submit Request

 

 

View Results

 

The request generates the following response from the NSX Controller.

 

 

Verify Results

 

Navigate within NSX Manager to view the results of the API calls. There should be:

The API Inspector is a powerful tool that significantly simplifies integration with the NSX API.

 

Using the REST client


The developer has completed development and testing of the web server in the isolated network. The web server needs to be moved back to the Web-Tier network.

Use a standard REST client to invoke the NSX API calls for migrating web-sv-03a back to the Web-Tier network.


 

Detach web-sv-03a

Before web-sv-03a can be attached to another logical switch, delete the current logical port attachment.

 

 

Delete Logical Switch Port

 

From NSX Manager, navigate to the list of logical switch ports.

Click on the gear on the right of web-sv-03a and select Delete

 

 

REST Overview

Representational state transfer (REST) defines a set of simple principles that are loosely followed by most API implementations. REST leverages the strengths and constraints of HTTP to send data (headers and bodies) between clients and servers. REST elements include:

 

 

 

REST Client

 

From the desktop, launch the Firefox web browser.

Click on the RESTClient icon to launch the extension.

 

 

Set Content Type

 

Click on Headers and select Custom Header {20}

Click Okay

 

 

Login

 

To interact with the NSX API, first login and retrieve an authentication token. {21}

Set the following:

Click SEND

NOTE: If you receive a 404 error, flush the Firefox cache (Tools > Clear Recent History > select Cache > Clear Now) and relaunch the browser.

 

 

Login Response

 

The POST request response is displayed below.

Select and copy the nvp_sessionid value

 

 

Set cookie

 

Add a new custom header

Click Okay

NOTE: If a subsequent API request fails due to authentication, log in again to get a new token.
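The authentication flow above boils down to copying the nvp_sessionid value from the login response into a Cookie header on every later request. A minimal sketch (illustrative only; the session id value is made up):

```python
def auth_headers(session_id: str) -> dict:
    """Headers carrying the NSX session cookie for subsequent API calls."""
    return {
        "Content-Type": "application/json",
        "Cookie": f"nvp_sessionid={session_id}",
    }

# In practice the value is copied from the login response; this one is made up.
headers = auth_headers("8e1bfc7f2d")
assert headers["Cookie"] == "nvp_sessionid=8e1bfc7f2d"
```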

 

 

Create Logical Switch Port

Create a Logical Switch Port on the Web-Tier Logical Switch

 

 

Create Logical Switch Port

 

In the API Inspector search box, type "create logical" to filter the list of API calls.

Select Create a Logical Switch Port

 

 

Enter parameters

 

Enter the following:

Click on Show Formatted Request

 

 

View Formatted Request

 

The formatted request to create a logical switch port is displayed.

Copy the Request URL and Request BODY into the RESTClient as specified in the next step.

The Content-Type specified in the Request Headers will also be required in the next step.

 

 

Submit request via RESTClient

 

Request Body:

{

"display_name": "web-sv-03a",

"type": "LogicalSwitchPortConfig"

}

The Request Body can also be copied and pasted from the API Inspector

Click SEND

 

 

 

 

View Results

 

The request generates the following response from the NSX Controller.

 

 

Update Logical Switch Port Attachment

The final step is to attach web-sv-03a to the previously created Logical Switch Port.

 

 

 

Update Logical Switch Port Attachment

 

In the API Inspector search box, type "attachment" to filter the list of API calls.

Select Update Logical Switch Port Attachment

 

 

Enter parameters

 

Enter the following:

(Click on the Request Body drop down and choose VifAttachment)

Click Show Formatted Request

 

 

View Formatted Request

 

The formatted request to update a logical switch port attachment is displayed.

Copy the Request URL and Request Body into the RESTClient as specified in the next step.

 

 

Submit request via RESTClient

 

Click SEND

 

 

 

 

View Results

 

The request generates the following response from the NSX Controller.

 

 

Verify Results

 

Navigate within NSX Manager to view the results of the API calls. The third web server is reconnected to Web-Tier logical switch.

 

Module 7 – Troubleshooting

Port Connections


Being able to quickly monitor and troubleshoot networking issues in the logical and physical space is of paramount importance. Just as server virtualization led to new innovations in the management space, tools built on top of network virtualization platforms can provide similar benefits.

NSX Manager includes a Port Connections Tool to test connectivity between a pair of logical ports. It provides a visual depiction of all forwarding elements required in order to provide network forwarding between logical ports.


 

Important Note: Lab Fast Forward Script

Modules 2 and 3 must be completed before you continue with this module. We have created a Python script, "fast-forward-nsx.py", to complete those modules for you so that you can proceed with the lab. Please refer to the Lab Fast Forward Script section in Module 5 for detailed instructions on running the script.

 

 

Port Connections Tool

The Port Connections Tool accepts input for logical ports on a given logical switch and returns detailed information about the relevant logical and transport layer components. This allows the operator to quickly visualize the path between a given pair of logical ports and troubleshoot any problems that may be present. The sections of the tool are summarized below.

 

 

Access Port Connections

 

Mouse over Tools & Troubleshooting tab then select Port Connections

 

 

Verify logical port to logical port connectivity on the same logical switch

 

Select Web-Tier as the Logical switch.

Select web-sv-01a and web-sv-02a for Logical Switch Ports

Click Go

Click on All Sections: Expand

 

 

VM Section

 

View the VM section. This displays MAC and UUID information of the connected Web virtual machines.

 

 

Logical Switch Port

 

View the Logical Switch Port section. This shows information on the status of the logical switches and their associated UUIDs.

 

 

Logical Forwarding Elements

 

View Logical Forwarding Elements. This displays information about the logical switch and the logical router that the virtual machines are connected to.

 

 

Traceflow

 

In the Traceflow section, click the Ping button that corresponds to the source port from which the test packet will be injected. A result of Delivered indicates success. To see a more detailed trace of the packet's route, use the Logical Switch Port inspector page as explained in the next section.

NOTE: If the Destination Port is not a VIF Attachment, Traceflow will send a broadcast. The button text changes to Broadcast Ping to indicate this.

 

 

Transport Nodes

 

View Transport Nodes. This gives information on the hypervisors and the integration bridges on which these virtual machines are provisioned.

Review the information associated with the NSX vSwitch integration bridge on the ESXi host and the Open vSwitch integration bridge on the KVM host.

 

 

Transport Connectors

 

View Transport Connectors. This gives information on the tunneling protocol used, the Transport Zone UUID, and the IP address end points for the tunnels (the ESXi and KVM Hypervisors).

 

 

Tunnels

 

The Tunnels section is the most popular, presenting a graphical view of all the components necessary for end-to-end connectivity.

Green arrows indicate that unidirectional tunnels are up between the components (Hypervisors, Service Nodes, and Gateways).

 

Logical Port Monitoring


NSX provides the ability to monitor workloads attached to logical ports. A workload is a tenant virtual machine or a physical application attached to a logical port. Workloads and logical ports are logical entities in NSX, and this allows NSX to ensure that the network policies and statistics associated with a workload follow that workload when it is moved in the NSX domain.


 

Logical Port Statistics

Logical port counters provide statistics information about the actual workload or VLAN attached to a logical port. The statistics remain associated with the virtual machine, even when the virtual machine is powered down or migrated within the NSX domain.

 

 

Collect port statistics

 

From the API Inspector, select the Read Logical Switch Port Statistics API

 

 

Enter URI Tokens

 

Click Submit Request

 

 

View Response

 

The response gives the port statistics for the web-sv-01a virtual machine.

NOTE: If the port statistics are showing all values to be zero, start a ping to web-sv-01a (172.16.10.11) from the command line of the control center and redo the API request.
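The API Inspector steps above can also be scripted with any HTTP client. The sketch below is a minimal example: the controller address, session cookie, and UUIDs are placeholders for the values in your lab, and the NVP-style `/ws.v1/lswitch/<uuid>/lport/<uuid>/statistics` path should be confirmed against the URI shown in your API Inspector.

```python
# Sketch: read logical switch port statistics from the NSX controller API.
# The controller address, cookie, and UUIDs are placeholders -- substitute
# the values shown in your lab's API Inspector.
import json
import urllib.request

def stats_path(lswitch_uuid, lport_uuid):
    """Build the URI for the Read Logical Switch Port Statistics call."""
    return "/ws.v1/lswitch/%s/lport/%s/statistics" % (lswitch_uuid, lport_uuid)

def read_port_stats(controller, lswitch_uuid, lport_uuid, cookie):
    """Issue the GET; 'cookie' is the session cookie obtained at login."""
    req = urllib.request.Request(
        "https://%s%s" % (controller, stats_path(lswitch_uuid, lport_uuid)),
        headers={"Cookie": cookie, "Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The statistics follow the logical port, so the same call returns valid counters even after the virtual machine is migrated to another hypervisor in the NSX domain.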

 

Traceflow



 

Traceflow

 

Traceflow injects traffic into logical space, providing a mechanism to test connectivity between a pair of logical ports. The Port Connections Tool covered earlier includes a simplified Traceflow validation; the steps below use the full tool.

From the dashboard, click on the number next to Switch Ports.

 

 

Select the Logical Switch Port

 

Click on the web-sv-01a logical switch port

 

 

Inject Traceflow

 

In the Traceflow section click Inject.

 

 

Build a packet to be injected

 

In the Inject Packet window, choose the Source MAC and Destination MAC addresses.

Source MAC = web-sv-01a MAC address = 00:50:56:87:28:c3

Destination MAC = web-sv-02a MAC address = 52:54:00:08:03:71

You may provide a Payload for the packet. If a payload is not provided, a default payload is inserted.

Choose the Frame Size. Be sure to leave sufficient space for all headers and the payload you are sending. For this example, 1500 is used.

Specify the Timeout in milliseconds (min. 1000 ms; max. 10000 ms). This specifies how long Traceflow will wait to observe whether the packet has been delivered to all destinations in the Logical Switch.

Select Ethertype to be an IP packet.

Click Save
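The frame built in the steps above can also be reasoned about by hand. The sketch below constructs the same Ethernet frame directly: the MAC addresses are the lab values, and the zero padding stands in for the default payload NSX inserts when none is provided.

```python
# Sketch: hand-build the Ethernet frame Traceflow injects in this step.
# MAC addresses are the lab values; zero padding mimics the default-payload rule.

def mac_bytes(mac):
    """Convert a colon-separated MAC string to 6 raw bytes."""
    return bytes(int(octet, 16) for octet in mac.split(":"))

def build_frame(src_mac, dst_mac, ethertype=0x0800, frame_size=1500, payload=b""):
    """Ethernet header (dst, src, ethertype) + payload, zero-padded to frame_size."""
    header = mac_bytes(dst_mac) + mac_bytes(src_mac) + ethertype.to_bytes(2, "big")
    body = payload.ljust(frame_size - len(header), b"\x00")
    return header + body

frame = build_frame("00:50:56:87:28:c3", "52:54:00:08:03:71")
assert len(frame) == 1500           # matches the Frame Size chosen above
assert frame[12:14] == b"\x08\x00"  # Ethertype 0x0800 = IP
```

The 14-byte Ethernet header is why the chosen Frame Size must leave room for headers plus payload, as noted in the steps above.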

 

 

Interpret Traceflow Results

 

If the test produces observations, the results display a summary row and a set of per-hop observation rows.

Summary results

The summary row shows the following information:

Response: OK if the packet successfully reached its destination; Error otherwise.

Delivered: Delivered or Not Delivered. A status of Delivered indicates the packet was delivered to all destinations successfully.

Source and Destination MAC address

Frame Size: Total size of test packet, including headers

Ethertype: Type of test packet sent.

Observations: How many times the packet was observed being forwarded or delivered.

Forwards: How many times the packet was forwarded.

Timeout: User-specified length of this test in milliseconds. Traceflow reports all forwards and deliveries that occur within this period. Do not confuse this with Timestamp Delta (ms), described below.

Time Stamp: Time when this test started.

Per-hop results

For a given test, the Traceflow results also display one row per hop in the packet’s traversal of the Logical Switch. Each row shows:

Type: Whether this hop represented forwarding or delivery.

Source and Destination Transport Node: These are links to the Transport Node inspector page for the starting and ending node of this hop.

Timestamp Delta (ms): Elapsed time in milliseconds from start of test until the NSX Controller received the observation of this hop.

Remote IP Address: Shown only for forwards, this is the IP address to which the packet was forwarded.

Connector: Shown only for forwards, this is the Transport Zone on which the packet was forwarded.

Logical Port: Shown only for deliveries, this is the logical port to which the packet was delivered.
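A Traceflow result can also be summarized programmatically from its per-hop rows. The sketch below counts forwards and deliveries over a hypothetical list of observation rows shaped like the fields described above; the field names are illustrative stand-ins, not the exact API schema.

```python
# Sketch: summarize per-hop Traceflow observations. The rows and field
# names are illustrative stand-ins for the fields the UI displays.

def summarize(observations):
    """Count forwards and deliveries and report whether anything was delivered."""
    forwards = sum(1 for o in observations if o["type"] == "forward")
    deliveries = sum(1 for o in observations if o["type"] == "delivery")
    return {"observations": len(observations),
            "forwards": forwards,
            "deliveries": deliveries,
            "delivered": deliveries > 0}

# Hypothetical two-hop result: forwarded to the remote transport node,
# then delivered to the destination logical port.
hops = [
    {"type": "forward", "remote_ip": "192.168.110.52", "delta_ms": 12},
    {"type": "delivery", "logical_port": "web-sv-02a", "delta_ms": 19},
]
print(summarize(hops))
# {'observations': 2, 'forwards': 1, 'deliveries': 1, 'delivered': True}
```

This mirrors the Observations and Forwards counters in the summary row: every hop is either a forward or a delivery, and a successful test ends in at least one delivery.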

 

Conclusion


Congratulations on completing the NSX lab and joining the network virtualization revolution! In the lab you demonstrated the power and flexibility of NSX by:

For more information, visit the VMware booth in the Solutions Exchange. Also check out the following sessions:


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-SDC-1319

Version: 20140321-161512