VMware Hands-on Labs - HOL-SDC-1412


Lab Overview

HOL-SDC-1412 - IT Outcomes – Data Center Virtualization and Standardization


Data Center Virtualization and Standardization

This lab demonstrates how to use vCenter Operations (vCOPS), Virtual SAN (VSAN), NSX (the network virtualization platform) and other features to set up, optimize, plan and expand your infrastructure to meet the needs of your applications.

The intent of this lab is not to go into all the details of a particular technology, but to show how these technologies work together to provide standardization and consolidation to your datacenter infrastructure.

This lab is pre-built with all the required configuration for vCenter, VSAN, NSX and vCOPS. We will first review these configurations and also access a sample 3-tier application that has been deployed on this infrastructure. We will then expand the infrastructure capacity by adding a new ESXi host to the cluster. In doing so you will see how easy it is to expand the Virtual SAN storage and extend Network Virtualization. Once this is done we will deploy more workloads in the 3-tier application to make use of this new additional infrastructure capacity.

The lab will also show how to use vCenter Operations (vCOPS) to check if there is a Capacity Risk in the environment, as well as provide the ability to generate reports showing current and future trends.

This lab is focused on two main features of the vSphere Software Defined Datacenter: Virtual SAN, and Network and Security Virtualization with VMware NSX. The lab is broken up into seven modules, which can be taken in any order.

The Modules are:

Module 1 - Lab Overview (15 minutes)

Module 2 - Overview of vCenter and VSAN Configuration (15 minutes)

Module 3 - Overview of NSX Configuration (15 minutes)

Module 4 - Capacity Reporting and vCenter Operations (15 minutes)

Module 5 - Increase Compute Cluster Capacity (15 minutes)

Module 6 - Expand 3-Tier Application Capacity (15 minutes)

Module 7 - Lab Conclusion

The average time taken to complete each module is listed next to it; depending on your experience with a particular technology, your time may vary. You have 90 minutes per lab session, so please plan accordingly.


 

Wait for Ready Status

 

In the next few steps you will be asked to access the 3-Tier Web Application; however, before you access it, please make sure the Lab Status is 'Ready'.

 

 

ABC Medical Corporation Profile

For lab purposes we have created a company profile for ABC Medical Corporation.

It has a small datacenter that has a production deployment of the ABC Medical Point of Sale (POS) application. This is a classic 3-tier application with 2 web servers, an application server and a database server.

The company has a requirement to set up an infrastructure that is scalable from a storage and networking perspective, and has therefore chosen to deploy the VSAN and NSX technologies from VMware.

They are now looking to expand the compute, storage and networking capacity of their datacenter by adding a new ESXi hypervisor to their environment. Once they add this new capacity they will deploy more virtual machines to expand the point-of-sale application.

 

Module 1 - Lab Overview (15 Min)

Physical Topology


In this section we will review the physical topology of the lab.


 

Hypervisors, Storage, Network Connectivity and the 3-Tier Application.

 

Please refer to the diagram below.

Hypervisors:

The hypervisors shown in red are esx-01a.corp.local and esx-02a.corp.local, and they are part of the Management and Edge Cluster. This cluster will host the NSX Controller cluster and other NSX Edge services.

The hypervisors shown in blue are esx-03a.corp.local, esx-04a.corp.local, esx-05a.corp.local and esx-06a.corp.local, and they are part of the Compute Cluster. This cluster will host the 3-tier application virtual machines.

Note: The hypervisor esx-06a.corp.local is not part of the Compute Cluster initially and will be added to that cluster during Module 5 in the lab.

Physical Networking:

The Management Network (192.168.110.0/24) is a common network across all Hypervisors and also connects to vCenter, NSX Manager and ControlCenter.

The vMotion Network (10.10.30.0/24) is used for vMotion traffic.

The VXLAN Network (192.168.120.0/24) is used to carry VXLAN traffic between all Hypervisors.

HQ-Uplink Network (192.168.130.0/24) is used to connect this datacenter to the corporate network. The ControlCenter will have access to the 3-tier application via this network.

Storage:

The Management and Edge Cluster hypervisors have NFS shared storage via stgb-l-01a storage appliance. This appliance is connected via the storage network (10.10.20.0/24)

The Compute Cluster hypervisors have been configured for shared VSAN storage that is accessible via the VSAN storage network (10.20.20.0/24)

vCenter and NSX Manager:

vCenter is pre-configured and accessible on the Management Network at 192.168.110.22.

NSX Manager is pre-configured and accessible on the Management Network at 192.168.110.42.

vCOPS is pre-configured to connect to vCenter on the Management Network, at 192.168.110.70.
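
A quick way to sanity-check that these management endpoints are up from the ControlCenter is a short reachability test. The Python sketch below is purely illustrative and is not part of the lab; the IP addresses come from the topology above, and port 443 is assumed for the web interfaces.

    import socket

    # Management-plane endpoints from the lab topology (port 443 assumed for the web UIs)
    endpoints = {
        "vCenter":     ("192.168.110.22", 443),
        "NSX Manager": ("192.168.110.42", 443),
        "vCOPS":       ("192.168.110.70", 443),
    }

    for name, (ip, port) in endpoints.items():
        try:
            # Attempt a simple TCP connection to confirm the service is reachable
            with socket.create_connection((ip, port), timeout=5):
                print(f"{name} ({ip}:{port}) is reachable")
        except OSError as err:
            print(f"{name} ({ip}:{port}) is NOT reachable: {err}")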

Application Virtual Machines:

In this lab we are using a simple 3-tier application which is hosted on the Compute Cluster and is using the shared VSAN datastore.

There are 2 web servers (web-sv-01a and web-sv-02a), one application server (app-sv-01a) and one database server (db-sv-01a).

 

Application Topology


In this section we will review the 3-tier application topology.


 

3-Tier Application

 

This lab contains a simple 3 tier application.

There are 2 Web servers called web-sv-01a and web-sv-02a, which are connected to the NSX logical switch called "Web-Tier-01".

There is one Application server called app-sv-01a, which is connected to the NSX logical switch called "App-Tier-01".

There is one Database server called db-sv-01a, which is connected to the NSX logical switch called "DB-Tier-01".

The storage for all the virtual machines is distributed across the vSAN storage pool in the compute cluster (esx-03a, esx-04a and esx-05a)

These 3 tiers can communicate with each other via the NSX virtual distributed logical router which is a kernel module installed in each hypervisor. The control plane of this distributed router is a virtual appliance called "vDR-01" (which is deployed on the Mgmt and Edge Cluster). The distributed router communicates via OSPF with the NSX Edge services gateway called "Edge-gw-01" (which is also a virtual appliance deployed on the Mgmt and Edge Cluster).

The NSX distributed firewall (DFW) is a stateful firewall kernel module which enforces its rules at the virtual-nic level. In this lab the DFW is used to filter communication to the web servers from the outside world, as well as communication between the web servers, the application server and the database server.

The Edge services gateway "Edge-gw-01" also runs the NSX load balancer which is used to load balance HTTPS requests across the web servers.

The ControlCenter can then access this 3-tier application via the load-balancer virtual IP (192.168.130.3) hosted on the NSX Edge services gateway (Edge-gw-01).

 

Access the 3-Tier Application


In this section we will access the 3-Tier application that has already been built in this lab. We have built this simple application just for demo purposes.


 

Check if the Lab is in "Ready" state.

 

Just a reminder again before you proceed. Please check if the lab status is Ready.

It usually takes about 5 minutes from the time the lab is launched.

 

 

ABC Medical Point of Sale Application

 

The ABC Medical Point of Sale Application can be accessed via the Firefox web browser.

Click and open the Firefox web browser from the ControlCenter task bar.

Click on the bookmark called "ABC Medical POS App"

This will launch a connection to one of the web servers and you should be able to see a page similar to the one shown below.

Try refreshing the browser page a couple of times and you will see that the request is being load balanced across both web servers, web-sv-01a and web-sv-02a.

NOTE: If the ABC Medical PoS App link brings up an error message saying "Connection was Interrupted" or "internal server error", please click the link one more time. If needed, close the browser tab, re-open it, and then click the ABC Medical PoS App bookmark again.

The ControlCenter accesses the application via the Load Balancer virtual IP (on port 443) that is configured on the NSX Edge Services Gateway.

The Load Balancer distributes the requests across both web servers on port 443.

The web servers communicate with the application server on port 8443.

The application server communicates with the database server on port 3306.
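
Rather than refreshing the browser by hand, you could also script repeated HTTPS requests against the virtual IP and watch the responses alternate between the two web servers. The sketch below is an optional illustration, not a lab step: it assumes the Python requests library is available on the ControlCenter and that the demo page body contains the name of the web server that answered.

    import requests
    import urllib3

    # The lab uses a self-signed certificate, so certificate verification is disabled here
    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    VIP = "https://192.168.130.3"   # NSX Edge load balancer virtual IP (port 443)

    # Send a handful of requests and report which backend web server served each one
    for i in range(6):
        body = requests.get(VIP, verify=False, timeout=10).text
        # Assumption: the demo page text includes the serving host's name
        backend = next((n for n in ("web-sv-01a", "web-sv-02a") if n in body), "unknown")
        print(f"request {i + 1}: served by {backend}")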

 

 

Module 2 - Overview of vCenter and VSAN Configuration (15 Min)

vCenter and DataCenter Configuration


In this section we will review the vCenter objects and how the Datacenter is configured.


 

Locate Firefox

 

From the desktop locate the Firefox browser icon and double click on it.

 

 

Login to vSphere Web Client

 

Open up Firefox

1. Click on the vSphere Web Client bookmark

2. Username: administrator@vsphere.local password: VMware1!

3. Click Login

 

 

 

Cluster Configuration

 

From the vSphere Web Client:

  1. If necessary, click Home tab
  2. Then click Hosts and Clusters

 

 

Review Clusters

 

Expand out Datacenter Site A. There are two clusters in this lab configuration.

Compute Cluster - this cluster will contain the applications as well as the VSAN cluster storage.

Mgmt-Edge Cluster - this is the management cluster for NSX Controllers and NSX Edge services. Please note that in production environments a cluster like this one can be used to deploy various common infrastructure services.

 

 

Compute Cluster Configuration

 

Expand the Compute Cluster. Currently there are three hosts within the cluster.

esx-03a , esx-04a , esx-05a.

This cluster will provide compute resources (CPU, memory, storage and network) to the application. This cluster will also be expanded to provide additional compute capacity in later sections.
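
Everything in this module is done through the vSphere Web Client, but the same inventory can also be read programmatically. The sketch below uses the pyVmomi SDK (an assumption; it is not installed as part of this lab) to connect to the lab vCenter and list the clusters and their hosts.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to the lab vCenter; SSL verification is relaxed for the self-signed certificate
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="192.168.110.22",
                      user="administrator@vsphere.local",
                      pwd="VMware1!",
                      sslContext=ctx)
    content = si.RetrieveContent()

    # Build a view of all clusters and print each cluster with the hosts it contains
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        print(cluster.name)
        for host in cluster.host:
            print("  ", host.name)

    view.Destroy()
    Disconnect(si)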

 

 

Compute Cluster Virtual Machines

 

The Compute Cluster currently has four Virtual Machines. In a later section of the lab you will add an additional Virtual Machine to this cluster. For now, we just want to review the existing configuration. The four virtual machines are:

app-sv-01a

db-sv-01a

web-sv-01a

web-sv-02a

 

 

Management Cluster Configuration

 

Expand out Mgmt-Edge-Cluster. There are two hosts within this cluster.

esx-01a, esx-02a

The management cluster will provide network services and virtual infrastructure for network services such as Firewall, Load Balancing and other Edge services.

 

 

Management Cluster Virtual Machines

 

The Mgmt-Edge-Cluster currently has five Virtual Machines. The virtual machines consist of NSX Controllers and Edge Gateway services for the virtualized network. The current Virtual machine inventory is:

3 x NSX Controllers - the NSX Controllers are the control point for all logical switches within a network and maintain information about virtual machines, hosts, logical switches, and VXLANs.

Edge-gw-01-0 - the edge provides network edge security and gateway services to isolate a virtualized network.

vDR-01-0 - the control plane virtual appliance for the NSX distributed logical router.

 

 

vCenter Distributed Switch Overview


In this section we will review the Virtual Distributed Switch configuration.


 

Networking Section of vSphere Web Client

 

In this section you will review the Virtual Distributed Switch configuration. The Virtual Distributed switch provides a framework for advanced networking and ease of management for our virtual networking infrastructure.

You should still be logged into the vSphere Web Client.

  1. Click the Networking section

 

 

Compute-VDS

 

  1. Expand the Datacenter Site A section
  2. Click and highlight Compute-VDS Distributed Switch.
  3. Click Summary.

There are three hosts and a number of VMs, which we will review later.

 

 

View Compute-VDS Topology

 

Now let's review the Compute-VDS topology.

  1. Click Manage Tab
  2. Click Settings
  3. Click and Highlight Topology

Notice the Distributed Switch port groups connected to dvUplink1.

There are various port-groups defined:

Compute-VDS-Management: This is used for management traffic.

Compute-VDS-vMotion: This is used for vMotion traffic.

Compute-VDS-VSAN: This is used for VSAN traffic.

The other port-groups have been dynamically created by NSX and are used for VXLAN traffic on network 192.168.120.0/24

 

 

Mgmt-Edge-VDS

 

Highlight Mgmt-Edge-VDS and click Summary.

Notice there are ten networks and five virtual machines currently associated with this Distributed Switch.

 

 

Mgmt-Edge-VDS Topology

 

Now let's review the Mgmt-Edge-VDS topology.

Make sure Mgmt-Edge-VDS is highlighted.

  1. Click Manage.
  2. Click Settings.
  3. Click Topology.

There are various port-groups defined:

Mgmt-Edge-VDS-Management: This is used for management traffic.

Mgmt-Edge-VDS-vMotion: This is used for vMotion traffic.

Mgmt-Edge-VDS-Storage: This is used for NFS storage traffic.

Mgmt-Edge-VDS-HQ-Uplink: This is used to connect to the corporate network (or external network).

The other port-groups have been dynamically created by NSX and are used for VXLAN traffic on network 192.168.120.0/24

 

Virtual SAN Overview


In this section we will review VMware's Virtual SAN (VSAN) technology. VSAN is used in this lab to present Software Defined Storage to our 3-Tier Web Application. ABC Medical will deploy application infrastructure into their Compute Cluster and will expand the VSAN storage by adding an ESXi host.


 

What is VSAN

 

Virtual SAN is a new software-defined storage solution that is fully integrated with vSphere. Virtual SAN aggregates locally attached disks in a vSphere cluster to create a storage solution that can be rapidly provisioned from VMware vCenter during virtual machine provisioning operations. It is an example of a hypervisor-converged platform, that is, a solution in which storage and compute for virtual machines are combined into a single device, with storage being provided within the hypervisor itself as opposed to via a storage virtual machine running alongside other virtual machines.

Virtual SAN is an object-based storage system designed to provide virtual machine-centric storage services and capabilities through a Storage Policy-Based Management (SPBM) platform. SPBM and virtual machine storage policies are solutions designed to simplify virtual machine storage placement decisions for vSphere administrators.

Virtual SAN is fully integrated with core vSphere enterprise features such as VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler (vSphere DRS), and VMware vSphere vMotion®. Its goal is to provide both high availability and scale-out storage functionality. It can also be considered in the context of quality of service (QoS) because virtual machine storage policies can be created to define the levels of performance and availability required on a per-virtual machine basis.

Virtual SAN can scale out to 32 hosts and has achieved more than 2 million IOPS in benchmark testing of a single Virtual SAN cluster.

 

 

Customer Benefits

 

VMware recognizes the significant cost of storage in many virtualization projects. Many projects stall or are canceled because the storage required to meet the project's requirements simply becomes too expensive.

Using a hybrid approach of SSD for performance and HDD for capacity, VSAN is aimed at re-enabling projects that require a less expensive storage solution.

 

 

Use Cases

 

 

VSAN Requirements


The following section will review the hardware and software requirements necessary to create a Virtual SAN cluster.


 

vCenter Server

Virtual SAN requires a vCenter Server running 5.5 Update 1 or higher. Virtual SAN can be managed by both the Windows version of vCenter Server and the vCenter Server Appliance (VCSA). Virtual SAN is configured and monitored via the vSphere Web Client and this also needs to be version 5.5 Update 1 or higher.

 

 

vSphere

Virtual SAN requires at least 3 vSphere hosts (where each host has local storage) in order to form a supported Virtual SAN cluster. This is to allow the cluster to meet the minimum availability requirements of tolerating at least one host failure. The vSphere hosts must be running vSphere version 5.5 Update 1 at a minimum. With fewer hosts there is a risk to the availability of virtual machines if a single host goes down. The maximum number of hosts supported is 32, which is the largest supported vSphere Cluster.

Each vSphere host in the cluster that contributes local storage to Virtual SAN must have at least one hard disk drive (HDD) and at least one solid state disk drive (SSD).
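
If you wanted to verify these prerequisites programmatically rather than by eye, a rough check could count the hosts in the cluster and look for at least one SSD and one HDD among each host's local disks. The pyVmomi sketch below is an illustration only: the connection and cluster lookup are assumed to be done as in the inventory sketch in Module 2, and the check also counts the boot disk, so treat the output as approximate.

    from pyVmomi import vim

    def check_vsan_prereqs(cluster):
        """Rough check of the VSAN minimums: 3+ hosts, each with at least one SSD and one HDD."""
        hosts = cluster.host
        print(f"{cluster.name}: {len(hosts)} hosts (minimum supported is 3, maximum is 32)")
        for host in hosts:
            # Collect the host's SCSI disks and split them by the SSD flag
            disks = [lun for lun in host.config.storageDevice.scsiLun
                     if isinstance(lun, vim.host.ScsiDisk)]
            ssds = [d for d in disks if getattr(d, "ssd", False)]
            hdds = [d for d in disks if not getattr(d, "ssd", False)]
            ok = bool(ssds) and bool(hdds)
            print(f"  {host.name}: {len(ssds)} SSD(s), {len(hdds)} HDD(s) -> "
                  f"{'meets' if ok else 'does not meet'} the disk requirement")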

 

 

DISK & Network

The VMkernel port is labeled Virtual SAN. This port is used for inter-cluster node communication and also for reads and writes when one of the vSphere hosts in the cluster owns a particular virtual machine but the actual data blocks making up the virtual machine files are located on a different vSphere host in the cluster. In this case, I/O will need to traverse the network configured between the hosts in the cluster.

 

VSAN Lab Configuration


This lab (HOL-SDC-1412) has a VSAN cluster already configured. Later in the lab you will add a host to expand the VSAN cluster's compute and storage. However, for now we just want to review the current VSAN configuration and get an idea of what has been done.


 

Login to the vSphere web client

 

If you are not already logged into the vSphere Web Client then open up Firefox and launch the vSphere Web Client.

Login with:

username: administrator@vsphere.local

password: VMware1!

 

 

Click on Hosts and Clusters

 

Click on Hosts and Clusters.

 

 

Compute Cluster VSAN General Information

 

Highlight Compute Cluster

Click on Manage Tab

Click on General under Virtual SAN

As we can see, Virtual SAN is Turned ON and Add Disks to Storage is set to Manual. When set to Manual, VSAN will prompt you to claim disks for use when adding hosts to the VSAN cluster.

Also notice that there are currently 3 Hosts and a Total Capacity of 23.25 GB. The network status is Normal. The network status is normal because all of the hosts are configured and communicating properly over the VSAN VMkernel network that was initially created for VSAN.

 

 

Review Disk Configuration of VSAN Compute Cluster

 

Click on Disk Management under Virtual SAN.

Notice that there are 3 Hosts all contributing storage to the VSAN cluster. Each host has a Disk Group associated with it. A Disk Group is basically a combination of an SSD and HDDs. Each Disk Group has an SSD in front of the disks for caching (read and write caching).

 

 

View the existing vsanDatastore in Web Client

 

  1. Click on the Storage tab in the Web Client.
  2. Expand out Datacenter Site A
  3. Click and highlight the vsanDatastore.

 

 

Details of vsanDatastore

 

Make sure the Summary tab is selected. In the Details pane of the vsanDatastore we can see the Location, Type, Hosts and Virtual Machines. We can also see the URL, which is what would show up as the link to the Storage Provider for policy-based management.
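
The same datastore details can also be pulled through the API. The pyVmomi sketch below is a hypothetical helper (assuming a connection like the one shown after the Compute Cluster Configuration step) that prints the vsanDatastore type, capacity, free space and the objects using it.

    from pyVmomi import vim

    def show_vsan_datastore(content, name="vsanDatastore"):
        """Print capacity, free space and connected hosts/VMs for the named datastore."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        try:
            ds = next(d for d in view.view if d.name == name)
        finally:
            view.Destroy()

        gib = 1024 ** 3
        print(f"{ds.name} ({ds.summary.type})")
        print(f"  capacity: {ds.summary.capacity / gib:.2f} GB")
        print(f"  free:     {ds.summary.freeSpace / gib:.2f} GB")
        print(f"  hosts:    {len(ds.host)}")
        for vm in ds.vm:
            print(f"  vm: {vm.name}")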

 

Module 3 - Overview of NSX Configuration (15 Min)

What is VMware NSX


VMware NSX is the network virtualization platform that delivers the operational model of a virtual machine for the network. Just as server virtualization provides flexible control of virtual machines running on a pool of server hardware, network virtualization with NSX provides a centralized API to provision and configure many isolated logical networks that run on a single physical network.

Logical networks decouple virtual machine connectivity and network services from the physical network, giving cloud providers and enterprises the flexibility to place or migrate virtual machines anywhere in the data center while still supporting layer-2 / layer-3 connectivity and layer 4-7 network services.


 

NSX Components

 

In this lab we will not go into the details of NSX components but here is a quick overview.

The NSX Controller Cluster is a distributed and scale out system that supports VXLAN and Distributed Routing Functions.

The NSX Manager is the centralized network management component of NSX, and manages the network and network services across the vCenter Server environment.

The NSX edge provides firewall, routing, and other gateway services like load balancing, NAT and DHCP to the logical networks. There are two types of NSX edge deployment possible. You can install NSX edge either as a logical router or a services gateway.

There are four kernel modules deployed on each vSphere host as part of the NSX configuration. These hypervisor-level modules handle the data path functionality for the following functions:

  1. Port Security
  2. VXLAN
  3. Distributed firewall (DFW)
  4. Distributed Routing (DR)

 

 

Explore NSX configuration


In this section we will explore all the NSX configuration that has already been setup.


 

Launch Firefox Browser

 

Launch the Firefox web browser from the desktop.

 

 

Login to vSphere Web Client

 

Click the vSphere Web Client bookmark and then login with the following credentials:

Username: administrator@vsphere.local

Password: VMware1!

 

 

 

Access the NSX configuration

 

Access the NSX configuration page by clicking Networking & Security.

 

 

NSX Managers and Controllers

 

Click the Installation section. Under the Management tab you should see the configured NSX manager and the Controller Cluster nodes that have been deployed. Note that the controllers are in a healthy state and the status is Normal.

 

 

Host Preparation

 

Access the Host Preparation tab and note that the Management and Edge Cluster and the Compute Cluster have been configured for network virtualization and the status of the modules is Enabled.

Note: If for some reason there is a Red Alert sign, please click on the Resolve link that will appear next to it.

 

 

Logical Network Preparation

 

Access the Logical Network Preparation tab and click the VXLAN Transport section.

Review that the VMKNic IP addresses for the VXLAN Tunnel End-Point (VTEP) have been configured.

 

 

VXLAN Segment ID

 

Under the Logical Network Preparation tab click Segment ID. Review the VXLAN Segment ID's that have been configured for use in this lab.

Also note that there are no Multicast addresses assigned; this is because VXLAN has been configured to operate in unicast mode.

 

 

Transport Zone

 

The transport zone allows the NSX Controller Cluster to understand what transport connectors can communicate directly when implementing a logical switch. For example, with an overlay logical switch, the Controller Cluster must know which hypervisor IP addresses can be used to create direct tunnel connections.

In the Transport Zone section you will see that a default zone called "TZ-Default" has been configured with a Control Plane Mode set to Unicast.

In the next step we can review the configuration by first selecting the transport zone "TZ-Default", then clicking Actions and then Edit Settings.

 

 

Transport Zone Settings

 

You will see that the "TZ-Default" transport zone has been configured for Unicast control plane and both the Mgmt/Edge Cluster and Compute Cluster have been configured to be a part of it. Click Cancel to exit.

Note: In production environments if you create a new vCenter Cluster then that cluster will have to be added to the Transport Zone in this view.

 

 

Logical Switches

 

Click Logical Switches to view the configuration. Review that 4 logical switches have already been created.

The Web-Tier-01 logical switch is using the VXLAN Segment ID 5000. Similarly the App-Tier-01 logical switch is using 5001 and so on.

Double Click the Web-Tier-01 logical switch to view its configuration in the next step.

 

 

Web-Tier-01 logical switch configuration

 

Click Related Objects and then Virtual Machines tab. Review that the web servers web-sv-01a and web-sv-02a have already been configured to connect to this logical switch.

 

 

Navigate Back to Networking & Security Home Page

 

Click on the Networking & Security button to go back to the main Networking & Security page.

 

 

Logical Distributed Router (Control Plane Appliance)

 

Access NSX Edges section. You will see that there are 2 appliances deployed. One is a Logical Router called vDR-01 and the other is the NSX Edge called Edge-gw-01.

The NSX distributed router is a kernel module deployed on each hypervisor that is part of the transport zone, while its control plane is a virtual appliance (vDR-01) that is deployed on the Mgmt and Edge Cluster.

Note: To review the configuration of vDR-01, select it and double click.

 

 

vDR-01 Configuration

 

Click Manage, then Settings and Configuration.

Review that it has been deployed on esx-01a.corp.local (Mgmt and Edge Cluster) and is using the NFS shared storage. Since we are not using HA in this lab only one instance called "vDR-01-0" has been deployed.

 

 

vDR-01 Interfaces

 

Click Manage, then Settings and Interfaces.

Review that the distributed router is configured to connect to Web-Tier-01, App-Tier-01 and DB-Tier-01 logical switches. It is also connected to the Transit-Tier-01 logical switch which connects to the NSX Edge "Edge-gw-01".

 

 

vDR-01 Routing

 

Review the routing configuration in the screens below.

Click Manage, then Routing.

In the Global Configuration section you can see that the default gateway has been set to use the Transit-Uplink-01 interface, which maps to the Transit-Tier-01 logical switch as seen in the Interfaces section. Also, the Router ID is configured to use the Transit-Uplink-01 address of 192.168.10.2.

In the OSPF section you will see that both the OSPF protocol address and the forwarding address have been configured, and that the Transit-Uplink-01 interface is set to use Area-ID 10.

In the Route Redistribution section, you can see that route redistribution has been enabled for the OSPF protocol and that the router has been configured to redistribute all the connected routes into OSPF.

Note: Click on the Networking and Security tab to exit this view.

 

 

NSX Edge Services Gateway

 

In this section we will review the NSX Edge "Edge-gw-01" configuration. Double Click on the NSX Edge to review.

 

 

Edge-gw-01 Configuration

 

Click Manage, then Settings and Configuration. Review that NSX Edge has been deployed on esx-02a.corp.local and is using the shared NFS storage. Also note that since we are not using HA in this lab, only one instance of the Edge "Edge-gw-01-0" has been deployed.

 

 

Edge-gw-01 Interfaces

 

Click Manage, then Settings and Interfaces. Review that there are 2 interfaces attached. One connects to the Transit-Tier-01 logical switch (that connects to the Distributed Router) and the other connects to the Mgmt-Edge-VDS-HQ-Uplink port-group with the IP address 192.168.130.3.

 

 

Edge-gw-01 Load Balancer

 

In this section we will review all the load balancer configuration.

Click Manage, then Load Balancer and Global Configuration. Note that the load balancing capability has been enabled.

Click Application Profiles. Note that a "Web-Basic-Profile-01" application profile has been created for HTTPS requests.

Click Pools. Note that "Web-Pool-01" has been configured to include the web servers in the 3-tier application. If you click on Pool Statistics you will notice that only web-sv-01a and web-sv-02a are active.

Click Virtual Servers. Note that the load balancer virtual IP of 192.168.130.3 has been configured to use the Web-Basic-Profile-01 and Web-Pool-01 server pool.

 

 

Edge-gw-01 Routing

 

In this section we will review the routing configuration on the Edge-gw-01 services gateway.

Click Manage, then Routing and Global Configuration: Note that a default gateway of 192.168.130.1 is configured (which is on the vPOD router). Also note that OSPF is enabled with a Router ID of 192.168.130.3.

Click on the OSPF tab: Note that OSPF is configured on the Transit-gw-01 interface that is mapped to the Transit-Tier-01 logical switch that connects to vDR-01 distributed router.

Note: to exit this configuration view click on the Network and Security tab as shown.

 

 

Distributed Firewall (DFW)

 

The Distributed Firewall is a kernel module that is enabled on all the hypervisors in the transport zone. It provides a zero-trust and micro-segmentation security model wherein all workloads are connected to the distributed firewall at the virtual nic level.

Click on Firewall section and then Configuration and General.

Review the Firewall rules that are configured.

Only HTTPS traffic is allowed to access the Web-Servers from any source IP.

The Web-Servers cannot communicate with each other on any port (micro-segmentation).

The Web-Servers are allowed to access the Application-Servers on port 8443.

The Application-Servers are allowed to access the Database-Servers on MySQL port 3306.

Also note that the last rule (rule 7 in this setup) is configured to block any traffic.
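
For reference, the same rule set can be read from NSX Manager over its REST API. The sketch below is an assumption-heavy illustration, not a lab step: it assumes the NSX for vSphere API exposes the distributed firewall configuration at /api/4.0/firewall/globalroot-0/config and that the admin credentials match the lab environment.

    import requests
    import urllib3
    import xml.etree.ElementTree as ET

    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    NSX_MANAGER = "https://192.168.110.42"
    # Assumption: NSX for vSphere exposes the DFW configuration at this path
    URL = f"{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config"

    # Assumption: admin credentials for NSX Manager; adjust for your environment
    resp = requests.get(URL, auth=("admin", "VMware1!"), verify=False, timeout=15)
    resp.raise_for_status()

    # Walk the returned XML and print the name (if any) of each firewall rule element
    root = ET.fromstring(resp.text)
    for rule in root.iter("rule"):
        print(rule.findtext("name", default="(unnamed rule)"))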

 

 

Web-Security-Group

 

The web-security-group is a dynamic security group containing only web servers.

Click on the group to view its current members.

To Close, Click on the x button on the pop-up window.

 

 

Application-Security-Group

 

The app-security-group is a dynamic security group containing only application servers.

Click on the group to view its current members.

To Close, Click on the x button on the pop-up window.

 

 

Database-Security-Group

 

The database-security-group is a dynamic security group containing only database servers.

Click on the group to view its current members.

To Close, Click on the x button on the pop-up window.

 

Module 4 - vCenter Operations Capacity Reporting (15 Min)

vCenter Operations (vCOPS) Reporting Overview


In this section you will learn the basics of vCenter Operations' reporting capabilities. Later in the module you will look at actual reports generated from the HOL-SDC-1412 lab environment showing capacity and risk issues.


 

Locate Firefox on the Desktop

 

Launch the Firefox Browser.

 

 

Open vCOPS (vCenter Operations)

 

Click on the vCOPS Bookmark, this will open the vCenter Operations Manager page.

Login with the following credentials:

Username: admin

Password: VMware1!

 

 

Report Tab

 

From the tabs in the upper portion of the screen, click on Reports.

The reports section contains a number of reports that can be run and saved as a PDF or CSV file. The reports can also be scheduled and emailed to an individual or group.

 

 

Generate Virtual Machine Capacity Overview Report

 

Click on the Run Now button to start the generation of the report.

 

 

Report Queued

 

Observe that the report is now Queued in the Status section.

 

 

Open PDF

 

Click the PDF link to open up the report as a PDF.

 

 

Capacity Report Review

 

Scroll down and find the Trend and Forecast chart within the PDF.

From the chart we can see that the original Remaining VMs value is higher than the forecast Remaining VMs value. So it would appear that the virtual infrastructure could start to deplete resources quickly, especially if this environment were to expand.

Note: Your chart may differ from the chart above and the data could reflect a different scenario since your lab is fresh and there is not much historical data. The purpose of the lab is to demonstrate the vCenter Operations reporting capabilities.

Close the PDF file and return to the vCenter Operations interface.

 

 

Identify Cluster Capacity Issues


In this section we will review vCenter Operations reports which show that our virtual infrastructure may run out of compute resources. This will help in determining when additional compute resources are needed and how much. In this lab you will not run the reports but just review existing report information. vCenter Operations Manager provides Capacity Management features such as:


 

Review Compute Cluster Planning View

 

We are going to focus on the Datacenter for this exercise.

1. Expand out World, vCSA-01a

2. Click and highlight Datacenter Site A

3. Click on Planning

Make sure the Summary button is highlighted.

 

 

 

Review Planning Report - Virtual Machine Capacity

 

NOTE: The data that you see in the vCOPS reports could differ from the screenshots because vCOPS will be a fresh deployment when you start the lab. The data shown above and in the lab reports is mature data that was part of a historical calculation.

Scroll down a little to see the "Time Remaining and Trend Information" columns. There should be some compute resources whose Time Remaining is getting low.

In this module we will also be reviewing some reports that were generated by vCenter Operations Manager. The module will focus on reports that show Capacity and Planning details within the vSphere environment. The reports include information such as when a compute resource is getting low, how much time remains before a compute resource is exhausted, and what our capacity would look like if we were to add additional resources to the cluster. These reports have been pre-generated for this lab, so they will just be reviewed for the purposes of the lab. If you want to know more about vCenter Operations Manager, there is a dedicated Hands-on Lab for that product.

If you click on the screenshot you will notice that the vSphere environment is running out of CPU, Memory and Disk Space. There are only 31 days remaining before Host CPU is exhausted. There are only 4 days before Memory is exhausted. Disk Space has already been exhausted inside the vSphere environment. It is important to remediate these issues as soon as possible. A Software Defined Datacenter is flexible and agile: by standardizing on the VMware platform and Software Defined compute, it is simple and easy for IT to add and expand compute to meet the business needs.

In order to Export this report we can click Export, then download as a PDF or CSV file.

 

 

Export the Planning Report to PDF

 

Click on the Export link in the upper right hand corner of the screen. This will allow us to export a PDF or CSV file for review.

 

 

Submit the Adobe PDF Export Option

 

Leave all options to the default and click Submit.

 

 

Open Planning Summary Report

 

1. Click "Open With" and leave Adobe Reader (default) as is.

2. Click Ok.

 

 

Review Exported Planning PDF Report

 

vCenter Operations allows for the export of reports via PDF or CSV, for our lab we will just review the report above. Your report numbers will look different than the numbers above but you should get an idea of the value of the reports.

According to our Capacity Remaining report, we can see that even Network I/O Received Rate is close to being exhausted. If additional workloads were to be added to the cluster, then not only would CPU, Memory and Disk Space be low, but Networking capacity would also be threatened. The Capacity Remaining Report tells us a number of things about the environment:

Keep in mind that this is just one of a number of various capacity, trending, optimization and right-sizing reports that vCenter Operations Manager can generate.

 

What-If Scenarios Overview


A what-if scenario is a supposition about how capacity and load change if certain conditions are changed without making actual changes to your virtual infrastructure. If you implement the scenario, you know in advance what your capacity requirements are.

You can create new what-if scenarios for hosts and clusters on the Summary tab and the Views tab under the Planning tab. The list of what-if scenarios for a selected view appears on the Views tab under the Planning tab. vCenter Operations Manager assigns names, such as Add 1 New VM, to scenarios.

You can create scenarios with hosts, virtual machines, and datastores in the vSphere Client inventory.

Applicable What-If Scenarios

Cluster

Host

 


 

What-If Exercise

This section of the lab is designed to show you how to use vCenter Operations to determine if a cluster or host has the compute capacity to handle additional virtual machines. This is an exercise to show the feature set and will have little impact on the rest of this lab. This module can be done independently and will not interfere with any other modules.

 

 

Launch Firefox from the Desktop

 

Locate and open Firefox browser from the desktop.

 

 

Click on vCOPS Bookmark

 

When Firefox is opened, click on the vCOPS bookmark link.

Then log in to vCOPS using the following credentials:

username: admin

password: VMware1!

 

 

Compute Cluster Planning Section

 

In order to start the What-If scenario a cluster or host needs to be highlighted. For this lab we will use Compute Cluster to simulate the impact of adding additional virtual machines.

1. Click Compute Cluster

2. Highlight Planning

3. Click Views

4. Make sure All Views is highlighted

5. Click Virtual Machine Capacity

6. Click "Create New Scenario"

The "Create New Scenario" link will bring up the Scenario wizard.

 

 

 

Select Type of Change

 

After clicking on the What-If scenario button you will be presented with the What-If Scenario Wizard.

For our example we will model adding additional virtual machines to the Compute Cluster then review the results.

1. Highlight radio button for Virtual Machines

2. Click Next

Note: You will not do this in the lab, we will just review the process.

 

 

Selection of Virtual Machine Scenario

 

Make sure the "Add Virtual Machines by specifying profile of new virtual machines" radio button is selected.

 

 

Select New Virtual Machine Configuration

 

For our exercise add the following criteria

  1. Virtual Machine Count : 5
  2. vCPU: 2 (leave speed to default)
  3. Memory: 2 GB
  4. Specify Virtual Disk Configuration: 2 Virtual Disks / Thin Disk at 20 GB

You could also modify the Utilization values to match the screenshot.

Note: These criteria are just meant to mimic some of the lab configurations; however, feel free to modify them or build your own scenario.

 

 

Ready to Complete

 

Click Finish.

 

 

Review What-If Scenario Results

 

On the bottom section of the page you will see the results of the what if scenario. Please browse and look at the types of information you can get.

The What-If scenario results will show a number of Details around the current actual and then after the additional VM's are added. The types of details you get are:

We are done with vCenter Operations Manager, so please close Firefox.

 

Module 5 - Increase Compute Cluster Capacity (15 Min)

Review Host Networking Configuration


In this section we will review the Host Networking configuration on the host that will be added to the Compute Cluster.


 

Locate the Firefox Browser Icon

 

Locate and open the Firefox Browser from the desktop.

 

 

Login to vSphere Web Client

 

Log in to the vSphere Web Client with the following credentials, just like in previous steps of the lab:

administrator@vsphere.local

VMware1!

 

 

 

Hosts and Clusters

 

Click on Hosts and Clusters.

 

 

esx-06a vSwitch Configuration

 

Click on esx-06a host.

Click on Manage.

Click on Network.

Click on Virtual Switches.

Esx-06a is connected to vSwitch0 and has one physical adapter.

 

 

esx-06a VMkernel Networks

 

Click on VMkernel adapters.

Notice that there are three VMkernel Port Groups.

Management - 192.168.110.56

VSAN - 10.20.20.56

vMotion - 10.10.30.56

 

 

 

Migrate Host to Distributed Switch


In this section we will modify an ESXi host and migrate it to join a Distributed Switch.


 

Navigate to Networking

 

In the vSphere web client, click on the Home icon , then click on the Networking icon.

 

 

Expand out Datacenter Site A

 

Expand out DataCenter Site A in order to see the Virtual Distributed switches within the environment.

 

 

Add Host Networking

 

Right click on Compute-VDS and click on Add and Manage Hosts. This wizard will allow us to add esx-06a to the Distributed Switch.

 

 

Add and Manage Hosts

 

Select the Add host radio button. Then click Next.

 

 

Select Hosts

 

Click the green plus sign next to New Hosts...

 

 

Select Appropriate Host

 

In the Select New Hosts box select esx-06a.corp.local and click OK.

 

 

Review that Host is Selected

 

 

 

Select Network Adapter Tasks

 

In the Select Network Adapter Tasks we will leave Manage Physical Adapters and Manage VMKernel Adapters checked. Do not check any other boxes.

 

 

Assign Uplink

 

Next we need to add the physical network adapter to the Distributed Switch.

Click on vmnic0 and choose Assign Uplink.

 

 

Choose dvUplink1

 

Choose dvUplink1 and click OK.

 

 

Review vmnic0 Configuration

 

Notice that vmnic0 is assigned to

Uplink: dvUplink1

Uplink Port Group: Compute-VDS-DVUplink

Click Next

 

 

Manage VMKernel Network Adapters

 

From here we will Migrate the vmk0, vmk1 and vmk2 VMkernel adapters to the corresponding Distributed Switch Port Groups.  

For now Highlight vmk0 then click on Assign Port Group.

 

 

Migrate Management Port Group

 

Highlight Compute-VDS-Management port group and click OK.

 

 

Review vmk0 Migration

 

Notice that vmk0 has been Reassigned to Compute-VDS-Management.

Click vmk1 and Assign port group.

 

 

Migrate vMotion Port Group

 

From the Destination Port Group box choose Compute-VDS-vMotion.

 

 

Review vmk1 Migration

 

Review vmk1 has been Reassigned to Compute-VDS-vMotion.

 

 

Migrate vmk2 Port Group

 

Click on vmk2 and click Assign Port Group. Then highlight Compute-VDS-VSAN and click OK.

 

 

Review Migration of all VMkernel Port Groups

 

Review that vmk2 is now going to be mapped to Compute-VDS-VSAN and click Next.

 

 

Analyze Impact

 

Analyze the impact of the configuration change. Notice there is no impact, so click Next.

 

 

Ready to Complete

 

Review the Ready to Complete. Click Finish.

 

 

Compute-VDS Hosts

 

Now let's make sure Compute-VDS contains the esx-06a ESXi host.

Highlight the Compute-VDS Distributed switch, then click on Related Objects.

 

 

Review Compute-VDS Host Configuration

 

Click on Hosts.

Notice esx-06a.corp.local is now connected to the Compute-VDS. We have not yet moved it into the Compute Cluster; you will do that in the next section.
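
The wizard you just walked through can also be expressed through the API. The pyVmomi sketch below is a simplified, hypothetical helper: it only attaches a host and one physical NIC to the distributed switch, and it deliberately leaves out the VMkernel adapter migration that the wizard handled above.

    from pyVmomi import vim

    def add_host_to_dvs(dvs, host, pnic="vmnic0"):
        """Add a host to a distributed switch, backed by a single physical NIC.

        VMkernel adapter migration is not included here; see the wizard steps above.
        """
        member = vim.dvs.HostMember.ConfigSpec(
            operation="add",
            host=host,
            backing=vim.dvs.HostMember.PnicBacking(
                pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice=pnic)]))
        spec = vim.DVSConfigSpec(
            configVersion=dvs.config.configVersion,
            host=[member])
        # Returns a task object; wait on it before relying on the change
        return dvs.ReconfigureDvs_Task(spec)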

 

Move Host to Compute Cluster


In this section we will move the host esx-06a to the VSAN compute cluster. This host was previously configured with the necessary VSAN networking, so we are now ready to expand our VSAN compute cluster and add additional compute and storage to the cluster. This shows the flexibility with which we can expand cluster compute within a Software Defined Datacenter.


 

Click on Hosts and Clusters

 

Within the vSphere Web Client click on the Home Tab, then click on Hosts and Clusters.

 

 

Move Host into Compute Cluster

 

Now we need to choose esx-06a.corp.local and move it to the Compute Cluster which is configured for VSAN.

1. Expand out DataCenter Site A

2. Highlight esx-06a.corp.local

3. Click on Actions

4. Then click on Move To...

 

 

 

Choose Cluster

 

From the Move To... dialog box do the following

1. Expand out Datacenter Site A

2. Highlight Compute Cluster

3. Click OK

 

 

Review Compute Cluster Configuration

 

Ensure that esx-06a.corp.local is showing up in the Compute Cluster now.
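
For completeness, the same move can be scripted. The pyVmomi sketch below is a hypothetical helper (connection and error handling omitted; it follows the connection pattern shown earlier in this lab guide) that moves a standalone host such as esx-06a into the Compute Cluster.

    from pyVmomi import vim
    from pyVim.task import WaitForTask

    def move_host_into_cluster(content, host_name, cluster_name):
        """Move a standalone host that is already in the datacenter into a cluster."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem, vim.ClusterComputeResource], True)
        try:
            host = next(o for o in view.view
                        if isinstance(o, vim.HostSystem) and o.name == host_name)
            cluster = next(o for o in view.view
                           if isinstance(o, vim.ClusterComputeResource) and o.name == cluster_name)
        finally:
            view.Destroy()

        # MoveInto_Task moves hosts that are already in the same datacenter into the cluster
        WaitForTask(cluster.MoveInto_Task(host=[host]))
        print(f"{host_name} is now part of {cluster_name}")

    # Example usage (assumes 'content' from an existing connection):
    # move_host_into_cluster(content, "esx-06a.corp.local", "Compute Cluster")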

 

Review Expanded VSAN Configuration


In this section we will review how the VSAN storage and disk capacity have been expanded since we have added esx-06a to the cluster.


 

Review VSAN General Page

 

Since a new host has been added to the Compute Cluster, the VSAN General page now shows 4 Hosts participating in the VSAN cluster. The network status is Normal because the host is communicating over the VSAN VMkernel network that was configured and migrated to the Distributed Switch in the earlier sections. Notice also that the Total Capacity of the VSAN Datastore has increased to 31.00 GB. In order to get to this page do the following:

1. Click on Hosts and Clusters tab

2. Click on Compute Cluster

3. Click on Manage tab

4. Click on Settings

5. Expand Virtual SAN (if necessary) and Click on General.

 

 

Review VSAN Disk Groups

 

Since a host has been added to the VSAN cluster we can review its participation and see which disks are contributing to the storage. Do the following:

  1. Click on Disk Management

On this page we can see that esx-06a.corp.local has 2 of 2 disks in use and is connected to the VSAN cluster. The bottom of the page shows the two disks within esx-06a that are part of the VSAN cluster. You may need to scroll down to see esx-06a.

Optional: Click on esx-06a.corp.local and review the disk configuration at the bottom of the page.

 

 

Review New VSAN Configuration

 

Now let's look at the existing VSAN summary.

1. Click on the Storage tab in vCenter.

 

 

VSAN Capacity

 

From within the Storage view in the vSphere Web Client do the following:

  1. Expand out Datacenter Site A
  2. Click on vsanDatastore
  3. Click on the Summary tab

In the upper right hand corner you see Storage Free and Capacity; notice it has increased slightly. The storage increased because the disks within esx-06a contributed to the overall storage of the VSAN datastore. This shows how easy it is to add additional capacity to the datastore without any downtime.

 

 

 

Review Hosts

 

Let's make sure all of our Hosts are connected and have a normal Status in the VSAN cluster.

1. Click on Related Objects

2. Click on Hosts

Review the Status of the Hosts. You should see 4 hosts in the VSAN cluster, esx-03a, esx-04a, esx-05a, esx-06a.

 

VSAN VM Storage Policies


When you use Virtual SAN, you can define virtual machine storage requirements, such as performance and availability, in the form of a policy. The policy requirements are then pushed down to the Virtual SAN layer when a virtual machine is created.

When you enable Virtual SAN on a host cluster, a single Virtual SAN datastore is created. In addition, enabling Virtual SAN configures and registers Virtual SAN storage providers.

A storage capability is typically represented by a key-value pair, where the key is a specific property that the datastore can offer and the value is a metric, or range, that the datastore guarantees for a provisioned object, such as a virtual machine or virtual disk.

When you know storage requirements of your virtual machines, you can create a storage policy referencing capabilities that the datastore advertises.

For the purposes of this lab we will not define any policies within VSAN. That can be accomplished in the dedicated VSAN lab.


 

Storage Policy Attributes

There are a number of Policy Attributes that can be applied (a simple capacity arithmetic sketch follows the list):

  1. Number of Failures to Tolerate - defines the number of host, disk or network failures a virtual machine object can tolerate. For n failures tolerated, n+1 copies of the virtual machine object are created.
  2. Number of Disk Stripes per Object - the number of HDDs across which each replica of a virtual machine object is striped.
  3. Object Space Reservation - the percentage of the logical size of the object that should be reserved, or thick provisioned, during virtual machine creation. The rest of the storage object is thin provisioned.
  4. Flash Read Cache Reservation - the SSD capacity reserved as read cache for the virtual machine object.
  5. Force Provisioning - if the option is set to Yes, the object will be provisioned even if the policy specified in the storage policy is not satisfiable by the datastore.
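
As a rough illustration of how the first and third attributes interact, the Python sketch below estimates the raw VSAN capacity reserved for a single virtual disk. It deliberately ignores witness components and metadata overhead, which a real VSAN deployment also allocates, so treat it as back-of-the-envelope arithmetic only.

    def estimated_raw_reservation_gb(vmdk_size_gb, failures_to_tolerate=1,
                                     object_space_reservation_pct=0):
        """Back-of-the-envelope estimate of raw VSAN capacity reserved for one disk.

        For n failures to tolerate, VSAN keeps n + 1 replicas of the object.
        Object space reservation controls how much of each replica is thick
        provisioned up front; the rest remains thin. Witness components and
        metadata overhead are ignored here.
        """
        replicas = failures_to_tolerate + 1
        reserved_per_replica = vmdk_size_gb * (object_space_reservation_pct / 100.0)
        return replicas * reserved_per_replica

    # Example: a 20 GB disk with FTT=1 and 100% reservation reserves about 40 GB of raw capacity
    print(estimated_raw_reservation_gb(20, failures_to_tolerate=1, object_space_reservation_pct=100))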

 

 

VM Storage Policy Decisions

Do I want availability for this VM?

Do I want additional performance (above the default) for this VM?

Do I want the VM to be thinly provisioned?

Do I want the VM deployed even if VSAN cannot meet the policy?

 

Module 6 - Expand 3-Tier Application Capacity (15 Min)

Deploy Virtual Machine from Template


In this section we will increase the capacity of the Web-Server Tier in the application by deploying a new web-server instance.


 

Locate Firefox

 

From the desktop locate the Firefox browser icon and double click on it.

 

 

Login to vSphere Web Client

 

Log in to the vSphere Web Client with the following credentials, just like in previous steps of the lab:

administrator@vsphere.local

VMware1!

 

 

 

VMs and Templates

 

Access the VMs and Templates menu in the Home screen of vCenter.

 

 

Deploy web-sv-03a virtual machine

 

The web-sv-template is located under Datacenter Site A.

Select the template and Click on Deploy a new virtual machine.

 

 

Name the virtual machine

 

Enter web-sv-03a as the name of the virtual machine and select the Datacenter Site A. Click Next.

 

 

Deploy virtual machine on host esx-06a

 

Select esx-06a.corp.local as the host to deploy the virtual machine and click Next.

 

 

VSAN Datastore

 

Select the vsanDatastore for the virtual machine and click Next.

 

 

Clone Options

 

Select all three options for cloning for this deployment. Click Next.

 

 

Guest Customization

 

A Web Server guest customization template has already been created for this lab. Click on it to select it and then Click Next.

 

 

Configure IP address

 

Enter 172.16.10.13 as the IP address of this web-sv-03a virtual machine. Click Next.

Note: This IP address has already been populated in the load balancer web pool as shown in the previous section.

 

 

Choose Network Interface

 

Select Virtual Hardware, then choose the Web-Tier-01 logical switch from the Network adapter 1 drop down menu. Click Next.

Note: Once you click the drop down menu you might have to scroll to your right to view the switch name.

 

 

Review Configuration

 

Review the configuration and select Finish.

 

 

Observe the deployment

 

On the Recent Tasks on the right side you will see the deployment in progress. Once the deployment is complete vCenter will customize the virtual machine settings based on the configuration provided. It will change the IP address of the virtual machine and attach it to the Web-Tier-01 logical switch.
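
The same deployment can be scripted if you prefer automation over the wizard. The pyVmomi sketch below is a hypothetical helper that clones a template to a new, powered-on virtual machine on a given host and datastore; guest customization (hostname and IP address) is omitted for brevity, since the wizard above already covers it.

    from pyVmomi import vim
    from pyVim.task import WaitForTask

    def clone_from_template(template, folder, name, host, datastore):
        """Clone a template to a new, powered-on VM on the given host and datastore."""
        relocate = vim.vm.RelocateSpec(host=host,
                                       pool=host.parent.resourcePool,
                                       datastore=datastore)
        spec = vim.vm.CloneSpec(location=relocate, powerOn=True, template=False)
        WaitForTask(template.CloneVM_Task(folder=folder, name=name, spec=spec))
        print(f"deployed {name}")

    # Example usage (objects looked up as in the earlier inventory sketch):
    # clone_from_template(web_sv_template, vm_folder, "web-sv-03a", esx06a, vsan_datastore)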

 

Confirm the update to NSX Distributed Firewall


In this section we will confirm if the new web-sv-03a virtual machine has been added to the Web-Security-Group.


 

Access NSX

 

Access the NSX configuration by selecting Networking and Security.

 

 

View the Web-Security-Policy

 

Expand the Web-Security-Policy and view the Web-Security-Group. As previously mentioned, this security group is a dynamic group that matches all web servers; therefore, you should see the newly created web-sv-03a virtual machine in that group. Click on the x in the pop-up window to close it.

 

Access the expanded 3-tier Application


In this section we will access the 3-tier web application.


 

Launch Firefox from Desktop

 

Locate and open Firefox from the Desktop.

 

 

Access ABC Medical POS application

 

The web server tier of this application has now expanded to include 3 web servers (web-sv-01a, web-sv-02a and web-sv-03a).

Access the application by clicking the bookmark called "ABC Medical POS App".

Refresh the browser a few times and you will see that the HTTPS requests are being load balanced across all three web servers in the tier.

NOTE: If the ABC Medical PoS App link brings up an error message saying "Connection was Interrupted" or "internal server error", please click the link one more time. If needed, close the browser tab, re-open it, and then click the ABC Medical PoS App bookmark again.

 

Lab Conclusion

Lab Conclusion


This lab demonstrated how ABC Medical was able to increase capacity and provide resources to existing applications. ABC Medical Corp. had standardized on vSphere, VSAN and NSX and had realized the benefits of the Software Defined Data Center. When they needed to add additional capacity and virtual machines, this lab demonstrated how easy it was to do so within a standardized virtual datacenter.

By using monitoring tools such as vCenter Operations (vCOPS), ABC Medical was able to see that they had a Capacity Risk in the environment, and by simply adding ESXi hosts to a VSAN cluster they were able to increase compute and storage capacity. ABC Medical was also able to dynamically and simply modify the network to adjust to the increasing needs of the business. NSX is a powerful software-defined networking technology that can virtualize the network and bring a company truly into the Software Defined Datacenter.

 


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-SDC-1412

Version: 20150227-140532