VMware Hands-on Labs - HOL-SDC-1420


Lab Overview - HOL-SDC-1420 - OpenStack with VMware vSphere and NSX

Lab Overview


This hands-on lab consists of an overview of the lab environment, an introduction to VIO, and three modules:

  1. Lab Environment: Overview of OpenStack and a description of this lab environment running OpenStack on vSphere with NSX. (10 minutes)
  2. What is VIO (VMware Integrated OpenStack)
  3. Module #1: Overview of OpenStack and vSphere integration around OpenStack compute and storage functions. (60 minutes)
  4. Module #2: Overview of OpenStack and VMware NSX integration around OpenStack networking functions. (60 minutes)
  5. Module #3: VMware Integrated OpenStack (VIO) Deployment and Installation. This is a Video and Text Only lab to give an Architectural Overview and show the Installation/Deployment Process. (30 minutes)

Please Note: This lab runs BETA code for both NSX and VIO. You may encounter issues as you walk through the lab; we try to call them out in the relevant steps.

Lab Captains:

IMPORTANT: Please make sure the Lab Status shows READY.  If it FAILS or TIMES OUT, the lab will not work.

 

 


 

Lab Status - READY

 

 

Lab Environment

What is OpenStack?


OpenStack is open source software that delivers a framework of services for API-based infrastructure consumption. The OpenStack framework requires hardware- or software-based infrastructure components and management tools to build a functional OpenStack cloud. The "plug-in" architecture of OpenStack services enables various vendors (such as VMware) to integrate their infrastructure solutions (such as vSphere and NSX) to deliver an OpenStack cloud. The next section is for those who have had no exposure to OpenStack; feel free to skip ahead if you are already familiar with it.


 

OpenStack is the Cloud API Layer in a Cloud Technology Stack

 

A typical cloud technology stack consists of the following major components:

  1. Hardware Infrastructure
  2. Software Infrastructure (or virtualization layer)
  3. Cloud API layer that enables consumption and orchestration of underlying cloud infrastructure
  4. Cloud Management Layer that provides governance, resource planning, financial planning, etc., and potentially manages multiple underlying cloud fabrics
  5. Applications running on top of cloud infrastructure

In a non-cloud datacenter model, an application owner would contact one or more datacenter administrators, who would then use software infrastructure tools (e.g., VMware vSphere) to deploy the application workloads on top of physical compute, network, and storage hardware on the application owner's behalf.

OpenStack is a software layer that sits on top of the software infrastructure and enables API-based consumption of infrastructure. OpenStack enables a "self-service" model in which application owners can directly request and provision the compute, network, and storage resources needed to deploy their applications.

The primary benefits of self-service are increased agility, from application owners getting "on-demand" access to the resources they need, and reduced operating expenses, from eliminating manual, repetitive deployment tasks.

 

 

Role of OpenStack in Cloud Technology Stack

 

The OpenStack Cloud API Layer adds the following services to the cloud technology stack.

 

 

Anatomy of OpenStack

 

OpenStack splits infrastructure delivery functions into several different services, each known by its project code name: Nova (compute), Glance (images), Cinder (block storage), Neutron (networking), Swift (object storage), Keystone (identity), and Horizon (dashboard).

OpenStack services orchestrate and manage the underlying infrastructure and expose APIs for end users to consume the resources. OpenStack's strength is that it is a highly customizable framework, allowing those deploying it to choose from a number of different technology components, and even customize the code themselves.  

 

OpenStack on VMware Infrastructure


In this step, we will review the VMware vSphere infrastructure for the lab and how OpenStack has been deployed within this infrastructure.  


 

Lab Scope

 

The focus of this lab is on how the compute, image, and storage portions of OpenStack interact with VMware vSphere. You will provision virtual servers and virtual disk volumes via OpenStack and learn how these capabilities are implemented on the back end using vCenter APIs.

You will also get an overview on how to manage virtual networks by leveraging the VMware NSX plugin for Neutron.

Please Note: OpenStack's object storage service, named Swift, is outside the scope of this lab.

 

 

Understanding Administrators vs Users

 

OpenStack distinguishes between two main roles. Cloud administrators manage the underlying infrastructure and can see and manage all resources, while cloud users consume resources in a self-service fashion and can only see the inventories created within their own projects. You will work in both roles during this lab.

 

 

Overview of Lab Environment

This lab environment has the following main components.

Let's review these components in greater detail.

 

 

Access vCenter via the Web Client

 

Launch the Chrome web browser and select the vSphere Web Client bookmark link.

This will bring up the login screen for vSphere Web Client.

Enter the following credentials:

          Username: root

          Password: VMware1!

          Then click the Login button.  

NOTE: Authentication might take 2-3 minutes; this is a known issue and will be fixed.

 

 

View vCenter Hosts and Clusters

 

  1. Click on the Home tab.
  2. Then click on the Hosts and Clusters icon under Inventories.

 

 

View vCenter Inventory

 

Navigate the inventory, expanding the vCenter vcsa-01a, the datacenter named Datacenter, and the cluster named Compute until you see the two ESXi hosts.

vSphere compute capacity can be exposed to OpenStack on a per-cluster basis, since OpenStack models an entire vSphere cluster as a single pool of capacity.

In this lab, we will expose the cluster named Compute to OpenStack for self-service consumption by a role named Cloud Users.

 

 

View Cluster Properties

 

  1. Click on the cluster named Compute in the Inventory window
  2. Then click on the Summary tab of the cluster.

This will be the cluster exposed as OpenStack capacity. OpenStack can handle any cluster size up to the vSphere scalability limits and can provision workloads across multiple clusters for larger deployments.

Although this particular cluster will be exposed and leveraged by OpenStack, as an administrator, you are still able to take advantage of key vSphere features like DRS and HA by enabling them in the vSphere Web Client.

Please Note: DRS with auto-placement must be enabled for any cluster used by OpenStack, as OpenStack relies on DRS to spread VMs across all capacity available in the cluster.  

 

 

View Shared Datastore

 

  1. Click on the Related Objects tab.
  2. Select Datastores.

Please Note: This cluster has a shared datastore named ds-site-a-nfs01. This datastore is accessible from all ESXi hosts in the cluster, allowing DRS, HA, and vMotion to function correctly. It will be used both for the primary disks of the servers we create via OpenStack Nova and for the persistent block storage volumes created via OpenStack Cinder.

 

 

NSX - Networking and Security

 

The VMware NSX platform is an important element of OpenStack. Without it, we lose the ability to create complex, multi-tenant networks -- a key capability in a software-defined datacenter.

NSX provides networking services such as L2 networks, L3 routing, floating IPs, Security Groups, and more. OpenStack delivers these services through the Neutron service and APIs.

In this lab we will cover the main networking concepts that are required by a cloud user to consume cloud infrastructure in a self-service function. Go back to Home and Click on the Networking & Security Tab.

 

 

 

NSX - Installation Status

 

If you click on Installation, you can see the status of NSX on the environment as well as whether certain capabilities are functional.  You can click around but PLEASE do not change anything or you might have a poor lab experience.

 

 

VMware Integrated OpenStack - Manager Virtual Appliance

 

  1. Go back to the Hosts and Clusters view.
  2. Then select the VMware Integrated OpenStack vApp under the Management Cluster.

VMware Integrated OpenStack (VIO) is installed by first deploying a vApp. The vApp has two components: a management server and an OpenStack template that contains the base images for all the components within VIO. After installation, the VIO plugin should show up, as seen in the next step.

The management server is responsible for deploying, managing, configuring, and scaling VIO.

 

 

VMware Integrated OpenStack Plugin

 

The VIO plugin allows you to deploy the VIO environment and gives you access to the different configuration options of OpenStack. You can also add additional compute clusters through this plugin. Go back to Home and click on the VMware Integrated OpenStack icon. Feel free to poke around and check the different options you have access to from this interface; we will look into the plugin in more detail a little later.

 

 

Summary of Infrastructure

You have now completed reviewing the underlying infrastructure for a simple OpenStack on vSphere deployment.  

This is a single vSphere cluster with two ESXi hosts. DRS is enabled and all ESXi hosts have access to a shared datastore.

vCenter manages the ESXi hosts and datastores, and an NSX appliance provides networking services.

 

 

What is VMware Integrated OpenStack (VIO)

What Is VMware Integrated OpenStack



 

VMware Integrated OpenStack (VIO) Architecture

 

VIO is built on top of VMware's Software-Defined Data Center infrastructure. With purpose-built drivers for each of the major OpenStack services, VIO optimizes consumption of compute, storage, and network resources. VIO also includes OpenStack-specific management extensions for the vCenter Client, vCenter Operations Manager, and LogInsight, allowing you to use existing tools to operate and manage your OpenStack cloud.

1) Nova Compute. The vCenter Driver exposes Compute resources to the OpenStack Nova service through vCenter API calls. Resources are presented as cluster level abstractions. The Nova scheduler will choose the vSphere cluster for new instance placement and vSphere DRS will handle the actual host selection and VM placement. This design enables OpenStack Instance VMs to be treated by vSphere as any other VM. Services like DRS, vMotion and HA are all available.

2) Cinder and Glance. The VMDK driver exposes Storage resources to the OpenStack Cinder service as block devices through datastore/VMDK abstractions. This means that any vSphere datastore, including VSAN, can be used as storage for boot and ephemeral OpenStack disks. Glance images may also be stored as VMDKs or OVA files in datastores.

3) Neutron Networking. The NSX driver supports both the vSphere Virtual Distributed Switch (vDS) and NSX for true software defined networking. Customers can leverage their existing vDS to create provider networks that can isolate OpenStack tenant traffic via VLAN tagging. They can also take advantage of NSX to provide dynamic creation of logical networks with private IP overlay, logical routers, floating IPs and security groups, all enabled across a single physical transport network.

4) Management Integration. The vSphere Web Client has been extended to include OpenStack-specific metadata, allowing searches by terms appropriate to your VMs (Tenant, Flavor, Logical Network, etc.). VIO also includes a vSphere Client plugin for managing OpenStack consumption of vSphere resources. Management packs for vCOPS and LogInsight allow for OpenStack-specific monitoring based on metadata extracted from the OpenStack services.

 

 

VMware Integrated OpenStack (VIO) vAPP

 

Creating your production-grade OpenStack environment with VIO is a two-step process. The first step is to deploy the VIO virtual appliance (vApp) into vCenter. The vApp has two components: a Management Server VM and an OpenStack Template VM. As part of the vApp deployment you will also get a new OpenStack vCenter Plugin that drives the next step in the installation process. Through the plugin, you will deploy an OpenStack Cluster. This launches a process that clones the OpenStack Template VM to create all of the OpenStack Management VMs. Once the cloning is complete, orchestration playbooks are executed to size the VMs appropriately for the services they will provide, and then to install, configure, and start the appropriate OpenStack services on each node. The end result is a running, production-grade OpenStack cluster. We have included a video of the vApp deployment process in the next step.

 

 

Deploy vApp Video (2:41)

In this video you will see how to deploy the VMware Integrated OpenStack (VIO) vApp into vCenter.  You will walk through the configuration options and verify that the vApp has deployed and that the vCenter plugin has been installed. Due to time and HOL resource constraints, we cannot provide live deployment of this vApp as part of the lab.

 

 

VMware Integrated OpenStack (VIO) Management VMs

 

As mentioned previously, VIO is a production grade OpenStack deployment based on a reference architecture developed through customer best practices and the VMware Network Systems Business Unit internal cloud. It is designed to be highly available through the use of vSphere capabilities like HA and DRS, and through the use of redundant components. The core OpenStack services are deployed as follows:

 

 

Deploy VMware Integrated OpenStack

In this video you will see how to deploy the VMware Integrated OpenStack environment.

 

 

Validate OpenStack Cluster Is Available Video

In this video you will log in to OpenStack Horizon to verify that the compute resources have been made available and to quickly create an OpenStack Project and User. Then you will create a Network and deploy an instance.

 

Module 1 - OpenStack Compute and Storage with vSphere (60 Min)

VMware Integrated OpenStack (VIO) vCenter Overview



 

vCenter Web Client - VMware Integrated OpenStack Administration

 

Click on VMware Integrated OpenStack.

 

 

OpenStack Clusters Overview

 

To check the current VIO deployment

Click on Summary.

 

 

Summary Overview

 

You can see here that the Connected Server is pointing to the management server in the VMware Integrated OpenStack vApp. Prior to installation, this page will ask you to choose the management server for your VIO deployment.

 

 

VMware Integrated OpenStack Manage and Monitor

 

Click on Monitor to see the VIO configuration of all the OpenStack components.

 

 

Manage Different VIO Components

 

Click through the different tabs to see how each of the components is managed. You will notice that you can view the different pieces of the OpenStack environment.

 

 

Adding Additional Compute Resources to VMware Integrated OpenStack

 

A benefit of VIO is that you can quickly add additional compute clusters, and they will automatically register with OpenStack. We will not do it here, but you would create a new compute cluster and add it through this interface to quickly scale out resources.

 

OpenStack Administrator View


In this step, we will learn about an OpenStack service called Horizon. Horizon provides a web portal for both administrators and users. Administrators can use the UI for common tasks such as creating users, managing quotas, and checking infrastructure usage. In Horizon, cloud administrators have a different view than cloud users: while cloud administrators can see and manage all infrastructure resources, cloud users can only see inventories created by them.

We will start with an orientation of the Horizon Web UI for cloud administrators and then switch to a cloud user view later.


 

Login as Administrator

 

  1. To access Horizon, launch Chrome from the desktop.
  2. Then select the Login - VIO bookmark. This will launch the OpenStack Horizon Web UI running on the VIO appliance. You will probably see an untrusted certificate warning; proceed anyway. We are still in the process of building the certificates for the environment.

When you see the login screen, please enter the following credentials:

Username: admin

Password: VMware1!

Please Note: The initial login could take anywhere from a few seconds to a couple of minutes; please be patient if you notice a delay. During this time, tokens are being stored and cached on the Memcache servers. You may notice faster response times as you continue to use the environment due to this token caching.

Then scroll down and click Sign in.

Once logged in, you should see the 'admin' tab on the left already selected.  

 

 

Orienting with Admin view of Horizon Portal

 

Upon initially logging in as 'admin', note the following key tabs:

  1. At the top is a drop-down menu that allows an admin to switch views to a specific user. For example, if an admin wants to see what resources are visible to a particular user, they can select the user from the drop-down list. For now, please ensure that the drop-down has 'admin' as the selected user.
  2. There is a 'Project' tab. Every user in OpenStack belongs to a project (more info on this in next section). An admin belongs to an 'admin' project that is created by default. A project contains all the instances, volumes and other inventories created by all users belonging to the project.
  3. Since we have logged in as admin, you will note an 'Admin' tab at the bottom. Click on the Admin tab to expand it.

 

 

View Hypervisor Resources

 

  1. Click on the Hypervisors tab within the Admin Panel.  
  2. Notice that there is only a single hypervisor shown. The reason is that OpenStack sees each vSphere cluster as a single hypervisor where workloads can be placed. This allows key vSphere features like DRS, HA, and vMotion to still be used in the background without confusing OpenStack.

Please Note: The resources of this hypervisor represent the resources of the vSphere cluster: in this case, the two ESXi hosts combined, plus the shared datastore. The memory shown is less than the combined total of the hosts because ESXi reserves some memory for its own operations.

 

 

View Flavors

 

  1. Click on the Flavors tab under the Admin panel.
  2. Flavors represent the different options users will have in terms of how "large" a VM they want to create. The cloud administrator can define what flavors are supported in an OpenStack deployment, and cloud users can then select from the set of flavors exposed to them.

Please Note: In this lab, we will utilize only the default set of flavors.  
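
For reference, the same list is available from the OpenStack CLI once admin credentials have been sourced (we do exactly that on the controller later in this module):

nova flavor-list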

 

 

View Images

 

  1. Click on the Images tab under the Admin Panel.
  2. This shows a list of all images that will be available to tenants to choose from when they look to create a virtual machine. Cloud administrators will typically upload a variety of "public" images to be made available to their cloud users. Cloud users are able to further extend this set of images with their own custom images.

Please Note: For simplicity, we have already uploaded a single Debian Linux image for use in this lab. The VMDK disk format indicates that it can be used with vSphere.
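
As an aside, administrators typically upload such images with the glance CLI. A hedged sketch, not a lab step (debian.vmdk is a placeholder file name, and the vmware_disktype value depends on how the VMDK was created):

# upload a vSphere-compatible image and mark it public
glance image-create --name Debian-Small \
  --disk-format vmdk --container-format bare \
  --is-public True \
  --property vmware_adaptertype=lsiLogic \
  --property vmware_disktype=sparse \
  --file debian.vmdk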

 

 

View Network

 

  1. Click on the Project panel (top of the left-side margin). Make sure you click on Project and not Admin, as the options look very similar.
  2. Select the Network tab under the Project panel.
  3. Select the Network Topology tab under the Network panel.

This will present a graphical view of the network. For this lab, we have pre-created an 'External-Shared' Network. This network will be used later in the lab for compute and storage. In the second module of this lab, we will provide instructions on creating user defined networks and connecting them using routers.

 

Manage Projects, Quotas and Users


In OpenStack, users are grouped in containers called projects. These projects are effectively tenants in the OpenStack cloud environment. Each project has an assigned quota of compute, network, and storage resources that are shared by all users in that project. Projects are isolated from each other; that is, users in one project can't see the users and resources of other projects.

 

 


 

Manage Projects and Quotas

Users must be associated with at least one project, though they may belong to more than one. Therefore, you should add at least one project before adding any users.

 

 

Creating a Project

 

  1. Select the Admin tab on the left-hand navigation bar.
  2. Click the Identity Panel tab under Admin.
  3. Click the Projects tab under the Identity Panel.
  4. Click the Create Project button. 

 

 

Enter details of Project 1

 

You are prompted for a project name and an optional description.

  1. Enter user1-project in the Name field.
  2. Verify that the check box to Enable this project is checked.
  3. Click on the Project Members tab.

Please note that you have to enter the project name as documented above. Subsequent sections in the lab will make use of this configuration.

 

 

Add Admin as Project Member

 

  1. Under Project Members tab click on the "+" button next to admin to make it a member of the user1-project. This step is required in order to pull instance metadata from the vSphere Web Client.
  2. Then Click the Create Project button.

Your project is now listed with the rest of the tenants. We will now proceed to the definition of additional parameters for this new project.
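
For reference, the equivalent keystone CLI calls would look roughly like this (a sketch, not a lab step; the _member_ role name is the Horizon default and is an assumption here):

keystone tenant-create --name user1-project
keystone user-role-add --user admin --role _member_ --tenant user1-project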

 

 

Create additional Project

 

We will use two projects to demonstrate multi-tenancy. Let's create another project similar to the previous one.

  1. Enter user2-project in the Name field.
  2. Select the check box to enable this project. Optional: You can add the admin user as a member of user2-project, as in the previous step, by clicking on the Project Members tab. This will allow instance metadata to be displayed in the vSphere Web Client for VMs created under user2.
  3. Then Click the Create Project button.

Your project is now listed with the rest of the tenants. We will now proceed to the definition of additional parameters for this new project.

 

 

Quota

 

Quotas are used to set operational limits on the resources assigned to a project. By implementing quotas, an OpenStack cloud administrator can predictably allocate capacity to tenants and prevent one tenant from monopolizing shared resources.

1. Focus on user2-project.

2. Click on the More button in the pull-down menu under the Projects tab and select Modify Quotas, or click Edit Project and then navigate to the Quota tab to see the various options.

 

 

View Quota

 

Optional: Explore the default quotas for this tenant and change them if desired. Click Save when done.
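
Quotas can also be viewed and changed from the command line. A hedged sketch (the tenant ID placeholder comes from keystone tenant-list, and the values shown are arbitrary examples):

keystone tenant-list
nova quota-show --tenant <tenant-id>
nova quota-update --instances 20 --cores 40 <tenant-id>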

 

 

Creating a User

 

We will now create a user for the previously created project.

  1. Click the Admin tab.
  2. Select the Identity Panel tab.
  3. Select the Users tab.
  4. Click Create User to display the user menu.

 

 

Enter details of user1

 

1. Enter or select the following data into the fields to create a user named user1.

2. Click the Create User button.

Please note that you have to enter the username and password as documented above. Subsequent sections in the lab will make use of this configuration.
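
The CLI equivalent, for reference (a sketch; <project-id> is the ID of user1-project as reported by keystone tenant-list):

keystone user-create --name user1 --pass user1 --tenant-id <project-id>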

 

 

Create additional User

 

1. Enter or select the following data into the fields to create an additional user named user2.

2. Click the Create User button.

 

 

Sign Out Admin User

 

The OpenStack infrastructure setup is now complete.

Now that we have two self-service cloud users created, each with their own project and resource quota, click the Sign Out button in the top right-hand corner of the browser page. This will take you back to the login page.

 

 

Housekeeping Items - Changing Image Adapter Type


Some housekeeping items before using the lab.


 

Login to Putty

 

When we set up this lab, we uploaded our Debian image with a storage adapter type of IDE. However, in order to hot-add and mount a volume (which we will do later), we need to change IDE to SCSI. We thought we might as well make this a lab activity to give you some command-line experience!

1.  Double-click on the PuTTY icon on the desktop named "controller". Once a PuTTY window opens, double-click on controller01.

 

 

Type in credentials

 

Type root for the login and vmware for the password.

At the prompt, type

source /root/cloudadmin.rc

This provides the current user with the credentials necessary to access the OpenStack API, allowing us to run administrator commands from the BASH shell.
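
The rc file works by exporting the standard OpenStack environment variables (OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL, and so on) that the CLI clients read. You can confirm they are set with:

env | grep OS_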

 

 

Show images through command line

 

Type the following to show a list of images currently in OpenStack:

glance image-list

Double click on the ID next to Debian-Small.  This should copy the ID into your clipboard.

 

 

Change adapter to lsiLogic adapter

 

Type:

 glance image-update <ID> --property vmware_adaptertype=lsiLogic 

replacing <ID> with the ID you copied in the previous step. Make sure you type lsiLogic with a capital L. That's it! You have changed the image to the correct adapter type, so we can mount volumes while the virtual machine is running later in the lab.
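
Optionally, you can confirm the change by displaying the image details again:

glance image-show <ID>

The properties listed in the output should now include vmware_adaptertype with the value lsiLogic.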

 

Provisioning VM Instances via OpenStack Horizon Dashboard


An instance is the OpenStack terminology for a virtual machine. From Horizon, users can provision instances and can attach them to existing OpenStack networks. In this section, we will illustrate the process of creating instances from OpenStack, and later we will observe how those applications get instantiated in vSphere.


 

Login as User

 

Now we will shift our vantage point and take on the role of a cloud user (user1) who wants to provision a VM via the self-service OpenStack Horizon web UI. 

Log into the Horizon Web UI, this time using the following credentials:

User Name: user1

Password: user1

Please Note: In this lab, we will use the OpenStack term "instance", which simply is another term to describe a VM.  

 

 

View Quota Usage

 

When a cloud user first logs in, they are shown how much of their current quota limits have been used.

Since we haven't done anything yet, all categories show 0 resources used except for Security Groups. One security group is used by the “Internal Shared” network available to all users for the purposes of this lab. We will revisit networking in greater detail with the second module of this lab.

 

 

Launching an Instance

 

  1. Click on the Instances tab
  2. Then click on the Launch Instance button.

 

 

Instance Details

 

Under the Details tab:

Enter a name for the instance ("HOL" for example)

For Flavor, select 'm1.small' from the dropdown

Enter a count of 2 for the Instance Count.

Select Boot from Image from the drop-down list and select the default image name of Debian-Small (2.0GB) from the other drop-down list.

WARNING: If one of the instances fails, terminate it by checking the box next to the failed instance and clicking "TERMINATE INSTANCE" in the upper right-hand corner. Boot another instance by following the steps above and choosing 1 for the instance count. This is a known issue that is fixed in a later release.

 

 

Access and Security

 

  1. Click on the Access & Security tab
  2. Ensure that default is selected under Security Groups.

When instances are mapped to the default Security Group, the following network connectivity restrictions apply:

 

 

Networking

 

  1. Click on the Networking tab and then on the + button next to the TestNet network to map it to nic:1. The TestNet network has been pre-created for this lab section. In Module 2 you will create an on-demand network and explore the power of OpenStack networking when integrated with VMware NSX. Notice how in this OpenStack workflow, an instance does NOT have a vNIC (or vNICs) until you map one (or more) network(s) to it.
  2. Click the Launch button to create the instance.
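
For CLI-minded readers, the same launch can be expressed with the neutron and nova clients. A hedged sketch, not a lab step (the net-id placeholder must be looked up first):

neutron net-list
nova boot --flavor m1.small --image Debian-Small \
  --nic net-id=<TestNet-id> --min-count 2 HOL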

 

 

Instance Build Process

 

You can monitor the creation of the two launched instances directly from Horizon.

 

 

Instance Details

 

Click on the instance name to view the instance details.

 

 

View Instance Overview

 

In the Overview sub-tab, notice the basic information about the instance, including items from the dialog used to create the instance.

There are a few values that Nova will have generated for you:

  1. When finished reviewing the instance click the Instances tab to go back to Instances Table view.

 

 

Console Access

 

Since the addresses may be different, make note of the IP Addresses of each of your instances.

  1. Verify that the Status for both Instances shows Active.
  2. Click on More for the first Instance and select Console to cross-launch the VM console.

 

 

View Instance Console

 

OPTIONAL:

Right-click on the link labeled Click here to show only console and choose "Open link in new tab" to open this VM console in a separate tab. This will make your life easier. You may see a "security certificate" warning; proceed anyway. You may have to click on the empty screen and wait a little bit.

Log in using root as the login and vmware as the password.

Verify connectivity to the second instance you created by pinging the IP address of the other instance.
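
At the console prompt, the check looks like this (the placeholder stands for the address you noted from the Instances list):

ping -c 2 <IP-of-second-instance>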

In the next module, (Module 2) we will be exploring these network operations in greater detail.

 

 

View Updated Quota Usage

 

Click on the browser tab labeled Instances - VMware Integrated.

  1. Click on the Overview tab in the Project panel on the left-hand side of the browser (Please Note: This is different from the Overview link within the Instance section).

Notice that the quota for user1 has been updated to reflect the consumption of the two 'm1.small' instances you launched.

 

 

Sign Out user1

 

Now end your session as user1 by clicking the Sign Out button in the upper right-hand corner of the page.

 

 

View Quota of Another User

 

At the login page again, log in now as user2:

Username: user2

Password: user2

As you log in, notice in the Overview tab that user2 still has a full quota available; this is because user2 is in a different project than user1.

Click on the Instances tab in the project panel. Notice that user2 cannot see any of the other instances created by user1. The resources consumed by different cloud users are hidden from each other. Only the cloud administrator can see all instances, either by logging into Horizon, or by accessing the vCenter GUI directly.  

Click the Sign Out button in the upper right-hand corner of the window.

 

Viewing the Provisioned VM Instance via the vSphere Web Client


OpenStack seamlessly integrates with vSphere in the compute layer to offer cloud administrators the ability to leverage powerful VMware features that may not necessarily be available in a pure OpenStack deployment.


 

High Level Architecture

 

A VMware-developed plugin is the "glue" that integrates OpenStack's compute module, Nova, with vCenter. Nova compute instructions for instance provisioning and operations are translated into vCenter API calls that are executed on the infrastructure owned by vCenter. Currently, the resolution that can be achieved is at the vSphere cluster level. This means that OpenStack sees each vSphere cluster as an independent target for standing up compute capacity. As a result, all intra-cluster operations are independent from OpenStack and can be used transparently. Things like DRS, HA, FT, etc. will continue to work, making this solution very attractive for enterprise administrators wanting to benefit from open orchestration (OpenStack) without sacrificing VMware's unmatched enterprise-level functions.

 

 

vCenter Operations for Provisioned Instances

 

  1. Launch the Chrome web browser from the desktop.
  2. Click on the vSphere Web Client shortcut to open a tab to the vCenter Web Client.
  3. Login to vCenter using the following credentials:

Username: root

Password: VMware1!

 

 

VMs in vSphere Web Client

 

Click on the Hosts and Clusters icon.

 

 

VMs in vSphere Web Client

 

  1. From the Hosts and Clusters window-pane view, select any one of the provisioned VMs (referenced in vCenter by their respective UUIDs). If you cannot see any of the VMs newly provisioned by OpenStack, please click the refresh button near the login name at the top right-hand corner of the vCenter Web Client window.
  2. Click on the Summary tab for the selected VM.
  3. Take note of the ESX server hosting this VM as well as the IP address of the VM. (The UUID could differ from the screenshot above.)
  4. Repeat steps 1-3 for the second VM in the 'Compute' cluster.

Please Note: The 'Compute' cluster has the Distributed Resource Scheduler (DRS) enabled, which means that vSphere will balance compute utilization and spread VMs evenly across all hosts in the cluster. In our case, each of our two VMs will land on a separate host, given that this is a two-host cluster. Other VM properties, specifically network properties, will be explored in greater detail in Module 2.

 

Provisioning Persistent Block Storage via Cinder


Why do we need volumes at all?

Similar to Amazon Web Services (AWS), in OpenStack, the instance you have provisioned already has a local disk, but this disk will not persist if the instance is terminated.

Imagine a workload where 1-hour of computation needs to occur at the end of each business day. Ideally, you would like to only spin up the instance when necessary for 1-hour per day. However, if you were only using a local disk, you would lose any data you generated between runs. This is where volumes come in. They are a persistent storage that can be attached and detached on-demand to any running VM.

Switch back to the Chrome tab associated with the OpenStack Horizon Web GUI and log in as user1.


 

View Volumes

 

  1. Click on the Volumes tab within the Project pane on the left-hand side of the screen. Notice that at this point, there are no volumes provisioned.
  2. To create a volume for persistent block storage, click the Create Volume button in the upper right-hand corner of the page.

 

 

Create a Volume

 

In this dialog, we only need to set the following fields: enter data-volume1 for the Volume Name and 10 for the Size (GB).

Click the Create Volume button.

NOTE: If there is an error, please SSH into controller01 through the PuTTY window (login root, password vmware) and type the following:

service cinder-api restart && service cinder-volume restart && service cinder-scheduler restart
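
The equivalent cinder CLI call, for reference (a sketch, assuming the data-volume1 name and 10 GB size used in this lab):

cinder create --display-name data-volume1 10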

 

 

 

Attach Volume to an Instance (Part 1)

 

At this point, the volume is not yet usable. Notice that the status of the 'data-volume1' volume is listed as 'Available'. We need to attach the volume to an instance that can read and write data to the block storage device.

Select the Edit Attachments option in the 'data-volume1' row.

 

 

Attach Volume to an Instance (Part 2)

 

Choose the following values:

  1. Attach to Instance: Select the first HOL instance from the list.
  2. Click the Attach Volume button.

This will take you back to the Volume list page, where you will see the attachment occurring. Wait until the status of 'data-volume1' has changed to 'In-Use' before proceeding, indicating that the volume is now attached to the VM instance.
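
The same attachment can be made from the CLI. A hedged sketch (IDs come from nova list and cinder list; 'auto' lets Nova choose the device):

nova list
cinder list
nova volume-attach <instance-id> <volume-id> auto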

 

 

Disk Mount Point

 

Once the volume is attached to the instance and the status changes to 'In-Use', the mount point where the volume is mounted inside the instance is displayed under the 'Attached To' column. In this case, the mount point is /dev/sdb.

 

 

View Disk Details from Instance

 

Access the VM console for the first HOL instance once more. If you still have it open in a new tab, just click inside the screen to regain access; if you closed it, follow the steps below.

Click on the Instances tab in the Projects panel on the left-hand side of the screen.

Click on the instance name (displayed as a link) of the first HOL instance in the Instance Name column.

Select the Console tab, right-click on "click here to show only console" and open it in a new tab. The login credentials are: Login: root  Password: vmware

Type the following command at the prompt to view disk details:

 df -h

The command output will not list /dev/sdb as it has not yet been scanned, formatted or mounted.

 

 

Format and Mount the Newly Attached Volume

 

Run the following command at the prompt to have the OS rescan for attached disk devices:

echo "- - -" > /sys/class/scsi_host/host0/scan

There is a space between each "-".  Once you see output indicating it found /dev/sdb, press enter to get a new prompt.  NOTE: If nothing shows up, try replacing host0 with host1 or host2.  You should see some output after this command.

Since this is a new block device, it will not have any partitions or file systems on it, so create an EXT3 filesystem at the prompt with:

mkfs.ext3 /dev/sdb

Type 'y' and press enter to confirm that it should create a partition table. You should see some output for the next minute.

Finally, make a directory and mount the new filesystem to that directory:

mkdir /mnt/persistent-data
mount /dev/sdb /mnt/persistent-data

Run the following command to confirm that you now have a 2GB primary disk and a 10GB extra disk available:

df -h

The output should now include /dev/sdb in the list of volumes, as shown in the image above.

Please Note: Make sure to include the spaces in the command syntax, otherwise the commands won't work.

 

 

Create Test Files

 

Now, to emphasize the point about persistent and non-persistent data, we will create two files, one on the primary non-persistent storage, and one on the secondary block storage device.

First, on the primary storage, we will create a file in the root user's home directory:

echo "Hello non-persistent World" > /root/test-file1.txt

Second, on the attached block storage, we will create a file in /mnt/persistent-data:

echo "Hello persistent World" > /mnt/persistent-data/test-file2.txt

Please Note: Make sure to include the spaces in the command syntax, otherwise the commands won't work.

 

 

Detach a Volume (Part 1)

 

Detach the volume from the first HOL instance, to which it is currently attached.

  1. Click on the Volumes tab on the left to go back to the Volumes page.
  2. Click on Edit Volume in the row corresponding to the volume 'data-volume1'.
  3. Click on Edit Attachments.

 

 

Detach a Volume (Part 2)

 

Click on the Detach Volume button.

 

 

Detach a Volume (Part 3)

 

Confirm volume detach by clicking on Detach Volume button again.

 

 

Wait for Volume to be Available

 

Wait until the detach is complete and the data-volume1 volume again has a status of 'Available'.
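
The CLI equivalent of this detach workflow, for reference (a sketch, not a lab step):

nova volume-detach <instance-id> <volume-id>
cinder list

Repeat cinder list until the volume status returns to 'available'.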

 

 

Attach the Volume to Second Instance (Part 1)

 

  1. Click on Volumes tab in the Projects pane on the left-hand side of the screen.
  2. Then click Edit Attachments in the 'data-volume1' row.

 

 

Attach the Volume to Second Instance (Part 2)

 

Choose the following values:

  1. Attach to Instance: Select the other HOL instance.
  2. Click the Attach Volume button.

This will take you back to the Volume list page, where you will see the attachment occurring. Wait until the status of 'data-volume1' has changed to 'In-Use' before proceeding, indicating that the volume is now attached to the VM instance.

 

 

Disk Mount Point

 

Once the volume is attached to the instance and the status changes to In-Use, the mount point where the volume is mounted inside the instance is displayed under the 'Attached To' column. In this case, the mount point is /dev/sdb.

 

 

Mount the Persistent Block Volume to Second Instance

 

Click on the Instances tab in the Projects pane on the left-hand side of the screen.

Click on the instance name (displayed as a link) of the HOL instance in the Instance Name column.

Select the Console tab, right-click on "click here to show only console" and open it in a new tab. The login credentials are: Login: root  Password: vmware

Run the following command at the prompt to have the OS rescan for attached disk devices:

echo "- - -" > /sys/class/scsi_host/host0/scan

There is a space between each "-".  Once you see output indicating it found /dev/sdb, press enter to get a new prompt.  NOTE: If nothing shows up, try replacing host0 with host1 or host2.  You should see some output after this command.

(Note: We do not need to format the volume, as it was already formatted when attached to the first HOL instance).

Press enter to get a new command prompt and type the following commands in order:

mkdir /mnt/persistent-data
mount /dev/sdb /mnt/persistent-data
df -h

The output of the df command should show /dev/sdb in the list of mounted volumes.

Please Note: Make sure to include the spaces in the command syntax, otherwise the commands won't work.

 

 

Test What Files Exist

 

Now let's look at the file system to see what files exist.

First, look in the root home directory with the following command:

ls /root

Notice that no files exist here. Specifically, the file we created at /root/test-file1.txt on the first instance is not available on the second, since each instance gets its own primary disk (and that disk is lost when the instance is terminated). The second VM has a completely fresh copy of the Debian image.

Next, look in the directory with the mounted volume and the following commands:

ls /mnt/persistent-data
cat /mnt/persistent-data/test-file2.txt

You should see the original test-file2.txt file we created from the first instance, since this is the same disk volume and filesystem that was attached to the first instance earlier. (Note: You can ignore the lost+found directory; it is created automatically by the operating system.)

Now you have seen both types of disk storage options that are available in OpenStack.

 

View Cinder Volumes in vSphere Web Client


Now let's switch from the role of a cloud user to that of a cloud administrator to see how Cinder Volumes are implemented with vSphere.


 

View Multiple Disks attached to the Instance

 

Switch to the vSphere Web Client tab in Chrome.

If the cluster Compute is not already visible, click and expand the Host and Clusters tab of inventory until you can see cluster Compute and all of its hosts and VMs.

Click the refresh button within the Web Client to see the results of what OpenStack has done in the background since you last looked at the Web Client.

Notice that there is a powered-on VM with the UUID as its name.

Click on this VM and view the Summary tab.

Notice in the 'VM Hardware' window that this VM now has two hard disks: a 2 GB hard disk that represents the primary disk, and a second 10 GB hard disk that represents the Cinder volume attached to the VM.

 

 

 

View the "Shell" VM for Housing the Cinder Volume VMDK

 

Additionally, notice there is a VM in the inventory that is in the powered off state and has a name starting with "volume-".

Click on this VM name in the inventory and view the Summary tab.

Notice in the 'VM Hardware' window, this VM has a single hard disk with a size of 10 GB that matches the size of the Cinder volume we created. This is a "shell" VM to house the 10 GB VMDK corresponding to the Cinder volume in scenarios when the volume is not attached to any "real" running VM.

 

Viewing OpenStack Metadata Related to Instances in vCenter


In this chapter we will demonstrate how the vCenter Web Client Plug-in for OpenStack provides a great deal of assistance for vSphere administrators operating vSphere infrastructure with OpenStack frameworks.

The Plug-In provides vSphere administrators the ability to identify OpenStack instances and some of their respective metadata from the vCenter Server.


 

Choose an OpenStack VM to view Metadata

 

Choose any OpenStack deployed VM in the list.


 

 

OpenStack Metadata

 

You'll find at the bottom right of the VM summary page a new window named OpenstackVM that contains lots of OpenStack Instance metadata info.

It's a great way to know which OpenStack user deployed which OpenStack flavor on which tenant, connected to which network.

Please Note: The credentials used earlier in the Plug-in configuration for Keystone access need to be allowed access to all OpenStack Projects. This has already been done for you in this environment.

 

Access to Powerful vCenter Features


Enterprise Cloud Administrators can easily leverage all the features that vSphere offers, while still extracting the benefits of an OpenStack orchestration layer. In this exercise, we will execute a vMotion of one of the provisioned instances (VMs) and observe how connectivity and operations remain functional, without affecting the view that OpenStack has of the environment.


 

Execute a vMotion and Verify Connectivity

 

  1. Click on one of the VMs in the environment.
  2. Take note of the current active host.

 

 

VM Details

 

Right-click on the VM you just explored and select Open Console.

 

 

VM Network Properties

 

  1. Login to the VM using root/vmware as the credentials.  (the console prompt may look different than what you see above)
  2. Display the network configuration for the VM using ifconfig.
  3. Ping the IP of the second VM to verify connectivity.

IMPORTANT: Make sure to type Ctrl-C to abort the ping, and then Ctrl+Alt to release the cursor from the VM console and get back to the vSphere Web Client screen.

 

 

Initiate VM Migration

 

  1. Back in the vSphere Web Client, right-click on the VM and select Migrate.
  2. A warning message will appear indicating that you are about to move a resource that is currently managed by VIO. Click Yes.
  3. In the Migrate screen, select Change Host.
  4. Click the Next button.

 

 

Select Destination Host

 

  1. In the Select Destination Resource screen, make sure you select the Compute resource.
  2. Check the Allow host selection within this cluster checkbox.
  3. Click the Next button.

 

 

Select Destination Host

 

  1. Select the other host in the Compute cluster.
  2. Use the default settings and click the Next button, then Next again and Finish to initiate the vMotion. You may skip the actual vMotion if you are in a hurry; we wanted to demonstrate that the capabilities of vCenter still exist and can be used under OpenStack.

 

 

Relocating the Virtual Machine

 

You can see the virtual machine migrating.

 

 

Confirm Successful VM Migration

 

Verify that the VM has been successfully moved to the other ESXi host in the Compute cluster.

 

 

Verify Network Connectivity

 

Back on the VM console, verify that connectivity to this VM's peer is still successful. To determine the VM IP addresses, you can refer to the OpenStack Dashboard and verify the Instance details for the provisioned VMs.

 

Cleaning Up Instances


In this section, we will decommission the infrastructure directly from the OpenStack Horizon interface and verify that resources are correctly cleaned up in the vSphere environment.


 

Terminating Instances

 

  1. Log into the Horizon interface, if not logged in already, using the credentials user1/user1.
  2. Navigate to Project > Compute > Instances and select the two VMs previously created.
  3. Click on the Terminate Instances button to destroy these VMs.
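
The same cleanup can be done from the CLI, for reference (a sketch; the IDs come from nova list):

nova list
nova delete <instance-id-1> <instance-id-2>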

 

 

Monitor Instance Decommission

 

Verify that the instances are correctly deleted.

 

 

Verify VM Deletion in vCenter

 

Back in vCenter, verify that the VMs have disappeared from the Compute cluster. You can ignore the meta files, as they are used to accelerate the deployment of instances; they are not live running VMs.

CONGRATULATIONS!!! You have completed Module 1.

 

Bonus Chapter: OpenStack Management Best Practices


OpenStack Compute supports the VMware vSphere platform using a Compute Driver.

The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler.

When a virtual machine makes its way into a vCenter cluster, it can use all the vSphere features: vMotion, High Availability (HA), and the Distributed Resource Scheduler (DRS).

Please note that there are best practices and current limitations to be aware of; consult the OpenStack documentation for further details:

http://docs.openstack.org/trunk/config-reference/content/vmware.html


Module 2 - OpenStack Networking with NSX (60 Min)

Introduction


In the traditional model of networking, users attach VMs to existing, mostly hardware-defined networks. However, relying on hardware-defined, pre-existing networks makes a private cloud inflexible, hinders scalability, and doesn't support cloud use cases. Cloud users need the flexibility to create network topologies and modify network access policies to suit their applications.

For example, in most SaaS services, application servers, database servers, and web servers are all required to run on different networks. Additionally, while web servers need to be accessible from the internet, application and database VMs need internet access blocked. These types of customized network topologies and network access controls are provided by VMware NSX through the OpenStack Neutron plug-in available with VIO.


 

NSX Architecture

 

VMware NSX brings many benefits when compared to traditional OpenStack networking relying on VLANs:

Scale

Management and Operations

Advanced Network Services

In summary, NSX offers:

This module will give you an overview of key NSX benefits that empower cloud users to realize custom network topologies and control various aspects of network access.

 

 

Overview of the objectives

 

In this module you'll learn how to build a virtual network topology which leverages the NSX OpenStack Plug-In.

As you can see above, we'll start by creating a web-tier subnet (orange) and a logical router. Then we'll deploy two instances on that subnet to test Layer-2 connectivity between the two instances.

In the following steps, we'll add another subnet (db-tier, green) connected to our logical router. Then we'll deploy another instance on it to test Layer-3 connectivity between the orange and green subnets.

In the remaining steps, we'll demonstrate the benefits of VMware NSX when powering OpenStack networking.

But let's not spoil all the fun right now!

 

 

Build a simple Virtual Network Topology


In this section we will prepare a virtual topology to demonstrate how the Neutron NSX plugin enables programmatic control of networking capabilities in multi-tenant cloud datacenters.

All the following operations can be completed by calling the OpenStack API or by using the OpenStack CLI, but we'll be using the OpenStack Horizon Dashboard in this section.

We will be using OpenStack CLI later in the lab.


 

Authenticate to OpenStack Horizon Dashboard

 

To access Horizon, launch Chrome from the desktop and then:

  1. Select the Login - VIO bookmark. This will launch the OpenStack Horizon Web UI.
  2. When you see the login screen, please enter the following credentials: Username: admin  Password: VMware1!
  3. Then scroll down and click Sign in.

 

 

Overview

 

1. Select Project > Compute > Overview to display the quotas available for the admin tenant. You may also see the Instances created in the previous module.

You have been assigned the default OpenStack quota of:

Please Note: You will not be able to consume all those resources on the Hands-On Lab environment.

This module will focus on OpenStack Networking

2. Click on Network in the left-hand navigation pane to open up this section.

 

 

Create a Logical Network

 

A logical network segment allows VMs to communicate on the same logical subnet across the infrastructure, independent of the configuration of the physical IP fabric.

  1. Select the Networks tab under the Network pane.
  2. Then click on the Create Network button.

 

 

Name your logical network

 

  1. Type in your logical network segment name: web-tier.
  2. Then click the Next button to continue.

Please Note: Leave the Admin State checked. If you uncheck the Admin State, the logical network no longer forwards any packets.

 

 

Add a Subnet

 

Click on the Subnet tab and fill out the form with the following details:

  1. Create Subnet: checked
  2. Subnet Name: web-subnet
  3. Network Address: 10.0.0.0/24
  4. IP Version: IPv4
  5. Click the Next button to continue.


 

 

Subnet Detail

 

The Subnet Detail tab offers the opportunity to disable DHCP or to customize the associated settings, like DHCP Allocation Pools, DNS Name Servers, or Host Routes.

We aren't changing anything in this window, so you can now click the Create button.
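
For reference, the network and subnet you just created map to two neutron CLI calls (a sketch, not a lab step):

neutron net-create web-tier
neutron subnet-create --name web-subnet web-tier 10.0.0.0/24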

 

 

Logical Network configuration details

 

You should now see your newly created web-tier network in the list of available networks. It is already in ACTIVE state.

You can easily add more subnets to it, or delete it completely, by first clicking on More and choosing the corresponding action.

For now click on the web-tier link to get all the details regarding this logical segment.

 

 

Network Detail

 

The Network Detail view allows you to add/delete Subnets or Edit Ports to change the Admin or MAC Learning state. Right now we don't have any ports to edit on this logical switch. You can come back here later to see a port being added, for example when an attachment is made to a logical router.


As we construct our virtual network topology, it is useful to get a graphical view of the environment. This can easily be achieved by selecting Network Topology on the left-side margin.

 

 

Network Topology

 

You should see your newly created web-tier logical network which isn't connected to anything yet.

The 'External Network' network was pre-created in your lab by the OpenStack admins and shared with all Projects to provide external connectivity to your applications.

We now need to create a Logical Router to route traffic from web-tier to the External network. All the VMs connected on the web-tier logical network will be using this router as the default gateway.

Click on the Create Router button.

 

 

Create Router

 

  1. Give your logical router a name: logical-router
  2. Then click the Create Router button.

 

 

Set Gateway (Part1)

 

You should see a green popup message with a Success message saying: "Router logical-router was successfully created."

  1. Your router isn't connected to anything yet, to connect it to the External Network click on Routers in the left-hand window pane.
  2. Now click on the Set Gateway button.

 

 

Set Gateway (Part 2)

 

  1. In the External Network selection box, choose External Network.
  2. Then click the Set Gateway button.

You should see a message saying: "Success: Gateway interface is added."

 

 

Add Interface (Part 1)

 

You still need to connect your logical router to the web-tier logical network.

To add an interface to your router, click on logical-router.

 

 

Add Interface (Part 2)

 

Then click on the Add Interface button.

 

 

Add Interface (Part 3)

 

  1. Select web-tier 10.0.0.0/24 (web-subnet) from the drop-down list.
  2. Then click the Add Interface button.

A message saying "Success: Interface added 10.0.0.1" will appear shortly.
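
Taken together, the router workflow in this chapter reduces to three neutron CLI calls (a hedged sketch; the external network is referenced here by name):

neutron router-create logical-router
neutron router-gateway-set logical-router "External Network"
neutron router-interface-add logical-router web-subnet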

 

 

Network Topology

 

Your logical router is now connected to an uplink (External Network) and to your logical network (web-tier).

You can confirm by looking again at the Network Topology UI.

Select the Network Topology tab along left-hand window pane in the Horizon UI.

It should look like the screenshot above, if not please review the previous steps to correct any mistakes.


 

 

Bonus for Creating a Simple Virtual Network Topology

 

To celebrate creating a Simple Virtual Network Topology, here are some bonus CloudCred points.

Interested in playing? Scan the QR Code and you will be taken to a website where you can create an account!

 

Testing L2 Network Connectivity


To test our web-tier logical network we first need to deploy at least two instances on it.


 

Deploy new Instances attached to user defined networks

Once our instances are active on the web-tier subnet, we'll use ping to test their connectivity.

 

 

Instances

 

  1. Click on the Compute panel section in the left-hand window pane of the Horizon UI.
  2. Click on Instances tab under the Compute panel.
  3. Then click on the Launch Instance button.

 

 

Enter Instance Details

 

Fill out the Details tab form with the following values:

  1. Select m1.small for flavor
  2. Select 2 for instance count
  3. Select Boot from Image from the 'Instance Boot Source:' pull-down menu.
  4. Then select the Debian-Small image from the 'Image Name:' pull-down menu.

 

 

Attach to User Defined Network

 

  1. Now select the Networking tab.
  2. Click on the + button next to your web-tier network. The logical network will move to the 'Selected Networks' area, meaning your instances will be connected to web-tier when they launch.
  3. Click on the Launch button to start the provisioning process.
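For reference, a roughly equivalent Nova CLI invocation is sketched below. The instance name 'web' is hypothetical, and the web-tier network ID must be looked up first:

neutron net-list
nova boot --flavor m1.small --image Debian-Small --nic net-id=<web-tier-net-id> --min-count 2 web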

 

 

Launching the Instances

 

You should see 2 VMs being started and spawned. They are shown with diagonal moving lines, which indicates they are still being provisioned and not yet ready.

 

 

View Instances

 

The two test instances that we just launched are listed above. If they are in a 'Scheduling' or 'Spawning' state, give it a few seconds until they move into the 'Active' state.

Take note of the IP addresses of the two VMs (in this example, 10.0.0.3 and 10.0.0.4); you will need this info soon. The IP addresses were taken from the range specified when we created the 'web-tier' logical network (i.e., 10.0.0.0/24).

Please Note: Since the OpenStack IceHouse release, Nova waits until the VM is wired correctly before it boots the instance. This is a callback procedure: Nova defers booting the instance until Neutron informs it that the device has been configured appropriately.
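This behavior is controlled by two nova.conf options; the sketch below shows their usual defaults (verify against your own deployment):

[DEFAULT]
vif_plugging_is_fatal = True
vif_plugging_timeout = 300

With these settings, Nova waits up to 300 seconds for Neutron's "network-vif-plugged" event and fails the boot if it never arrives.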

 

 

Graphical View of Network Topology

 

  1. Click the Network tab to open the pane.
  2. Then click on the Network Topology tab under the Network pane in the left-hand navigation window. You will see that both test VMs are connected to the web-tier network.
  3. Next we will confirm that the two VMs can actually communicate with each other. For this test, we will use the "ping" command from a VM console. To launch the console, hover your mouse over one of the test VMs and click on the open console link. The console will launch in a separate browser window.

 

 

Test Connectivity

We can now test connectivity with "Ping".

 

 

Login to Console

 

You may have to bypass certificate checking by clicking on "Proceed Anyway."

Once inside the console window, authenticate with:

Login: root
Password: vmware

 

 

Ping Instance

 

First check the IP address of the VM you've selected:

ifconfig

Ping the other test instance's IP address (10.0.0.3 in our example).

Please Note: If you forgot the IP addresses of your test instances, you can find them in the Instances list in the Horizon Dashboard.

ping -c 2 10.0.0.3

You can also ping the logical-router.

ping -c 2 10.0.0.1

This validates that both VMs can communicate: they have L2 network connectivity because they are attached to the same logical network.

Congratulations, you've successfully completed the first portion of this module.

You can now close the console window.

 

Topology with VMs on different User Defined Networks


In a previous chapter, we tested IP connectivity between VMs connected to the 'web-tier' logical network.

In this lesson we will create another logical network called 'db-tier' and provision an instance on it. Then we will connect the two logical networks 'db-tier' and 'web-tier' using our logical router.


 

Create a DB Tier Network

 

In Horizon, navigate to Project > Network > Networks and click on Create Network.

 

 

Create a DB Tier Network

 

Name the network db-tier and click Next.

 

 

Configure DB Tier Subnet

 

1. Enter the network parameters for the db-tier subnet:

2. Click Next.

 

 

Configure DB Tier Subnet

 

Ensure Enable DHCP is checked and click Create.
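The same network and subnet can be created from the CLI, sketched below. The 172.16.10.0/24 CIDR is purely illustrative; use the values shown in the subnet screenshot, and note that DHCP is enabled by default:

neutron net-create db-tier
neutron subnet-create db-tier 172.16.10.0/24 --name db-subnet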

 

 

Add the DB Tier Interface to the Logical Router

 

1. Click on Network > Routers.

2. Under Name, click on the logical-router hyperlink.

 

 

Add Interface to the Router

 

Click on Add Interface.

 

 

Add Interface to the Router

 

Under Subnet, select the db-tier subnet and then click on Add Interface.
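The CLI equivalent, a sketch assuming the subnet was named db-subnet:

neutron router-interface-add logical-router db-subnet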

 

 

Create a DB Instance and connect it to the DB Network

 

1. Boot an instance named db with the Debian image you used earlier: go to Compute > Instances and click Launch Instance.

2. Click on the Networking tab.

 

 

Attach the Instance to the DB Network

 

Attach the instance to the db-tier network and click Launch.
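A roughly equivalent Nova CLI call, sketched below. The m1.small flavor is an assumption, and the db-tier network ID comes from neutron net-list:

nova boot --flavor m1.small --image Debian-Small --nic net-id=<db-tier-net-id> db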

 

 

View New Topology

Back in the Network Topology view, the db-tier network should now appear connected to logical-router alongside web-tier.

 

 

Verify Connectivity

 

Connect to the console of the db instance and log in using the same credentials as before (root / vmware).

 

 

Verify Connectivity

 

Ping one of the web instances to verify routing is working correctly.

 

 

Troubleshooting

 

If the previous ping wasn't successful, open the Network Topology tab in the Horizon Dashboard to check the topology. You may have to authenticate again; use the same Horizon credentials as before.

The topology should look like the one above (color and positioning may differ), with all components active. Hover over each of them to verify their status.

If you really can't figure out what's happening, then please ask for help!

 

Security Groups


Security groups are sets of IP filter rules that are applied to an instance's networking. All projects have a "default" security group. Let's review our current set of rules.


 

Access and Security

 

  1. In the left-hand window pane, click on the Compute section to open it.
  2. Then click on the Access & Security tab.
  3. Click on the Manage Rules button for the default Security Group.

 

 

Security Group Rules

 

As you can see, all traffic leaving the instances is allowed (Egress), but only traffic from the same Security Group (default) is allowed to enter the instances (Ingress).

Please Note: The OpenStack Nova configuration parameter 'allow_same_net_traffic' controls whether these rules apply to hosts that share a network. When set to true (the default in Nova's configuration), hosts on the same subnet are not filtered and may pass all types of traffic between them.
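For reference, this is the corresponding nova.conf line, shown with its default value:

[DEFAULT]
allow_same_net_traffic = True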

Click on the Access & Security tab to go back to the previous screen.

 

 

Create Security Group (Part 1)

 

Click on the Create Security Group button.

 

 

Create Security Group (Part 2)

 

Provide a name and a description:

  1. Name: web-sg
  2. Description: Web Frontend Security Group
  3. Then click on the Create Security Group button.

A popup should come up: 'Successfully created security group: web-sg'
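The CLI equivalent, as a sketch:

neutron security-group-create web-sg --description "Web Frontend Security Group"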

 

 

Edit Rules

 

Now let's edit the web-sg security rules: click on the Manage Rules button.

 

 

Delete Rules

 

These are the default security group rules. Let's go ahead and delete them all and see what happens. Select all the checkboxes and click Delete Rules.

 

 

Confirm Delete

 

Confirm Delete.

 

 

Add "ALL ICMP" Ingress to web-tier security group

 

On that same page, after the deletion, click on Add Rule and enter the following:

Rule: ALL ICMP

Direction: Ingress

Remote: Security Group

Security Group: web-sg

Click Add.

Repeat the steps above, but set the Direction to Egress.
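Both rules can also be created from the CLI, sketched below; use neutron security-group-list to find the web-sg group ID:

neutron security-group-rule-create --protocol icmp --direction ingress --remote-group-id <web-sg-id> web-sg
neutron security-group-rule-create --protocol icmp --direction egress --remote-group-id <web-sg-id> web-sg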

 

 

Both Egress and Ingress Rules

 

After completing the above steps, you should see 2 rules on your screen.

 

 

Assign web-sg to both of our test instances

 

  1. Click on the Instances menu in the left-hand window pane.
  2. Then click on the More button to open up the menu of the last instance in the list. Write down or remember the IP address of this test instance (10.0.0.3).
  3. Finally, click on Edit Security Groups.

 

 

Switch instance to the web-sg Security Group

 

To switch your instance to the newly created web-sg Security Group:

  1. Click on the '-' button in front of the default Security Group to remove it from your instance.
  2. Then click on the '+' button in front of the web-sg group. The two groups should now have swapped places.
  3. Click the Save button.

Repeat for the other test instance. Both instances should now be in the web-sg security group.
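The CLI equivalent for each instance, as a sketch; replace <instance-name> with the instance's name from the list:

nova remove-secgroup <instance-name> default
nova add-secgroup <instance-name> web-sg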

 

 

Test Connectivity

 

Go back to the console of the server whose security group you changed.

Now confirm that you currently cannot ping the db instance, but that you can still ping the other test instance on the web-tier network.

What we just did was place this web server in a security group that allows ping access only to and from servers in the web-sg security group. If you change it back to default, you will see that you can ping the db node again once the default group has been re-added.

 

 

Switch Instance back to the default Security Group

 

Please go back to the Edit Security Groups popup window for your instance, revert it to the default Security Group, and then click the Save button. If you try the ping again, it should now succeed.

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-SDC-1420

Version: 20150330-180357