VMware Hands-on Labs - HOL-1820-01-EMT


Lab Overview - HOL-1820-01-EMT - VMware Integrated OpenStack (VIO)

Lab Guidance


Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time.  The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

In this Hands-on Lab we provide an introduction to VMware Integrated OpenStack and demonstrate how it integrates with NSX, Log Insight, vRealize Automation, and vRealize Operations Manager. We let you take the reins in launching workloads, attaching volumes, creating logical networks, creating security policies, and consuming LBaaS, all while providing granular visibility and reporting.

Disclaimer:

Lab Module List:

 Lab Captains:

Content Lead:

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the Right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved.  All your work must be done during the lab session.  But you can click EXTEND to increase your time.  If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes.  Each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Module 1 - Introduction to VMware Integrated OpenStack (30 Minutes)

Introduction to OpenStack


Note: If you are already familiar with basic OpenStack concepts, you can skip this lesson and proceed to the next one, which covers VMware Integrated OpenStack.


 

What is OpenStack?

OpenStack is open source software that delivers a framework of services for API-based infrastructure consumption. The OpenStack framework requires hardware- or software-based infrastructure components and management tools to build a functional OpenStack cloud. The "plug-in" architecture of OpenStack services enables vendors (such as VMware) to integrate their infrastructure solutions (such as vSphere, NSX, and VSAN) to deliver an OpenStack cloud.

 

 

OpenStack is a Cloud API layer in a cloud technology stack

A typical cloud technology stack consists of the following major components:

In a non-cloud datacenter model, an application owner would contact one or more datacenter administrators, who would then use software infrastructure tools (e.g., VMware vSphere) to deploy the application workloads on the application owner's behalf on top of physical compute, network, and storage hardware.

OpenStack is a software layer that sits on top of the software infrastructure and enables API-based consumption of that infrastructure. OpenStack enables a "self-service" model in which application owners can directly request and provision the compute, network, and storage resources needed to deploy their applications.

The primary benefits of self-service are increased agility, because application owners have "on demand" access to the resources they need, and reduced operating expenses, because manual, repetitive deployment tasks are eliminated.

 

 

OpenStack Components

 

OpenStack splits infrastructure delivery functions into several different services. Each of these services is known by its project code name:

OpenStack services orchestrate and manage the underlying infrastructure and expose APIs for end users to consume the resources. OpenStack's strength is a highly customizable framework, allowing those deploying it to choose from a number of different technology components, and even customize the code themselves.

 

VMware Integrated OpenStack (VIO)


Note: This section introduces VMware Integrated OpenStack, its components and installation requirements. If you want you can skip this section and proceed to the next.

VMware Integrated OpenStack (VIO) is a VMware-supported OpenStack distribution designed to run on top of an existing VMware infrastructure. VIO empowers any VMware administrator to easily deliver and operate a production-grade OpenStack cloud on VMware components. This means that you will be able to take advantage of vSphere features such as HA, DRS, and VSAN for your OpenStack cloud, and also extend and integrate it with other VMware management components such as vRealize Operations and vRealize Log Insight.


 

VMWARE INTEGRATED OPENSTACK COMPONENTS

VIO consists of two main building blocks, the VIO Manager and the OpenStack components. It is packaged as an OVA file that contains the Manager server and an Ubuntu Linux virtual machine to be used as the template for the different OpenStack components.

 

 

VIO INSTALLATION REQUIREMENTS

To successfully deploy VMware Integrated OpenStack you will need at least the following:

The hardware requirements are approximately 56 vCPUs, 192 GB of memory, and 605 GB of storage.

On top of that, you must add the resources required by NSX for vSphere, such as the NSX Manager, the three NSX Controllers, and the NSX Edge pool.

 

VIO Architectural Components


In this section we give an overview of the different VMware product integrations that are used with VIO. If this is the first time you are looking at VIO, it is beneficial to do a quick review of this section before you proceed to the next module.


 

VMWARE INTEGRATED OPENSTACK (VIO) ARCHITECTURE

 

VIO is built on top of VMware's Software-Defined Data Center infrastructure. With purpose-built drivers for each of the major OpenStack services, VIO optimizes consumption of compute, storage, and network resources. VIO also includes OpenStack-specific management extensions for the vSphere Web Client, vRealize Operations Manager, and Log Insight, allowing you to use your existing tools to operate and manage your OpenStack cloud.

 

 

NOVA COMPUTE INTEGRATION

The vCenter driver exposes compute resources to the OpenStack Nova service through vCenter API calls. Resources are presented as cluster-level abstractions. The Nova scheduler chooses the vSphere cluster for new instance placement, and vSphere DRS handles the actual host selection and VM placement. This design lets vSphere treat OpenStack instance VMs like any other VMs, so services such as DRS, vMotion, and HA are all available.

 

 

CINDER AND GLANCE INTEGRATION

The VMDK driver exposes storage resources to the OpenStack Cinder service as block devices through datastore/VMDK abstractions. This means that any vSphere datastore, including VSAN, can be used as storage for boot and ephemeral OpenStack disks. Glance images may also be stored as VMDKs or OVA files in datastores.

 

 

NEUTRON NETWORKING INTEGRATION

The NSX driver supports both the vSphere Distributed Switch (vDS) and NSX for true software-defined networking. Customers can leverage their existing vDS to create provider networks that isolate OpenStack tenant traffic via VLAN tagging. They can also take advantage of NSX for dynamic creation of logical networks with private IP overlays, logical routers, floating IPs, and security groups, all enabled across a single physical transport network.
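
As a purely illustrative sketch, both network types can be requested through the Neutron API, for example with the openstack CLI. The names and VLAN ID below are placeholders, not values from this lab, and provider network creation requires admin rights:

# Provider network backed by an existing vDS/VLAN (the physical network name is deployment specific)
openstack network create --provider-network-type vlan --provider-physical-network dvs --provider-segment 100 provider-vlan-100

# Tenant logical network; with the NSX plugin this becomes an overlay logical switch
openstack network create tenant-net-1
openstack subnet create --network tenant-net-1 --subnet-range 10.10.10.0/24 tenant-subnet-1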

 

 

MANAGEMENT INTEGRATION

The vSphere Web Client has been extended to include OpenStack-specific metadata, allowing you to search by terms appropriate to your VMs (tenant, flavor, logical network, etc.). VIO also includes a vSphere Client plugin for managing OpenStack consumption of vSphere resources. Management packs for vRealize Operations Manager and Log Insight allow for OpenStack-specific monitoring based on metadata extracted from the OpenStack services.

 

What's new with VIO 4.0


This section talks about some of the new features that have been released in the VIO 4.0 architecture.


 

NEW FEATURES IN THIS RELEASE

VMware Integrated OpenStack enables the rapid deployment of OpenStack on a VMware vSphere virtual platform. This release provides the following new features and enhancements.

 

Conclusion


 

Congratulations on completing  Module 1.

If you are looking for additional documentation on VIO, try one of these:

Proceed to any module below which interests you most.


 

Evaluate VMware Integrated OpenStack

Would you like to see how VMware Integrated OpenStack could work in your data center?  Request a free 60-day evaluation here to try it out in your own environment.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 2 - Getting Started with VIO (60 Minutes)

Introduction


In this module we will review the VMware Integrated OpenStack (VIO) deployment, access the VIO admin interface, explore how to create VIO projects and users, create VIO instances (VMs), create persistent VIO storage volumes, and explore how VIO components interact with the vSphere Web Client.


Review VIO Deployment


In this part of the lab, we will explore OpenStack and how it leverages the VIO plugin to deploy VMs.


 

Checking Lab Status

 

You must wait until the Lab Status is at Ready before you begin.  If you receive an error message, please end the lab and redeploy another.

 

 

Launch Google Chrome

 

  1. Click on the Desktop Icon for Google Chrome.

 

 

Login to vCenter

 

  1. Click on the RegionA vCenter bookmark to open the vSphere Web Client in a new tab.
  2. Click the checkbox for Windows session authentication (since you are logged in as Administrator you can use this short-cut). You can also use administrator@vsphere.local as user name with password VMware1! to log in.
  3. Click the Login button.

Please Note: The first time you log in to the vSphere Web Client may take a bit longer, in some cases up to a minute.

 

 

VMware Integrated OpenStack Plug-In

 

  1. Click the vSphere web client Home button
  2. Click on the VMware Integrated OpenStack plugin icon

 

 

VIO Plug-In

 

  1. Click on the Configuration Tab

 

 

VIO Plug-In - Monitor

 

As you can see, the VIO deployment information is provided including:

 

 

VIO Deployment Validation

 

  1. Click on OpenStack Deployments under Deployment List

 

 

VIO Deployment Status

 

Make sure your OpenStack deployment is shown as Running.   If the Status is not shown as Running, you may need to restart the lab.

 

Accessing the VIO Deployment


In this section we will use the VIO Horizon portal to log in to the VIO management interface.


 

Let's Start Using OpenStack Horizon

 

We will now start using OpenStack by logging into the Horizon portal (not to be confused with the VMware Horizon EUC products). Horizon provides a web portal for both administrators and users. Administrators can use the UI for common tasks such as creating users, managing their quotas, and checking infrastructure usage. In Horizon, cloud administrators have a different view than cloud users: while cloud administrators can see and manage all infrastructure resources, cloud users can only see the inventories they have created.

We will start with an orientation of the Horizon Web UI for cloud administrators and then switch to a cloud user view later.  

  1. Click on the blank tab to open a new window
  2. Click on the VIO bookmark in your browser bar (https://vio.corp.local)

 

 

Login To OpenStack Horizon

 

  1. Select the OpenStack authentication method
  2. Type the Domain: local
  3. Type the User Name: admin
  4. Type the Password: VMware1! (case-sensitive)
  5. Click the Connect button

Please Note: The first time you log in to the Horizon Dashboard may take a bit longer, in some cases up to a minute.

 

 

OpenStack Admin Overview

 

Upon initially logging in as 'admin', note the following key tabs:

At the top is a drop-down menu that allows an admin to switch views to a specific user. For example, if an OpenStack administrator wants to see what resources are visible to a particular user, the administrator can select a different user from the drop-down list. For now, please ensure that the drop-down has 'local - admin' as the selected user.

  1. Click on the Admin panel

 

 

Running Instances

 

  1. Click on Project panel
  2. Click on Instances panel

In this lab there are already two deployed instances, test-vm-1 and test-vm-2, both of which are Ubuntu instances.

 

 

Network Review

 

Now, we will look at the current network topology that has been setup with OpenStack.

  1. Select the Network tab under the Project panel

 

 

Network Topology

 

  1. Select the Network Topology tab under the Network panel

For this lab, we have pre-created networks called "external-network"  and "test-network".

The two networks represent a tenant network (test-network), and a provider network (external-network).  When you have multiple clients, they would each get their own tenant network, but all would share the provider network as a gateway to external resources such as the Internet or any corporate systems.

The test-vm-1 and test-vm-2 instances are deployed on the tenant "test-network".

 

 

Admin - Networks

 

  1. Click on Admin

In order to use the "test-network" to create instances in the next lesson, we will need to make the "test-network" a shared resource.

 

 

Make the "test-network" Shared

 

We purposely created the "test-network" so that it can only be seen by the "admin" tenant. Next we will make that network "shared" so that other users can also utilize it with instances they create.

  1. Select Networks
  2. Click Edit Network for the "test-network"

 

 

Update Network Settings

 

Now we will share the "test-network" with other users so that we can utilize the "test-network" in subsequent lessons.

  1. Check the Shared box
  2. Click Save Changes

 

Projects and Users


In OpenStack, users are grouped in containers called projects. These projects are effectively tenants in the OpenStack cloud environment. Each project has an assigned quota of compute, network and storage resources that are shared by all users in that project. Projects are isolated from each other, that is, users in one project can't see the users and resources of other projects. Users must be associated with at least one project, though they may belong to more than one.

In this section we will create a couple of projects and assign users to them.
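
For reference, the same objects can be created from the OpenStack CLI. This is only a sketch; the project, user, and role names below are illustrative, and the exact role name ("member" vs. "_member_") depends on the OpenStack release:

# Create a project with a description
openstack project create --description "HOL demo project" hol-project

# Create a user inside that project
openstack user create --project hol-project --password VMware1! hol-user

# Grant the user a role on the project (role name depends on your release)
openstack role add --project hol-project --user hol-user member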


 

Creating Projects

 

  1. Click the Admin tab on the left-hand navigation bar
  2. Click the Identity tab under Admin
  3. Click the Projects tab under the Identity Panel
  4. Click the Create Project button

 

 

Working with Quotas

Quotas are used to set operational limits around the resources assigned to a project. By implementing quotas, OpenStack cloud administrators can predictably allocate capacity to tenants and prevent one tenant from monopolizing shared resources.
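
As a hedged example, quotas can also be inspected and adjusted from the CLI; the project name and limits below are illustrative:

# Show the current quota for a project
openstack quota show hol-project

# Cap the project at 10 instances, 20 vCPUs, and 50 GB of RAM
openstack quota set --instances 10 --cores 20 --ram 51200 hol-project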

 

 

Creating a New User

 

  1. Select the Users tab
  2. Click Create User to display the user menu

 

User Instance


An instance is OpenStack's terminology for a virtual machine. Users can provision instances and attach them to existing or new OpenStack networks.


 

Creating User's Instance

In this section, we will illustrate the process of creating instances from OpenStack.
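
The Horizon steps that follow also have a CLI equivalent. As a sketch (the image, flavor, and network names are placeholders for whatever exists in your environment):

# List what is available to boot from
openstack image list
openstack flavor list
openstack network list

# Boot an instance on the shared test-network (names are illustrative)
openstack server create --image photon-os --flavor m1.tiny --network test-network vio-user-test-1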

 

 

Login as vio-user1

 

Now that you have logged out as admin, you will need to log in as the user vio-user1 to create your new instance.

Log into the Horizon Web UI, this time using the following credentials:

  1. Authenticate Using: Openstack
  2. Domain: local
  3. User Name: vio-user1
  4. Password: VMware1!
  5. Click on Connect

 

 

User Overview

 

The Overview section shows how much of the user's current quota limits has been used.

Since we haven't done anything yet, all categories show 0 resources used except for Security Groups. One security group is used by the “Internal Shared” network available to all users for the purposes of this lab. We will revisit networking in greater detail later on.

 

 

Launch an Instance

 

  1. Click on the Instances tab, on the left hand side.
  2. Click on the Launch Instance button.

 

 

Instances built

 

Now you can view the instances being built. These instances are built on Photon OS; they are small and should launch within a minute.

Ensure that the instances are Running.

  1. Click on the vio-user-test-1 Instance to view details of the build

 

 

Instance Details

 

Here you can find more details about the instance such as the IP provisioned and the unique ID that OpenStack provisioned for this instance.

  1. Click on Instances to go back to the instance table view

 

 

Instance options

 

  1. Click the drop down at the far right of an instance

Here you can find all the options that are available to you. Note that some of the useful options are Console, View Log, Reboot, and Shut Off Instance.

 

 

Overview of Instances

 

Now, let's go back to the Overview screen and see how it has been updated.

  1. Click on Overview link on the left side of the page.

You can now see that graphs have been updated to reflect the new instances that have been created.

 

Volumes


Why do we need volumes at all?   In OpenStack, the instance you have provisioned already has a local disk, but this disk will not persist if the instance is terminated.

Imagine a workload where 1 hour of computation needs to occur at the end of each business day. Ideally, you would like to spin up the instance only when necessary, for 1 hour per day. However, if you were only using a local disk, you would lose any data you generated between runs. This is where volumes come in: they are persistent storage that can be attached to and detached from any running VM on demand.
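
The volume workflow in the following steps can also be driven from the CLI. A minimal sketch, using names that mirror the ones in this lesson:

# Create a 5 GB persistent volume
openstack volume create --size 5 vio-user1-data-volume1

# Attach it to a running instance; it appears in the guest as a new disk (e.g. /dev/sdb)
openstack server add volume vio-user-test-1 vio-user1-data-volume1

# Later, detach it so it can be moved to another instance
openstack server remove volume vio-user-test-1 vio-user1-data-volume1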


 

Working with Volumes

 

  1. Click on the Volumes tab within the Project pane on the left-hand side of the screen.
  2. Click the Create Volume button.

This will start the creation of a persistent volume.

 

 

Create volume

 

Fill in the following:

  1. Volume Name: vio-user1-data-volume1
  2. Size (GB): 5
  3. Click on Create Volume

 

 

New Volume

 

Please wait as your volume is deployed. Wait until the status changes to Available.

  1. Click on the Dropdown button next to Edit Volume
  2. Select Manage Attachments

We will now attach the volume to an Instance

 

 

Attach Volume to an Instance

 

  1. Select the vio-user-test-1 instance
  2. Click Attach Volume

Note: The Volume Instance GUID may be different from the one in the example.

 

 

Volume attached

 

Now you return to the Volumes page

Wait for the Volume to show In-Use.

Once the Volume is attached, you will see /dev/sdb as the attach point on instance vio-user-test-1.

 

 

Start console through OpenStack

 

  1. Click on Instances
  2. Click the Actions drop-down menu for vio-user-test-1
  3. Select Console

 

 

Show only console

 

  1. Click on "Click here to show only Console"

 

 

Login to the VM

 

Login with the following

username: root

password: VMware1!VMware1!

 

 

 

View Disk Details

 

  1. At the command prompt, run the following command to scan all the attached disks on this instance:
echo "- - -" > /sys/class/scsi_host/host0/scan
  2. Once the disks are scanned, run the command below to view all the partitions:
fdisk -l

You will notice that the second hard drive is showing up but it is not formatted or mounted.

 

 

Partition new volume

 

Run the following command to partition the /dev/sdb disk:

fdisk /dev/sdb

  1. Command (m for help): n

This will create a new partition.

  2. Select (default p): Press Enter <leave default setting>
  3. Partition Number: Press Enter <leave default setting>
  4. First Sector: Press Enter <leave default setting>
  5. Last Sector: Press Enter <leave default setting>
  6. Command (m for help): w

This will write changes and exit the fdisk tool.
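
If you prefer a non-interactive alternative to the fdisk dialog above, the same single full-disk partition can be created with parted. This is only a sketch and is not part of the lab steps:

# Create an msdos partition table and one primary partition spanning the whole disk
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 0% 100%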

 

 

Format the new volume

 

  1. Run the fdisk -l command again. You will notice that a new partition called /dev/sdb1 has been created.
fdisk -l
  2. Format the partition with the command below:
mkfs.ext4 /dev/sdb1

 

 

 

Mount Partition

 

  1. Next, run the command below to create a directory to use as a mount point:
mkdir /mnt/persistent-data
  2. The command below will mount /dev/sdb1 to the mount point we created:
mount /dev/sdb1 /mnt/persistent-data
  3. Next, we will use the command below to check whether the persistent drive is mounted:
df -h

You should now see the /dev/sdb1 disk in the list of mounted drives.
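
Note that a mount made this way does not survive a reboot. If you wanted the volume mounted persistently, you could add an /etc/fstab entry; this is a sketch only and is not required for this lab:

# Append an fstab entry; "nofail" lets the instance boot even if the volume is detached
echo "/dev/sdb1  /mnt/persistent-data  ext4  defaults,nofail  0  2" >> /etc/fstab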

 

 

Test File

 

To test that persistent volumes are working, we will create a file on both the persistent and the non-persistent volumes. Then we will attach the persistent volume to a different instance. Make sure to press Enter after each command.

  1. Create a non-persistent file on the non-persistent volume
echo "Hello non-persistent World" > /root/test-file1.txt
  2. Create a persistent file on the persistent volume
echo "Hello persistent World" > /mnt/persistent-data/test-file2.txt
  3. Change directory to the persistent-data folder
cd /mnt/persistent-data/
  4. List the files and validate that the test-file2.txt file exists
ls
  5. View the test-file2.txt contents
cat test-file2.txt

Output should show Hello persistent World.

  1. Click the back button on your browser to end the full screen console.

 

 

Edit Volumes

 

Now that we have formatted the drive and created the test files, we will detach the volume from our instance and attach to the other instance.

  1. Click on Volumes
  2. Click on the Actions drop down menu
  3. Select Manage Attachments

 

 

Detach Volume

 

  1. Click Detach Volume

This will detach the volume from your existing Instance and allow you to attach it to another.

 

 

Confirm Detach Volume

 

  1. Click Detach Volume when prompted to confirm.

 

 

Volume status change

 

Now you will notice that the Attached To field is empty.

 

 

Volume available to attach

 

Now you will attach the Volume to the other instance and test to see if the file is there.

  1. Click on the Actions Drop-down for vio-user1-data-volume1
  2. Select Manage Attachments

 

 

Change volume attachment

 

Select vio-user-test-2 instance and attach the persistent volume to it.

  1. Select the vio-user-test-2 instance from the drop down list
  2. Click Attach Volume

 

 

Volume attached

 

Now you see that your volume is attached and ready to use on the vio-user-test-2 Instance. You will also notice the mount point of this drive.

 

 

Start console through OpenStack

 

  1. Click on Instances
  2. Click on vio-user-test-2 instance

 

 

Open Console - vio-user-test-2

 

  1. Click on Console

 

 

Make console full screen

 

  1. Click on "Click here to show only Console"

 

 

Log into the VM

 

Login with the following

username: root

password: VMware1!VMware1!

 

 

 

Rescan for new volume

 

  1. Run the following command at the prompt to have the OS rescan for attached disk devices:
echo "- - -" > /sys/class/scsi_host/host0/scan
  2. Run fdisk again to verify that the operating system can see the new drive:
fdisk -l

You should see the drive listed as /dev/sdb1

 

 

Mount the persistent disk

 

Next we will create a directory and mount the disk. We will not have to format this disk since it has been formatted previously.

  1. Create a directory to be used as a mount point
mkdir /mnt/persistent-data
  2. Mount /dev/sdb1 to the newly created mount point
mount /dev/sdb1 /mnt/persistent-data
  3. Run df -h to verify that the persistent-data disk is mounted
df -h

Observe that the /dev/sdb1 disk shows up as mounted under /mnt/persistent-data

 

 

Look for file on Volume once we mount it

 

Now let's check whether the file we created is still there.

  1. Change directory to the persistent-data mount point
cd /mnt/persistent-data
  2. View the directory to verify that test-file2.txt exists
ls
  3. View the contents of the file to verify that it is the file we previously created on the persistent disk
cat test-file2.txt

Output should show Hello persistent World.

  1. Click the BACK button on your browser to end the full screen console.

 

vCenter Client and Openstack


Now, we will go into the vCenter Client and see what information is shared between OpenStack and vCenter.


 

Return to the vSphere Web Client

 

  1. Click the vSphere Web Client Tab

 

 

vCenter client and OpenStack

 

  1. Click Home
  2. Click Hosts and Clusters
  3. Expand the RegionA01-COMP01 compute cluster
  4. Click on the vio-user-test-1 VM within the RegionA01-COMP01 compute cluster.
  5. Click the Summary tab for vio-user-test-1 VM.

Here you can see information that was found in OpenStack Horizon is available within vCenter.

 

 

Review NSX Security Groups

 

Note that this VM is part of the NSX Security Group that was chosen when we created this VM in OpenStack.

 

 

Review OpenStack VM details in vCenter

 

Note the OpenStack VM section which contains details about the VM we created in OpenStack.

 

 

Notes provide more info from Openstack

 

The notes section of the VM is updated when it gets deployed by OpenStack.

 

 

Shell VM for Cinder Volume VMDK

 

Notice in the Inventory tree there is a VM named vio-user1-data-volume1 and it is powered off.

  1. Click on VM vio-user1-data-volume1
  2. Click on VM Hardware

 

Environment Cleanup


In this section we will clean up the VM instances that were created in this Module.


 

Cleaning up Instances

 

Return to your VIO Horizon webpage and log in as the user vio-user1, if you were logged out.

  1. Click the VIO Horizon Tab

 

 

 

Delete Instances

 

We will now need to remove instances used in this module.

  1. Click on the Instances tab
  2. Select both instances
  3. Click on Delete Instances

 

 

Deleting Instances

 

You should now see the task for each instance change to Deleting.

Once the task is done, all instances should be removed from the list

 

 

Verify VM Deletion in vCenter

 

Return to vCenter tab and verify that the instances have now been removed.

Notice that the vio-user1-data-volume1 shell VM is still there, since it is tied to the persistent storage that was created earlier in this module.

 

Conclusion


 

Congratulations on completing  Module 2.

If you are looking for additional documentation on VIO, try one of these:

Proceed to any module below which interests you most.


 

Evaluate VMware Integrated OpenStack

Would you like to see how VMware Integrated OpenStack could work in your data center?  Request a free 60-day evaluation here to try it out in your own environment.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 3 - VIO Basic Networking (60 Minutes)

Module Objectives & Introduction


In the traditional model of networking, users attach VMs to existing networks, which are mostly hardware defined. However, relying on hardware-defined, pre-existing networks makes a private cloud inflexible, hinders scalability, and doesn't support the majority of cloud use cases. Cloud users need the flexibility to create network topologies and modify network access policies to suit their applications.

In most IaaS/SaaS environments, services such as web, application, and database servers are all required to run on different L2 networks. Additionally, while web servers need to be accessible from the internet, application and database server VMs need to be blocked from internet access. These types of customized network topologies and network access controls are provided by VMware NSX through the OpenStack Neutron plug-in available with VMware VIO.


 

VIO Architecture with NSX Focus

 

VMware VIO supports two deployment options for networking: one utilizes the vDS with more traditional VLAN-backed port groups, and the other uses VMware NSX. In this module we will be focusing on the VIO + NSX model and its features. With that said, this module assumes the lab user has some background and basic understanding of VMware NSX and/or has taken other NSX-related Hands-on Labs.

Some of the many benefits of VIO + NSX include:

 

 

NSX-v Architecture with VIO Consumption

 

VMware NSX brings many benefits when compared to a traditional OpenStack networking configuration relying on VLANs:

Scale

Management and Operations

Advanced Network Services

 

 

Overview Of Module Objectives

This module is broken down into three main sections:

1. Basic Virtual Networking

In this section, we focus on building out a few instances (OpenStack terminology for VMs) that connect into a couple of virtual networks, with a logical router providing connectivity between the virtual networks and an external path out. We also demonstrate how configurations made in the Horizon Dashboard get translated via the Neutron plugin into NSX in vCenter.

2. Security Groups & Micro-Segmentation

In this section, we focus on creating and understanding Security Groups and also implement Micro-Segmentation. VIO together with NSX provides not only a Distributed Firewall feature-set but also Micro-Segmentation where a Security Group policy can be used to allow/deny access between instances on the same L2 network. This feature has become very important and increasingly popular in setting appropriate security boundaries without having to rely on traditional L2 boundaries.

3. Advanced Networking

In this section, we focus on setting up static routing, enabling/disabling NAT, and distributed routing. Most of the Advanced Networking section has to be completed via the Neutron CLI, since the Horizon Dashboard does not yet have workflows for these tasks. We also switch between the CLI and NSX in vCenter to demonstrate that the Neutron plugin is properly mapping commands over to NSX. We will also demonstrate LBaaS.

 

Environment Setup


The objective of this section is to provide steps to get web browser tabs opened to the appropriate portal pages in preparation for the rest of the module.


 

Clean Up (If Necessary)

 

If you are starting this module and have previously completed other modules of this lab, please make sure to delete and remove any artifacts that may be left over. While the modules in this lab are related to one another and arranged in an intuitive chronological order, they are also designed to be autonomous and self-contained and do not build on one another, meaning that you do not need to take Module 1 in order to take Module 2, and so on.

 

 

Access vSphere Web Client

 

Launch the Google Chrome browser and access the vSphere Web Client

  1. Click on the vCenter Web Client bookmark to open the vSphere Web Client in a new tab. (It may already be open.)
  2. Check the box "Use Windows session authentication"
  3. Click the Login button.

Please Note: The first time you log in to the vSphere Web Client may take a bit longer, in some cases up to a minute.

 

 

Access Openstack Horizon Dashboard

 

  1. On a new tab Click on the VIO bookmark to open the Horizon Dashboard login portal.
  2. Select OpenStack in the dropdown for Authenticate using
  3. Leave Domain as local
  4. User Name and Password should autofill but if not type admin for User Name and VMware1! for password
  5. Click the Connect button.

Please Note: The first time you log in to the Horizon Dashboard may take a bit longer, in some cases up to a minute.

 

Logical Networks


The objective for this section is to create tenant networks and check how they manifest in NSX.


 

View Current Network

 

First let's see what logical networks already exist.

  1. Click on Project pane.
  2. Click on Network sub-pane.
  3. Click on Networks.

We can see that two networks have already been pre-created for us. The first is an External Network which has a special designation and will serve as our gateway out of OpenStack. The second is a regular logical network named test-network that was initially created by the admin project and then shared with other projects.

 

 

Create Network (Virtual)

 

  1. Click the + Create Network button to start the workflow.

 

 

Network Name

 

  1. Create a network name called "HOL-network".
  2. Confirm Admin State checkbox is UP
  3. Click the Next button.

 

 

Subnet and Network Address

 

  1. Type in "HOL-subnet" for the Subnet Name field.
  2. Type in "11.0.0.0/24" for the Network Address field.
  3. Click the Next button.

 

 

Provide Subnet Detail

 

The Subnet Detail tab offers us the opportunity to configure DHCP, DNS Name Servers or Host Routes.

  1. Confirm that Enable DHCP checkbox is checked.
  2. Type in "11.0.0.10,11.0.0.19" as the IP range for the Allocation Pools field.
  3. Click the Create button to complete this step.
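
For reference, the network and subnet you just defined in Horizon could equally be created with the CLI. A sketch using the same names and ranges:

# Create the logical network
openstack network create HOL-network

# Create the subnet with DHCP enabled and the same allocation pool
openstack subnet create --network HOL-network --subnet-range 11.0.0.0/24 --allocation-pool start=11.0.0.10,end=11.0.0.19 --dhcp HOL-subnet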

 

 

Confirm Network Creation

 

You should now see your newly created "HOL-network" network in the list of available networks. It is already in the ACTIVE state.

You can easily add more subnets or completely delete the existing network by first clicking on More and choosing the corresponding action.

  1. Click on the "HOL-network" link to get all the details regarding this network segment.

 

 

Network Detail

 

This Network Detail view allows you to add/delete subnets or edit ports. You can also come back here later to see a new port being added when an attachment is made with a Logical Router.

This view also allows you to see the ID tied to the network. This ID is useful when troubleshooting and will help you directly correlate the entry to the NSX logical switch in vCenter.

  1. Note the ID above for this network. (Your lab may differ since the ID is randomly generated)

 

 

Compare ID with NSX in vCenter

 

  1. Click on the vSphere Web Client tab to see how this OpenStack Network appears in NSX.
  2. Click the Network & Security icon.

 

 

NSX Logical Switches

 

  1. Click on Logical Switches menu item in the Navigator window to see the list of NSX logical switches.
  2. Note the same ID matches a logical switch

Note that the ID from Horizon Dashboard matches the ID of the Logical switch created by NSX. NSX receives these configurations through API calls from Openstack via the Neutron plugin.

 

 

Network Topology

 

  1. Click on the Horizon Dashboard tab.
  2. Click on Network Topology.

You should see your newly created "HOL-network" logical network which isn't connected to anything yet.

Please Note: "External-Network" network was pre-created in your lab by OpenStack admins and shared with all Projects to provide external connectivity to your applications.

 

Logical Routers


We now need to create a Logical Router to route traffic from the "HOL-network" to the "External-Network". All the VMs connected to the HOL-network logical network will use this router as their default gateway.


 

Create Logical Router

 

Access the Openstack Horizon dashboard. Make sure that you are still logged in as Admin.

  1. Click on Project
  2. Click on Network Sub tab
  3. Click on Routers
  4. Click on Create Router

 

 

Complete Logical Router

 

  1. Type in "HOL-Router-Exclusive" as the name.
  2. Make sure that the Admin state is UP
  3. Select "External-network" as External Network
  4. Select "Centralized/Exclusive" as Router Mode
  5. Select Compact as Router Size
  6. Click Create Router

Note: We have chosen to create an exclusive/central router here. This router will not be shared with other tenants or projects.

 

 

Confirm New Router Creation

 

Confirm that the new HOL-Router-Exclusive now shows up in the Routers tab and is in Admin State "UP".

 

 

Confirming Network Topology

 

  1. Click on the Network Topology tab. On the right pane you will see a diagram of what has already been created.
  2. Hover over the "HOL-Router-Exclusive" and you will see its details.

 

 

Connect Router To Logical Network

 

  1. Click on Routers
  2. Click on HOL-Router-Exclusive

 

 

Add Router Interface

 

  1. Click on the Interfaces tab.
  2. Click on +Add Interface to add a new interface on this router.

Note the Router ID; in your lab this might be different, as it is generated by OpenStack. We will use this ID later to verify the creation of the router in NSX.

 

 

Select Subnet

 

  1. Select "HOL-network: 11.0.0.0/24 (HOL-subnet)" in the Subnet drop down field.
  2. Click the Submit button.

A message saying "Success: Interface added 11.0.0.1" will appear shortly.
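
The same router, gateway, and interface could also be configured from the CLI. A hedged sketch using this lab's names (the NSX-specific router type and size selected in Horizon are set through plugin extensions and are not shown here):

# Create the router and set its external gateway
openstack router create HOL-Router-Exclusive
openstack router set --external-gateway external-network HOL-Router-Exclusive

# Attach the tenant subnet; the router takes 11.0.0.1 as the gateway address
openstack router add subnet HOL-Router-Exclusive HOL-subnet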

 

 

Router Interfaces

 

Note the interfaces connected to the router.

  1. The 169.254.128.10 IP address is on the interface that connects to the Metadata Proxy network. We will not cover the details in this lab; they are more suited to an in-depth design discussion on VIO and NSX.
  2. The 11.0.0.1 IP address is on the interface that connects to the "HOL-network" logical network that was created previously. This will be the default gateway for all tenant VMs that are launched on this network.

 

 

Confirm Router and Network Attachment

 

Now let's navigate to the Network Topology view to see how the newly created "HOL-network" looks when attached to our "HOL-Router-Exclusive".

  1. Click on Network Topology.
  2. If everything was completed correctly you should see the "HOL-Router-Exclusive" connected to "HOL-network"

 

 

Verify Router Creation in NSX

 

Access the Home page of vCenter Web Client.

  1. Click on Networking and Security.

 

 

Verify NSX Edge Creation

 

Verify the NSX Edge creation in NSX Edge tab.

  1. Click on NSX Edges
  2. Click on the Edge that starts with HOL-Router-Exclusive. Note: In your setup this NSX Edge could have a different ID.

 

 

View NSX Edge Configuration

 

On this screen you can access and view all the NSX Edge configuration that was done through Openstack.

  1. Click on Manage tab
  2. Click on Settings tab
  3. Click on Interfaces

Note the interfaces connected to the router.

  1. The 192.168.0.214 IP address is on the interface that is connected to the external network.
  2. The 169.254.128.10 IP address is on the interface that connects to the Metadata Proxy network. We will not cover the details in this lab; they are more suited to an in-depth design discussion on VIO and NSX.
  3. The 11.0.0.1 IP address is on the interface that connects to the "HOL-network" logical network that was created previously. This will be the default gateway for all tenant VMs that are launched on this network.

 

Tenant Instances


In this section we will launch instances on the "HOL-network" network that was created previously.


 

Launch Instance

 

Make sure that you are still logged in as "Admin" user.

  1. Click on Project tab
  2. Click on Compute sub tab
  3. Click on Instances within the Compute tab
  4. Click on Launch Instance

 

 

Verify Instance Creation in vCenter

 

Access the vCenter Web Client and go to Host and Clusters.

  1. Click and expand the RegionA01-COMP01 cluster.
  2. Click on the HOL-Photon instance

Note that the name of the instance has its OpenStack ID appended to it.

Also note that vCenter shows that this instance was created by OpenStack, along with its flavor, tenant, and network information.

 

Floating IP Address


In this section we will attach Floating IP address to the instance that we created previously.


 

Access and Security

 

Make sure that you are logged in as Admin

Note that we have already allocated 2 Floating IPs to nginx-1 and wordpress-1 VMs

  1. Click on Project
  2. Click on Network
  3. Click on Floating IPs
  4. Click on Allocate IP to Project

 

 

Associate IP to instance

 

  1. Note the new Floating IP address that has been allocated to your project.
  2. The status is still "Down" because it has not yet been associated with any instance.
  3. Click on Associate

Note: The IP address that has been allocated could be different in your lab deployment.
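
A hedged CLI equivalent of allocating and associating a floating IP (the address you receive and the instance name will differ):

# Allocate a floating IP from the external network
openstack floating ip create external-network

# Associate it with an instance (instance name and address are illustrative)
openstack server add floating ip HOL-Photon-1 192.168.0.157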

 

 

Verify Floating IP allocation in NSX

 

  1. Access the vCenter Web Client tab and go to the Home Screen.
  2. Click on Networking and Security.

 

Security Groups


Security Groups are sets of IP filter rules that are applied to an instance. VIO together with NSX provides not only a Distributed Firewall feature-set but also Micro-Segmentation where a Security Group policy can be used to allow or disallow access between instances on the same L2 network. This feature has become very important and increasingly popular in setting appropriate security boundaries without having to rely on traditional L2 boundaries.

For more information on NSX Micro-Segmentation please consider taking NSX specific labs.

All projects in OpenStack have a default security group. Let's review our current rule set.
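
If you prefer the CLI, the same rule set can be inspected there. A sketch:

# List security groups visible to the current project
openstack security group list

# Show the rules of the default group (egress allowed, ingress only from the group itself)
openstack security group rule list default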


 

Review deployed instances

 

Make sure you are still logged in as the Admin user

  1. Click on Project
  2. Click on Compute
  3. Click on Instances - Observe the instances that are deployed
  4. Click on the HOL-Photon instance

 

 

Observe default security group

 

This screen shows the resultant security policy applied to the VM.

Observe that only the default security policy has been applied, since it was the only one selected at the time of deployment.

This security policy allows all connectivity from the VM to 0.0.0.0/0 and ::/0 addresses; however, it does not allow any communication that is initiated from outside.

 

 

Access and Security

 

  1. Click on Network
  2. Click on Security Groups
  3. Click on Manage Rules corresponding to the default security group.

 

 

View Policy details for the default security group

 

Observe the security policies defined in this default security group.

You can see that the Egress rules (from the VM to the outside) allow all communication from the VM to any IPv4 or IPv6 destination.

However, the Ingress rules (to the VM) only allow traffic from other VMs within the same security group.

 

 

Access and Security - HTTP access

 

  1. Click on Security Groups
  2. Click on + Create Security Group

 

  1. Type allow-http for the Security Group name
  2. Click on Create Security Group

 

 

Create Security Group Rules - allow-http

 

  1. Click on Manage Rules for the newly created allow-http Security Group

 

  1. Click on Add Rule

 

  1. Choose HTTP in the Rule dropdown list
  2. Click Add

 

Confirm that your screen now looks similar, with the 2 default Egress rules and the new Ingress rule for HTTP that we just added.
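
The allow-http group and its rule could also have been created from the CLI. A sketch of the equivalent calls:

# Create the group and allow inbound HTTP from anywhere
openstack security group create allow-http
openstack security group rule create --ingress --protocol tcp --dst-port 80 --remote-ip 0.0.0.0/0 allow-http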

 

 

Change associated security policy

 

Now we will change the security group associated with the HOL-Photon VM.

  1. Click on Compute
  2. Click on Instances
  3. Click on the dropdown menu associated with HOL-Photon VM
  4. Click on Edit Security Groups

 

 

Edit Instance - Security Groups

 

  1. Click on the "-" sign to remove the default security group
  2. Click on the "+" sign to add the allow-http security group
  3. Click on Save

 

 

Access HOL-Photon-1 server

 

 

 

Verify Configuration in NSX

 

We will now verify the security group configuration in NSX

Switch to the vCenter tab and go to the Home Tab.

  1. Click on Networking and Security

 

 

NSX Distributed Firewall Configuration

 

  1. Click on the Firewall tab.
  2. Search for the security group section called "SG Section: allow-http" and expand that section. Note: There may be 2 sections that say allow-http. Look for the one at the bottom
  3. Click on the allow-http security group in the Applied To column

Observe the security rules. The section and the rules within it were directly orchestrated via OpenStack using the Neutron NSX-V plugin. The VMs get populated within the security group as the tenant attaches that security group to their VMs.

 

Load Balancer


VMware Integrated OpenStack 4.0 supports LBaaS v2.0.

This task includes creating a health monitor and associating it with the LBaaS pool that contains the LBaaS server instances. The health monitor is a Neutron service that checks whether the instances are still responding on the specified protocol port.
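
Because LBaaS v2 is delivered through Neutron, the objects involved (load balancer, listener, pool, members, health monitor) can also be created with the neutron CLI. The sketch below is illustrative only; the subnet, addresses, and names are placeholders and the exact syntax depends on the client version in your deployment:

neutron lbaas-loadbalancer-create --name HOL-LB HOL-subnet
neutron lbaas-listener-create --name listener1 --loadbalancer HOL-LB --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name HOL-Photon-Pool --listener listener1 --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-member-create --subnet HOL-subnet --address 11.0.0.11 --protocol-port 80 HOL-Photon-Pool
neutron lbaas-healthmonitor-create --type HTTP --delay 5 --timeout 5 --max-retries 3 --pool HOL-Photon-Pool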


 

Verify that HOL-Photon-1 server is reachable

 

Recall that in the previous chapter we attached a floating IP to the HOL-Photon-1 VM and then enabled the allow-http security group to allow access on port 80.

Verify that connectivity to HOL-Photon-1 server is still working.

  1. Open a new tab on the chrome browser and enter the floating IP 192.168.0.157 for the HOL-Photon-1 server.

If you see the "Welcome to nginx on Photon" banner, connectivity to the HOL-Photon-1 server is working fine.

Note: In your lab setup, the floating IP address may be different.

 

 

Disassociate the floating IP address with HOL-Photon-1 server

 

We will configure the load balancer in this chapter. Therefore we need to disassociate the Floating IP attached to the HOL-Photon-1 server.

Switch to Openstack Tab and make sure you are still logged in as "admin"

  1. Click on Project
  2. Click on Network
  3. Click on Floating IPs
  4. Click on Disassociate

 

 

Verify HOL-Photon-1 server is not reachable

 

We have disassociated the Floating IP from the HOL-Photon-1 server. It should not be reachable anymore.

  1. Open a new tab or go to an existing one that was used to access the HOL-Photon-1 server.
  2. Type the IP address 192.168.0.157 in the address bar.

Verify that the connection times out and the server cannot be reached.

 

 

Configure Load Balancer

 

Switch back to VIO tab and make sure you are still logged in as Admin user.

  1. Click on Project
  2. Click on Network
  3. Click on Load Balancers
  4. Click on Create Load Balancer

Review the settings for the newly created Load Balancer

 

 

Floating IP to LB VIP Association

 

  1. Click on the drop down next to the Edit button
  2. Select Associate Floating IP

 

  1. Click on the drop down box and select 192.168.0.157 under Floating IP addresses
  2. Click on the Associate button

Note: As previously mentioned in your lab the Floating IP could be different.

 

 

Verify Load Balancer VIP is reachable

 

192.168.0.157 is the Floating IP for the HOL-Photon-Pool IP 11.0.0.13. This VIP load balances members in the HOL-Photon-Pool.

  1. Open a new tab or go to a previously open one.
  2. Enter the IP of the Floating IP 192.168.0.157.

Verify you have connectivity to HOL-Photon-1/2 servers which are being load balanced.

 

 

Verify Load Balancer Configuration in NSX

 

Now that we have configured LB via VIO, let us verify and view the resulting configuration in NSX

Switch to the vCenter Web Client and login using native Windows credentials. username: Administrator@corp.local password: VMware1!

  1. Click on the vSphere Web Client tab
  2. Go to the Home Screen and click on Networking and Security

 

Environment Clean-Up


In this section we will delete the logical networks, instances, routers etc that were created in this section.


 

Delete Instances

 

Make sure you are logged in as Admin user in VIO.

  1. Click on the OpenStack tab
  2. Click on Project
  3. Click on Compute
  4. Click on Instances
  5. Select the HOL-Photon-1 and HOL-Photon-2 VM's
  6. Click on Delete Instances

 

 

Verify Instance Deletion

 

Confirm the instances have been deleted.

 

 

Delete Load Balancer VIP

 

  1. Click on Network
  2. Click on Load Balancers
  3. Click on HOL-Photon-Pool

 

  1. Click on Listener 1

 

  1. Click on the Default Pool ID

 

  1. Click on the Health Monitor ID

 

  1. Click on the drop down box by Edit
  2. Click on Delete Health Monitor

 

  1. Click on Delete Health Monitor

 

  1. Click on the drop down box by Edit Pool
  2. Click on Delete Pool

 

  1. Click on Delete Pool

 

  1. Click on the drop down box by Edit
  2. Click on Delete Listener

 

  1. Click on Delete Listeners

 

  1. Click on the drop down box by Edit
  2. Click on Delete Load Balancer

 

  1. Click on Delete Load Balancers

 

 

Delete Router

 

  1. Click on Routers
  2. Click the checkbox by HOL-Router-Exclusive
  3. Click on Delete Routers

 

Confirm the Router has been deleted

 

 

Delete Network

 

We have deleted all components that were connected on HOL-Network. We can now delete HOL-Network

  1. Click on Networks
  2. Click the checkbox by HOL-network
  3. Click on Delete Networks

 

 

End of Section

You have now completed this module. We hope it was informative and that you enjoyed it.

 

Conclusion


 

Congratulations on completing  Module 3.

If you are looking for additional documentation on VIO, try one of these:

Proceed to any module below which interests you most.


 

Evaluate VMware Integrated OpenStack

Would you like to see how VMware Integrated OpenStack could work in your data center?  Request a free 60-day evaluation here to try it out in your own environment.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 4: Using VMware vRealize Solutions to Operationalize OpenStack (30 Minutes)

Overview of OpenStack Operations, Log Insight, and vRealize Operations


The purpose of the next section is to provide an overview of the troubleshooting and management tools that are available for your VIO deployment.  There are many components within OpenStack that must be managed, and in order to quickly diagnose and troubleshoot issues, the right tools should be in place to support your operational teams.  We will be diving into Log Insight, vRealize Operations, and the native capabilities of the VIO vCenter plugin.


 

Operationalizing OpenStack

OpenStack by nature is a collection of different open source projects pulled together to provide a common platform for deploying compute, storage, and network. Because of the distributed nature of the platform, it can be complex and fragile at times.  The OpenStack community has published a guide:

http://docs.openstack.org/openstack-ops/content/

that discusses the different pieces of OpenStack and the operational aspects of supporting an OpenStack environment.  What you will quickly realize is that the documentation provides a lot of insight into what should be checked, how to check it, where to find logs, and so on.  However, it makes very few suggestions about tooling around operations (for obvious reasons, to remain unbiased).  Regardless of which tools you use, you absolutely need an infrastructure health management tool and a logging tool at the very minimum. In this section we will discuss the benefits of vRealize Log Insight and vRealize Operations Manager and why these tools are designed to help simplify and manage large, complex environments like OpenStack.

 

 

vRealize Log Insight for OpenStack

 

vRealize Log Insight is a real-time log management platform that focuses on delivering high performance search across physical, virtual, and cloud environments. Log Insight is extremely intuitive to use and the integration with the VMware suite of solutions makes capturing logs extremely easy.  

Specifically as it relates to OpenStack, a dedicated Log Insight OpenStack management pack (there are over 30 management packs for various solutions) can be downloaded for free. This integrated management pack enables operators to view OpenStack-relevant information within a handful of pre-created dashboards. Custom dashboards can also be created.

OpenStack is log heavy: each service has a handful of logs, and correlating all the information across the different services is extremely painful without a centralized logging service.  Having a logging mechanism in place when managing an OpenStack environment is a must.

Additional management packs allow the operator to view information related to the vSphere and NSX environments. There are dashboards pre-created for these solutions as well. This makes Log Insight immediately useful out of the box for whatever application you want to collect logs from!

 

 

vRealize Operations Manager for OpenStack

 

Similar to Log Insight, vRealize Operations Manager plays a crucial role in managing an OpenStack environment.  Part of managing OpenStack is keeping a close eye on the infrastructure health of your cloud.  Are you close to running out of memory? CPU? Storage?  Do you have network or storage I/O issues?  How do you manage 50K VMs? Are there parts of my OpenStack infrastructure that are overcommitted and performing poorly as a result?  Are there any anomalies?  Are my services up and running?  As you can tell, there can be a tremendous amount of information to collect to get the real health of your environment.  However, you want the information in digestible form.  You don't want to be collecting and viewing 50,000 CPU, memory, storage, and network metrics; that would be impossible.  vRealize Operations Manager simplifies this by collecting all the data, rolling it up into a health score, and explaining why.

The OpenStack management pack for vRealize Operations offers pre-created dashboards to quickly view the health of the environment, all the way up to the services that are running within the OpenStack infrastructure.  Are my Keystone services running?  Is my nova-compute running?

vRealize Operations also has integration into NSX, so that you can monitor the Networking infrastructure underpinning your OpenStack deployment.

So many questions can be answered through vRealize Operations Manager and in conjunction, both Log Insight and vRealize Operations are the foundation for keeping your OpenStack cloud healthy so you can sleep easy at night and keep your users happy.

 

Troubleshooting with Log Insight and vRealize Operations


vRealize Operations is a platform that allows you to automate IT operations, manage performance and gain visibility across physical and virtual infrastructure.  There is a large ecosystem around vRealize Operations and the management packs relevant to VIO are the OpenStack management pack and the NSX-vSphere management pack.  We will get an overview of what these two management packs provide.


 

Before we Start the Section - Administrative Tasks

 

Before we begin, let's start a quick scenario for this troubleshooting section.

  1. Click Windows button
  2. Type PuTTY to search
  3. Click on PuTTY in the results

 

 

vRealize Operations and Log Insight Overview

 

Click on Google Chrome to launch the browser if it is not already open.

  1. Click on vRealize Operations on the toolbar

 

 

vRealize Operations Part 2

 

From Log Insight, we were able to troubleshoot and determine that there might be something wrong with our compute infrastructure.  Remember that the nova-scheduler did not pass the ComputeFilter, so let's take a look at our compute infrastructure in OpenStack.

1.  If you still have the vRealize Operations tab open, click on it; otherwise click on vRealize Operations on the toolbar

2.  The Login fields should be filled in for you. Click Login

user:   admin

password: VMware1!

 

 

Summary

Some of you may be thinking to yourself, "Well, the first thing I would have checked would have been the nova-compute service and I wouldn't even have had to look at the logs!"  While you may be right in this specific case in hindsight, many errors are NOT infrastructure-related.  For example, if a configuration file was wrong, some metadata in the image was incorrect, or the instance was launched with strange flags that caused no hosts to be found, checking whether services are up would not have helped. Through this exercise, though, we teach you how to fish. For future problems, you can walk through whatever troubleshooting framework you prefer, leveraging the tools at hand to accelerate the troubleshooting process.

 

 

Conclusion


 

Congratulations on completing  Module 4.

If you are looking for additional documentation on VIO, try one of these:

Proceed to any module below which interests you most.


 

Evaluate VMware Integrated OpenStack

Would you like to see how VMware Integrated OpenStack could work in your data center?  Request a free 60-day evaluation here to try it out in your own environment.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 5: VIO API Consumption and Automation (60 Minutes)

Environment Setup


The objective of this section is to provide steps to get web browser tabs opened to the appropriate portal pages in preparation for the rest of the module.


 

Check Lab Status

 

You must wait until the Lab Status is at Ready before you begin.  If you receive an error message, please end the lab and redeploy another.

 

Introduction


In this Module we will see how to consume the OpenStack APIs with different tools such as the OpenStack command line clients, Heat templates, Terraform, Packer, and vRealize Automation.  

There are many more tools you can use with OpenStack beyond the ones covered in this Module.

One of the main purposes of VMware Integrated OpenStack (VIO) is to enable this rich ecosystem of tools around the multi-tenant OpenStack APIs on top of the proven VMware SDDC stack consisting of vSphere, NSX and vSAN. Since this is an advanced Module we will mostly use command line tools, but nevertheless use the OpenStack Horizon Dashboard to check the results. In even more advanced setups the CLI commands could be integrated into CI/CD pipelines using Jenkins, vRealize CodeStream, Concourse or whatever tool you like most.


CLI Tools: Working with the Nova, Neutron, Glance and Cinder APIs


The OpenStack community offers a set of bundled CLI binaries packaged with the OpenStack project clients. These clients utilize Python API libraries to interact with their corresponding project APIs. A new universal OpenStack client (openstack) is replacing these individual clients, and the old clients are being deprecated step by step.
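
For example, the per-project clients and the universal client cover the same operations; a few common equivalents (commands only, output omitted):

nova list            ->  openstack server list
neutron net-list     ->  openstack network list
cinder list          ->  openstack volume list
glance image-list    ->  openstack image list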

In this section, we will work with the following projects and use the new openstack command (which is installed on the OpenStack Management Server (OMS) in a Python virtualenv environment) wherever possible:

The Glance and Heat projects will be used in sections dedicated to these projects.


 

Basic Nova and Neutron CLI operations

 

Nova is the compute project in OpenStack and it provides self-service access to scalable, on-demand compute resources. Refer to Module 1 of this lab (HOL-SDC-1820) for additional information on Nova. Let's run a few Nova-related openstack commands on the OpenStack Management Server to get you familiarized with the openstack CLI tool (a few representative commands are sketched after the step below).

  1. Open PuTTY from your ControlCenter.
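
Once the session to the OMS is open, a few representative commands look like the following (a sketch; your output will differ):

openstack flavor list
openstack image list
openstack server list
openstack network list
openstack subnet list
openstack router list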

 

Using HEAT Templates


Heat is the main project in the OpenStack Orchestration program and implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files (Infrastructure as Code). Two template formats can be used: the native HOT (Heat Orchestration Template) format written in YAML, and the CFN (AWS CloudFormation-compatible) format.

Heat also supports a subset of AWS resources using the AWS CloudFormation template format. The OpenStack Heat service heat-api-cfn will then translate the CloudFormation JSON input to YAML. The overall Architecture of the Heat Project can be seen in the following figure:

 


 

The Heat YAML file format

A Heat YAML file has various sections, but the three most important ones are parameters, resources, and outputs, which are described below.

 

 

Parameters

Parameters can be used to specify the configuration details of OpenStack resources (e.g. external networks, images, flavors, etc.) that are used or created with the Heat template. You can specify or override the default values of parameters when you deploy a Heat template with the Horizon Web Interface or the OpenStack CLI. In the following example we define the default value for web_image, which will be used to create a web server instance web-svr-01 later on in the Heat template.

parameters:
  web_image:
    type: string
    description: Name of image to use for servers
    default: ubuntu-16.04.1-server-amd64

This parameter can be overridden, e.g. by:

openstack stack create --template example.yaml --parameter "web_image=cirros" stack-1

 

 

Resources

Resources are infrastructure objects that can be created in OpenStack (e.g. networks, routers, load balancers, instances, cinder volumes, etc.). They have a specific type like OS::Neutron::Net and are configured with properties, which can be specific values (172.16.10.0/24), parameters ({ get_param: dns }) or other resources ({ get_resource: web_network_01 }) defined in the Heat template.

resources:
  # Create the web logical switch and configure DHCP.
  web_network_01:
    type: OS::Neutron::Net
    properties:
      admin_state_up: true
      name: web-network-01
  web_subnet_01:
    type: OS::Neutron::Subnet
    properties:
      name: web-subnet-01
      cidr: 172.16.10.0/24
      enable_dhcp: true
      dns_nameservers: [ { get_param: dns } ]
      gateway_ip: 172.16.10.1
      network_id: { get_resource: web_network_01 }
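
Later in the template, the web server instance referenced by the web_image parameter could be defined along these lines (a sketch only, not the lab's exact template; the flavor name is an assumption):

  # Sketch: a server attached to the web network defined above
  web-svr-01:
    type: OS::Nova::Server
    properties:
      name: web-svr-01
      image: { get_param: web_image }
      flavor: m1.small
      networks:
        - network: { get_resource: web_network_01 }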

 

 

Outputs

Outputs will report information back about the running stack (e.g. floating IPs, Load Balancer VIPs, etc.) and can be used to access certain resources created by Heat or further process them with other tools.

outputs:
  web-srv-01_public_ip:
    description: Floating IP address of Web1 on the public network
    value: { get_attr: [ web-srv-01_floating_ip, floating_ip_address ] }
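
The web-srv-01_floating_ip resource referenced by get_attr could, for example, be defined as follows (a sketch; the port resource name web-srv-01_port is hypothetical):

  web-srv-01_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: external-network
      port_id: { get_resource: web-srv-01_port }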

 

 

Open PuTTY session to Developer VM

 

First open a new PuTTY session to your util-01a.corp.local developer VM.

 

 

Create new Stack

 

In the PuTTY session, the admin-openrc.rc resource file used to access VIO (located in the directory ~/my_project/bin) is automatically loaded and you are put into a Python virtual environment, so you can start executing OpenStack CLI commands right away. Let's:

  1. List all Heat Stacks (the list should be empty)
  2. Change to the heat Directory
  3. Create a new Stack using the YAML file two-tier-central-routing.yaml
openstack stack list

cd heat

openstack stack create -t two-tier-central-routing.yaml stack-1

 

 

List Stack Resources

 

The Heat template we used (two-tier-central-routing.yaml) spins up a 2-tier app with one web server and one app server, each attached to a logical network which is connected to an exclusive router. We also create a simple security group to allow HTTP and SSH traffic to the web server. Since the deployment of the stack takes some time, let's take a look at the resources the stack is beginning to create (keep in mind that your output could be different).

  1. List all Stack Resources of stack-1
openstack stack resource list stack-1

 

 

Get more Information about a Stack Resource

 

If you need more information about a specific stack resource (e.g. web_network_01) you can do the following:

  1. Show the Stack Resource web_network_01
openstack stack resource show stack-1 web_network_01

 

 

List all Stacks

 

We can also check that the stack status is CREATE_IN_PROGRESS with the following command:

  1. List all Heat Stacks
 openstack stack list

 

 

Open Chrome Browser

 

  1. If not already open click on the Chrome Icon on the Windows Quick Launch Task Bar to open the Chrome Browser.

 

 

Log into Horizon Dashboard

 

If your VIO Tab in Google Chrome is still open and you are logged in, you can ignore the following step.

  1. Click on the VIO Bookmark Icon in Chrome
  2. Choose OpenStack (not the VMware Identity Manager) as the Authentication Method
  3. Enter local as Domain
  4. Enter the User Name admin
  5. Enter the Password VMware1!

 

 

Check Stack Status with Horizon Dashboard

 

  1. Select Project
  2. Click on Orchestration
  3. Select Stacks

Wait until the Stack stack-1 is in Status Create Complete (click on Stacks again to refresh the view). Then

4.    Click on stack-1

 

 

Show Stack Resources in Horizon Dashboard

 

  1. Click on Resources

You should see a similar list as before with the command openstack stack resource list stack-1, but the list is now complete and all resources have the status CREATE_COMPLETE.

 

 

Get Outputs of deployed Stack with Horizon Dashboard

 

 

List Stack Output with OpenStack CLI

 

To list all the outputs of our deployed stack-1 and to get the Floating IP we are looking for let's do the following:

  1. List the Stack Output of stack-1
  2. Show the Floating IP of web-srv-01
openstack stack output list stack-1

openstack stack output show stack-1 web-srv-01_public_ip
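
If you want just the raw value, e.g. for use in a script, the generic output formatting options of the openstack client can help (a sketch):

openstack stack output show stack-1 web-srv-01_public_ip -f value -c output_value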

 

 

 

Ping and SSH into Web Server
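
With the floating IP from the previous step you can verify connectivity from the util-01a PuTTY session (a sketch; substitute the floating IP your stack reported, and the ubuntu login name assumes the Ubuntu cloud image's default user with your key injected):

ping -c 2 <floating-ip>
ssh ubuntu@<floating-ip>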

 

 

Network Topology of deployed Stack

 

Let's take a last look at our Stack stack-1:

  1. Click on Project
  2. Select Network
  3. Click on Network Topology

You see the web and app VMs, which are attached to their respective networks web-network-01 and app-network-01. Both networks are connected to heat-router-01 (click on the icon if you want), which is connected to the external-network for external connectivity.

 

 

Delete Stack

Now it's time to delete our stack and check after a while that it has been successfully deleted.

  1. Delete the stack-1
  2. List all Heat Stacks (should return an empty list after a while)
openstack stack delete stack-1

openstack stack list

 

 

Final Remarks about Heat Templates

You can also update an existing stack from a modified template file using a command like

openstack stack update --template mystack.yaml mystack    

but tools like HashiCorp's Terraform seem more appropriate for this kind of task.

This completes this section of Module 5.

 

IaaS Automation with Terraform & Packer



 

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular Cloud service providers as well as custom in-house solutions.

Configuration files describe the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied; this is one of the reasons why it is such a popular tool.

Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

 

 

Terraform Configuration Syntax

Terraform uses a very simple configuration syntax which consists of

In this lab we are going to use the OpenStack provider to talk to the VIO API. This is what the content of the file provider.tf looks like:

provider "openstack" {
  user_name = "${var.openstack_user_name}"
  tenant_name = "${var.openstack_tenant_name}"
  password  = "${var.openstack_password}"
  auth_url  = "${var.openstack_auth_url}"
}

Here we are referencing variables defined in the file variables.tf:

variable "openstack_user_name" {
    default  = "admin"
}
variable "openstack_tenant_name" {
    default  = "admin"
}
variable "openstack_password" {
    default  = "VMware1!"
}
variable "openstack_auth_url" {
    default  = "https://vio.corp.local:5000/v3"
}
variable "image" {
    default = "ubuntu-16.04.1-server-amd64"
}
...

Terraform by default assembles all the configuration files (*.tf) in the current directory into one big configuration.
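
The defaults in variables.tf can be overridden without editing the files, e.g. on the command line or via environment variables (a sketch):

terraform plan -var 'image=cirros'

export TF_VAR_openstack_password='VMware1!'
terraform plan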

 

 

Terraform Resources

All your infrastructure components are defined by Terraform resources and the references between them. Each Terraform provider gives you different resource types, so Terraform is not an abstraction layer between different cloud providers. AWS, for example, has different resource types than OpenStack, but they look very similar. Here are some examples of resource types we are going to use in our Terraform scenario:

 

 

Creating Networks attached to a Router

In our 1-tier scenario Terraform will

How the network topology is built can be seen in the following code snippet:

resource "openstack_networking_network_v2" "terraform" {
  name = "terraform"
  admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "terraform" {
  name = "terraform"
  network_id = "${openstack_networking_network_v2.terraform.id}"
  cidr = "10.0.0.0/24"
  ip_version = 4
  dns_nameservers = ["192.168.110.10"]
}
resource "openstack_networking_router_v2" "terraform" {
  name = "terraform"
  admin_state_up = "true"
  external_gateway = "${var.external_gateway}"
}
resource "openstack_networking_router_interface_v2" "terraform" {
  router_id = "${openstack_networking_router_v2.terraform.id}"
  subnet_id = "${openstack_networking_subnet_v2.terraform.id}"
}

When defining new resources we can reference properties of other resources

${openstack_networking_network_v2.terraform.id}

and variables

${var.external_gateway}

Terraform takes care of building a dependency graph of all the resources, which is used to build the infrastructure in the correct order.

You can also ask Terraform to give you an overview of all resources which will be created or reconfigured (in case you just added more resources or changed some resource properties in your configuration files). This is a big difference from OpenStack Heat.
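
The remaining resources of the 1-tier scenario (an instance with a floating IP) would look roughly like the following; this is a sketch, not the lab's exact files, and the instance name, flavor and security group are assumptions:

# Sketch only: an instance on the terraform network plus a floating IP,
# using the classic Terraform 0.x interpolation syntax.
resource "openstack_compute_instance_v2" "terraform" {
  name            = "terraform-vm-01"
  image_name      = "${var.image}"
  flavor_name     = "m1.small"
  security_groups = ["default"]
  network {
    uuid = "${openstack_networking_network_v2.terraform.id}"
  }
}
resource "openstack_networking_floatingip_v2" "terraform" {
  pool = "external-network"
}
resource "openstack_compute_floatingip_associate_v2" "terraform" {
  floating_ip = "${openstack_networking_floatingip_v2.terraform.address}"
  instance_id = "${openstack_compute_instance_v2.terraform.id}"
}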

 

 

Terraform Plan

 

 In the util-01a PuTTY session, change to the terraform directory and use terraform plan to check which resources Terraform will deploy in the next step.

  1. Change to terraform Directory
  2. Execute terraform plan
cd /home/viouser/terraform

terraform plan

You should get 8 resources to add, 0 to change and 0 to destroy.

 

 

Terraform Apply

 

Now it's time to actually create the 8 configured resources with the following command:

  1. Execute terraform apply
terraform apply

Be patient, this will take some time. After a successful run you should see the floating IP as output (address = 192.168.0.161), but you can get this info anytime you want with the following command:

terraform output
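
The address output seen after terraform apply is simply defined in one of the configuration files, roughly like this (a sketch, assuming a floating IP resource as in the snippet above):

output "address" {
  value = "${openstack_networking_floatingip_v2.terraform.address}"
}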

 

 

SSH into deployed VM

 

We should be able to log into the VM with our SSH key, so let's verify that this works.

  1. SSH into deployed VM (use the floating IP you got from terraform output).
 ssh ubuntu@192.168.0.161

 

 

Network Topology

 

As a last check, let's get a visual overview of the 1-tier scenario we deployed with Terraform. Go to the Horizon Dashboard and

  1. Click on Project
  2. Select Network
  3. Click on Network Topology

to get a nice view of the Network topology.

 

 

Terraform destroy

 

As mentioned before, you can also very cleanly update or destroy infrastructure created with Terraform. We will do the latter using the following command:

  1. Execute terraform destroy
terraform destroy

Keep the Horizon Dashboard open in the background so you can also watch the destruction process visually (it can take a few minutes).

 

 

What is Packer?

Packer is an open source tool for creating identical machine images for multiple platforms (EC2, vSphere, VirtualBox, OpenStack, etc.) from a single source configuration. Packer is able to use tools like Ansible, Puppet, Chef, etc. to install software onto the image.

A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines.

 

 

Packer Configuration Validation

 

In the util-01a PuTTY session, change to the packer directory and validate the simple Packer file packer-template.json we pre-created.

  1. Change to packer Directory
  2. Validate the packer-template.json File
  3. Inspect the packer-template.json File
cd /home/viouser/packer/

packer validate packer-template.json

cat packer-template.json

We are using the test-network (UUID: fa7cbb18-6841-4e76-82ef-8a38f2303f9f) to create new images, starting from the base image ubuntu-16.04.1-server-amd64 and using flavor 2 (m1.small) to deploy it. We also use a pre-created security group packer-build to access the instance with SSH during the build process. Packer creates its own temporary SSH keys and a corresponding OpenStack keypair to do that.

Last but not least, we use two Packer provisioners. One copies the Harbor certificate to the instance and the other executes a simple shell script, which basically places the certificate in the right directory with the correct permissions:

sudo sh -c "echo 'domain corp.local' >> /etc/resolv.conf"
sudo mkdir /usr/share/ca-certificates/harbor
sudo chown root:root /home/ubuntu/ca.crt
sudo mv /home/ubuntu/ca.crt /usr/share/ca-certificates/harbor

Real-world Packer examples would, for instance, execute some Ansible playbooks to install software packages, configure the operating system, or deploy agents.
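
Putting these pieces together, a template along the following lines produces the same kind of build. This is a sketch, not the lab's exact packer-template.json; the source image UUID placeholder, the ssh_username and the provisioner file names are assumptions:

{
  "builders": [{
    "type": "openstack",
    "image_name": "ubuntu-docker-image",
    "source_image": "<uuid-of-ubuntu-16.04.1-server-amd64>",
    "flavor": "2",
    "ssh_username": "ubuntu",
    "networks": ["fa7cbb18-6841-4e76-82ef-8a38f2303f9f"],
    "security_groups": ["packer-build"]
  }],
  "provisioners": [
    { "type": "file", "source": "ca.crt", "destination": "/home/ubuntu/ca.crt" },
    { "type": "shell", "script": "install-harbor-cert.sh" }
  ]
}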

 

 

Packer Build

 

Let's build the new OpenStack image.

  1. Execute the packer build packer-template.json Command
packer build packer-template.json

 

 

Check new OpenStack Image

 

Let's check if the new image is indeed available in OpenStack.

  1. List all Images
  2. Show Details of ubuntu-docker-image
openstack image list

openstack image show ubuntu-docker-image

 

 

Cleanup

 

You could now use this new image together with Terraform, Heat or other tools to create even more complex scenarios. In our case we are going to delete the image to clean up the lab.

  1. Delete the ubuntu-docker-image
  2. List all Images (to check ubuntu-docker-image has been deleted)
openstack image delete ubuntu-docker-image

openstack image list

 

 

Summary

In this section you have seen how to work with infrastructure as code by building your own OpenStack images with Packer and building complex scenarios with Terraform. The underlying code can be version-controlled using e.g. Git and tested with a release pipeline in vRealize CodeStream, Jenkins or similar tools.

This completes this section of Module 5.

 

Using VIO as vRA Endpoint


In this lab we will be leveraging vRealize Automation to publish XaaS (Anything as a Service) blueprint items within a catalog so that they can be requested by an end user.

There are a few steps which have already been taken care of in the lab.


 

Open PuTTY

 

  1. Click on the PuTTY shortcut in the task bar

 

 

Open oms.corp.local Session in PuTTY

 

  1. Double click on the saved session for oms.corp.local

 

 

Federation between vRA & OpenStack

The following command to configure the Federation between vRA and OpenStack has already been run for you. It is only shown here for your information. When this command is run it does the following:

viocli vros enable --vra_tenant {vRA Tenant Name in uppercase, e.g. VSPHERE.LOCAL} --vra_host {IP or FQDN or vRA host} --vra_admin {vRA System Admin name} --vros_host {OMS server’s IP} --verbose

 

 

Check Federation between vRA & OpenStack

  1. Type history | grep vra_host and validate that this command has been previously run.

 

 

 

Open vRealize Orchestrator (vRO)

 

  1. Open a browser window and browse to https://vra-01a.corp.local/
  2. Click on the link for vRealize Orchestrator Control Center

 

 

Change to a different Domain

 

  1. Click on the link for Change to a different domain

 

 

Select vsphere.local as Domain

 

  1. Select vsphere.local from the drop down list
  2. Click Next

 

 

vRealize Orchestrator Sign in

 

  1. Type username Administrator and password VMware1!
  2. Click Sign in

 

 

Manage Plug-Ins in vRO

 

  1. Click on the Manage Plugins icon (may need to scroll down)

 

 

Check Status of OpenStack Plugin in vRO

  1. Find the OpenStack Plug-in and ensure the status is Enabled on the right hand side.

 

 

 

Review vRA Service Account in VIO

  1. Open a new Tab
  2. Click on the VIO Shortcut
  3. Click on the dropdown box and select OpenStack
  4. Click Connect

 

 

 

Review OpenStack User (1)

  1. Click on Users on the left hand side
  2. Type vra-service in the search field to narrow down the user list
  3. Click on Filter
  4. Click on vra-service user

 

 

 

Edit OpenStack User (2)

  1. Click the Edit button

 

  1. Click on the drop down for Primary Project and select admin (note: you may need to scroll down to see it)
  2. Click on the Update User button

 

 

 

Manage Members in admin Project

  1. Click on Projects on the left hand side
  2. Click on Manage Members on the right hand side for the admin project

 

 

 

Check vra-service Membership in admin Project

 

  1. Click the + to add vra-service to the Project Members list

 

  1. Click the drop down box and click to select the admin role
  2. Click Save

 

 

Open Putty Session

 

  1. Click on the PuTTY shortcut in the task bar

 

 

Open oms.corp.local PuTTY Session

 

 

Review User Mapping

 

 

Open vRealize Automation in Chrome Browser

 

  1. Open a new tab
  2. Click on the vRealize Automation shortcut
  3. Ensure the domain is set to corp.local, the username to cloudadmin, and the password to VMware1!
  4. Click Sign in

 

 

 

Blueprint Design in vRA

 

  1. Once logged in, click on the Design tab at the top
  2. Click on XaaS on the left hand side menu

 

 

XaaS Blueprints

 

  1. Click on XaaS Blueprints on the left hand side menu
  2. Here you will see 4 blueprints currently in Draft status; these are the blueprints we are going to use for OpenStack
  3. Click on the row to highlight a blueprint (Note: Do not click directly on the name or you will be taken into the blueprint itself)
  4. Click on Publish on the menu bar
  5. Repeat for the other 3 XaaS blueprints

 

 

Catalog Management in vRA

  1. Click on Administration at the top
  2. Click on Catalog Management on the left hand side

 

 

 

Review Catalog Items

  1. Click on Catalog Items on the left hand side
  2. Ensure that Openstack Service shows in the Service column for the 4 blueprints we just published

 

 

 

Create Approval Policy

 

  1. Click on Administration tab on the top
  2. Click on Approval Policies on the left hand side
  3. Click on the +New button to add an approval policy

 

 

New Approval Policy (1)

  1. In the search bar type XaaS and press enter
  2. Select the result for Service Catalog - Catalog Item Request - XaaS Blueprint
  3. Click OK

 

 

 

New Approval Policy (2)

  1. Type OpenStack Approval for the name
  2. Click the green plus sign by Levels

 

 

 

Create new Level

  1. Type OpenStack Request for the Name
  2. Type devmgr for Approvers
  3. Click on the search icon
  4. Click on Development Manager from the pop up window
  5. Click OK

 

 

 

New Approval Policy (3)

  1. Click the Dropdown box for Status and change to Active
  2. Click OK

 

  1. Click on Administration tab
  2. Click on Catalog Management

 

  1. Click on Entitlements

 

 

 

Deactivate Entitlements for Dev Users

 

  1. Click on the row to highlight the Development Users entitlement
  2. Click on the Deactivate button

 

 

Check Entitlements for Dev Users

 

  1. You should now see the Development Users listed as Inactive

 

 

Catalog Management

 

At this point we have created the approval policy and defined who the approver will be (Development Manager); the next step is to actually assign the policy to catalog items. There are a few different ways we can assign the policy. One way would be to assign it to a service (in this case, Openstack Service); each catalog item underneath it would then automatically have that approval policy assigned to it. Alternatively, if we wanted to be more granular, we could assign the approval policy to one or more specific catalog items. For this lab we are going to choose the Openstack Service.

  1. Click on the Administration tab at the top
  2. Click on Catalog Management on the left hand side

 

 

OpenStack User Entitlements

 

  1. Click on Entitlements
  2. Click on Openstack User Entitlements

 

 

Modify Policy

Approvals in vRealize Automation are extremely flexible; we could apply the approval at a Service level, which would then apply to every catalog item associated with that service. In our lab, we are going to apply it to a single catalog item. To accomplish this, we will manually add the catalog item and then associate the policy with it.

  1. Click on the Items and Approvals tab
  2. Click on the green + for Entitled Items

 

 

 

Add Items to Policy

  1. Select the create an Openstack project catalog item
  2. Click the dropdown and select OpenStack Approval
  3. Click OK

 

 

 

Check and Finish Entitlement

 

  1. Confirm that you now have create an Openstack project with the OpenStack Approval listed in the Entitled Items section
  2. Click Finish

 

 

Testing Approval Policy

  1. Log in as devuser

 

  1. Click on the Catalog tab
  2. Click Request on create an Openstack project

 

  1. Type dev-project as the project name
  2. Click Submit

 

  1. Click the OK button

 

  1. Click on Requests tab at the top

At this point you can see the request was submitted but it is waiting on an approval from the Development Manager user before it will actually run the workflow.

 

  1. Logout of devuser

 

  1. Login as devmgr

 

  1. Click on Inbox tab at the top
  2. Click on the approval number on the left hand side of the results

 

  1. Click on View request

 

  1. Here the approver can review the configuration of the request, such as the project name that devuser submitted
  2. Click on the Close button

 

  1. Type Approved in the Justification field
  2. Click on Approve

 

At this point the workflow will initiate. Next we will log back in as devuser and confirm.

  1. Click the Logout link at the top right corner

 

  1. Login as devuser with a password of VMware1!

 

  1. Click on the Requests tab at the top
  2. Note the status of the request we had just submitted.

 

Next up is to Deploy a HEAT Stack

  1. Click on the Catalog tab
  2. Click on the catalog icon for deploy a heat stack

 

  1. Click the drop down box and select dev-project

 

  1. Click the drop down box and select test-stack

 

  1. Click the Submit button

 

  1. Click on the Requests tab
  2. We can see that this went immediately to In Progress instead of Waiting for Approval because of how we configured the approval policy.

 

Next we will use the workflows to tear down the VIO configuration we just went through.

  1. Click on the Catalog tab
  2. Click Request on delete a heat stack

 

  1. Select the drop down box and choose dev-project

 

  1. Select the drop down box and choose test-stack

 

  1. Click on Submit

 

Lastly we will delete the project itself

  1. Click on Catalog tab
  2. Click Request on delete an Openstack project

 

  1. Click on the drop down box and select dev-project

 

  1. Click on Submit

 

 

Conclusion


Congratulations on completing  Module 5.

If you are looking for additional documentation on VIO, try one of these:

Proceed to any module below which interests you most.

 


 

Evaluate VMware Integrated OpenStack

Would you like to see how VMware Integrated OpenStack could work in your data center?  Request a free 60-day evaluation here to try it out in your own environment.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 6: Advanced VIO Topics (15 Minutes)

Introduction


This Module will explore the advanced feature Firewall as a Service (FWaaS v1) available in VIO 4.0.


FWaaS


The Firewall-as-a-Service (FWaaS) plug-in adds perimeter firewall management to OpenStack Networking. It operates at the perimeter by filtering traffic at the OpenStack Networking tenant router, which needs to be of type Centralized/Exclusive. This distinguishes it from security groups, which operate at the instance level.


 

Setup

In our environment the two pre-deployed VMs test-vm-1 and test-vm-2 are attached to test-network. This network is connected to a centralized/exclusive router test-router, which is attached to external-network. The security groups of both VMs are open for the traffic we will be testing (ping, SSH and HTTP).

We will now create Firewall rules which restrict some of this traffic, even if the security groups allow it!

 

 

Log into VIO Horizon Web Interface

 

If not already logged into the Horizon Web Interface, open the Chrome Browser and

  1. Click on VIO
  2. Select OpenStack as Authentication source
  3. Enter local as Domain
  4. Use admin as User Name
  5. Use VMware1! as Password

 

 

Create Firewall Rules

 

We will now create two Firewall rules which allow us to ping test-vm-2 (ICMP) and to SSH into both VMs.

Keep in mind that all other traffic on test-router will be blocked by default, so by not adding any further rules everything else (for example HTTP) will remain blocked.

In the VIO Horizon Web Interface

  1. Click on Project
  2. Open the Section Network
  3. Select the subsection Firewalls
  4. Select the Tab Firewall Rules
  5. Click on Add Rule

 

 

Allow ICMP to test-vm-2

 

The first rule will allow ICMP (e.g. ping) traffic to test-vm-2 (but not test-vm-1). In the pop-up

  1. Enter the Name allow-icmp
  2. Select the Protocol ICMP
  3. Select the Action ALLOW
  4. Enter the Source IP Subnet 192.168.0.0/16 (includes all Lab VMs)
  5. Enter the Floating IP 192.168.0.15/32 of the test-vm-2 as Destination IP Address
  6. Click on Add

 

 

Allow SSH to all hosts via NAT IP

 

The second rule will allow ssh access to both VMs. Click on Add Rule again and

  1. Enter the Name allow-ssh
  2. Select the Protocol TCP
  3. Select the Action ALLOW
  4. Enter the Source IP Subnet 192.168.0.0/16 (includes all Lab VMs)
  5. Enter the  range 192.168.0.0/24 as Destination IP Subnet
  6. Enter the port 22 as Destination Port
  7. Click on Add

 

 

Confirm Firewall Rules

 

You should now see both Firewall rules we just created. Quickly review them.

 

 

Firewall Policies

A firewall policy is an ordered collection of firewall rules: if the traffic matches a rule, the following rules are not evaluated; if it does not match, the next rule is evaluated. A firewall policy has the following attributes:

Shared: A Firewall policy can be shared across tenants. Thus it can also be made part of an audit workflow wherein the Firewall policy can be audited by the relevant entity that is authorized.

Audited: When audited is set to True, it indicates that the Firewall policy has been audited. Each time the Firewall policy or the associated Firewall rules are changed, this attribute will be set to False and will have to be explicitly set to True through an update operation.

Name: The name field is required, all others are optional.

 

 

Add Firewall Policy

 

Let's create a Firewall policy using the Firewall rules we created before.

  1. Select the Tab Firewall Policies
  2. Click on Add Policy

 

 

Define Firewall Policy Name

 

In the first Tab Policy

  1. Enter the mandatory Name test-policy

Do NOT click Add yet!

 

 

Add Rules to Firewall Policy

 

Let's add the two Firewall rules we created before to the Firewall policy:

  1. Click on the Rules Tab
  2. Click on the + sign to add the allow-icmp rule
  3. Click on the + sign to add the allow-ssh rule

 

 

Review and Add Firewall Policy

 

You should now see both Firewall rules in the Selected Rules section. Finally

  1. Click the Button Add

to add the Firewall Policy.

 

 

Create Firewall

 

In the final step we will now create a Firewall by using the Firewall policy test-policy which includes two firewall rules:

  1. Click on the Tab Firewalls
  2. Click on Create Firewall

 

 

Add Firewall Name and Policy

 

In the pop-up menu

  1. Enter the Name test-firewall
  2. Choose the Policy test-policy

Do NOT click Add yet!

 

 

Select Router

 

In the next step

  1. Click on the Routers Tab
  2. Click on the + sign to add the test-router

 

 

 

Review and add Firewall

 

You should now see the test-router in the section Selected Routers and can

  1. Click on Add

to finally add the Firewall.
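
For completeness, the same rules, policy and firewall could also be created from the command line (a sketch, assuming the FWaaS v1 extensions are exposed through the legacy neutron client in this environment):

neutron firewall-rule-create --name allow-icmp --protocol icmp --action allow \
  --source-ip-address 192.168.0.0/16 --destination-ip-address 192.168.0.15/32
neutron firewall-rule-create --name allow-ssh --protocol tcp --action allow \
  --source-ip-address 192.168.0.0/16 --destination-ip-address 192.168.0.0/24 --destination-port 22
neutron firewall-policy-create --firewall-rules "allow-icmp allow-ssh" test-policy
neutron firewall-create --name test-firewall --router test-router test-policy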

 

 

 

Networking & Security in vSphere Web Client

 

Now let's see what the Neutron NSX-Plugin created behind the scenes. In the vSphere Web Client

  1. Click on the Home Icon
  2. Select the Menu Entry Networking & Security

 

 

Select test-router

 

Since we selected the test-router as our Firewall Target let's check the corresponding Edge Service Gateway (ESG) in NSX:

  1. Click on NSX Edges
  2. Double Click the Entry edge-25 (or test-router-e6e0f105-73e1-4a52-97f2-b8f602d29a1e)

 

 

NSX ESG Firewall Rules

 

To check which Firewall rules are active on the ESG test-router

  1. Click on Manage
  2. Click the Firewall Tab

The first rule allows local VM traffic, the second and third are our FWaaS rules (allow-icmp and allow-ssh) and the last one blocks all other traffic.

 

 

Open PuTTY Session to util-01a

 

If you do not have a PuTTY Session to util-01a open already, then

  1. Click on the PuTTY icon on your ControlCenter Desktop
  2. Double Click on util-01a.corp.local

 

 

Check Firewall

 

We will now check whether our Firewall rules were applied to the test-router and override the Security Group definitions. In the util-01a PuTTY Session let's

  1. Ping the test-vm-1
  2. Ping the test-vm-2
  3. SSH into the test-vm-1 as root
  4. Test HTTP access to test-vm-1

by entering the following commands (use CTRL-C to terminate a command).

 ping -c 2 test-vm-1
 ping -c 2 test-vm-2
 ssh root@test-vm-1
 wget test-vm-1

If you want you can also try the following commands as a final check of the FWaaS configuration.

ssh root@test-vm-2
wget test-vm-2

 

 

Cleanup

 

To clean up the lab we will just delete the Firewall, and not the Firewall Policies and Rules, which can be reused.

  1. Click on the Tab Firewalls
  2. Click on the test-firewall Checkbox
  3. Click on Delete Firewall

 

 

Delete Firewall

 

Confirm the Deletion of the Firewall:

  1. Click on Delete Firewalls in the pop-up window.

This completes the FWaaS section of Module 6.

 

Conclusion


Congratulations on completing  Module 6.

If you are looking for additional documentation on VIO, try one of these:

Proceed to any module below which interests you most.

 


 

Evaluate VMware Integrated OpenStack

Would you like to see how VMware Integrated OpenStack could work in your data center?  Request a free 60-day evaluation here to try it out in your own environment.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-1820-01-EMT

Version: 20180525-125452