VMware Hands-on Labs - HOL-1886-01-EMT


Lab Overview - HOL-1886-01-EMT - vCloud Network Functions Virtualization

Lab Guidance


Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time.  The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

 

The primary objective of this lab is to introduce Network Functions Virtualization (NFV) and the VMware NFV common platform solution. This includes rapid application deployment, horizontal scaling of virtual network functions (VNFs), multi-vendor VNF support, and multi-domain support.  In addition, VNF blueprinting, modeling, deployment, and service lifecycle management are critical components.  This HOL will enable users to learn three primary aspects of NFV as follows:

Lab Module List:

 Lab Captains:

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the Right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved.  All your work must be done during the lab session.  However, you can click EXTEND to increase your time.  If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes.  Each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides typing it in directly, there are two very helpful methods of entering data that make it easier to enter complex data.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Introduction


Introduction to NFV and Orchestration using the VMware vCloud NFV Platform


 

What is Network Functions Virtualization (NFV)?

 

With about 75 percent of x86 server workloads in enterprise IT globally being virtualized, adoption of virtualization is widespread. Success in enterprise IT virtualization has encouraged communication service providers (CSPs), who provide fixed and mobile telephony, to adopt these technologies to virtualize their network service functions.

Network Functions Virtualization (NFV) is a network architecture concept that specifies how virtualization technologies can be used to virtualize network service functions. The European Telecommunications Standards Institute (ETSI) published specifications on NFV infrastructure to define a baseline for NFV adoption by CSPs. VMware has developed an ETSI conforming solution that CSPs can use for Carrier Grade NFV service delivery. Communication Service Providers can now leverage NFV to reduce CapEx and OpEx costs, while improving service agility, differentiation and opportunities for monetization.  

The figure shows the ETSI NFV reference architecture.

 

 

What is the vCloud NFV platform?

VMware’s vCloud NFV is a proven Carrier Grade NFV service delivery platform. vCloud NFV’s “Common Platform” approach for Network Functions Virtualization Infrastructure (NFVI) lifecycle management provides a feature-rich modular foundation, an industry-leading operations platform, and a diverse partner ecosystem that allows service providers to build their own custom NFV service delivery stack.

 

 

The vCloud NFV Common Platform approach

 

Nearly all virtualized network function platforms deployed today are built as virtual network function (VNF) silos. In a silo architecture, the virtualized network function is tightly coupled with dedicated compute and storage resources and proprietary management platforms. Resources in a silo architecture model are only used for a single network function or a number of network functions built by a single vendor. Silo architecture models provide limited, if any, multi-vendor VNF support and make it difficult to remove existing VNFs, add new vendor VNFs, or build complex service constructs that require the coordinated support of multiple VNFs.

The vCloud NFV Common Platform approach gives CSPs the ability to place all VNFs on a common platform that leverages a single shared pool of compute, storage and network resources, supports deployment of multi-vendor VNFs and makes it easy to remove and replace existing VNFs, or add new VNFs based on CSP service delivery requirements. The common platform model offers a more attractive economic model by leveraging shared resource pools and also decreases single vendor leverage by enabling easy VNF portability.

 

 

VMware vCloud NFV Infrastructure (NFVI) architecture

 

The vCloud NFV platform is a modular NFV Infrastructure (NFVI) platform. VMware delivers compute virtualization with vSphere, network virtualization with NSX, and software defined storage with VMware Virtual SAN (VSAN). Secure multi-tenancy is enabled by a choice of Virtual Infrastructure Managers: VMware vCloud Director or VMware Integrated OpenStack. Support for Fault, Configuration, Accounting, Performance and Security (FCAPS) is provided by VMware vRealize Operations™ and VMware vRealize Log Insight.

 

 

VMware vCloud NFV modular architecture

 

One of the key design concepts of vCloud NFV is helping CSPs avoid vendor lock-in. The vCloud NFV platform supports a rich partner ecosystem of Virtual Network Functions (VNF), VNF managers (VNFM), service orchestration, service assurance, service optimization and analytics vendors. Over 30 vendor platforms can be easily integrated into the vCloud NFV platform. The figure above shows the modular approach of the vCloud NFV platform and highlights how a multi-vendor approach can be used to create a custom NFV service delivery platform to meet the diverse requirements of Communications Service Providers.

 

 

Demonstrating the Value of NFV

One of the key advantages of NFV is the ability for CSPs to quickly deploy services to the market. In deploying NFV, the ability to perform application modeling, application blueprinting, provisioning, configuration, VNF deployment, and end-to-end service activation is critical and helps with faster service deployment. This lab will demonstrate this capability. Demonstrating this capability on a carrier-grade platform for a Voice over Long Term Evolution (VoLTE) solution requires multiple VNFs. In a typical production environment, these VNFs are combined and deployed as VMs or vApps and work in concert to provide a complete VoLTE solution. Taking advantage of virtualization's ability to abstract resources, multiple VNFs are installed on a single virtual machine in this lab.

 

 

Lab Architecture

 

This figure illustrates the lab deployment architecture. VMware provides the NFVI platform, which can host multi-vendor VNFs or multiple VNFs from a single vendor in the vCloud NFV environment. Gigaspaces, a VMware Partner, has provided Cloudify (http://www.getcloudify.org) as the orchestration engine in the NFVI platform.  Athonet (http://www.athonet.com), a VMware partner, provided the VNFs for virtual Evolved Packet Core (vEPC) and for virtual IP Multimedia Subsystem (vIMS), as well as the Home Subscriber Server (HSS). The following VNFs from Athonet were deployed on the VMware infrastructure:

 

Module 1 - Getting Started (15 min)

How Fast “Fast” Really Is - From VNF to VoLTE Service Deployment?


One of the key advantages of NFV is the ability of communication service providers (CSPs) to quickly deploy new services. At Interop Tokyo 2016, VMware demonstrated a reliable, model-driven approach for VoLTE mobile services. The demonstration included application modeling, application blueprinting, provisioning, configuration, VNF deployment, end-to-end service activation, and making live mobile calls using 4G LTE devices, with the entire process taking less than ten minutes, start to finish. This is the industry’s first and fastest NFV demonstration, proving that NFV is real and achievable for telecom environments in multi-vendor ecosystems, with service provider-expected quality of service. This demonstration is proof of VMware infrastructure platform readiness for volume deployment in production for service providers.

To complete a voice call, at a minimum, a small cell radio access node installed on the premises is needed to provide tower access for the air interface. This is not possible in a HOL environment due to various constraints. In this module, you can watch the following video, which shows the entire sequence that takes place, including users making internal and external calls upon completion of service activation.


 

VMware Orchestrated VoLTE Service Delivery (Video 5:42)

 

 

VoLTE Service Deployment

In Module 2 and Module 3, you will perform all the steps as seen in this video until the system is ready for deployment and service activation.

Another key advantage of NFV is the horizontal scaling capability that exists for service providers. Based on service demand, service providers can scale the network by adding additional VNF instances very quickly in an NFV environment without the need to wait for new hardware and network setup. If service providers see a burst in traffic, new vIMS and vEPC instances can be instantiated from the blueprints and deployed using existing infrastructure, saving on CAPEX and OPEX.

In Module 4, you will walk through the steps of not only scale-out, but also scale-in (the ability to conserve capacity when not needed, a function that was infeasible in traditional telephony).

 

Module 2 - TOSCA based Blueprinting & Modeling (30 minutes)

Introduction


This module introduces the new concepts of orchestration, blueprinting, and modeling for NFV.

In the sphere of music, orchestration refers to the identification, organization, and instructions necessary for a collection of musicians to produce a complex musical work. This includes the identification of the types and number of players, and their specific instructions for playing. In computing, the metaphor stops somewhat short of providing the actual "music" that the different components will play together. In computing terminology, orchestration refers to the number and types of components and their interconnections. This involves modeling specific computing platforms, networking components, software components, and the information necessary for all of these to collaborate. After the model is defined, an orchestrator can interpret it. Beyond modeling the computing environment and software installation tasks, an orchestration can define post-deployment policies and actions, which can operate on the runtime system that the orchestration models. For example, the orchestrator might receive CPU metrics and add CPU capacity dynamically. Alternatively, the orchestration might define processes that update software on a portion of the system.


 

What is TOSCA?

TOSCA (Topology and Orchestration Specification for Cloud Applications) is an OASIS specification that defines a language for modeling applications and their associated infrastructure. A TOSCA model (typically rendered in YAML, a human-friendly data serialization standard) describes all components of a system, along with the relationships between them, in a distinctly object-oriented fashion. Each object (or node) in the model represents a thing or relationship to be orchestrated. A node can be a virtual machine, a firewall, a software program, or anything else. Nodes are described as "types" with operations and properties. The model is meant to be the input to an orchestrator, which can interpret it and perform specific "workflows."

Cloudify is such an orchestrator. It understands an input model and then automates the execution of workflows against it. The most fundamental workflow is "install," which traverses the model, orders the instantiation of model components (such as a VM) by honoring defined model relationships (for example, connected-to or contained-in), and executes associated code to render the model on its configured target platform (such as VMware vCloud). Beyond being an executor of workflows, Cloudify also provides a runtime called the Cloudify Manager. The Cloudify Manager runs alongside the systems it deploys and provides services such as metric gathering, a web UI, security, blueprint storage, auto-scaling/healing, and arbitrary automated policy execution.
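
To make nodes, types, properties, and operations concrete, here is a minimal sketch of a single node template in the declarative YAML form that a TOSCA-driven orchestrator such as Cloudify consumes. The node name, property, and script paths are illustrative and are not taken from the lab's actual blueprint:

node_templates:
  app_host:                            # a node representing one virtual machine
    type: cloudify.nodes.Compute       # the type defines which properties and operations apply
    properties:
      install_agent: false             # illustrative property
    interfaces:
      cloudify.interfaces.lifecycle:   # operations the "install" workflow executes in order
        create: scripts/create.sh      # illustrative scripts bundled with the blueprint
        start: scripts/start.sh

The install workflow walks the full set of such nodes, honors their relationships, and invokes each node's lifecycle operations on the target platform.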

From the perspective of NFV, TOSCA and Cloudify provide an open, neutral language and platform for describing, deploying, and managing VNF forwarding graphs or service chains. Due to the open, unopinionated nature of the platform, TOSCA and Cloudify are a powerful way to model and realize network applications involving virtual and non-virtual network functions and arbitrary related software in a VIM/Cloud-neutral way.

The process of TOSCA VNF blueprinting refers to the creation of a YAML descriptor and related artifacts. The YAML descriptor and related artifacts are together called a "blueprint." The blueprint can also reference plugins to provide multi-VIM support and other capabilities. The YAML descriptor defines all components of an architecture, including VMs, VNF images, network definitions and relationships, and whatever scripting or other configuration is needed to prepare the system to run. Without such a model, a typical ad-hoc approach consists of a proprietary hybrid of scripts and configuration. The YAML model provides a versionable artifact that describes a complex deployment.
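
At the top level, such a descriptor is organized into a small number of standard sections. The skeleton below is a hedged illustration of that layout; the DSL version, import URL, and input and output names are placeholders rather than the values used in this lab's blueprint:

tosca_definitions_version: cloudify_dsl_1_3

imports:
  # type and plugin definitions; a VIM plugin (for example, one targeting
  # vCloud Director) is what lets the same blueprint drive different platforms
  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml

inputs:
  vapp_catalog:                  # illustrative input; real blueprints expose VIM- and VNF-specific parameters
    type: string

node_templates:
  epc_host:                      # VM, network, and VNF nodes and their relationships are declared here
    type: cloudify.nodes.Compute

outputs:
  epc_host_ip:                   # values published once the deployment is running
    value: { get_attribute: [ epc_host, ip ] }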

 

Solution Blueprinting


You are a network engineer for Rainpole Telecom and you need to understand blueprinting and modeling for your new VoLTE service. Rainpole Telecom is currently utilizing Cloudify as an orchestration platform and Athonet VNFs for their VoLTE service delivery. Modeling is accomplished with the creation of a YAML blueprint based on TOSCA principles. As such, nodes for each automated component must be defined. In this module, we will deploy a blueprint model and workflow.

A typical blueprint creation workflow includes:

The vEPC and vIMS blueprints that have been created are based on Athonet VNF components, which are modeled and available based on OS and application software binaries. The vEPC blueprint is based on the VNFs for the following core network nodes: MME, S-GW, P-GW, and PCRF. The vIMS blueprint is based on the SBC and I/P-CSCF VNFs.


 

Open Chrome Browser from Windows Quick Launch Task Bar

 

Click on the Chrome Icon on the Windows Quick Launch Task Bar

 

 

Upload a Blueprint

 

1. In the bookmark bar, click Cloudify UI.

2. After the Cloudify UI is open, you will need to login.

Username: admin     Password: VMware1!

3. Click Login.

 

 

Graphical view

 

The graphical view of the topology described in the blueprint gives a high-level view of the deployment model. This view shows compute-related components such as virtual machines and the VNFs (vEPC and vIMS) resident on those machines. Connected-to relationships are represented by lines. Contained-in relationships are represented by drawing the contained component inside the container. Note the vEPC inside the epc_host as an example of a contained-in relationship.
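
In the underlying YAML, those same relationships are declared explicitly on the node templates. Below is a hedged sketch of what the vEPC portion might look like; the network node name and the exact relationship list are illustrative, not copied from the lab blueprint:

node_templates:
  epc_host:
    type: cloudify.nodes.Compute

  signaling_network:                                 # illustrative network node
    type: cloudify.nodes.Network

  vEPC:
    type: cloudify.nodes.SoftwareComponent
    relationships:
      - type: cloudify.relationships.contained_in   # drawn as vEPC nested inside epc_host
        target: epc_host
      - type: cloudify.relationships.connected_to   # drawn as a line in the graphical view
        target: signaling_network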

 

 

Source view

 

1. Scroll down on the current page to find a section called Blueprint Sources.

2. Select blueprint.yaml from the file tree to scroll through the components that make up this service.

The main part of a blueprint is written in YAML, using a declarative DSL (domain-specific language). It describes the logical representation of an application, which is called a topology. In a blueprint, you describe the application’s components, how they relate to each other, how they are installed and configured, and how they are monitored and maintained.

Other than the YAML itself, a blueprint can comprise multiple resources such as configuration and installation scripts (or Puppet Manifests, Chef Recipes, and so on), code, and basically any other resource you require for running your application.

All files in the directory that contains the blueprint file are also considered part of the blueprint, and paths described in the blueprint are relative to that directory.
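
As a hedged illustration of that packaging, a blueprint directory might be laid out as sketched below, with the YAML referring to its scripts by paths relative to that directory (the file and node names here are examples, not the lab's actual files):

# blueprint directory layout (illustrative)
#   blueprint.yaml
#   scripts/
#     configure_epc.sh
#     start_epc.sh

node_templates:
  vEPC:
    type: cloudify.nodes.SoftwareComponent
    interfaces:
      cloudify.interfaces.lifecycle:
        configure: scripts/configure_epc.sh   # path is relative to the blueprint directory
        start: scripts/start_epc.sh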

You can scroll through the config source view to see all the configuration parameters.

 

 

Conclusion

You should now have a firm understanding of the TOSCA blueprinting model and its components as well as the role that TOSCA plays in the service lifecycle.

Now let's move to Module 3 to learn about deploying blueprints and enabling service activation, the last step toward enabling voice calls.

 

 

Module 3 - VoLTE Solution Deployment (30 minutes)

Introduction


In this module, let's revisit the Network Functions Virtualization Infrastructure (NFVI).


 

Network Functions Virtualization Infrastructure (NFVI)

 

The virtual infrastructure is set up in a hardware-agnostic manner, using compute and network virtualization provided by the vSphere and VMware NSX platforms. The VNF software runs as application software, fully decoupled from the operating system. Thus, each VNF runs as a native application. The Cloudify orchestration application is deployed as a VM. Blueprints of deployment profiles are pre-built. A specific workflow profile is selected from the Cloudify orchestration interface. Upon uploading the profile, the Topology view shows the various VNF interfaces, relationships, and connectivity. This provides a point where users can modify the blueprint if any changes are needed.

The system now performs automatic validation of configuration parameters, and service provisioning is initiated for each vApp and VNF. Based on the properties and entity relationships defined in the blueprint, the applications are automatically configured, network IP addresses are assigned, the needed OS customization and fine-tuning is completed, and the VNFs are powered up automatically. As deployment starts, the vIMS vApp is deployed first. Because vEPC is deployed on top of it, characteristics that were not known before vIMS deployment are automatically collected in the blueprint and fed into the vEPC system so that the correct operational entity relationships are established dynamically. Access to the HSS and the network is also programmed dynamically in the blueprint to enable the VNFs to contact the HSS and register themselves as part of the service activation process. The entire process provides asynchronous updates in the Cloudify user interface, and vCloud Director can be refreshed to see status updates.
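
In blueprint terms, that dynamic hand-off is typically expressed with relationships and intrinsic functions such as get_attribute, which pull runtime values from one node into another node's configuration. The sketch below follows the lab's topology names where they are known (epc_host, vEPC, vIMS), but the ims_host node, the attribute, the script path, and the input name are assumptions for illustration only:

node_templates:
  vEPC:
    type: cloudify.nodes.SoftwareComponent
    interfaces:
      cloudify.interfaces.lifecycle:
        configure:
          implementation: scripts/configure_epc.sh
          inputs:
            # runtime value published by the vIMS side once it is deployed
            ims_signaling_ip: { get_attribute: [ ims_host, ip ] }
    relationships:
      - type: cloudify.relationships.contained_in
        target: epc_host
      - type: cloudify.relationships.connected_to   # also hints the orchestrator to bring vIMS up first
        target: vIMS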

 

Solution Deployment


You are a network engineer for Rainpole Telecom and you need to understand the orchestration and deployment model for your new VoLTE service. Rainpole Telecom is currently utilizing Cloudify as an orchestration platform and Athonet VNFs for their VoLTE service delivery. Modeling is accomplished by the creation of a YAML blueprint based on TOSCA principles. As such, nodes for each automated component must be defined. In this module, we will deploy a blueprint model and workflow. To begin this process, we must log in to the Cloudify Console.


 

Open Chrome Browser from Windows Quick Launch Task Bar

 

1. Click on the Chrome Icon on the Windows Quick Launch Task Bar

 

Module 4 - Lifecycle Management of a VoLTE core (30 minutes)

Introduction


Lifecycle management is provided from the NFV management and orchestration (MANO) layer of the ETSI architecture. Lifecycle management is a set of functions required to manage the instantiation, maintenance, and termination of VNFs. It includes functions such as auto-scaling, self-healing, VNF retirement, and day-2 operational aspects such as patching and upgrades across the solution base. We will learn about scaling and VNF retirement in this module.


Auto-Scaling


Initial deployment and service activation sets the system in motion to quickly provide value added services to customers. One of the key advantages of NFV is the horizontal scaling capability that exists for service providers. Based on service demand, service providers can scale the network by adding additional VNF instances very quickly in an NFV environment without the need to wait for new hardware and network setup. If service providers see a burst in traffic, new vIMS and vEPC instances can be instantiated from the blueprints and deployed using existing infrastructure, saving on CAPEX and OPEX.

As the deployment runs, it is monitored for performance. If the deployment needs to be scaled out to handle more load (in this case, more calls), that activity can be performed through the Cloudify Manager interface. If it is determined that the platform is underutilized, it can also be scaled back in. In this lab, a scale-out event entails adding a vEPC and a vIMS VNF to the running deployment service chain. Conversely, a scale-in event reduces each of those VMs by one (assuming there is more than one of each before the event is initiated).
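
In Cloudify's DSL, the unit that the Scale workflow grows or shrinks is typically declared in the blueprint as a group with a scaling policy. The sketch below assumes the group is the scale_group referenced later in this module; the member names and initial instance count are illustrative rather than the lab's actual values:

groups:
  scale_group:
    members: [ epc_host, ims_host ]      # one vEPC VM and one vIMS VM per group instance

policies:
  scale_policy:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1               # the initial deployment starts with one of each
    targets: [ scale_group ]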

In this module, we will walk through the steps to scale the current environment.


 

Open Chrome Browser from Windows Quick Launch Task Bar

 

1. Click on the Chrome Icon on the Windows Quick Launch Task Bar

 

 

Login to Cloudify

 

1. Click on Cloudify UI from the Bookmarks bar

2. Enter credentials to login to Cloudify UI

Username: admin    Password: VMware1!

3. Click Login

 

 

Open Cloudify

 

1. After the Cloudify Console is open, click Deployments in the left menu.

2. Then select demo-deployment

 

 

Execute Workflow

 

1. Click Execute workflow

 

 

Start execution

 

1. A new window will open. Click the drop-down menu and select Scale.

 

 

Scale-out

 

  1. If you were to actually scale out the deployment, you would enter scale_group in the node_id field and a positive number in the delta field, as shown in the sketch after this list.
  2. However, for the purposes of this lab, do not enter a value. Click Cancel.
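
For reference, the parameters behind that dialog map to something like the following. The exact parameter set can vary between Cloudify versions, so treat this as a sketch rather than the precise form used in this lab:

# scale-out: add one instance of the scaling group
node_id: scale_group
delta: 1

# scale-in (used later in this module): remove one instance
# node_id: scale_group
# delta: -1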

 

 

Scale-out activity

 

Scale-out activity launches a workflow in which Cloudify Manager contacts vCloud Director and instantiates two new VMs, one for vEPC and one for vIMS. The existing VMs remain unchanged. The VMs are created and started, operating system images are installed, network interfaces are deployed, and applications are installed and configured. The two new instances would then join the pool of the original running instances, effectively doubling the pool capacity.

In the Topology view of the deployment, the green numbers increment over the vEPC and vIMS VMs.

Now let's walk through the scale-in process.

 

 

Execute workflow

 

Scroll back to the top.

1. Click Execute Workflow.

2. Click Scale

 

 

Scale-in

 

  1. If you were to actually scale in the deployment, you would enter scale_group in the node_id field and a negative number in the delta field.
  2. However, for the purposes of this lab, do not enter a value. Click Cancel.

 

 

Scale-in activity

 

Scale-in activity launches a workflow in which Cloudify Manager contacts vCloud Director; one of the vIMS instances and one of the vEPC instances are unregistered from the pool, stopped, and their VMs deleted.

In the Topology view of the deployment, the tiny green numbers decrease over the vEPC and vIMS VMs.

 

 

Conclusion

When a deployment is in production, it is rarely just deployed and left to run. These manual operations demonstrate how readily this VoLTE deployment can be expanded to accommodate additional calls and scaled back in when demand decreases. By leveraging these capabilities, you can adapt the infrastructure to exactly what is needed and make the most efficient use of the cloud infrastructure.

 

Retiring Applications



 

Open Chrome Browser from Windows Quick Launch Task Bar

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Login to Cloudify

 

1. Click on Cloudify UI from the Bookmarks bar

2. Enter credentials to login to Cloudify UI

Username: admin    Password: VMware1!

3. Click Login

 

 

Open Cloudify

 

1. After the Cloudify Console is open, click Deployments in the left menu.

2. Then select demo-deployment

 

 

Execute workflow

 

1. Select Execute workflow for the demo-deployment deployment.

 

 

Start execution

 

1. A new window will open. Click the drop-down menu and select Uninstall.

 

 

Confirm

 

1. Select Execute.

 

 

Verify Success

 

After you click Execute, you are returned to the Topology view and the green check marks will disappear.  Scroll down to Deployment Events to validate that the workflow has succeeded.

This process can take up to 10 minutes.

 

 

Start deployment removal process

 

Now that the uninstall has occurred, let’s retire the deployment.

1. To begin, click Deployments in the left menu.

2. Select the three lines in the top right of demo-deployment

3. Select Delete

 

 

Confirm delete

 

1. Select Yes.

 

 

Conclusion

 

You will now see that demo-deployment has been removed from the list, completing the removal of the deployment.

Upon completion of this lab module, you have seen how to manage the lifecycle of the deployment in the platform. You have scaled it out, scaled it back in, and finally deleted it.

 

Conclusion

Lab Conclusion


Now that you have completed all four modules of this lab, you should have a clearer understanding of application modeling, application blueprinting, provisioning, configuration, VNF deployment, and end-to-end service activation, and of how live mobile calls using 4G LTE devices can be made using the vCloud NFV platform.

While this deployment is focused on a VoLTE service, these steps and methods can be applied to any VNF-based service deployment on this platform, whether it is for vEPC- or vCPE-based solutions.


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-1886-01-EMT

Version: 20170920-152745