VMware Hands-on Labs - HOL-1730-USE-2


Lab Overview - HOL-1730-USE-2 - Cloud Native Apps With Photon Platform

Lab Guidance


Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time.  The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Photon Platform is a distributed, multi-tenant host controller optimized for containers.

The objective of this lab is to provide an introduction to Photon Platform constructs and architecture, then deep dive into how to consume Infrastructure as a Service (IaaS) using this platform.  Finally, the user will learn how to deploy OpenSource frameworks and applications onto Photon Platform using standard deployment methods for the frameworks.

Lab Module List:

 Lab Captains:

 

This lab manual can be downloaded from the Hands-on Labs Document site found here:

[http://docs.hol.pub/HOL-2017]

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the Right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved.  All your work must be done during the lab session.  But you can click the EXTEND button to increase your time.  If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes.  Each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs take advantage of this benefit, allowing us to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides typing it in directly, there are two very helpful methods that make it easier to enter complex data.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Click once in active console window

 

In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.

  1. Click once in the active console window.
  2. Click on the Shift key.

 

 

Click on the @ key

 

  1. Click on the "@" key.

Notice the @ sign entered in the active console window.

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Module 1 - What is Photon Platform (15 minutes)

Introduction


This module will introduce you to the new operational model for cloud native apps.  You will walk through the Photon Platform control plane management architecture and will get a guided introduction to image management, resource management and multi-tenancy.  You will use a combination of the Management UI and CLI to become familiar with Photon Platform.   For a detailed dive into the platform, proceed to Module 2 - Cloud Admin Operations.

1) What is Photon Platform and what is the architecture

2) Cloud Administration - Multi-Tenancy and Resource Management in Photon Platform

3) Cloud Administration - Images and Flavors.


What is Photon Platform - How Is It Different From vSphere?


The VMware Photon Platform is a new infrastructure stack optimized for cloud-native applications. It consists of Photon Machine and the Photon Controller, a distributed, API-driven, multi-tenant control plane that is designed for extremely high scale and churn.

Photon Platform has been open sourced so we could engage directly with developers, customers and partners.   If you are a developer interested in forking and building the code or just want to try it out, go to vmware.github.com

Photon Platform differs from vSphere in that it has been architected from the ground up to provide consumption of infrastructure through programmatic methods.   Though we provide a Management UI, the primary consumption model for DevOps will be through the REST API directly or the CLI built on top of it.

The platform has a native multi-tenancy model that allows the admin to abstract and pool physical resources and allocate them into multiple Tenant and Project tiers.  Base images used for VM and Disk creation are centrally managed and workload placement is optimized through the use of Linked Clone (Copy On Write) technology.  

The Control plane itself is architected as a highly available, redundant set of services that facilitates large numbers of simultaneous placement requests and prevents loss of service.  

Photon Platform is not a replacement for vCenter.   It is designed for a specific class of applications that require support for the services described above.  It is not feature compatible with vCenter, and does not implement things like vMotion, HA, FT - which are either not a requirement for Cloud Native Applications, or are generally implemented by the application framework itself.

The High Level architecture of the Photon Controller is as shown on the next page.


 

Photon Platform Overview - High Level Architecture     (Developer Frameworks Represent a Roadmap.  Not all are implemented in the Pre-GA Release)

 

 

 

Cloud Administration - Multi-Tenancy and Resource Management


Administration at cloud scale requires new paradigms.  Bespoke VMs nurtured through months or years are not the norm.  Transient workloads that may live for hours, or even minutes, are the order of the day.  DevOps processes that create continuous integration pipelines need programmatic access to infrastructure and resource allocation models that are dynamic, multi-tenant, and do not require manual admin intervention.  Photon Platform implements a hierarchical tenant model.   Tenants represent a segmentation between companies, business units or teams.  Cloud resources are allocated to Tenants using a set of Resource Tickets.  Allocated resources can be further carved up into individual projects within the Tenant.  Let's dive in and explore Multi-tenancy and resource management in Photon Platform.


 

Connect To Photon Platform Management UI

 

  1. From the Windows Desktop, Launch a Chrome or Firefox Web Browser

 

 

Photon Controller Management UI

 

  1. Select the Photon Controller Management Bookmark from the Toolbar or enter http://192.168.120.10 in the browser.

 

 

The Control Plane Resources

 

The Photon Platform environment contains Management Resources and Cloud Resources. Resources designated as Management are used for Control Plane VMs.  Resources designated as Cloud are used for Tenants that will be running applications on the cloud.   In our simple lab deployment we have 2 ESXi hosts and 1 Datastore, and we have designated that all of the resources can be used as both Management and Cloud.  In a Production Cloud, you would tend to separate them.  Our management plane also consists of only a single node.   Again, in a production cloud, you can scale this out significantly to provide multiple API endpoints for consuming the infrastructure and to provide high availability.

  1. Click on Management

Note1:  We are seeing some race conditions in our lab startup.  If you see no Host or Datastore data in this screen, you will need to restart the Photon Controller Management VM.  Details are in the next step.

Note2: If the browser does not show the management panel on the left, then change the Zoom to 75%.  Click on the 3-bar icon on the upper right and find the Zoom.

 

 

Execute This Step Only If You Had No Host or Datastore Data In The Previous Screen

 

From the Windows Desktop:

  1. Click on the Putty Icon
  2. Select PhotonControllerCLI connection
  3. Click Open  - You are now in the PhotonControllerCLI VM
  4. ssh into the PhotonController Management VM.  Execute:  ssh esxcloud@192.168.120.10   Password is vmware
  5. You must change to the root user.  Execute:  su    Password is vmware
  6. Reboot the VM.  Execute:  reboot     This should take about 2 minutes to complete.
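
Taken together, the recovery sequence from the Putty session looks like this (a recap of the steps above, using the lab's default addresses and passwords):

ssh esxcloud@192.168.120.10    # connect to the Photon Controller Management VM (password: vmware)
su                             # switch to the root user (password: vmware)
reboot                         # restart the Management VM; allow about 2 minutes for it to come back up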

 

 

Control Plane Services

 

The Photon Platform Control Plane runs as a set of Java services deployed in Docker containers running in a MGMT VM.   Each MGMT VM runs a copy of these services, and all metadata is automatically synced between the Cloud_Store services running in each VM to provide availability.

  1. Click on Cloud

 

 

Cloud Resources

 

This screen shows the resources that have been allocated for use by applications running on this cloud:

  1. Two hosts have been allocated as available to place application workloads.
  2. One Tenant has been created.  (We will drill further into this in a minute.)
  3. We have set no resource limit on vCPU or Storage, but we have created a Resource-Ticket with a limit of 1000 GB of RAM and allocated all 1000 GB to individual projects.  (You will see the details in a minute.)

 

 

 

 

Tenants

 

  1. Click on Tenants

 

 

Our Kubernetes Tenant

 

We have created a Single Tenant that has been used to create a Kubernetes Cluster (You will use this in Module 3).  You can see that a limit has been placed on Memory resource for this tenant and 100% of that resource has been allocated to Projects within the Tenant.

  1. Click on Kube-Tenant

 

 

Kube-Tenant Detail

 

You can see a little more detail on what has been allocated to the tenant.  The User Interface is still a prototype.  We will use the CLI in module 2 to drill into how these resources are really allocated.

Notice that the Project within the Kube-Tenant is using only 1% of the total Memory allocated to it.   You may have to scroll to the bottom of the screen to see this.

  1. Click on Kube-Project

 

 

Kube-Project Detail

 

At the project detail level we can see the actual consumption of allocated resources and the VMs that have been placed into these allocations.  We have deployed a Kubernetes Cluster, which contains a Master and 2 Worker node VMs.  You will immediately notice that this model is about allocating large pools and managing consumption rather than providing a mechanism for management of individual VMs.   (Note:  These VMs will be used in Module 3.  If you delete them, you will have to restart the lab environment in order to take that module.)

 

 

Kube Tenant Resource-Ticket

 

Remember that resource limits are created for a Tenant by providing the Tenant with one or more Resource-Tickets.  Each Resource Ticket can be carved up into individual projects.  Let's add a Resource-Ticket to Kube-Tenant.

  1. Click on Kube-Tenant and Scroll the screen to the bottom

 

 

Create Resource-Ticket

 

  1. Click on Resource Ticket
  2. Click on the + sign
  3. Enter Resource Ticket Name  (No Spaces in the Name)
  4. Enter numeric values for each field
  5. Click OK
  6. Optionally, Click on Projects and follow the Tenant Create steps to Create a New project to allocate the Resource Ticket to.

You have now made additional resources available to Kube-Tenant and can allocate them to a new Project.  Check the Tenant Details page to see the updated totals.  You can create a new project if you want, but we will not be using it in the other modules.  To do that, click on Projects.
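
If you prefer the command line, the same allocation can be done with the Photon CLI that you will use in Module 2.  This is a sketch only; the ticket and project names and limits below are illustrative and are not used elsewhere in the lab:

photon tenant set kube-tenant
photon resource-ticket create --name demo-ticket --limits "vm.memory 100 GB, vm 500 COUNT"
photon project create --resource-ticket demo-ticket --name demo-project --limits "vm.memory 50 GB, vm 250 COUNT"

Check the Tenant Details page afterwards and you should see the new ticket and project reflected in the totals.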

 

Cloud Administration - Images and Flavors


Continuing on the theme from the previous lesson,  Cloud automation requires abstractions for consumption of allocated resources as well as centralized management of images used for VM and Disk creation.   In this lesson, you will see how Images and Flavors are used as part of the operational model to create Cloud workloads.


 

Images

 

Photon Platform provides a centralized image management system.  Base images are uploaded into the system and can then be used to create both VMs and disks within the environment.  Users can upload either an OVA or VMDK file.   Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository.  The image repository is a set of Datastores defined by the Administrator.  Datastores can be local or shared storage.  When a user creates a VM or disk, a linked clone is created from the base image to provide the new object.  This copy on write technology means that the new disk takes up very little space and captures only the disk changes from the original image. Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.  This is referred to as an EAGER or ON_DEMAND image in Photon Platform

  1.  Click on the gear in the upper right of the screen and then Images.

 

 

Kube-Image

 

You will notice that we have a few images in our system.  The photon-management image was used to create the Control Plane management VMs mentioned in the earlier steps, and the kube image was used for the Kubernetes Cluster VMs you also saw earlier.  You will use the PhotonOS and Ubuntu images in a later module.

1.          Click the X to close the panel.

 

 

Flavors

 

  1. Click on the gear again and then Click Flavors

If the Images panel is still open, close it so that you can see the gear icon again.

 

 

Kube-Flavor

 

Flavors need a bit of explanation.  There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors.  Ephemeral disks are what you are used to with your current ESXi environment.  They are created as part of the VM create and their lifecycle is tied to the VM.  Persistent disks can be created independently of any VM and then subsequently attached/detached.  A VM can be created, a persistent disk attached, then if the VM dies, the disk could be attached to another VM.  Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes.    You will specify the vm and disk flavors as part of the VM or Disk creation command.
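
For reference, flavors of each kind are defined with the photon flavor create command that you will run in Module 2.  A minimal sketch (the names and sizes here are illustrative):

photon -n flavor create -n demo-vm -k vm -c "vm.cpu 1 COUNT,vm.memory 1 GB"
photon -n flavor create -n demo-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"
photon -n flavor create -n demo-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

The -k option selects the flavor kind, and -c lists the costs that VMs or disks created from the flavor charge against the Tenant's Resource Ticket.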

  1. In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node vms.  Notice that the Master node Flavor will create a larger VM than the other Flavors
  2. Click on Ephemeral Disks

 

 

Ephemeral Disk Flavors

 

Notice that we have four Ephemeral Disk Flavors in our environment.  We haven't done much with them here, but there are two primary use cases for Disk flavors.   The first is to associate a cost with the storage you are deploying, in order to facilitate Chargeback or Showback.   The second use case is Storage Profiles.  Datastores can be tagged based on whatever criteria may be needed (Availability/Performance/Cost/Local/Shared/etc.) and the flavor can specify that tag.   The tag will become part of the scheduling constraints when Photon Platform attempts to place a disk.     Persistent disks work the same way.  Though we haven't yet created a persistent disk, we will do so in Module 2.

 

 

Persistent Disk Flavors

 

  1. Click on Persistent Disks

We have created a single persistent disk flavor for you.  It is used in our Kubernetes Cluster.  You will create another Flavor when you create persistent disks in Module 2.

 

Conclusion


Cloud Scale administration requires a different way of operating.  Administrators do not have the luxury of meticulously caring for individual VMs.  There are just too many of them, and they tend to have short lifetimes.  Administration is about thinking at scale - abstracting huge amounts of physical resources, pooling them together and then allocating parts of the pools to entities that consume them through programmatic interfaces.

You now have a basic understanding of what Photon Platform is - and how it is different from vSphere.  You have seen that the operational model for administrators is very different from what you might be used to with UI driven management through vCenter.  You have been introduced to Multi-Tenancy and a new paradigm for resource allocation through Resource Tickets, as well as a different consumption model using Images and Flavors.  

In Module 2, you will deep dive into the Infrastructure As A Service components of Photon Platform.


 

You've finished Module 1

 

Congratulations on completing  Module 1.

If you are looking for additional information on Photon Platform, see the resources referenced earlier in this manual.

Proceed to any module below which interests you most.

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 2 - Cloud Admin Operations With Photon Platform - IaaS Deep Dive (60 minutes)

Introduction


This module will engage you in the Cloud Native operational model by setting up the environment and deploying a container application through the Photon Platform API.  You will learn how to define tenant resources, create images, flavors, VMs, and networks.  You will also be introduced to persistent disks, which are independent of your VM lifecycle and extend Docker volumes to multiple hosts.  You will use both the CLI and management UI in performing these tasks.   Finally, you will build an application with Nginx to display a web page with port mapping to show some basic networking capabilities.  Basic troubleshooting and monitoring through LogInsight and Grafana will also be performed.

1)     Multi-tenancy and Resource management in Photon Platform

You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets and carve those resources into individual projects.  This lesson will also provide you with a basic overview of working with the CLI.

2)     Set up Cloud VM operational elements through definition of base images, flavors, networks and disks

Photon Platform includes centralized management of base images used for VM and Disk creation.  You will be introduced to managing those images.  VM and disk profiles are abstracted through a concept called Flavors.  You will see how to define those flavors, as well as use them to create VMs and Persistent disks.    You will create a network and combine it with a Flavor and Image to create a VM.    (Note: ESXi Standard networking is used in this lab, however NSX support is also available)

3)     Persistent disks enable container restart across hosts

Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of  a VM.  You will create a Persistent disk and see that it can be attached to a VM, then detached and reattached to a second VM.  You will combine this with Docker Volumes to allow container data to persist across hosts.

4)     Monitor and Troubleshoot Applications running on Photon Platform

See how Photon Platform integration with LogInsight and Graphite/Grafana simplify Troubleshooting and Monitoring of applications across distributed infrastructure.

     

     


Multi-Tenancy and Resource Management in Photon Platform


You will use the Photon Platform CLI to create tenants, allocate resources (CPU, Memory, storage) through the use of Resource Tickets and carve those resources into individual projects.  This lesson will also provide you with a basic overview of working with the CLI.


 

Login To CLI VM

 

Photon Platform CLI is available for MAC, Linux and Windows.  For this lab, the CLI is installed in a Linux VM.  

From the Windows Desktop:

  1. Click on the Putty Icon
  2. Select PhotonControllerCLI connection
  3. Click Open

Authentication should be done through SSH keys; however, if you are prompted for a password, use vmware.

 

 

Verify Photon CLI Target

 

The Photon Platform CLI can be used to manage many instances of the Control Plane, so you must point it to the API Endpoint for the Control Plane you want to use.

  1. Execute the following command:  
photon target show

It should point to the endpoint referenced in the image. If it does not then execute:  

photon target set http://192.168.120.10:9000

Note:  If you are seeing strange HTTP: 500 errors when executing photon CLI commands, then execute the next step.   We are sometimes seeing race conditions on startup of the labs that require a reboot of the Photon Controller services.

 

 

Execute This Step Only If You Had photon HTTP Errors In The Previous Step

 

  1. ssh into the PhotonController Management VM.  Execute:  ssh esxcloud@192.168.120.10  Password is vmware
  2. You must change to the root user.  Execute:  su    Password is vmware
  3. Reboot the VM.  Execute: reboot      This should take about 2 minutes to complete
  4. Now return to the previous step that caused the HTTP: 500 error and try it again.

 

 

Photon CLI Overview  

 

The Photon CLI has a straightforward syntax.  It is the keyword "photon", followed by the type of object you want to work on (vm, disk, tenant, project, etc.) and then a list of arguments.   We will be using this CLI extensively in the module.  Context sensitive help is available by appending -h or --help onto any command.

  1. Execute:  
photon -h

Note:  If you experience problems with keyboard input not showing up in the Putty session, this is probably because the Taskbar is blocking the command prompt.

Type clear and hit Return to move the prompt to the top of the screen.

 

 

Photon CLI Context Help

 

From that list we might want to take action on a VM.  So let's see the command arguments for VMs.

  1. Execute:  
photon vm -h

As we go through the module, use the help command to see details of the actual commands you are executing.

 

 

Create Tenant

 

Photon Platform implements a hierarchical tenant model.   Tenants represent a segmentation between companies, business units or teams.  Cloud resources are allocated to Tenants using a set of Resource Tickets.  Allocated resources can be further carved up into individual projects within the Tenant.

Let's start by creating a new Tenant for our module.

  1. Execute the following command:  
photon tenant create lab-tenant

Hit Return on the Security Group Prompt.  Photon Platform can be deployed using external authentication.  In that case you would specify the Admin Group for this Tenant.  We have deployed with no authentication to make the lab a little easier.

Once you have created the Tenant, you must set the CLI to execute as that Tenant.   You can do this or refer to the Tenant with CLI command line switches.  There is an option to enable Authentication using Lightwave, the Open Source Identity Management Platform from VMware.  We have not done that in this lab.

  2. Execute the following command:  
photon tenant set lab-tenant

 

 

Create Resource Ticket

 

Creating a Resource Ticket specifies a pool of resources that are available to the Tenant and can later be consumed through the placement of workloads in the infrastructure.

  1. Execute the following command:  
photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"

2.     To view your Resource Tickets, Execute the following command: 

photon resource-ticket list

We have allocated 200 GB of memory and placed a limit of 1000 VMs for this Tenant.  Other resources are unlimited because we have not specified a Limit.

3.     Also note the Entity UUID printed after the command completes.  You will use UUIDs to manipulate objects in the system, and they can always be found by using "photon entity-type list" commands.  "Entity-type" can be one of many types, like:  vm, image, resource-ticket, cluster, flavor, etc.

 

 

Create Project

 

Tenants can have many Projects.  In our case, we are going to create a single project within the lab-tenant Tenant.  This project will only be allocated a subset of the resources already allocated to the Tenant.  Notice that the Tenant has a limit of 200GB and 1000 VMs, but the project can only use 100GB and create 500 VMs.

1.     To create the Project, Execute the following command:  

photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"

2.     To view your Projects, Execute the following command: 

photon project list   

       Notice that you can see the Limit that was set and the actual Usage of the allocated resources.

3.     To Set the CLI to the Project, Execute the following command:  

photon project set lab-project

Now we have a Tenant with resources allocated to it and a Project that can consume those resources.  Next we will create objects within the Project.
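
As a recap, the entire tenant setup you just performed is only five commands:

photon tenant create lab-tenant
photon tenant set lab-tenant
photon resource-ticket create --name lab-ticket --limits "vm.memory 200 GB, vm 1000 COUNT"
photon project create --resource-ticket lab-ticket --name lab-project --limits "vm.memory 100 GB, vm 500 COUNT"
photon project set lab-project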

 

Set Up Cloud VM Operational Elements Through Definition of Base Images, Flavors, Networks and Persistent Disks


Photon Platform includes centralized management of base images used for VM creation.  You will be introduced to managing those images.  VM and disk profiles are abstracted through a concept called Flavors.  You will see how to define those flavors, as well as use them to create VMs and Persistent disks.    You will create a network and combine it with a Flavor and Image to create a VM.    (Note: ESXi Standard networking is used in this lab, however NSX support is also available)


 

View Images

 

Photon Platform provides a centralized image management system.  Base images are uploaded into the system and can then be used to create VMs within the environment.  Users can upload either an OVA or VMDK file.   Once a VM is deployed, and potentially modified, its disk can be saved as an image in the shared image repository.  The image repository is a set of Datastores defined by the Administrator.  Datastores can be local or shared storage.  When a user creates a VM, a linked clone is created from the base image to provide the new object.  This copy on write technology means that the new disk takes up very little space and captures only the disk changes from the original image.  Users can optimize the images for performance or storage efficiency by specifying whether the image should be copied to Cloud datastores immediately on upload or only when a placement request is executed.

1.       To see the images already uploaded, execute the following command:  

photon image list

Do not upload an image in this environment because of bandwidth constraints; however, the command to do it is:   photon image create filename -name PhotonOS

Notice that your photon image list command shows several images that have been uploaded for you.  1) photon-management is the image used to create the original management plane VMs and any new management VMs that you add in the future.  2) kube is the boot image for the nodes in a running Kubernetes Cluster that you will use in Module 3    3) PhotonOS is the latest version of our Photon Linux distro, which ships with Docker configured and is optimized for container deployment.   You will use this image later in this module.

Each image has a Replication Type; EAGER or ON_DEMAND.  EAGER images are copied to every datastore tagged as CLOUD, so VMs can be cloned very quickly - at the expense of storing many copies of the image.   ON_DEMAND images are downloaded to the datastore where the scheduler decided on placement at the time of the placement.  The creation takes longer, but storage usage is more efficient.

2.     To see more detail on a particular image, execute the following command:  

photon image show  "UUID of image"     UUID of the image is in the photon image list command results.

 

 

View Flavors

 

Flavors need a bit of explanation.  There are three kinds of Flavors in Photon Platform: VM, Ephemeral Disk and Persistent Disk Flavors.  Ephemeral disks are what you are used to with your current ESXi environment.  They are created as part of the VM create and their lifecycle is tied to the VM.

Persistent disks can be created independently from any VM and then subsequently attached/detached.  A VM can be created, a persistent disk attached, then if the VM dies, the disk could be attached to another VM.  

Flavors define the size of the VMs (CPU and RAM), but also define the characteristics of the storage that will be used for ephemeral (Boot) disks and persistent storage volumes.    You will specify the vm and disk flavors as part of the VM or Disk creation command.  

  1. To view existing Flavors, Execute the following command:  
photon flavor list

In our environment we have created specific VM flavors to define the size of our Kubernetes Master and Worker node vms.  Notice that the Master node Flavor will create a larger VM than the other Flavors.

 

 

Create New Flavors

 

We are going to create 1 of each type of Flavor to be used in this module:

1.      Execute:

photon -n flavor create -n my-vm -k vm -c "vm.cpu 1 COUNT,vm.memory 1 GB"

VMs created with this Flavor will have 1 vCPU and 1 GB of RAM

2.      Execute:

photon -n flavor create -n my-pers-disk -k persistent-disk -c "persistent-disk 1.0 COUNT"

This Flavor could have been tagged to match tags on Datastores so that storage Profiles are part of the Disk placement.  In this case we have simply added a COUNT.  This could be used as a mechanism for capturing Cost as part of a Chargeback process.

3.     Execute:

photon -n flavor create -n my-eph-disk -k ephemeral-disk -c "ephemeral-disk 1.0 COUNT"

 

4.      To easily see the Flavors you just created, execute: 

photon flavor list |grep my-

 

 

Create Networks

 

By default Photon Controller will discover the available networks on your Cloud Hosts and choose one of them for VM placement.  To limit the scope of this discovery, you can create a network object and reference it when creating a vm or cluster.  This network object is also the basis for creating logical networks with NSX.  That functionality will be available shortly after VMworld 2016.   In our lab environment, there is only one Portgroup available, so you wouldn't actually need to specify a network in your VM create command, but we are going to use it to show the functionality.  We have already created this network for you.

1.     If you needed to create a network you would issue the following command:    photon network create -n lab-network -p "VM Network" -d "My cloud Network"

The -p option is a list of the portgroups that you want to be used for VM placement.  It's essentially a whitelist of networks available to the scheduler when evaluating where to place a VM.   The -d option is just a description of your network.

2.     To easily see the Network we have created, execute:

photon network list

 

 

Create VM

 

We are now ready to create a VM using the elements we have gone through in the previous steps.

 

1.      Execute the following command:  

photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network  -i UUID of your PhotonOS image

Note:  You can get the UUID of your network with the command:  photon network list and the UUID of your image with the command: photon image list

Let's break down the elements of this command.  --name is obvious: it's the name of the VM.  --flavor says to use the my-vm flavor you defined above to size the RAM and vCPU count.  --disks is a little confusing.  disk-1 is the name of the ephemeral disk that is created.  It will be created using the my-eph-disk flavor you created earlier.  We didn't do much with that flavor definition, however it could have defined a Cost for Chargeback, or been tagged with a storage profile.  The tag would have been mapped to a datastore tag and would be part of the scheduling constraints used during VM placement.  boot=true means that this is the boot disk for this VM.  -w is optional and contains the UUID of the network you just created.  -i is the UUID of the Image that you want to use.  In this case, we want to use the PhotonOS image.  To get the UUID of the image, execute:   photon image list
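
If you would rather not copy the UUIDs by hand, you can capture them into shell variables first.  This is only a sketch; it assumes the UUID is printed in the first column of each list output, so verify the column against your own output before relying on it:

NET_UUID=$(photon network list | awk '/lab-network/ {print $1}')    # assumes UUID is column 1
IMG_UUID=$(photon image list | awk '/PhotonOS/ {print $1}')         # assumes UUID is column 1
photon vm create --name lab-vm1 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w $NET_UUID -i $IMG_UUID

If a pattern matches more than one line (for example, more than one image containing "PhotonOS"), pick the UUID manually instead.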

 

 

 

Create a Second VM

 

This VM will be used later in the lab, but it's very easy to create it now.

2.        Execute the following command:  

photon vm create --name lab-vm2 --flavor my-vm --disks "disk-1 my-eph-disk boot=true" -w UUID of your Network  -i UUID of your PhotonOS image

Note:  The easiest way to create this is to hit Up Arrow on your keyboard to get to the previous photon vm create command.  Then hit left arrow key until you get to the name and change the 1 to a 2.  Finally hit Return to execute.  

 

 

Start VM

 

The VMs were created, but not powered on.  We want to power on the first VM only.  The second VM needs to stay powered off for now.

1.        To start the VM, execute:  

photon vm start UUID of lab-vm1

The UUID of the VM is at the end of the Create VM command output.   You can also get it by executing photon vm list

 

 

Show VM details

 

More information about the VM can be found using the show command.

1.      To show VM details, execute:  

photon vm show UUID of lab-vm1

Notice that you can see the disk information and the Network IP.  The IP metadata takes a couple of minutes to migrate from ESXi into the Photon Platform Cloudstore, so you may not see it right away, even if you see it through the vSphere Client.

 

 

Stop VM

 

We are going to shutdown the VM in order to attach a Persistent Disk to it.  Our boot image is not configured to support hot add of storage so we will shut the VM down first.

1.          To Stop the VM, Execute:  

photon vm stop UUID of lab-vm1

 

 

Persistent Disks

 

So far we have created a VM with a single Ephemeral disk.  If we delete the VM, the disk is deleted as well.  In a Cloud environment, there is the need to have ephemeral VMs that may be created/destroyed frequently, but need access to persistent data.  Persistent Disks are VMDKs that live independently of individual Virtual Machines.  They can be attached to a VM, and when that VM is destroyed, can be attached to another newly created VM.  We will also see later on that Docker Volumes can be mapped to these disks to provide persistent storage to containers running in the VM.  Let's create a persistent disk.  
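
Before walking through the individual steps, here is a preview of the full lifecycle using the commands covered in the next few pages (the UUIDs come from photon vm list and photon disk list):

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2
photon vm attach-disk "UUID of lab-vm1" --disk "UUID of disk-2"
photon vm detach-disk "UUID of lab-vm1" --disk "UUID of disk-2"
photon vm attach-disk "UUID of lab-vm2" --disk "UUID of disk-2"

The disk and the data on it survive each detach, which is what lets the web content move between Docker hosts later in this lesson.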

1.         To Create a persistent disk, Execute:  

photon disk create --name disk-2 --flavor my-pers-disk --capacityGB 2

Let's look at the details:  --name is the name of the disk, --flavor says to use the my-pers-disk flavor to define placement constraints, and --capacityGB says the capacity of the disk will be 2 GB.

2.         More information about the disk can be found using:  

photon disk show UUID of the Disk

Notice that the disk is DETACHED, meaning it is not associated with any VM.  Let's ATTACH it to our VM.

 

 

Attach Persistent Disk To VM

 

Now we will attach that newly created persistent disk to the VM we created previously.

1.        To find the VM UUID, Execute:  

photon vm list

2.        To find the Disk UUID, Execute:

photon disk list

3.        To attach the disk to the VM, Execute:

photon vm attach-disk "uuid of lab-vm1" --disk "uuid of disk"

 

 

Show VM Details

 

Now we will see the attached Disk using the VM Show command again.

1.       To Show VM details, execute:  

photon vm show UUID of lab-vm1

Notice that you can see the disk information and both disk-1 (the ephemeral boot disk) and disk-2 (your newly added persistent disk) are attached to the VM.

 

Map Persistent Disks To Docker Volumes To Enable Container Restart Across Hosts


Persistent Disks are different from standard vSphere ephemeral disks in that they are not tied to the lifecycle of  a VM.  You will use your previously created persistent disk to store Web content for Nginx.  Web content stored in an individual container is static.  It must be manually updated or files must be copied in to each container that might present it.  Our content will be presented to the containers through Docker volumes that will be mounted on our persistent disk.  So it can be changed in one place and made available wherever we present it.  We will make changes to the content on one Docker host, then attach the disk to a new host and create a new container on that host.  The website on that host will reflect the changed content.  Docker volumes provide the ability to persist disks across containers.  Photon Platform persistent disks extend that capability across Docker hosts.


 

Deploy Nginx Web Server

 

We will use your two previously created VMs, lab-vm1 and lab-vm2, for these exercises.  Let's start the VM and get the IP address for lab-vm1.

1.      To find the vm UUID, Execute:  

photon vm list

2.      To start lab-vm1, Execute:

photon vm start UUID of lab-vm1

3.      To find the vm IP for lab-vm1, Execute:

photon vm networks UUID of lab-vm1

Note:  It may take a couple of minutes for the IP address to be updated in the Photon Controller Meta Data and appear in this command.  Keep trying, or log into vCenter and grab the IP from there

 

 

Connect to lab-vm1

 

1.       From the CLI, execute:  

ssh root@IP of lab-vm1      password is VMware1!

 

 

Setup filesystem

 

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem.  We have provided a script to execute these steps for you.

1.         To set up the filesystem, Execute:  

./mount-disk-lab-vm1.sh

2.         You will see that the device /dev/sdb is mounted at /mnt/dockervolume.   This is the Persistent disk you previously created.

 

 

Create The Nginx Container With Docker Volume

 

We will now create an Nginx container on our Docker host (lab-vm1).  The container will have a volume called /volume that is mounted on /mnt/dockervolume from the host.  This means that any changes to /volume from the container will be persisted on our physical persistent disk.

1.       To create the nginx container,  Execute:  

docker run -v /mnt/dockervolume:/volume -d -p 80:80 192.168.120.20:5000/nginx

Let's look at this command:  docker run creates a container.  The -v mounts the host directory /mnt/dockervolume at /volume inside the container, so anything written to /volume is persisted on our physical persistent disk.  The -d runs the container detached, in the background, until it is explicitly stopped.  The -p maps container port 80 to port 80 on the host.   So you will be able to access the Nginx Web Server on port 80 from your browser.  Lastly, nginx is the Docker image to use for container creation.  Notice that the image is specified as IP:port/image.  This is because we are using a local Docker registry and have tagged the image with the ip address and port of the registry.
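
Before moving to the browser, you can verify the container from inside lab-vm1.  A quick check (curl is assumed to be present in the PhotonOS image; if it is not, skip straight to the browser test):

docker ps                  # the nginx container should be listed with 0.0.0.0:80->80/tcp
curl http://localhost      # should return the HTML of the Nginx welcome page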

 

 

Verify Webserver Is Running

 

1.       Open one of the Web Browsers on the desktop

2.       Enter the IP address of lab-vm1.  The IP may be different from the one in the image above.  It is the same IP you used in the previous ssh command from the CLI.  The default http port is 80, so you do not need to enter it.  You should see the Nginx homepage.

 

 

Modify Nginx Home Page

 

We will copy the Nginx default home page to our Docker volume and modify it.  Once we have done that, we will move the disk to a new VM, Create a new container with Docker Volume and verify that the changes we made have persisted.  

1.       Connect to your running container.  From the CLI, you should still have an ssh connection to lab-vm1.  Execute:

docker exec -it "first3CharsOfcontainerID" bash

This command says to connect to the container through an interactive terminal and run a bash shell.  You should see a command prompt within the container.  If you cannot find your containerID, Execute:  docker ps to find it

2.        To see the filesystem inside the container and verify your Docker volume (/volume), Execute:  

df

3.        We want to copy the Nginx home page to our Persistent disk.  Execute:  

cp /usr/share/nginx/html/index.html /volume

4.        To Exit the container, Execute:  

exit

 

 

Edit The Index.html

 

You will use the vi editor to make a change to the index.html page.   If you are comfortable with vi and html, then make whatever modifications you want.  These are the steps for a very simple modification.

1.          Execute:  

vi /mnt/dockervolume/index.html

2.          Press the down arrow until you get to line 14 with "Welcome To Nginx"

3.          Press right arrow until you are at the character "N" in "Nginx"

4.          Press the "cw" keys to change word and type "the Hands On Lab At VMWORLD 2016 "

5.          Press the "esc" key and then ":" key

6.          At the ":" prompt, enter "wq" to save changes and exit vi

7.          At the Linux Prompt: Type "exit" to close the ssh session.  You are now back in the Photon CLI

 

 

Detach The Persistent Disk

 

We now want to remove this disk from the VM.  Remember that detaching the disk does not delete it.  Detach the Persistent Disk from lab-vm1

1.          To get the UUID of the lab-vm1, Execute:

photon vm list

2.          To get the UUID of the Persistent Disk, Execute:  

photon disk list

3.          Execute:  

photon vm detach-disk UUID of lab-vm1 --disk UUID of disk-2

Reminder that you can get the UUID of the VM with photon vm list and the UUID of the disk with photon disk list commands.

 

 

Attach The Persistent Disk To New VM

 

You will attach the persistent disk to the lab-vm2 VM you created earlier.

1.          To get the UUID of lab-vm2, Execute:  

photon vm list

2.          To attach the disk to lab-vm2, Execute:

photon vm attach-disk "uuid of lab-vm2" --disk "uuid of disk"

 

 

Start and Connect to lab-vm2

 

1.          To start the VM lab-vm2, Execute:  

photon vm start UUID of lab-vm2

2.          To get the network IP of lab-vm2, Execute:  

photon vm networks UUID of lab-vm2

Note: You may have to wait a minute or two for the IP to appear.  If you are impatient you can open the vsphere client and get it there.

3.          From the CLI, execute:  

ssh root@IP of lab-vm2      password is VMware1!

 

 

Setup Filesystem

 

The storage device is attached to the VM, however we still need to format the disk and mount the filesystem.  We have provided a script to execute these steps for you. Note that you must run mount-disk-lab-vm2.sh, not mount-disk-lab-vm1.sh, on this VM; mount-disk-lab-vm1.sh will reformat the disk and you will not see the changes you made.

1.          To set up the filesystem, Execute:

 ./mount-disk-lab-vm2.sh

You will see that the device /dev/sdb is mounted at /mnt/dockervolume.  

 

 

Create The New Nginx Container

 

We will now create a new Nginx container on our second Docker host (lab-vm2).  This container will have a volume called /usr/share/nginx/html that is mounted on /mnt/dockervolume from the host.  Nginx uses /usr/share/nginx/html as the default path for the web content it serves.  So our changed home page on the persistent disk will be used as the default page.

1.          To create the nginx container,  Execute:  

docker run -v /mnt/dockervolume:/usr/share/nginx/html -d -p 80:80 192.168.120.20:5000/nginx
To return to the Photon CLI, type: exit

Let's look at this command:  docker run creates a container.  The -v mounts the host directory /mnt/dockervolume at /usr/share/nginx/html inside the container.  The -d runs the container detached, in the background, until it is explicitly stopped.  The -p maps container port 80 to port 80 on the host.   So you will be able to access the Nginx Web Server on port 80 from your browser.  Lastly, nginx is the Docker image to use for container creation.  It resides on a local Docker Registry we created on 192.168.120.20 port 5000.     Extra Credit:   From the CLI, Execute docker ps and you will see the Docker Registry we are using.

 

 

Verify That Our New Webserver Reflects Our Changes

 

You should see the New Nginx homepage on the IP of lab-vm2.

1.          Open one of the Web Browsers on the desktop

2.          Enter the IP address of lab-vm2.  The default http port is 80, so you do not need to enter it.  You should see the modified Nginx homepage.

 

 

Clean Up VMs

 

Our lab resources are very constrained.  In order to complete Module 3, you will need to delete the two VMs you created in this part of the lab.

1.           To find the VMs you are going to delete, Execute:  

photon vm list

Note the UUIDs of the two VMs.

2.           Execute:

photon vm stop UUID of lab-vm2

3.           Execute:

photon vm detach-disk UUID of lab-vm2 --disk UUID of disk

4.           Execute:

photon vm delete UUID of lab-vm2

5.           Repeat steps 2 and 4 for lab-vm1

 

Monitor and Troubleshoot Photon Platform


Photon Platform can be configured to push logs to any syslog server endpoint.  We have configured this deployment for LogInsight.  You will troubleshoot a failure in VM deployment using LogInsight and will monitor your infrastructure through integration with Graphite and Grafana.


 

Enabling Statistics and Log Collection

 

Photon platform provides the capability to push log files to any Syslog server.  Infrastructure statistics can also be captured and pushed to a monitoring endpoint.  Both of these are enabled during control plane deployment.  In this example, we are pushing statistics to a Graphite server and then using a visualization tool called Grafana to provide some nicer graphs.   Our Syslog server in this lab is LogInsight

 

 

Monitoring Photon Platform With Graphite Server

 

Let's start by seeing what statistics are available from Photon.  In this Pre-GA version we are primarily capturing ESXi performance statistics, but will enhance this over time.

1.          Connect to the Graphite Server by opening a browser

2.          Select the Graphite Browser Bookmark from the Toolbar.

 

 

Expand To View Available Metrics

 

Expand the Metrics folder and then select the Photon Folder.  You can see Two ESXi Hosts and statistics for CPU, Memory, Storage and Networking.  

1.          Expand cpu and select usage

2.          Expand mem and select usage

 

If you do not see any data, this is because the photon controller agent plugin on your hosts did not start correctly when the lab deployed.  Perform the following step only if no data is displayed in Graphite.

 

 

 

No Performance Data in Graphite

 

If you saw performance data in Graphite, then skip to step "View Graphite Data Through Grafana"  

You will ssh into our two ESXi hosts and restart the photon controller agent process.  If you are seeing performance data from only one host, then only restart that host's agent.

1.          Log in to the PhotonControllerCLI through Putty.

2.          From the PhotonControllerCLI, Execute:  

ssh root@192.168.110.201   password is VMware1!

3.           Execute:  

/etc/init.d/photon-controller-agent restart

4.           Execute:  

exit

5.          Repeat steps 2-4 for host 192.168.110.202

It will take a couple of minutes for the stats to begin showing up in the browser.  You may need to refresh the page.  You may also want to jump to the LogInsight Section of the lab and come back here if you don't want to wait for the stats to collect.

 

 

View Graphite Data Through Grafana

 

Graphite can also act as a sink for other visualization tools.  In this case, we will take the data from Graphite and create a couple of charts in Grafana.

1.           From your browser, Select the Grafana Bookmark from the toolbar.

 

 

Graphite Data Source For Grafana

 

We have previously set up Graphite as the source for Data used by Grafana.  To see this setup

1.          Click on Data Sources.   We simply pointed to our Graphite Server Endpoint.

 

 

Create Grafana Dashboard

 

Grafana has the capability to create a lot of interesting graphics.  That is beyond the scope of this lab, but feel free to play and create whatever you want.  We will create a simple Dashboard to show CPU and Mem metrics that we viewed previously in Graphite.

1.          Click on Dashboards

2.          Click on Home

3.          Click on New

 

 

Add A Panel

 

1.           Select the Green tab

2.           Add Panel

3.           Graph

 

 

Open Metrics Panel

 

This is not intuitive, but you must click where it says "Click Here" and then click Edit to add metrics.

 

 

Add Metrics To Panel

 

1.          Select "Select Metrics"  and select photon.

2.          Select "Select Metrics" again and select one of the esxi hosts (This is the same Hierarchy you saw in Graphite).  Continue selecting until your metrics look like this.

This is a pretty straightforward way to monitor performance of Photon Platform resources.

 

 

Troubleshooting Photon Platform With LogInsight

 

We will try to create a VM that needs more resource than is available in our environment.  The create task will error out.  Rather than search through individual log files, we will use LogInsight to see more information.

1.          Execute the following command:  

photon vm create --name lab-vm1 --flavor cluster-master-vm --disks "disk-1 cluster-vm-disk boot=true" -w UUID of your Network  -i UUID of your PhotonOS image

The cluster-master-vm flavor will try to create a VM with 8 GB of memory.  We do not have that much available on our Cloud hosts, so it will fail.  The error message here tells us the problem, but we want to walk through the process of getting more detail from the logs.

2.          Note the Task ID from the Create command.  We are going to use that in a LogInsight Query.

 

 

Connect To Loginsight

 

1.          From Your browser, select the LogInsight Bookmark from the toolbar and login as User: admin  password:  VMware1!

 

 

Query For The Create Task

 

Once you Login, you will see the Dashboard screen

1.          Click on Interactive Analytics

2.          Paste the Task ID into Filter Field

3.          Change the Time Range to Last Hour of Data

4.          Click the Search Icon

You can look through these task results to find an error.  More interesting is looking through RequestIDs

5.          In Photon Platform, every Request through the API gets a requestID.  There could be many ReqIDs that are relevant to a task.  It takes a little work to see the right entries to drill into.  For instance, this entry shows an error, but the RequestID is related to querying the CloudStore for the Task.  So you see the Create VM task itself was in error, but the RequestID is for a request that was successful (querying the task info).  So we need to scroll for a more interesting request.

 

 

Browse The Logs For Interesting Task Error, Then Find RequestID

 

1.          Scroll down in the Log and look for RESERVE_RESOURCE.  

2.          Find the RequestID and Paste it into the Filter Field

Your log files will be slightly different, but you should see something similar.

 

 

Search The RequestID For RESERVE_RESOURCE

 

Once you click on the Search Icon, you will see log hits for that RequestID.  These are actual requests made by the Photon Controller Agent Running on the ESXi hosts.  In this case the Agent Request Errors were surfaced to the task level, so there isn't a lot of additional information, but that is not always true.  In many instances, the requestID will provide new data to root cause the initial Task Failure.  This is especially useful as the scale of your system grows.

 

Conclusion


The operational model for Cloud Native infrastructure is dramatically different from traditional Platform 2 environments.  The expectation is that the control plane will be highly scalable, supporting both large numbers of physical hosts and high-churn, transient workloads.   The application frameworks handle application provisioning and availability, removing that requirement from the infrastructure.  The applications are very dynamic, and infrastructure must be consumable through programmatic methods rather than traditional Admin Interfaces.  In this module, you have been introduced to Photon Platform Multi-tenancy and its associated model for managing resources at scale.  You have also seen the API, consumed in this instance through the Command Line Interface.  You have also seen how storage persistence in the infrastructure can add value to Microservice applications that take advantage of Docker containers.  Finally, you have been exposed to monitoring and troubleshooting of this distributed environment.


Module 3 - Container Orchestration Frameworks with Photon Platform (45 minutes)

Introduction


This module provides an introduction to the operational model for developers of cloud native applications.  Deploying containers at scale will not be done through individual Docker run commands (as seen in the previous module), but through the use of higher level frameworks that provide orchestration of the entire application. Orchestration could include application deployment, restart on failure, as well as scaling application instances up and down. In this module you will focus on container frameworks that manage microservice applications running on Photon Platform.  You will build and deploy a simple web application using open-source Kubernetes and Docker.   You will also see how orchestration at scale can be administered through a tool like Rancher.

1)       Container Orchestration With Kubernetes on Photon Platform.    

         We have provided a small Kubernetes cluster, deployed on Photon Platform.  You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not have you create it as part of the lab.   You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes.  You will verify that multiple instances have been deployed and see how to scale additional instances.  You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.

2)      Container Orchestration with Rancher on Photon Platform

        Rancher is another open-source container management platform.   You will see how the Rancher UI allows you to provision Docker-Machine nodes on Photon Platform, and you will then deploy an Nginx Webserver onto the Docker hosts.   Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


Container Orchestration With Kubernetes on Photon Platform


We have provided a small Kubernetes cluster, deployed on Photon Platform.  You will see the process for deploying open-source Kubernetes on Photon Platform, but due to timing and resource constraints in the lab, we could not have you create it as part of the lab.   You will deploy the Nginx Webserver application (manually deployed in Module 2) via Kubernetes.  You will verify that multiple instances have been deployed and see how to scale additional instances.  You will kill an instance of the webserver and see that Kubernetes detects the failure and restarts a new container for you.  You will also troubleshoot the outage via LogInsight.


 

Kubernetes Deployment On Photon Platform

 

Photon Platform provides two methods for deploying Kubernetes Clusters.  The first method is an opinionated deployment where we have pre-defined all of the elements of the deployment.  We will briefly look at the CLI commands to support this.

1) From the Windows Desktop, log in to the PhotonControllerCLI VM.   SSH key login has been enabled, but if you have a problem, the password is vmware.

 

 

Photon Cluster Create Command

 

The CLI supports a Cluster Create command.  This command allows you to specify the cluster type (Kubernetes, Mesos, and Swarm are currently supported) and the size of the cluster.  You will also provide additional IP configuration information.   Photon Platform will create the Master and Worker node VMs, configure the services (Kubernetes in this example), set up the internal networking, and provide a running environment with a single command.  We are not going to use this method in the lab.  If you try to create a cluster, you will get an error because there are not enough resources available to create more VMs.

Example: photon cluster create -n Kube5 -k KUBERNETES --dns "dns-Server" --gateway "Gateway" --netmask "Netmask" --master-ip "KubeMasterIP" --container-network "KubernetesContainerNetwork" --etcd1 "StaticIP" -w "uuid demo network" -s 5

With this command we are creating a cluster called Kube5 of type Kubernetes.  We are specifying the networking configuration for the Kubernetes Master VM and a separate etcd VM (etcd is a backing datastore that holds networking information used by Flannel, which provides the internal Kubernetes container network).  The Worker node VMs will receive IPs from DHCP.  You specify the network on which to place these VMs through the -w option, and -s is the number of Worker nodes in the cluster.  The Kubernetes container network is a private network that is used by Flannel to connect containers within the cluster.

1.          To see the command syntax, Execute:  

photon cluster create -h 
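
The cluster sub-command can also be used to inspect clusters that already exist (such as the one provided in this lab).  The sub-command names below are assumptions and may differ between CLI builds; photon cluster -h will show what your version actually supports:

photon cluster -h                  # list the available cluster sub-commands
photon cluster list                # assumed: list clusters in the current project
photon cluster show "ClusterID"    # assumed: show details for a single cluster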

 

 

Kube-Up On Photon Platform

 

You just saw the Photon Cluster Create command.  This is an easy way to get a cluster up and running very quickly, and it also provides the capability to scale the cluster up as needed.  That is great for a large number of use cases, but you probably noticed that there is no way to customize it beyond the parameters provided in the command line.  What if you want a different version of Kubernetes or Docker within the VMs?  How about replacing Flannel with NSX for networking, or using a different Operating System in the Nodes?  These are not easily done with Cluster Create at this point.  We have provided a second option for creating the cluster: we have modified open source Kubernetes directly to support Photon Platform.

Your process for deploying the cluster is to clone the Kubernetes repo from GitHub, build it, and run the kube-up command while passing in the environment variable that tells it to use our deployment scripts.  This gives you complete freedom to configure the cluster however you want.
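
As a rough sketch only, that workflow looks something like the commands below.  The repository location, build target, and provider variable name are assumptions based on the open source Kubernetes tooling of this era and may differ from what was used to build this lab:

git clone https://github.com/kubernetes/kubernetes.git ~/kubernetes
cd ~/kubernetes
make quick-release                                           # build Kubernetes from source
KUBERNETES_PROVIDER=photon-controller ./cluster/kube-up.sh   # bring the cluster up using the Photon Platform scripts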

 

 

Our Lab Kubernetes Cluster Details

 

We have created a Kubernetes cluster with one Master and 2 Worker nodes.  You are welcome to take a look at the configuration files in ~/kubernetes/cluster/photon-controller/ and look through the config-default and config-common files to see how some of the configuration is done.
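
For example, you could browse the provider scripts and configuration like this (the wildcard is used because the exact file extensions may vary):

ls ~/kubernetes/cluster/photon-controller/
cat ~/kubernetes/cluster/photon-controller/config-default*
cat ~/kubernetes/cluster/photon-controller/config-common*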

1.          Let's take a look at the VMs that make up our cluster.  Execute:  

photon tenant set kube-tenant       

This points to the kube tenant that we created for our cluster.  For details on tenants and projects, return to Module 1.

2.          To set our kube project, Execute:  

photon project set kube-project

3.         To see our VMs, Execute:  

photon vm list

You can see that our cluster consists of one Master VM and 2 Worker VMs.   Kubernetes will create Pods that are deployed as Docker containers within the Worker VMs.

 

 

 

Basic Introduction To Kubernetes Application Components

 

Before we deploy the app, let's get a little familiarity with Kubernetes concepts.  This is not meant to be a Kubernetes tutorial, but to get you familiar with the pieces of our application.  A Node corresponds to one of the Worker nodes in our Kubernetes cluster.

Kubernetes has a basic unit of work called a Pod.  A Pod is a group of related containers that will be deployed to a single Node.  You can generally think of a Pod as the set of containers that make up an application.  You can also define a Service that acts as a Load Balancer across a set of containers.  Lastly, Replication Controllers facilitate replicated Pods and are responsible for maintaining the desired number of copies of a particular Pod.  In our application, you will deploy 3 replicated copies of the Nginx Webserver, with a frontend Service.  The command line utility for managing Kubernetes is called kubectl.  Let's start by looking at the nodes.

1.          From the CLI VM, Execute:  

kubectl get nodes

You will see the two worker nodes associated with our cluster.  This is slightly different from seeing the VMs that the nodes run on as you did previously.
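
If you want more detail about a node, kubectl can describe it.  The node name below is just a placeholder; substitute one of the names returned by kubectl get nodes:

kubectl describe node "NodeName"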

 

 

 

Deploying An Application On Kubernetes Cluster

 

Our application is defined through 3 YAML files: one each for the Pod, the Replication Controller, and the Service.  These files provide the configuration Kubernetes uses to deploy and maintain the application.

To look at these configuration files:

1.          Execute:

cat ~/demo-nginx/nginx-pod.yaml

2.          Execute:

cat ~/demo-nginx/nginx-service.yaml

3.          Execute:

cat ~/demo-nginx/nginx-rc.yaml

 

 

 

Kubectl To Deploy The App

 

We are now going to deploy the application.  From the CLI VM:

1.          To deploy the pod, Execute:  

kubectl create -f ~/demo-nginx/nginx-pod.yaml

2.          To deploy the service, Execute:  

kubectl create -f ~/demo-nginx/nginx-service.yaml

3.          To deploy the Replication Controller, Execute:  

kubectl create -f ~/demo-nginx/nginx-rc.yaml
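
After the three objects are created, you can also verify them from the command line.  The Replication Controller name nginx-demo used in the scale example is an assumption based on the application name shown later in the Kubernetes UI; check the output of kubectl get rc for the real name before scaling:

kubectl get pods                            # the replicated Nginx Pods
kubectl get rc                              # the Replication Controller and its replica count
kubectl get services                        # the frontend Service and its exposed port
kubectl scale rc nginx-demo --replicas=4    # example only: scale the webserver to 4 replicas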

 

 

Kubernetes UI Shows Our Running Application

 

After you have deployed your application, you can view it through the Kubernetes UI

1.          Open your web browser and enter https://192.168.100.175/ui   If you are prompted for a username and password, they are admin/4HjyqnFZK4tntbUZ (sorry about the randomly generated password).  You may get an invalid certificate authority error; click on Advanced and proceed to the site.

nginx-demo is your application

2.          Note the port number for the External endpoint.  We will use it in a couple of steps

 

 

Application Details

 

1.          Click on the 3 dots and select "View Details" to see what you have deployed.

 

 

Your Running Pods

 

You can see the Replication Controller is maintaining 3 Replicas.  They each have their own internal IP and are running on the 2 Nodes.  3 Replicas is not particularly useful given that we have only 2 Nodes, but the concept is valid.  Explore the logs if you are interested.
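
The module introduction mentioned killing a webserver instance and letting Kubernetes replace it.  If you would like to try that here, a minimal sketch looks like this (the Pod name is a placeholder; pick one from the kubectl get pods output):

kubectl get pods                  # note the name of one of the Nginx Pods
kubectl delete pod "PodName"      # kill that instance
kubectl get pods                  # the Replication Controller starts a replacement Pod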

We can connect to the application directly through the Node IP and the port number we saw earlier.

 

 

Connect To Your Application Web Page

 

Now let's see what our application does.   We will choose one of the node IP addresses with the port number shown earlier to see our nginx webserver homepage.  It's just a simple dump of the application configuration info.

1.          From your browser, connect to:  http://192.168.100.176:portnumber      Note that your port number may be different from the port number in the lab manual; the IP will be the same.

 

Container Orchestration With Docker Machine Using Rancher on Photon Platform


Rancher is another open source container management platform.   You will use the Rancher UI to provision Docker Machine nodes on Photon Platform and deploy a microservice application onto the newly created Docker hosts.  Rancher provides that higher level container orchestration and takes advantage of the resource and tenant isolation provided by the underlying Photon Platform.


 

Log In To The PhotonControllerCLI VM

 

  1. Open Putty from the desktop and click on the PhotonControllerCLI link
  2. Click on Open

 

 

Deploy Rancher Server

 

You will first deploy a new version of the Rancher Server container into our environment.  Before that, you need to stop the existing container.

  1. Execute:   docker ps | grep rancher/server   to see the running container.  Find the Container ID of the rancher/server container; that is the one we want to stop.
  2. Execute:   docker kill "ContainerID"     This will stop the existing Rancher Server container.
  3. Execute:   !885     This will execute command number 885 stored in Linux history.  It will create a new Rancher Server container.

Note that your new container is tagged with 192.168.120.20:5000.  This is the local Docker Registry that is used to serve our lab's images.
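
The exact command stored at history entry 885 is specific to this lab, but for context, a typical Rancher Server launch from the local registry would look roughly like the line below.  The restart policy and image tag here are assumptions; rely on the history entry rather than this sketch:

docker run -d --restart=always -p 8080:8080 192.168.120.20:5000/rancher/server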

 

 

Clean Up Rancher Host

 

The VM that we will use as a Rancher Host (more explanation below) needs to have a few files removed prior to deploying the Rancher Agent.

  1. Execute:  ssh root@192.168.100.201    The password is:  vmware
  2. Execute: rm -rf /var/lib/rancher/state
  3. Execute: docker rm -vf rancher-agent
  4. Execute: docker rm -vf rancher-agent-state

 

 

Connect To Rancher UI

 

Now we can add a Rancher Host.  The Rancher Server is running in a container on 192.168.120.20, and you can connect to it from your browser at https://192.168.120.20:8080.  Rancher Hosts are VMs running Docker; this is where application containers are deployed, much like the Kubernetes Worker nodes you saw in the previous section.  We will first add a Rancher Host.  The host is a VM that we previously created for you.

1.          From your browser, connect to https://192.168.120.20:8080 and then click Add Host

2.          If you get this page, just click Save

 

 

Add Rancher Host

 

Rancher has several options for adding hosts.  There are a couple of direct drivers for cloud platforms, as well as machine drivers supported through Docker Machine plugins. There is a Docker Machine Plugin for Photon Controller available.  In this lab we are using the Custom option to show you how to manually install the Rancher Agent on your Host VM and see it register with Rancher Server.

  1. Note that the Custom icon is selected
  2. Copy the pre-formed Docker run command by dragging the mouse over the command and pressing Ctrl-C, or click the "Copy to Clipboard" icon at the right of the box.

 

 

Paste In The Docker Run Command To Start Rancher Agent

 

Go back to the Putty session.  You should still be connected to your Rancher Host VM.  You will now paste in the Docker run command you captured from the Rancher UI.   Either use Ctrl-V or right-click the mouse to paste the clipboard onto the command line.  Note:  You must copy/paste the command from the Rancher UI and not use the command in the image.  The registration details are specific to your host.

  1. Execute:  Either right-click the mouse or press Ctrl-V to paste the command, then hit Return
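
For reference only, the pasted command generally takes a form like the sketch below.  The agent version and the RegistrationToken placeholder are assumptions, so always use the exact command generated by your Rancher UI:

sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.0.2 https://192.168.120.20:8080/v1/scripts/RegistrationToken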

 

 

View the Agent Container

 

To view your running container:

  1. Execute:  docker ps

 

 

Verify New Host Has Been Added

 

To view your new host, return to the Rancher UI in your browser

  1. Click the Close button
  2. Click on Infrastructure and Hosts
  3. This is your host

 

 

Deploy Nginx Webserver

 

To deploy our application, we are going to create an Nginx Container Service.   Services in Rancher can be a group of containers, but in this case we will be deploying a single container application.

1.          Click on Containers

2.          Click on Add Container

 

 

Configure Container Info

 

We need to define the container we want to deploy.

1.          Enter a Name for your container

2.          Specify the Docker Image that you will run.  This image is in a local Registry, so the name is the IP:port/image-name.    Enter 192.168.120.20:5000/nginx

3.          This image is already cached locally on this VM, so uncheck the box to Pull the latest image

4.          We now want to map the container port to the host port that will be used to access the Webserver.  Nginx by default is listening on Port 80.  We will map it to Host port 2000.  Note that you might have to click on the + Portmap sign to see these fields.

5.           Click on Create Button

 

It may take a minute or so for the container to come up.   It's possible the screen will not update, so try holding the Shift key while clicking Reload on the browser page.
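
As an aside, the port mapping you configured in step 4 is conceptually the same as publishing a port with a plain Docker run.  If you were starting this container by hand on the host, it would look roughly like this:

docker run -d -p 2000:80 192.168.120.20:5000/nginx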

 

 

Container Information

 

1.          Once your container is running, check out the performance charts

2.          Note that you can see the container status and its internal IP address.  This is a Rancher-managed network that containers communicate on.

 

 

Open Your Webserver

 

From your browser, enter the IP address of the Rancher Host VM and the port you mapped.

1.            From your Internet Browser, enter 192.168.100.201:2000 to view the default Nginx webpage
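
If you prefer the command line, you can also check the webserver from your Putty session.  This assumes curl is available on the VM you are logged in to:

curl http://192.168.100.201:2000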

 

 

Rancher Catalogs

 

Rancher also provides the capability to deploy multi-container applications in catalogs that are provided directly by the application vendors.  Browse through some of the available applications.   You will not be able to deploy them because the lab does not have an external internet connection.

 

Conclusion


This module provided an introduction to the operational model for developers of cloud native applications.  Deploying containers at scale will not be done through individual Docker run commands, but through the use of higher level frameworks that provide orchestration of the entire application.  

You have seen two examples of application frameworks that can be used to deploy and manage containers at scale.  You have also seen that Photon Platform provides a scalable underpinning to these frameworks.

 


Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-1730-USE-2

Version: 20161024-114615