VMware Hands-on Labs - HOL-2113-01-SDC


Lab Overview - HOL-2113-01-SDC - vSphere with Tanzu

Lab Guidance


Hands-on Labs allows you to evaluate the features and functionality of VMware products with no installation required. This lab is self-paced, and most modules are independent of each other. You can use the Table of Contents located in the upper right-hand corner to access any module.

If you are new to the VMware Learning Platform (VLP), please read the New User Guide located in the appendix. Click below to go directly to the new user console walkthrough before continuing:

Lab Module List:

  • Module 1 - Introduction to vSphere with Tanzu (15 minutes)
  • Module 2 - Managing vSphere with Tanzu (30 minutes)
  • Module 3 - Working with Tanzu Kubernetes Clusters (30 minutes)

Lab Captains:

  • Jose Manzaneque, Senior Systems Engineer, ES
  • Peter Kieren, Senior Solutions Engineer, CA
  • Bob Bauer, Staff Systems Engineer, US

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


Module 1 - Introduction to vSphere with Tanzu (15 minutes)

Introduction


This lab is an overview of the Kubernetes capability in vSphere with Tanzu. After completing Module 1 you should have a basic understanding of the vSphere components that support Kubernetes functionality as well as how to enable Kubernetes on a vSphere Cluster. The remaining modules will cover managing vSphere with Tanzu and working with Tanzu Kubernetes clusters.

Modules 1 and 2 cover Application-Focused Management, which is driven by the IT Operator.  This workflow focuses on the tasks IT Operators perform to enable Kubernetes in vSphere with Tanzu, as well as on creating and managing the new vSphere objects used to provision, view, and manage Kubernetes consumption.

This module will highlight:

  • vSphere with Tanzu Introduction
  • VMware vSphere with Tanzu Services
  • vSphere with Tanzu Components
  • Enabling vSphere with Tanzu

vSphere with Tanzu Introduction


Common Platform for Running both Kubernetes/Containerized Workloads and VMs

Kubernetes is now built into vSphere with Tanzu which allows developers to continue using the same industry-standard tools and interfaces they've been using to create modern applications. vSphere Admins also benefit because they can help manage the Kubernetes infrastructure using the same tools and skills they have developed around vSphere. To help bridge these two worlds we've introduced a new vSphere construct called Namespaces, allowing vSphere Admins to create a logical set of resources, permissions, and policies that enable an application-centric approach.


 

What is vSphere with Tanzu?

VMware vSphere with Tanzu delivers developer-ready infrastructure and application-focused management for streamlined development, agile operations, and accelerated innovation. It’s a flexible environment for modern applications that are built from microservices and run across heterogeneous environments.

 

With vSphere with Tanzu, VMware delivers the embedded Tanzu Kubernetes Grid Service for fully compliant and conformant Kubernetes capabilities for containerized applications. This approach provides Kubernetes APIs to developers, enabling CI/CD (continuous integration / continuous delivery) processes across a global infrastructure including on-premises data centers, hyperscalers, and Managed Service Provider (MSP) infrastructure. It unites the data center and the cloud with an integrated cloud operating model. Enterprises can now increase the productivity of developers and operators, enabling faster time-to-innovation combined with security, stability, and governance, while avoiding the cost proliferation that comes from running multiple stacks of IT infrastructure or cloud services.

 

 

Streamlined Development of Kubernetes Applications

vSphere with Tanzu enables the DevOps model with infrastructure access for developers through Kubernetes APIs. It includes the Tanzu Kubernetes Grid Service, which is VMware's compliant and conformant Kubernetes implementation for building modern containerized applications. In addition, the vSphere Pod Service complements the Tanzu Kubernetes Grid Service for application container instances that require VM-like isolation, with the improved performance and security of a solution built into the hypervisor.

 

 

Agile Operations for Kubernetes

We are introducing a lot of value in vSphere with Tanzu for the VI admin. We deliver a new way to manage infrastructure, called application-focused management. This enables VI admins to organize multiple objects into a logical group and then apply policies to the entire group. For example, an administrator can apply security policies and storage limits to a group of virtual machines and Kubernetes clusters that represent an application, rather than to all of the VMs and clusters individually.

 

 

vSphere with Tanzu Delivers Essential Services for Hybrid Cloud

VMware solved the challenges faced by traditional apps across heterogeneous architectures with the introduction of VMware vSphere. With vSphere 7, we’re delivering the essential services for the modern hybrid cloud. The VMware hyperconverged infrastructure (HCI) stack combines compute, storage, and networking with unified management, and vSphere with Tanzu powers the innovation behind developer-ready infrastructure. With the new Kubernetes and RESTful API surface, developers can streamline their work and IT administrators can improve productivity using application-focused management.

 

 

Technical Overview of vSphere with Tanzu (4:25)

 

VMware vSphere with Tanzu Services


vSphere with Tanzu Services is a new, integrated Kubernetes and RESTful API surface that provides API access to all core services.


 

VMware vSphere with Tanzu Services Explained

VMware vSphere with Tanzu consists of two families of services: Tanzu Runtime Services and Infrastructure Services.

  • Tanzu Runtime Services deliver core Kubernetes development services, including an up-to-date distribution of Tanzu Kubernetes Grid.
  • Infrastructure Services include full Kubernetes and RESTful API access that spans creating and manipulating virtual machines, containers, storage, networking, and other core capabilities.

 

* vSphere Pod Service and Registry Service require NSX-T

 

 

Tanzu Kubernetes Grid Service

A Tanzu Kubernetes Grid (TKG) cluster is a Kubernetes (K8s) cluster that runs inside virtual machines on the Supervisor layer and not on vSphere Pods. It is enabled via the Tanzu Kubernetes Grid Service for vSphere. Since a TKG cluster is fully upstream-compliant with open-source Kubernetes, it is guaranteed to work with all your K8s applications and tools. Tanzu Kubernetes clusters are the primary way that customers will deploy Kubernetes-based applications.

TKG clusters in vSphere use the open-source Cluster API project for lifecycle management, which allows developers and operators to manage the lifecycle (create, scale, upgrade, destroy) of conformant Kubernetes clusters using the same declarative, Kubernetes-style API that is used to deploy applications on Kubernetes.
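For example, scaling a cluster is just a matter of changing the desired state and re-applying it. A minimal sketch, assuming the tkg-cluster-01 cluster and demo-app-01 namespace used later in this lab (the full manifest appears in Module 3):

# From the Supervisor Cluster context, edit the worker count in the cluster
# manifest (spec.topology.workers.count), then re-apply the file:
kubectl apply -f tkg-cluster-01.yaml

# The Tanzu Kubernetes Grid Service reconciles the change and adds or removes worker VMs.
kubectl get tanzukubernetescluster tkg-cluster-01 -n demo-app-01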

 

 

vSphere Pod Service*

The vSphere Pod Service is a service that runs on a VMware managed Kubernetes control plane over your ESXi cluster. It allows you to run native Kubernetes workloads directly on ESXi. The ESXi hosts become the Kubernetes Nodes and vSphere Pods are the components that run the app workloads.

The vSphere Pod Service provides a purpose-built lightweight Linux kernel that is responsible for running containers inside the guest. It provides the Linux Application Binary Interface (ABI) necessary to run Linux applications. Since this Linux kernel is provided by the hypervisor, VMware has been able to make numerous optimizations to boost its performance and efficiency.  When users need the security and performance isolation of a VM, orchestrated as a set of containers through Kubernetes, they should use the vSphere Pod Service.

* vSphere Pod Service requires NSX-T
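From the developer's point of view, a vSphere Pod is requested with an ordinary Kubernetes manifest; nothing ESXi-specific appears in the spec. A minimal sketch, assuming an NSX-T based Supervisor Cluster namespace and using the lab's Harbor image (the pod and file names here are only examples):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-native                        # example name; runs as a vSphere Pod on ESXi
spec:
  containers:
  - name: nginx
    image: harbor.corp.local/library/nginx  # image from the lab's Harbor registry

# Applied from the Supervisor namespace context:
# kubectl apply -f nginx-native-pod.yaml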

 

 

Registry Service*

The Registry Service allows developers to store, manage and better secure Docker and OCI (Open Container Initiative) images using Harbor as a private registry. The lifecycle of projects and members of the private image registry is automatically managed and is linked to the lifecycle of namespaces and user or group permissions in namespaces created in vCenter.

* Registry Service requires NSX-T

 

 

Storage Service

The Storage Service exposes vSphere or vSAN based storage and provides developers and operators the capability to manage persistent disks for use with containers, Kubernetes clusters and virtual machines.
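In practice, a developer requests storage with a standard PersistentVolumeClaim that references a vSphere Storage Policy exposed as a Kubernetes StorageClass. A minimal sketch, assuming the kubernetes storage class used elsewhere in this lab (the claim name is only an example):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim                  # example name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: kubernetes      # backed by the vSphere Storage Policy of the same name
  resources:
    requests:
      storage: 2Gi                  # provisioned on vSphere/vSAN datastores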

 

 

Network Service

The Network Service abstracts the underlying virtual networking infrastructure from the Kubernetes environment.  It can be implemented using VMware vSphere Networking or NSX-T.  It provides network services for Supervisor Cluster control plane VMs as well as TKG cluster control plane and worker nodes.  If NSX-T is leveraged there are additional capabilities around Kubernetes service load balancing and network security.

 

Hands-on Labs Interactive Simulation: Enabling Kubernetes Supervisor Cluster in vSphere with Tanzu


vSphere with Tanzu Supervisor Cluster

IT Operators enable the Supervisor Kubernetes Cluster on vSphere clusters through a simple wizard in the vSphere Client. The Supervisor cluster provides the Kubernetes backbone onto which we have built services that can be consumed by both Operators and DevOps teams.

 


 

Hands-on Labs - Enabling Kubernetes Supervisor Cluster

In this simulation we will enable the vSphere with Tanzu Supervisor Cluster using the built-in VMware vSphere Networking feature.  Using NSX-T for the Supervisor Cluster is not covered in this simulation

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment

  1. Click here to open the interactive simulation. It will open in a new browser window or tab.
  2. When finished, click the “Return to the lab” link to continue with this lab.

The lab continues to run in the background. If the lab goes into standby mode, you can resume it after completing the module.

 

Conclusion


In this module, you were able to gain a basic understanding of the vSphere components that support Kubernetes functionality as well as how to enable Kubernetes on a vSphere Cluster.


 

You have finished Module 1

 

 

How to End Lab

 

If this is the last module you would like to take in this lab, you may end your lab by clicking on the END button.

 

Module 2 - Managing vSphere with Tanzu (30 minutes)

The Supervisor Cluster


In Module 1, the Hands-on Labs Interactive Simulation guided you through enabling a Supervisor cluster on a vSphere with Tanzu cluster. As part of that operation, several new objects were created.

During the enablement process, three Supervisor Control Plane VMs were created. They act as the Kubernetes API server and etcd hosts, and function as the Kubernetes control plane for the vSphere cluster.

As the IT Operator, you can view all of the objects created during the process from vCenter. In this Lab, we will inspect the components of the Supervisor cluster.


 

Supervisor Cluster Components

 

 

 

Open Chrome Browser from Windows Quick Launch Task Bar

 

1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Log into vCenter

 

Log into Region A vCenter

  1. Click on the Region A Folder in the Bookmark toolbar.
  2. Click on vcsa-01a Web Client link in the Bookmark toolbar
  3. Click the Login Button. The credentials should be saved for you.  

If credentials aren't saved, use the following

  • username: administrator@corp.local
  • password: VMware1!

 

 

Expand RegionA01-COMP01 Cluster

 

  1. Click on the arrow next to the RegionA01-COMP01 cluster to expand.

 

 

Expand the Namespaces Object

 

  1. Click on the arrow next to the Namespaces object under RegionA01-COMP01 Cluster.
  2. You will see the three (3) Supervisor cluster control plane VMs.

 

 

Supervisor Cluster Control Plane VM

 

  1. Click on SupervisorControlPlaneVM(1)
  2. Network Adapter 1 is the Management Network and supports traffic to vCenter.  This network is connected to a vDS  port group.
  3. Network Adapter 2 supports traffic to the Kubernetes API and the Pods/Services that are deployed on the Supervisor cluster.
  4. This VM has Four (4) IP Addresses. One IPv4 / IPv6 address each for Management and Kubernetes API.

 

 

Etcd Leader Supervisor Control Plane VM

 

  1. Click SupervisorControlPlaneVM (3).
  2. This VM should show Five (5) IP Addresses.  NOTE: If SupervisorControlPlaneVM(3) doesn't have 5 IPs then check SupervisorControlPlaneVM (2).
  3. The extra IP address in the Management network (192.168.120.x) signifies this is the etcd Leader node for the Supervisor Cluster.  This IP is assigned via an election process and can move depending on VM state.

 

Supervisor Cluster Namespaces


Namespaces allow IT Operators to give a project, team or customer capacity to deploy vSphere Pods and Tanzu Kubernetes Grid (TKG) clusters while having control over access and resource utilization.  IT Operators have full visibility into resource consumption, permissions and status of the namespace as well as the vSphere Pods and TKG clusters and VMs running inside.

This allows you to deliver self-service Kubernetes operations inside the namespace to your development teams while retaining the control and visibility needed.


 

Expand demo-app-01 Namespace

 

  1. Select the demo-app-01 namespace
  2. Click the arrow next to demo-app-01 namespace.
  3. Inside the demo-app-01 namespace there is a tkg-cluster (tkg-cluster-01) deployed.
  4. Inside the demo-app-01 namespace there are vSphere Pods running (tkg-cluster).

If you need a refresher on TKG Clusters and vSphere Pods, reference the VMware vSphere with Tanzu Services section of this lab.

 

 

demo-app-01 namespace summary tab

 

  1. With the demo-app-01 namespace object selected in the vCenter Host and Cluster view,  Click the  Summary Tab in the right vCenter pane.
  2. Status shows the state of the Supervisor Cluster.
  3. Permissions show which users/groups have access to this namespace and what role they are assigned.
  4. Storage shows which storage policies are assigned to the namespace and the number of persistent volume claims.
  5. Capacity and Usage show resources being used by vSphere Pods and Tanzu Kubernetes Grid  (TKG) clusters.
  6. Tanzu Kubernetes shows the number and state of TKG clusters running in the namespace.

 

 

demo-app-01 namespace Monitor tab

 

  1. Click Monitor tab.
  2. Click Kubernetes to view Kubernetes events from the past hour.
  3. Click vCenter to view vCenter events related to this namespace.

Note: Kubernetes events may be blank because there may not have been any Kubernetes activity in the past 1 hour

 

 

demo-app-01 namespace Configure tab

 

  1. Click Configure tab.
  2. Click General to view namespace information.

 

  1. Click Resource Limits.
  2. Click Edit to view Resource Limits you can set on demo-app-01 namespace.
  3. Click Cancel to exit Resource Limits screen.

Do not set any Resource Limits in this lab or you may affect your ability to perform steps in other modules

 

 

DEMO-APP-01 Namespace Object Limits

 

  1. Click Object Limits.
  2. Click Edit to view Object Limits you can set on demo-app-01 namespace.
  3. Click Cancel to exit Object Limits screen.

Do not set any Object Limits in this lab or you may affect your ability to perform steps in other modules

 

 

demo-app-01 namespace Permissions tab

 

  1. Click Permissions tab.
  2. Observe the Users/Groups, Roles and Identity Source  fields.

Users can be selected from any Identity  source configured in the vCenter Identity provider

 

 

demo-app-01 namespace Compute tab - VMware Resources

 

  1. Click Tanzu Kubernetes.
  2. This view shows any Tanzu Kubernetes Grid clusters running in the demo-app-01 namespace.
  3. TKG cluster status.
  4. TKG Worker nodes count.
  5. Kubernetes and TKG version of the cluster (Kubernetes 1.18.5).
  6. Control Plane Address for the Kubernetes API for the TKG cluster.

 

 

demo-app-01 namespace Compute tab - Virtual Machines

 

 

 

demo-app-01 namespace Storage Tab

 

  1. Click Storage tab.
  2. Click Storage Policies to view any VM Storage Policies assigned to the demo-app-01 namespace.
  3. Click Config Map, Secrets and Persistent Volume Claims to view these objects.

 

 

demo-app-01 namespace Network tab

 

  1. Click Network.
  2. Click Services and Endpoints to view the associated objects.

When vSphere with Tanzu is installed using vCenter Server networking, load balancing for the Supervisor Cluster and TKG cluster control plane VMs is provided by HAProxy.  The default overlay networking within the Tanzu Kubernetes cluster is provided by Antrea.

 

Creating a Supervisor Cluster Namespace


Namespaces are used in Kubernetes to share resources, set permissions, and isolate applications and teams from each other.  In vSphere with Tanzu, the Supervisor Cluster is a Kubernetes cluster, and Namespaces are used to provide projects and teams with self-service access to deploy resources using Kubernetes.  IT Operators can set RBAC, Storage Policies, Resource Limits and other vSphere features and capabilities on Namespaces.

In this Lab we will cover creating a Supervisor Cluster Namespace in vSphere 7.


 

Open Workload Management

 

  1. Click Menu.
  2. Click Workload Management.

 

 

New Namespace

 

  1. Click Namespaces Tab.
  2. Click New Namespace.

 

 

Create Namespace

 

  1. Click the arrow to expand RegionA01.
  2. Click RegionA01-COMP01.
  3. Name: demo-app-02 (lowercase).
  4. Select network-1 Network for the namespace.
  5. Click Create.

Note: Once the namespace has been created you will see the "Your namespace demo-app-02 has been successfully created" screen.  You can read the suggested next steps to understand how to finish configuring the namespace.  You can click the GOT IT button to dismiss this screen.

 

 

Configure  Namespace - Add Permissions

 

 

  1. Click Add Permissions.  You may need to scroll down to see the Add Permissions option.
  2. Select CORP.LOCAL identity source.
  3. Search for Administrator and select.
  4. Select Can edit for Role.
  5. Click OK.

 

 

Configure Namespace - Add Storage

 

 

  1. Click Add Storage.
  2. Select the kubernetes storage policy.
  3. Click OK.

 

 

Configure Namespace - Edit Limits

 

 

  1. Click Edit Limits.
  2. Enter 5 GHz for CPU.
  3. Enter 4 GB for Memory.
  4. Enter 5 GB for Storage.
  5. Click OK.

For more information on Namespaces, read the Introduction to Kubernetes Namespaces blog.
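The limits you set here surface to developers as standard Kubernetes quota objects inside the namespace, so they can also be inspected with kubectl. A quick check, assuming you are logged in to the Supervisor Cluster as shown in Module 3 (the exact objects you see can vary by release):

kubectl config use-context demo-app-02
kubectl get resourcequota            # reflects the CPU/memory/storage limits set in vCenter
kubectl describe limitrange          # default per-container requests, if any are configured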

 

Content Library for Tanzu Kubernetes Grid Images


The virtual machine images used to deploy Tanzu Kubernetes Grid clusters on vSphere with Tanzu are sourced from a public content delivery network (CDN).  vSphere with Tanzu uses a subscribed Content Library to automatically download and synchronize new versions of the TKG virtual machine image and corresponding Kubernetes releases.


 

Content Libraries for Tanzu Kubernetes Grid

 

  1. Click Menu.
  2. Click Content Libraries.

 

Click Kubernetes

In this lab we are using a local content library.  A typical install would use a subscribed library.

 

 

Content Library Detail

 

  1. Click Templates.
  2. Click OVF & OVA Templates.
  3. We see the OS and Kubernetes Version of Tanzu Kubernetes Grid OVA.

As VMware releases new versions of Tanzu Kubernetes Grid OVAs you will see them appear in a subscribed library.  Developers have the freedom to deploy various versions of Tanzu Kubernetes clusters in their Namespaces based on application needs.
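Developers can also list the available node images from kubectl once they are logged in to the Supervisor Cluster (Module 3 covers the login steps). A quick check; if the resource name differs in your release, the images are still visible in the Content Library as shown above:

kubectl get virtualmachineimages     # TKG node OVAs synchronized from the Content Library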

 

Conclusion


In this module you were able to gain a deeper understanding of the components created when enabling vSphere with Tanzu using vCenter networking, including the Supervisor Cluster control plane VMs, Namespace objects, Tanzu Kubernetes Grid (TKG) clusters, and the Content Library used to source TKG cluster images.  You also created a Supervisor Cluster Namespace and assigned resources and permissions.


 

You have finished Module 2

Proceed to any module below which interests you most.

To review more info on the new features of vSphere 7, please use the links below: 

 

 

How to End Lab

 

If this is the last module you would like to take in this lab, you may end your lab by clicking on the END button.

 

Module 3 - Working with Tanzu Kubernetes Clusters (30 minutes)

Introduction


VMware Tanzu Kubernetes Grid Service for vSphere is a Kubernetes experience that is tightly integrated with vSphere 7.0 and made available through both vSphere with Tanzu and VCF with Tanzu. Tanzu Kubernetes Grid Service runs on Supervisor Clusters in vSphere with Tanzu to create Kubernetes conformant clusters that have been optimized for vSphere.

In this module, we'll explore how to create Tanzu Kubernetes clusters, how to perform basic operations with them and how they interact with Harbor as an image registry.


 

Open Google Chrome

 

1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Log in to vCenter

 

Log into Region A vCenter

  1. Click on the Region A Folder in the Bookmark toolbar.
  2. Click on vcsa-01a Web Client link in the Bookmark toolbar.
  3. Click the Login Button. The credentials should be saved for you.

If credentials aren't saved, use the following

  • username: administrator@corp.local
  • password: VMware1!

 

 

Verify that the Supervisor Cluster is ready

 

  1. Click on Menu.
  2. Click on Workload Management.
  3. Click on Clusters.
  4. The RegionA01-COMP01 should show as Running.

You're now good to start Module 3 - Working with Tanzu Kubernetes Clusters.

 

Tanzu Kubernetes Clusters


Tanzu Kubernetes Grid is a full distribution of the open-source Kubernetes container orchestration platform that is built, signed, and supported by VMware. A Tanzu Kubernetes cluster is a Kubernetes cluster that runs inside virtual machines on the Supervisor Cluster and not on vSphere Pods. It is enabled via the Tanzu Kubernetes Grid Service for vSphere. 


 

Exploring the demo namespace

Tanzu Kubernetes clusters are deployed inside a namespace using the Supervisor Cluster. In our environment, we have a namespace demo-app-01 where we have deployed an application using the vSphere Pod Service and a Tanzu Kubernetes cluster tkg-cluster-01.  

 

To view the cluster tkg-cluster-01

  1. Click the Menu icon at the top.
  2. Click Hosts and Clusters.

 

  1. Inside the demo-app-01 namespace there is a tkg-cluster-01 deployed.

As you can see, the cluster is made up of a single control plane node and three worker nodes.

 

 

Logging into the Supervisor Cluster

As you saw, Tanzu Kubernetes Clusters are deployed in a Supervisor Cluster namespace. Therefore, we'll have to log in to the Supervisor Cluster.

 

 

  1. Click on the PuTTY icon in the taskbar to open a new SSH session.

 

Login using SSH to Linux-01a VM:

  1. Click on linux-01a.corp.local under Saved Sessions.
  2. Click the Load button.
  3. Click Open.

 

 

Login to Supervisor Cluster

Supervisor Cluster authentication is integrated with vSphere Single Sign On through the vSphere Kubernetes Plugin. This plugin is already installed in the Linux-01 workstation.

Log in to Supervisor Cluster:

  1. Type or copy/paste the following command into Putty: kubectl vsphere login --server=https://192.168.130.129 --vsphere-username=administrator@corp.local
  2. Type or copy/paste the password: VMware1!

The login process updates the kubectl config file with contexts that you are authorized for. If everything went as expected, you will see the Supervisor Cluster namespaces you have been authorized to use:
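If you would like to double-check what the plugin added, you can list the contexts; the exact names depend on the namespaces you are authorized for:

kubectl config get-contexts          # lists the Supervisor Cluster endpoint and each namespace
kubectl config current-context       # shows which context is currently active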

 

 

 

Switch kubectl context

Let's switch to the demo-app-01 context:

  1. To switch, type or copy/paste following command into Putty: kubectl config use-context demo-app-01

 

We're now ready to launch our new Tanzu Kubernetes cluster!

 

 

Examining a Tanzu Kubernetes Cluster

Tanzu Kubernetes clusters are deployed using Cluster API. Cluster API is an open source Kubernetes project that aims to bring declarative, Kubernetes-style APIs to cluster life cycle management. With Cluster API, we can define our clusters with a YAML file.

The definition for our Tanzu Kubernetes cluster is located under ~/labs/tanzucluster/tkg-cluster-01.yaml

  1. Type or copy/paste the following command into Putty: cd ~/labs/tanzucluster
  2. Type or copy/paste the following command into Putty: cat tkg-cluster-01.yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: demo-app-01
spec:
  distribution:
    version: v1.18.5
  topology:
    controlPlane:
      count: 1                            # 1 control plane node
      class: best-effort-xsmall           # best-effort extra-small VM class
      storageClass: kubernetes            # storage class for the control plane
    workers:
      count: 3                            # 3 worker nodes
      class: best-effort-xsmall           # best-effort extra-small VM class
      storageClass: kubernetes            # storage class for the workers
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]   # cannot overlap with the Supervisor Cluster
      pods:
        cidrBlocks: ["192.0.2.0/16"]      # cannot overlap with the Supervisor Cluster
    storage:
      classes: ["kubernetes"]             # named PVC storage classes
      defaultClass: kubernetes            # default PVC storage class

In the next section, we'll explore how to perform basic operations in our existing Tanzu Kubernetes Cluster.

 

Basic operations on a Tanzu Kubernetes Cluster


In this module, we'll explore how to perform basic operations with Tanzu Kubernetes Clusters.


 

Logging into the Tanzu Kubernetes Cluster

Now that our Tanzu Kubernetes Cluster is fully created, we can now log in.

 

  1. Click on the PuTTY icon in the taskbar to open a new SSH session.

 

Login using SSH to Linux-01a VM:

  1. Click on linux-01a.corp.local under Saved Sessions.
  2. Click the Load button.
  3. Click Open.

 

 

Login to Tanzu Kubernetes Cluster

Tanzu Kubernetes Cluster authentication is integrated with vSphere Single Sign On through the vSphere Kubernetes Plugin. This plugin is already installed in the Linux-01 workstation.

Log in to Supervisor Cluster:

  1. Type or copy/paste the following command into Putty: kubectl vsphere login --server https://192.168.130.129 --vsphere-username=administrator@corp.local --tanzu-kubernetes-cluster-name=tkg-cluster-01 --tanzu-kubernetes-cluster-namespace=demo-app-01
  2. Type or copy/paste the password: VMware1!

If everything went as expected, you will see the namespace and Tanzu Kubernetes cluster:

This is a similar command to the Supervisor Cluster login. However, it is not the same. Notice the --tanzu-kubernetes-cluster-name and --tanzu-kubernetes-cluster-namespace flags we're using in order to log in to the Tanzu Kubernetes Cluster.

 

 

 

Switch to tkg-cluster-01 context

Let's switch to the tkg-cluster-01 context:

  1. To switch, type or copy/paste the following command into Putty: kubectl config use-context tkg-cluster-01

 

We're now logged in to our Tanzu Kubernetes cluster!

You can verify that this is a Tanzu Kubernetes cluster by getting the Kubernetes nodes:

  1. Type or copy/paste the following command into Putty: kubectl get nodes

 

We are ready to deploy some application to the cluster.

 

 

Deploying a stateless application to the Tanzu Kubernetes Cluster

Now we will deploy an application in the cluster. The application is called NGINX. This is a web server that we can use to validate our cluster. However, this application is stateless, as it does not have any persistent storage. It is made of three NGINX pods and a Service with type LoadBalancer

Navigate to the ~/labs/nginx folder in the Linux-01 workstation. You'll find a YAML file called nginx.yaml:

  1. Type or copy/paste the following command into Putty: cd ~/labs/nginx

 

The YAML file is used to define our application. Let's take a look at the nginx.yaml file:

  1. Type or copy/paste the following command into Putty: cat nginx.yaml

 

By examining the YAML file, you'll notice there are two types of resources: Services and Deployments. Services are used to expose an application externally while Deployments are used to maintain multiple replicas of an application.
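For orientation, a manifest of that shape looks roughly like the sketch below. This is only an approximation of the lab's nginx.yaml, not the exact file (names and labels may differ from what cat shows you); the image path matches the one referenced later in this module:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3                                  # the three NGINX pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.corp.local/library/nginx # image served by the lab's Harbor registry
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer                           # HAProxy/NSX-T provides the external IP
  selector:
    app: nginx
  ports:
  - port: 80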

 

 

Deploy Application

After examining the YAML file, we can go ahead and deploy the application:

  1. Type or copy/paste the following command into Putty: kubectl apply -f nginx.yaml

 

After a few minutes, the application will be up and running:

  1. Type or copy/paste the following command into Putty: kubectl get pods

 

The endpoint of the application can now be retrieved:

  1. Type or copy/paste the following command into Putty: kubectl get services

The Application Load Balancer will be provisioned automatically by NSX-T (or HAProxy if using the vDS deployment) when a Service resource with type LoadBalancer is created inside the Tanzu Kubernetes cluster.

 

 

 

Access application using the web browser

The application is now accessible through the External-IP endpoint retrieved in the previous step. Please note that your External-IP may be different.

  1. Open a new tab on your browser and type the IP address from the previous step. In this case, it is: 192.168.130.134. You may have to refresh the browser in order to clean any previous cached page.

 

Lastly, you can delete the whole application by using:

  1. Type or copy/paste the following command into Putty: kubectl delete -f nginx.yaml

The whole application will now be deleted.

 

 

 

Deploying a stateful application to the Tanzu Kubernetes Cluster

Now, we will deploy a stateful application inside the Tanzu Kubernetes cluster. A stateful application will have a Persistent Volume that will persist across Pod deletions. To demonstrate this, we will use the Guestbook application.

Navigate to ~/labs/guestbook.

  1. Type or copy/paste the following command into Putty: cd ~/labs/guestbook

You'll find a YAML file called guestbook.yaml.  Let's take a look at the guestbook.yaml file:

  1. Type or copy/paste the following command into Putty: cat guestbook.yaml

 

 

There are three main blocks in this file:

1. Service: We'll expose the application. We'll be connecting to the port 6379 of the redis-master Pod.

2. Deployment: Here we'll define which Docker image our container will run and where the Persistent Volume should be mounted. We'll map the /data folder to the volume redis-master-claim.

3. PersistentVolumeClaim: This is the part where we'll define the persistent volume. We'll be creating a request to vSphere for a 2GB volume with the Storage Class high-performance-ssd

The YAML file also contains resources for the frontend and redis-slave applications. Details of those have been omitted here, but the same principles apply: each has a Deployment and a Service.
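As an illustration of block 3 above, a PersistentVolumeClaim of that shape looks roughly like this (an approximation; use cat guestbook.yaml for the exact definition shipped with the lab):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-master-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: high-performance-ssd    # the Storage Class named above
  resources:
    requests:
      storage: 2Gi                           # the 2GB volume requested from vSphere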

 

 

Apply YAML File

You can now apply the YAML file by using:

  1. Type or copy/paste the following command into Putty: kubectl apply -f guestbook.yaml

 

The application will be deployed. You can see the status of the pods by using:

  1. Type or copy/paste the following command into Putty: kubectl get pods

You can also verify the status of the Persistent Volume by using:

  1. Type or copy/paste the following command into Putty. Please note that it may take a few minutes for the Persistent Volume Claims to show as Bound: kubectl get persistentvolumeclaims

 

The endpoint of the application can now be retrieved:

  1. Type or copy/paste the following command into Putty: kubectl get services

 

 

 

Access the endpoint and verify that the application works

The application is now accessible through the External-IP endpoint from the previous step. Please note that your External-IP address may be different.

 

Head to your web browser to access the endpoint and verify that the application works:

  1. Open a new tab on your browser and type the External-IP address from the previous step. In this case, it is: 192.168.130.135

 

 

Verify creation of Persistent Volume  in vSphere

 

By clicking on the Persistent Volume, we can access more information:

Cloud Native Storage is available natively in vSphere 7 and gives IT Administrators greater visibility into Kubernetes Persistent Volumes.

 

Lastly, you can delete the whole application by using:

  1. Type or copy/paste the following command into Putty: kubectl delete -f guestbook.yaml

The whole application will now be deleted:

 

 

Deploying a microservices application on a Tanzu Kubernetes Cluster


In this module, we'll dig deeper into application deployment to deploy a microservices based application on a Tanzu Kubernetes Cluster.


 

Logging into the Linux-01a jumpbox

First of all, we need to log in to the Linux-01a jumpbox.

 

  1. Click on the PuTTY icon in the taskbar to open a new SSH session.

 

Login using SSH to Linux-01a VM:

  1. Click on linux-01a.corp.local under Saved Sessions.
  2. Click the Load button.
  3. Click Open.

 

 

Login to Tanzu Kubernetes Cluster

Tanzu Kubernetes Cluster authentication is integrated with vSphere Single Sign On through the vSphere Kubernetes Plugin. This plugin is already installed in the Linux-01 workstation.

Log in to Supervisor Cluster:

  1. Type or copy/paste the following command into Putty: kubectl vsphere login --server=https://192.168.130.129 --vsphere-username=administrator@corp.local --tanzu-kubernetes-cluster-name=tkg-cluster-01 --tanzu-kubernetes-cluster-namespace=demo-app-01
  2. Type or copy/paste the password: VMware1!

If everything went as expected, you will see the namespaces and Tanzu Kubernetes clusters:

This is a similar command to the Supervisor Cluster login. However, it is not the same. Notice the --tanzu-kubernetes-cluster-name and --tanzu-kubernetes-cluster-namespace flags we're using in order to log in to the Tanzu Kubernetes Cluster.

 

 

 

Switch to tkg-cluster-01 context

Let's switch to the tkg-cluster-01 context:

  1. To switch, type or copy/paste the following command into PuTTY: kubectl config use-context tkg-cluster-01

 

We're now logged in to our Tanzu Kubernetes cluster!

You can verify that this is a Tanzu Kubernetes cluster by getting the Kubernetes nodes:

  1. Type or copy/paste the following command into Putty: kubectl get nodes

 

We are ready to deploy our application to the cluster.

 

 

The application: Acme-Fitness

In a previous lab we deployed both stateless and stateful applications to a Tanzu Kubernetes Cluster. Tanzu Kubernetes Clusters are production-ready Kubernetes clusters that can host complex microservices applications. The application that we are going to deploy is called acme-fitness. It is an application where the business logic has been decomposed into smaller polyglot services. It is made up of a Frontend Service, User Service, Catalog Service, Cart Service, Payment Service and Order Service.

 

Navigate to the ~/labs/acme-fitness folder in the Linux-01 workstation. You'll find a folder for every service described above:

  1. Type or copy/paste the following command into Putty: cd ~/labs/acme-fitness

 

These folders have the manifests to deploy the application and the associated databases. The services have to be initially deployed in a certain order. To group all the application resources together, we are going to deploy the application into a Kubernetes namespace.

2.     Type or copy/paste the following command into Putty: kubectl create ns acme-fitness

 

We are now ready to deploy every service into the newly created acme-fitness namespace.
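As you deploy the services below, you can optionally keep an eye on everything landing in the namespace with a single command:

kubectl get all -n acme-fitness      # pods, services and deployments created so far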

 

 

Cart Service

  1. Type or copy/paste the following command into Putty: cd ~/labs/acme-fitness/cart-service
  2. Type or copy/paste the following command into Putty to deploy the Cart Service and its Redis database: kubectl apply -f cart-service.yaml

 

3.   Type or copy/paste the following command into Putty to verify that the Cart Service are up and running. You may have to wait a few minutes until all pods are shown as Running: kubectl get pod -n acme-fitness

 

 

 

Catalog Service

  1. Type or copy/paste the following command into Putty: cd ~/labs/acme-fitness/catalog-service
  2. Type or copy/paste the following command into Putty to deploy the Catalog Service and its MongoDB database: kubectl apply -f catalog-service.yaml

 

3.   Type or copy/paste the following command into Putty to verify that the Catalog Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running: kubectl get pod -n acme-fitness

 

 

 

Payment Service

  1. Type or copy/paste the following command into Putty: cd ~/labs/acme-fitness/payment-service
  2. Type or copy/paste the following command into Putty to deploy the Payment Service. This service does not have an associated database: kubectl apply -f payment-service.yaml

 

3.   Type or copy/paste the following command into Putty to verify that the Payment Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running: kubectl get pod -n acme-fitness

 

 

 

Order Service

  1. Type or copy/paste the following command into Putty: cd ~/labs/acme-fitness/order-service
  2. Type or copy/paste the following command into Putty to deploy the Order Service and its Postgres database: kubectl apply -f order-service.yaml

 

3.   Type or copy/paste the following command into Putty to verify that the Order Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running: kubectl get pod -n acme-fitness

 

 

 

Users Service

  1. Type or copy/paste the following command into Putty: cd ~/labs/acme-fitness/users-service
  2. Type or copy/paste the following command into Putty to deploy the Users Service and its Mongo and Redis databases: kubectl apply -f users-service.yaml

 

3.   Type or copy/paste the following command into Putty to verify that the Users Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running: kubectl get pod -n acme-fitness

 

 

 

Frontend Service

  1. Type or copy/paste the following command into Putty: cd ~/labs/acme-fitness/frontend-service
  2. Type or copy/paste the following command into Putty to deploy the Frontend Service. This service does not have an associated database: kubectl apply -f frontend-service.yaml

 

3.   Type or copy/paste the following command into Putty to verify that the Frontend Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running: kubectl get pod -n acme-fitness

 

 

 

Retrieving the application endpoint

Now that our application is fully deployed, we have to retrieve the application endpoint.

  1. Type or copy/paste the following command into Putty to retrieve the application endpoint. You'll see the application endpoint listed under frontend: kubectl get service -n acme-fitness

 

 

 

Access application using the web browser

The application is now accessible through the External-IP endpoint retrieved in the previous step. Please note that your External-IP may be different.

  1. Open a new tab on your browser and type the IP address from the previous step. In this case, it is: 192.168.130.134. You may have to refresh the browser in order to clean any previous cached page.

 

Try navigating around to test that all components are working properly.

You can test the Users Service by logging in with the user "eric" and password "vmware1!":

  1. Click on the login button as shown below:

 

2.     Use the following credentials:

  • Username: eric
  • Password: vmware1!

3.    Verify that the login was successful:

 

 

 

Test the Catalog Service

You can test the Catalog Service by clicking on a product.

 

Verify that the product page loads successfully:

  1. Click on the water bottle.

 

The rest of the services can be tested in a similar way.

 

 

Delete Application

Lastly, you can delete the whole application by deleting the Kubernetes namespace:

  1. Type or copy/paste the following command into Putty: kubectl delete ns acme-fitness

The whole application will now be deleted.

 

 

Working with Harbor


One of the most significant benefits of containers is that they allow you to package up an application with its dependencies and run it anywhere. This form of application packaging is called a container image. Docker images need to be stored in a secure and reliable location, an image registry. Harbor is an open-source container image registry for Docker images.

In this lab, we'll explore how to use Harbor to store our Docker images and we'll update an application in Kubernetes.


 

Open Google Chrome

 

1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Log in to Harbor

 

Log into Region A vCenter

  1. Click on the Region A Folder in the Bookmark toolbar.
  2. Click on Harbor link in the Bookmark toolbar.
  3. Click the Login Button. The credentials should be saved for you.

If credentials aren't saved, use the following:

  • username: admin
  • password: VMware1!

 

 

Exploring Harbor

 

You'll notice there is a project called library:

  1. Click on library

 

  1. Click on Repositories
  2. Click on library/nginx

Now, the tags for the NGINX image will be displayed. We only have one tag "latest".

 

Harbor integrates with third-party vulnerability scanners such as Clair and Trivy to scan your Docker images.

  1. Click on the image hash, beginning with sha256, to see more information about the image:

 

 

 

Creating a new Docker image from an existing one

Now we will create a Docker image. All Docker images are created from a file called a Dockerfile. This Dockerfile defines how the image will be built.

Docker images are built using layers. This means that you can cache commonly-used layers so that you only need to download or upload the layers that you do not already have.

We'll use the existing NGINX image and we'll modify the index.html page to display a change.

 

  1. Click on the PuTTY icon in the taskbar to open a new SSH session.

 

Login using SSH to Linux-01a VM:

  1. Click on linux-01a.corp.local under Saved Sessions
  2. Click the Load button
  3. Click Open

Now navigate to ~/labs/nginx and create a new file called index.html.  This will be our new front page in our NGINX web server.

  1. Type or copy/paste the following command into PuTTY: cd ~/labs/nginx
  2. Then type or copy/paste the following command into PuTTY:
  3. Press Enter.
cat << 'EOF' > ~/labs/nginx/index.html
<html>
<head>
<title>Welcome to VMware Hands-on-Labs!</title>
</head>
<body>
<h1>Welcome to VMware Hands-on-Labs!</h1>
<p>If you see this page, you have correctly modified the base NGINX image. Congratulations!</p>
</body>
</html>
EOF

Now, create a new file called Dockerfile.

  1. Type or copy/paste the following command into PuTTY:
  2. Press Enter.
cat << 'EOF' > ~/labs/nginx/Dockerfile
FROM harbor.corp.local/library/nginx:latest
COPY index.html /usr/share/nginx/html
EOF

In the first line, we are telling Docker to start building our new image from the existing nginx image in the Harbor registry.

In the second line, we are telling Docker to copy the index.html file we just generated in the NGINX www folder.

 

 

Build the image

First of all, we need to log in to Harbor from the CLI:

  1. Type or copy/paste the following command into PuTTY: docker login harbor.corp.local

 

We can now build and tag our image. The Docker images need to be tagged with the following format to be stored in Harbor: registry_url/project/image:tag

  1. Type or copy/paste the following command into PuTTY: docker build . -t harbor.corp.local/library/nginx:new

We can build and tag our image with a single command using the -t flag.

 

We are now ready to upload the new image to Harbor.  You can do so with this command:

  1. Type or copy/paste the following command into PuTTY: docker push harbor.corp.local/library/nginx:new

 

Docker images are built using layers. Therefore, Docker didn't need to upload the full image again to Harbor. Docker just pushed the layer with the change we made to the base image
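If you are curious, you can inspect the layers of the new image locally (optional); the top entry corresponds to the COPY instruction in our Dockerfile:

docker history harbor.corp.local/library/nginx:new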

 

 

View image in Harbor Web UI

If you go back to the Harbor Web UI and back to the NGINX project, you'll notice the new tag:

  1. Click on nginx to return to the project view.

 

2. A new image with the tag "new" has been created.

 

 

 

Deploying the NGINX application with the base image

To be able to notice the difference, we first need to deploy the application with the NGINX image prior to the modification. In order to deploy the application, we need to log in to the Tanzu Kubernetes Cluster:

  1. Type or copy/paste the following command into PuTTY: kubectl vsphere login --server=https://192.168.130.129 --insecure-skip-tls-verify --vsphere-username=administrator@corp.local --tanzu-kubernetes-cluster-name=tkg-cluster-01 --tanzu-kubernetes-cluster-namespace=demo-app-01
  2. Type or copy/paste the password: VMware1!

If everything went as expected, you will see the namespaces and Tanzu Kubernetes clusters:

 

 

 

Switch to tkg-cluster-01 context

Let's switch to the tkg-cluster-01 context.

  1. To switch, type or copy/paste following command into PuTTY: kubectl config use-context tkg-cluster-01

 

We're now logged in to our Tanzu Kubernetes cluster! We can now proceed to deploy the application to the cluster.

We'll deploy the nginx.yaml file located in ~/labs/nginx in the tkg-cluster-01 Tanzu Kubernetes Cluster.

  1. Type or copy/paste the following command into PuTTY: kubectl apply -f nginx.yaml

The application will now be deployed:

 

You can get the application endpoint by using:

  1. Type or copy/paste the following command into PuTTY: kubectl get services

 

 

 

Access application using the web browser

You can now use your web browser to navigate to that address and see the NGINX welcome page. Please note that your External-IP Address may be different:

 

  1. Open a new tab on your browser and type the External-IP endpoint from the previous step. In this case, it is: 192.168.130.133

 

 

Changing the NGINX deployment with a new Docker image

We are going to modify the existing NGINX deployment with our new image with its custom index.html page.

To do so, open the nginx.yaml file. You'll notice that the existing Deployment has harbor.corp.local/library/nginx as an image.

  1. Type or copy/paste the following command into PuTTY: cat ~/labs/nginx/nginx.yaml

When no tag is specified, Docker will assume that it is the tag "latest"

 

Now, we'll modify the file to have the new tag.

  1. Type or copy/paste the following command into PuTTY:  sed -i s@harbor.corp.local/library/nginx@harbor.corp.local/library/nginx:new@g ~/labs/nginx/nginx.yaml
  2. Type or copy/paste the following command into PuTTY to verify that the change is now applied: cat ~/labs/nginx/nginx.yaml

 

Now we are able to update the Kubernetes deployment by running:

  1. Type or copy/paste the following command into PuTTY: kubectl apply -f nginx.yaml

 

Kubernetes will notice that the only thing that changed was the Docker image. As such, it will only change the Deployment and not the Service.

If you get all the Pods, you'll notice that Kubernetes launched new Pods with the updated Docker image and removed the old ones (rolling update):

  1. Type or copy/paste the following command into PuTTY: kubectl get pods
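You can also watch the rollout complete from the Deployment itself (optional). The Deployment name used below is assumed to be nginx; match it to whatever kubectl get deployments shows in your cluster:

kubectl get deployments
kubectl rollout status deployment/nginx      # returns once the rolling update has finished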

 

We can head back to our web browser and refresh the page. We should now have the updated index.html page.

 

 

Conclusion


In this module, you saw how to create Tanzu Kubernetes clusters, perform basic operations with them, and interact with Harbor as an image registry.


 

You have finished Module 3

 

 

How to End Lab

 

If this is the last module you would like to take in this lab, you may end your lab by clicking on the END button.

 

Appendix - New User Guide

Appendix - New User Guide


The New User Guide covers the following topics as part of the console walkthrough:

  • Location of the main console  
  • Alternative methods of keyboard data entry  
  • Activation prompt or watermark

 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the Right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved.  All your work must be done during the lab session.  However, you can click the EXTEND button to increase your time.  If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes.  Each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes.  Each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

CLICK HERE TO RETURN TO THE LAB GUIDANCE

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-2113-01-SDC

Version: 20210331-134706