VMware Hands-on Labs - HOL-2013-01-SDC


Lab Overview - HOL-2013-01-SDC - vSphere 7 with Kubernetes- Lightning Lab

VMware Tech Preview - Disclaimer


This session may contain product features that are currently under development.

This session/overview of the new technology represents no commitment from VMware to deliver these features in any generally available product.

Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.

Technical feasibility and market demand will affect the final delivery.

Pricing and packaging for any new technologies or features discussed or presented have not been determined.


Introduction


Welcome to the vSphere 7 with Kubernetes Lightning Lab!

We have developed Lightning Labs to help you learn about VMware products in small segments of time.  In this lab, you will go through the process of creating a Kubernetes Supervisor cluster within VMware's vSphere 7 with Kubernetes platform.

This lab will be presented in the form of an Interactive Simulation.


 

Introducing vSphere 7 with Kubernetes


This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, use this document to guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf

 

 

What is vSphere 7 with Kubernetes?

 

vSphere 7 with Kubernetes empowers IT Operators and Developers to accelerate innovation by converging containers and VMs into VMware's vSphere platform with native Kubernetes. VMware has leveraged Kubernetes to rearchitect vSphere and extend its capabilities to all modern and traditional applications.

vSphere 7 with Kubernetes transforms vSphere into the app platform of the future.

Enterprises can now accelerate the development and operation of modern apps on VMware vSphere while continuing to take advantage of existing investments in technology, tools, and skill sets. By leveraging Kubernetes to rearchitect vSphere, vSphere 7 with Kubernetes enables developers and IT operators to build and manage apps composed of containers and/or virtual machines. This approach gives enterprises a single platform to operate existing and modern apps side by side.

vSphere 7 with Kubernetes exposes a new set of services that developers can consume through the Kubernetes API.  The VMware Tanzu Kubernetes Grid Service for vSphere allows developers to manage the lifecycle of Kubernetes clusters on demand.  The Network Service enables integrated load balancing, ingress, and network policy for developers' Kubernetes clusters.

The Storage service integrates vSphere Cloud Native Storage into Kubernetes to provide stateful application support, with Persistent Volumes backed by vSphere volumes.

The vSphere Pod service takes advantage of the other services to deliver Pods natively on ESXi.  The primary place that customers will run Pods is in the upstream-aligned, fully conformant clusters deployed through the Tanzu Kubernetes Grid Service.  The Pod service complements the TKG Service for specific use cases where the application components need the security and performance isolation of a VM in a Pod form factor.

And finally, vSphere 7 with Kubernetes has a native registry service for storing container images that are deployed as Kubernetes pods.

Application-focused management means that policy is attached to the namespaces that contain developers' applications.  Operations teams now have a holistic view of an application by managing the namespace that contains all of the application's objects.
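
To illustrate how developers consume these services declaratively through the Kubernetes API, here is a hedged sketch of a cluster request to the Tanzu Kubernetes Grid Service. It is illustrative only: the API version, VM class names, Kubernetes version, and object names are assumptions and are not taken from this lab environment.

  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: dev-cluster                    # illustrative name
    namespace: hol                       # the vSphere Namespace that owns the cluster
  spec:
    distribution:
      version: v1.16                     # assumed Kubernetes release offered by the service
    topology:
      controlPlane:
        count: 3                         # control plane node count
        class: best-effort-small         # assumed VM class
        storageClass: high-performance-ssd
      workers:
        count: 3                         # worker node count
        class: best-effort-small
        storageClass: high-performance-ssd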

 

 

Video: vSphere 7 with Kubernetes Demo (3:42)

See vSphere 7 with Kubernetes in action in this brief demonstration!

 

Module 1 - Deploying a Supervisor Kubernetes Cluster (15 minutes)

Introduction


The Supervisor cluster is a special kind of Kubernetes cluster that uses ESXi hosts as its worker nodes instead of Linux VMs. This is achieved by integrating a kubelet (our implementation is called the Spherelet) directly into ESXi. The Spherelet does not run in a VM; it runs directly on ESXi.


 

Supervisor Cluster

 

Workloads deployed on the Supervisor, including Pods, each run in their own isolated VM on the hypervisor. To accomplish this, we have added a new container runtime to ESXi called the CRX. The CRX is like a virtual machine that includes a Linux kernel and a minimal container runtime inside the guest. Because this Linux kernel is tightly coupled with the hypervisor, we are able to make a number of optimizations that effectively paravirtualize the container.

The Supervisor includes a Virtual Machine operator that allows Kubernetes users to manage VMs on the Supervisor cluster. You can write deployment specifications in YAML that mix container and VM workloads in a single deployment, sharing the same compute, network, and storage resources.

The VM operator is just an integration with vSphere's existing virtual machine lifecycle service, which means that you can use all of the features of vSphere with Kubernetes-managed VM instances. Features such as reservations/limits/shares (RLS) settings, Storage Policy, and Compute Policy are supported.

In addition to VM management, the operator provides APIs for Machine Class and Machine Image management. To the VI admin, Machine Images are simply Content Library items.
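
As a rough illustration of the VM operator API described above, the following sketch shows what a Kubernetes-managed VM request might look like. The resource names, image, and class values are assumptions for illustration, not objects from this lab.

  apiVersion: vmoperator.vmware.com/v1alpha1
  kind: VirtualMachine
  metadata:
    name: db-vm                          # illustrative name
    namespace: hol
  spec:
    imageName: centos-8-image            # Machine Image, surfaced from a Content Library item (assumed name)
    className: best-effort-small         # Machine Class defining CPU/memory sizing (assumed name)
    storageClass: high-performance-ssd   # placement via vSphere Storage Policy
    powerState: poweredOn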

In the following interactive simulation, we will demonstrate how to deploy a Supervisor cluster in a vSphere environment.

 

 

Namespaces

Using vSphere 7 with Kubernetes, admins now have the ability to create Namespaces within vSphere.  The vSphere Namespace is an abstraction to which admins attach policy and then assign to development teams.  More specifically, authentication and authorization for Namespaces are enabled through vSphere Single Sign-On, and administrators align Storage and Network policy with corresponding Kubernetes constructs through the Namespace.  Administrators are able to create and manage these Namespaces directly through the vSphere Web Client.

 

 

Hands-on Labs Interactive Simulation: Deploying a Supervisor Kubernetes Cluster - Administrator Persona


This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

In this stage of the lab, we will be walking through the creation of a Kubernetes cluster from the perspective of an infrastructure administrator.

  1. Click here to open the interactive simulation. It will open in a new browser window or tab.
  2. When finished, click the “Return to the lab” link to continue with this lab.

The lab continues to run in the background. If the lab goes into standby mode, you can resume it after completing the module.


ISim Notes Admin Persona - Do Not Publish


This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

The orange boxes show where to click, and the left and right arrow keys can also be used to move through the simulation in either direction.

vSphere 7 with Kubernetes is the new generation of vSphere for modern applications; a vSphere cluster enabled with this capability is referred to as a Supervisor Cluster.  This fundamentally changes how developers get access to vSphere infrastructure and how IT Operations provides governance.

In the following scenario, the application platform team has requested resources for a new application that is being worked on by the development team. The IT operations team will enable a Supervisor Cluster on an existing ESXi cluster. Once the Supervisor cluster is enabled, resources are allocated to the development team via the creation of a Namespace. Let’s start by enabling the Supervisor Cluster in vCenter.


 

Enable Supervisor Cluster

  1. Click Menu
  2. Click Workload Platform
  3. Click the scroll bar
  4. Click I'm Ready

This is a list of ESXi clusters that are compatible with being enabled as a Supervisor Cluster.  Currently that means HA and DRS are enabled on the cluster.

  1. Click Site01-Cluster01
  2. Click OK

Enabling the Supervisor Cluster means that a Kubernetes control plane will be deployed onto the ESXi cluster. This is designed to be a highly available, multi-master deployment with etcd and the Kubernetes API stacked into each node; for this small lab environment, we are deploying a single-Master control plane. The deployment size you select determines the size of the Master VM and provides an estimate of the number of pods that could be deployed.

  1. Click Tiny
  2. Click Next

Next we add networking to the Control Plane configuration.  Each Master node will have multiple network interfaces. The first one is the Management network and supports traffic to vCenter.

  1. Click Select Network
  2. Click VM Network
  3. Click the Starting Master IP field and type 192.168.120.10
  4. Click in the Subnet Mask field and type 255.255.255.0
  5. Click in the Gateway field and type 192.168.120.1
  6. Click in the DNS Servers field and type 192.168.110.10
  7. Click in the DNS Search Domains field and type corp.local
  8. Click in the NTP Servers field and type 10.166.1.120
  9. Click the scroll bar

The other network interfaces support traffic to the Kubernetes API and to the Pods/Services that are deployed on the Supervisor cluster.  This network is supported by NSX. Choose the NSX distributed switch and Edge cluster. The Master Nodes will be assigned Virtual IPs (VIPs) and will be fronted by a load balancer with its own VIP. Internal traffic from the Kubernetes nodes (the ESXi hosts) to the master also goes through a load balancer.

  1. Click Select a VDS for the Namespace network
  2. Click the available VDS in the list  
  3. Click on Select an Edge Cluster
  4. Click edge-cluster-01
  5. Click in the DNS Servers field and type 192.168.110.10

Pod and Service CIDRs are the network segments that are assigned to the Kubernetes Pods and Services created on the cluster.  These IPs are private to the cluster and do not get routed. Routing into and out of the cluster happens via the segments defined with the Ingress and Egress CIDRs.  The Ingress CIDR provides the VIPs assigned to the Master Nodes and their load balancer, as well as to any Kubernetes Service that must be accessed from outside the cluster.  VIPs are also assigned to each Namespace with an SNAT rule for any outbound traffic; these VIPs come from the Egress CIDR.

  1. Click in the Ingress CIDRs field and type 192.168.124.0/28
  2. Click in the Egress CIDRs field and type 192.168.124.32/28
  3. Click Next

Next we will choose where storage objects are placed. Note that storage placement is done via vSphere Storage Policies rather than by individual Datastores. Choose the Storage policy that defines where the Master VMs will be placed.

  1. Click Select Storage for the Master Node
  2. Click high-performance-ssd
  3. Click OK

Ephemeral disks are the volumes that are created for the native Pods running on ESXi hosts. Choose where these should reside.

  1. Click Select Storage for the Ephemeral Disks
  2. Click high-performance-ssd
  3. Click OK

As part of supporting pods running natively on ESXi, we have created the capability to cache images that are downloaded for the containers running in the pods. Subsequent pods using the same image will pull from the local cache rather than the external container registry. Choose the storage policy that defines where this cache should be located.

  1. Click Select Storage for the Image Cache
  2. Click high-performance-ssd
  3. Click OK
  4. Click Next
  5. Click Finish
  6. Click Menu
  7. Click Hosts and Clusters.  There is a new resource pool called Namespaces. It will include inventory objects created on this Supervisor cluster.
  8. Click the Namespaces resource pool.  Note that the Kubernetes MasterAPI VM is running, which means Kubernetes has been enabled directly on the cluster Site01-Cluster01

 

 

Create a Namespace

Now that the Supervisor cluster has been enabled, we will create a new Namespace.  A vSphere Namespace is an extension of the core Kubernetes namespace construct and allows vSphere admins to attach policy that spans both vSphere and Kubernetes. All developer workloads run in the namespaces to which they are assigned.

  1. Click Menu
  2. Click Workload Platform
  3. Click on the Namespaces tab
  4. Click Create Namespace
  5. Click Site01-Datacenter01
  6. Click Site01-Cluster01
  7. Click in the Name field and type hol (Note: We use lower case because it is a Kubernetes namespace)
  8. Click Create.  This creates a namespace in the Supervisor cluster and a vCenter namespace object.
  9. Click Menu
  10. Click Hosts and Clusters
  11. Click hol under the Namespaces resource pool to view the Summary page

 

 

Add Developer Access

We will now grant edit permissions on the namespace to a user named Fred, who is a member of the development team.  A Kubernetes role binding is created, and the application platform team will be able to create objects in the Kubernetes namespace.  Users are authenticated to the cluster using vSphere Single Sign-On.

 

  1. Click Add Permissions
  2. Click Select Domain
  3. Click vsphere.local
  4. Click in the User/Group field and type Fred
  5. Click fred in the search results
  6. Click Select Role
  7. Click Can Edit
  8. Click OK
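
Behind the scenes, granting "Can Edit" results in a Kubernetes role binding in the hol namespace. The sketch below shows roughly what such a binding could look like; the generated object name and the exact SSO subject format are assumptions, not values captured from the lab.

  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: hol-fred-edit                  # assumed name; the generated name may differ
    namespace: hol
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: edit                           # built-in Kubernetes "edit" role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: sso:fred@vsphere.local         # vSphere SSO identity (assumed format)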

 

 

Add Storage Policy to Namespace

vSphere administrators can associate storage policies with this namespace. Once a policy is assigned, a Kubernetes Storage Class is automatically created on the Supervisor cluster and associated with that policy. It also causes an unlimited resource quota to be assigned to the Namespace on that storage class. Storage limits can be configured and are enforced through Kubernetes Storage Class resource quotas assigned to the namespace. These storage classes are used in the placement of Kubernetes persistent volumes created within the Namespace.

 

  1. Click Add Storage
  2. Click high-performance-ssd
  3. Click OK
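
For illustration, the Kubernetes objects that correspond to this assignment might look like the hedged sketch below. The provisioner is the vSphere Cloud Native Storage CSI driver; the quota object name is an assumption, and the quota is unlimited until a storage limit is configured (the 2000Mi value shown assumes the 2000 limit set later in this module is in MiB).

  # Storage Class surfaced on the Supervisor cluster for the assigned policy
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: high-performance-ssd
  provisioner: csi.vsphere.vmware.com    # vSphere Cloud Native Storage CSI driver
  ---
  # Per-namespace quota keyed to that storage class
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: hol-storagequota               # assumed name
    namespace: hol
  spec:
    hard:
      high-performance-ssd.storageclass.storage.k8s.io/requests.storage: 2000Mi   # assumed units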

 

 

Add CPU and Memory Limits

CPU and Memory limits can also be set for the namespace.  Each namespace is backed by a resource pool where these limits are enforced.

  1. Click the Configure tab
  2. Click Resource Limits
  3. Click Edit
  4. Click in the CPU field and type 3000
  5. Click in the Memory field and type 1000
  6. Click in the high-performance-ssd field and type 2000
  7. Click OK
  8. Click the Summary tab and review details

In the next simulation, see how members of the development team can use the new namespace.

 

Hands-on Labs Interactive Simulation: Supervisor Kubernetes Cluster - Developer Persona


This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

In this stage of the lab, we will be continuing to configure the same Kubernetes cluster from the perspective of a developer.

  1. Click here to open the interactive simulation. It will open in a new browser window or tab.
  2. When finished, click the “Return to the lab” link to continue with this lab.

The lab continues to run in the background. If the lab goes into standby mode, you can resume it after completing the module.


ISim Notes Developer Persona - Do Not Publish


This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

The orange boxes show where to click, and the left and right arrow keys can also be used to move through the simulation in either direction.


 

Download and install the Plugin

In the following scenario, we will focus on the Developer persona and use the Kubernetes cluster to deploy, manage, and secure a container deployment.

  1. Click Open Link to access the CLI Tools

We will use a Linux VM as our client and need to download the appropriate client plugin.

  1. Click Show More Options
  2. Click Linux
  3. Click Plugin for Linux
  4. Click Copy Link Address
  5. Click Putty Icon

We will SSH into the Ubuntu cli-vm using PuTTY.

  1. Click cli-vm
  2. Click Load
  3. Click Open

The wget command downloads the plugin zip file through the proxy server in the Supervisor cluster control plane.

  1. Click on the Putty Shell, type wget http://192.168.124.1/wcp/plugin/linux-amd64/vsphere-plugin.zip --no-check-certificate and hit Enter

Unzip the plugin file.  Note that the bin directory now contains both the kubectl and the kubectl-vsphere plugin executables.  This directory must be in your system PATH.  We have already set that up for you.

  1. Click to paste unzip vsphere-plugin.zip and hit Enter
  2. Click to type and hit Enter

 

 

Application Deployment on the Supervisor Cluster

Supervisor Namespaces are integrated with vSphere Single Sign-On through the vSphere Kubernetes plugin.

The login command is redirected, via an authenticating proxy in the Supervisor control plane, to the PSC in vCenter.  The PSC returns a JSON Web Token (JWT) that is stored in the client config file and authenticates the user to the Supervisor Cluster master on all subsequent kubectl commands.

  1. Click to type kubectl vsphere login --server=192.168.124.1 --vsphere-username fred@vsphere.local --insecure-skip-tls-verify and hit Enter
  2.  Hit Enter for the password.

The config file contains the contexts for the namespaces the user has access to, along with the JWT tokens needed to authenticate to the Kubernetes API.

  1. Click to type cat ~/.kube/config and hit Enter
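
An abridged, hedged example of what this config file can look like is shown below; the server port, entry names, and token are illustrative placeholders rather than values captured from the lab.

  apiVersion: v1
  kind: Config
  clusters:
  - name: 192.168.124.1
    cluster:
      server: https://192.168.124.1:6443           # assumed port
      insecure-skip-tls-verify: true
  contexts:
  - name: hol
    context:
      cluster: 192.168.124.1
      namespace: hol
      user: wcp:192.168.124.1:fred@vsphere.local   # assumed naming pattern
  current-context: hol
  users:
  - name: wcp:192.168.124.1:fred@vsphere.local
    user:
      token: eyJraWQiOi...                         # JSON Web Token issued at login (truncated)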

Next, we want to see the Namespaces that user Fred has access to.

  1. Click to type kubectl config get-contexts and hit Enter

Now we set the current Namespace to be hol.  This means Fred won’t have to append the -n hol flag to each kubectl command.

  1. Click to type kubectl config use-context hol  and hit Enter

To verify that your context is set correctly and that the API is responding, execute the following command.  You should see No Resources Found because nothing has been created in this new Namespace.

  1. Click to type kubectl get all and hit Enter (Should get No Resources Found response)

Let’s jump back into our admin persona and take a look at NSX.  NSX objects are automatically created as we do things like create clusters, add Namespaces or deploy application objects on our cluster.  We will log into NSX-Mgr and look at some of those objects.

  1. Click to open a new Chrome tab
  2. Click nsx-mgr.corp.local bookmark
  3. Click username and type admin
  4. Click password and type VMware1!base
  5. Click LOG IN
  6. Click Advanced Networking & Security

You will see Logical Switches (now called segments) for each Namespace. The first one in the list should be your Namespace – 

  1. Click on the square box to select Logical Switch 
  2. Click on the arrow to Expand Subnets 
  3. Click to Scroll Down - Notice the CIDR for your Namespace (10.244.7.0/24). This segment came from the Pod CIDR defined at cluster creation
  4. Click on Routers - Note the T0 router for north/south routing and a single T1 router for your cluster
  5. Click on NAT
  6. Click the Logical Router 
  7. Click domain-cB:4122c457-a31b-4749-aaa7-a8b4f9a03adb

Note that there is an SNAT IP for each Namespace in the cluster.  Find the CIDR for your Namespace and note that all traffic from Pods in this Namespace will have that translated IP.   These IPs come from the Egress CIDR range and must be routable from your corporate network.

  1. Click IPAM
  2. Click the domain-cB:4122c457-a31b-4749-aaa7-a8b4f9a03adb-ip... to see your Pod CIDR subnets
  3. Click on Load Balancers
  4. Click the Small domain-cB:4122c457-a31b-4749-aaa7-a8b4f9a03a... This is the load balancer that provides access to the Master API for external users.
  5. Click on Virtual Servers and note the IP.
  6. Click to Scroll to the right.  This is the IP you logged in with. The LB labelled clusterip-domain…. is used for calls from outside the cluster to the master.  This is in preparation for Multi-Master support that will come in a later build.
  7. Click to Scroll back to the left
  8. Click on Security.
  9. Click on Distributed Firewall
  10. Click the plus to expand the line that starts ds-domain-cB:4122c457-a31b-...
  11. Click to Scroll Down

Notice the Deny-all Drop rule that will prevent all ingress/egress by default.  Keep this in mind as we deploy an application.

  1. Click the Putty on the taskbar to go back to the CLI-VM
  2. Click to type cd $HOME/labs and hit Enter

We are going to start by creating a Persistent Volume Claim.  This command will cause a Kubernetes persistent volume and a vSphere volume to be created.  The volume will be bound to the claim in a single step.

  1. Click to type cat nginx-claim.yaml and hit Enter
  2. Click to type kubectl apply -f nginx-claim.yaml and hit Enter
  3. Click to type kubectl get pvc and hit Enter - Notice volume bound to the claim
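
The contents of nginx-claim.yaml are not reproduced in this manual; a minimal claim of this kind, assuming the claim name and requested size, might look like:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nginx-claim                        # assumed name; must match the claim referenced by the pod spec
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: high-performance-ssd   # Storage Class created when the policy was added to the namespace
    resources:
      requests:
        storage: 1Gi                         # assumed size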

Now we will create a pod that mounts that persistent volume when it is deployed.

  1. Click to type cat nginx-pers.yaml and hit Enter
  2. Click to type  kubectl apply -f nginx-pers.yaml and hit Enter
  3. Click to type  kubectl get all and hit Enter
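
The nginx-pers.yaml file itself is not shown in this manual; a hedged sketch of a manifest that creates such a pod and mounts the claim (modeled here as a Deployment; the image, labels, and mount path are assumptions) could look like:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-pers
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx-pers
    template:
      metadata:
        labels:
          app: nginx-pers
      spec:
        containers:
        - name: nginx
          image: nginx                           # assumed image
          ports:
          - containerPort: 80
          volumeMounts:
          - name: nginx-data
            mountPath: /usr/share/nginx/html     # assumed mount path
        volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: nginx-claim               # the claim created in the previous step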

The pod has been created but is still Pending while the image is downloaded and the container is created.

  1. Click to type  kubectl describe pod/nginx-pers-646cdfdbbd-b58gv and hit Enter to show you the status of the pod startup

Now the Pod is running and the persistent volume has been successfully attached.

  1. Click the tab at the top to vCenter
  2. Click Menu
  3. Click Hosts and Clusters
  4. Click on nginx-pers-646cdfdbbd.. Native Pod

This is a new summary page for vSphere Native Pods.  Notice that there is no console and that you cannot take action on these objects except through the Kubernetes API.

  1. Click the hol Namespace Resource Pool

Kubernetes events are now captured in vCenter.  VI Admins can collaborate with developers on Kubernetes-specific troubleshooting.

  1. Click Monitor - Note Kubernetes events in vSphere client

Cloud Native Storage (CNS) in vCenter provides the VI Admin with the ability to associate vSphere storage volumes with their corresponding Kubernetes objects.

  1. Click Storage
  2. Click Persistent Volumes
  3. Click the Volume Name
  4. Click icon next to volume name for details
  5. Click Basic - This will display some basic information about the volume
  6. Click to Scroll Down to view more information
  7. Click to Scroll Back Up
  8. Click Kubernetes Objects
  9. Click to Scroll Down to see the Native Pod it's attached to and the other Kubernetes information
  10. Click to Scroll Back Up

Interaction with vSphere Native Pods is the same as with traditional Kubernetes pods.  kubectl provides the capability to jump into individual containers within the pod.  We will do that here.

  1. Click the Putty on the taskbar to return to the session
  2. Click to type kubectl exec -it "nginx-pers-646cdfdbbd-b58gv" bash and hit Enter
  3. Click to type help and hit Enter to see the available shell commands
  4. Click to type exit and hit Enter

NSX objects are automatically created as part of the lifecycle of vSphere Native Pods.  We will now create a deployment and a LoadBalancer service; a hedged sketch of such a manifest appears after the steps below.  An NSX load balancer is automatically created and associated with the service.

  1. Click to type cat nginx-lbsvc.yaml and hit Enter to view the Deployment and LoadBalancer Service manifest
  2. Click to type kubectl apply -f nginx-lbsvc.yaml and hit Enter
  3. Click to type kubectl get all and hit Enter
  4. Click to type kubectl get svc -o wide and hit Enter. Notice that the EXTERNAL-IP status is <pending>.  Let's check again to see if an IP has been assigned.
  5. Click to paste kubectl get svc -o wide and hit Enter
  6. Click to open a new Tab in Chrome.
  7. Click to type the URL http://192.168.124.2 - Note that you are blocked.  Namespace ingress is denied by default.
  8. Click the Putty on the taskbar to return to the session
  9. Click to type cat enable-policy.yaml and hit Enter. This file defines a network policy that enables ingress/egress to/from all Pods.
  10. Click to type kubectl apply -f enable-policy.yaml and hit Enter
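
As referenced above, here is a hedged sketch of what a manifest like nginx-lbsvc.yaml might contain (names, replica count, and image are assumptions); the type: LoadBalancer service is what triggers creation of the NSX load balancer with an external VIP from the Ingress CIDR.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-lb
  spec:
    replicas: 2                          # assumed replica count
    selector:
      matchLabels:
        app: nginx-lb
    template:
      metadata:
        labels:
          app: nginx-lb
      spec:
        containers:
        - name: nginx
          image: nginx                   # assumed image
          ports:
          - containerPort: 80
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-lb
  spec:
    type: LoadBalancer                   # NSX allocates a VIP from the Ingress CIDR for this service
    selector:
      app: nginx-lb
    ports:
    - port: 80
      targetPort: 80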

Kubernetes Network Policy is implemented through NSX.  Deploying this network policy causes distributed firewall rules to be created in NSX that enable Ingress to the application we just deployed.
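
A minimal allow-all policy of the kind enable-policy.yaml applies (the actual file is not reproduced here, and the object name is an assumption) looks like this:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-all                      # assumed name
  spec:
    podSelector: {}                      # empty selector matches every pod in the namespace
    policyTypes:
    - Ingress
    - Egress
    ingress:
    - {}                                 # a single empty rule allows all ingress
    egress:
    - {}                                 # a single empty rule allows all egress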

  1. Click the NSX-Manager tab in Chrome
  2. Click on Distributed Firewall
  3. Click on plus sign to expand hol-allow-all-whitelist. Note that all-ingress-allow has been added.

The Nginx Welcome page is now displayed.

  1. Click on Chrome Browser Tab to return to http://192.168.124.2. Note that it works now.

 

Conclusion


In this module, you have taken your first steps toward learning about vSphere 7 with Kubernetes! This module walked you through enabling a Supervisor Cluster, creating a Namespace, and deploying container workloads from the developer persona.


 

You've finished Module 1

 

Congratulations on completing this lab.

If you are looking for additional information on vSphere 7 with Kubernetes, try one of these:

 

 

How to End Lab

 

To end your lab, click the END button.

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-2013-01-SDC

Version: 20200428-150137