Lab Overview - HOL-2113-01-SDC - vSphere with Tanzu
Hands-on Labs allows you to evaluate the features and functionality of VMware products with no installation required. This lab is self-paced, and most modules are independent of each other. You can use the Table of Contents located in the upper right-hand corner to access any module.
If you are new to the VMware Learning Platform (VLP), please read the New User Guide located in the appendix. Click below to go directly to the new user console walkthrough before continuing:
Lab Module List:
Lab Captains:
This lab manual can be downloaded from the Hands-on Labs Document site found here:
This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
Module 1 - Introduction to vSphere with Tanzu (15 minutes)
This lab is an overview of the Kubernetes capability in vSphere with Tanzu. After completing Module 1 you should have a basic understanding of the vSphere components that support Kubernetes functionality as well as how to enable Kubernetes on a vSphere Cluster. The remaining modules will cover managing vSphere with Tanzu and working with Tanzu Kubernetes clusters.
Modules 1 and 2 cover Application Focused Management which is driven by the IT Operator. This workflow focuses on the tasks IT Operators perform to enable Kubernetes in vSphere with Tanzu, as well as creating and managing the new vSphere objects used to provision, view and manage Kubernetes consumption.
This module will highlight:
Common Platform for Running both Kubernetes/Containerized Workloads and VMs
Kubernetes is now built into vSphere with Tanzu which allows developers to continue using the same industry-standard tools and interfaces they've been using to create modern applications. vSphere Admins also benefit because they can help manage the Kubernetes infrastructure using the same tools and skills they have developed around vSphere. To help bridge these two worlds we've introduced a new vSphere construct called Namespaces, allowing vSphere Admins to create a logical set of resources, permissions, and policies that enable an application-centric approach.
VMware vSphere with Tanzu delivers developer-ready infrastructure and application-focused management for streamlined development, agile operations, and accelerated innovation. It's a flexible environment for modern applications that are built from microservices and run across heterogeneous environments.
With vSphere with Tanzu, VMware delivers the embedded Tanzu Kubernetes Grid Service for fully compliant and conformant Kubernetes capabilities for containerized applications. This approach provides Kubernetes APIs to developers, enabling CI/CD (continuous integration / continuous delivery) processes across a global infrastructure including on-premises data centers, hyperscalers, and Managed Service Provider (MSP) infrastructure. It unites the data center and the cloud with an integrated cloud operating model. Enterprises can now increase the productivity of developers and operators, enabling faster time-to-innovation combined with the security, stability, and governance they need, while avoiding the cost proliferation that comes from running multiple stacks of IT infrastructure or cloud services.
vSphere with Tanzu enables the DevOps model with infrastructure access for developers through Kubernetes APIs. It includes the Tanzu Kubernetes Grid Service, which is VMware's compliant and conformant Kubernetes implementation for building modern containerized applications. In addition, the vSphere Pod Service complements the Tanzu Kubernetes Grid Service for application containers that require VM-like isolation, offering the improved performance and security of a solution built into the hypervisor.
We are introducing a lot of value in vSphere with Tanzu for the VI admin. We deliver a new way to manage infrastructure, called application-focused management. This enables VI admins to organize multiple objects into a logical group and then apply policies to the entire group. For example, an administrator can apply security policies and storage limits to a group of virtual machines and Kubernetes clusters that represent an application, rather than to all of the VMs and clusters individually.
VMware solved the challenges faced by traditional apps across heterogeneous architectures with the introduction of VMware vSphere. With vSphere 7, we're delivering the essential services for the modern hybrid cloud. The VMware hyperconverged infrastructure (HCI) stack combines compute, storage, and networking with unified management, and vSphere with Tanzu powers the innovation behind developer-ready infrastructure. With the new Kubernetes and RESTful API surface, developers can streamline their work and IT administrators can improve productivity using application-focused management.
vSphere with Tanzu Services is a new, integrated Kubernetes and RESTful API surface, powered by the innovations in vSphere with Tanzu, that enables API access to all core services.
VMware vSphere with Tanzu consists of two families of services: Tanzu Runtime Services and Infrastructure Services.
* vSphere Pod Service and Registry Service require NSX-T
A Tanzu Kubernetes Grid (TKG) cluster is a Kubernetes (K8s) cluster that runs inside virtual machines on the Supervisor Cluster and not on vSphere Pods. It is enabled via the Tanzu Kubernetes Grid Service for vSphere. Since a TKG cluster is fully upstream-compliant with open-source Kubernetes, it is guaranteed to work with all your K8s applications and tools. Tanzu Kubernetes clusters are the primary way customers will deploy Kubernetes-based applications.
TKG clusters in vSphere use the open-source Cluster API project for lifecycle management, which allows developers and operators to manage the lifecycle (create, scale, upgrade, destroy) of conformant Kubernetes clusters using the same declarative, Kubernetes-style API that is used to deploy applications on Kubernetes.
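As an illustration of this declarative model, scaling a cluster is simply a matter of changing the desired worker count in its spec and letting Cluster API reconcile it. The sketch below is hedged: the cluster and namespace names are placeholders, not objects from this lab.

# Hypothetical example: scale an existing Tanzu Kubernetes cluster from 3 to 5 worker nodes
kubectl patch tanzukubernetescluster my-tkg-cluster -n my-namespace \
  --type merge -p '{"spec":{"topology":{"workers":{"count":5}}}}'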
The vSphere Pod Service is a service that runs on a VMware managed Kubernetes control plane over your ESXi cluster. It allows you to run native Kubernetes workloads directly on ESXi. The ESXi hosts become the Kubernetes Nodes and vSphere Pods are the components that run the app workloads.
The vSphere Pod Service provides a purpose-built lightweight Linux kernel that is responsible for running containers inside the guest. It provides the Linux Application Binary Interface (ABI) necessary to run Linux applications. Since this Linux kernel is provided by the hypervisor, VMware has been able to make numerous optimizations to boost its performance and efficiency. When users need the security and performance isolation of a VM, orchestrated as a set of containers through Kubernetes, they should use the vSphere Pod Service.
* vSphere Pod Service requires NSX-T
The Registry Service allows developers to store, manage and better secure Docker and OCI (Open Container Initiative) images using Harbor as a private registry. The lifecycle of projects and members of the private image registry is automatically managed and is linked to the lifecycle of namespaces and user or group permissions in namespaces created in vCenter.
* Registry Service requires NSX-T
The Storage Service exposes vSphere or vSAN based storage and provides developers and operators the capability to manage persistent disks for use with containers, Kubernetes clusters and virtual machines.
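For example, a developer might request a persistent disk with a standard Kubernetes PersistentVolumeClaim. The manifest below is a hedged sketch; the claim name and storage class are placeholders that would map to a vSphere storage policy exposed through the Storage Service.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data                      # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-gold       # placeholder; backed by a vSphere storage policy
  resources:
    requests:
      storage: 2Gi                     # size of the requested persistent disk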
The Network Service abstracts the underlying virtual networking infrastructure from the Kubernetes environment. It can be implemented using VMware vSphere Networking or NSX-T. It provides network services for Supervisor Cluster control plane VMs as well as TKG cluster control plane and worker nodes. If NSX-T is leveraged there are additional capabilities around Kubernetes service load balancing and network security.
vSphere with Tanzu Supervisor Cluster
IT Operators enable the Supervisor Kubernetes Cluster on vSphere clusters through a simple wizard in the vSphere Client. The Supervisor cluster provides the Kubernetes backbone onto which we have built services that can be consumed by both Operators and DevOps teams.
In this simulation, we will enable the vSphere with Tanzu Supervisor Cluster using the built-in VMware vSphere Networking feature. Using NSX-T for the Supervisor Cluster is not covered in this simulation.
This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps that are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you were interacting with a live environment.
The lab continues to run in the background. If the lab goes into standby mode, you can resume it after completing the module.
In this module, you were able to gain a basic understanding of the vSphere components that support Kubernetes functionality as well as how to enable Kubernetes on a vSphere Cluster.
Proceed to any module below which interests you most.
To review more info on the new features of vSphere 7, please use the links below:
If this is the last module you would like to take in this lab, you may end your lab by clicking on the END button.
Module 2 - Managing vSphere with Tanzu (30 minutes)
In Module 1, the Hands-on Labs Interactive Simulation guided you through enabling a Supervisor cluster on a vSphere with Tanzu cluster. As part of that operation, several new objects were created.
During the enablement process, three Supervisor Control Plane VMs were created. They act as the Kubernetes API server and etcd hosts, and they function as the Kubernetes control plane for the vSphere Cluster.
As the IT Operator, you can view all of the objects created during the process from vCenter. In this Lab, we will inspect the components of the Supervisor cluster.
1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
Log into Region A vCenter
If credentials aren't saved, use the following:
Namespaces allow IT Operators to give a project, team or customer capacity to deploy vSphere Pods and Tanzu Kubernetes Grid (TKG) clusters while having control over access and resource utilization. IT Operators have full visibility into resource consumption, permissions and status of the namespace as well as the vSphere Pods and TKG clusters and VMs running inside.
This allows you to deliver self-service Kubernetes operations inside the namespace to your development teams while retaining the control and visibility you need.
If you need a refresher on TKG Clusters and vSphere Pods, refer to the VMware vSphere with Tanzu Services section of this lab.
Note: Kubernetes events may be blank if there has not been any Kubernetes activity in the past hour.
Do not set any Resource Limits in this lab, or you may affect your ability to perform steps in other modules.
Do not set any Object Limits in this lab, or you may affect your ability to perform steps in other modules.
Users can be selected from any Identity source configured in the vCenter Identity provider.
Read the vSphere with Tanzu - Introduction to Tanzu Kubernetes Grid Cluster blog for more information
When vSphere with Tanzu is installed using vCenter Server networking, load balancing for the Supervisor Cluster and TKG cluster control plane VMs is provided by HAProxy. The default overlay networking within the Tanzu Kubernetes cluster is provided by Antrea.
Namespaces are used in Kubernetes to share resources, set permissions, and isolate applications and teams from each other. In vSphere with Tanzu, the Supervisor Cluster is a Kubernetes cluster, and Namespaces are used to give projects and teams self-service access to deploy resources using Kubernetes. IT Operators can set RBAC, Storage Policies, Resource Limits, and other vSphere features and capabilities on Namespaces.
In this Lab we will cover creating a Supervisor Cluster Namespace in vSphere 7.
Note: Once the namespace has been created, you will see the "Your namespace demo-app-02 has been successfully created" screen. You can read the suggested next steps to understand how to finish configuring the namespace, then click the GOT IT button to dismiss this screen.
For more information on Namespaces, read the Introduction to Kubernetes Namespaces blog.
The virtual machine images used to deploy Tanzu Kubernetes Grid cluster on vSphere with Tanzu are sourced from a public content delivery network (CDN). vSphere with Tanzu uses a subscribed Content Library to automatically download and synchronize new versions of the TKG virtual machine image and corresponding Kubernetes releases.
Click Kubernetes
In this lab we are using a local content library. A typical install would use a subscribed library.
As VMware releases new versions of Tanzu Kubernetes Grid OVAs, you will see them appear in a subscribed library. Developers have the freedom to deploy various versions of Tanzu Kubernetes clusters in their Namespaces based on application needs.
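As a hedged aside, once you are logged in to a Supervisor Cluster namespace with kubectl (covered in Module 3), you can list the virtual machine images that have been synchronized into the library; the exact output depends on your environment and release:

kubectl get virtualmachineimages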
In this module, you gained a deeper understanding of the components created when enabling vSphere with Tanzu using vCenter networking, including the Supervisor Cluster control plane VMs, Namespace objects, Tanzu Kubernetes Grid (TKG) clusters, and the CDN-backed content library for TKG cluster images. You also created a Supervisor Cluster Namespace and assigned resources and permissions.
Proceed to any module below which interests you most.
To review more info on the new features of vSphere 7, please use the links below:
If this is the last module you would like to take in this lab, you may end your lab by clicking on the END button.
Module 3 - Working with Tanzu Kubernetes Clusters (30 minutes)
VMware Tanzu Kubernetes Grid Service for vSphere is a Kubernetes experience that is tightly integrated with vSphere 7.0 and made available through both vSphere with Tanzu and VCF with Tanzu. Tanzu Kubernetes Grid Service runs on Supervisor Clusters in vSphere with Tanzu to create Kubernetes conformant clusters that have been optimized for vSphere.
In this module, we'll explore how to create Tanzu Kubernetes clusters, how to perform basic operations with them and how they interact with Harbor as an image registry.
1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
Log into Region A vCenter
If credentials aren't saved, use the following:
You're now good to start Module 3 - Working with Tanzu Kubernetes Clusters.
Tanzu Kubernetes Grid is a full distribution of the open-source Kubernetes container orchestration platform that is built, signed, and supported by VMware. A Tanzu Kubernetes cluster is a Kubernetes cluster that runs inside virtual machines on the Supervisor Cluster and not on vSphere Pods. It is enabled via the Tanzu Kubernetes Grid Service for vSphere.
Tanzu Kubernetes clusters are deployed inside a namespace using the Supervisor Cluster. In our environment, we have a namespace demo-app-01 where we have deployed an application using the vSphere Pod Service and a Tanzu Kubernetes cluster tkg-cluster-01.
To view the cluster tkg-cluster-01
As you can see, the cluster is made up of a single control plane node and three worker nodes.
As you saw, Tanzu Kubernetes Clusters are deployed in a Supervisor Cluster namespace. Therefore, we'll have to log in to the Supervisor Cluster.
Login using SSH to Linux-01a VM:
Supervisor Cluster authentication is integrated with vSphere Single Sign On through the vSphere Kubernetes Plugin. This plugin is already installed in the Linux-01 workstation.
Log in to Supervisor Cluster:
kubectl vsphere login --server=https://192.168.130.129 --vsphere-username=administrator@corp.local
The login process updates the kubectl config file with contexts that you are authorized for. If everything went as expected, you will see the Supervisor Cluster namespaces you have been authorized to use:
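As an optional check (not a required lab step), you can list the contexts that the plugin added to your kubectl config:

kubectl config get-contexts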
Let's switch to the demo-app-01 context:
kubectl config use-context demo-app-01
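As another optional check, you can list the Tanzu Kubernetes clusters that already exist in this namespace before creating anything new:

kubectl get tanzukubernetesclusters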
We're now ready to launch our new Tanzu Kubernetes cluster!
Tanzu Kubernetes clusters are deployed using Cluster API. Cluster API is an open source Kubernetes project that aims to bring declarative, Kubernetes-style APIs to cluster life cycle management. With Cluster API, we can define our clusters with a YAML file.
The definition for our Tanzu Kubernetes cluster is located under ~/labs/tanzucluster/tkg-cluster-01.yaml
cd ~/labs/tanzucluster
cat tkg-cluster-01.yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: demo-app-01
spec:
  distribution:
    version: v1.18.5
  topology:
    controlPlane:
      count: 1                          # 1 control plane node
      class: best-effort-xsmall         # Best effort extra small class
      storageClass: kubernetes          # Specific storage class for control plane
    workers:
      count: 3                          # 3 worker nodes
      class: best-effort-xsmall         # Best effort extra small class
      storageClass: kubernetes          # Specific storage class for workers
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"] # Cannot overlap with Supervisor Cluster
      pods:
        cidrBlocks: ["192.0.2.0/16"]    # Cannot overlap with Supervisor Cluster
    storage:
      classes: ["kubernetes"]           # Named PVC storage classes
      defaultClass: kubernetes          # Default PVC storage class
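In this lab environment tkg-cluster-01 already exists, so treat the following as an illustrative sketch rather than a required step: applying the manifest and watching the cluster come up uses the same kubectl workflow as any other Kubernetes resource.

kubectl apply -f tkg-cluster-01.yaml
kubectl get tanzukubernetesclusters -n demo-app-01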
In the next section, we'll explore how to perform basic operations in our existing Tanzu Kubernetes Cluster.
In this module, we'll explore how to perform basic operations with Tanzu Kubernetes Clusters.
Now that our Tanzu Kubernetes Cluster is fully created, we can log in.
Login using SSH to Linux-01a VM:
Tanzu Kubernetes Cluster authentication is integrated with vSphere Single Sign On through the vSphere Kubernetes Plugin. This plugin is already installed in the Linux-01 workstation.
Log in to Supervisor Cluster:
kubectl vsphere login --server=https://192.168.130.129 --vsphere-username=administrator@corp.local --tanzu-kubernetes-cluster-name=tkg-cluster-01 --tanzu-kubernetes-cluster-namespace=demo-app-01
If everything went as expected, you will see the namespace and Tanzu Kubernetes cluster:
This command is similar to the Supervisor Cluster login, but it is not the same. Notice the --tanzu-kubernetes-cluster-name and --tanzu-kubernetes-cluster-namespace flags we're using in order to log in to the Tanzu Kubernetes Cluster.
Let's switch to the tkg-cluster-01 context:
kubectl config use-context tkg-cluster-01
We're now logged in to our Tanzu Kubernetes cluster!
You can verify that this is a Tanzu Kubernetes cluster by getting the Kubernetes nodes:
kubectl get nodes
We are ready to deploy an application to the cluster.
Now we will deploy an application in the cluster. The application is NGINX, a web server that we can use to validate our cluster. This application is stateless, as it does not have any persistent storage. It is made up of three NGINX Pods and a Service of type LoadBalancer.
Navigate to the ~/labs/nginx folder in the Linux-01 workstation. You'll find a YAML file called nginx.yaml:
cd ~/labs/nginx
The YAML file is used to define our application. Let's take a look at the nginx.yaml file:
cat nginx.yaml
By examining the YAML file, you'll notice there are two types of resources: Services and Deployments. Services are used to expose an application externally, while Deployments are used to maintain multiple replicas of an application.
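The exact contents of the lab's nginx.yaml may differ, but a minimal version with a three-replica Deployment and a LoadBalancer Service would look roughly like this (the resource names below are assumptions; the image matches the one referenced later in this module):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment               # assumed name
spec:
  replicas: 3                          # three NGINX Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.corp.local/library/nginx   # served from the lab's Harbor registry
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service                  # assumed name
spec:
  type: LoadBalancer                   # provisions an external load balancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80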
After examining the YAML file, we can go ahead and deploy the application:
kubectl apply -f nginx.yaml
After a few minutes, the application will be up and running:
kubectl get pods
The endpoint of the application can now be retrieved:
kubectl get services
The Application Load Balancer will be provisioned automatically by NSX-T (or HAProxy if using the vDS deployment) when a Service resource with type LoadBalancer is created inside the Tanzu Kubernetes cluster.
The application is now accessible through the External-IP endpoint retrieved in the previous step. Please note that your External-IP may be different.
Lastly, you can delete the whole application by using:
kubectl delete -f nginx.yaml
The whole application will now be deleted.
Now, we will deploy a stateful application inside the Tanzu Kubernetes cluster. A stateful application will have a Persistent Volume that will persist across Pod deletions. To demonstrate this, we will use the Guestbook application.
Navigate to ~/labs/guestbook.
cd ~/labs/guestbook
You'll find a YAML file called guestbook.yaml. Let's take a look at the guestbook.yaml file:
cat guestbook.yaml
There are three main blocks in this file:
1. Service: exposes the application. We'll be connecting to port 6379 of the redis-master Pod.
2. Deployment: defines which Docker image our container will run and where the Persistent Volume should be mounted. We'll map the /data folder to the volume redis-master-claim.
3. PersistentVolumeClaim: defines the persistent volume request. We'll be asking vSphere for a 2GB volume with the Storage Class high-performance-ssd.
The YAML file also contains resources for the frontend and redis-slave applications. Details of those have been omitted here; the same principles apply, as each has a Deployment and a Service. A sketch of the redis-master portion follows below.
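Here is a hedged sketch of what the redis-master portion described above could look like; the container image and resource names are assumptions, and the lab's guestbook.yaml is authoritative:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis-master
  ports:
  - port: 6379                         # exposes the redis-master Pod on port 6379
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-master
  template:
    metadata:
      labels:
        app: redis-master
    spec:
      containers:
      - name: redis-master
        image: redis                   # assumed image
        volumeMounts:
        - name: redis-master-data
          mountPath: /data             # persistent data folder
      volumes:
      - name: redis-master-data
        persistentVolumeClaim:
          claimName: redis-master-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-master-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: high-performance-ssd   # Storage Class from the lab
  resources:
    requests:
      storage: 2Gi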
You can now apply the YAML file by using:
kubectl apply -f guestbook.yaml
The application will be deployed. You can see the status of the pods by using:
kubectl get pods
You can also verify the status of the Persistent Volume by using:
kubectl get persistentvolumeclaims
The endpoint of the application can now be retrieved:
kubectl get services
The application is now accessible in the External-IP endpoint from the previous step. Please note that your External-IP address may be different.
Head to your web browser to access the endpoint and verify that the application works:
By clicking on the Persistent Volume, we can access more information:
Cloud Native Storage is available natively in vSphere 7 and gives IT administrators greater visibility into Kubernetes Persistent Volumes.
Lastly, you can delete the whole application by using:
kubectl delete -f guestbook.yaml
The whole application will now be deleted:
In this module, we'll dig deeper into application deployment and deploy a microservices-based application on a Tanzu Kubernetes Cluster.
First of all, we need to log in to the Linux-01a jumpbox.
Login using SSH to Linux-01a VM:
Tanzu Kubernetes Cluster authentication is integrated with vSphere Single Sign On through the vSphere Kubernetes Plugin. This plugin is already installed in the Linux-01 workstation.
Log in to Supervisor Cluster:
kubectl vsphere login --server=https://192.168.130.129 --vsphere-username=administrator@corp.local --tanzu-kubernetes-cluster-name=tkg-cluster-01 --tanzu-kubernetes-cluster-namespace=demo-app-01
If everything went as expected, you will see the namespaces and Tanzu Kubernetes clusters:
This command is similar to the Supervisor Cluster login, but it is not the same. Notice the --tanzu-kubernetes-cluster-name and --tanzu-kubernetes-cluster-namespace flags we're using in order to log in to the Tanzu Kubernetes Cluster.
Let's switch to the tkg-cluster-01 context:
kubectl config use-context tkg-cluster-01
We're now logged in to our Tanzu Kubernetes cluster!
You can verify that this is a Tanzu Kubernetes cluster by getting the Kubernetes nodes:
kubectl get nodes
We are ready to deploy our application to the cluster.
In a previous module, we deployed both stateless and stateful applications to a Tanzu Kubernetes Cluster. Tanzu Kubernetes Clusters are production-ready Kubernetes clusters that can host complex microservices applications. The application that we are going to deploy is called acme-fitness. It is an application whose business logic has been decomposed into smaller polyglot applications. It is made up of a Frontend Service, User Service, Catalog Service, Cart Service, Payment Service and Order Service.
Navigate to the ~/labs/acme-fitness folder in the Linux-01 workstation. You'll find a folder for every service described above:
cd ~/labs/acme-fitness
These folders contain the manifests to deploy the application and the associated databases. The services have to be initially deployed in a certain order. To group all of the application resources together, we are going to deploy the application into a Kubernetes namespace.
2. Type or copy/paste the following command into PuTTY:
kubectl create ns acme-fitness
We are now ready to deploy every service into the newly created acme-fitness namespace.
cd ~/labs/acme-fitness/cart-service
kubectl apply -f cart-service.yaml
3. Type or copy/paste the following command into PuTTY to verify that the Cart Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running:
kubectl get pod -n acme-fitness
cd ~/labs/acme-fitness/catalog-service
kubectl apply -f catalog-service.yaml
3. Type or copy/paste the following command into PuTTY to verify that the Catalog Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running:
kubectl get pod -n acme-fitness
cd ~/labs/acme-fitness/payment-service
kubectl apply -f payment-service.yaml
3. Type or copy/paste the following command into PuTTY to verify that the Payment Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running:
kubectl get pod -n acme-fitness
cd ~/labs/acme-fitness/order-service
kubectl apply -f order-service.yaml
3. Type or copy/paste the following command into PuTTY to verify that the Order Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running:
kubectl get pod -n acme-fitness
cd ~/labs/acme-fitness/users-service
kubectl apply -f users-service.yaml
3. Type or copy/paste the following command into PuTTY to verify that the Users Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running:
kubectl get pod -n acme-fitness
cd ~/labs/acme-fitness/frontend-service
kubectl apply -f frontend-service.yaml
3. Type or copy/paste the following command into PuTTY to verify that the Frontend Service pods are up and running. You may have to wait a few minutes until all pods are shown as Running:
kubectl get pod -n acme-fitness
Now that our application is fully deployed, we have to retrieve the application endpoint.
kubectl get service -n acme-fitness
The application is now accessible through the External-IP endpoint retrieved in the previous step. Please note that your External-IP may be different.
Try navigating around to test that all components are working properly.
You can test the Users Service by logging in with the user "eric" and password "vmware1!":
2. Use the following credentials:
3. Verify that the login was successful:
You can test the Catalog Service by clicking into a product.
Verify that the product page loads successfully:
The rest of the services can be tested in a similar way.
Lastly, you can delete the whole application by deleting the Kubernetes namespace:
kubectl delete ns acme-fitness
The whole application will now be deleted.
One of the most significant benefits of containers is that they allow you to package up an application with its dependencies and run it anywhere. This form of application packaging is called a container image. Docker images need to be stored in a secure and reliable location: an image registry. Harbor is an open-source container image registry for Docker images.
In this lab, we'll explore how to use Harbor to store our Docker images and we'll update an application in Kubernetes.
1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
Log into Region A vCenter
If credentials aren't saved, use the following:
You'll notice there is a project called library:
Now, the tags for the NGINX image will be displayed. We only have one tag "latest".
Harbor integrates with third-party vulnerability scanners such as Clair and Trivy to scan your Docker images.
Now we will create a Docker image. All Docker images are created from a file called a Dockerfile. This Dockerfile defines how the image will be built.
Docker images are built using layers. This means that you can cache commonly-used layers so that you only need to download or upload the layers that you do not already have.
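As an optional aside, once an image is present locally you can inspect its layers with docker history; the tag below assumes the base NGINX image used later in this section:

docker history harbor.corp.local/library/nginx:latest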
We'll use the existing NGINX image and we'll modify the index.html page to display a change.
Login using SSH to Linux-01a VM:
Now navigate to ~/labs/nginx and create a new file called index.html. This will be our new front page in our NGINX web server.
cd ~/labs/nginx
cat << 'EOF' > ~/labs/nginx/index.html
Welcome to VMware Hands-on-Labs!
If you see this page, you have correctly modified the base NGINX image. Congratulations!
EOF
Now, create a new file called Dockerfile.
cat << 'EOF' > ~/labs/nginx/Dockerfile
FROM harbor.corp.local/library/nginx:latest
COPY index.html /usr/share/nginx/html
EOF
In the first line, we are telling Docker to start building our new image from the existing nginx image in the Harbor registry.
In the second line, we are telling Docker to copy the index.html file we just generated into the NGINX www folder.
First of all, we need to log in to Harbor from the CLI:
docker login harbor.corp.local
We can now build and tag our image. Docker images need to be tagged with the following format to be stored in Harbor: registry_url/project/image:tag
docker build . -t harbor.corp.local/library/nginx:new
We can build and tag our image with a single command using the -t flag.
We are now ready to upload the new image to Harbor. You can do so with this command:
docker push harbor.corp.local/library/nginx:new
Docker images are built using layers. Therefore, Docker didn't need to upload the full image again to Harbor. Docker just pushed the layer with the change we made to the base image.
If you go back to the Harbor Web UI and back to the NGINX project, you'll notice the new tag:
2. A new image with the tag "new" has been created.
To be able to notice the difference, we first need to deploy the application with the NGINX image prior to the modification. In order to deploy the application, we need to log in to the Tanzu Kubernetes Cluster:
kubectl vsphere login --server=https://192.168.130.129 --insecure-skip-tls-verify --vsphere-username=administrator@corp.local --tanzu-kubernetes-cluster-name=tkg-cluster-01 --tanzu-kubernetes-cluster-namespace=demo-app-01
If everything went as expected, you will see the namespaces and Tanzu Kubernetes clusters:
Let's switch to the tkg-cluster-01 context.
kubectl config use-context tkg-cluster-01
We're now logged in to our Tanzu Kubernetes cluster! We can now proceed and deploy the new application to the cluster.
We'll deploy the nginx.yaml file located in ~/labs/nginx in the tkg-cluster-01 Tanzu Kubernetes Cluster.
kubectl apply -f nginx.yaml
The application will now be deployed:
You can get the application endpoint by using:
kubectl get services
You can now use your web browser to navigate to that address and see the NGINX welcome page. Please note that your External-IP Address may be different:
We are going to modify the existing NGINX deployment with our new image with its custom index.html page.
To do so, open the nginx.yaml file. You'll notice that the existing Deployment has harbor.corp.local/library/nginx as an image.
cat ~/labs/nginx/nginx.yaml
When no tag is specified, Docker assumes the tag "latest".
Now, we'll modify the file to have the new tag.
sed -i 's@harbor.corp.local/library/nginx@harbor.corp.local/library/nginx:new@g' ~/labs/nginx/nginx.yaml
cat ~/labs/nginx/nginx.yaml
Now we are able to update the Kubernetes deployment by running:
kubectl apply -f nginx.yaml
Kubernetes will notice that the only thing that changed was the Docker image. As such, it will only change the Deployment and not the Service.
If you get all the Pods, you'll notice that Kubernetes launched new Pods with the updated Docker image and removed the old ones (rolling update):
kubectl get pods
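You can also follow the rolling update directly; this assumes the Deployment in nginx.yaml is named nginx-deployment, which may differ in your file:

kubectl rollout status deployment/nginx-deployment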
We can head back to our web browser and refresh the page. We should now have the updated index.html page.
In this module, you saw how to create Tanzu Kubernetes clusters, perform basic operations with them, and interact with Harbor as an image registry.
Proceed to any module below which interests you most.
To review more info on the new features of vSphere 7, please use the links below:
If this is the last module you would like to take in this lab, you may end your lab by clicking on the END button.
Appendix - New User Guide
The New User Guide covers the following topics as part of the console walkthrough:
During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.
You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.
You can also use the Online International Keyboard found in the Main Console.
When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs take advantage of this benefit, which allows us to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Without full access to the Internet, this automated process fails and you see this watermark.
This cosmetic issue has no effect on your lab.
Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-2113-01-SDC
Version: 20210331-134706