VMware Hands-on Labs - HOL-2032-01-CNA


Lab Overview - HOL-2032-01-CNA - Tanzu Mission Control

Lab Guidance


The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Tanzu Mission Control (TMC) provides a single control point for teams to more easily manage Kubernetes and operate modern, containerized applications across multiple clouds and clusters. Tanzu Mission Control codifies the know-how of operating Kubernetes - including deploying and upgrading clusters, setting policies and configurations, understanding the health of clusters and the root cause of underlying issues, and creating a map from the “people structure” to the infrastructure. 

As more and more teams adopt containers and Kubernetes, they become harder to manage. Development teams need independence to run and operate their applications across multiple clusters and clouds. IT teams need to maintain visibility and control at an organization-wide level. 

Tanzu Mission Control supports both development and IT teams in three key ways:  

Lab Module List:

 Lab Captains: 

 

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved.  All your work must be done during the lab session.  However, you can click EXTEND to increase your time.  If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes.  Each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes.  Each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods that make it easier to enter complex data.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Click once in active console window

 

In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.

  1. Click once in the active console window.
  2. Click on the Shift key.

 

 

Click on the @ key

 

  1. Click on the "@ key".

Notice the @ sign entered in the active console window.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved to and run on any platform.  The Hands-on Labs utilize this benefit, allowing us to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Module 1 - Introduction to Tanzu Mission Control (15 minutes)

Introduction


In Module 1, you will learn:


 

Introduction to VMware Tanzu

 

 

In Swahili, ’tanzu’ means the growing branch of a tree. In Japanese, ’tansu’ refers to a modular form of cabinetry. At VMware, Tanzu represents our growing portfolio of solutions to help you build, run and manage modern apps.

The VMware Tanzu portfolio enables customers to build modern apps on Kubernetes and manage all of their clusters from a single control point.  Tanzu allows you to build applications with velocity, run open source Kubernetes with consistency, and manage your entire footprint with confidence.

Tanzu Capabilities Include

 

 

Challenge: Managing disparate Kubernetes clusters

 

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

Kubernetes is a great starting point in a modern IaaS strategy for any enterprise. When we started the project, we saw Kubernetes as offering a ‘Goldilocks’ abstraction—something that is low enough level that you can run pretty much any application, but high enough level that it hides the specifics of the infrastructure environment. Over the past few years, Kubernetes has emerged as a standard for distributed infrastructure, and for the first time ever we have a common, open source abstraction that spans the private cloud, public cloud, and edge-based computing environments.

Kubernetes is also something new, and perhaps the greatest value it represents is an opportunity for IT organizations to move from a world of ‘ticket driven infrastructure’ to ‘modern API driven dynamic infrastructure’, better connecting operators and developers.

While this modern API-driven infrastructure provides a rich and capable environment for cloud native applications that are being built by modern enterprises, it introduces a lot of new ‘moving parts’ and day 2 operating issues that IT teams haven’t had to tackle before. How do you create and enforce security policy in a highly fluid environment? How do you make sure that your identity and access control systems are configured? How do you make certain that everything stays properly configured?

These challenges are hard enough to get right in a single Kubernetes cluster, but we don’t live in a world of single Kubernetes clusters:

Taking all this into consideration, there are still significant challenges to running Kubernetes at scale, including:

So as IT, how do you manage this fragmentation in a way that doesn’t impose limits on what your development teams can do? How can you enable them to move faster by delivering Kubernetes as a service in a way that is empowering, not limiting? 

 

 

VMware Tanzu Mission Control

 

Enter VMware Tanzu Mission Control 

Tanzu  Mission Control provides a single control point for teams to more  easily manage Kubernetes and operate modern, containerized applications across multiple clouds and clusters. Tanzu Mission Control codifies the know-how of operating Kubernetes—including deploying and upgrading  clusters, setting policies and configurations, understanding the health of clusters and the root cause of underlying issues, and creating a map from the “people structure” to the infrastructure. 

With VMware Tanzu Mission Control, we are providing customers with a powerful, API driven platform that allows operators to apply policy to individual clusters or groups of clusters, establishing guardrails and freeing developers to work within those boundaries.

A SaaS-based control plane securely integrates with a Kubernetes cluster through an agent and supports a wide array of operations on the cluster. That includes lifecycle management (deploy, upgrade, scale, delete) of cloud-based clusters via Cluster API.

A core principle of the VMware Tanzu portfolio is to make best use of open source software.  VMware Tanzu Mission Control leverages Cluster API for lifecycle management, Velero for backup/recovery, Sonobuoy for cluster inspection, and Contour for ingress control.

A year ago, we launched the public beta of Cloud PKS, a fully managed Kubernetes as a service offering. That has put us on the front lines of supporting Kubernetes clusters at scale 7×24 and has taught us a great deal. The lessons learned in running Cloud PKS and the Cloud PKS capabilities which customers enjoy are now found in VMware Tanzu Mission Control.

As more and more teams adopt containers and Kubernetes, they become harder to manage. Development teams need independence to run and operate their applications across multiple clusters and clouds. IT teams need to maintain visibility and control at an organization-wide level. 

Tanzu Mission Control supports both development and IT teams in three key ways:  

Independence for Developers

We believe that developers need modern API driven infrastructure to do their job. A big part of the success of cloud has been delivering a set of useful services at the other end of an API call. This has enabled teams that adopt a single cloud to move from a world managed by tickets into an API driven, self-service universe.

VMware Tanzu will bring an API driven model to the world of developers building across multiple clouds. It all starts with the provisioning of Kubernetes clusters. We have created a simple, cloud friendly cluster lifecycle management model that offers an ‘easy button’ for the creation and management of Kubernetes clusters. This will feel a little like one of the managed Kubernetes offerings that the cloud providers deliver (AKS, EKS, GKE), but the advantage is that the Kubernetes cluster is fully provisioned into a developer environment and is fully accessible to the developer. This will offer levels of customizability and control that are difficult to accomplish with a cloud provider offering.

 Consistency for Operators

In working with customers, we identified an opportunity to put more control at the fingertips of the platform operator or SRE managing a Kubernetes footprint. VMware Tanzu Mission Control’s SaaS control plane will securely connect with, manage, and operate a potentially large number of clusters across environments.

You will be able to attach any conformant Kubernetes cluster to VMware Tanzu Mission Control, including clusters running on vSphere, public clouds, managed services, or DIY implementations. That’s a degree of neutrality that’s not possible from a cloud service provider, and an openness that is fairly unique to VMware.

Once you have all your Kubernetes clusters managed from one place, you will be able to take many meaningful actions on individual clusters or across fleets of clusters, including (at launch):

These undertakings become exponentially more challenging with a growing number of clusters, multi-cloud deployments, and an accelerating number of developers and teams. 

 

 

Tanzu Mission Control Concepts


 

Cluster lifecycle management

The ability to drive Cluster API-based Kubernetes services centralizes cluster lifecycle management across all environments. With our model, you don't share a control plane; you own the control plane in a public cloud environment. (This is different from the approach that managed services like AKS, EKS, and GKE take.)

Identity and access management (IAM)

Integrate VMware Tanzu Mission Control with your existing enterprise IAM solution (like Active Directory or SAML) to easily give developers self-service access to the right resources without changing their workflows.

Security and configuration management

Create and apply critical security and access policies across multiple clusters and clouds with one click, and automate inspection tests to ensure new and upgraded clusters are always conformant.

Observability and diagnostics

Streamline troubleshooting with centralized visibility into the health of fleets of clusters and applications running across multiple clouds.

Data protection and migration (coming soon)

Using Velero, an open source project, create and enforce backup and recovery policies for clusters and namespaces, to support data recovery and migration.

Resource and cost management (coming soon)

Enforce quota policies so that development teams can independently create new clusters without exceeding their allotted capacity.

Audit and compliance (coming soon)

Create and enforce custom compliance policies across clusters and teams, and export logs to meet audit requirements.

Connectivity and traffic management (coming soon)

Enable secure, delegated ingress for multi-team clusters with Contour, an open source ingress controller.

Use VMware Tanzu Mission Control together with NSX Service Mesh, which extends Istio to provide microservices visibility, isolation, tracing, and routing.

 


 

Tanzu Mission Control Architecture

 

This is the overall architecture. Tanzu Mission Control is a multi-tenant platform where each customer has access to a policy framework that can be applied to a resource hierarchy (logical components that group clusters and namespaces within clusters). Each customer has an Organization, which is the root of the resource hierarchy. The resource hierarchy applies to clusters that are provisioned and managed by Mission Control as well as clusters that are attached.

Each customer will have access to:

 

 

Resource Hierarchy

 

Each customer gets mapped to an Organization

Multiple Cluster Groups

Multiple Workspaces

Cascading Resource Hierarchy

 

 

Policy Model

 

Policies cascade down the hierarchy tree.

Policies applied at the Organization or Cluster Group level traverse multiple Clusters globally.

Policies applied to Workspaces traverse multiple Namespaces across many Clusters.

Direct Policy: applied directly to a node in the hierarchy tree.

Inherited Policy: inherited from the node above in the tree.

A Cluster or Managed Namespace always has a single parent node.

Note: in Kubernetes itself, a node is a worker machine. A node may be a VM or physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet, and kube-proxy.

 

 

Unified Identity Across Kubernetes (K8s) Clusters

 

Tanzu Mission Control derives identity from cloud.vmware.com

Users / groups imported from the identity provider are mapped to Tanzu Mission Control roles (Org Admin, Users)

Tanzu Mission Control roles translate to Kubernetes Cluster Role Bindings

 

 

 

Tanzu Cluster Lifecycle Management

 

Multi-Cloud

Community Alignment

Operational Control

Flexible networking options

Support for any workload

 

 

Accessing VMware Tanzu Mission Control

UI

Declarative APIs

CLI

Technology integrations (coming soon)

 

 

Cluster Management Capabilities


Capabilities include the following:

 

The Tanzu Namespace in the above diagram refers to the vmware-system-tmc namespace that you will see in practice within Kubernetes clusters being managed by or attached to Tanzu Mission Control. 

Attached and Provisioned Clusters

Using Tanzu Mission Control, you can attach existing Kubernetes clusters from various cloud providers, organize them into logical groups, observe their health and resource usage, and manage their security posture and configuration.

You can also provision clusters directly through Tanzu Mission Control into your own cloud provider account using Cluster API, leveraging the built-in cluster lifecycle management best practices.

Managed and Unmanaged Namespaces

In both attached and provisioned clusters, you can create namespaces that you can manage through Tanzu Mission Control using policies. Your clusters can also have unmanaged namespaces that were created externally and don't need to be managed through Tanzu Mission Control. 

 

 

Cluster inspections via Sonobuoy

 

With such a wide array of Kubernetes distributions available, conformance tests help ensure that a Kubernetes cluster meets the minimal set of features. 

A conformance-passing cluster provides the following guarantees:

Individual Kubernetes distributions may offer additional features beyond conformance testing, but if you change distributions, you can't expect these features to be available.

NOTE: Kubernetes documentation also describes the concept of "node conformance tests". Although they are useful, these tests are more component-focused than system-wide. They validate only the behavior of a specific node, not cluster behavior as a whole.

Tanzu Mission Control extends open source Sonobuoy

 

Conclusion


This concludes "Module 1 - Introduction to VMware Tanzu Mission Control"

You should now have an understanding of VMware Tanzu Mission Control and its components.  Jump into modules 2-4 to get hands on with some of the available tools you can use to help manage your deployment!

 

Congratulations on completing Module 1.

If you are looking for additional information on VMware Tanzu Mission Control, try one of these links:


 

To continue with this lab, proceed to any module below which interests you most regarding VMware Tanzu Mission Control:

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 2 - Cluster Life Cycle Management (30 minutes)

Introduction



 

Lab Environment

 

Above is an architectural diagram of the lab environment.  Hosted inside the HOL environment are two VMs.  The first is the Windows control center VM (this is the desktop that you see when you access this HOL).  The second is an Ubuntu VM that is also hosted in the HOL environment.

You will be connecting to live instances of Amazon Web Services (AWS) and VMware Cloud Services, specifically the Tanzu Mission Control service.

 

 

Kubernetes-in-docker (KIND)

KIND is a tool for running local Kubernetes clusters using Docker container "nodes". KIND is primarily designed for testing Kubernetes 1.11+, initially targeting the conformance tests.  In this lab we will be using an Ubuntu VM to create a local Kubernetes cluster using KIND.  This locally created cluster will then be attached to TMC in order to show how any cluster can be brought under management.
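The contents of the 3node.yaml file used later in this module are not shown in the lab guide, but a minimal KIND config for a three-node cluster (one control-plane node and two workers) typically looks like the sketch below.  The apiVersion varies between kind releases, so treat this as illustrative rather than a copy of the lab's actual file:

```yaml
# Illustrative KIND cluster config: one control-plane and two workers.
# The apiVersion shown matches older kind releases; newer releases
# use kind.x-k8s.io/v1alpha4.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
```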

 

Student Check-In


This section provides guidance on how to gain access to the Tanzu Mission Control cloud service. You will locate the Student Check-In page, search for your email address, and then be provided with My VMware accounts for the VMware Cloud Services login.


 

Open Student Check-In Web Page

 

Open Chrome Browser

 

  1. At the top of the browser, click Student Check-In
  2. This will navigate to https://checkin.hol.vmware.com

 

 

Search and Validate

 

ENSURE THAT YOU TAKE NOTE OF BOTH LOGINS AND KEEP TRACK OF THEM SOMEWHERE!

  1. Enter the email address you used to log in and start the lab

  2. Click Search

Two My VMware login accounts will be provided: Platform Admin and Developer User; please take note of these as they will be needed throughout this lab

  3. Click the link under the Platform Admin Login (e.g. myvmware301@vmware-hol.com / VMware1!)

You will be redirected to the VMware Cloud Services login page.

 

 

VMware Cloud Services Sign-In

 

Even though the email address has been auto-populated for you, it will be necessary to type and then delete a character before you can click NEXT.

  1. Type in your Platform Admin email address that you received as part of the check in process (this will be of the format myvmware3xx@vmware-hol.com)
  2. Click NEXT or TAB key to continue

 

 

  1. Enter password: VMware1!
  2. Click SIGN IN
  3. Click on the Tanzu Mission Control Tile

Congratulations!

You are logged into the Cloud Services Console with an Organization assigned (e.g. TMC-HOL-01) and the Tanzu Mission Control services are available for use while this lab is active.

 

Deploy a local Kubernetes cluster


In this section you will deploy a Kubernetes cluster using KIND on a local Ubuntu VM.


 

Accessing the Ubuntu machine

In our lab, we have an Ubuntu Linux VM configured with all the tools needed to get through the lab lessons.  You will access this VM using PuTTY, which has already been installed and has a saved session to the Ubuntu Linux VM.

 

 

  1. Click on the PuTTY icon in the Windows toolbar;
  2. Select the "Ubuntu" saved session;
  3. Click Open in order to open an SSH session to the Ubuntu VM.

The user configured for auto-login in the PuTTY session is holuser. We will use this user for the whole module while connected to the Ubuntu machine.

 

 

Create a Kubernetes cluster using KIND

Kubernetes in Docker (KIND) and all required supporting binaries, such as Go and Docker, have been pre-installed on the Ubuntu VM.  

*************************************

Before proceeding with the below steps, you will want to resize the PuTTY window so that it fits on your display.  If you do not, your cursor will eventually go off the screen.

*************************************

 

  1. At the command line, type kind create cluster --config 3node.yaml --name=hol
  2. After the previous command returns your cursor (this will take a few minutes), run the following
    export KUBECONFIG="$(kind get kubeconfig-path --name="hol")"
    in order to load the KUBECONFIG file into your current session
  3. Run the command:
    kubectl get nodes
    and observe that you now have a three-node Kubernetes cluster running: one node has the role of master, and two nodes have a role of <none>; these are the worker nodes

 

Attach local Kubernetes cluster to Tanzu Mission Control


In this step you will attach the KIND cluster you just created to TMC.


 

Switch back to Google Chrome to access the VMware Cloud Services Console

 

  1. Click the Chrome Icon on the Task Bar

 

 

Initiate Cluster Attach

 

You'll notice that this is a new instance of TMC and there should not be any clusters attached at this point.

  1. Click Clusters on the left hand side of the page
  2. Click ATTACH CLUSTER

 

 

Register Cluster

********************

IF YOU ARE HAVING ISSUES WITH THE CONTRAST IN SOME OF THE DROP DOWN MENUS YOU CAN OPTIONALLY SWITCH OVER TO THE LIGHT UI THEME BY CLICKING THE "LIGHT" BUTTON IN THE BOTTOM LEFT CORNER OF THE UI

*********************

 

 

  1. Select default from the Cluster group drop-down menu
  2. Type kind-cluster in the Name field
  3. Click REGISTER

 

 

Agent Installation Instruction

 

The kubectl apply command shown above is unique to your environment, and you must copy it directly from your own console.

  1. Click the "copy to clipboard" icon on the right hand side of the "run the following command in your terminal" window (alternatively, you can select all the text in the window and press Ctrl-C to copy the command)

 

 

Run the agent installation command on your cluster

 

  1. Click on the PuTTY session you have running in the task bar
  2. Paste the command that you just copied from TMC into the PuTTY session using a right click
    PLEASE NOTE YOUR COMMAND WILL BE DIFFERENT FROM THE ONE SHOWN ABOVE
  3. In order to watch the agent come up, run the command:
    watch kubectl get pods --all-namespaces
  4. Wait for all your containers to come up to a Running or Completed state, as shown below
  5. Once your screen is similar to the one below, you can exit the watch by pressing Ctrl-C on your keyboard

Please wait for your screen to look similar to the one shown below before moving on to the next step.  This may take up to 10 minutes to complete.

 

 

 

Verify your connection in TMC

 

 

  1. Click on the Chrome Icon on the task bar to switch back to your browser session
  2. Click VERIFY CONNECTION
  3. After your connection is verified and you receive the "Success. Your kubernetes cluster connection is established" message, click CONTINUE

 

 

Observe your newly attached cluster

 

  1. Observe the various component and agent/extension health check indications; these should all be green in a successful environment attach.  NOTE: If your cluster does not show up right away, you may need to hit the refresh button on your browser.
  2. Feel free to spend some time familiarizing yourself with the different types of information available within the Overview tab as well as across the variety of other tabs

 

Create a Cluster Group then Provision a New Cluster


In this section you will create a cluster group and then deploy a new cluster into the freshly created cluster group.


 

Create New Cluster Group

 

  1. Click on Cluster groups in the left hand menu
  2. Click NEW CLUSTER GROUP

 

 

Name Cluster Group

 

  1. In the Name field type: hands-on-labs
  2. Click CREATE

 

 

Observe new Cluster Group

 

  1. Observe your newly created Cluster Group.  Notice that you should now have the default cluster group as well as the hands-on-labs group you just created.  If your cluster group is not immediately visible, you may need to refresh your browser.

 

 

Provision new Cluster

 

  1. Click on Clusters
  2. Click NEW CLUSTER
  3. Click on your Provider.  This will be different from what is shown above; it will be of the format TMC-HOL-xx

Your environment has been prepared with a dedicated AWS account.  Setting up a provider account is straightforward, but is outside the scope of this lab for security reasons.

 

 

Cluster name and Cluster group

 

  1. In the Cluster Name field type: aws-cluster
  2. In the Cluster Group field select the hands-on-labs cluster group
  3. In the Region field select us-west-2; this corresponds to the AWS region you will be deploying in.  Other regions are not currently set up for deployment.
  4. Click NEXT

 

 

Cluster Type

 

  1. Leave the default settings and click NEXT

 

 

Edit cluster specification

 

  1. Click Edit

 

 

Increase number of worker nodes

 

  1. In the Number of worker nodes field type: 2
  2. Click SAVE
  3. Click CREATE

 

 

Wait for your Cluster to Deploy

This step will take approximately 10 minutes, as a live Kubernetes cluster is being deployed in AWS.  Feel free to look around the rest of the interface while you wait.

 

 

  1. Wait for your Cluster to be created
    THIS STEP WILL TAKE AROUND 10 MINUTES TO COMPLETE
  2. Once your cluster is deployed you will see a dashboard similar to the one shown.  All the Health Check Components should have a Green checkmark prior to proceeding to the next section.  You may need to refresh your browser in order for this to properly render.

 

 

Refresh Your Browser if you don't see all the Health Checks Green

 

  1. If all your Health Checks have not yet returned green as shown in the above image, refresh the webpage
  2. Hit the refresh button on Chrome

 

 

Download KUBECONFIG

Now that your cluster has been successfully deployed, you will need to download the KUBECONFIG file so that you can access the cluster using standard kubectl commands.  Tanzu Mission Control gives you an easy way to do this right through the UI.  NOTE: At this time, the download KUBECONFIG file option is only available for provisioned clusters, not attached clusters.

 

  1. Click Access the cluster

 

 

Download KUBECONFIG FILE and Tanzu CLI

 

  1. Click DOWNLOAD KUBECONFIG FILE
  2. Click the click here link to download the Tanzu Mission Control CLI
  3. Ensure that you click Keep at the bottom of the Chrome Browser once the download completes

 

This will open a new page.

  1. Click DOWNLOAD CLI
  2. Select Windows (64-bit)

 

  1. When the download is complete, click the Keep button.

Please note we have placed C:\Users\Administrator\Downloads in the Windows PATH environment variable to allow you to easily run the TMC CLI.

This is not best practice, but is done to simplify the workflow in this lab environment.  In practice, you should place the CLI binary in an appropriate location and then add that location to your PATH variable if required.

 

 

Open PowerShell and get namespaces

 

 

 

  1. Open PowerShell by clicking the PowerShell Icon on the Taskbar
  2. Type the command in the PowerShell Window:
    kubectl --kubeconfig=C:\Users\Administrator\Downloads\kubeconfig-aws-cluster.yml get namespaces
  3. After you run the above command, you will be redirected to the browser for authentication; because you are already logged in to TMC, your workflow should complete as shown
  4. Click the PowerShell Icon on the taskbar to jump back into your PowerShell session; you should see the namespaces as shown above

 

 

Verify your Kubernetes Cluster is ready

 

  1. Run the command:
    kubectl --kubeconfig=C:\Users\Administrator\Downloads\kubeconfig-aws-cluster.yml get nodes
  2. Verify your nodes are in a Ready state and observe the other available information
    If all your nodes are not yet Ready, you may need to run the command again to verify they all come up correctly.

 

 

Warning

**********************************************

Occasionally after a cluster is created and comes up cleanly it will go into a Disconnected state after about 5 minutes.  This will automatically resolve itself within 5-15 minutes.  This is a known bug that is being worked on currently. We ask that you bear with us if this happens to you.  You can still proceed to do any of the policy steps inside of the TMC interface during this time.  Alternatively you can deploy a second cluster if you'd like, however this will take some time.

*********************************************

 

Deploy a Sample Application


In this section we will deploy a small sample application to the cluster that you just provisioned in AWS.

The following procedure creates two deployments (coffee and tea) exposed as services (coffee-svc and tea-svc), each running a different image (nginxdemos/hello and nginxdemos/hello:plain-text).
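As a rough illustration of what such a deployment file contains, one deployment/service pair might look like the sketch below, shown in the current apps/v1 form; the actual cafe-services.yaml in the lab may differ in details such as replica counts and labels:

```yaml
# Illustrative sketch of one of the two pairs (coffee); the tea pair
# is analogous but uses the nginxdemos/hello:plain-text image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/hello
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  selector:
    app: coffee
  ports:
  - port: 80
    targetPort: 80
```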


 

Observe the cafe-services.yaml file

 

  1. Change directory to the Downloads:
    cd C:\Users\Administrator\Downloads
  2. View the contents of the Downloads directory using the command:
    ls
  3. Run the command:
    cat .\cafe-services.yaml
    Observe the output; this is a Kubernetes deployment file

 

 

Update the cafe-service.yaml File

************************************************

Due to some of the API changes in Kubernetes 1.16, the YAML file loaded into the HOL environment no longer works correctly and needs to be updated to utilize the most current (non-deprecated) API constructs.  We will do that in the following step.

 

Specifically, the extensions/v1beta1 API group is no longer supported.  For more information refer to this link: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

*************************************************

 

  1. Run the command
    kubectl convert -f .\cafe-services.yaml > cafe-services-convert.yaml
  2. Run the command
    mv .\cafe-services-convert.yaml .\cafe-services.yaml -Force

 

 

Deploy the Cafe-Services to your AWS Cluster

 

Pod Security Policies (PSP) are enabled by default. To run apps with privileges, you need to create a role binding that allows the service account to use a PSP.

  1. Run the command: kubectl --kubeconfig=kubeconfig-aws-cluster.yml create clusterrolebinding privileged-cluster-role-binding --clusterrole=vmware-system-tmc-psp-privileged --group=system:authenticated to create a role binding for the service account.
  2. Run the command: kubectl --kubeconfig=kubeconfig-aws-cluster.yml apply -f cafe-services.yaml to deploy the application to your cluster
  3. Verify your deployment has come up with the command: kubectl --kubeconfig=kubeconfig-aws-cluster.yml get pods
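For reference, the imperative create clusterrolebinding command in step 1 is equivalent to applying a declarative manifest like the following sketch of the same binding:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: privileged-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vmware-system-tmc-psp-privileged   # privileged PSP role provisioned by TMC
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated               # grants the PSP to all authenticated users
```

In production you would typically bind the PSP to a narrower group or a specific service account rather than system:authenticated.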

 

Conclusion


This concludes "Module 2 - Cluster Life Cycle Management"

You should now have an understanding of how Tanzu Mission Control can be used to deploy, manage, and operate Kubernetes clusters across your environment.

 

Congratulations on completing Module 2.

If you are looking for additional information on VMware Tanzu Mission Control, try one of these links:


 

To continue with this lab, proceed to any module below which interests you most regarding VMware Tanzu Mission Control:

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 3 - Working with Policies (30 minutes)

Introduction


To get full value out of this module, you should have completed Module 2 - Cluster Life Cycle Management before starting this one.  If you have not completed Module 2, you will not have any clusters deployed to set policies on.


 

Organizing Clusters and Namespaces

Through the Tanzu Mission Control console you can organize and view your Kubernetes resources in two different ways, enabling operations administrators to maintain control over clusters while allowing application teams self-serve access to namespaces.

Infrastructure View

Application View

By combining your resources into groups, you can simplify management by applying policies at the group level. For example, you can apply an access policy to an entire cluster group rather than creating separate policies for each individual cluster.

 

 

Organizing Users and Controlling Access

Use secure-by-default policies to implement role-based access control (RBAC) for the resources in your organization.

You use VMware Cloud Services tools to invite users to your organization and organize them into user groups. You can set up federation with your corporate domain, which allows you to use your organization's single sign-on and identity source. By combining your users into groups, you can simplify access control by creating access policies that bind roles to groups rather than individuals.

As a best practice, add users through the group to which they initially belong. Use the Groups tab under Identity and Access Management, rather than the Active Users tab. In this way, the new user is added to a group to which you have already assigned roles through an access policy. If you use the Active Users tab, the new user is added to the organization and service, but because they are not yet added to a group, they will likely have only minimal access to the service until you add them to a group. 

To edit the access policy for an object, you must have iam.edit permissions on that object.

To apply the permissions in a given role to a member, you create a role binding in the object's access policy that associates members and groups with a role. For example, to give the members of group1 edit permission on example-cluster, you create a role binding in the policy for example-cluster that associates group1 with the cluster.edit role.

In addition to the direct policy defined for a given object, each object also has inherited policies that are defined in the parent objects. For more information about policy inheritance, see Policy Inheritance below.

The following table shows the roles you can use in access policies at each level of the hierarchy.

 

Policy Inheritance

In the Tanzu Mission Control resource hierarchy, there are three levels at which you can specify policies. 

In addition to the direct policy defined for a given object, each object also has inherited policies that are defined in the parent objects. For example, a cluster has a direct policy and also has inherited policies from the cluster group and organization to which it is attached.

 

Workspaces and Namespaces


In this section you will cover Workspaces and Namespaces.  You will create a new Workspace within the environment; later in this module, you will give a Developer user access to it.

AGAIN PLEASE ENSURE YOU HAVE COMPLETED MODULE 2 BEFORE CONTINUING ON


 

Login to Tanzu Mission Control

 

  1. Click Tanzu Mission Control (you might already be logged in if you just completed the previous module)
    NOTE: You should still be logged in as the Platform Admin user (this will be of the format myvmware3xx@vmware-hol.com)

 

 

Observe the existing Namespaces in the Environment

 

Notice that none of the Namespaces in the environment are currently being managed by TMC; however, they have been made visible to you.  You have the ability to create Namespaces in both provisioned and attached clusters.

  1. Click Namespaces
  2. Observe that none of the Namespaces are being managed by Tanzu Mission Control
  3. Some of the Namespaces are relevant to the control plane operation of TMC Kubernetes clusters; these are represented by the labels as shown.  Labels are a very good way to enable policy-driven management of your environment.

 

 

Workspaces

 

Workspaces allow you to take an application- or workload-centric view of your environments, organizing your managed namespaces into logical groups across clusters, perhaps to align with development projects. In an attached cluster, you can have both managed and unmanaged namespaces. When you create a namespace in an attached or provisioned cluster, you specify the workspace to which the namespace belongs.

  1. Click Workspaces
  2. Observe (DO NOT CLICK) that there is just the single default Workspace available currently
  3. Click NEW WORKSPACE 

 

 

New Workspace

 

  1. Type:  hol-analytics in the Name input area
  2. Click CREATE

 

 

View the newly created Workspace

 

  1. Click on hol-analytics

 

 

Observe the Actions available to you within a Workspace

 

  1. Click ACTIONS
  2. Observe the actions you can perform on your Workspace

We will be using this hol-analytics Workspace in the next section to apply access policies

 

Applying Access Policies


In this section we will be exploring the policies that have already been applied to the environment and then setting up access for a new user.


 

Observe the Kubernetes Clusters in your Environment

 

  1. Click Clusters
  2. Observe the clusters that are present in your environment

If you do not have two clusters present in your environment, please return to Module 2 to provision and attach them, as they will be required to fully demonstrate the access policies in this section

 

 

Cluster groups

 

  1. Click Cluster Groups
  2. Observe that you should have two cluster groups present

If you do not have two cluster groups present in your environment, please return to Module 2 to provision and attach the clusters, as they will be required to fully demonstrate the access policies in this section

 

 

Access Policies

 

  1. Click Policies; by default you will be brought to the Cluster Access Policies view
  2. Observe that you do not have permissions to view Policies at the Organizational level
  3. Expand the TMC-HOL-xx Organization, where xx will be different in each lab
  4. Expand the default cluster group
  5. Expand the hands-on-labs cluster group
  6. Click LIGHT; this will change the color theme of the UI, and the increased contrast provided by the light theme makes the next few steps easier to view

 

 

Cluster Group Access Policies - clustergroup.view

You will need your Developer user ID and password that were given to you during the check-in process; please have them ready for the remainder of this module.

You will be granting your developer user clustergroup.view access in this section to allow for viewing of clusters.

 

  1. Click on the hands-on-labs Cluster Group
  2. Expand the Inherited Policies by clicking the arrow next to the TMC-HOL-xx organization
  3. Observe all the policies that were granted at an Organizational level and have been inherited by the cluster group
  4. Click the Dropdown arrow in the Direct access policies area
  5. Click NEW ROLE BINDING

 

 

Add a Direct Access Policy

 

 

FOR THIS SECTION YOU WILL NEED TO RECALL THE DEVELOPER USER CREDENTIALS THAT WERE GIVEN TO YOU AS PART OF THE CHECK-IN PROCESS.

This will be of the format myvmware4xx@vmware-hol.com

  1. Click the Role field
  2. Click the clustergroup.view: Read access to cluster groups and clusters role
  3. Type in your development user's email address from the check-in process (should be of the format myvmware4xx@vmware-hol.com)
  4. Click ADD
  5. Click SAVE

 

 

Cluster Group Access Policies - cluster.admin

You will be granting your Developer User cluster.admin access to all the clusters inside of the hands-on-labs cluster group.  This will give full administrative privileges within the Kubernetes clusters.

 

 

 

 

  1. Click NEW ROLE BINDING
  2. Click on the Role field
  3. Click cluster.admin: Root access to clusters - including policies in the dropdown for Role selection
  4. Type in your development user's email address from the check-in process (should be of the format myvmware4xx@vmware-hol.com)
  5. Click ADD
  6. Click SAVE
  7. Confirm that your Direct access policies are set up as indicated

 

 

Workspace Access Policies

In this section you will be giving your developer access via a workspace.edit policy.  The workspace and namespace policies are a more application-centric way to give a user access to your Kubernetes infrastructure.  Contrast this with the cluster and cluster group policies, which are a more infrastructure-centric way to define policies.  Both approaches are important when operating Kubernetes at scale.

 

  1. Click WORKSPACES in the Policies tab
  2. Expand your Organization (will be of the format TMC-HOL-xx)
  3. Click on the hol-analytics Workspace
  4. Expand the hol-analytics Direct access policies
  5. Click NEW ROLE BINDING

 

 

Workspace Access Policies - workspace.edit

 

 

 

  1. Click the dropdown arrow in the Role field
  2. Click the workspace.edit: Read/write access to workspaces - excluding policies role
  3. Type in your development user's email address from the check-in process (should be of the format myvmware4xx@vmware-hol.com)
  4. Click ADD
  5. Click SAVE
  6. Confirm that your Direct access policies are set up as indicated

 You have now successfully set up a policy framework that will allow you to on-board a new user with a restricted set of capabilities.  It is now time to log out of your current user and log in as the restricted user to demonstrate some of the policies in action.

 

 

Sign Out of your Current VMware Cloud Services Portal Session

 

  1. Click your Username in the top right of the UI
  2. Click SIGN OUT

 

 

Login as the Development user

*****************

DO NOT SKIP THIS STEP. YOU NEED TO LOG IN via the STUDENT CHECK-IN OR THE NEXT SECTION WILL NOT WORK CORRECTLY

****************

 

 

 

 

  1. Click on the Student Check-in bookmark
  2. Type in the email address that you use to log in to the HOL platform (e.g. yourname@yourcompany.com)
  3. Click Search
  4. Click on the link under Developer Login  (DO NOT JUST COPY PASTE, YOU NEED TO CLICK THE LINK)
  5. Click Sign in using another account
  6. Your account should be auto-populated for you; press TAB to continue to the next screen
  7. Enter your password (should be VMware1!)
  8. Click SIGN IN

 

 

Log in to Tanzu Mission Control (Developer Persona)

 

  1. On the Cloud Services Portal Page, click Tanzu Mission Control

 

 

Check the Access Policies

 

 

  1. Click Policies
  2. Expand the TMC-HOL-xx Organization
  3. Click on the hands-on-labs cluster group
  4. Observe that you do not have permission to view the Policies while logged in as your Developer ID
  5. Expand the hands-on-labs cluster group
  6. Click on your aws-cluster
    Notice that you now have the ability to observe the policies at the cluster level, as the development user has been granted the cluster.admin role by inheritance from the hands-on-labs cluster group

Feel free to explore the Policies section further to see what type of visibility this user has.

 

 

Workspaces

 

  1. Click Workspaces
  2. Click your hol-analytics workspace

 

 

Create a new Namespace

 

 

  1. Click NEW NAMESPACE
  2. Verify your cluster is set to: aws-cluster and your Workspace is set to hol-analytics
  3. In the Name field type: development
  4. Click Create

 

 

Observe new Namespace

 

  1. Click on Namespaces
  2. Hit Refresh on your browser
  3. Observe the new development namespace that you just created; also take note that it is a managed namespace in the hol-analytics Workspace.  Have a look at the Labels that have been attached to this namespace.

 

 

Launch PowerShell

 

  1. Launch PowerShell by clicking the icon on the taskbar (NOTE: you may already have a PowerShell session open, you can continue to use the open one)

 

 

Force Re-authentication as Developer User

Your authentication to TMC has been saved to a local file and will not yet have timed out; as such, we will delete the file to force a re-authentication.  You will then authenticate as the Development user rather than the Platform Operator.

 

  1. Change to the Administrator's home directory using cd c:\Users\Administrator
  2. To ensure there are no other Kubernetes configs stored, remove the .kube directory by running the following command:
    rm -r .\.kube
  3. To remove any stored authentication to TMC, run the following command; this will force you to authenticate to TMC again:
    rm -r .\.vmware-cna-saas

 

 

Verify the Namespace you just created is present on the AWS Cluster

 

 

 

  1. Change to the Downloads directory using the command:
    cd .\Downloads
  2. Get a list of the current namespaces in your AWS cluster using the following command:
    kubectl --kubeconfig="kubeconfig-aws-cluster.yml" get namespaces
  3. The previous command will kick off a browser-based authentication workflow; you will be authenticated using your currently logged-in user (i.e. your Development user)
  4. Jump back to your PowerShell session by clicking the icon on the taskbar
  5. Observe your available namespaces in your aws-cluster

 

 

Deploy a simple application into the development namespace

 

  1. Move the KUBECONFIG file to the .kube directory so kubectl accesses the AWS cluster by default.  Issue the following command:
    mv '.\kubeconfig-aws-cluster.yml' "C:\Users\Administrator\.kube\config"
  2. Check if there are any running pods in the development namespace by running:
    kubectl --namespace development get pods
  3. Deploy a simple Cafe Service application (cafe-services.yaml was created in Module 2) into the new Namespace by running:
    kubectl --namespace development apply -f cafe-services.yaml
  4. Verify the pods have successfully come up using this command:
    kubectl --namespace development get pods

 

 

Observe all the pods you have running now

 

  1. Run the command:
    kubectl get pods --all-namespaces
    Observe all the pods that have now been deployed in your environment.

You have just successfully deployed a simple application into the new Namespace that you created using your Developer access persona.

 

Revoking Access


To remove access from the Developer user, you will need to log back in as the Platform Operator to change some of the access policies.  To do this we will make use of an incognito Chrome browser window.

 

  1. RIGHT CLICK on the Chrome session on the task bar
  2. Click on New Incognito Window

 

Login as the Platform Operator (Incognito Session)

 

 

 

  1. Click the VMware Cloud Services bookmark
  2. Enter your Platform Operator credentials that you got from the check-in process (should be of the format myvmware3xx@vmware-hol.com)
  3. Click NEXT
  4. Enter your password (should be VMware1!)
  5. Click SIGN IN
  6. Launch TMC by clicking Tanzu Mission Control

 

 

Remove Cluster Access Policies - clustergroup.view

 

 

  1. Click on Policies
  2. Expand your Organization
  3. Click on the Cluster Group hands-on-labs
  4. Expand your hands-on-labs Direct Access Policies
  5. Click Delete for the clustergroup.view binding
  6. Click YES, DELETE ROLE BINDING

 

 

Remove Cluster Access Policies - cluster.admin

 

 

  1. Expand the TMC-HOL-xx Organization using the arrow beside it
  2. Click on the hands-on-labs cluster group
  3. Click DELETE to remove the cluster.admin policy
  4. Click YES, DELETE ROLE BINDING

 

 

Leave Workspace Access Policies as is

Take note that we did not remove the Workspace-level access policy from the Development user.  As such, they will still have access to the Workspace and any managed Namespaces inside of it.  In our lab, that means the Development user will still have access to the hol-analytics Workspace and the development namespace that has been provisioned on the aws-cluster.

 

  1. Click WORKSPACES
  2. Click hol-analytics

 

 

Checking Access Restrictions

 

 

Now let's see what this translates to when using the kubectl command

  1. Jump back into your PowerShell session by clicking the icon on the taskbar
  2. Run the command: kubectl --namespace development get pods
  3. Notice that you still have access to the development namespace
  4. Run the command: kubectl get pods
  5. Notice that you no longer have access to the default namespace
  6. Run the command: kubectl get namespaces
  7. Again notice you have been blocked from seeing objects outside of your access policy
  8. Run the command: kubectl get namespace development
    Once you limit the scope to within your access policy, the command will run

 

Conclusion


This concludes "Module 3 - Working with Policies"

You should now have a good understanding of how Tanzu Mission Control can be used to govern Access to your Kubernetes environments.

 

Congratulations on completing Module 3.

If you are looking for additional information on VMware Tanzu Mission Control, try one of these links:


 

To continue with this lab, proceed to any module below which interests you most regarding VMware Tanzu Mission Control:

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 4 - Conformance Testing (30 minutes)

Student Check-In


You may skip this step if you have already completed the login in Module 2; in that case, jump to the "Login and Create Cluster" section.

This article will provide guidance on how to gain access to Tanzu Mission Control Cloud Services. You will locate the Student Check-In page, search for your email address and then be provided My VMware accounts for the VMware Cloud Services login.


 

Open Student Check-In Web Page

 

Open Chrome Browser

 

  1. On top of browser click Student Check-In
  2. This will navigate to https://checkin.hol.vmware.com

 

 

Search and Validate

 

  1. Enter the email address you used to login and start the lab
  2. Click Search

Two My VMware login accounts will be provided: Platform Admin and Developer User

  3. Click the link under the Platform Admin Login (e.g. myvmware301@vmware-hol.com / VMware1!)

You will be redirected to the VMware Cloud Services login page.

 

 

VMware Cloud Services Sign-In

 

Click NEXT or press the TAB key to continue

 

  1. Enter password: VMware1!
  2. Click SIGN IN

 

1. Click on the Tanzu Mission Control Tile

Congratulations!

You are logged into the Cloud Services Console with an Organization assigned (e.g. TMC-HOL-01), and the Tanzu Mission Control services are available for use while this lab is active.

 

Login and Create Cluster


In this section you will deploy a new cluster; in the subsequent sections you will run an Inspection on this cluster and explore the results.


 

Ensure that you are logged in as the Platform Admin

 

  1. For this next section you will need to ensure you are logged in as the Platform Admin (your user ID should be of the format myvmware3xx@vmware-hol.com)

If continuing from the previous module this will be your Incognito Chrome session.

 

 

Provision new Cluster

**********************************************

PLEASE NOTE THAT IF YOU ARE RUNNING THROUGH THIS LAB START TO FINISH, THE NEXT STEPS WILL BE REDUNDANT.  If you have already created a cluster in AWS (this is done in Module 2), then you can skip this section and move on to the next one by hitting the link below:

RUN AN INSPECTION

If you have not created an AWS cluster yet please continue with the following steps.

********************************************

 

  1. Click on Clusters
  2. Click NEW CLUSTER
  3. Click on your Provider.  This will be different from what is shown above; it will be of the format TMC-HOL-xx

Your environment has been pre-paired with a dedicated AWS account.  Setting up a provider account is straightforward but is outside the scope of this lab for security reasons.

 

 

Cluster name and Cluster Group

 

  1. In the Cluster Name field type: aws-cluster
  2. In the Cluster Group field select the default cluster group
  3. In the Region field select us-west-2; this corresponds to the AWS region you will be deploying in.  Other regions are not currently setup for deployment.
  4. Click NEXT

 

 

Cluster Type

 

  1. Leave the default settings and click NEXT

 

 

Edit cluster specification

 

  1. Click Edit

 

 

Increase number of worker nodes

 

  1. In the Number of worker nodes field type: 2
  2. Click SAVE
  3. Click CREATE

 

 

Wait for your Cluster to Deploy

This step will take approximately 10 minutes, as a live Kubernetes cluster is being deployed in AWS.  Feel free to look around the rest of the interface while you wait.

 

 

  1. Wait for your Cluster to be created
    THIS STEP WILL TAKE AROUND 10 MINUTES TO COMPLETE
  2. Once your cluster is deployed you will see a dashboard similar to the one shown.  All the Health Check Components should have a Green checkmark prior to proceeding to the next section.

 

 

Refresh Your Browser if you don't see all the Health Checks Green

 

  1. If all your Health Checks have not yet returned green as shown in the above image, refresh the webpage
  2. Hit the refresh button on Chrome

 

Run an Inspection


In this section we will be running a Lite conformance test on the AWS cluster in your environment.  A full conformance test will take several hours to run and is currently outside of the scope of this lab primarily due to the time required to run the test.

 

  1. Click on Inspections in the left-hand menu
  2. Click NEW INSPECTION

 

New Inspection

 

  1. In the Cluster field select your aws-cluster
  2. Select a Lite inspection
  3. Click RUN INSPECTION

 

 

Wait for inspection to Complete Successfully

 

 

  1. Monitor the Progress of your Inspection
  2. Periodically hit Refresh on the browser until you see a Successful result; this should not take more than a few minutes
  3. Once your report is returned click on Success to view the report

 

 

Review the Inspection results

 

  1. Take note of the items that were tested for and the results

You'll notice the Lite Inspection does not provide very much detail.  A full conformance test consists of 180+ different tests and is intended to check your cluster against the latest Certified Kubernetes requirements published through the Cloud Native Computing Foundation (CNCF).

More details can be found here: https://www.cncf.io/certification/software-conformance/

 

Conclusion


This concludes "Module 4 - Conformance Testing".

In this module you saw how to use Tanzu Mission Control to perform conformance tests in accordance with the Cloud Native Computing Foundation (CNCF).


 

You have finished Module 4 AND have come to the end of this Hands-on-Lab

Congratulations on completing Module 4 and concluding the lab.

As you have seen, Tanzu Mission Control gives extensive control over your Kubernetes environment and greatly simplifies access and policy enforcement. This includes not just provisioned clusters - but any cluster. Clusters can be quickly provisioned or attached, and teams given access so they can continue to be productive. Visibility into cluster health, conformance, and workloads helps operators address issues quickly, before they impact users or applications. Imagine the power of TMC to enforce consistency across thousands of retail stores, or shared Kubernetes clusters being accessed by tens of thousands of developers.

If you are looking for additional information on Sonobuoy, try one of these:

If you are looking for additional information on VMware Tanzu Mission Control, try one of these links:

 

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-2032-01-CNA

Version: 20200309-022134