VMware Hands-on Labs - HOL-2032-01-CNA


Lab Overview - HOL-2032-01-CNA - Tanzu Mission Control

Lab Guidance


The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Tanzu Mission Control (TMC) provides a single control point for teams to more easily manage Kubernetes and operate modern, containerized applications across multiple clouds and clusters. Tanzu Mission Control codifies the know-how of operating Kubernetes - including deploying and upgrading clusters, setting policies and configurations, understanding the health of clusters and the root cause of underlying issues, and creating a map from the “people structure” to the infrastructure.

As more and more teams adopt containers and Kubernetes, these environments become harder to manage. Development teams need independence to run and operate their applications across multiple clusters and clouds. IT teams need to maintain visibility and control at an organization-wide level.

Tanzu Mission Control supports both development and IT teams in three key ways:  

Lab Module List:

 Lab Captains: 

 

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the Right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved.  All your work must be done during the lab session.  But you can click EXTEND to increase your time.  If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes.  Each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes.  Each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides typing it in directly, there are two very helpful methods that make it easier to enter complex data.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Click once in active console window

 

In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.

  1. Click once in the active console window.
  2. Click on the Shift key.

 

 

Click on the @ key

 

  1. Click on the "@ key".

Notice the @ sign entered in the active console window.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs utilize this benefit, allowing us to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Module 1 - Introduction to Tanzu Mission Control (15 minutes)

Introduction


In Module 1, you will learn:


 

Introduction to VMware Tanzu

 

 

In Swahili, ’tanzu’ means the growing branch of a tree. In Japanese, ’tansu’ refers to a modular form of cabinetry. At VMware, Tanzu represents our growing portfolio of solutions to help you build, run and manage modern apps.

The VMware Tanzu portfolio enables customers to build modern apps on Kubernetes and manage all of their clusters from a single control point.  Tanzu allows you to build applications with velocity, run open source Kubernetes with consistency, and manage your entire footprint with confidence.

Tanzu Capabilities Include

 

 

Challenge: Managing disparate Kubernetes clusters

 

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

Kubernetes is a great starting point in a modern IaaS strategy for any enterprise. When we started the project, we saw Kubernetes as offering a ‘Goldilocks’ abstraction—something that is low enough level that you can run pretty much any application, but high enough level that it hides the specifics of the infrastructure environment. Over the past few years, Kubernetes has emerged as a standard for distributed infrastructure, and for the first time ever we have a common, open source abstraction that spans the private cloud, public cloud, and edge-based computing environments.

Kubernetes is also something new, and perhaps the greatest value it represents is an opportunity for IT organizations to move from a world of ‘ticket driven infrastructure’ to ‘modern API driven dynamic infrastructure’, better connecting operators and developers.

While this modern API-driven infrastructure provides a rich and capable environment for cloud native applications that are being built by modern enterprises, it introduces a lot of new ‘moving parts’ and day 2 operating issues that IT teams haven’t had to tackle before. How do you create and enforce security policy in a highly fluid environment? How do you make sure that your identity and access control systems are configured? How do you make certain that everything stays properly configured?

These challenges are hard enough to get right in a single Kubernetes cluster, but we don’t live in a world of single Kubernetes clusters:

Taking all this into consideration, there are still significant challenges to running Kubernetes at scale, including:

So as IT, how do you manage this fragmentation in a way that doesn’t impose limits on what your development teams can do? How can you enable them to move faster by delivering Kubernetes as a service in a way that is empowering, not limiting? 

 

 

VMware Tanzu Mission Control

 

Enter VMware Tanzu Mission Control 

Tanzu Mission Control provides a single control point for teams to more easily manage Kubernetes and operate modern, containerized applications across multiple clouds and clusters. Tanzu Mission Control codifies the know-how of operating Kubernetes—including deploying and upgrading clusters, setting policies and configurations, understanding the health of clusters and the root cause of underlying issues, and creating a map from the “people structure” to the infrastructure.

With VMware Tanzu Mission Control, we are providing customers with a powerful, API driven platform that allows operators to apply policy to individual clusters or groups of clusters, establishing guardrails and freeing developers to work within those boundaries.

A SaaS-based control plane securely integrates with a Kubernetes cluster through an agent and supports a wide array of operations on the cluster, including lifecycle management (deploy, upgrade, scale, delete) of cloud-based clusters via Cluster API.

A core principle of the VMware Tanzu portfolio is to make best use of open source software.  VMware Tanzu Mission Control leverages Cluster API for lifecycle management, Velero for backup/recovery, Sonobuoy for cluster inspection and conformance, and Contour for ingress control.

A year ago, we launched the public beta of Cloud PKS, a fully managed Kubernetes as a service offering. That has put us on the front lines of supporting Kubernetes clusters at scale 7×24 and has taught us a great deal. The lessons learned in running Cloud PKS and the Cloud PKS capabilities which customers enjoy are now found in VMware Tanzu Mission Control.

As more and more teams adopt containers and Kubernetes, these environments become harder to manage. Development teams need independence to run and operate their applications across multiple clusters and clouds. IT teams need to maintain visibility and control at an organization-wide level.

Tanzu Mission Control supports both development and IT teams in three key ways:  

Independence for Developers

We believe that developers need modern API driven infrastructure to do their job. A big part of the success of cloud has been delivering a set of useful services at the other end of an API call. This has enabled teams that adopt a single cloud to move from a world managed by tickets into an API driven, self-service universe.

VMware Tanzu will bring an API driven model to the world of developers building across multiple clouds. It all starts with the provisioning of Kubernetes clusters. We have created a simple, cloud friendly cluster lifecycle management model that offers an ‘easy button’ for the creation and management of Kubernetes clusters. This will feel a little like one of the managed Kubernetes offerings that the cloud providers deliver (AKS, EKS, GKE), but the advantage is that the Kubernetes cluster is fully provisioned into a developer environment and is fully accessible to the developer. This will offer levels of customizability and control that are difficult to accomplish with a cloud provider offering.

 Consistency for Operators

In working with customers, we identified an opportunity to put more control at the fingertips of the platform operator or SRE managing a Kubernetes footprint. VMware Tanzu Mission Control’s SaaS control plane will securely connect with, manage, and operate a potentially large number of clusters across environments.

You will be able to attach any conformant Kubernetes cluster to VMware Tanzu Mission Control, including clusters running on vSphere, public clouds, managed services, or DIY implementations. That’s a degree of neutrality that’s not possible from a cloud service provider, and an openness that is fairly unique to VMware.

Once you have all your Kubernetes clusters managed from one place, you will be able to take many meaningful actions on individual clusters or across fleets of clusters, including (at launch):

These undertakings become exponentially more challenging with a growing number of clusters, multi-cloud deployments, and an accelerating number of developers and teams.

 

 

Tanzu Mission Control Concepts


 

Cluster lifecycle management

The ability to drive Cluster API-based Kubernetes services to centralize cluster lifecycle management across all environments. With our model, you don't share a control plane: you own the control plane in your public cloud environment. (This is different from the approach that managed services like AKS, EKS, and GKE take.)
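
For reference, Cluster API represents clusters declaratively as Kubernetes objects. The sketch below is a minimal, illustrative Cluster definition only; the exact apiVersion and fields vary by Cluster API release, and this is not the object Tanzu Mission Control itself generates.

apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: example-cluster          # illustrative name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:             # points at a provider-specific resource
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: example-cluster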

Identity and access management (IAM)

Integrate VMware Tanzu Mission Control with your existing enterprise IAM solution (like Active Directory or SAML) to easily give developers self-service access to the right resources without changing their workflows.

Security and configuration management

Create and apply critical security and access policies across multiple clusters and clouds with one click, and automate inspection tests to ensure new and upgraded clusters are always conformant.

Observability and diagnostics

Streamline troubleshooting with centralized visibility into the health of fleets of clusters and applications running across multiple clouds.

Data protection and migration (coming soon)

Using Velero, an open source project, create and enforce backup and recovery policies for clusters and namespaces, to support data recovery and migration.
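
For context, Velero itself is driven by a simple CLI when used directly; the backup name and namespace below are illustrative, and Tanzu Mission Control drives the equivalent operations for you through policies rather than requiring these commands.

# Back up everything in one namespace (illustrative names)
velero backup create analytics-backup --include-namespaces development

# List backups, and later restore from one if needed
velero backup get
velero restore create --from-backup analytics-backup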

Resource and cost management (coming soon)

Enforce quota policies so that development teams can independently create new clusters without exceeding their allotted capacity.

Audit and compliance (coming soon)

Create and enforce custom compliance policies across clusters and teams, and export logs to meet audit requirements.

Connectivity and traffic management (coming soon)

Enable secure, delegated ingress for multi-team clusters with Contour, an open source ingress controller.

Use VMware Tanzu Mission Control together with NSX Service Mesh, which extends Istio to provide microservices visibility, isolation, tracing, and routing.
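
For reference, Contour itself is configured with its own HTTPProxy resource when used directly; the hostname, namespace, and service names below are illustrative, and this is not an object Tanzu Mission Control creates for you.

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: cafe                  # illustrative name
  namespace: development      # illustrative namespace
spec:
  virtualhost:
    fqdn: cafe.example.com    # hostname this proxy serves
  routes:
    - conditions:
        - prefix: /coffee
      services:
        - name: coffee-svc    # backing Kubernetes Service
          port: 80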

 


 

Tanzu Mission Control Architecture

 

This is the overall architecture. Tanzu Mission Control is a multi-tenant platform where each customer has access to a Policy Framework that can be applied to a Resource Hierarchy (logical components that group clusters and namespaces within clusters). Each customer has an Organization, which is the root of the resource hierarchy. The resource hierarchy applies to clusters that are provisioned and managed by Tanzu Mission Control as well as clusters that are attached.

Each customer will have access to:

 

 

Resource Hierarchy

 

Each customer gets mapped to an Organization

Multiple Cluster Groups

Multiple Workspaces

Cascading Resource Hierarchy

 

 

Policy Model

 

Policies cascade down the hierarchy tree.

Policies applied at Org and Cluster Group level traverse multiple Clusters globally.

Policies applied to Workspaces traverse multiple Namespaces across many Clusters.

Direct Policy: Applied to a node in the hierarchy tree.

Inherited Policy: Inherited from the node above in the tree.

A Cluster or Managed Namespace always has exactly one parent node in the tree.

A node is a worker machine in Kubernetes. A node may be a VM or physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet and kube-proxy.

 

 

Unified Identity Across Kubernetes (K8s) Clusters

 

Tanzu Mission Control derives identity from cloud.vmware.com

Users/Groups imported from the identity provider are mapped to Tanzu Mission Control roles (Org Admin, Users)

Tanzu Mission Control roles translate to Kubernetes Cluster Role Bindings

 

 

 

Tanzu Cluster Lifecycle Management

 

Multi-Cloud

Community Alignment

Operational Control

Flexible networking options

Support for any workload

 

 

Accessing VMware Tanzu Mission Control

UI

Declarative APIs

CLI

Technology integrations (coming soon)

 

 

Cluster Management Capabilities

Cluster Management Capabilities

Capabilities include the following:

 

The Tanzu Namespace in the above diagram refers to the vmware-system-tmc namespace that you will see in practice within Kubernetes clusters being managed by or attached to Tanzu Mission Control.

Attached and Provisioned Clusters

Using Tanzu Mission Control, you can attach existing Kubernetes clusters from various cloud providers, organize them into logical groups, observe their health and resource usage, and manage their security posture and configuration.

You can also provision clusters directly through Tanzu Mission Control into your own cloud provider account using Cluster API, leveraging the built-in cluster lifecycle management best practices.

Managed and Unmanaged Namespaces

In both attached and provisioned clusters, you can create namespaces that you can manage through Tanzu Mission Control using policies. Your clusters can also have unmanaged namespaces that were created externally and don't need to be managed through Tanzu Mission Control. 

 

 

Cluster inspections via Sonobuoy

 

With such a wide array of Kubernetes distributions available, conformance tests help ensure that a Kubernetes cluster meets the minimal required set of features.

A conformance-passing cluster provides the following guarantees:

Individual Kubernetes distributions may offer additional features beyond conformance testing, but these features are not guaranteed to be available if you change distributions.

NOTE: Kubernetes documentation also describes the concept of "node conformance tests". Although they are useful, these tests are more component-focused than system-wide. They validate only the behavior of a specific node, not cluster behavior as a whole.

Tanzu Mission Control extends open source Sonobuoy
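
Tanzu Mission Control runs these inspections for you, but for reference the upstream Sonobuoy CLI can be pointed at any cluster to run the same style of conformance checks (it acts on whatever cluster your current kubeconfig points to):

# Launch the conformance plugins and wait for them to finish
sonobuoy run --wait

# Check progress, then download and summarize the results tarball
sonobuoy status
results=$(sonobuoy retrieve)
sonobuoy results $results

# Clean up the Sonobuoy resources when done
sonobuoy delete --wait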

 

Conclusion


This concludes "Module 1 - Introduction to VMware Tanzu Mission Control"

You should now have an understanding of VMware Tanzu Mission Control and its components.  Jump into modules 2-4 to get hands on with some of the available tools you can use to help manage your deployment!

 

Congratulations on completing Module 1.

If you are looking for additional information on VMware Tanzu Mission Control, try these links below:


 

To continue with this lab, proceed to any module below which interests you most regarding VMware Tanzu Mission Control:

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 2 - Cluster Lifecycle Management (30 minutes)

Introduction



 

Lab Environment

 

Above is an architectural diagram of the lab environment.  Hosted inside the HOL environment are two VMs.  The first is the Windows control center VM (this is the desktop you see when you access this HOL).  The second is an Ubuntu VM.

You will be connecting to live instances of Amazon Web Services (AWS) and VMware Cloud Services, specifically the Tanzu Mission Control service.

 

 

Kubernetes-in-docker (KIND)

KIND is a tool for running local Kubernetes clusters using Docker container "nodes". KIND is primarily designed for testing Kubernetes 1.11+, initially targeting the conformance tests.  In this lab we will be using an Ubuntu VM to create a local Kubernetes cluster using KIND.  This locally created cluster will then be attached to TMC in order to show how any cluster can be brought under management.

 

Student Check-In


This section provides guidance on how to gain access to the Tanzu Mission Control cloud service. You will locate the Student Check-In page, search for your email address, and then be provided with My VMware accounts for the VMware Cloud Services login.


 

Open Student Check-In Web Page

 

Open Chrome Browser

 

  1. At the top of the browser, click Student Check-In
  2. This will navigate to https://checkin.hol.vmware.com

 

 

Search and Validate

 

ENSURE THAT YOU TAKE NOTE OF BOTH LOGINS AND KEEP TRACK OF THEM SOMEWHERE!

1. Enter your email address used to login and start the lab

2. Click Search

Two My VMware login accounts will be provided: Platform Admin and Developer User; please take note of these as they will be needed throughout this lab.

3. Click the link under the Platform Admin Login (e.g. myvmware301@vmware-hol.com / VMware1!)

You will be redirected to the VMware Cloud Services login page.

 

 

Capacity Limits

 

If you searched for your email address and this response is returned, please END your lab and try again later. Each student is assigned a cloud services organization (Org). When your lab started, all of these Orgs were in use.

 

 

VMware Cloud Services Sign-In

 

  1. Type in your Platform Admin email address that you received as part of the check-in process (this will be of the format myvmware3xx@vmware-hol.com)
  2. Click NEXT or press the TAB key to continue

 

 

  1. Enter password: VMware1!
  2. Click SIGN IN
  3. Click on the Tanzu Mission Control Tile

Congratulations!

You are logged into the Cloud Services Console with an Organization assigned (e.g. TMC-HOL-01) and the Tanzu Mission Control services are available for use while this lab is active.

 

Deploy a local Kubernetes cluster


In this section you will deploy a Kubernetes cluster using KIND on a local Ubuntu VM.


 

Accessing the Ubuntu machine

In our lab we have an Ubuntu Linux box configured with all the needed tools to get through the lab lessons.  You will access this VM using PuTTY, which has already been installed and has a saved session to the Ubuntu Linux box.

 

 

  1. Click on the PuTTY icon in the Windows toolbar;
  2. Select the "Ubuntu" saved session;
  3. Click Open in order to open an SSH session to the Ubuntu VM.

The user configured for auto-login in the PuTTY VM-CLI session is holuser. We are going to use this user for the whole module while connected to the Ubuntu machine.

 

 

Create a Kubernetes cluster using KIND

Kubernetes in Docker (KIND) and all required supporting binaries, such as Go and Docker, have been pre-installed on the Ubuntu VM.

*************************************

Before proceeding with the below steps you will want to resize the PuTTY window so that it fits on your display.  If you do not, your cursor will eventually go off the screen.

*************************************

 

  1. At the command line, type kind create cluster --config 3node.yaml --name=hol (a sketch of what a configuration file like 3node.yaml typically contains is shown after these steps)
  2. After the previous command returns your cursor (this will take a few minutes), run the following
    export KUBECONFIG="$(kind get kubeconfig-path --name="hol")"
    in order to load the KUBECONFIG file into your current session
  3. Run the command:
    kubectl get nodes
    and observe that you now have a 3-node Kubernetes cluster running: one node has the role of master, and two nodes have a role of <none>, which are the worker nodes
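
A kind configuration file of this shape typically looks like the sketch below; the apiVersion depends on your kind release, and the content of the lab's actual 3node.yaml may differ slightly.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4    # older kind releases use kind.sigs.k8s.io/v1alpha3
nodes:
  - role: control-plane               # reported as the master node by kubectl get nodes
  - role: worker
  - role: worker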

 

Attach local Kubernetes cluster to Tanzu Mission Control


In this step you will attach the KIND cluster you just created to TMC.


 

Switch back to Google Chrome to access the VMware Cloud Services Console

 

  1. Click the Chrome Icon on the Task Bar

 

 

Initiate Cluster Attach

 

You'll notice that this is a new instance of TMC and there should not be any clusters attached at this point

  1. Click Clusters on the left hand side of the page
  2. Click ATTACH CLUSTER

 

 

Register Cluster

********************

IF YOU ARE HAVING ISSUES WITH THE CONTRAST IN SOME OF THE DROP DOWN MENUS YOU CAN OPTIONALLY SWITCH OVER TO THE LIGHT UI THEME BY CLICKING THE "LIGHT" BUTTON IN THE BOTTOM LEFT CORNER OF THE UI

*********************

 

 

  1. Select default from the Cluster group drop-down menu
  2. Type kind-cluster in the Name field
  3. Click REGISTER

 

 

Agent Installation Instruction

 

The kubectl apply command shown above is unique to your environment; you must copy it directly from your own Tanzu Mission Control console window.

  1. Click the "copy to clipboard" icon on the right hand side of the "run the following command in your terminal" window (alternatively you can select all the text in the window and press Ctrl-c to copy the command

 

 

Run the agent installation command on your cluster

 

  1. Click on the PuTTY session you have running in the task bar
  2. Paste the command that you just copied from TMC into the PuTTY session using a right click
    PLEASE NOTE YOUR COMMAND WILL BE DIFFERENT FROM THE ONE SHOWN ABOVE
  3. In order to watch the agent come up please run the command to watch all of the pods in the vmware-system-tmc namespace:
    watch kubectl get pods -n vmware-system-tmc

Please wait for your screen to look similar to the one shown below before moving on to the next step.  This may take up to 5 minutes to complete.

 

  1. Wait for all your containers to come up to a Running or Completed state, as shown below
  2. Once your screen is similar to the below you can exit the watch by using "Ctrl-C" on your keyboard

 

 

Verify your connection in TMC

 

 

  1. Click on the Chrome Icon on the task bar to switch back to your browser session
  2. Click VERIFY CONNECTION
  3. After your connection is verified and you receive the "Success. Your kubernetes cluster connection is established" message, click CONTINUE

 

 

Observe your newly attached cluster

 

  1. Observe the various component and agent/extension health check indications; these should all be green after a successful attach.  NOTE: If your cluster does not show up right away you may need to hit the refresh button on your browser.
  2. Feel free to spend some time familiarizing yourself with the different types of information available within the Overview tab as well as across the variety of other tabs

 

Create a Cluster Group then Provision a New Cluster


In this section you will create a cluster group and then deploy a new cluster into the freshly created cluster group.


 

Create New Cluster Group

 

  1. Click on Cluster groups in the left hand menu
  2. Click NEW CLUSTER GROUP

 

 

Name Cluster Group

 

  1. In the Name field type: hands-on-labs
  2. Click CREATE

 

 

Observe new Cluster Group

 

Observe your newly created Cluster Group.  Notice that you should now have the default cluster group as well as the hands-on-labs one you just created.  If your cluster group is not immediately visible you may need to refresh your browser.

 

 

Provision new Cluster

 

  1. Click on Clusters
  2. Click NEW CLUSTER
  3. Click on your Provider.  This will be different than what is shown above. It will be of the format TMC-HOL-xx

Your environment has been prepared with a dedicated AWS account.  Setting up a provider account is straightforward but is outside of the scope of this lab for security reasons.

 

 

Cluster name and Cluster group

 

  1. In the Cluster Name field type: aws-cluster

 

  1. In the Cluster Group field select the hands-on-labs cluster group
  2. In the Region field select us-west-2; this corresponds to the AWS region you will be deploying in.  Other regions are not currently set up for deployment.
  3. Click NEXT

 

 

Cluster Type

 

  1. Leave the default settings and click NEXT

 

 

Edit cluster specification

 

  1. Click Edit

 

 

Increase number of worker nodes

 

  1. In the Number of worker nodes field type: 2
  2. Click SAVE
  3. Click CREATE

 

 

Wait for your Cluster to Deploy

This step will take approximately 10 minutes as a live Kubernetes cluster is being deployed in AWS.  Feel free to look around the rest of the interface while you wait.

 

 

  1. Wait for your Cluster to be created
    THIS STEP WILL TAKE AROUND 10 MINUTES TO COMPLETE
  2. Once your cluster is deployed you will see a dashboard similar to the one shown.  All the Health Check Components should have a Green checkmark prior to proceeding to the next section.  You may need to refresh your browser in order for this to properly render.

 

 

Refresh Your Browser if you don't see all the Health Checks Green

 

  1. If all your Health Checks have not yet returned green as shown in the above image, refresh the webpage
  2. Hit the refresh button on Chrome

 

 

Download KUBECONFIG

Now that your cluster has been successfully deployed you will need to download the KUBECONFIG file so that you can access the cluster using standard kubectl commands.  Tanzu Mission Control gives you an easy way to do this right through the UI.  NOTE: At this time the download KUBECONFIG file option is only available for provisioned clusters, not attached clusters.

 

  1. Click Access the cluster

 

 

Download KUBECONFIG FILE and Tanzu CLI

 

  1. Click DOWNLOAD KUBECONFIG FILE
  2. Click the click here link to download the Tanzu Mission Control CLI
  3. Ensure that you click Keep at the bottom of the Chrome Browser once the download completes

 

This will open a new page.

  1. Click DOWNLOAD CLI
  2. Select Windows (64-bit)

 

  1. When the download is complete, click the Keep button.

Please note we have placed C:\Users\Administrator\Downloads in the Windows PATH Environment Variable to allow you to easily run the TMC CLI.

This is not best practice but is done to simplify the workflow in this lab environment.  You should really be placing the CLI binary in an appropriate location and then adding its location to your path variable if required.

 

 

Create VMware Cloud Services API Token

 

 

  1. Click your Username in the top right of the UI
  2. Select My Account
  3. Select the API Tokens tab
  4. Click Generate a New API Token

 

 

  1. Enter tmc-hol-token for the Token Name (you may need to scroll up to the top of the window)
  2. In the Token TTL field, change from 6 days to 6 hours 
  3. Under Define Scopes, select the All Roles checkbox
  4. Click  Generate to create the token
  5. IMPORTANT: After the token is generated, a popup will appear. Ensure you select the Copy option to copy the generated token to your clipboard to use in the next step.  Also note your lab number (this example is TMC-HOL-06)
  6. After copying the token, click Continue to close the pop-up (option will be grayed-out until you select Copy from the previous step)

 

 

Login to the TMC CLI and Confirm Cluster Access

 

 

 

  1. Open PowerShell by clicking the PowerShell Icon on the Taskbar
  2. Type the command in the PowerShell Window to start the interactive login session: tmc login
  3. For the API Token field, right click in the PowerShell window to paste your generated token from the previous section and press enter
  4. Type tmc-hol-XX (where XX is your lab number shown in the token generation screen) for Login Context name and press enter
  5. Press enter to select info for the default log level
  6. Press enter to select TMC-HOL-XX as the default credential
  7. Use the arrow keys to choose us-west-2 as the default region and press enter
  8. Press enter to select olympus-default as the default AWS ssh key
  9. Run the following command to confirm access to the newly created cluster: kubectl --kubeconfig=C:\Users\Administrator\Downloads\kubeconfig-aws-cluster.yml get namespaces

 

 

Verify your Kubernetes Cluster is Ready

 

  1. Run the command:
    kubectl --kubeconfig=C:\Users\Administrator\Downloads\kubeconfig-aws-cluster.yml get nodes
  2. Verify your nodes are in a Ready state and observe the other available information.
    If all your nodes are not yet Ready you may need to run the command again to verify they all come up correctly.

 

Deploy a Sample Application


In this section we will deploy a small sample application to the cluster that you just provisioned in AWS.

The following procedure creates two deployments (coffee and tea) exposed as services (coffee-svc and tea-svc), each running a different image (nginxdemos/hello and nginxdemos/hello:plain-text).
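
For reference, the coffee half of a manifest like this typically looks like the sketch below, shown here in the apps/v1 form you will convert to later in this module; replica counts, labels, and ports are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
        - name: coffee
          image: nginxdemos/hello    # no registry prefix, so pulled from Docker Hub
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  selector:
    app: coffee
  ports:
    - port: 80
      targetPort: 80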


 

Observe the cafe-services.yaml file

 

  1. Change directory to the Downloads:
    cd C:\Users\Administrator\Downloads
  2. View the contents of the Downloads directory using the command:
    ls
  3. Run the command:
    cat .\cafe-services.yaml
    Observe the output; this is a Kubernetes deployment file.

 

 

Update the cafe-service.yaml File

************************************************

Due to some of the API changes in Kubernetes 1.16, the YAML file loaded into the HOL environment no longer works correctly and needs to be updated to utilize the most current (non-deprecated) API constructs.  We will do that in the following step.

Specifically, the extensions/v1beta1 API group is no longer supported for these resources.  For more information refer to this link: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

*************************************************

The first command will utilize kubectl to convert the existing cafe-services.yaml file to a new file (cafe-services-convert.yaml) with updated APIs to work around the issue referenced above. The second command will simply rename the cafe-services-convert.yaml file to cafe-services.yaml, which will allow you to proceed with the deployment as instructed.

 

  1. Run the command
    kubectl convert -f .\cafe-services.yaml > cafe-services-convert.yaml
  2. Run the command
    mv .\cafe-services-convert.yaml .\cafe-services.yaml -Force
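
The essence of what kubectl convert changes in this file is the API group of the Deployment objects (and, for apps/v1, a now-mandatory selector), roughly:

# Before conversion (API group no longer served for Deployments in Kubernetes 1.16):
apiVersion: extensions/v1beta1
kind: Deployment

# After conversion (current API group; spec.selector.matchLabels is now required):
apiVersion: apps/v1
kind: Deployment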

 

 

Deploy the Cafe-Services to your AWS Cluster

 

Pod Security Policies (PSP) are enabled by default. To run apps with privileges, you need to create a role binding that allows your workloads' service accounts to use a privileged PSP.

  1. Run the command: kubectl --kubeconfig=kubeconfig-aws-cluster.yml create clusterrolebinding privileged-cluster-role-binding --clusterrole=vmware-system-tmc-psp-privileged --group=system:authenticated to create the role binding (the equivalent YAML object is shown after these steps)
  2. Run the command: kubectl --kubeconfig=kubeconfig-aws-cluster.yml apply -f cafe-services.yaml to deploy your application
  3. Verify your deployment has come up with the command: kubectl --kubeconfig=kubeconfig-aws-cluster.yml get pods
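
The kubectl create command in step 1 is shorthand for a ClusterRoleBinding object like the one below; the vmware-system-tmc-psp-privileged ClusterRole is the one referenced by the command and already exists on the cluster.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: privileged-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vmware-system-tmc-psp-privileged   # privileged PSP role referenced above
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated              # every authenticated user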

 

Conclusion


This concludes "Module 2 - Cluster Life Cycle Management"

You should now have an understanding of how Tanzu Mission Control can be used to deploy, manage and operate Kubernetes clusters across your environment.

 

Congratulations on completing Module 2.

If you are looking for additional information on VMware Tanzu Mission Control, try one of these links:


 

To continue with this lab, proceed to any module below which interests you most regarding VMware Tanzu Mission Control:

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 3 - Working with Policies (30 minutes)

Introduction


In order to get full value out of this module, you need to complete Module 2 - Cluster Lifecycle Management prior to doing this module.  If you have not completed Module 2, you will not have any Clusters deployed to set policies on.


 

Organizing Clusters and Namespaces

Through the Tanzu Mission Control console you can organize and view your Kubernetes resources in two different ways, enabling operations administrators to maintain control over clusters while allowing application teams self-service access to namespaces.

Infrastructure View

Application View

By combining your resources into groups, you can simplify management by applying policies at the group level. For example, you can apply an access policy to an entire cluster group rather than creating separate policies for each individual cluster.

 

 

Organizing Users and Controlling Access

Use secure-by-default policies to implement role-based access control (RBAC) for the resources in your organization.

You use VMware Cloud Services tools to invite users to your organization and organize them into user groups. You can set up federation with your corporate domain, which allows you to use your organization's single sign-on and identity source. By combining your users into groups, you can simplify access control by creating access policies that bind roles to groups rather than individuals.

As a best practice, add users through the group to which they initially belong. Use the Groups tab under Identity and Access Management, rather than the Active Users tab. In this way, the new user is added to a group to which you have already assigned roles through an access policy. If you use the Active Users tab, the new user is added to the organization and service, but because they are not yet added to a group, they will likely have only minimal access to the service until you add them to a group. 

To edit the access policy for an object, you must have iam.edit permissions on that object.

To apply the permissions in a given role to a member, you create a role binding in the object's access policy that associates members and groups with a role. For example, to give the members of group1 edit permission on example-cluster, you create a role binding in the policy for example-cluster that associates group1 with the cluster.edit role.
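
Because Tanzu Mission Control roles ultimately translate to Kubernetes role bindings (as described in Module 1), a binding like this behaves conceptually like the Kubernetes RBAC object sketched below; the object and role names here are purely illustrative and are not what Tanzu Mission Control generates.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: group1-cluster-edit     # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                    # built-in Kubernetes role, standing in for cluster.edit
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: group1                # the group from the example above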

In addition to the direct policy defined for a given object, each object also has inherited policies that are defined in the parent objects. For more information about policy inheritance, see Policy Inheritance below.

The following table shows the roles you can use in access policies at each level of the hierarchy.

 

Policy Inheritance

In the Tanzu Mission Control resource hierarchy, there are three levels at which you can specify policies. 

In addition to the direct policy defined for a given object, each object also has inherited policies that are defined in the parent objects. For example, a cluster has a direct policy and also has inherited policies from the cluster group and organization to which it is attached.

 

Workspaces and Namespaces


In this section you will cover the topic of Workspaces and Namespaces.  You will create a new Workspace within the environment; later in this module you will give a Developer User access to it.

AGAIN PLEASE ENSURE YOU HAVE COMPLETED MODULE 2 BEFORE CONTINUING.


 

Login to Tanzu Mission Control

 

 

You should already be logged in if you just completed the previous module and just need to return to the Tanzu Mission Control console.

  1. Click on VMware Cloud Services at the top of the screen
  2. Click the Tanzu Mission Control tile
    NOTE: You should still be logged in as the Platform Admin user (this will be of the format myvmware3xx@vmware-hol.com)

 

 

Observe the existing Namespaces in the Environment

Notice that none of the Namespaces in the environment are currently being managed by TMC; however, they have been made visible to you.  You have the ability to create Namespaces in both provisioned and attached clusters.

 

  1. Click Namespaces
  2. Observe that none of the Namespaces are being managed by Tanzu Mission Control
  3. Some of the Namespaces are relevant to the control plane operation of TMC Kubernetes clusters; these are represented by the labels as shown.  Labels are a very good way to drive policy-driven management of your environment.

 

 

Workspaces

 

Workspaces allow you to take an application- or workload-centric view of your environments, organizing your managed namespaces into logical groups across clusters, perhaps to align with development projects. In an attached cluster, you can have both managed and unmanaged namespaces. When you create a namespace in an attached or provisioned cluster, you specify the workspace to which the namespace belongs.

  1. Click Workspaces
  2. Observe (DO NOT CLICK) that there is just the single default Workspace available currently
  3. Click NEW WORKSPACE 

 

 

New Workspace

 

  1. Type:  hol-analytics in the Name input area
  2. Click CREATE

 

 

View the newly created Workspace

 

  1. Click on hol-analytics

 

 

Observe the Actions available to you within a Workspace

 

  1. Click ACTIONS
  2. Observe the actions you can perform on your Workspace

We will be using this hol-analytics Workspace in the next section to apply access policies

 

Applying Access Policies


In this section we will be exploring the policies that have already been applied to the environment and then setting up access for a new user.


 

Observe the Kubernetes Clusters in your Environment

 

  1. Click Clusters
  2. Observe the clusters that are present in your environment

If you do not have two clusters present in your environment please return to Module 2 to provision and attach these clusters as they will be required to fully demonstrate the access policies in this section

 

 

Cluster groups

 

  1. Click Cluster Groups
  2. Observe that you should have two cluster groups present

If you do not have two cluster groups present in your environment, please return to Module 2 to provision and attach these clusters, as they will be required to fully demonstrate the access policies in this section

 

 

Access Policies

 

  1. Click Policies, by default you will be brought to the Cluster Access Policies view
  2. Expand the TMC-HOL-xx Organization (xx will be different for each lab user)
  3. Expand the default cluster group and observe the clusters that are members of the group
  4. Expand the hands-on-labs cluster group and observe the clusters that are members of the group
  5. Click LIGHT to switch the UI to "light mode" as the increased contrast provided by the light theme makes the next few steps easier to view

 

 

Cluster Group Access Policies - clustergroup.view

You will need the Developer user ID and password that were given to you during the check-in process; please have these ready for the remainder of this module.

You will be granting your developer user clustergroup.view access in this section to allow for viewing of clusters.

 

  1. Click on the hands-on-labs Cluster Group
  2. Expand the Inherited Policies by clicking the arrow next to the TMC-HOL-xx organization
  3. Observe all the policies that were granted at an Organizational level and have been inherited by the cluster group
  4. Click the Dropdown arrow in the Direct access policies area
  5. Click NEW ROLE BINDING

 

 

Add a Direct Access Policy

 

 

FOR THIS SECTION YOU WILL NEED TO RECALL THE DEVELOPER USER CREDENTIALS THAT WERE GIVEN TO YOU AS PART OF THE CHECK-IN PROCESS.

This will be of the format myvmware4xx@vmware-hol.com

  1. Click the Role field
  2. Click the clustergroup.view: Read access to cluster groups, clusters, and namespaces role
  3. Type in your development user's email address from the check-in process (should be of the format myvmware4xx@vmware-hol.com)
  4. Click ADD
  5. Click SAVE

 

 

Cluster Group Access Policies - cluster.admin

You will be granting your Developer User cluster.admin access to all the clusters inside of the hands-on-labs cluster group.  This will give full administrative privileges within the Kubernetes clusters.

 

 

 

 

  1. Click NEW ROLE BINDING
  2. Click the Role field
  3. Click cluster.admin: Root access to clusters - including policies in the dropdown for Role selection
  4. Type in your development user's email address from the check-in process (should be of the format myvmware4xx@vmware-hol.com)
  5. Click ADD
  6. Click SAVE
  7. Confirm that your Direct access policies are setup as indicated

 

 

Workspace Access Policies

In this section you will be giving your developer access to a workspace.admin policy. This will allow your developer to create image registry and network policies on the hol-analytics workspace.  The workspace and namespace policies are a more application-centric way to give a user access to your Kubernetes infrastructure.  Contrast this with the Cluster and Cluster Group policies, which are a more infrastructure-centric way to define policies.  Both approaches are important when operating Kubernetes at scale.

 

  1. Click WORKSPACES in the Policies tab
  2. Expand your Organization (will be of the format TMC-HOL-xx)
  3. Click on the hol-analytics Workspace
  4. Expand the hol-analytics Direct access policies
  5. Click NEW ROLE BINDING

 

 

Workspace Access Policies - workspace.admin

 

 

 

  1. Click the dropdown arrow in the Role field
  2. Click the workspace.admin: Admin access to workspaces - including policies role
  3. Type in your development user's email address from the check-in process (should be of the format myvmware4xx@vmware-hol.com)
  4. Click ADD
  5. Click SAVE
  6. Confirm that your Direct access policies are setup as indicated

 You have now successfully setup a policy framework that will allow you to on-board a new user to a restricted set of capabilities.  It is now time to log off of your current user and login as the restricted user to demonstrate some of the policies in action.

 

 

Sign Out of your Current VMware Cloud Services Portal Session

 

  1. Click your Username in the top right of the UI
  2. Click SIGN OUT

 

 

Login as the Development user

*****************

DO NOT SKIP THIS STEP, YOU NEED TO LOGIN via the CHECK-IN OR THE NEXT SECTION WILL NOT WORK CORRECTLY

****************

 

 

 

 

  1. Click on the Student Check-in bookmark
  2. Type in the email address that you use to login to the HOL platform with (example yourname@yourcompany.com)
  3. Click Search
  4. Click on the link under Developer Login  (DO NOT JUST COPY PASTE, YOU NEED TO CLICK THE LINK)
  5. Click Sign in using another account
  6. Your account should be auto-populated for you, press tab to continue to the next screen
  7. Enter your password (should be VMware1!)
  8. Click SIGN IN

 

 

Log in to Tanzu Mission Control (Developer Persona)

 

  1. On the Cloud Services Portal Page, click Tanzu Mission Control

 

 

Check the Access Policies

 

 

  1. Click Policies
  2. Expand the TMC-HOL-xx Organization
  3. Click on the hands-on-labs cluster group
  4. Observe that you do not have permission to view the Policies while logged in as your Developer ID
  5. Expand the hands-on-labs cluster group
  6. Click on your aws-cluster
    Notice that you now have the ability to observe the policies at the cluster level, as the development user has been granted the cluster.admin role by inheritance from the hands-on-labs cluster group

Feel free to explore the Policies section further to see what type of visibility this user has.

 

 

Workspaces

 

  1. Click Workspaces
  2. Click your hol-analytics workspace

 

 

Create a new Namespace

 

 

  1. Click NEW NAMESPACE
  2. Verify your cluster is set to: aws-cluster and your Workspace is set to hol-analytics
  3. In the Name field type: development
  4. Click Create

 

 

Observe new Namespace

 

  1. Click on Namespaces
  2. Hit Refresh on your browser
  3. Observe the new development namespace that you just created, also take note that it is a managed namespace in the hol-analytics Workspace.  Have a look at the Labels that have been attached to this namespace.

 

 

Generate API Token for Developer User

We will need to create a new API token for the Developer user to authenticate to TMC via the tmc CLI, as the token we created in the previous exercise is only valid for the Platform Operator.

 

 

  1. Click your Username in the top right of the UI (verify you are logged in as the myvmware4xx user)
  2. Select My Account
  3. Select the API Tokens tab
  4. Click Generate a New API Token

 

 

  1. Enter tmc-hol-token for the Token Name
  2. In the Token TTL field, change from 6 days to 6 hours
  3. Under Define Scopes, select the All Roles checkbox
  4. Click Generate to create the token
  5. IMPORTANT: After the token is generated, a popup will appear. Ensure you select the Copy option to copy the generated token to your clipboard to use in the next step. Also note your lab number (this example is TMC-HOL-06)
  6. After copying the token, click Continue to close the pop-up (option will be grayed-out until you select Copy from the previous step)

 

 

Launch PowerShell

 

  1. Launch PowerShell by clicking the icon on the taskbar (NOTE: you may already have a PowerShell session open, you can continue to use the open one)

 

 

Force Re-authentication as Developer User

Your authentication to TMC has been saved to a local file and will not yet have timed out; as such, we will delete the file to force a re-authentication.  Using the tmc CLI, you will then authenticate as the Developer user rather than the Platform Operator.

 

  1. Change to the Administrator's home directory using cd c:\Users\Administrator
  2. To ensure there are no other Kubernetes configs stored, remove the .kube directory by running the following command:
    rm -r .\.kube
  3. To remove any stored authentication to TMC, run the following command; this will force you to re-authenticate to TMC
    rm -r .\.vmware-cna-saas

 

 

Login to the TMC CLI as the Developer user

 

  1. Type the following command in the PowerShell Window to start the interactive login session: tmc login
  2. For the API Token field, right click in the PowerShell window to paste your generated token from the previous section and press enter
  3. Type tmc-hol-XX (where XX is your lab number shown in the token generation screen) for Login Context name and press enter
  4. Press enter to select info for the default log level
  5. Press enter to select TMC-HOL-XX as the default credential
  6. Use the arrow keys to choose us-west-2 as the default region and press enter
  7. Press enter to select olympus-default as the default AWS ssh key

 

 

Verify the Namespace you just created is present on the AWS Cluster

 

 

  1. Change to the Downloads directory using the command:
    cd .\Downloads
  2. Get a list of the current namespaces in your AWS cluster using the following command:
    kubectl --kubeconfig="kubeconfig-aws-cluster.yml" get namespaces
  3. Observe your available namespaces in your aws-cluster

 

 

Deploy a simple application into the development namespace

 

  1. Move the KUBECONFIG file to the .kube directory to set the default kubectl to access the AWS cluster.  Issue the following command:
    mv '.\kubeconfig-aws-cluster.yml' "C:\Users\Administrator\.kube\config"
  2. Check that there are no running pods in the development namespace by running:
    kubectl --namespace development get pods
  3. Deploy a simple Cafe Service application (cafe-services.yaml was created in Module 2) into the new Namespace by running:
    kubectl --namespace development apply -f cafe-services.yaml
  4. Verify the pods have successfully come up using this command:
    kubectl --namespace development get pods

 

 

Observe all the pods you have running now

 

  1. Run the command:
    kubectl get pods --all-namespaces
    Observe all the pods that have now been deployed in your environment.

You have just successfully deployed a simple application into the new Namespace that you created using your Developer access persona.

 

Revoking Access


In order to restrict the Developer's access to the cluster to only the workspace created earlier in this module, you will need to log back in as the Platform Operator to change some of the access policies.  To do this we will make use of an incognito Chrome browser window.

 

  1. RIGHT CLICK on the Chrome session on the task bar
  2. Click on New Incognito Window

 

Login as the Platform Operator (Incognito Session)

 

 

 

  1. Click the VMware Cloud Services bookmark
  2. Enter your Platform Operator credentials that you got from the check-in process (should be of the format myvmware3xx@vmware-hol.com)
  3. Click NEXT
  4. Enter your password (should be VMware1!)
  5. Click SIGN IN
  6. Launch TMC by clicking Tanzu Mission Control

 

 

Remove Cluster Access Policies - cluster.admin

 

 

  1. Click on Policies
  2. Expand your Organization
  3. Click on the Cluster Group hands-on-labs
  4. Expand your hands-on-labs Direct Access Policies
  5. Click Delete for the cluster.admin binding (leave clustergroup.view)
  6. Click YES, DELETE ROLE BINDING

 

 

Leave Workspace Access Policies as is

Take note that we did not remove the Workspace Level Access Policy from the Development user.  As such they will still have access to the Workspace and any managed Namespaces inside of it.  In our lab that means the Development user will still have access to the hol-analytics Workspace and the development namespace that has been provisioned on the aws-cluster.

 

  1. Click WORKSPACES
  2. Click hol-analytics

 

 

Checking Access Restrictions

 

 

Now let's see what this translates to when using the kubectl command (an additional check using kubectl auth can-i is sketched after these steps)

  1. Jump back into your PowerShell session by clicking the icon on the taskbar
  2. Run the command: kubectl --namespace development get pods
  3. Notice that you still have access to the development namespace
  4. Run the command: kubectl get pods
  5. Notice that you no longer have access to the default namespace
  6. Run the command: kubectl get namespaces
  7. Again notice you have been blocked from seeing objects outside of your access policy
  8. Run the command: kubectl get namespace development
    Once you limit the scope to within your access policy, the command will run
  9. Run the following command: kubectl --namespace development delete -f .\cafe-services.yaml
    This will delete the deployed application in the development namespace
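
Given the role bindings set up in this module, kubectl's built-in permission check should report along the following lines (the exact answers depend on the policies currently in effect):

kubectl auth can-i get pods --namespace development   # expected: yes
kubectl auth can-i get pods                           # default namespace, expected: no
kubectl auth can-i list namespaces                    # cluster-wide, expected: no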

 

Applying Image Registry Policies


In this section we will explore Image Registry Policies and how admins can allow access to only trusted registries.  Note: you will continue in your Incognito window as the Platform Admin.


 

Image Registry Policies

The image registry policies can only be applied to Workspaces. By default, access to all container registries is allowed. Admins can create a whitelist of registries. In this section, we will apply Image Registry Policies and deploy the cafe services application.

 

 

Add bintray.io to Image Registry Policy

 

Note: Please navigate back to the incognito browser session, where you are logged in as the org admin (myvmware3xx user) before proceeding.

  1. Click Policies
  2. Click Image registry
  3. Click Workspaces
  4. Select the hol-analytics workspace
  5. Click Add Image Registry Policy to initiate the creation of an image registry policy

 

  1. Type bintray in the Policy Name field
  2. Type *.bintray.io in the Image Registry patterns field
  3. Click Add
  4. Click Add Policy to create the image registry policy for the hol-analytics workspace

 

  1. Observe the newly created bintray image registry policy

 

 

Checking Image location for Cafe Services Application

Before we start, we'll need to examine the cafe application's deployment file to note which container registry the images that support the application will be pulled from.

 

  1. Verify the container image location in the cafe services application in your PowerShell window:

    cat ./cafe-services.yaml | Select-String -Pattern image:

The absence of an explicit image registry name before the image name in the screenshot above implies that the image is being pulled from Docker Hub.
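
In other words, an unqualified image reference in the manifest resolves to Docker Hub:

# As written in cafe-services.yaml (no registry prefix):
image: nginxdemos/hello
# Fully qualified form the container runtime actually pulls:
image: docker.io/nginxdemos/hello:latest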

 

 

Deploying the Application

Now we are ready to deploy the cafe services application in the development namespace. The cafe services application is pulling container images from Docker Hub and we have a whitelist in the hol-analytics workspace to only allow images to be pulled into our Kubernetes cluster from the bintray.io container registry. If the image registry policy is configured correctly, the deployment should fail.

 

  1. Run the following command to deploy the application

    kubectl --namespace development apply -f cafe-services.yaml
  2. Run the following command to check the status of the deployment

    kubectl --namespace development get deployment

As you can see from the above screenshot, the coffee and tea deployments are not in the ready state. Let's investigate the failure.

 

  1. Run the following command to gather information about the current state of our application deployment and observe the output:

    kubectl --namespace development get deployment coffee -o yaml

Look in the status section of the previous command, as shown in the screenshot above. The request to create the deployment was denied because there was no image registry policy that allowed the fetching of container images from Docker Hub. 

 

 

Adding docker.io to Image Registry Policy

Now that we have verified the container registry policy we created in the TMC portal is working as expected, we can add an additional image registry policy allowing images to be pulled from *docker.io so our cafe app can be successfully deployed on the cluster.

 

  1. In your Incognito window as Platform Admin, Click Policies
  2. Click Image registry
  3. Click Workspaces
  4. Select the hol-analytics workspace
  5. Click Add Image Registry Policy to initiate the creation of an image registry policy

 

  1. Type dockerhub in the Policy Name field
  2. Type *docker.io in the Image Registry patterns field
    Note: there is no . in between the * and docker text above
  3. Click Add
  4. Click Add Policy to create the additional image registry policy for the hol-analytics workspace

 

  1. Observe the newly created docker image registry policy, which will allow the cafe application to pull its images from dockerhub.

 

 

Redeploying the Application

After adding Docker Hub to Image Registry Policy, let's re-deploy the cafe services application to test our newly created image registry policy.

 

  1. Run the following command to delete the existing cafe services application from the development namespace:

    kubectl --namespace development delete -f .\cafe-services.yaml
  2. Run the following command to deploy the cafe service application:

    kubectl --namespace development apply -f .\cafe-services.yaml
  3. Check the status of the cafe application deployments with the following command:

    kubectl --namespace development get deployment

After adding the Docker Hub registry to our whitelist via the image registry policy, our application was able to successfully pull its images from Docker Hub and deploy as expected.
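
Optionally, you can also confirm that the pods behind the deployments are now running (all coffee and tea pods should eventually report a Running status):

    kubectl --namespace development get pods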

 

 

Removing all Image Registry Policies

Before we proceed to the next section, please remove all image registry policies from the TMC UI.

 

 

 

  1. As the Platform Admin, click Image registry
  2. Click Workspaces
  3. Select the hol-analytics workspace
  4. Click the disclosure triangle to expand the bintray image registry policy
  5. Select Delete, confirm the deletion by selecting Yes, Delete Policy
  6. Click the disclosure triangle to expand the dockerhub image registry policy
  7. Select Delete, confirm the deletion by selecting Yes, Delete Policy

You have successfully created, tested, and deleted image registry policies applied to a workspace in Tanzu Mission Control. You verified that the image registry policies took effect by deploying a sample application in the development namespace of your "aws-cluster."

 

Applying Network Policies


In this section we will explore Network Policies and how users can configure them through TMC.


 

Network Policies

A Kubernetes network policy is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints. NetworkPolicy resources use labels to select pods and define rules that specify what traffic is allowed to the selected pods. In Tanzu Mission Control, network policies are defined centrally at the organization or workspace level and applied to the namespaces within them.

There are 4 main types of Network Policies available to implement in Tanzu Mission Control:

  1. deny-all
  2. allow-all
  3. deny-all-to-pods
  4. allow-all-to-pods

Users can specify labels to match which pods/workloads a policy applies to. For example, observe the .yaml file for the test-allow-policy below. This policy is applied to the test-namespace and ensures ingress traffic is allowed to any pods with the labels app=mysql and release=dev within the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-allow-policy
  namespace: test-namespace
spec:
  podSelector:
    matchLabels:
      app: mysql
      release: dev
  ingress:
  - {}

A common way of applying these policies is to set a deny-all on the organization or workspace level within TMC and then adding allow-all-to-pods network policies to allow traffic from specific pods based on label selectors. This is the workflow we will perform in the exercise below by setting a deny-all policy on the hol-analytics workspace and then creating an allow-all-to-pods policy to permit traffic to the coffee pods that are part of our cafe application deployment.
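
For context, a generic deny-all-ingress NetworkPolicy follows the standard Kubernetes pattern shown below. This is a minimal sketch for illustration; the manifest Tanzu Mission Control actually generates for the namespaces in the workspace may differ in naming and details:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: development
spec:
  podSelector: {}    # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress          # no ingress rules are listed, so all ingress traffic is denied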

 

 

Getting the IP address of the Coffee POD

 

  1. Run the following command to get the IP address of the coffee pods:

    kubectl --namespace development get pods -o wide

The above command shows all pods running in the development namespace. The cafe application has 2 replicas of the coffee pod. Choose one of them and note its IP address. The pod IPs in your environment will be different; in this example, the first coffee pod's IP address is 192.168.202.42.
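
If you prefer to capture the address without reading the table, an optional one-liner using a label selector and JSONPath (assuming the coffee pods carry the app=coffee label, which we confirm later in this module) is:

    kubectl --namespace development get pods -l app=coffee -o jsonpath='{.items[0].status.podIP}'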

 

 

Creating Photon Pod for Command Prompt Access

In this section, we will deploy a Photon OS pod and use its shell to test connectivity to the coffee pods from inside the Kubernetes cluster.

 

  1. Execute the following command to run the photon pod on the aws-cluster in the development namespace:

    Note: After running the command, you may have to press "enter" to bring up the pod's shell

    kubectl --namespace development run photon --rm -i --tty --image vmware/photon -- bash

Note that the command prompt changed after executing the above command. The command started a Photon pod and dropped us into a shell inside it. Commands we issue now run inside the Photon pod on the aws-cluster, which lets us reach other pods and services at IP addresses that are only routable from within the cluster itself.
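
As an optional alternative to keeping an interactive shell open, you can launch a disposable pod that issues a single request and then exits. The curlimages/curl image used here is an assumption for illustration and is not part of the lab files; substitute the IP address of your coffee pod:

    kubectl --namespace development run curl-test --rm -i --restart=Never --image=curlimages/curl -- curl -I -m1 192.168.202.42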

 

 

Check Connectivity to the Coffee Pod

 

  1. Run the following command to check the connectivity between photon and the coffee pod:

    curl -I -m1 192.168.202.42

    Note: the IP address used in this command was the IP of coffee pod on the previous page. This IP will be different in your environment so please substitute with the appropriate IP.

As seen in the screenshot above, the curl command shows that the coffee pod returned a 200 OK response code. Communication between the photon and coffee pods is working, which is expected: by default, Kubernetes allows all traffic between pods when no network policies are defined. Do not close the shell, as we will return to this pod to test connectivity after implementing additional network policies in Tanzu Mission Control.

 

 

Creating a Deny All Network Policy

Now that we've tested base connectivity, we will log in to the Tanzu Mission Control portal and create a deny-all Network Policy to block all traffic to pods in namespaces within the hol-analytics workspace.  As the Platform admin user (myvmware3xx@vmware-hol.com), create a deny-all network policy from the TMC web UI.

 

  1. Select Policies
  2. Select Network
  3. Select Workspaces
  4. Select the hol-analytics workspace
  5. Click the Add Network Policy button

 

  1. Select deny-all from the Network Policy drop-down
  2. Type denyall in the Policy Name field
  3. Click the Add Policy button to create the policy and apply it to the workspace

 

  1. Click the disclosure triangle for the denyall network policy
  2. Observe the newly created direct network policy

Navigate back to the PowerShell session and verify the network policy has been created and applied locally on the aws-cluster. If you are still in the photon pod's shell, type exit to return to the local user's shell.

 

  1. Run the following command to verify the denyall network policy has been created on the "aws-cluster:"

    kubectl --namespace development get networkpolicy

As the output of the command shows, the denyall policy created in the TMC UI has been applied locally on the aws-cluster.
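
If you want to inspect the spec that TMC pushed down to the cluster, an optional check is to dump the policy objects in the namespace; for a deny-all policy, you would typically expect an empty pod selector with no ingress rules:

    kubectl --namespace development get networkpolicy -o yaml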

After this policy is applied, all ingress traffic to the pods in the development namespace is blocked. Let's hop back into the photon pod console to verify that we can no longer reach the coffee pod.

 

 

Checking Connectivity Between Pods

Now that we have set up the deny-all network policy, we can verify that connectivity between the photon and coffee pods is blocked.

 

  1. Again, execute the following command to run the photon pod on the aws-cluster in the development namespace:

    Note: After running the command, you may have to press "enter" to bring up the pod's shell

    kubectl --namespace development run photon --rm -i --tty --image vmware/photon -- bash 

  2. Run the following command to test connectivity with the coffee pod. Note: Again this IP will be different in your environment so please substitute with the appropriate IP.

    curl -I -m1 192.168.202.42

As seen in the screenshot above, the command times out because the photon pod can no longer reach the coffee pod. This confirms that connectivity between pods has been restricted after applying the denyall policy.

 

 

Allow Ingress on the Coffee Pods

Now that we have created a deny-all policy on the workspace, we can create an additional network policy to selectively allow traffic to our coffee pods. We will utilize labels to create this network policy. The label app=coffee uniquely identifies the coffee pods that are part of our cafe application.

 

  1. Type exit to leave the photon pod's shell and return to the shell of the local user
  2. Confirm the coffee pods have the app=coffee label by running the following command, which will list all pods running in the development namespace with the app=coffee label:

    kubectl --namespace development get pods -l app=coffee

Now that we have verified our pods' labels, we can navigate to the TMC web portal to set up our network policy to allow traffic to all of our coffee pods.
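
The policy we are about to create in TMC corresponds to the standard allow-all-to-pods pattern sketched below. This manifest is shown for context only; the resource that Tanzu Mission Control actually generates may differ in naming and details:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-coffee-pods
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: coffee    # only pods carrying this label are selected
  ingress:
  - {}               # an empty rule allows all ingress traffic to the selected pods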

 

Navigate to the TMC portal and perform the following actions to create the allow-coffee-pods network policy:

  1. Click Policies from the left hand menu
  2. Select the Network tab
  3. Select the Workspace tab
  4. Click on the hol-analytics workspace and observe the presence of the existing deny-all network policy
  5. Click Add Network Policy under Direct Network policies section

 

  1. Select allow-all-to-pods from the Network policy drop-down menu
  2. Type allow-coffee-pods in the Policy Name field
  3. Type app in the key field
  4. Type coffee in the value field
  5. Click Add
  6. Click Add Policy to create the allow-coffee-pods network policy

 

  1. Click the disclosure triangle next to the newly created allow-coffee-pods network policy to observe the policy characteristics

Now that we have created the additional network policy, let's again use the PowerShell session to verify that the allow-coffee-pods network policy created in the TMC web portal has been implemented locally on the "aws-cluster."

 

  1. Navigate back to the Powershell session
  2. Run the following command to verify the existence of the allow-coffee-pods network policy on the aws-cluster:

    kubectl --namespace development get networkpolicy

 

 

Checking Connectivity Between Pods After Allowing Traffic to Coffee Pods via Network Policy

After adding an explicit network policy rule to allow ingress traffic to the coffee pods, let's verify that communication from the photon pod to the coffee pod works again.

 

  1. Navigate back to the Powershell session
  2. Again, execute the following command to run the photon pod on the aws-cluster in the development namespace:

    Note: After running the command, you may have to press "enter" to bring up the pod's shell

    kubectl --namespace development run photon --rm -i --tty --image vmware/photon -- bash 

  3. Run the following command to test connectivity between the pods again, substituting the IP address of the coffee pod in your environment:

    curl -I -m1 192.168.202.42

As seen from the results above, connectivity between the photon and coffee pods has been restored.

 

 

Cleanup Network Policies

Now that we have verified the workflow for implementing network policies on a workspace in TMC, we can restore the default behavior in our aws-cluster by deleting the network policies we created.

 

  1. Click the disclosure triangle next to the allow-coffee-pods network policy to expand the network policy
  2. Click Delete to delete the allow-coffee-pods network policy
  3. Repeat the above process for the denyall network policy

 

  1. Verify that no direct policies exist under the Direct Network policies section of the hol-analytics workspace (see the optional check below).
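
Optionally, you can confirm from the PowerShell session that the policies have also been removed from the cluster; once the deletions propagate, the command below should report that no NetworkPolicy resources remain in the development namespace:

    kubectl --namespace development get networkpolicy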

 

 

Delete the Cafe Services Application from the development namespace

We can also delete our cafe application now that we have finished testing network policy implementation in TMC.

 

  1. Type exit to leave the shell of the photon pod.
  2. Change into the Downloads directory:
    cd C:\Users\Administrator\Downloads
  3. Delete the cafe services application:
    kubectl --namespace development delete -f .\cafe-services.yaml

 

Conclusion


This concludes "Module 3 - Working with Policies"

You should now have a good understanding of how Tanzu Mission Control can be used to govern access to your Kubernetes environments through policies.

 

Congratulations on completing Module 3.

If you are looking for additional information on VMware Tanzu Mission Control, try one of these links:


 

To continue with this lab, proceed to any module below which interests you most regarding VMware Tanzu Mission Control:

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 4 - Conformance Testing (30 minutes)

Introduction


 

Welcome to Module 4 - Sonobuoy: Test your Kubernetes Deployment

This Module contains the following lessons:


 

Sonobuoy Overview

Sonobuoy is a diagnostic tool that makes it easier to understand the state of a Kubernetes cluster by running a set of plugins (including Kubernetes conformance tests) in an accessible and non-destructive manner. It is a customizable, extendable, and cluster-agnostic way to generate clear, informative reports about your cluster.

Its selective data dumps of Kubernetes resource objects and cluster nodes allow for the following use cases:

Sonobuoy supports 3 Kubernetes minor versions: the current release and the 2 minor versions before it. Sonobuoy is versioned to track the Kubernetes minor version to clarify the support matrix. For example, Sonobuoy v0.14.x supports Kubernetes 1.14.x, 1.13.x, and 1.12.x.

You can skip this version enforcement by running Sonobuoy with the --skip-preflight flag.
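
Tanzu Mission Control runs inspections for you, but for reference, a standalone run with the Sonobuoy CLI typically looks like the commands below. These are illustrative only and are not required for this lab; the tarball name is a placeholder:

    # Launch a quick smoke test against the cluster in your current kubeconfig context
    sonobuoy run --mode quick --wait

    # Check progress and download the results tarball once the run completes
    sonobuoy status
    sonobuoy retrieve .

    # Summarize pass/fail results from the downloaded tarball, then clean up
    sonobuoy results <tarball-name>.tar.gz
    sonobuoy delete --wait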

 

Student Check-In


You may skip this step if you have already completed the login in Module 2 and jump to the "Login and Create a Cluster" section.

This section provides guidance on how to gain access to Tanzu Mission Control through VMware Cloud Services. You will open the Student Check-In page, search for your email address, and be provided with My VMware accounts for the VMware Cloud Services login.


 

Open Student Check-In Web Page

 

Open Chrome Browser

 

  1. At the top of the browser, click Student Check-In
  2. This will navigate to https://checkin.hol.vmware.com

 

 

Search and Validate

 

  1. Enter the email address you used to log in and start the lab
  2. Click Search

Two My VMware login accounts will be provided: Platform Admin and Developer User

  3. Click the link under the Platform Admin Login (e.g. myvmware301@vmware-hol.com / VMware1!)

You will be redirected to the VMware Cloud Services login page.

 

 

VMware Cloud Services Sign-In

 

  1. Type in the Platform Admin email address that you received as part of the check-in process (this will be of the format myvmware3xx@vmware-hol.com)
  2. Click NEXT or press the TAB key to continue


 

 

  1. Enter the password: VMware1!
  2. Click SIGN IN
  3. Click on the Tanzu Mission Control tile

Congratulations!

You are now logged into the Cloud Services Console with an Organization assigned (e.g. TMC-HOL-01), and the Tanzu Mission Control service is available for use while this lab is active.

 

Login and Create a Cluster


In this section you will deploy a new cluster; in the subsequent sections you will run an inspection on this cluster and explore the results.


 

Ensure that you are logged in as the Platform Admin

 

  1. For this next section you will need to ensure you are logged in as the Platform Admin (your user ID should be of the format myvmware3xx@vmware-hol.com)

If continuing from the previous module this will be your Incognito Chrome session.

 

 

Provision new Cluster

**********************************************

PLEASE NOTE THAT IF YOU ARE RUNNING THROUGH THIS LAB FROM START TO FINISH, THE NEXT STEPS WILL BE REDUNDANT. If you have already created a cluster in AWS (this is done in Module 2), you can skip this section and move on to the next one by using the link below:

RUN AN INSPECTION

If you have not created an AWS cluster yet please continue with the following steps.

********************************************

 

  1. Click on Clusters
  2. Click NEW CLUSTER
  3. Click on your Provider. This will be different from what is shown above; it will be of the format TMC-HOL-xx

Your environment has been pre-paired with a dedicated AWS account. Setting up a provider account is straightforward, but it is outside the scope of this lab for security reasons.

 

 

Cluster name and Cluster Group

 

  1. In the Cluster Name field type: aws-cluster

 

  1. In the Cluster Group field select the default cluster group
  2. In the Region field select us-west-2; this corresponds to the AWS region you will be deploying in. Other regions are not currently set up for deployment.
  3. Click NEXT

 

 

Cluster Type

 

  1. Leave the default settings and click NEXT

 

 

Edit cluster specification

 

  1. Click Edit

 

 

Increase number of worker nodes

 

  1. In the Number of worker nodes field type: 2
  2. Click SAVE
  3. Click CREATE

 

 

Wait for your Cluster to Deploy

This step will take approximately 10 minutes, as a live Kubernetes cluster is being deployed in AWS. Feel free to look around the rest of the interface while you wait.

 

 

  1. Wait for your Cluster to be created
    THIS STEP WILL TAKE AROUND 10 MINUTES TO COMPLETE
  2. Once your cluster is deployed, you will see a dashboard similar to the one shown. All the Health Check components should have a green checkmark before you proceed to the next section.

 

 

Refresh Your Browser if you don't see all the Health Checks Green

 

  1. If all of your Health Checks have not yet returned green, as shown in the image above, refresh the webpage
  2. Hit the refresh button in Chrome

 

Run an Inspection


In this section we will run a Lite conformance test on the AWS cluster in your environment. A full conformance test takes several hours to run and is outside the scope of this lab, primarily due to the time required.

 

  1. Click on Inspections in the left-hand menu
  2. Click NEW INSPECTION

 

New Inspection

 

  1. In the Cluster field select your aws-cluster
  2. Select a Lite inspection
  3. Click RUN INSPECTION

 

 

Wait for inspection to Complete Successfully

 

 

  1. Monitor the Progress of your Inspection
  2. Periodically hit Refresh in the browser until you see a Successful result; this should not take more than a few minutes
  3. Once your report is returned click on Success to view the report

 

 

Review the Inspection results

 

  1. Take note of the items that were tested for and the results

You'll notice the Lite inspection does not provide very much detail. A full conformance test consists of 180+ different tests and checks your cluster against the Certified Kubernetes requirements published by the Cloud Native Computing Foundation (CNCF).

More details can be found here: https://www.cncf.io/certification/software-conformance/

 

Conclusion


This concludes "Module 4 - Conformance Testing".

In this module you saw how to use Tanzu Mission Control to perform conformance tests based on the Cloud Native Computing Foundation (CNCF) Certified Kubernetes program.


 

You have finished Module 4 and have come to the end of this Hands-on Lab.

Congratulations on completing Module 4 and concluding the lab.

As you have seen, Tanzu Mission Control gives extensive control over your Kubernetes environment and greatly simplifies access and policy enforcement. This includes not just provisioned clusters - but any cluster. Clusters can be quickly provisioned or attached, and teams given access so they can continue to be productive. Visibility into cluster health, conformance, and workloads helps operators address issues quickly, before they impact users or applications. Imagine the power of TMC to enforce consistency across thousands of retail stores, or shared Kubernetes clusters being accessed by tens of thousands of developers.

If you are looking for additional information on Sonobuoy, try one of these:

If you are looking for additional information on VMware Tanzu Mission Control, try one of these links:

 

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-2032-01-CNA

Version: 20200729-220816