Lab Overview - HOL-1830-01-CNA - Photon OS and Container Basics - Getting Started
Note: It will likely take more than 45 minutes to complete this lab and you may not finish all of the modules during your time. The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.
The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.
Welcome to HOL-1830-01! In this lab, we will introduce the basic concepts of containers in general and Docker in particular. We will also look at how to implement a simple application in Docker and how to use container networking and Docker volumes. All exercises in this lab run on top of vSphere, using Photon OS™ as the container host OS. We will also briefly cover the vSphere Docker Volume Service.
Lab Module List:
This lab manual can be downloaded from the Hands-on Labs Document site found here:
This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:
During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.
You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.
You can also use the Online International Keyboard found in the Main Console.
When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Without full access to the Internet, this automated process fails and you see this watermark.
This cosmetic issue has no effect on your lab.
Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
Module 1: Introduction to Containers (15 minutes)
In this Chapter, we will explain containers and how they enable 3rd Platform application architectures to be run efficiently in distributed environments.
While containers are certainly a very popular topic right now, containers themselves are not new. They have existed for many years. FreeBSD jails, Solaris Zones, LXC... there are many incarnations of containerization technology.
You may ask - then why is Docker so popular? For a few good reasons, but mainly because Docker offers a very convenient form factor for distributing and sharing code. Very complex applications can be run using a single command line and the containerization ensures that the application behaves the same regardless of the underlying OS, hardware and networking environment. This has made it very popular among developers and testers who need to quickly spin up complex environments.
Containers are not without challenges. While they work extremely well in development, organizations are still struggling with issues like isolation, security, network virtualization and monitoring of containers. As you will discover throughout this lab, VMware is fully embracing the concept of containers and is providing solutions to many of the problems we just discussed.
Containers are an OS-level virtualization method in which the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. The primary benefits of using containers include limited overhead, increased flexibility and efficient use of storage; the container looks like a regular OS instance from the user's perspective. Changes to the image can be made very quickly and pushed to a repository to share with others for further development and utilization.
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient. Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems.
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system.
Docker is a natural fit for microservice-based architectures.
This sounds a lot like virtual machines, doesn't it? Aren't they just variations on the same theme?
Well, not really... A virtual machine is an emulation of a piece of hardware. When you run a virtual machine, you run an entire operating system, including kernel and BIOS, that behaves as if it interacted with physical hardware. A container, on the other hand, is an isolated user-space instance; multiple containers share a single operating system kernel. Containers run on top of a host operating system, which, in turn, can run on either physical or virtual hardware.
A container is intended to run a single application. Containers are typically very specific, intended to run MySQL, Nginx, Redis, or some other application. So what happens if you need to run two distinct applications or services in a containerized environment? The recommendation is usually to use two separate containers. The low overhead and quick start-up times make running multiple containers trivial, thus they are typically scoped to a single application.
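As a sketch of that recommendation (hypothetical container names, assuming a Docker host is available), running a web server and a datastore as two separate single-purpose containers looks like this:

```shell
# Hedged sketch: two single-purpose containers instead of one container
# running two services. Assumes Docker is installed; skipped otherwise.
if command -v docker >/dev/null 2>&1; then
  docker run -d --name web nginx     # one container: the web server
  docker run -d --name store redis   # a second container: the datastore
  docker ps --format '{{.Names}}'    # each shows up as its own container
  STATUS=ran
else
  echo "docker not found; commands shown for illustration only"
  STATUS=skipped
fi
```

Because start-up is nearly instant and overhead is low, splitting services this way costs little and keeps each container scoped to one job.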
A virtual machine, on the other hand, has a broader range and can run almost any operating system. As you are likely aware, the virtual machine serves as an extremely firm boundary between OS instances that's enforced by a robust hypervisor, and connects to enterprise-level storage, network and compute systems in a trusted, well-defined and secure manner. Virtual machines have traditionally lent themselves to running 2nd Platform (Web - App - Database) applications that comprise 99% of the application space today.
Containers provide great application portability, enabling the consistent provisioning of the application across infrastructures. However, applications and data alone are rarely the major barrier to workload mobility. Instead, operational requirements such as performance and capacity management, security, and various management tool integrations can make redeploying workloads to new environments a significant challenge. So while containers help with portability, they’re again only a piece of a bigger puzzle.
Due to the fundamental differences in architecture (namely the ESXi hypervisor used by virtual machines versus the shared kernel space leveraged by containers), Linux containers will not achieve the same level of isolation and security. Furthermore, the toolsets available in the virtual machine ecosystem are battle-tested and Enterprise-grade, enabling scores of benefits (stability, compliance, integrated operations, etc) that are indispensable to operations and infrastructure teams.
For these reasons, VMware provides the best of both worlds by offering an optimized operating system built for containers to run with minimal overhead. By dedicating an extremely lightweight operating system to run containerized workloads, we don't have to choose one or the other - we can have both! By taking advantage of memory sharing, a core feature of the ESXi hypervisor, we drastically reduce the operating system overhead while enabling the application flexibility promised by containers.
In lab HOL-1830-02 we will look at vSphere Integrated Containers, which provides enterprise level isolation, security and networking for containers by turning them into virtual machine-like objects that can be managed just like virtual machines.
Notice: This section is all reading. If you are eager to get to typing on the keyboard, you can move on to the next module, but you will miss out on learning some very important concepts.
Containers enable you to deliver applications as something called microservices. This isn't a new concept. In fact, it's quite old. Back in the mainframe days, if you used something like IBM CICS, your code would be divided into transactions, each responsible for carrying out some specific business function. For example, withdrawing money from an account would be one transaction, depositing would be another, and so on. Similar ideas appeared in the 90s with CORBA and in the early 2000s with SOA.
At its core, the concept is simple: each business function is encapsulated in a self-contained service that exposes a well-defined, published interface describing how to interact with it. Since the services are self-contained and independent, they can be managed and upgraded independently. For example, if the credit card functionality of an application needs to be replaced, we can just replace the corresponding microservice. As long as we expose the same interface, any application component using the service will continue running without even noticing the change.
As we discussed above, this is nothing new. What's new is that the container form factor makes delivering and managing microservices a lot easier.
If we look at the outcomes delivered by a new model of IT, businesses are increasing their focus on App and Infrastructure Delivery Automation throughout the datacenter.
IT is making strides to provide the ability to enable faster delivery of application and IT Services leveraging capabilities derived from automated infrastructure and application provisioning.
Competitive businesses are delivering new applications to market in increasingly faster cycles, ushering in technologies like Linux containers and microservices. Next-generation applications are being built on infrastructure assumed to be dynamic and elastic. To keep our customers agile, our Cloud-Native Apps group builds infrastructure technologies to open, common standards that preserve security, performance, and ease-of-use, from developer desktop to the production stack.
To move faster, businesses implement a variety of cultural, design, and engineering changes. At VMware, we are striving to make the Developer a first class citizen of the Data Center and help align them with IT's journey to achieve streamlined App and Infrastructure Delivery Automation.
1st Platform systems were based around mainframes and traditional servers without virtualization. Consolidation was a serious issue and it was normal to run one application per physical server.
2nd Platform architectures have been the standard mode for quite a while. This is the traditional Client/Server/Database model with which you are likely very familiar, leveraging the virtualization of x86 hardware to increase consolidation ratios, add high availability and extremely flexible and powerful management of workloads.
3rd Platform moves up the stack, standardizing on Linux Operating Systems primarily, which allows developers to focus on the application exclusively. Portability, scalability and highly dynamic environments are valued highly in this space. We will focus on this for the rest of the module.
Microservices are growing in popularity, due in no small part to companies like Netflix and Paypal that have embraced this relatively new model. When we consider microservices, we need to understand both the benefits and the limitations inherent in the model, as well as ensure we fully understand the business drivers.
At its heart, microservice architecture is about doing one thing and doing it well. Each microservice has one job. This is clearly in stark contrast to the monolithic applications many of us are used to; using microservices, we can update components of the application quickly without forcing a full recompile of the entire application. But it is not a "free ride" - this model poses new challenges to application developers and operations teams as many assumptions no longer hold true.
The recent rise of containerization has directly contributed to the uptake of microservices, as it is now very easy to quickly spin up a new, lightweight run-time environment for the application.
The ability to provide single-purpose components with clean APIs between them is an essential design requirement for microservices architecture. At their core, microservices have two main characteristics: they are stateless and distributed. To help explain microservices architecture as a whole, let's take a closer look at the Twelve-Factor App methodology.
Microservice architecture has benefits and challenges. If the development and operating models in the company do not change, or only partially change, things could get muddled very quickly. Decomposing an existing app into hundreds of independent services requires some choreography and a well thought-out plan. So why are teams considering this move? Because there are considerable benefits!
Microservices can be accompanied by additional operations overhead compared to a monolithic application provisioned to an application server cluster. When each service is built out separately, each could potentially require clustering for failover and high availability. Once you add load balancing, logging and messaging layers between these services, the real estate starts to become sizable, even in comparison to a large off-the-shelf application. Microservices also require a considerable amount of DevOps and release automation skills. The responsibility of ownership does not end when the code is released into production; the developer essentially owns the application until it is retired. The natural evolution of the code, and the collaborative style in which it is developed, can create challenges when making a major change to the components of the application. This can be partially solved with backwards compatibility, but it is not the panacea that some in the industry may claim.
Microservices suit only certain use cases, and even then they open up a world of new possibilities that comes with new challenges and operational hurdles. How do we handle stateful services? What about orchestration? What is the best way to store data in this model? How do we guarantee a data persistence model? Precisely how do I scale an application properly? What about "simple" things like DNS and content management? Some of these questions do not have definitive solutions yet. A distributed system also introduces complexity in areas that previously were not such a large concern: network latency, fault tolerance, versioning, and unpredictable loads in the application. The operational cost of application developers needing to consider these potential issues in new scenarios can be high and should be expected throughout the development process.
When considering the adoption of microservices, ensure that the use case is sound, the team is aware of the potential challenges and, above all, the benefits of this model outweigh the cost.
Recommended reading: If you would like to learn more about the operational and feasibility considerations of Microservices, look up Benjamin Wootton and read some of his publications on the topic, specifically 'Microservices - Not A Free Lunch!'.
In this lesson we will discuss the tremendous impact containers are having on the IT industry, along with the benefits and challenges they bring.
The reason you're taking this lab is probably that you've heard talk of containers swirling around and are eager to understand what all the buzz is about. Let's take a minute to try to understand what has happened over the last few years.
Revenge of the Developer
Some of the most successful startups today are driven by developers and coders rather than business people. This is a radical change from how things used to be done. The business used to come up with ideas and push them down to development to have them translated into code. That has already changed in some industries: developers now constantly push the boundaries of what's possible and send their ideas to the business, which builds offerings around them. This shift fundamentally changes how IT operates and has driven the demand for new tools and technologies.
From Months to Minutes
Under the old paradigm, a release once every quarter was considered frequent. Today, some industries need to make several releases an hour to keep up. Also, it would be impossible to roll out three months' worth of changes to a system that's constantly used by millions of people and that doesn't tolerate any downtime. Frequent changes mean smaller changes, which mean easier rollouts. In addition, if you keep the changes well encapsulated and independent from each other, they become much easier to roll back if something should go wrong.
It's been reported that the big social media and web portal companies deploy hundreds of changes every day. Clearly, something in the way we build and deploy code has to change fundamentally!
DevOps and CI/CD
To get to the speed and agility needed, we need to change our processes fundamentally.
You may have heard of DevOps. This is the idea that we can no longer have developers simply throwing code over the fence to operations and hope that it gets deployed. It's too slow and error prone. Instead we need to streamline the handover and even eliminate it. DevOps is the idea of integrating development and operations so they behave as one unit.
Sounds good, but how do we get there? One way is to implement a Continuous Integration/Continuous Deployment process. The idea is that whenever a developer commits a piece of code and tags it as ready for release, the deployment process automatically kicks off. In some cases, this process can be made completely automatic by introducing automated testing and deployment.
The challenge to implementing a CI/CD pipeline is that it requires you to set up and tear down complete environments very quickly.
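A minimal sketch of one such pipeline step follows. The image name (myapp) and test script (run-tests.sh) are assumptions for illustration; a real CI system would run these steps automatically on every tagged commit:

```shell
# Hedged CI/CD sketch: build an image, run the test suite inside a
# throwaway container, then tear the environment down again.
if command -v docker >/dev/null 2>&1; then
  docker build -t myapp:candidate .                # build the release candidate
  docker run --rm myapp:candidate ./run-tests.sh   # test in a clean environment
  docker rmi myapp:candidate                       # tear the environment down
  STATUS=ran
else
  echo "docker not found; commands shown for illustration only"
  STATUS=skipped
fi
```

The key point is the speed: the entire test environment is created and destroyed in seconds, which is exactly what a CI/CD pipeline needs.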
Docker to the Rescue!
Docker is no silver bullet, but it offers some attractive features helping anyone who is trying to increase speed and agility:
So why isn't everyone doing this right now? Because problems arise when you try to operationalize this model. There are currently no good tools for monitoring and managing containers. Also, most IT operations organizations lack the knowledge and processes for handling containers. Some of the challenges IT operations is struggling with are:
VMware is aggressively working to mitigate these problems using products like vSphere Integrated Containers, Harbor and Admiral.
Congratulations on completing Module 1. You should now have a solid foundational understanding of container technologies and the Docker implementation.
Based on your interests, please proceed to any module below:
To end your lab click on the END button.
Module 2: A Quick Tour of Docker (15 minutes)
In this lesson we will cover the following:
Enough theory for now! Let's get a container up and running! To begin, we're going to SSH from our Windows console to a VM running Photon OS. We'll cover Photon OS in detail a bit later in the lab, but for now, know that Photon OS is an open-source, lightweight Linux distribution optimized for virtualized environments and containers. From there we will run our Docker commands.
Here are the steps:
Type root followed by Enter.
docker run -it busybox
ifconfig eth0
Notice that you ended up in a different file system.
Type exit followed by Enter. You are now back at the command prompt of the host VM.
Congratulations! You just created and ran your first container! But what actually happened?
The command docker run -it busybox told Docker to run a Busybox environment. This is a tiny Linux environment typically used for smaller devices, such as routers. If the Docker image for busybox hadn't existed in your local cache, Docker would have downloaded it. To make things quicker, we made sure you had it cached. The -it flags simply mean that you want an interactive process with the current terminal connected to it.
So this dropped you into the shell of busybox. You also noticed that you were given a different file system. This is because each Docker container comes with its own file system that's completely isolated from the host. You also noticed you were connected to the 172.x.x.x network instead of the 192.168.x.x one. This is because docker creates an ad-hoc virtual network for your containers so that ports you open in different containers won't interfere with each other.
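You can see that ad-hoc network yourself. As a hedged sketch (assuming Docker's default bridge network, which standalone containers join unless told otherwise), inspecting it shows the 172.x.x.x subnet Docker assigned:

```shell
# Hedged sketch: list Docker networks and inspect the default bridge
# to see the private subnet containers are attached to.
if command -v docker >/dev/null 2>&1; then
  docker network ls     # typically shows bridge, host and none
  docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
  STATUS=ran
else
  echo "docker not found; commands shown for illustration only"
  STATUS=skipped
fi
```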
We've already used the terms container and image. Let's define them and some other concepts a bit more in detail.
Let's get hands on again and do something a bit more useful. How about starting an nginx web proxy? That can be a bit of a daunting task when doing it in a traditional VM. With Docker, it's a bit easier. Let's do it:
docker run -d -p 80:80 nginx
Request localhost from the command line (for example, curl localhost). You should see a short HTML document printed in your terminal window.
You probably noticed two things: You weren't dropped into the command prompt of the container and you could access the service on the http port (80) on the Docker host itself (localhost). It's all in the command options!
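The option doing the work is -p 80:80, which publishes port 80 in the container on port 80 of the Docker host. A quick sketch of verifying that mapping (the container name webtest is an assumption for illustration; assumes curl is available):

```shell
# Hedged sketch: start nginx with a published port, then fetch the page
# from the host side to confirm the mapping, and clean up afterwards.
if command -v docker >/dev/null 2>&1; then
  docker run -d --name webtest -p 80:80 nginx
  docker port webtest            # shows how container port 80/tcp is mapped
  curl -s localhost | head -n 4  # the nginx welcome page, via the host port
  docker rm -f webtest           # remove the test container
  STATUS=ran
else
  echo "docker not found; commands shown for illustration only"
  STATUS=skipped
fi
```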
Speaking of private container networks, let's have a quick look at how that works.
Let's clean up after this exercise and get ready for the next!
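The cleanup typically amounts to stopping and removing any containers started in the exercise. As a sketch:

```shell
# Hedged sketch: remove every container, running or stopped.
# 'docker ps -aq' prints all container IDs; -f stops running ones too.
if command -v docker >/dev/null 2>&1; then
  docker ps -aq | xargs -r docker rm -f
  STATUS=ran
else
  echo "docker not found; commands shown for illustration only"
  STATUS=skipped
fi
```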
We learned in the previous section how Docker automatically creates a private container network for communication between your containers. But what if we want to create a separate network for a specific group of containers? That's where the docker network create command comes in handy!
Let's spin up a new network, then create an nginx proxy and a busybox and hook them both up to our newly created network.
docker network create hol-net
docker run -d --network hol-net --name web nginx
The --name parameter assigns a name to the container. This name can be used as a hostname by containers wanting to connect to it. We're calling the container "web".
docker run -it --network hol-net busybox
ping -c 1 web
docker network rm hol-net
So far, we've only been using pre-built images. But how were those images created in the first place? And what if you want to build your own customized image?
That's where the docker build command comes into the picture. This command reads a definition file called a Dockerfile and uses the instructions in the file to build a new image. Think of the Dockerfile as the source code of an image.
Let's build a simple application inside an image. Our "code" is the file "hello.sh", and our job is to build a container based on busybox, copy our application code into it, and instruct Docker to run that code when the container is started.
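The lab provides the Dockerfile for you; a minimal version matching the description above (a hedged sketch, assuming hello.sh sits in the same directory as the Dockerfile) would look roughly like this:

```dockerfile
# Sketch of the lab's Dockerfile: start from busybox, copy in the
# "application", make it executable, and run it on container start.
FROM busybox
COPY hello.sh /hello.sh
RUN chmod +x /hello.sh
CMD ["/hello.sh"]
```

Each instruction creates a layer in the resulting image, which is why images built from the same base can share so much on disk.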
docker build -t hello .
docker run hello
In this module, we have covered the following:
Congratulations on completing Module 2.
Proceed to any module below which interests you most.
To end your lab click on the END button.
Module 3: Deploying a Docker Application on VMware PhotonOS (15 minutes)
An introduction to VMware Photon OS, a container host operating system optimized for vSphere.
All the exercises in this module are running on top of Photon OS. Let's take a couple of minutes to understand what Photon OS is and why VMware decided to offer a container host OS.
Photon OS is a lightweight Linux operating system for cloud-native apps. Photon OS is optimized for vSphere and vSphere-based cloud offerings, providing an easy way for our customers to extend their current platform with VMware and run modern, distributed applications using containers.
Photon provides the following benefits:
We have open sourced Photon OS to encourage widespread contributions and testing from customers, partners, prospects, and the developer community at large. It is available today on GitHub for forking and experimentation; the binary is also available on JFrog Bintray.
By offering Photon OS, we are able to provide integrated support for all aspects of the infrastructure, adding to the leading compute, storage, networking, and management found today. Customers will benefit from end-to-end testing, compatibility, and interoperability with the rest of our software-defined data center and End User Computing product portfolios. Through integration between Photon and Lightwave, customers can enforce security and governance on container workloads, for example, by ensuring only authorized containers are run on authorized hosts by authorized users.
In this lesson, we will learn how to deploy a multi-node application using the docker-compose tool. Specifically, we will discuss:
In this section, we will deploy a more meaningful application. We're going to build a simple product rating system for a storefront website. The customer feedback is stored in a replicated Redis datastore and accessed through a PHP-based front end. We are going to deploy each of these three components in its own container and make them all communicate on a private container network. The PHP frontend will expose port 80 to the public network.
One of the challenges we're going to face is how to deploy a multi-tier application without having to manually deploy each tier and carefully tie the tiers together. Luckily, there are several tools for doing that in an enterprise environment. Kubernetes is probably the most well-known one in that space.
In this exercise, we're going for a simpler and more lightweight solution using Docker Compose. This tool lets us define the containers making up the tiers of the application in a single YAML file and bring the entire application up with a single command.
We have already provided a docker-compose file for you to use with this exercise. Let's familiarize ourselves with the structure of this file.
cat docker-compose.ymlYou should see a short text file. Let's examine the contents!
Since we have dependencies between the nodes, Docker will automatically place them on a container network and link them together. It will also set up a simple naming service so that the code in the frontend can refer to the nodes by name instead of using IP addresses.
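To make that structure concrete, a compose file for an app like this might look roughly as follows. This is a hedged sketch with assumed service and image names, not the lab's exact file:

```yaml
# Sketch of a three-tier compose file: a PHP frontend exposing port 80,
# plus a Redis master and a replica reachable only on the app's network.
version: "2"
services:
  frontend:
    image: php:apache          # assumed image name, for illustration
    ports:
      - "80:80"                # only the frontend is exposed publicly
    depends_on:
      - redis-master
      - redis-slave
  redis-master:
    image: redis
  redis-slave:
    image: redis
    command: redis-server --slaveof redis-master 6379
```

Note how the slave refers to the master by service name; Compose's built-in naming service resolves it, so no IP addresses appear anywhere in the file.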
Now for the fun (and easy) part. Let's start the application. Since the application is fully described by the docker-compose file, we can bring it up without adding any additional parameters.
docker-compose up -d
The -d option just tells docker-compose to start the containers in background mode without assigning the current terminal to them.
Let's test the application!
The application will write the product review to the master Redis node and then read it back from the slave. This demonstrates the behavior of a simple multi-tier application implemented as containers. Given the modular design of the application and the use of microservices, we can replace individual components with little or no downtime.
Let's clean up after ourselves by bringing the application down!
Not surprisingly, the "down" command does the exact opposite of "up" and brings down the containers, as well as releasing all resources associated with them.
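As with up, the teardown is a single command; a sketch (assuming docker-compose is installed and you are in the directory containing docker-compose.yml):

```shell
# Hedged sketch: bring the whole application down. The containers and
# the network Compose created are removed; named volumes survive unless
# you also pass the -v flag.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose down
  STATUS=ran
else
  echo "docker-compose not found; command shown for illustration only"
  STATUS=skipped
fi
```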
In this section we will explore stateful components and how to use volumes to store state.
Within the context of application services, the term "state" typically refers to data that needs to be preserved between function calls. The contents of a shopping cart in a web store are a typical example of state. Some state is said to be persistent, which means that it must survive system restarts and crashes. Typically, that means it has to be written to some secondary storage, such as a disk.
By default, when you run a container, it creates its own ad hoc file system that stores data as long as the container is active. If you shut down the container and delete it, the data is forever lost. This is probably not what you want when running e.g. a database inside a container. To solve this problem, you can specify volumes that are stored externally in the file system of the Docker host.
Let's create a simple container with an external volume!
docker run -it -v /data/busybox:/data busybox
echo "Kilroy was here" > /data/test.txt
We are now back in the host. Let's look at the directory we mapped by typing the following and pressing Enter.
ls /data/busybox
You will see that the file "test.txt" appeared in the directory. This is because the directory was mapped to /data in the container, and we created the file there from inside the container.
docker run -it -v /data/busybox:/data busybox
Let's revisit our product rating application! Obviously, we'd like the product ratings we enter to survive container shutdowns or crashes. To do that, we need to assign external volumes to the two Redis nodes. Let's have a look at how that's done!
Let's try to run the demo app with persistent state backed by volumes! We have entered a couple of product reviews as we prepared the lab and since we're keeping persistent state in this version of the app, you should be able to see them.
docker-compose up -d
Extra credits: Enter a few new reviews and shut the application down using the docker-compose down command, then bring the application up again and make sure the reviews are still there.
Docker volumes are a great way of making data persistent across invocations of containers. But the standard, built-in Docker volumes have a serious drawback: They are local to the Docker host where they were created. This becomes a problem when deploying an application to a cluster, such as a Docker Swarm or Kubernetes cluster. Since the volume is local to a host, data would not be accessible if a container failed over to another host.
To address this problem, VMware has released vSphere Docker Volume Service (vDVS). By implementing a Docker volume driver that's backed by a vSphere Datastore, volumes can be accessed across different Docker hosts as long as they can access that datastore.
Installing vSphere Docker Volume Service takes only a couple of minutes. This process has already been finished in your lab, but we will outline the steps for illustration purposes.
There are two steps to this installation. The first step installs a driver module into the ESXi servers for managing the volumes. The second command installs a volume driver on the Docker hosts. Please note that these commands have to be executed on every ESXi host in the cluster and every Docker host, respectively!
Don't type either of the next two commands! They are for illustration purposes only!
esxcli software vib install -v /tmp/VMWare_bootbank_esx-vmdkops-service_0.14.0577889-0.0.1.vib
sudo docker plugin install --grant-all-permissions --alias vsphere vmware/docker-volume-vsphere:latest
Creating a volume is very simple. The only difference between a standard Docker volume and a vSphere backed volume is that we need to explicitly specify the volume driver using the -d option.
docker volume create -d vsphere vol1
docker run -it --rm -v vol1:/data busybox
echo "HOL rocks!" > /data/test.txt
Log into the second Docker host.
docker run -it --rm -v vol1:/data busybox
Type exit followed by Enter to exit the container.
Type exit followed by Enter to exit the terminal. The terminal window should close.
Let's have a look at what happened behind the scenes in vSphere. You will notice that it's all implemented using standard vSphere components. The volume is mapped to a VMDK residing in a standard datastore. This means that it can be treated just like a normal VMDK for backups etc.
Let's start by logging in to vCenter.
Once the vCenter UI appears, we can navigate to the datastore and inspect it.
In this module, we have covered the following:
Congratulations on completing Module 3 and the HOL-1830-01-CNA - Container Basics with Photon OS lab!
You should now have a solid understanding of basic container and Docker concepts, as well as gained some experience deploying containers on a virtualized container host.
You may next want to explore how VMware VIC (vSphere Integrated Containers) adds additional and unique value to container management and operationalization. This content can be found in the following lab:
HOL-1830-02-CNA - vSphere Integrated Containers.
If you have already finished the other Modules, you may now end the lab.
To end your lab click on the END button.
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-1830-01-CNA