Lab Overview - HOL-2210-01-SDC - Virtualization 101: Introduction to vSphere
If you are not familiar with Virtualization, this lesson will give you an introduction to it.
If you are familiar with virtualization or have taken this lab previously, you can jump ahead to Module 1 - Introduction to management with vCenter Server.
Today's x86 computer hardware was designed to run a single operating system and a single application, leaving most machines vastly underutilized. Virtualization lets you run multiple virtual machines on a single physical machine, with each virtual machine sharing the resources of that one physical computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer.
Virtualization works by placing an additional layer of software, called a hypervisor, on top of your physical server. The hypervisor enables you to install multiple operating systems and applications on a single server.
By isolating the operating system from the hardware, you can create a virtualization-based x86 platform. VMware's hypervisor-based virtualization products and solutions provide you the fundamental technology for x86 virtualization.
In this screen, you can see how partitioning helps improve utilization.
You can isolate a VM to find and fix bugs and faults without affecting other VMs and operating systems. Once fixed, an entire VM can be restored in minutes.
Encapsulation simplifies management by helping you copy, move and restore VMs by treating entire VMs as files.
VMs are not dependent on any physical hardware or vendor, making your IT more flexible and scalable.
Virtualization enables you to consolidate servers and contain applications, resulting in high availability and scalability of critical applications.
Virtualization eliminates the need for any hardware configuration, OS reinstallation and configuration, or backup agents. A simple restore can recover an entire VM.
A technology called thin provisioning helps you optimize space utilization and reduce storage costs. It allocates storage to a VM only as the VM actually writes data, allowing unused datastore space to be shared with other VMs.
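The space savings described above can be illustrated with a minimal sketch. This is not a VMware API — the disk sizes and the accounting model are hypothetical, purely to show why thin disks consume less datastore space than their provisioned size:

```python
# Illustrative model of thin provisioning (not a VMware API): every virtual
# disk has a provisioned logical size, but a thin disk only consumes
# datastore space for the data the guest has actually written.

def datastore_usage(disks):
    """Return (provisioned_gb, used_gb) for a list of disks.

    Each disk is a dict with 'size_gb', 'written_gb', and 'thin' (bool).
    A thick disk consumes its full size up front; a thin disk consumes
    only what has been written so far.
    """
    provisioned = sum(d["size_gb"] for d in disks)
    used = sum(d["written_gb"] if d["thin"] else d["size_gb"] for d in disks)
    return provisioned, used

disks = [
    {"size_gb": 100, "written_gb": 20, "thin": True},
    {"size_gb": 100, "written_gb": 20, "thin": True},
    {"size_gb": 100, "written_gb": 20, "thin": False},  # thick: full 100 GB used
]
provisioned, used = datastore_usage(disks)
print(provisioned, used)  # 300 GB provisioned, but only 140 GB consumed
```

In this sketch, 300 GB of virtual disks fit in 140 GB of physical space; the gap is exactly the unwritten capacity of the two thin disks.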
Note: It may take more than 90 minutes to complete this lab. You may only finish 2-3 of the modules during your time. However, you may take this lab as many times as you want. The modules are independent of each other so you can start at the beginning of any module and proceed from there. Use the Table of Contents to access any module in the lab. The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.
This introductory lab demonstrates the core features and functions of vSphere and vCenter. This is an excellent place to begin your Virtualization 101 experience.
This lab will walk you through the core features of vSphere and vCenter, including storage and networking. The lab is broken into 3 Modules and the Modules can be taken in any order.
Lab Module List:
Each Module will take approximately 60-90 minutes to complete, but based on your experience this could take more or less time.
We have included videos throughout the modules. To get the most out of these videos, it is recommended that you have headphones to hear the audio. The timing of each video is noted next to the title. In some cases, videos are included for tasks we are unable to show in a lab environment, while others are there to provide additional information. Some of these videos may show an earlier edition of vSphere; however, the steps and concepts are largely the same.
Lab Captains:
This lab manual can be downloaded from the Hands-on Labs document site found here:
This lab may be available in other languages. To set your language preference and view a localized manual deployed with your lab, utilize this document to guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
Welcome! If this is your first time taking a lab, navigate to the Appendix in the Table of Contents to review the interface and features before proceeding.
For returning users, feel free to start your lab by clicking next in the manual.
Please verify that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
Module 1 - Introduction to Management with vCenter Server (60 Min)
This module will start with an interactive simulation of an ESXi installation. ESXi is the foundation of vSphere and is sometimes referred to as the host. After the installation, the ESXi Host Client will be reviewed. It is a web-based management tool that allows you to manage a single ESXi host at a time.
The remainder of the module will focus on using the vSphere Client to access vCenter Server and manage your entire virtual infrastructure using one interface. Virtual machines will be created, with more details covered on how to manage and monitor the environment. Lastly, you will be introduced to vSphere Platinum, which provides advanced security capabilities in vSphere in combination with VMware AppDefense.
This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to experience steps which are too time-consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.
The lab continues to run in the background. If the lab goes into standby mode, you can resume it after completing the module.
The VMware Host Client is an HTML5-based client that is used to connect to and manage single ESXi hosts.
You can use the VMware Host Client to perform administrative and basic troubleshooting tasks, as well as advanced administrative tasks on your target ESXi host. You can also use the VMware Host Client to conduct emergency management when vCenter Server is not available.
It is important to know that the VMware Host Client is different from the vSphere Web Client, regardless of their similar user interfaces. You use the vSphere Web Client to connect to vCenter Server and manage multiple ESXi hosts, whereas you use the VMware Host Client to manage a single ESXi host.
For additional details on the VMware Host Client, please see this PDF (https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-host-client-1370-guide.pdf)
This lesson will walk through some of the most frequently used features in the ESXi Host Client.
The ESXi Host, in this case, esx-03a, can now be directly managed. This can be useful in test/dev environments where a vCenter Server is not present or in a production environment where the vCenter Server is not reachable.
The initial screen shows high-level details and recent tasks. There are also various power options for the host and an Actions menu for the most common tasks. Note that the server is currently in Maintenance Mode, which will be discussed in a future lesson. Click to minimize the Recent tasks interface to gain more room.
On the System tab, the most commonly set options are the date and time for the host, which can be synchronized with an NTP server or set manually. Autostart settings for the host can also be configured here.
This is where power management policies can be set for the host.
Services like SSH access and the Direct Console UI can be stopped and started from this screen.
On the Security & Users tab, security options such as Active Directory authentication and certificates can be configured. There is also the ability to create additional roles and user accounts for the host itself. These accounts are local to the host only and are not shared with any other hosts or vCenter Server. vCenter Server uses single sign-on, which makes account management much easier; this will be reviewed in the lessons that follow.
The Monitor section includes Performance Charts, Hardware monitoring, an event log and other useful monitoring information.
On the Logs tab, a support bundle can be created that includes log files and system information that can be helpful in troubleshooting issues.
This operation will automatically download the support file. It will take a couple of minutes.
You may be asked to provide credentials. Use the same information you used to log in:
Since these features will be covered throughout the lab and the actions performed are identical, just at the vCenter Server level, we will not be reviewing them here.
The ESXi Host Client can be very useful in situations where a vCenter Server is not present to manage the host. However, when a vCenter Server is present, it is the preferred option and provides better tools to manage your infrastructure as a whole.
vCenter Server unifies resources from individual hosts so that those resources can be shared among virtual machines in the entire datacenter. It accomplishes this by managing the assignment of virtual machines to the hosts and the assignment of resources to the virtual machines within a given host based on the policies that the system administrator sets.
The above diagram shows how vCenter fits in the vSphere stack. With vCenter installed, you have a central point of management. vCenter Server allows the use of advanced vSphere features such as vSphere Distributed Resource Scheduler (DRS), vSphere High Availability (HA), vSphere vMotion, and vSphere Storage vMotion.
The other component is the vSphere Web Client. The vSphere Web Client is the interface to vCenter Server and multi-host environments. It also provides console access to virtual machines. The vSphere Web Client lets you perform all administrative tasks by using an in-browser interface.
First, there is no longer an option to deploy the external Platform Services Controller (PSC). The only option is the vCenter Server Appliance which contains an embedded PSC. Embedded PSCs have all of the services required to manage a vSphere SSO Domain.
The vCenter Server Appliance (vCSA) is a single preconfigured Linux-based virtual machine optimized for running vCenter Server and associated services.
The Platform Services Controller (PSC) includes common services that are used across the suite. These include Single Sign-On (SSO), Licensing, and the VMware Certificate Authority (VMCA). You will learn more about SSO and the VMCA in the following pages.
In vCenter Server 7, PSC convergence now happens automatically during a vCenter Server upgrade! There is no longer a need to perform an upgrade and a convergence as two separate tasks. When upgrading your vCenter Server from version 6.5 or 6.7 to 7.0, the installer can detect external PSCs, which allows these two processes to be merged for a simplified method of upgrading and consolidating deprecated SSO topologies.
Once the Platform Services Controller is converged, it remains in inventory to be decommissioned by the vSphere Administrator. The upgrade and convergence process in vCenter Server 7 does not decommission the PSC automatically.
vSphere 5.1 introduced vCenter Single Sign-On (SSO) as part of the vCenter Server management infrastructure. This change affects vCenter Server installation, upgrading, and operation. Authentication with vCenter Single Sign-On makes the VMware cloud infrastructure platform more secure by allowing the vSphere software components to communicate with each other through a secure token exchange mechanism, instead of requiring each component to authenticate a user separately with a directory service such as Active Directory.
Starting with vSphere 6.0, vCenter Single Sign-On is either included in an embedded deployment or part of the Platform Services Controller. The Platform Services Controller contains all of the services that are necessary for the communication between vSphere components including vCenter Single Sign-On, VMware Certificate Authority, VMware Lookup Service, and the licensing service. For example, in the image above, SSO resides within the Platform Services Controller as part of this multi-vCenter topology.
In a single vCenter topology, the PSC (along with all of its associated services) can run on a single machine, also called the embedded deployment. This single machine could be a physical Windows server, a Windows VM, or the vCSA.
While vCenter Server requires a database, as shown above, SSO itself does not have such a requirement.
The second Module in this lab, Introduction to vSphere Networking and Security covers SSO in more detail.
However, you can also refer to the vCenter 7 Deployment Guide for more in-depth requirements and considerations for SSO architecture in vCenter 7.
The previous lesson reviewed the ESXi Host Client, which can be used to manage one ESXi host at a time. This lesson will introduce the vSphere Client which is used to connect to vCenter Server to manage your collective infrastructure as a whole. In addition, the process of creating a virtual machine will also be covered.
The vSphere Client is the primary method for system administrators and end-users to interact with the virtual data center environment created by VMware vSphere. vSphere manages a collection of objects that make up the virtual data center, including hosts, clusters, virtual machines, data storage, and networking resources.
The vSphere Client is a Web browser-based application that you can use to manage, monitor, and administer the objects that make up your virtualized data center. You can use the vSphere Client to observe and modify the vSphere environment in the following ways.
You can extend vSphere in different ways to create a solution for your unique IT infrastructure. You can extend the vSphere Client with additional GUI features to support these new capabilities, with which you can manage and monitor your unique vSphere environment.
If you are not already in Chrome, double click on Google Chrome on your desktop. If you are already in Google Chrome, open a new tab.
Log in using the following method:
By default, you are brought to a view that shows the Hosts and Clusters attached to vCenter. Get a more complete look by viewing the Global Inventory Lists.
Clicking Global Inventory Lists will take you to the inventory page where you find all the objects associated with vCenter Server systems such as data centers, hosts, clusters, networking, storage, and virtual machines.
Here are all the virtual machines associated with this vCenter instance.
Add another network adapter to the windows10 machine.
Click on the arrows to view the Recent Tasks list and watch the task's progress.
Review the "Recent Tasks" list. Once the task is complete, a second Network Adapter should be shown in the "VM Hardware" section. Note the networks are in a disconnected state because the VM is powered off.
Once you are done viewing the Recent Tasks list, click the down-arrows to minimize it.
In the next steps, we will create a virtual machine and then, install an operating system.
This wizard is used to create a new Virtual Machine and place it in the vSphere inventory.
Because Distributed Resource Scheduler (DRS) is not enabled, you just have to select a host to use for the VM. More details on DRS will be covered later in this module.
In this step, we will be selecting what operating system we will be installing. When we select the operating system, the supported virtual hardware and recommended configuration is used to create the virtual machine. Keep in mind this does not create a virtual machine with the operating system installed, but rather creates a virtual machine that is tuned appropriately for the operating system you have selected.
The recommended virtual hardware settings are shown as the default. These can be modified if needed.
The settings for the virtual machine can be verified prior to it being created.
Congratulations on creating your first virtual machine web-serv01!
In the next steps, Photon OS will be installed on the virtual machine.
To make it easier to install operating systems on virtual machines, ISO images can be used. These can be kept in the same storage used for virtual machines. In addition, vCenter offers a Content Library as a repository. Content Libraries can then be synchronized to ensure every location is using the same versions.
This will open a file explorer to select that file.
Finally, we want to attach or connect the ISO image to the virtual machine.
Note you also have the option of using the VMware Remote Console (VMRC). This console is a separate application that needs to be installed on your local device, as opposed to the Web Console, which launches in a new browser tab. The VMRC can be useful in situations where you need more capabilities, such as attaching devices or additional power cycling options.
A new tab will open and you will be presented with the Photon OS boot screen.
After the boot process is complete, you will be presented with a license agreement.
NOTE: If 1. Hypervisor optimized is selected, the virtual machine will not boot. This is due to the unique environment the Hands-on Labs are running in.
Note that Photon requires a complex, non-dictionary password, which is why the typical password is being repeated.
After a minute or two, the installation will be complete.
Press a key to reboot the virtual machine. After a minute or two, the system should boot to the login prompt.
Now that the operating system has been installed and is up and running, the ISO image needs to be disconnected from the virtual machine.
Make sure web-serv01 is still highlighted.
VMware provides several ways to provision vSphere virtual machines. In the last lesson, you saw how to create a virtual machine and manually install the operating system.
The virtual machine that was created can then be used as a base image from which to clone other virtual machines. Cloning a virtual machine can save time if you are deploying many similar virtual machines. You can create, configure, and install software on a single virtual machine. You can clone it multiple times, rather than creating and configuring each virtual machine individually.
Another provisioning method is to clone a virtual machine to a template. A template is a master copy of a virtual machine that you can use to create and provision virtual machines. Creating a template can be useful when you need to deploy multiple virtual machines from a single baseline but want to customize each system independently of the next. A common value point for using templates is to save time. If you have a virtual machine that you will clone frequently, make that virtual machine a template, and deploy your virtual machines from that template.
In this lesson, you will clone an existing Virtual Machine to a Template and deploy a new Virtual Machine from that Template.
Please leave the location as RegionA01 for this lab.
Select a compute resource:
When cloning a virtual machine from a template, the guest operating system and virtual hardware can be modified. For this example, we will not customize the operating system or hardware.
The vSphere Client provides some powerful search options. This lesson will guide you through the different search options to find the inventory of interest quickly. Also, the vCenter Inventory Service enables users to create custom defined tags that can be categorized and added to any inventory objects in the environment. These tags are searchable metadata and reduce the time to find inventory object information. This lab will cover how to create tags and use the tags for a search.
At the top of the vSphere Client is a search bar that can be used to find objects quickly. This can be an object's name, such as app-serv01, or an ESXi host. Tags can also be attached to objects, and the search feature can be used to find them as well.
You can see all of the objects that contain the word tiny.
On this page, you can see all the results for objects that contain the word tiny. If you have a large inventory, the results can be narrowed down further by selecting the object type you are looking for. Tags or Custom Attributes could be used to narrow the search results down. Selecting the object type can help you quickly find the object you are looking for.
You can then filter the results down even further by specifying:
The search field is updated with the results.
If this is a frequently used search, it can be saved for use in the future.
Note that the name must be in lowercase with no spaces between words.
You use tags to add metadata to inventory objects. You can record information about your inventory objects in tags and use the tags in searches.
You use categories to group tags together and define how tags can be applied to objects.
Every tag must belong to one and only one category. You must create at least one category before creating any tags.
Associable Object Types: We will use the default which states that the new tag in this category can be assigned to all objects. The other option is you can specify a specific object, such as virtual machines or datastores.
In order for these tags to be useful, they need to be assigned to objects. In the next steps, the tag will be assigned to virtual machines.
This lab shows how to use the vSphere Client to enable and configure vSphere Availability (HA) and the Distributed Resource Scheduler (DRS). HA protects against downtime by automating recovery in the event of a host failure. DRS ensures performance by balancing virtual machine workloads across the hosts in a cluster.
vSphere Availability provides high availability for virtual machines by pooling the virtual machines and the hosts they reside on into a cluster. Hosts in the cluster are monitored and in the event of a failure, the virtual machines on a failed host are restarted on alternate hosts.
When you create a vSphere Availability cluster, a single host is automatically elected as the primary host. The primary host communicates with vCenter Server and monitors the state of all protected virtual machines and of the secondary hosts. Different types of host failures are possible, and the primary host must detect and appropriately deal with the failure. The primary host must distinguish between a failed host and one that is in a network partition or has become network isolated. The primary host uses network and datastore heartbeating to determine the type of failure. Also note that vSphere Availability is a host function, which means it does not depend on vCenter Server in order to fail over VMs to other hosts in the cluster.
By selecting VM and Application Monitoring, a VM will be restarted if heartbeats are not received within a set time (the default is 30 seconds).
Here, we are setting aside a percentage of CPU and memory resources to be used for failover; in the case above, 25% of each.
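The percentage-based reservation above is simple arithmetic over the cluster's total capacity. The following sketch models it with hypothetical cluster numbers (the host counts and capacities are invented for illustration, not read from vCenter):

```python
# Illustrative calculation for HA admission control using the "percentage of
# cluster resources reserved" policy. The cluster capacity figures below are
# hypothetical; vCenter computes this from the actual hosts in the cluster.

def failover_capacity(total_mhz, total_mb, pct_cpu=25, pct_mem=25):
    """Return the CPU (MHz) and memory (MB) set aside for failover."""
    return total_mhz * pct_cpu // 100, total_mb * pct_mem // 100

# A hypothetical 3-host cluster: 3 x 20,000 MHz of CPU and 3 x 65,536 MB of RAM.
reserved_mhz, reserved_mb = failover_capacity(60_000, 196_608)
print(reserved_mhz, reserved_mb)  # 15000 MHz and 49152 MB held in reserve
```

With 25% reserved, VMs can only be powered on while the remaining 75% of cluster capacity can satisfy their reservations; the reserved slice stays free to restart VMs from a failed host.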
This is another layer of protection. Heartbeat Datastores allows vSphere HA to monitor hosts when a management network partition occurs and to continue to respond to failures that occur.
Note: If you do not see the OK button, you may need to zoom out on the web browser to see it.
It will take a minute or two to configure vSphere HA. You can monitor the progress in the Recent Tasks window.
Once the three tasks have been completed, you can move on to the next step.
If vSphere HA does not show Protected and the tasks completed successfully, you may need to click the refresh button.
Notice the bars that display resource usage in blue, protected capacity in light gray, and reserve capacity using stripes.
The chart above shows how DRS affects placement and migration according to the automation level selected: Manual, Partially Automated, or Fully Automated.
You can use vSphere Fault Tolerance for your virtual machines to ensure continuity with higher levels of availability and data protection. Fault Tolerance is built on the ESXi host platform, and it provides availability by having identical Virtual Machines (VM) run on separate hosts.
vSphere Fault Tolerance (FT) provides continuous availability by creating and maintaining a Secondary VM whose state is identical to that of the Primary VM (the protected virtual machine). In the event of a failover, the Secondary VM immediately replaces the Primary VM. The Secondary VM is created and runs on another host. The Primary VM is continuously replicated to the Secondary VM so that the Secondary VM can take over at any point, thereby providing Fault Tolerant protection. The Primary and Secondary VMs continuously monitor the status of one another to ensure that Fault Tolerance is maintained.
Fault Tolerance avoids "split-brain" situations, which can lead to two active copies of a virtual machine after recovery from a failure. Atomic file locking on shared storage is used to coordinate failover so that only one side continues running as the Primary VM and a new Secondary VM is respawned automatically. vSphere Fault Tolerance can accommodate symmetric multiprocessor (SMP) virtual machines with up to four vCPUs. The entire process is transparent and fully automated and occurs even if vCenter Server is unavailable.
The benefits of Fault Tolerance are:
Several typical situations can benefit from the use of vSphere Fault Tolerance. Fault Tolerance provides a higher level of business continuity than vSphere HA. When a Secondary VM is called upon to replace its Primary VM counterpart, the Secondary VM immediately takes over the Primary VM's role with the entire state of the virtual machine preserved. Applications are already running, and data stored in memory does not need to be reentered or reloaded. In contrast, failover provided by vSphere HA restarts the virtual machines affected by a failure.
This higher level of continuity and the added protection of state information and data provides the following use cases where you would want to implement Fault Tolerance:
Another key use case for protecting a virtual machine with Fault Tolerance can be described as On-Demand Fault Tolerance. In this case, a virtual machine is adequately protected with vSphere HA during normal operation. During certain critical periods, you might want to enhance the protection of the virtual machine. For example, you might be running a quarter-end report which, if interrupted, might delay the availability of critical information. With vSphere Fault Tolerance, you can protect this virtual machine before running this report and then turn off or suspend Fault Tolerance after the report has been produced. You can use On-Demand Fault Tolerance to protect the virtual machine during a critical time period and return the resources to normal during non-critical operation. See the Performance Best Practices for VMware vSphere and vSphere 7.0 Availability for more information.
This video shows how to protect virtual machines with VMware Fault Tolerance (FT). Due to resource constraints in the Hands-on Labs environment we are unable to demonstrate this live for you.
vSphere includes a user-configurable events and alarms subsystem. This subsystem tracks events happening throughout vSphere and stores the data in log files and the vCenter Server database. This subsystem also enables you to specify the conditions under which alarms are triggered. Alarms can change state from mild warnings to more serious alerts as system conditions change and can trigger automated alarm actions. This functionality is useful when you want to be informed, or take immediate action, when certain events or conditions occur for a specific inventory object, or group of objects.
Events are records of user actions or system actions that occur on objects in vCenter Server or on a host. Actions that might be recorded as events include, but are not limited to, the following examples:
Event data includes details about the event, such as who generated it, when it occurred, and what type of event it is.
Alarms are notifications that are activated in response to an event, a set of conditions, or the state of an inventory object. An alarm definition consists of the following elements:
Alarms have the following severity levels:
Alarm definitions are associated with the object selected in the inventory. An alarm monitors the type of inventory objects specified in its definition.
For example, you might want to monitor the CPU usage of all virtual machines in a specific host cluster. You can select the cluster in the inventory and add a virtual machine alarm to it. When enabled, that alarm will monitor all virtual machines running in the cluster and will trigger when any one of them meets the criteria defined in the alarm. If you want to monitor a specific virtual machine in the cluster, but not others, you would select that virtual machine in the inventory and add an alarm to it. One easy way to apply the same alarms to a group of objects is to place those objects in a folder and define the alarm on the folder.
In this lab, you will learn how to create an alarm and review the events that have occurred.
Alarms can be defined at different levels. In the case of the highlighted alarm, you can see it is defined at the top level (vCenter Server). Alarms that are defined at the top level are then inherited by the objects below.
The Name and Targets screen defines the name of the alarm (Host CPU usage), what object it applies to (Hosts) and where the objects are located.
Notice this will trigger a Warning alarm.
When a Host's CPU runs at or above 80% for more than 5 minutes, a Warning alarm will be triggered, and the Host will be put in Maintenance mode. Maintenance mode is covered in Module 3, but when a host is in this state, it is taken offline and any virtual machines that are running on it will be moved to other hosts in the cluster. This lets maintenance be performed on hosts without suffering downtime.
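The trigger rule described above — CPU at or above 80% sustained for 5 minutes — can be modeled as a check over a trailing window of samples. This is only a sketch of the logic (vCenter evaluates its own metric stream internally; the one-sample-per-minute list here is an assumption for illustration):

```python
# Illustrative model of a sustained-threshold alarm: a Warning fires only
# when every CPU sample in the trailing window is at or above the threshold.

def alarm_triggered(samples, threshold=80, window=5):
    """samples: one CPU-usage percentage per minute, oldest first.

    Returns True when the last `window` samples are all >= threshold,
    i.e. usage has stayed at or above 80% for 5 consecutive minutes.
    """
    if len(samples) < window:
        return False
    return all(s >= threshold for s in samples[-window:])

print(alarm_triggered([70, 85, 90, 88, 92, 95]))  # True: last 5 samples all >= 80
print(alarm_triggered([85, 90, 75, 92, 95, 96]))  # False: one dip below 80
```

Requiring the whole window to exceed the threshold is what prevents a momentary CPU spike from putting a healthy host into Maintenance Mode.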
On this screen, we can set additional actions for when a Host's CPU is above 90% for 5 minutes. In this case, a Critical alarm would be triggered, and additional actions could be taken while the Host is in this state.
If the conditions that originally triggered the alarm are no longer present, additional actions can take place. As an example, once a Host's CPU is no longer at 80% for more than 5 minutes, an email notification could be sent.
The Review screen shows what was configured.
We will be creating an alarm that will migrate a VM if CPU Ready exceeds an average of 8000ms over the course of 5 minutes.
This will gracefully shut down the virtual machine rather than just powering it off.
Additional options could be specified once the conditions are clear.
The Review screen shows the details of what was configured for the new alarm.
If the Alarm Name field is still filtering by "cpu", the newly created alarm is displayed. If not, simply click on the Alarm Name field and type cpu ready to see it.
Shares specify the relative importance of a virtual machine (or resource pool). If a virtual machine has twice as many shares of a resource as another virtual machine, it is entitled to consume twice as much of that resource when these two virtual machines are competing for resources. This lab starts with a video walking you through the process of working with shares and resources. The remainder of this module walks you through making the changes to a VM's resources.
Shares are typically specified as High, Normal, or Low.
This video explains what scalable shares are and how they are used to effectively distribute compute and memory resources among virtual machines.
The above example shows two VMs: a Development VM and a Production VM. On the left-hand side of the diagram, you can see the CPU shares are equal. We want to make sure the Production VM gets the majority of the CPU resources when there is contention for those resources in the environment. Changing the shares for the Production VM from 1000 shares to 2000 shares accomplishes this goal. The new settings are shown on the right side of the diagram.
Note the current setting for Shares is set to 1000.
Limits and Reservations are set with the same procedure. When you edit the settings for a VM, you will find options to set a Limit and a Reservation. A Limit prevents a VM from using more of a resource than the configured value, while a Reservation guarantees a minimum amount of a resource will be available to the virtual machine. Try out some settings for Limits and Reservations. Note that if you try to reserve more of a resource, such as memory or CPU, than is available, the VM may not power on.
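That last point is admission control in miniature: a power-on succeeds only if the host can still back the VM's full reservations. A hedged sketch of the check (real admission control also accounts for overhead memory and cluster-level policies):

```python
def can_power_on(free_cpu_mhz, free_mem_mb,
                 cpu_reservation_mhz, mem_reservation_mb):
    """A VM powers on only if the host's unreserved capacity can
    satisfy both its CPU and memory reservations in full."""
    return (cpu_reservation_mhz <= free_cpu_mhz
            and mem_reservation_mb <= free_mem_mb)
```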
Planned downtime typically accounts for over 80% of datacenter downtime. Hardware maintenance, server migration, and firmware updates all require downtime for physical servers. To minimize the impact of this downtime, organizations are forced to delay maintenance until inconvenient and difficult-to-schedule downtime windows.
The vMotion functionality in vSphere makes it possible for organizations to reduce planned downtime because workloads in a VMware environment can be dynamically moved to different physical servers without service interruption. Administrators can perform faster and completely transparent maintenance operations, without being forced to schedule inconvenient maintenance windows. With vSphere vMotion, organizations can:
Another feature of vSphere, Storage vMotion allows a virtual machine to be migrated to different storage devices with zero downtime. This technology is covered in more detail in Module 3.
In this lesson, you will learn how to work with vMotion and move virtual machines to different hosts within the cluster.
We will disable DRS and then migrate all of the virtual machines that esx-01a.corp.local hosts over to esx-02a.corp.local. This will also help prepare us for the next lesson on Performance.
Disabling DRS prevents the virtual machines from being migrated back to esx-01a.corp.local.
Depending on what other modules you have taken, you may see more VMs.
Do this for every powered off virtual machine, otherwise the next step will fail.
Click Yes to start the migration process.
In addition to changing which ESXi host the virtual machine runs on (compute resources), the virtual machine can be moved to different datastores (storage) if needed. A virtual machine can also be moved to a different host and storage at the same time. Migrating to different storage is covered in more detail in Module 3, in the Storage vMotion lesson.
Since we want to move all the virtual machines to esx-02a.corp.local, we are selecting a specific host. We could also place it in a Cluster and let DRS decide the best host to move it to.
In most cases, the network adapter will not need to be changed.
A priority can be set for the vMotion task. In most cases, the default option is OK.
Review the settings and click Finish to migrate the virtual machines to esx-02a.corp.local.
You can monitor progress using Recent Tasks.
When the task has been completed successfully, you should see all of the virtual machines moved over to esx-02a.corp.local.
VMware provides several tools to help you monitor your virtual environment and to locate the source of potential issues and current problems. This lesson will walk through using the performance charts and graphs in the vSphere Client.
For a more advanced look at monitoring and performance, consider taking one of the vRealize Operations Hands-on Labs. vRealize Operations provides a more dynamic, proactive approach to monitoring your virtual infrastructure.
Here we can see in real time the CPU usage in percent for esx-02a.corp.local. By default, the chart will refresh every 20 seconds. The amount of data you see will depend on how long you have been taking the lab.
This chart shows the real-time CPU usage of each virtual machine. Each VM is represented by a different color in the graph, and the legend at the bottom shows which color represents which VM. Combined, they give you an idea of overall CPU usage on the host.
There are other graphs available to show host and virtual machine memory usage, network (Mbps) and disk (KBps).
The graphs we have looked at so far give you an overview of the four main components: CPU, memory, network, and disk. The advanced graphs give you more detailed information on each of these.
Before we look at these charts, let's generate some CPU activity on esx-02a.corp.local by restarting all of the virtual machines it hosts.
To generate some activity on esx-02a.corp.local, the virtual machines will be rebooted.
Note: You may also receive a warning that only X of X virtual machines will be restarted. This depends on what other modules and/or lessons have been completed in the lab previously.
This will bring up options to customize the chart.
Here we can see the CPU usage of each virtual machine and esx-02a.corp.local.
Scroll down and you will see the Performance Chart Legend. You can click on any of the virtual machines or esx-02a.corp.local to highlight it on the chart.
On the left-hand side, you will see a list of all the available chart metrics that can be viewed. The counters will update based on what metric you select.
Notice the counters section updates and now we have additional counters to view for this chart.
Note: If you receive an error stating "No counters were selected", uncheck and check Target Objects, then click OK.
This chart shows the memory-related counters for esx-02a.corp.local. Scroll down the Performance Chart Legend to see the counter each line represents.
Feel free to explore the various chart options and/or continue to the next step.
Once you have finished viewing the charts, DRS needs to be enabled again.
For more information on performance charts, you can view the vSphere Monitoring and Performance guide.
vSphere 7 is the biggest release of vSphere in over a decade. It delivers many innovations, including the rearchitecting of vSphere around native Kubernetes that was introduced at VMworld 2019 as Project Pacific.
Kubernetes is now built into vSphere which allows developers to continue using the same industry-standard tools and interfaces they’ve been using to create modern applications. vSphere Admins also benefit because they can help manage the Kubernetes infrastructure using the same tools and skills they have developed around vSphere. To help bridge these two worlds we’ve introduced a new vSphere construct called Namespaces, allowing vSphere Admins to create a logical set of resources, permissions, and policies that enable an application-centric approach.
We are introducing a lot of value in vSphere with Tanzu for the VI admin. We deliver a new way to manage infrastructure for containerized applications, called "application-focused management". This enables admins to organize multiple objects into a logical group and then apply policies to the entire group. For example, an administrator can apply security policies and storage limits to a group of containers and Kubernetes clusters that represent an application, rather than to each object individually. This helps improve productivity and reduce errors that can be costly to identify and correct.
vSphere with Tanzu is available through VMware Cloud Foundation 4 with Tanzu. One key innovation available only in VMware Cloud Foundation is a set of developer-facing services and a Kubernetes API surface that IT can provision, called VMware Cloud Foundation Services.
It consists of two families of services: Tanzu Runtime Services and Hybrid Infrastructure Services.
VMware vSphere 7 is the efficient and secure platform for the hybrid cloud. It provides a powerful, flexible, and secure foundation for business agility that accelerates the digital transformation to the hybrid cloud as well as success in the digital economy.
Here are the other vSphere labs to take to get familiar with the latest vSphere 7 release:
Due to the environment the Hands-on Labs run in and the high I/O it would cause, we are not able to install software. Please use the following videos to walk through the process.
The following video will walk through the process of installing and configuring vSphere.
This video will walk you through the Direct Console User Interface (DCUI).
Learn and Practice with Hands-On Labs to help prepare for several VMware Certifications.
This Lab can help you study for the industry-recognized VCAP-DCV 2021 Deploy certification, which validates that you know how to deploy and optimize VMware vSphere infrastructures.
Learn More Here: https://www.vmware.com/learning/certification/vcap-dcv-deploy.html
Module 2 - Introduction to vSphere Networking and Security (60 Min)...
The ability to connect virtual machines through a logical switch that is part of the vSphere hypervisor is a necessity for operating systems and applications to communicate on the physical network. Traditionally, this was done through a Standard vSwitch, configured individually at each ESXi host in the datacenter.
Since its introduction, the vSphere Distributed Switch quickly became the recommended type of virtual switch for most, if not all, types of network traffic in and out of the ESXi host. This is due largely to its ability to be created and managed centrally through vCenter, as well as the advanced networking features it provides.
Let's spend some time reviewing the similarities and differences between the two types of switches.
There are two types of virtual switches in ESXi/ESX 4.x, ESXi 5.x, and ESXi 6.x: the vNetwork Standard Switch (vSwitch) and the vNetwork Distributed Switch (vDS).
As in VMware Infrastructure 3, the configuration of each vSwitch resides on the specific ESXi/ESX host. The VI administrators have to manually maintain consistency of the vSwitch configuration across all ESXi/ESX hosts to ensure that they can perform operations such as vMotion.
vSwitches are configured on each ESXi/ESX host.
The configuration of vDS is centralized to vCenter Server. The ESXi/ESX 4.x, ESXi 5.x, and ESXi 6.x hosts that belong to a dvSwitch do not need further configuration to be compliant.
Distributed switches provide similar functionality to vSwitches. A dvPortgroup is a set of dvPorts; it is the vDS equivalent of a portgroup, which is a set of ports on a vSwitch. Configuration is inherited from dvSwitch to dvPortgroup, just as it is from vSwitch to portgroup.
Virtual machines, Service Console interfaces (vswif), and VMKernel interfaces can be connected to dvPortgroups just as they could be connected to portgroups in vSwitches.
These features are available with both types of virtual switches:
These features are available only with a Distributed Switch:
vSphere 5.x provides these improvements to Distributed Switch functionality:
vSphere 6.x provides these improvements to Distributed Switch functionality:
Spend a few minutes reviewing the differences between the Standard vSwitch and Distributed vSwitch architectures.
Pay special attention to how the port groups and uplinks are designed.
Now that we have a better understanding of what a Distributed vSwitch is and why we would want to use it, let's spend a little time exploring an example of one.
The following lesson will walk you through the process of creating and configuring the vSphere Standard Switch.
If you are not already logged in, launch the Chrome browser from the desktop and log in to the vSphere Web Client.
If you are not directed to "Hosts and Clusters", click the icon for it.
At the Connection settings step of the wizard, for Network label, leave the default name of VM Network 2.
Do not change the VLAN ID; leave this set to None (0).
Next, we will verify the switch has been created.
You should see the above diagram showing a virtual port group (VM Network 2) on vSwitch1, using vmnic3 as an uplink.
In this lesson, we will review the various properties of a Standard Switch.
vSphere Standard Switch settings control switch-wide defaults and switch properties such as the uplink configuration.
If you are using jumbo frames in your environment and want to leverage this on a vSphere Standard Switch, you can change the MTU setting here.
You can change the size of the maximum transmission unit (MTU) on a vSphere Standard Switch to increase the amount of payload data transmitted with a single packet, that is, enabling jumbo frames. Be sure to check with your Networking team prior to making any modifications here. To realize the benefit of this setting and prevent performance issues, compatible MTU settings are required across all virtual and physical switches and end devices such as hosts and storage arrays.
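The "compatible MTU end to end" requirement is easy to check mechanically. A minimal sketch (the device names are hypothetical; substitute your own inventory):

```python
def effective_mtu(path_mtus):
    """The usable frame size end to end is capped by the smallest MTU
    configured on any device along the path; a single 1500-byte hop
    defeats jumbo frames everywhere else."""
    return min(path_mtus.values())

def jumbo_frames_ok(path_mtus, jumbo=9000):
    """True only if every virtual switch, physical switch, host NIC,
    and end device on the path supports the jumbo MTU."""
    return effective_mtu(path_mtus) >= jumbo
```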
You will also notice the Security, Traffic shaping, and Team and Failover options. This is where the default settings for the virtual switch would be set. As you will see later, these defaults may be overridden at the port group level as required.
Next, an additional uplink will be added to the switch and the other options will be reviewed.
You can associate multiple adapters to a single vSphere standard switch to increase throughput and provide redundancy should a link fail. This is known as "NIC Teaming."
Since a new network connection will be added to vSwitch0, no changes are needed.
The new adapter has been added in the Active Adapters section. An adapter could also be moved to the Standby Adapters section to be used for failover. The Unused Adapters section can be used when there are multiple port groups on a switch and you would like the ability to control which traffic flows through which physical adapter. It can be used to segment traffic or to carry individual VLAN traffic.
Click Finish to add vmnic3 to vSwitch0.
Once the vSwitch has been configured and its defaults have been set, the port group can be configured. The port group is the construct that is connected to virtual machine NICs and usually represents a VLAN or physical network partition such as Production, Development, Desktop or DMZ.
Now we will look at some of the options that can be selected at the port group level of a Standard Switch.
The Properties setting section is where the name or VLAN ID of the port group can be modified.
There is no need to modify these settings for this part of the lab.
By ticking the Override box, you can override the default setting of the Standard Switch for just this port group.
In this section, you can configure the following:
Promiscuous Mode
MAC Address Changes
Forged Transmits
No changes are needed here.
Just like in the Security settings, you can override the default policy set at the switch level to apply to just this port group.
A traffic shaping policy is defined by average bandwidth, peak bandwidth, and burst size. You can establish a traffic shaping policy for each port group.
ESXi shapes outbound network traffic on standard switches. Traffic shaping restricts the network bandwidth available on a port, but can also be configured to allow bursts of traffic to flow through at higher speeds.
Average Bandwidth
Peak Bandwidth
Burst Size
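How the three values interact can be sketched as a simple credit model: sending below the average accrues burst credit (up to the burst size), which then lets the port exceed the average, never the peak, until the credit is spent. This is an illustrative simplification; the actual ESXi shaper works per port, with average/peak in Kbps and burst size in KB.

```python
def shape(traffic_kbps, avg_kbps, peak_kbps, burst_kb):
    """Walk a per-second series of offered traffic (Kb per second)
    and return what the shaper lets through each second."""
    sent, credit = [], burst_kb
    for offered in traffic_kbps:
        # May exceed the average using saved credit, but never the peak.
        allowed = min(offered, peak_kbps, avg_kbps + credit)
        # Unused headroom below the average refills the credit bucket.
        credit = min(burst_kb, credit + avg_kbps - allowed)
        sent.append(allowed)
    return sent
```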
No changes are needed here.
Again, we have the option to override the default virtual switch settings.
Load Balancing Policy - The Load Balancing policy determines how network traffic is distributed between the network adapters in a NIC team. vSphere virtual switches load balance only the outgoing traffic. Incoming traffic is controlled by the load balancing policy on the physical switch.
Network Failure Detection - The method the virtual switch will use for failover detection.
Notify Switches - specifies whether the virtual switch notifies the physical switch in case of a failover.
Failback - specifies whether a physical adapter is returned to active status after recovering from a failure.
You can also override the default virtual switch setting for the Failover order of the physical adapters.
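The failover order itself reduces to a simple rule: use the first healthy active adapter, fall back to the first healthy standby adapter, and never use unused adapters. A hedged sketch of that selection (the teaming policy adds load balancing across active adapters, which is omitted here):

```python
def pick_uplink(active, standby, link_up):
    """Return the physical adapter a port would use: the first healthy
    adapter in the Active list, else the first healthy one in the
    Standby list (failover), else None (no connectivity)."""
    for nic in active + standby:
        if link_up.get(nic):
            return nic
    return None
```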
No changes are needed here and you may proceed to the next step.
Since we don't want to make any changes to the port group, click the Cancel button.
Since vmnic3 was removed from vSwitch0, you may receive an alert that network connectivity and/or redundancy has been lost.
You should no longer see the red exclamation point next to esx-01a.corp.local.
In preparation for the next lesson, we will delete the Standard Switch we created on esx-02a.corp.local.
The vSphere Standard Switch is a simple virtual switch configured and managed at the host level. This switch provides access, traffic aggregation and fault tolerance by allowing multiple physical adapters to be bound to each virtual switch.
The VMware vSphere Distributed Switch builds on the capabilities of the vSS and simplifies management in large deployments by appearing as a single switch spanning multiple associated hosts. This allows changes to be made once and propagated to every host that is a member of the switch.
Before we walk through the process of building our own Distributed vSwitch, let's take a minute to explore an existing vDS.
In this lab we will see how a Distributed vSwitch compares to a Standard vSwitch, how it is configured, and how it is connected to a running virtual machine.
Take note of the virtual machines that are connected to this vSwitch. You should see a VM called TinyLinux2.
Note: You may see different results based on what lessons or modules you have already completed.
Take note of the hosts connected to the VM Network vSwitch. You should see esx-01a.corp.local and esx-02a.corp.local.
Basic settings of the Distributed Switch are displayed, such as the MTU, the switch version, and the discovery protocol in use.
Next, we will explore the various properties of the switch.
Click General to view the vSphere distributed switch settings. Here you can modify the following:
Name: You can modify the name of your distributed switch.
Number of Uplinks: Increase or decrease the number of uplink ports attached to the distributed switch. Note that you can also click the Edit uplink names button to give the uplinks meaningful names.
Number of Ports: This setting cannot be modified. The port count will dynamically be scaled up or down by default.
Network I/O Control: You can use the drop-down menu to enable or disable Network I/O Control on the switch.
Description: You can use this field to give a meaningful description of the switch.
MTU (Bytes): Maximum MTU size for the vSphere Distributed Switch. To enable jumbo frames, set a value greater than 1500 bytes. Make sure you check with your Networking team prior to modifying this setting in your environment.
Multicast filtering mode
Discovery Protocol
Administrator Contact: Type the name and other details of the administrator for the distributed switch.
The Distributed Switch Health Check monitors for changes in vSphere Distributed Switch configurations. You must enable vSphere Distributed Switch Health Check to perform checks on Distributed Switch configurations.
Health Check is available on ESXi 5.1 Distributed Switches and higher.
We can see that Health check is disabled for VLAN and MTU as well as Teaming and failover.
A distributed port group specifies port configuration options for each member port on a vSphere distributed switch. Distributed port groups define how a connection is made to a network.
When creating a Distributed Port Group, you have the following options available:
Port binding - Choose when ports are assigned to virtual machines connected to this distributed port group.
Port allocation
Number of ports: Enter the number of ports on the distributed port group.
Network resource pool: If you have created a network resource pool to help control network traffic, you can select it here.
VLAN: Use the Type drop-down menu to select VLAN options:
Advanced: Select this check box to customize the policy configurations for the new distributed port group.
Again, note how there are virtual machines tied to this distributed port group just like you would see in a port group on a standard vSwitch.
Note that a path to an uplink is drawn out and highlighted in orange to show the uplinks, hosts and vmnics it is associated with.
Now that we have had a chance to explore an existing vDS, let's build one of our own.
In this lab we will create a new Distributed vSwitch, add ESXi hosts to it, build port groups and connect them to uplinks so that we can use it to forward virtual machine traffic on to the physical network.
This will open the New Distributed Switch wizard.
On the Manage physical network adapters page, we want to configure which physical NICs will be used on the distributed switch.
This will automatically configure any other hosts that you are adding to this distributed switch with the same vmnic and uplink settings.
The add hosts wizard also gives us the ability to migrate VMs from one distributed switch to another on this page. While this action can be done here, we will be doing this in the next lesson.
Also note that this wizard is not the typical place where you would migrate VMs from one virtual switch to another. The process we will be using later is the recommended method.
With your new Distributed Switch highlighted, feel free to explore the associated tabs to get a feel for the setup and configuration.
Note that your distributed port group DPortGroup does not have any VMs connected to it. The next lesson will walk through the process of migrating VMs to the new vDS.
Now that we have created a new vDS, we want to take advantage of its capabilities. In this lab we will migrate a running virtual machine from a virtual standard switch to the newly created distributed virtual switch.
In the vSphere Client, there are numerous ways to accomplish the task of VM network migration. However, we will be walking through the procedures specifically outlined in the vSphere product documentation.
This is the network on the virtual standard switch where the VM we want to migrate is currently connected.
This is the port group on the new Distributed Switch that you created. This is the new port group that will be used to connect the VM being migrated to the network.
Note that there is only one adapter associated with this VM. If there was more than one, you would have the option of choosing which one you would want to connect to the new vDS.
Select the TinyLinux2 VM and note the highlighted path through the new vDS and Uplink.
This lesson will walk you through adding and configuring a Distributed Switch.
Create a vSphere Distributed Switch on a vSphere datacenter to handle networking traffic for all associated hosts in the datacenter. If your system has many hosts and complex port group requirements, creating distributed port groups rather than standard port groups can go a long way towards easing the administrative burden.
Keep the default name for the new distributed switch.
Note that the version of the Distributed Switch determines which ESXi host versions are able to join the switch. Once all hosts that are a member of a Distributed Switch have been upgraded, the switch may be upgraded to the matching version.
Notice that the next suggested steps are creating Distributed Port Groups and adding Hosts.
This video guides the user through creating a vSphere Distributed Switch and Port Groups.
This video guides the user through migrating hosts and VMs to the vSphere Distributed Switch.
Now that we have created a vSphere distributed switch, let's add hosts and physical adapters to create a virtual network.
Expand RegionA01 until you see the Distributed Switch we just created, DSwitch.
To add hosts to the Distributed Switch, click the green '+'.
You should now see the hosts that will be added to the switch.
Part of the "Add Host" process involves assigning one or more network adapters from each host to the Distributed Switch. The assigned adapters may not be shared with any other switch in the host.
If you did not add a vmnic from each ESXi host, you will receive this warning.
In your environment, you may choose to migrate virtual network adapters from a vSphere Standard or Distributed switch to this new one. In this lab example, we won't move anything.
You are now asked to verify the changes you are about to make.
You can change the configuration for hosts and physical adapters on a vSphere Distributed Switch after they are added to the distributed switch.
General settings for a vSphere Distributed Switch include the distributed switch name and the number of uplink ports on the distributed switch. Advanced settings for a vSphere Distributed Switch include the Discovery Protocol configuration and the maximum MTU for the switch. Both general and advanced settings can be configured using the vSphere Web Client.
Click General to view the vSphere distributed switch settings. Here you can modify the following:
Name: You can modify the name of your distributed switch.
Number of Uplinks: Increase or decrease the number of uplink ports attached to the distributed switch. Note that you can also click the Edit uplink names button to give the uplinks meaningful names.
Number of Ports: This setting cannot be modified. The port count will dynamically be scaled up or down by default.
Network I/O Control: You can use the drop-down menu to enable or disable Network I/O Control on the switch.
Description: You can use this field to give a meaningful description of the switch.
MTU (Bytes): Maximum MTU size for the vSphere Distributed Switch. To enable jumbo frames, set a value greater than 1500 bytes. Make sure you check with your Networking team prior to modifying this setting in your environment.
Multicast filtering mode
Discovery Protocol
Administrator Contact: Type the name and other details of the administrator for the distributed switch.
The Distributed Switch Health Check monitors for changes in vSphere Distributed Switch configurations. You must enable vSphere Distributed Switch Health Check to perform checks on Distributed Switch configurations.
Health Check is available on ESXi 5.1 Distributed Switches and higher. Also, you can only view Health Check information through the vSphere Web Client 5.1 or later.
A distributed port group specifies port configuration options for each member port on a vSphere distributed switch. Distributed port groups define how a connection is made to a network.
When creating a Distributed Port Group, you have the following options available:
Port binding - Choose when ports are assigned to virtual machines connected to this distributed port group.
Port allocation
Number of ports: Enter the number of ports on the distributed port group.
Network resource pool: If you have created a network resource pool to help control network traffic, you can select it here.
VLAN: Use the Type drop-down menu to select VLAN options:
Advanced: Select this check box to customize the policy configurations for the new distributed port group.
In the Navigator, expand out DSwitch and you will see the newly created WebVMTraffic Distributed Port Group.
To increase the security of your ESXi hosts, you can put them in lockdown mode.
When you enable lockdown mode, no users other than vpxuser have authentication permissions, nor can they perform operations against the host directly. Lockdown mode forces all operations to be performed through vCenter Server.
When a host is in lockdown mode, you cannot run vSphere CLI commands from an administration server, from a script or from vSphere Management Assistant (vMA) against the host. External software or management tools might not be able to retrieve or modify information from the ESXi host.
Lockdown mode is only available on ESXi hosts that have been added to vCenter Server. You can enable lockdown mode using the Add Host wizard to add a host to vCenter Server, using the vSphere Web Client to manage a host or using the Direct Console User Interface (DCUI).
NOTES:
Users with the DCUI Access privilege are authorized to log in to the Direct Console User Interface (DCUI) when lockdown mode is enabled. When you disable lockdown mode using the DCUI, all users with the DCUI Access privilege are granted the Administrator role on the host. The DCUI Access privilege is granted in Advanced Settings on the host.
If you enable or disable lockdown mode using the Direct Console User Interface (DCUI), permissions assigned to users and groups on the host are discarded. To preserve these permissions, you must enable and disable lockdown mode using the vSphere Client connected to vCenter Server.
Enabling or disabling lockdown mode affects which types of users are authorized to access host services, but it does not affect the availability of those services. In other words, if the ESXi Shell, SSH, or Direct Console User Interface (DCUI) services are enabled they will continue to run whether or not the host is in lockdown mode.
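The access rules above can be summarized in a small decision sketch. This is a simplified model of what the lesson describes, not the actual ESXi authorization code; the mode and channel names are just labels for the example.

```python
def host_access_allowed(mode, channel, user_in_exception_list=False,
                        has_dcui_privilege=False):
    """mode: 'disabled', 'normal', or 'strict'.
    channel: 'vcenter', 'dcui', or 'ssh'.
    Exception List users keep direct access; the DCUI Access
    privilege keeps console access in normal lockdown."""
    if channel == "vcenter":
        return True                       # vCenter always manages the host
    if mode == "disabled":
        return True                       # no lockdown: direct access works
    if mode == "normal":
        if channel == "dcui":
            return user_in_exception_list or has_dcui_privilege
        return user_in_exception_list     # ssh only via the Exception List
    # strict: the DCUI service is stopped outright
    if channel == "dcui":
        return False
    return user_in_exception_list
```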
First, you will enable Host Lockdown Mode with the Normal setting on esx-01a.corp.local. This will mean the host will be accessible from vCenter and through the DCUI, but not remotely over SSH.
Before we configure Host Lockdown Mode, let's verify the SSH service is running on esx-01a.corp.local.
First, verify you can login to esx-01a using an SSH connection.
You will be automatically logged in to esx-01a.corp.local because we have configured public-key authentication from the Main Console machine to the ESXi host.
Once you hit Enter, the PuTTY window will disappear.
Go back to the vSphere Client
Lockdown Mode is currently disabled. If we set it to Normal, the host will no longer be accessible over SSH; it can only be reached through vCenter or the local console (physically in front of the host). Lockdown Mode can also be set to Strict, meaning only vCenter can access the host: both SSH and the local console are disabled.
As previously noted, when Lockdown Mode is enabled, remote access to the host is disabled. Some third-party applications rely on this access, which can be granted by adding the accounts they use to the Exception List. The Exception List should not be used as a way for specific users to bypass security; it should only be used for applications that require direct access.
Wait for the vSphere Client to refresh to see that Lockdown Mode has been enabled.
Using the same steps we used above, open the PuTTY application from the Windows Taskbar.
You should receive an error when trying to connect to esx-01a.corp.local. The host has been configured with Host Lockdown Mode and will refuse any remote connections, unless those users were added to the Exception User list.
Go back to the vSphere Client.
Lockdown Mode for the host should now be disabled.
Host Lockdown Mode provides an excellent way to further secure your vSphere hosts.
Now you will set esx-02a.corp.local to use the Strict Mode of Host Lockdown. This means the host is only available through vCenter Server and access to the DCUI and SSH are disabled.
Again, note that users can be added to the exception list. This will only apply to SSH and not the DCUI.
You can see the Direct Console UI (DCUI) service has been stopped. Note that the SSH service is still running in case users have been added to the Exception List.
This will give us access to the DCUI on esx-02a.corp.local.
The console window will load the DCUI for esx-02a.corp.local.
Click in the console and press the space bar to wake up the host.
You should receive an error that access to the DCUI has been disabled.
Go back to the vSphere Client.
Lockdown Mode for the host should now be disabled.
Host Lockdown Mode provides an excellent way to further secure your vSphere hosts.
This lesson includes a short video on how to use the VMware ESXi firewall.
This video shows how to use the VMware ESXi Firewall on the vSphere host to block incoming and outgoing communication and to manage the services running on the host.
VMware recommends that you create roles to suit the access control needs of your environment. If you create or edit a role on a vCenter Server system that is part of a connected group in Linked Mode, the changes that you make are propagated to all other vCenter Server systems in the group.
Linked Mode connects multiple vCenter Server systems together by using one or more Platform Services Controllers. It lets you view and search across all linked vCenter Servers and replicate roles, permissions, licenses, policies and tags.
In the following steps, we will create a role in the vSphere Client that we can assign rights for the role.
You can use one of the provided roles as a starting point to create your own or, in some cases, it may make sense to create a new role with zero permissions and only add the ones the role will need.
In this first example, a role will be created for a new contractor that will only be performing networking tasks.
When you edit a role, you can change the privileges selected for that role. When completed, these privileges are applied to any user or group that is assigned the edited role. In Linked Mode, the changes you make are propagated to all other vCenter Server systems in the group. However, assignments of roles to specific users and objects are not shared across linked vCenter Server systems.
Sometimes a role may need to be updated for access to additional objects or tasks in vCenter. As an example, say the Network Contractor now needs access to the ESXi Hosts.
We will keep the same Role name.
You can make a copy of an existing role, rename it, and edit it. When you make a copy, the new role is not applied to any users, groups or objects -- it does not inherit anything from the parent except the settings. In Linked Mode, the changes are propagated to all other vCenter Server systems in the group, but assignments of roles to specific users and objects are not shared across linked vCenter Server systems.
In this next example, the Administrator role will be cloned and the privileges that are not needed will be removed.
As an example, a new vSphere Admin is hired and they only need access to the compute and storage infrastructure, with no access to networking components.
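The clone-and-trim workflow can be modeled with a small sketch. The privilege names below are made up for illustration, and this is not the vCenter API -- the point is that a clone copies the privilege set but never the role's assignments:

```python
# Conceptual model of vCenter role cloning semantics.
# Privilege names are hypothetical examples.

class Role:
    def __init__(self, name, privileges):
        self.name = name
        self.privileges = set(privileges)
        self.assignments = []             # (principal, object) pairs

    def clone(self, new_name):
        # Only the settings (privileges) are copied, never the assignments
        return Role(new_name, self.privileges)

admin = Role("Administrator", {"Network.Assign", "Datastore.Browse", "VM.PowerOn"})
admin.assignments.append(("CORP\\Administrator", "vCenter Root"))

vsphere_admin = admin.clone("vSphere Admin")
vsphere_admin.privileges.discard("Network.Assign")   # remove what is not needed

print(sorted(vsphere_admin.privileges))  # ['Datastore.Browse', 'VM.PowerOn']
print(vsphere_admin.assignments)         # [] -- nothing inherited
```

This mirrors the lesson: the cloned role starts with everything the Administrator role has, the networking privileges are stripped, and no users are attached until you explicitly assign them.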
When you remove a role that is not assigned to any users or groups, the definition of the role is removed from the list of roles. When you remove a role that is assigned to a user or group, you can remove assignments or replace them with an assignment to another role.
NOTE:
Before removing a role from a vCenter Server system that is part of a connected group in Linked Mode, check the use of that role on the other vCenter Server systems in the group. Removing a role from one vCenter Server system also removes that role from all other vCenter Server systems in the group, even if you reassign permissions to another role on the current vCenter Server system.
We can see that the role named Network Contractor has been deleted.
Creating unique and granular roles for users in your organization enables better security for your vSphere infrastructure.
You use vCenter Single Sign-On to authenticate and manage vCenter Server users.
The Single Sign-On administrative interface is part of the vSphere Web Client. To configure Single Sign-On and manage Single Sign-On users and groups, you log in to the vSphere Web Client as a user with Single Sign-On administrator privileges. This might not be the same user as the vCenter Server administrator. Enter the credentials on the vSphere Web Client login page and upon authentication, you can access the Single Sign-On administration tool to create users and assign administrative permissions to other users.
In vSphere versions prior to 5.1, users were authenticated when vCenter Server validated their credentials against an Active Directory domain or the list of local operating system users. As of vSphere 5.1, users authenticate through vCenter Single Sign On. The default Single Sign-On administrator for vSphere 5.1 is admin@System-Domain and administrator@vsphere.local for vSphere 5.5 and higher. The password for this account is the one you specified at installation. These credentials are used to log in to the vSphere Web Client to access the Single Sign-On administration tool. You can then assign Single Sign-On administrator privileges to specific users who are allowed to manage the Single Sign-On server. These users might be different from the users that administer vCenter Server.
NOTE: Logging in to the vSphere Web Client with Windows session credentials is supported only for Active Directory users of the domain to which the Single Sign On system belongs.
In most cases, vSphere SSO will be deployed to use an external Identity Source for primary authentication. In this lab environment, SSO has been integrated with Microsoft Active Directory so that users from the corp.local domain can log in to vSphere using their AD credentials.
In this section, we will look at the configured Identity Sources within Single Sign-on.
Log in to the vSphere Web Client with an account which has the SSO Admin privilege:
When the machine with the Platform Services Controller (PSC), which runs the Single Sign-On component, is added to an Active Directory domain, the Identity Source for that domain is automatically added to SSO.
Users in the domains listed here can be granted permissions within vSphere.
In the vSphere Client, users listed on the Users tab are internal to vCenter Single Sign On. These users are not the same as local operating system users, which are local to the operating system of the machine where Single Sign On is installed (for example, Windows). When you add a Single Sign On user with the Single Sign On administration tool, that user is stored in the Single Sign On database, which runs on the system where Single Sign On is installed. These users are part of the SSO domain, by default, "vsphere.local" -- or "System-Domain" for vSphere 5.1. Exactly one system identity source is associated with an installation of Single Sign On.
NOTE: You cannot change the user's name after you create the user. First and Last name are optional parameters.
Here we can see the new user has been added.
In the vSphere Client, groups listed on the Groups tab are internal to vCenter Single Sign On. A group lets you create a container for a collection of group members called principals. When you add a Single Sign On group with the Single Sign On administration tool, the group is stored in the Single Sign On database. The database runs on the system where Single Sign On is installed. These groups are part of the identity source domain vsphere.local (the default for vSphere 5.5 and higher), or System-Domain for vSphere 5.1.
Group members can be users or other groups, and a group can contain members from across multiple identity sources. After you create a group and add principals, you apply permissions to the group. Members of the group inherit the group permissions.
Members of a vCenter Single Sign On group can be users or other groups from one or more identity sources. Members of a group are called principals. Groups listed on the Groups tab in the vSphere Client are internal to Single Sign On and are part of the identity source System-Domain. You can add group members from other domains to a local group. You can also nest groups.
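Because groups can nest, deciding whether a user ultimately belongs to a group means walking the membership chain. A hypothetical model (the account and group names are made up, and it assumes no circular nesting):

```python
# Hypothetical model of nested SSO group membership resolution.
# Principals prefixed with "group:" are nested groups; others are users.
# Assumes no circular nesting.

groups = {
    "Administrators": {"administrator@vsphere.local", "group:HOL-Admins"},
    "HOL-Admins": {"holuser@corp.local", "group:HOL-Interns"},
    "HOL-Interns": {"intern@corp.local"},
}

def is_member(user, group, groups):
    """Return True if user belongs to group directly or via nesting."""
    for principal in groups.get(group, set()):
        if principal == user:
            return True
        if principal.startswith("group:") and is_member(user, principal[6:], groups):
            return True
    return False

print(is_member("intern@corp.local", "Administrators", groups))  # True
```

Permissions applied to "Administrators" would therefore reach the intern two levels down, which is why group nesting is a convenient way to manage inherited permissions.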
Note: You may need to scroll down to see it.
The Administrator accounts for the vsphere.local and corp.local domains are members.
The HOL Group has now been added to the Administrator group.
Once identity sources, users and groups have been configured, they must be assigned permissions in order to be useful in vSphere.
SSO provides the ability to grant Global Permissions to an account by specifying the required access here. In the lab, this list represents the default permissions granted, with the exception of the CORP.LOCAL\Administrator user that we have added with Administrator permissions to the entire vSphere infrastructure.
The members of the HOL Group will need to manage all virtual machines in the environment, so we will configure permissions here.
The newly created vsphere.local Global Permission has been created.
Typically, user accounts will not be managed natively within the SSO domain, but will be handled by an external directory source like Microsoft Active Directory or OpenLDAP. Understanding how SSO handles accounts and where to look for account-to-permission binding is useful for managing a vSphere implementation.
In this lesson, we will walk through the process of configuring an ESXi host to authenticate against Active Directory.
Note: You may need to expand Site A Datacenter and/or Site A Cluster 1 to see the host.
Now that the network settings have been verified, the host will be added to Active Directory.
You may need to scroll down to see it.
Progress can be monitored using the Recent Tasks window. It should take a minute or two to complete.
Once the task has been completed, the Authentication Services section will update to show the host is now connected to the Active Directory domain.
If you are continuing on to other modules in this lab, please log out as administrator@vsphere.local.
This concludes Module 2 - An Introduction to vSphere Networking and Security. We hope you have enjoyed taking this lab. Please remember to take the survey at the end.
If you have time remaining, here are the other Modules that are part of this lab, along with an estimated time to complete each one. Click on the Table of Contents button to quickly jump to that module in the manual.
Learn and Practice with Hands-On Labs to help prepare for several VMware Certifications.
This Lab can help you study for the industry-recognized VCAP-DCV 2021 Deploy certification, which validates that you know how to deploy and optimize VMware vSphere infrastructures.
Learn More Here: https://www.vmware.com/learning/certification/vcap-dcv-deploy.html
Module 3 - Introduction to vSphere Storage (60 Min)
The following lesson provides an overview of the different types of storage available in vSphere.
The vSphere Hypervisor, ESXi, provides host-level storage virtualization, which logically abstracts the physical storage layer from virtual machines.
A vSphere virtual machine uses a virtual disk to store its operating system, program files, and other data associated with its activities. A virtual disk is a large physical file, or a set of files, that can be copied, moved, archived, and backed up as easily as any other file. You can configure virtual machines with multiple virtual disks.
To access virtual disks, a virtual machine uses virtual SCSI controllers. These virtual controllers include BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, and VMware Paravirtual. These controllers are the only types of SCSI controllers that a virtual machine can see and access.
Each virtual disk resides on a vSphere Virtual Machine File System (VMFS) datastore or an NFS-based datastore that is deployed on physical storage. From the standpoint of the virtual machine, each virtual disk appears as if it were a SCSI drive connected to a SCSI controller. Whether the actual physical storage device is being accessed through parallel SCSI, iSCSI, network, Fibre Channel, or FCoE adapters on the host is transparent to the guest operating system and to applications running on the virtual machine.
The vSphere storage management process starts with storage space that your storage administrator allocates on different storage systems prior to vSphere ESXi assignment. vSphere supports two types of storage - Local and Networked. Each type is detailed in the following lesson steps.
The illustration above depicts virtual machines using Local VMFS storage directly attached to a single ESXi host.
Local storage can be internal hard disks located inside your ESXi host, or it can be external storage systems located outside and connected to the host directly through protocols such as SAS or SATA.
The illustration above depicts virtual machines using networked VMFS storage presented to multiple ESXi hosts.
Networked storage consists of external storage systems that your ESXi host uses to store virtual machine files remotely. Typically, the host accesses these systems over a high-speed storage network. Networked storage devices are typically shared. Datastores on networked storage devices can be accessed by multiple hosts concurrently, and as a result, enable additional vSphere technologies such as High Availability host clustering, Distributed Resource Scheduling, vMotion, and virtual machines configured with Fault Tolerance. ESXi supports several networked storage technologies - Fibre Channel, iSCSI, NFS, and Shared SAS.
The illustration above depicts virtual machines using different types of virtual disk formats against a shared VMFS Datastore.
When you perform certain virtual machine management operations, such as creating a virtual disk, cloning a virtual machine to a template, or migrating a virtual machine, you can specify a provisioning policy for the virtual disk file format. There are three types of virtual disk formats:
Thin Provision
Use this format to save storage space. For the thin disk, you provision as much datastore space as the disk would require based on the value that you enter for the disk size. However, the thin disk starts small and at first, uses only as much datastore space as the disk needs for its initial operations.
Thick Provision Lazy Zeroed
Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine.
Using the thick-provision, lazy-zeroed format does not zero out or eliminate the possibility of recovering deleted files or restoring old data that might be present on this allocated space. You cannot convert a thick-provisioned, lazy-zeroed disk to a thin disk.
Thick Provision Eager Zeroed
A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the thick-provision, lazy-zeroed format, the data remaining on the physical device is zeroed out when the virtual disk is created. In general, it takes much longer to create disks in this format than to create other types of disks.
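The space behavior of the three formats can be illustrated with a toy calculation. This is a simplification for the sake of the comparison -- real allocation happens per block, and the numbers here are made up:

```python
# Illustration of how the three virtual disk formats consume datastore
# space. Simplified: real allocation is per-block, numbers are made up.

def space_used(fmt, provisioned_gb, written_gb):
    """Datastore space consumed after creation plus guest writes."""
    if fmt == "thin":
        return written_gb                 # grows only as the guest writes
    # Both thick formats reserve everything up front; eager-zeroed also
    # zeroes every block at creation (slower create, no first-write cost).
    return provisioned_gb

for fmt in ("thin", "lazy-zeroed", "eager-zeroed"):
    print(fmt, space_used(fmt, provisioned_gb=100, written_gb=12), "GB")
# thin consumes 12 GB; both thick formats consume the full 100 GB
```

The difference between the two thick formats is not how much space they consume but when the zeroing work happens: at creation time for eager-zeroed, on first write for lazy-zeroed.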
This lesson will walk you through creating and configuring an NFS and an iSCSI vSphere Datastore, as well as adding and configuring an iSCSI software adapter.
If credentials aren't saved, use the following:
There are two datastores configured: an iSCSI datastore and an NFS datastore.
Repeat the steps for the ds-nfs01 datastore.
In this section, you will create a new vSphere NFS Datastore using a pre-provisioned NFS mount.
Note: You may need to zoom out in order to see the Next button.
In this section, you will create a new vSphere iSCSI Datastore with a pre-provisioned iSCSI LUN.
Note: Do not click Next just yet, proceed to the next step!
From this view, we can see that there are existing datastores that can be presented to our vSphere environment.
We can use all available capacity for this datastore or change the size if needed. The defaults are fine for this step.
In this section, we will add a new ESXi host, esx-03a.corp.local, to the environment in RegionA01 and ensure that it has the appropriate storage configured so that it can become a productive member of the cluster.
It is a best practice to bring hosts into a datacenter first before adding them to a cluster. If a host is added to a cluster before it has access to the cluster's storage volumes, it could impact High Availability (see Module 1 for more details on High Availability).
This screen shows the details of the host.
When a host is being added to a Datacenter, it can be placed in what is called Lockdown mode. This can prevent unauthorized users from gaining access to the ESXi host either through local console access or remotely by way of SSH. If you are interested in Lockdown Mode, the details are covered in Module 1.
The virtual machines currently on the ESXi host being imported can be placed in either the Datacenter itself or in the default Discovered virtual machines folder.
Here you can see esx-03a.corp.local has been added to the datacenter and is in Maintenance Mode.
Maintenance Mode is used for hosts that need servicing. For example, a host could enter Maintenance Mode so that it can be brought offline in order for additional memory to be added to the physical host. In our case, the host is in Maintenance Mode once it has been added to the datacenter so that we can verify its settings prior to bringing it online and potentially conflicting with other hosts in the environment.
Prior to adding the new host to the cluster, an NFS datastore will be added to the host.
In this case, there are two NFS datastores used by RegionA01 cluster. Adding an existing NFS datastore to a new host is a simple process.
This will show all hosts in the inventory that have mounted this datastore.
For additional practice, perform the same steps to mount the other NFS datastore, ds-nfs02 to the esx-03a.corp.local host.
iSCSI devices are presented via an iSCSI Target. Think of this as the host for the iSCSI devices. The ESXi host needs to know where to look for the devices, so this section will go through the process of pointing the ESXi host at the iSCSI target and discovering which LUNs are available.
Once the new Target has been added, a message will appear in yellow reminding you to rescan the adapter so that it queries the iSCSI Target.
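The two-step nature of dynamic discovery -- point the adapter at a target portal, then rescan to learn what LUNs it presents -- can be sketched as follows. The portal address and IQN names are made up for illustration; this is a conceptual model, not the vSphere API:

```python
# Conceptual model of iSCSI dynamic (SendTargets) discovery.
# Addresses and LUN names are hypothetical.

targets = {"10.0.0.130:3260": ["iqn.2010-01.local.corp:lun01",
                               "iqn.2010-01.local.corp:lun02"]}

class SoftwareIscsiAdapter:
    def __init__(self):
        self.send_targets = []
        self.discovered_luns = []

    def add_send_target(self, portal):
        self.send_targets.append(portal)  # nothing is discovered yet...

    def rescan(self, targets):
        # ...until a rescan queries each configured portal for its LUNs
        self.discovered_luns = [lun for portal in self.send_targets
                                for lun in targets.get(portal, [])]

hba = SoftwareIscsiAdapter()
hba.add_send_target("10.0.0.130:3260")
print(len(hba.discovered_luns))   # 0 -- must rescan first
hba.rescan(targets)
print(len(hba.discovered_luns))   # 2
```

This is why the yellow reminder appears in the vSphere Client: adding the target alone changes nothing visible until the rescan runs.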
Notice that the two iSCSI datastores are now visible to the esx-03a.corp.local host.
Now that we have the storage configured, move esx-03a.corp.local into RegionA01-COMP01.
The host has been added to the cluster. Now it can exit Maintenance Mode and participate in the cluster.
After a minute or two, the host will exit Maintenance Mode. If you enabled vSphere HA on the cluster, the HA agent will be configured and started before the host shows a Status of Normal. The process occurs fairly quickly, so a refresh of the Web Client may be required to show the current state.
Note that basic networking for virtual machines, vMotion, and IP Storage have been preconfigured on this host for the purpose of this lab exercise. Adding the new host to a distributed switch would typically be done prior to taking the host out of Maintenance Mode, but is not required for this exercise. Feel free to migrate this switch to the vDS if you would like the practice.
This host is now able to handle workloads for the cluster.
Planned downtime typically accounts for over 80% of datacenter downtime. Hardware maintenance, server migration, and firmware updates all require downtime for physical servers. To minimize the impact of this downtime, organizations are forced to delay maintenance until inconvenient and difficult-to-schedule downtime windows.
The vMotion and Storage vMotion functionality in vSphere makes it possible for organizations to reduce planned downtime because workloads in a VMware environment can be dynamically moved to different physical servers or to different underlying storage without service interruption. Administrators can perform faster and completely transparent maintenance operations, without being forced to schedule inconvenient maintenance windows. With vSphere vMotion and Storage vMotion, organizations can:
In this lesson, you will learn how to work with vMotion and move virtual machines to different hosts within the cluster.
Before the Storage vMotion, we'll verify there is no downtime for the virtual machine by continuously pinging it. To do this, we will need the IP address of the virtual machine, TinyLinux-01.
Issue the following in the command prompt and press the Enter key:
ping -t 192.168.120.51
You should now see a continuous ping to TinyLinux-01.
You should now have a list of all virtual machines on the selected datastore.
Note: depending on which lessons you have completed, the available datastores and virtual machines may be different than the images.
The VM TinyLinux-01 is initially on ds-iscsi01 and needs to be moved to ds-nfs01.
Feel free to monitor the operation within the Recent Tasks pane or move on to the next step.
Go back to the command prompt and review the results of the ping. You can use the scroll bar to see if there were any dropped packets.
You may see instances where the time field increases to 2ms, but otherwise no packets should have dropped.
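If the transcript is long, scanning it by eye is error-prone. A short script can count replies and timeouts in a saved `ping -t` transcript -- the sample output below is illustrative, not captured from the lab:

```python
# Count replies and dropped packets in a saved Windows `ping -t`
# transcript. The sample text is illustrative.

sample = """\
Reply from 192.168.120.51: bytes=32 time<1ms TTL=64
Reply from 192.168.120.51: bytes=32 time=2ms TTL=64
Request timed out.
Reply from 192.168.120.51: bytes=32 time<1ms TTL=64
"""

replies = sum(1 for line in sample.splitlines() if line.startswith("Reply from"))
drops = sum(1 for line in sample.splitlines() if "timed out" in line)
print(f"{replies} replies, {drops} dropped")   # 3 replies, 1 dropped
```

During a healthy Storage vMotion you would expect the dropped count to be zero, with at most a brief increase in round-trip time.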
Click the 'X' to stop the ping and close the command window.
The Storage vMotion progress can be monitored in the Recent Tasks panel.
The virtual machine's storage has been migrated from iSCSI to NFS storage without the need to take the virtual machine offline.
When working with Virtual Machines, you can create a virtual disk or use an existing virtual disk. A virtual disk comprises one or more files on the file system that appear as a single hard disk to the guest operating system. These disks are portable among hosts.
You use the "Create Virtual Machine" wizard to add virtual disks during virtual machine creation. However, in this lesson you will work with an existing Virtual Machine in the inventory.
This lesson will walk you through the process of adding a new virtual disk to an existing Virtual Machine. Additionally, you will extend the Virtual Machine's original disk to a larger capacity.
From this view, we can see that there are several existing Virtual Machines in our vSphere environment. In the next step, we will add a new virtual disk to the Windows10 Virtual Machine.
You can follow the progress in the Recent Tasks pane.
In this section, you will extend an existing Virtual Disk for a Virtual Machine.
You can follow the progress in the Recent Tasks pane.
Snapshots preserve the state and data of a virtual machine at the time you take the snapshot. Snapshots are useful when you must revert repeatedly to the same virtual machine state, but you do not want to create multiple virtual machines. You can also take multiple snapshots of a virtual machine to create restoration positions in a linear process. With multiple snapshots, you can save many positions to accommodate many kinds of work processes. The Snapshot Manager in the vSphere Web Client provides several operations for creating and managing virtual machine snapshots and snapshot trees. These operations let you create snapshots, restore any snapshot in the snapshot hierarchy, delete snapshots, and more.
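The take/revert/delete lifecycle can be sketched with a minimal model. Real snapshots store delta disks and, optionally, memory state; this sketch only tracks the configuration to show the semantics:

```python
# Minimal sketch of snapshot semantics: take, revert, delete.
# Conceptual only -- real snapshots use delta disks, not config copies.

class VM:
    def __init__(self, config):
        self.config = dict(config)
        self.snapshots = {}               # name -> saved state

    def take_snapshot(self, name):
        self.snapshots[name] = dict(self.config)

    def revert(self, name):
        self.config = dict(self.snapshots[name])

    def delete_snapshot(self, name):
        # Deleting consolidates the snapshot; the current state is kept
        del self.snapshots[name]

vm = VM({"memory_mb": 2048})
vm.take_snapshot("before-change")
vm.config["memory_mb"] = 4096             # change hardware while testing
vm.revert("before-change")
print(vm.config["memory_mb"])             # 2048
vm.delete_snapshot("before-change")
```

This is exactly the exercise in this section: snapshot, change the memory configuration, revert, and then delete the snapshot once it is no longer needed.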
A Virtual Machine snapshot preserves the following information:
In this section, you will create a Virtual Machine snapshot, make changes to the Virtual Machine's hardware and configuration state, and then revert back to the original state of the Virtual Machine by leveraging the vSphere Web Client Snapshot Manager.
In this step, you'll take a Snapshot of a Virtual Machine.
Note: When you take a snapshot of a powered-on virtual machine, you are given the option to capture the running VM's memory state. In our case, since we are in a lab environment, this will generate unneeded I/O.
Note the progress in the Recent Tasks pane. Once the snapshot task is complete:
Here you can view the details of the snapshot and verify it was taken.
In this section, you will change the memory configuration for the Virtual Machine.
To change the memory configuration for Windows10, we will need to shut it down.
NOTE: This is not the proper way to shut the VM down gracefully, but for our lab environment, it provides a quick way to power off a machine.
In this section, you revert the Virtual Machine's configuration back to the original state using the Snapshot Manager.
Here you can delete the snapshot that was taken.
It is a best practice to delete virtual machine snapshots when they are no longer needed. Over time, the snapshot delta can grow quite large, which could result in issues consolidating the virtual machine files and lead to performance issues.
You can watch the progress of the snapshot being deleted in the Recent Tasks window.
For more information on vSphere Virtual Machine Snapshots, be sure to check out this video.
A vSphere Datastore Cluster balances I/O and storage capacity across a group of vSphere datastores. Depending on the level of automation desired, Storage Distributed Resource Scheduler (Storage DRS) will place and migrate virtual machines in order to balance out datastore utilization across the Datastore Cluster.
In this section, you will create a vSphere Datastore Cluster using two iSCSI datastores.
Storage DRS provides multiple options for tuning the sensitivity of storage cluster balancing.
View the Recent Tasks to check the progress of the operation.
Leveraging vSphere Datastore Clusters in your vSphere environment can help to ensure datastores are filled evenly and I/O is spread out across the group of datastores in the cluster. Storage DRS can automate the initial placement of new virtual machines and adjust virtual machine placement to maintain an even distribution of I/O across the datastore cluster.
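Initial placement can be illustrated with a toy greedy algorithm: put each new disk on the datastore with the most free space. Real Storage DRS also weighs I/O latency and configurable thresholds, and the capacities below are made up:

```python
# Toy illustration of Storage DRS initial placement (greedy by free
# space). Real Storage DRS also considers I/O latency and thresholds.

datastores = {"ds-iscsi01": 500, "ds-iscsi02": 300}   # free GB (made up)

def place(disk_gb, datastores):
    """Place a disk on the datastore with the most free space."""
    target = max(datastores, key=datastores.get)
    datastores[target] -= disk_gb
    return target

for disk_gb in (120, 120, 120):
    print(place(disk_gb, datastores))
# The third disk lands on ds-iscsi02 once ds-iscsi01 is no longer
# the emptier datastore -- placement keeps the cluster balanced.
```

Even this simplified version shows the key benefit: successive placements automatically spread new virtual machines across the datastore cluster instead of filling one datastore first.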
Learn and Practice with Hands-On Labs to help prepare for several VMware Certifications.
This Lab can help you study for the industry-recognized VCAP-DCV 2021 Deploy certification, which validates that you know how to deploy and optimize VMware vSphere infrastructures.
Learn More Here: https://www.vmware.com/learning/certification/vcap-dcv-deploy.html
Conclusion
This section provides supplementary documentation and videos.
This video shows how to use the VMware vSphere web client to configure basic networking for your vSphere hosts using the vSphere Standard Switch (VSS).
Lightboard illustration of the vCenter Server High Availability options for each deployment type.
This video shows how to use the VMware vSphere web client to configure vCenter Server alarms and alerts and how to enable email notification.
This video shows how to secure VMware vSphere hosts with Lockdown Mode in order to limit direct access to the host console and to require administrators manage hosts through vCenter Server.
This video shows how to join a VMware vSphere host to a Microsoft Active Directory (AD) domain in order to allow administrators to use their Active Directory credentials to access and manage hosts.
This video shows how to use the VMware ESXi Firewall on the vSphere host to block incoming and outgoing communication and to manage the services running on the host.
A vCenter Single Sign On user account might be locked when a user exceeds the allowed number of failed login attempts. After a user account is locked, the user cannot log in to the Single Sign On system until the account is unlocked, either manually or after a certain amount of time has elapsed.
You specify the conditions under which a user account is locked in the Single Sign On Lockout Policy. Locked user accounts appear on the Users and Groups administration page. Users with appropriate privileges can manually unlock Single Sign On user accounts before the specified amount of time has elapsed. You must be a member of the Single Sign On Administrators group to unlock a Single Sign On user.
By default, after three failed login attempts, the user's account is locked.
In the lab, this policy has been disabled in order to prevent login issues that frequently occur with non-US keyboards.
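The lockout policy behavior -- lock after a number of failures, then unlock either manually or after a time window -- can be modeled with a small sketch. This is a hypothetical model, not the SSO implementation, and the threshold and window values are examples:

```python
# Hypothetical sketch of an SSO-style account lockout policy.

import time

class LockoutPolicy:
    def __init__(self, max_failures=3, unlock_after_s=300):
        self.max_failures = max_failures
        self.unlock_after_s = unlock_after_s
        self.failures = {}                # user -> (count, last failure time)

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, now))
        self.failures[user] = (count + 1, now)

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        if count < self.max_failures:
            return False
        return (now - last) < self.unlock_after_s   # auto-unlock after window

    def unlock(self, user):               # manual unlock by an SSO admin
        self.failures.pop(user, None)

policy = LockoutPolicy()
for _ in range(3):
    policy.record_failure("holuser@corp.local", now=1000)
print(policy.is_locked("holuser@corp.local", now=1001))   # True
print(policy.is_locked("holuser@corp.local", now=2000))   # False (window elapsed)
```

Manual unlock corresponds to an SSO administrator clearing the lock from the Users and Groups administration page before the timeout elapses.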
This section has been included for reference purposes only.
Log in to the vSphere Web Client as a user with SSO Admin privileges and navigate to Menu --> Administration.
Log out of the Web Client.
Depending on your vCenter Single Sign On privileges, you might not be able to view or edit your Single Sign On user profile. However, all users can change their Single Sign On passwords in the vSphere Web Client. The password policy defined in the vCenter Single Sign-On configuration tool determines when your password expires. By default, Single Sign-On passwords expire after 90 days in vSphere 6, but your system administrator might change this depending on the policy of your organization. If you choose to keep the defaults, remember to change the password for the administrator@vsphere.local account every 90 days, or it will be locked out on day 91.
In the upper navigation pane, click your user name to pull down the menu.
Select Change Password and type your current password.
Enter a new password.
Type a new password and confirm it.
Click the OK button to make the change.
NOTE: If you do change the password, please make sure to remember it for other activities in the lab.
This animated video shows how VMware Storage DRS reduces the time and complexity of provisioning virtual machines by aggregating data stores into a single pool, called a datastore cluster, enabling rapid placement of virtual machines and virtual machine disks.
This video reviews the process of creating and managing a datastore cluster in a vSphere environment.
Appendix
Welcome to Hands-on Labs! This overview of the interface and features will help you to get started quickly. Click next in the manual to explore the Main Console or use the Table of Contents to return to the Lab Overview page or another module.
In this lab you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.
You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.
You can also use the Online International Keyboard found in the Main Console.
In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.
Notice the @ sign entered in the active console window.
When you first start your lab you may notice a watermark on the desktop indicating that Windows is not activated.
A major benefit of virtualization is that virtual machines can be moved to and run on any platform. Hands-on Labs takes advantage of this and hosts labs in multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured VMware and Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet. Without this, the Microsoft activation process fails and you see this watermark.
This cosmetic issue has no effect on your lab.
Use the Table of Contents to return to the Lab Overview page or another module.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-2210-01-SDC
Version: 20230526-200356