VMware Hands-on Labs - HOL-1928-01-HCI


Lab Overview - HOL-1928-01-HCI - VxRail Getting Started

Lab Guidance


Note: It may take more than 90 minutes to complete this lab. You should expect to finish only 2-3 of the modules during your time. The modules are independent of each other, so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

This lab introduces VxRail 4.5 and VMware vSAN. In this lab, you will learn about and explore the following:

Lab Module List:

Module 1: Getting Started (10 minutes) - Basic

Module 2: Monitoring and Maintenance (10 minutes) - Intermediate

Module 3: Cluster Expansion - Adding Nodes (10 minutes) - Basic

Module 4: High Availability Options for Rack and Datacenter Failures (5 minutes) - Basic

Module 5: Space-Efficient Options (5 minutes) - Basic

Module 6: Data Security (5 minutes) - Basic

Module 7: Data Protection (5 minutes) - Basic

Module 8: Upgrading Infrastructure Software (10 minutes) - Intermediate

 Lab Captains:

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved.  All your work must be done during the lab session.  You can, however, click the EXTEND button to increase your time.  At a VMware event, you can extend your lab time twice, for up to 30 minutes; each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes; each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides directly typing it in, there are two helpful methods of entering data that make it easier to enter complex text.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Click once in active console window

 

In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.

  1. Click once in the active console window.
  2. Click on the Shift key.

 

 

Click on the @ key

 

  1. Click on the "@ key".

Notice the @ sign entered in the active console window.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs take advantage of this, which allows us to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have the full Internet access that Windows requires to verify the activation.  Without this access, the automated process fails and you see the watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Key Solution Benefits


Dell EMC VxRail Appliances, the fastest growing hyper-converged systems worldwide, are the standard for simplifying and modernizing VMware environments, regardless of where an organization starts or ends its IT transformation. As the only HCI appliances powered by Dell EMC PowerEdge platforms and fully integrated and pre-tested with VMware vSAN, they seamlessly extend existing VMware environments, simplifying deployment and enabling IT organizations to leverage in-house expertise and operational processes. Tight integration with VMware technologies ensures that VxRail Appliances across the infrastructure can be easily managed from a central location. VxRail accelerates the adoption of HCI and creates certainty in IT transformation.

VxRail Appliances are the standard for transforming VMware environments. They quickly and easily integrate into existing VMware ecosystems, removing IT lifecycle complexity while simplifying deployment, administration, and management. As such, they are an integral infrastructure option for modernizing and automating IT via IT Transformation, Digital Transformation, and Workplace Transformation.

For more information on VxRail please visit: http://www.dellemc.com/vxrail


 

Extend and simplify VMware environment

Managed through the well-known VMware vCenter Server, VxRail Appliances provide existing VMware customers a familiar experience, allowing them to seamlessly integrate VxRail into their existing IT infrastructure.

VxRail Appliances come fully loaded with integrated Dell EMC mission-critical data services, including replication, backup, and cloud tiering, at no additional charge, and are the industry's only hyper-converged appliances featuring kernel-layer integration between VMware vSAN and the vSphere hypervisor, delivering unique and unmatched performance and efficiency. As a fully optimized and supported VMware-based solution, the appliances also integrate with VMware's cloud management platform and end-user computing solutions. VxRail is also a foundational infrastructure platform that makes it simple to introduce advanced VMware SDDC offerings like NSX, vRealize Automation, and vRealize Operations.

The following image describes the software that is included with every VxRail Appliance.

 

 

 

Start small and grow

The Dell EMC VxRail Appliance allows you to start small - with as few as three nodes - and grow incrementally, scaling capacity and performance easily and non-disruptively up to 64 nodes per cluster. Single-node scaling and storage capacity expansion provide a predictable, pay-as-you-grow approach for future scale-up and scale-out as your business and user requirements evolve, without up-front planning.

 

 

 

Flexibility and choice

As the world's most configurable appliances, Dell EMC VxRail Appliances provide extreme flexibility, purpose-built to address any use case, including big data, analytics, 2D/3D visualization, and collaboration applications. Built with the latest PowerEdge servers based on Intel Xeon Scalable processors, VxRail Appliances deliver more predictable high performance with up to 2x more IOPS while cutting response times in half. The VxRail Appliance family offers GPU-optimized, storage-dense, high performance computing, and entry-level options to give you the perfect match for your specific HCI workload requirements.

 

 

Module 1: Getting Started

Introduction


Connecting to your vSphere Client - You will now connect to the vSphere Web Client session that you will use throughout the lab.

Click on the Google Chrome icon located on the Windows Taskbar or on the Desktop.

 


 

Select the vSphere bookmark

 

 

 

Login to the vSphere Web Client

Log in to the VMware vSphere Web Client using the following credentials:

User name: administrator@demo.local

Password: Password123!

 

 

 

Verify the Current Configuration under vCenter Server Management

We will now verify the configuration of the new VxRail Appliance, checking the resources that are currently under the management of the vCenter Server.

 

 

Hosts and Cluster

Click on the Hosts and Clusters button in the Navigator pane on the left

 

 

 

Verify the current configuration

 

1. Click on vcenter01.demo.local on the Navigator pane

2. Select the Summary tab

3. Observe on the Navigator pane that under the vCenter Server vcenter01.demo.local there is one datacenter, vlab-dc, and one cluster, vlab-cluster. vlab-cluster is the VxRail cluster.

4. Note that the vCenter Server in this lab environment is running version 6.5 Build 8024368.
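
If you prefer to verify this inventory from a script, the same information is exposed through the vSphere API. The following is a minimal sketch using the open-source pyVmomi SDK (an assumption - pyVmomi is not part of this lab image and would need to be installed with "pip install pyvmomi"); the hostname and credentials are the ones used in this lab.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# The lab's vCenter uses a self-signed certificate, so skip verification.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter01.demo.local",
                  user="administrator@demo.local",
                  pwd="Password123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    print("vCenter:", content.about.fullName)  # version and build
    for dc in content.rootFolder.childEntity:
        if isinstance(dc, vim.Datacenter):
            print("Datacenter:", dc.name)
            for cl in dc.hostFolder.childEntity:
                if isinstance(cl, vim.ClusterComputeResource):
                    print("  Cluster:", cl.name, "| hosts:", len(cl.host))
finally:
    Disconnect(si)

Run against this lab, the sketch should list the vlab-dc datacenter and the vlab-cluster VxRail cluster.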

 

 

Synchronize the Storage Providers

1. Select vcenter01.demo.local on the Navigator pane

2. Select the Configure tab on the main pane

3. Select Storage Providers

4. Click on the Synchronize button

 

 

Conclusion


Congratulations on completing Module 1.

Proceed to any module below that interests you most.

 


 

How to End Lab

 

To end your lab, click on the END button.

 

Module 2 - Monitoring and maintaining the logical and physical health of the VxRail cluster

Monitoring and maintaining the logical and physical health of the VxRail cluster


In this module you will navigate through the VxRail Manager interface to become more familiar with the options available to monitor the health indicators of the VxRail cluster, and see how these functions can simplify the management of your environment.  You will also have the opportunity to execute a few hardware maintenance simulations.


 

Make sure you are connected to the VxRail Manager

Make sure you are connected to the VxRail Manager interface. If not, click on the Google Chrome icon located on the Windows Taskbar or on the Desktop.

 

 

 

Select the VxRail Manager bookmark to open up the VxRail Manager Web Interface

 

If asked to acknowledge the security exception, click on the Advanced link, and then click on Proceed to vxm.demo.local; otherwise go directly to the Log On page.

 

 

Log On to VxRail Manager

1. Use the following credentials to log in to VxRail Manager:

2. Click Authenticate

 

 

 

Temporary - Unmute Cluster Health Status

 

We will have to temporarily unmute the cluster's health status. If you see the orange notice on the VxRail Manager main page, please perform the following steps:

1. Click on Config on the vertical bar

2. In the Config page, select General and scroll down to "Cluster Health Monitoring"

3. Select On to turn on Health Monitoring

4. Click Apply

 

 

 

Navigate through the VxRail Manager Dashboard Page

 

The VxRail Manager dashboard shows system health and support resources at a glance, including expansion status, overall system health, support, community activity, and event history.

Here is a brief explanation of the information sections in the dashboard:

When a new upgrade package is downloaded or installed, the status of the upgrade task will be displayed in the dashboard.

When a new node is detected by VxRail Manager, the node information will be displayed in the upper left portion of the screen.  You should already see the node EMCVLAB40000000 in your dashboard. 

Note: this node will be employed in the next lab module. 

The Overall System Health area shows the high-level system status of your VxRail Appliance. Status is shown as one of the following:

Note: A Critical status message may be displayed during the execution of the labs. This can be ignored.

The VxRail Community area shows the most recent articles and other content from the online VxRail community.

Support displays status and links to support resources, including:

Access to ESRS (EMC Secure Remote Services) is not allowed in this virtual lab environment, which explains the heartbeat message in the Support area.

Event history displays the most recent system events. 

 

 

Navigate to VxRail Manager - System Events

1. Click on the Events tab within VxRail Manager

 

The VxRail Manager Events tab displays a list of current events.

The events list can be sorted by ID, Severity, or Time.

New critical events are displayed in red.

If a physical component is listed in the 'Event Details', the 'Component ID' field will have a link to the Health > Physical screen to facilitate visualization and identification of the component.

A CSV file with the event messages can be created and exported/downloaded via the web browser.
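
The CSV export above is a VxRail Manager UI function. As a related illustration, cluster events are also logged in vCenter and can be exported to CSV through the vSphere API. A minimal sketch using pyVmomi's EventManager (assuming a ServiceInstance "si" connected as in the Module 1 sketch):

import csv
from pyVmomi import vim

# Query the most recent events with an empty filter spec.
events = si.RetrieveContent().eventManager.QueryEvents(
    vim.event.EventFilterSpec())
with open("events.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "message"])
    for e in events:
        writer.writerow([e.createdTime, e.fullFormattedMessage])

Note that this captures vCenter events, not the VxRail Manager event list shown on this screen.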

 

 

Next we will navigate to the VxRail Manager Health Tab

1. Select HEALTH on the vertical bar within VxRail Manager

2. Select the Logical tab at the top of the screen.

 

This screen displays CPU, memory, and storage usage for your entire cluster, individual appliances, and individual nodes. The color-coded status for storage IOPS, CPU usage, and memory usage indicates the following:

Observe that in the storage information display, VxRail Manager provides a summary of total provisioned and used capacity which can be used to identify over-provisioning levels.

Note on the upper part of the screen that you can select to display information either for the 'Cluster' as a whole or for a specific node. The product serial numbers of the hosts (PSNT) are used to identify the hosts.

 

 

Select one of the VxRail Appliances

You may need to scroll down the page to see the VxRail Manager Appliance information within the Health > Logical tab.

1. Click on the appliance serial number EMCVLAB1000000

 

 

 

VxRail Manager Logical System Health - ESXi Node Components

Scroll down to the ESXi Nodes area

Note that in this screen we have the host name associated with the PSNT selected, node6001-dev.demo.local in our example.

This view provides the status of the host components.

By clicking on the '>' expand sign we can obtain more information about the components.

This is a fast way to check the status of the host components.

 

 

 

 

VxRail Manager Physical System Health - Summary View

1. Scroll up and click the Physical tab

The Physical tab of the VxRail Manager Health window displays information about the hardware components of your appliance. A graphical representation of the appliances in your cluster makes it easy to navigate for event and status information.

 

 

 

 

Click on the first appliance image to expand the information

1. Make sure you are on the Physical tab view.

2. Click on the Appliance image

 

 

 

 

Physical node information

 

You can view an appliance's status and information such as ID, serial number, service tag, and individual appliance components. You can drill down to see status and information for appliance components such as disks, compute nodes, NIC ports, and power supplies.

In the upper left part of the screen you can see that the service tag of our first appliance is 5HB4YK2. This is a P570 Model. 

In the main part of the screen we have a detailed view about the front end and back end characteristics of the appliance.

In case of problems with any of the "Customer Replaceable" hardware components, the failed component is highlighted to facilitate identification.

 

 

Front View of the node

The front view provides disk drive information.

To simplify serviceability, VxRail has pre-defined slots for the capacity drives as well as cache drives of each disk group.

In the P570 models we can have up to 4 Disk Groups per node with a maximum of 5 capacity disks per group.

The first 20 slots that we see in the front view image are reserved for capacity drives and the last 4 slots are reserved for cache drives.

We can observe that we only have 3 capacity disks in our first disk group.
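
As a worked illustration of the slot arithmetic above (assuming, for illustration, that capacity slots are allocated contiguously, five per disk group, which matches slot 0 belonging to disk group 1 and slot 20 being that group's cache drive):

# P570 slot layout: slots 0-19 hold capacity drives (5 per disk group),
# slots 20-23 hold the cache drives of disk groups 1-4 respectively.
for slot in range(24):
    if slot < 20:
        group, role = slot // 5 + 1, "capacity"
    else:
        group, role = slot - 19, "cache"
    print("slot %2d -> disk group %d (%s)" % (slot, group, role))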

 

 

Hover the mouse over the front view and click on the disk that is in slot 0

 

Scroll down to the disk information display

Observe that the disk type is HDD, and the capacity available for use is 1.09 TB

 

 

Hover the mouse over the front view image again and click on slot 20 this time

 

This is the cache drive of disk group 1

Observe that this is an SSD drive

Note that the 'Remaining Write Endurance' is displayed for all flash drives.

Monitoring of the wear level of the flash drives is done automatically by VxRail Manager. In case the endurance of any flash drive falls below a pre-determined threshold, the system will send alert messages to the support center, in addition to logging an event.

 

 

VxRail Manager Physical System Health - Disk drive replacement simulation

From the same disk information window we can initiate a drive replacement procedure.

1. Click on the Replace Disk link

Note that in this virtual lab we will execute a simulated drive replacement procedure, for illustration only.

 

 

 

Hardware Replacement dialog will appear

 

1. Click on Continue

 

 

Pre-check for Hardware Replacement

A pre-check is executed to ensure that the hosts are in the appropriate state and that the cluster health allows the execution of the procedure.

After the pre-check is complete, click on Continue   

 

In a real environment VxRail performs a disk clean-up and displays a status bar showing progress.

At the end of the cleanup the disk will be ready to be replaced.

Because we are in a virtual environment, this cleanup procedure will fail.

Please click on Cancel

Click Confirm on the 'Abort Disk Replacement' pop-up dialog box

Click Done

 

 

VxRail Manager Physical System Health - Back view - Node information

1. Hover the mouse over the graphics and click on the node in the Back View

 

 

 

 

Details about the node will be displayed

 

Under node information we can obtain the BIOS firmware revision, ESXi and VIB versions, Boot device information, and BMC firmware revision.

The easy access to this information can facilitate serviceability.

 

 

VxRail Manager Physical System Health - Back view - Network Interface Controller Information

1. Hover the mouse over the graphics and click on the Network interface card.

This screen will provide us with the MAC addresses, link speed and status of the ports.

 

 

 

VxRail Manager Physical System Health - Back view - Power Supply information

1. Hover the mouse over the graphics and click on the Power Supply

This screen will provide us with the serial number, revision number and part number.

 

 

 

VxRail Manager Physical System Health - Back view - Node Shutdown

The node shut down procedure can be quite useful when replacing certain hardware components or performing other maintenance procedures.

This procedure performs a series of checks to ensure that the host and the cluster are in a state that will allow the execution of a clean procedure.

 

1. For a simulation, hover the mouse over the graphics and click on the Node

2. Click on the Shutdown link

 

 

Shut Down Node dialog will display

 

Leave the "Move powered-off and suspended virtual machines to the other hosts" box unchecked

Click Confirm on the 'Shut Down Node' pop-up dialog box.

 

 

Shut Down Node pre-check

 

As in the disk replacement procedure, a pre-check is executed to ensure that the hosts are in the appropriate state and that the cluster health allows the execution of a node shutdown procedure; in the case of a node shutdown, additional verifications have to be executed.

After the pre-check is complete, click Continue

The procedure will put the host in maintenance mode and then shut it down.

Wait for the message indicating that the node is powered off before proceeding.

During host maintenance procedures VxRail Manager mutes the health monitoring. When this happens, an alert is displayed in orange at the top of the screen.
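
For reference, the vSphere operations that VxRail Manager automates here - entering maintenance mode and shutting the host down - can also be issued through the API. A minimal pyVmomi sketch (assuming a ServiceInstance "si" connected as in the Module 1 sketch, and the host name used in this lab):

from pyVim.task import WaitForTask

content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(
    dnsName="node6001-dev.demo.local", vmSearch=False)
# Wait up to 300 seconds for the host to enter maintenance mode,
# then request a clean (non-forced) shutdown.
WaitForTask(host.EnterMaintenanceMode_Task(timeout=300))
WaitForTask(host.ShutdownHost_Task(force=False))

Unlike VxRail Manager, this raw API call does not run the VxRail-specific pre-checks, so in a real cluster the VxRail Manager procedure is the recommended path.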

 

 

 

Node Shut Down Status

Once completed, the node information will display that the node is powered off

 

 

 

VxRail Manager Physical System Health - Enabling System Health Monitoring

 

We will have to unmute the System Health Monitoring before proceeding to the next exercise.

1. Click on the Config icon

2. Select the General tab on top of the screen

3. Scroll down to Cluster Health Monitoring

4. Select the option On for Health Monitoring

5. Click Apply

Ignore alert messages about maintenance activity in progress. The orange bar should disappear.

 

 

VxRail Manager Physical System Health - Cluster shutdown procedure

There are situations in which the shutdown of the entire cluster is required; for example, when the appliances are being physically relocated.

For these situations VxRail Manager provides a cluster shutdown function that simplifies and automates the entire process, reducing the time it takes. This can be quite useful, especially when the cluster has a large number of hosts.

On the same Config > General view, scroll down to Shut Down Cluster

Click the Shut Down button

Click Confirm on the 'Shut Down Cluster' pop-up dialog box

 

 

 

 

Shut Down Cluster Dialog will Display

 

The confirmation to shut down the cluster will display. Click Confirm.

 

 

Cluster Shut Down Pre-check

As we saw in the previous exercises, a pre-check has to be executed to ensure that the cluster and nodes are in the proper state for a normal shutdown.

One check in particular is that all customer virtual machines have been shut down, to ensure a graceful shutdown and a clean restart afterwards.

After the Pre-check is complete, click the Shut Down button

 

 

 

 

VxRail Manager Physical System Health - Enabling System Health Monitoring

After the previous host maintenance procedure, it will be necessary to unmute the System Health Monitoring.

Scroll up to Cluster Health Monitoring

Select the option On for Health Monitoring

Click Apply

 

 

Conclusion


Congratulations on completing Module 2.

Proceed to any module below that interests you most.

 


 

How to End Lab

 

To end your lab, click on the END button.

 

Module 3 - Adding a node to an existing VxRail Cluster

Adding a node to an existing VxRail Cluster


In this module you will learn how to add a node to your VxRail cluster, which is a very simple process. One of the core benefits provided by VxRail is to allow a configuration to start small, at the right cost to satisfy the current demands, and then grow the configuration as needed, in small increments.

Note: Starting with VxRail software version 4.5.150, the first 3 nodes in a cluster must be identical (previous versions required the first 4 nodes to be identical). Additionally, VxRail clusters must be entirely all-flash or entirely hybrid.


 

Connect to the VxRail Manager Web Browser

Make sure you are connected to the VxRail Manager interface. If not, click on the Google Chrome icon located on the Windows Taskbar or on the Desktop.

 

 

 

Select the VxRail Manager bookmark to open up the VxRail Manager Web Interface

 

If asked to acknowledge the security exception, click on the Advanced link, and then click on Proceed to vxm.demo.local; otherwise go directly to the Log On page.

 

 

Log On to VxRail Manager

Use the following credentials:

Click Authenticate

 

 

 

Navigate through the VxRail Manager Dashboard Page

 

When we power on a new ESXi node that is connected to the same network as the VxRail cluster, the node is automatically discovered by VxRail Manager. The information that a new node has been detected is displayed on the VxRail Manager dashboard.

1. Click Dashboard on the vertical bar

2. Select the node to be added to the cluster

3. Click on Add Nodes

Note: Up to 6 nodes can be selected at a time for the cluster expansion procedure.

 

 

Enter the vSphere Credentials

 

Enter the following credentials:

Click Next

 

 

Allocate new IP addresses for your cluster

 

During the installation of this system we provisioned 4 IP addresses to each network, but only configured 3 hosts.

Because we have an extra IP available in each network we can proceed without any changes.

However, if only 3 IP addresses had been provisioned, we would have to explicitly enter a new IP address for each of the 3 networks maintained in this step: Management, vMotion, and vSAN.

Click Next

 

 

Final Steps - Validation and Building of the Node

The final steps of the node expansion process are to validate the configuration and confirm the build request.

The Cluster Expansion process can be monitored on the VxRail Manager Dashboard while in progress. Upon completion, the Cluster Expansion section in the Dashboard disappears, and the health and other information about the new node can be observed, as demonstrated in the previous Cluster Monitoring and Maintenance module.

 

We will NOT carry out the Validate and Build steps because of resource constraints in the virtual lab. In a production VxRail cluster, a node takes between 7 and 10 minutes to be added.

 

Click Cancel now

 

 

Conclusion

In this module we demonstrated the process of performing a cluster expansion.

Please proceed to the next lab module.

 

Conclusion


Congratulations on completing Module 3.

Proceed to any module below that interests you most.

 


 

How to End Lab

 

To end your lab, click on the END button.

 

Module 4 - Defining fault domains in the cluster configuration

Defining fault domains in the cluster configuration


When fault domains are enabled, vSAN ensures that the components protecting each object are placed in separate fault domains. Fault domain enablement is necessary when trying to protect against rack, room, floor, and local site failures. The purpose of this module is to increase your level of familiarity with the fault domain definition.


 

Connecting to vSphere Web Client

You will now connect to the vSphere Web Client.

 

Click on the Google Chrome icon located on the Windows Taskbar or on the Desktop.

 

 

Select the vSphere bookmark

 

 

 

Login to the vSphere Web Client

Log in to the VMware vSphere Web Client using the following credentials:

User name: administrator@demo.local

Password: Password123!

 

 

 

Navigate to Fault Domains & Stretched Cluster configuration page


1. Select the vlab-cluster within your Hosts and Clusters page

2. Click on the Configure tab

3. Scroll down to Fault Domains & Stretched Cluster

4. Click on the + sign

 

 

 

Add hosts to the Fault Domains

In the New Fault Domains configuration, enter the following information:

1. In the Name field, type FD01

2. Select a host to be inserted in the Fault Domain. In this first example, select node6001-dev.demo.local

3. Click OK

Repeat the process for all hosts, creating three different fault domains (FD01 to FD03)

 

 

 

Three Fault Domains Defined

At the end of the configuration you will have three Fault Domains defined (you may need to click on the Refresh icon for the fault domains you have just created to be displayed).

 

In our example we only have three hosts, but consider a larger configuration with 16 hosts and 4 racks; in this case, we would be able to allocate 4 hosts to each of the 4 fault domains that we have defined, and place the hosts of each FD in their own rack, thus providing an efficient way to protect against rack failures. Without the fault domain definition, each host would be its own fault domain.
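
For completeness, host-to-fault-domain assignment can also be scripted through the per-host vSAN configuration. The following is a hedged sketch (assuming a ServiceInstance "si" connected as in the Module 1 sketch, and that the vSphere 6.0+ VsanHostConfigInfo.faultDomainInfo property is exposed under the pyVmomi names used below):

from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
cluster = content.searchIndex.FindByInventoryPath("vlab-dc/host/vlab-cluster")
# Place each host in its own fault domain: FD01, FD02, FD03, ...
for i, host in enumerate(cluster.host, start=1):
    spec = vim.vsan.host.ConfigInfo(
        faultDomainInfo=vim.vsan.host.FaultDomainInfo(name="FD%02d" % i))
    WaitForTask(host.configManager.vsanSystem.UpdateVsan_Task(spec))

In practice the vSphere Web Client workflow shown above is the supported path on VxRail; the sketch only illustrates that fault domains are ordinary vSAN host settings.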

 

Conclusion


Congratulations on completing Module 4.

Proceed to any module below that interests you most.

 


 

How to End Lab

 

To end your lab, click on the END button.

 

Module 5 - Deduplication and Compression

Deduplication and Compression Overview


The Deduplication and Compression feature is enabled in this vSAN cluster.

In this module we will check the space savings obtained from deduplication and observe the object types created for metadata management.

We will not enable or disable the deduplication feature because doing so requires a rolling reformat of all the disks.


 

Connecting to vSphere Web Client

You will now connect to the vSphere Web Client.

 

Click on the Google Chrome icon located on the Windows Taskbar or on the Desktop.

 

 

Select the vSphere bookmark

 

 

 

Login to the vSphere Web Client

Log in to the VMware vSphere Web Client using the following credentials:

User name: administrator@demo.local

Password: Password123!

 

 

 

Navigate to the General vSAN Settings page

Once logged in to the vSphere Web Client, navigate to Hosts and Clusters

1. Select the vlab-cluster within your Hosts and Clusters page

2. Click on the Configure tab

3. Scroll down to vSAN General Settings

 

Observe in the 'vSAN is Turned ON' area that the Deduplication and Compression feature is Enabled.

Let's go to the next step.

We will not execute the enable/disable function because it requires a disk reformat that can take more than 20 minutes in this lab environment.

 

 

Checking Space Savings

You will now check the space savings from Deduplication and Compression.

1. Navigate to the Storage tab within the Navigator panel

2. Select the VxRail-Virtual-SAN-Datastore

 

 

 

Select Summary view to check deduplication savings

1. On the Navigator pane, ensure that the VxRail-Virtual-SAN-Datastore is selected

2. On the main view, select Monitor

3. Select vSAN

 

The Deduplication and Compression Overview section provides the capacity figures via Used Before and Used After charts.

Let us study the numbers on the right side of the main pane, reported as USED BEFORE and USED AFTER.

Before data reduction we were using nearly 177 GB in our virtual cluster. After deduplication and compression were enabled, and after running a "dedup friendly" workload, the amount of used capacity was reduced to approximately 15 GB - a reduction of roughly 12x, reported as a ratio.

Note that the data reduction ratio is totally dependent on the application data.

It is also important to observe that Deduplication and Compression reserves about 5% of the total raw capacity to store the deduplication metadata. In our virtual lab the deduplication and compression overhead is 7.62 GB and our total allocated capacity is 143.95 GB.
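
A quick arithmetic check of the figures above (worked numbers, not lab output):

used_before_gb = 177.0
used_after_gb = 15.0
print("Reduction ratio: %.1fx" % (used_before_gb / used_after_gb))   # ~11.8x

overhead_gb = 7.62      # deduplication/compression metadata overhead
total_raw_gb = 143.95   # total allocated raw capacity in this lab
print("Metadata overhead: %.1f%%" % (100 * overhead_gb / total_raw_gb))  # ~5.3%

The ~11.8x result is what the UI rounds to the ~12x ratio, and 7.62 GB out of 143.95 GB is close to the ~5% metadata reservation described above.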

 

Conclusion


Congratulations on completing Module 5.

Proceed to any module below that interests you most.

 


 

How to End Lab

 

To end your lab, click on the END button.

 

Module 6 - VxRail Encryption

Encryption


Native support for Data at Rest Encryption (DARE) was introduced with vSAN 6.6, and is available on VxRail release 4.5.

Encryption can be enabled on both Hybrid and All Flash models.

In this module we will illustrate a few concepts and components needed to enable the native Data at Rest Encryption.

We highly recommend reading VMware's Data at Rest Encryption guide, which is available for download at https://storagehub.vmware.com/export_to_pdf/vsan-data-encryption-at-rest


 

Encryption at VM level versus at vSAN storage level - Capacity considerations

The core benefit of implementing Data at Rest Encryption at the storage level is that we can provide a higher level of data security without losing the benefits of data reduction features, such as deduplication and compression.

When encryption occurs within a virtual machine at the host level, the chances of finding duplicate data blocks are significantly reduced. Also, data that might once have been easily compressible is likely no longer as compressible.

By moving the encryption to the storage system, encryption can be done after data reduction services are applied, as data is being written to persistent media, preserving the ability to optimize the use of the storage capacity.

 

 

Main components participating in vSAN Encryption

There are three parties participating in the vSAN Encryption domain of trust:

1. Key Management Server (KMS) or a KMS Cluster

2. vCenter

3. vSphere Hosts with vSAN enabled (vSAN host)

VMware vCenter and vSphere hosts can only use a KMS after establishing trust with the KMS. A digital certificate must be provided to the KMS from the vCenter environment.

 

 

Key Management Server - KMS

A Key Management Server (KMS) has to be available to provide standards-compliant lifecycle management of encryption keys.

Tasks such as key creation, activation, deactivation, and deletion of encryption keys are performed by Key Management Servers.

vCenter Server provides a central location for Key Management Server configuration that is available to be used by either vSAN Encryption or VM Encryption.

Certificates used to establish the trust with the KMS are persisted into the VMware Endpoint Certificate Store (VECS).

These certificates are shared by both vSAN Encryption and VM Encryption. To ensure proper trust between the hosts and the KMS, the certificates and the KEK_ID (Key Encryption Key ID) are pushed to vSphere hosts for vSAN Encryption.

Using the KEK_ID and the KMS configuration, hosts can communicate directly with the KMS cluster without depending on vCenter being available.

The KMS should be external to the vSphere/vSAN cluster being encrypted.

Choosing a KMS solution that provides a resilient and available KMS infrastructure is an important part of the vSAN Encryption design.

A list of Key Management Server solutions compatible with vSAN Encryption and VM Encryption can be found on the VMware site. Key managers are provided by third-party vendors, and at the time of this writing, two vendors are on the hardware compatibility list for key managers: HyTrust and Dell EMC CloudLink (https://www.emc.com/collateral/handouts/h14453-cloudlink-secure-vm-pb.pdf)
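
Once a KMS has been registered with vCenter, its configuration can be inspected programmatically. Below is a minimal sketch using pyVmomi's CryptoManagerKmip (assuming a ServiceInstance "si" connected as in the Module 1 sketch, against a vSphere 6.5 or later vCenter; no KMS is configured in this lab, so the list would likely be empty):

# On vCenter, content.cryptoManager is a CryptoManagerKmip instance.
crypto_mgr = si.RetrieveContent().cryptoManager
for kmip_cluster in crypto_mgr.ListKmipServers():
    marker = " (default)" if kmip_cluster.useAsDefault else ""
    print("KMS cluster:", kmip_cluster.clusterId.id + marker)
    for server in kmip_cluster.servers:
        print("  server:", server.name, server.address, server.port)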

 

 

 

 

 

Enable vSAN Encryption

Once trust is established between the KMS and vCenter, a vSAN cluster (with vSAN Enterprise licensing) may use vSAN Encryption.

After the KMS has been configured, vSAN Encryption is easily enabled through the cluster management UI in the vSphere Web Client, by configuring vSAN's general settings.

vSAN Encryption is a configuration option that affects the entire cluster, and it requires all disks to be reformatted.

A common recommendation is to enable encryption before loading applications onto the system in order to avoid the overhead of the reformatting.

When encrypting a system that is already in use, consider the following options:

  1. Erase disks before use - This wipes any data from the disk before encryption occurs.
  2. Allow Reduced Redundancy - vSAN will reduce the protection level during the enable/disable process. This reduces the overhead of the reformatting process.

 

 

Conclusion


Congratulations on completing Module 6.

Proceed to any module below that interests you most.

 


 

How to End Lab

 

To end your lab, click on the END button.

 

Module 7 - Creating and Managing Snapshots

Creating and Managing Snapshots


VMware snapshots provide the ability to capture the point-in-time state of a virtual machine. This includes the VM's storage, memory, and other devices, such as virtual NICs.

Using the Snapshot Manager in the vSphere Web Client, administrators can create, revert, or delete VM snapshots. A chain of up to 32 snapshots per VM is supported.

This module briefly demonstrates the Snapshot functionality.


 

Connecting to vSphere Web Client

You will now connect to the vSphere Web Client.

 

Click on the Google Chrome icon located on the Windows Taskbar or on the Desktop.

 

 

Select the vSphere bookmark

 

 

 

Login to the vSphere Web Client

Log in to the VMware vSphere Web Client using the following credentials:

User name: administrator@demo.local

Password: Password123!

 

 

 

Create a Snapshot of the VxRail Manager virtual machine

At the Navigator pane of your vSphere Web Client session

1. Right-click the VxRail Manager VM

2. Click Snapshots

3. Click Take Snapshot

 

Note: Starting with vSAN 6.0 U2, you have the ability to take up to 32 snapshots of a single VM.

 

 

Naming your Snapshot

A name is automatically generated with a timestamp to facilitate the identification of an image when reverting, but this name can be modified at will.

You can choose to include the VM's memory as part of the snapshot operation. However, when the memory content is part of the snapshot, the snapshot takes longer to complete.

Click OK

The 'Create virtual machine snapshot' task should complete in seconds.
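
The same snapshot operation is available through the vSphere API. A minimal pyVmomi sketch (assuming a ServiceInstance "si" connected as in the Module 1 sketch; the VM name "VxRail Manager" matches this lab's inventory):

from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "VxRail Manager")
view.DestroyView()
# memory=True also captures RAM (slower); quiesce=False skips guest quiescing.
WaitForTask(vm.CreateSnapshot_Task(name="before-change",
                                   description="lab demo snapshot",
                                   memory=True, quiesce=False))
# To revert to the most recent snapshot later:
# WaitForTask(vm.snapshot.currentSnapshot.RevertToSnapshot_Task())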

 

 

 

Revert VM to Snapshot image

At the Navigator pane of your vSphere Web Client Session

Right-click the VxRail Manager VM

Click Snapshots

Click Manage Snapshots

 

You will now revert the VM to the state it was in when the snapshot was taken. The revert process will suspend the VM; you will power it on and reconnect later.

 

Conclusion


Congratulations on completing Module 7.

Proceed to any module below that interests you most.

 


 

How to End Lab

 

To end your lab, click on the END button.

 

Module 8 - Upgrading infrastructure software

Upgrading the VxRail Manager Software


This is the last module. In this module you will install updates for the system software installed on your VxRail Appliance.

The software that makes up the VxRail Appliance includes VMware ESXi, VMware vCenter Server, vSAN, and VxRail Manager.


 

Connect to the VxRail Manager Web Browser

Make sure you are connected to the VxRail Manager interface. If not, click on the Google Chrome icon located on the Windows Taskbar or on the Desktop.

 

 

 

Select the VxRail Manager bookmark to open up the VxRail Manager Web Interface

 

If asked to acknowledge the security exception, click on the Advanced link, and then click on Proceed to vxm.demo.local; otherwise go directly to the Log On page.

 

 

Log On to VxRail Manager

Use the following credentials:

Click Authenticate

 

 

 

Initiating a Software Upgrade Procedure

 

When a new software version is available for upgrade, CONFIG in the left navigation bar displays a highlighted number.

In this virtual lab, to expedite the process, we have already executed the initial step of the upgrade, which is to load the composite bundle.

We will now re-initiate the upgrade process, but will not have to wait for the file download.

  1. Click CONFIG on the vertical panel at the left of the screen
  2. Select System on the top horizontal bar
  3. Select Internet upgrade

 

 

 

Enter Support Account Login

 

Enter the following credentials:

Click login

The page will be refreshed.

 

 

 

Verify Components included in the upgrade bundle

 

The page should display that VxRail is ready to upgrade your cluster.

Note that VxRail Manager is the only component included in our bundle.

1. This is an upgrade from VxRail Manager 4.5.100 to 4.5.150.

2. Click Continue

 

 

Enter credentials

 

Root privileges are required to continue.

Enter the credentials for the VxRail Manager upgrade

1. vCenter Server Administrator Account

<<<< Please note here that the domain in this case is vsphere.local >>>>

2. VxRail Manager account

3. Click Submit.

 

 

Conclusion

 

Monitor progress

The time to execute this upgrade will vary depending on the amount of resources available in the lab infrastructure. It will likely take more than 5 minutes to complete in this lab. In a production VxRail environment, the upgrade of VxRail Manager takes a few minutes to complete and is followed by a reboot of the VxRail Manager VM.

This is the last step of the lab. You can either conclude the lab now or wait for the completion of the upgrade.

If you decide to wait for the completion of the upgrade process:

 

Conclusion


Congratulations on completing Module 8.

Proceed to any module below that interests you most.

 


 

How to End Lab

 

To end your lab, click on the END button.

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-1928-01-HCI

Version: 20200210-210718