VMware Hands-on Labs - HOL-1808-01-HCI


Lab Overview - HOL-1808-01-HCI - vSAN v6.6.1 - Getting Started

Lab Guidance


Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time.  The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Lab Module List:

 Lab Captains:

Special Thanks for their guidance and assistance:

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the Right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer. The lab cannot be saved, so all your work must be done during the lab session, but you can click EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes; each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes; each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides typing it in directly, there are two very helpful methods of entering data that make it easier to enter complex text.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Module 1 - vSAN 6.6.1 Setup and Enablement (15 Minutes, Beginner)

Introduction


What is VMware vSAN?

VMware vSAN is a storage solution from VMware, released as a beta version back in 2013, made generally available to the public in March 2014, and reached version 6.6.1 in July 2017. vSAN is fully integrated with vSphere. It is an object-based storage system and a platform for Virtual Machine Storage Policies that aims to simplify Virtual Machine storage placement decisions for vSphere administrators. It fully supports and is integrated with core vSphere features such as vSphere High Availability (HA), vSphere Distributed Resource Scheduler (DRS), and vMotion.

This Module contains the following lessons:

 

 


VMware vSAN Overview


Before we jump into the lab, let's take a moment to review vSAN in greater depth ...


 

What is VMware vSAN

As a component of vSphere, vSAN extends the hypervisor to pool and abstract server-based storage resources, much the way vSphere pools and abstracts compute resources. It is designed to be much simpler and more cost-effective than traditional external storage arrays. Users of vSphere should be able to learn vSAN and become productive quickly.

vSAN is fully integrated with vSphere, and supports almost all popular vSphere functionality: DRS, HA, vMotion and more. vSAN is also integrated with the vRealize suite.

Administrators define storage policies, and assign them to VMs. A storage policy will define availability, performance and provisioning requirements (e.g. thin). When a VM is provisioned, vSAN will interpret the storage policy, and configure the underlying storage devices to satisfy the policy automatically (e.g. RAID 1). When the storage policy is changed, vSAN will automatically reconfigure resources to satisfy the new policy.
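If you prefer to see this workflow as a script, here is a minimal PowerCLI sketch of the same idea, assuming PowerCLI 6.5 R1 with the SPBM cmdlets and the lab's vCenter; the policy and VM names are illustrative only:

# Minimal PowerCLI sketch - policy and VM names are illustrative
Connect-VIServer -Server vcsa-01a.corp.local
# List the vSAN capabilities that storage policy rules can reference
Get-SpbmCapability | Where-Object { $_.Name -like 'VSAN.*' } | Select-Object Name
# Assign an existing policy to a VM; vSAN reconfigures the VM's objects to comply
$policy = Get-SpbmStoragePolicy -Name 'vSAN Default Storage Policy'
$vmCfg  = Get-SpbmEntityConfiguration -VM (Get-VM -Name 'core-A')
Set-SpbmEntityConfiguration -Configuration $vmCfg -StoragePolicy $policy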

Key points:

• Software-defined storage, fully integrated with vSphere

• Uses internal server components to create a shared storage pool across a single cluster

• Uses storage policies to provide per-VM storage services

Technical characteristics:

• Highly-resilient scale-out storage cluster, dynamically expandable and reconfigurable

• Very resource efficient: more performance, more consolidation

• Hybrid configurations use flash as cache, magnetic disks for capacity

• All-flash configurations use flash for both cache and capacity

• Scales to 62TB VMDKs, 64 nodes, 35 capacity devices per node, 200 VMs per node

• Up to 7M read-only 4K IOPS in a single 64-node cluster (that's a lot!)

 

 

Customer Benefits

 

Simple

Compared to traditional storage solutions, vSAN is exceedingly simple to install and operate day-to-day. Storage is presented as a natural extension of the vSphere management experience. Policy-based management dramatically simplifies the provisioning of storage services for VMs.

High Performance

vSAN's deep integration with the vSphere kernel and use of flash dramatically improves application performance as compared to traditional storage solutions. Applications that require even higher levels of predictable performance can use all-flash configurations.

Lower TCO

vSAN can lower TCO by up to 50% by using a streamlined management model as well as cost effective server storage components. Expanding either capacity or performance involves simply adding more resources to the cluster: flash, disks or servers.

 

 

Primary Use Cases

 

 

 

vSAN Adoption

 

 

VMware vSAN 6.6.1 requirements


Let's review the requirements for deploying VMware Virtual SAN.


 

vCenter Server

vSAN 6.6.1 requires ESXi 6.5 U1 or later and vCenter Server 6.5 U1 or later. vSAN can be managed by both the Windows version of vCenter Server and the vCenter Server Appliance (VCSA).

vSAN is configured and monitored via the vSphere Web Client and this also needs to be version 6.5 U1 or later.

 

 

vSphere ESXi

A standard vSAN deployment requires at least 3 vSphere hosts (where each host has local storage) in order to form a supported vSAN cluster. This is to allow the cluster to meet the minimum availability requirements of tolerating at least one host failure. The vSphere hosts must be running vSphere 6.5 U1 or later. In vSAN clusters with only 2 hosts there is a risk to the availability of virtual machines if a single host goes down unless a special 2-Node deployment is performed (this 2-Node architecture is covered in Module 7 of this lab). The maximum number of hosts supported is 64.

Each vSphere host in the cluster that contributes local storage to vSAN must have at least one hard disk drive (HDD) and at least one solid state disk drive (SSD).
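As a quick sanity check, a short PowerCLI query (a sketch, assuming the lab's cluster name) can confirm the host count and ESXi version before you enable vSAN:

# Confirm the cluster has at least 3 hosts running ESXi 6.5 U1 or later
Get-Cluster -Name 'RegionA01-COMP01' | Get-VMHost |
    Select-Object Name, ConnectionState, Version, Build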

 

 

Disk and Network

IMPORTANT : All components ( hardware, drivers, firmware ) must be listed on the vSphere Compatibility Guide for vSAN. All other configurations are unsupported.

One SAS or SATA host bus adapter (HBA), or a RAID controller that is in passthrough or RAID 0 mode.

Hybrid disk group configuration : At least one flash cache device, and one or more SAS, NL-SAS or SATA magnetic disks.

All-flash disk group configuration : One SAS or SATA solid state disk (SSD) or PCIe flash device used for caching, and one or more flash devices used for capacity.

In a vSAN 6.5 hybrid cluster, the SSD provides both a write buffer (30%) and a read cache (70%). The more SSD capacity in the host, the greater the performance, since more I/O can be cached.

In a vSAN all-flash cluster, 100% of the cache is allocated for writes; read performance from the capacity flash is more than sufficient.

Not every node in a vSAN cluster needs to have local storage although a balanced configuration is recommended. Hosts with no local storage can still leverage the distributed vSAN datastore.

Each host must have a minimum network bandwidth dedicated to vSAN: 1 GbE for hybrid configurations, 10 GbE for all-flash configurations.

A Distributed Switch can be optionally configured between all hosts in the vSAN cluster, although VMware Standard Switches (VSS) will also work.

A vSAN VMkernel port must be configured for each host. With a Distributed Switch, Network I/O Control can also be enabled to dedicate bandwidth to the vSAN network.

For vSAN 6.6 to use unicast networking mode, all ESXi hosts must be upgraded to vSAN 6.6 and the on-disk format must be upgraded to version 5.

Version 6.2 and later of vSAN support IPv4-only configurations, IPv6-only configurations, and also configurations where both IPv4 & IPv6 are enabled. This addresses requirements for customers moving to IPv6 and, additionally, supports mixed mode for migrations.

The VMkernel port is labeled vSAN. This port is used for intra-cluster node communication and for reads and writes when one of the vSphere hosts in the cluster owns a particular virtual machine but the actual data blocks making up the virtual machine files are located on a different vSphere host in the cluster. In this case, I/O must traverse the network configured between the hosts in the cluster.
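To confirm which VMkernel adapters carry vSAN traffic, a one-liner like the following can help (a sketch, assuming the lab cluster name; VsanTrafficEnabled is the adapter property PowerCLI exposes for this tag):

# List the vSAN-tagged VMkernel adapters on every host in the cluster
Get-Cluster -Name 'RegionA01-COMP01' | Get-VMHost |
    Get-VMHostNetworkAdapter -VMKernel |
    Where-Object { $_.VsanTrafficEnabled } |
    Select-Object VMHost, Name, IP, SubnetMask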

 

Prepare VMware vSAN Cluster


To use vSAN, you must create a host cluster and enable vSAN on the cluster. A vSAN cluster can include hosts with capacity and hosts without capacity.


 

Open Chrome Browser from Windows Quick Launch TaskBar

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Login to vSphere Web Client

 

  1. On the vSphere Web Client login screen, select "Use Windows session authentication"
  2. Click Login

 

 

vSphere Web Client

 

You will be presented with the vSphere Web Client Home page.

To minimize or maximize the Recent Tasks, Alarms or Work In Progress panes, click the pin.

If the Home page is not the initial screen that appears, select Home from the top menu in the vSphere Web Client.

  1. Select Hosts and Clusters

 

 

Enable vSAN

 

In your lab environment, vSAN is currently disabled. In this lesson we will show you how to enable or turn on vSAN in a few easy steps.

A quick note about the Lab environment : The Cluster called RegionA01-COMP01 currently contains 3 ESXi hosts that will contribute storage in the form of cache and capacity to form the vSAN datastore.

  1. Select RegionA01-COMP01
  2. Select Configure
  3. Select vSAN > General
  4. Select Configure

 

 

Configure vSAN

 

 

  1. Enable Deduplication and Compression
  2. Select Allow Reduced Redundancy.

With Allow Reduced Redundancy selected, vSAN is able to reduce the protection level of your VMs, if needed, while Deduplication and Compression are being enabled. This option only comes into play if your setup is at the limit of the protection level configured by the Storage Policy of a specific VM.

  1. In the Fault Domains and Stretched Cluster section, verify Do not configure is selected

Click Next
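The same enablement can also be scripted. Below is a minimal PowerCLI sketch, assuming the vSAN cmdlets from PowerCLI 6.5 R1; the SpaceEfficiencyEnabled and AllowReducedRedundancy parameter names are our assumption for the Deduplication and Compression options shown in this wizard step:

# Enable vSAN on the cluster with manual disk claiming (parameter names assumed)
$cluster = Get-Cluster -Name 'RegionA01-COMP01'
Set-Cluster -Cluster $cluster -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false
# Turn on Deduplication and Compression, allowing reduced redundancy during the conversion
Set-VsanClusterConfiguration -Configuration $cluster `
    -SpaceEfficiencyEnabled $true -AllowReducedRedundancy $true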

 

 

Network Validation

 

Checks are performed to verify that VMkernel adapters are configured and that the vSAN network service is enabled.

Click Next

 

 

Claim Disks

 

Select which disks should be claimed for Cache and which for Capacity in the vSAN cluster.

The disks are grouped by model and size or by host. The recommended selection has been made based on the available devices in your environment.

You can expand the lists of the disks for individual disk selection.

The number of capacity disks must be greater than or equal to the number of cache disks claimed per host.

Click Next

 

 

Ready to Complete

 

Review and verify your selection.

  1. Here we can determine that we will create a vSAN datastore with a capacity of 60 GB.

The vSAN datastore uses the Capacity disks for the vSAN datastore capacity; the Caching disks are not taken into account.

  1. This is an All Flash vSAN Cluster, where both the Cache and Capacity disks are SSD/Flash disks.

Click Finish

 

 

Refresh Display

 

Virtual SAN is now enabled.

Click the Refresh icon to see the changes. ( If you see Misconfiguration detected you may need to Refresh a couple of times )

After the refresh you should see all 3 hosts in the vSAN cluster

 

 

Recent Tasks

 

You can review the Tasks that were carried out by opening the Recent Tasks in the vSphere Web Client.

These tasks consist of Creating the VSAN cluster, Creating the Disk Groups and adding the Disks to the Disk Groups.

 

 

Virtual SAN - Disk Management

 

  1. Select RegionA01-COMP01
  2. Select Configure
  3. Select vSAN -> Disk Management

The vSAN Disk Groups on each of the ESXi hosts are listed.

You may have to scroll down through the list to see all the disk groups.

Towards the lower part of the screen, you can see the Drive Types and the Disk Tier that make up these disk groups.

To summarize :

 

 

vSAN Datastore Properties

 

Once you have formed the vSAN Cluster, a vsanDatastore is automatically created.

  1. To see the capacity navigate to Datastores View
  2. Select vsanDatastore
  3. Select Configure
  4. Select General

The capacity shown is an aggregate of the capacity devices taken from each of the ESXi hosts in the cluster (less some vSAN overhead - in vSAN 6.5 overhead is 1% of physical disk capacity + deduplication metadata which is highly variable and will depend on the data set stored in the vSAN datastore).

The flash devices used as cache are not considered when the capacity calculation is made.
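The same numbers can be pulled with PowerCLI's Get-VsanSpaceUsage cmdlet (available since PowerCLI 6.5 R1); a minimal sketch using the lab's cluster name:

# Report used and free space on the cluster's vSAN datastore
Get-VsanSpaceUsage -Cluster 'RegionA01-COMP01'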

 

 

Verify Storage Provider Status

 

For each ESXi host to be aware of the capabilities of vSAN and to communicate between vCenter and the storage layer a Storage Provider is created. Each ESXi host has a storage provider once the vSAN cluster is formed.

The storage providers will be registered automatically with SMS (Storage Management Service) by vCenter. However, it is best to verify that the storage provider on one of the ESXi hosts has successfully registered and is active, and that the storage providers from the remaining ESXi hosts in the cluster are registered and in standby mode.

  1. Navigate to the vcsa-01a.corp.local
  2. Select Configure
  3. Select More > Storage Providers to check the status.
  4. Use the Filter to display the vSAN Providers
  5. In this three-node cluster, one of the vSAN providers is Online and Active, while the others are in Standby. Each ESXi host participating in the Virtual SAN cluster will have a provider, but only one needs to be active to provide vSAN datastore capability information.

Should the active provider fail for some reason one of the standby storage providers will take over.

This completes Lesson 3.

 

Conclusion


In this module, we showed you how to enable vSAN in just a few clicks. As part of the vSAN enablement, we verified that vSAN Network was configured correctly, we demonstrated how to select the disks for the vSAN Disk Groups and we also enabled Deduplication and Compression on the vSAN Datastore.


 

You've finished Module 1

Congratulations on completing Module 1!

Proceed to any module below which interests you most.

Module 2 - vSAN Scale Out with Configuration Assist (30 Minutes, Beginner)

Module 2 explores how to scale out your vSAN environment with the Configuration assist.

Module 3 - Virtual SAN All Flash Capabilities (30 Minutes, Beginner)

Module 3 explores all flash array capabilities along with storage policy based management integration with Virtual SAN

Module 4 - iSCSI Target (30 minutes, Beginner)

Module 4 demonstrates the iSCSI feature in the v6.6.1 release and the use cases. This feature provides block storage for physical and virtual workloads using the iSCSI protocol.

Module 5 - vSAN Encryption (45 Minutes, Intermediate)

Module 5 explores how to add a Key Management Server and how to enable vSAN Encryption.

Module 6 - vSAN PowerCLI /esxcli (45 Minutes, Intermediate)

Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been introduced to help automate, manage and monitor VMware Virtual SAN Environments.

Module 7 - 2-Node Direct Connect (30 minutes, Advanced)

Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness Traffic Separation.

 

 

How to End Lab

 

If you would like to end your lab click on the END button.

 

Module 2 - vSAN Scale Out with Configuration Assist (30 Minutes, Beginner)

Introduction


The vSAN Setup Wizard takes care of specific tasks when setting up a vSAN cluster: configuration settings like Deduplication & Compression (as well as Encryption in vSAN 6.6), whether or not the cluster is 2 Node/Stretched, and claiming disks. What about some of the normal vSAN recommendations?

Some of the normal vSAN recommendations/checks that are not configured as part of the vSAN cluster wizard include:

To configure each of these, tasks must be performed in different parts of the vSphere Web Client. Configuration Assist allows these to be done from a single location in the UI.

Previously, configuring VMkernel interfaces for vSAN or vMotion traffic required creating them individually on each host or through the vSphere Distributed Switch wizard. They are now part of Configuration Assist.

This Module contains the following lessons:


 

Lab Preparation

If you have completed Module 1 by following the steps as outlined, you can skip the next few steps that prepare your environment for this lesson.

Click here to go to the lesson.

If you have skipped to this module, we will use our Module Switcher PowerCLI Application to prepare the environment.

 

 

Module Switcher

 

Double-Click the Module Switcher Desktop Shortcut called HOL-1808-HCI

 

 

Module 2 Start

 

  1. Click the Module 2 Start button

 

 

Module 2 Progress

 

Monitor Progress until Complete.

• Press Enter to continue (and close the PowerCLI Window)

 

 

Lab Prep Complete

 

Your Lab has been successfully prepared for Module 2 !

  1. Click Window Close to safely stop the Module Switcher

Please note that you cannot 'go back' and take Modules prior to the one you are currently in unless you end the lab and start it over again.

For example: If you Start Module 4, you cannot use the Module Switcher to Start Labs 1, 2 or 3.

 

VSAN Cluster Capacity Scale Out and vSAN Config Assist


You can use Configuration Assist to check the configuration of your Virtual SAN cluster, and resolve any issues.

Virtual SAN Configuration Assist enables you to verify the configuration of cluster components, resolve issues, and troubleshoot problems. The configuration checks cover hardware compatibility, network, and Virtual SAN configuration options.


 

Verify current vSAN Capacity

 

Adding hosts to the vSAN cluster is quite straightforward. Of course, you must ensure that the host meets vSAN requirements or recommendations, such as a 1 GbE dedicated network interface card (NIC) port (10 GbE recommended) and, if the host is to provide additional storage capacity, at least one cache tier device and one or more capacity tier devices. Also, pre-configuration steps such as creating a VMkernel port for vSAN communication should be considered, although these can be done after the host is added to the cluster.

Switch back to the "Hosts & Clusters" view in the vSphere Web Client Navigator pane

  1. Select the cluster called RegionA01-COMP01
  2. Click Summary
  3. Expand vSAN Capacity

Our current VSAN datastore capacity is ~60 GB.
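If you want to capture this value before and after the scale-out, a quick PowerCLI check (a sketch, assuming the datastore keeps its default vsanDatastore name) is:

# Record the vSAN datastore capacity before adding the new host
Get-Datastore -Name 'vsanDatastore' |
    Select-Object Name, CapacityGB, FreeSpaceGB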

 

 

Verify Storage Devices

 

If ESXi does not automatically recognize its devices as flash/SSD, you can mark them as flash/SSD devices. ESXi does not recognize certain devices as flash when their vendors do not support automatic flash disk detection. The Drive Type column for the devices shows HDD as their type.

Note: Marking HDD disks as flash disks could degrade the performance of datastores and services that use them. Mark disks as flash disks only if you are certain that those disks are flash disks.

  1. Select esx-04a.corp.local
  2. Select Configure
  3. Select Storage -> Storage Devices

In the Storage Devices list, you can see the disks that will contribute storage to the vSAN datastore.

Although this is an All Flash VSAN Cluster, we still need one SSD disk for Cache and at least one SSD disk for Capacity.

 

NOTE: You can ignore the Detached Disks, we will use these in another lab.

 

 

Add Additional nodes to cluster

 

We are now going to add the esx-04a.corp.local to the vSAN Cluster.

Drag and drop esx-04a.corp.local into RegionA01-COMP01 cluster

If the Drag and Drop does not seem to be working for you, right click the ESXi host called esx-04a.corp.local and select Move to .... Select the cluster called RegionA01-COMP01.

 

 

Move Host into Cluster

 

If there are Virtual Machines running on the ESXi host, you may see the following message. If this screen does not appear, move to the next step.

  1. Select the default option "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the hosts will be deleted."

Click OK

You may see warning messages against the ESXi hosts already in the Cluster, but these messages will self-heal after a while.

 

 

Take Host out of Maintenance Mode

 

The ESXi host is still in Maintenance Mode.

  1. Right Click the ESXi host called esx-04a.corp.local
  2. Select Maintenance Mode
  3. Select Exit Maintenance Mode

If the Exit Maintenance Mode option is not instantly available, you may have to wait a little while or Refresh the vSphere Web Client.
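Both of these steps, moving the host into the cluster and taking it out of maintenance mode, can also be done with PowerCLI; a minimal sketch using the lab's names:

# Move the host into the vSAN cluster, then exit maintenance mode
$cluster = Get-Cluster -Name 'RegionA01-COMP01'
$vmhost  = Get-VMHost -Name 'esx-04a.corp.local'
Move-VMHost -VMHost $vmhost -Destination $cluster
# Returning the host to the Connected state exits maintenance mode
Set-VMHost -VMHost $vmhost -State Connected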

 

 

vSAN Configuration Assistant

The vSAN Setup Wizard takes care of specific tasks when setting up a vSAN cluster: configuration settings like Deduplication & Compression (as well as Encryption in vSAN 6.6), whether or not the cluster is 2 Node/Stretched, and claiming disks. What about some of the normal vSAN recommendations?

Some of the normal vSAN recommendations/checks that are not configured as part of the vSAN cluster wizard include:

To configure each of these, tasks must be performed in different parts of the vSphere Web Client. Configuration Assist allows these to be done from a single location in the UI.

Previously, configuring VMkernel interfaces for vSAN or vMotion traffic required creating them individually on each host or through the vSphere Distributed Switch wizard. They are now part of Configuration Assist.

 

 

vSAN Configuration Assistant

 

To access the vSAN Configuration Assistant,

  1. Select the cluster called RegionA01-COMP01
  2. Select Configure
  3. Select vSAN -> Configuration Assist
  4. Click the Retest button, to run the configuration assistant and retrieve the latest status of the vSAN Cluster.

 

 

Configure Networking (1)

 

We can see that the Network Configuration check has failed, specifically for the ESXi host esx-04a.corp.local.

This is the ESXi host that we have just added to the vSAN Cluster.

  1.  Select the cluster called RegionA01-COMP01
  2. Select Configure
  3. Select vSAN -> Configuration Assist
  4. Expand Network Configuration and select All hosts have a vSAN vmknic configured

At the bottom of the screen, you will see the affected host (esx-04a.corp.local). We will now configure the vSAN vmknic from the vSAN Configuration Assistant.

  1. Click Create VMkernel Network Adapter

 

 

Configure Networking (2)

 

Here is the list of the ESXi hosts in our vSAN Cluster.

  1. Deselect the ESXi hosts esx-01a.corp.local, esx-02a.corp.local and esx-03a.corp.local. They already have valid vSAN VMkernel ports configured.
  2. You will notice that the ESXi host called esx-04a.corp.local does not have a VMkernel port for the vSAN Network.

Click Next

If the Next button is not available on the bottom of the screen, double click the Grey bar of the dialog box and it should appear.

 

 

Configure Networking (3)

 

  1. Leave the default Distributed switch selected ( RegionA01-vDS-Comp )
  2. Leave the vSAN traffic enabled.

Click Next

 

 

Configure Networking (4)

 

  1. Change the IP settings to Static IPv4 settings
  2. Use the following information for the static IPv4 Address and Subnet Mask.
IPv4 Address : 192.168.130.54
Subnet Mask : 255.255.255.0

Click Next
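For reference, the same VMkernel adapter can be created with PowerCLI's New-VMHostNetworkAdapter cmdlet against the lab's distributed switch. This is a sketch; the port group name below is a placeholder for whichever vSAN port group exists on RegionA01-vDS-Comp:

$vmhost = Get-VMHost -Name 'esx-04a.corp.local'
$vds    = Get-VDSwitch -Name 'RegionA01-vDS-Comp'
# 'vSAN-PortGroup' is a placeholder; substitute the lab's actual vSAN port group name
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup 'vSAN-PortGroup' `
    -IP 192.168.130.54 -SubnetMask 255.255.255.0 -VsanTrafficEnabled:$true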

 

 

 

Configure Networking (5)

 

Verify the settings.

Click Finish

 

 

Configure Networking (6)

 

After a few moments, once the vSAN Network has been configured on the ESXi host esx-04a.corp.local, the configuration alert will turn green.

You can also manually run the test by clicking on the Retest button.

 

 

Configure Storage (1)

 

Now that we have configured the network, we need to turn our attention to the Storage.

Claiming disks was accomplished via the vSAN cluster wizard, but only upon initial setup. Adding additional disks required manual intervention.

Configuration Assist will show all disks that have not been claimed, even after hosts have been added to an existing cluster.

  1.  Select the cluster called RegionA01-COMP01
  2. Select Configure
  3. Select vSAN -> Configuration Assist
  4. Expand the vSAN Configuration
  5. Select All disks claimed
  6. A list of the Disks not yet claimed for vSAN use will be listed.
  7. Select Claim Disks for vSAN

 

 

Configure Storage (2)

 

 

  1. For Group by: select Host
  2. esx-01a.corp.local, esx-02a.corp.local and esx-03a.corp.local are already contributing storage to the vSAN Datastore. For these 3 ESXi hosts, leave as Do not claim
  3. Verify that ESXi host esx-04a.corp.local will contribute storage to the vSAN Cluster.

Click OK
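Behind the scenes, this creates a disk group on the new host. Scripted with PowerCLI, the equivalent is roughly the sketch below; the device canonical names are placeholders you would read from Get-ScsiLun first:

# List the host's local disks to find the cache and capacity device names
$vmhost = Get-VMHost -Name 'esx-04a.corp.local'
Get-ScsiLun -VmHost $vmhost -LunType disk | Select-Object CanonicalName, CapacityGB, IsSsd
# Placeholders: substitute the canonical names reported above
New-VsanDiskGroup -VMHost $vmhost -SsdCanonicalName 'naa.cache-device' `
    -DataDiskCanonicalName 'naa.capacity-device'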

 

 

Configure Storage (3)

 

You can monitor the Disk Group creation task in the Recent Tasks

 

 

 

Configure Storage (4)

 

  1. The All disks claimed configuration assist test should now turn green
  2. All Disks should now be claimed on all ESXi hosts in the vSAN Cluster

 

 

 

Configure Storage (5)

 

  1. Select RegionA01-COMP01
  2. Select Configure
  3. Select vSAN -> Disk Management

The vSAN Disk Groups on each of the ESXi hosts are listed.

  1. Verify that the Disk Group has been created on the ESXi host called esx-04a.corp.local

 

 

Configuration Assist

 

Configuring settings like vSphere HA/DRS is also accomplished from the Configuration Assist UI.

Configuration Assist even allows updating tools and firmware for storage controllers from some OEM vendors.

Before Configuration Assist, customers would often have to update firmware out of band, often through remote consoles or through custom processes.

 

Conclusion


Configuration Assist is a great new feature of vSAN 6.6 that provides a central location for initial and ongoing vSAN Cluster configuration tasks.

The ability to make changes both to configuration settings and to controller firmware provides a more uniform and consistent management experience.

Congratulations on completing Module 2

Proceed to any module below which interests you most.

Module 3 - Virtual SAN All Flash Capabilities (30 Minutes, Beginner)

Module 3 explores all flash array capabilities along with storage policy based management integration with Virtual SAN

Module 4 - iSCSI Target (30 minutes, Beginner)

Module 4 demonstrates the iSCSI feature in the v6.6.1 release and the use cases. This feature provides block storage for physical and virtual workloads using the iSCSI protocol.

Module 5 - vSAN Encryption (45 Minutes, Intermediate)

Module 5 explores how to add a Key Management Server and how to enable vSAN Encryption.

Module 6 - vSAN PowerCLI /esxcli (45 Minutes, Intermediate)

Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been introduced to help automate, manage and monitor VMware Virtual SAN Environments.

Module 7 - 2-Node Direct Connect (30 minutes, Advanced)

Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness Traffic Separation.


 

How to End Lab

 

If you would like to end your lab click on the END button.

 

Module 3 - vSAN All Flash Capabilities (30 Minutes, Beginner)

Introduction


In this module we will take a look at some of the VMware vSAN all-flash features enabled through storage policy based management. More specifically, this module concentrates on the failure tolerance method, which specifies whether the data replication method optimizes for Performance or Capacity. The Number of failures to tolerate setting has an important role when we plan and size storage capacity for vSAN.

RAID 5 or RAID 6 erasure coding enables vSAN to tolerate the failure of up to two capacity devices in the datastore. You can configure RAID 5 (requires 4 ESXi hosts) or RAID 6 (requires 6 ESXi hosts) on all-flash clusters; RAID 5 requires four or more fault domains. RAID 5 or RAID 6 erasure coding requires less additional capacity to protect your data than RAID 1 mirroring. For example, a VM protected by a Number of failures to tolerate value of 1 with RAID 1 requires twice the virtual disk size, but with RAID 5 it requires only 1.33 times the virtual disk size.

This module also discusses VM swap objects.  SwapThickProvisionDisabled was created to allow the VM swap option to be provisioned as a thin object. If this advanced setting is set to true, the VM swap objects will be thinly provisioned.

This Module contains the following lessons:


 

Lab Preparation

If you have completed the previous modules by following the steps as outlined, you can skip the next few steps that prepare your environment for this lesson.

Click here to go to the lesson.

If you have skipped to this module, we will use our Module Switcher PowerCLI Application to prepare the environment.

 

 

Module Switcher

 

Double-Click the Module Switcher Desktop Shortcut

 

 

Module 3 Start

 

  1. Click the Module 3 Start button

This Startup Routine can take a few minutes to complete - thank you for your patience !

 

 

Monitor Progress

 

Monitor Progress until Complete.

• Press Enter to continue (and close the PowerCLI Window)

 

 

Lab Prep Complete

 

Your Lab has been successfully prepared for Module 3 !

  1. Click Window Close to safely stop the Module Switcher

Please Note that you cannot 'go back' and take Modules prior to the one you are  currently in unless you end the lab and start it over again (for example: If you Start Module 4, you cannot use the Module Switcher to Start Labs 1, 2 or 3).

 

Storage Policy Based Management - Raid 5/6


Failure Tolerance Method is a capability introduced in vSAN 6.2 and is how administrators can choose either RAID-1 or RAID-5/6 configuration for their virtual machine objects.

The failure tolerance method is used in conjunction with number of failures to tolerate. The purpose of this setting is to allow administrators to choose between performance and capacity. If performance is the absolute end goal for administrators, then RAID-1 (which is still the default) is the tolerance method that should be used.

If administrators do not need maximum performance and are more concerned with capacity usage, then RAID-5/6 is the tolerance method that should be used.

The easiest way to explain the behavior is to display the various policy settings and the resulting object configuration.


 

Storage Policy Based Management

 

Here is a brief description of each of the storage policy rules.

Number of disk stripes per object - The number of capacity devices across which each replica of a virtual machine object is striped. A value higher than 1 might result in better performance, but also results in higher use of system resources.

Flash read cache reservation - Flash capacity reserved as read cache for the virtual machine object. Specified as a percentage of the logical size of the virtual machine disk (vmdk) object. Reserved flash capacity cannot be used by other objects. Unreserved flash is shared fairly among all objects. This option should be used only to address specific performance issues.

Primary level failures to tolerate - For non-stretched clusters, defines the number of disk, host, or fault domain failures a storage object can tolerate. For n failures tolerated, n+1 copies of the virtual machine object are created and 2*n+1 hosts contributing storage are required.

Force provisioning - If the option is set to Yes, the object will be provisioned even if the policy specified in the storage policy is not satisfiable by the datastore. Use this parameter in bootstrapping scenarios and during an outage when standard provisioning is no longer possible.

Object space reservation - Percentage of the logical size of the virtual machine disk (vmdk) object that should be reserved, or thick provisioned when deploying virtual machines.

Disable object checksum - If the option is set to No, the object calculates checksum information to ensure the integrity of its data. If this option is set to Yes, the object will not calculate checksum information. Checksums ensure the integrity of data by confirming that each copy of a file is exactly the same as the source file. If a checksum mismatch is detected, Virtual SAN automatically repairs the data by overwriting the incorrect data with the correct data.

Failure tolerance method - Specifies whether the data replication method optimizes for Performance or Capacity. If you choose Performance, Virtual SAN uses more disk space to place the components of objects but provides better performance for accessing the objects. If you select Capacity, Virtual SAN uses less disk space, but reduces the performance.

IOPS limit for object - Defines the IOPS limit for a disk. IOPS is calculated as the number of IO operations, using a weighted size. If the system uses the default base size of 32KB, then a 64KB IO represents two IO operations. When calculating IOPS, read and write are considered equivalent, while cache hit ratio and sequentiality are not considered. If a disk’s IOPS exceeds the limit, IO operations will be throttled. If the IOPS limit for object is set to 0, IOPS limits are not enforced.

 

 

Storage Policy Based Management - Raid 5/6 (Erasure coding)

 

Note that there is a requirement on the number of hosts needed to implement RAID-5 or RAID-6 configurations on vSAN.

For RAID-5, a minimum of 4 hosts are required; for RAID-6, a minimum of 6 hosts are required.

The objects are then deployed across the storage on each of the hosts, along with a parity calculation. The configuration uses distributed parity, so there is no dedicated parity disk. When a failure occurs in the cluster, and it impacts the objects that were deployed using RAID-5 or RAID-6, the data is still available and can be calculated using the remaining data and parity if necessary.

A new policy setting has been introduced to accommodate the new RAID-5/RAID-6 configurations.

This new policy setting is called Failure Tolerance Method. This policy setting takes two values: performance and capacity. When it is left at the default value of performance, objects continue to be deployed with a RAID-1/mirror configuration for the best performance. When the setting is changed to capacity, objects are now deployed with either a RAID-5 or RAID-6 configuration.

The RAID-5 or RAID-6 configuration is determined by the number of failures to tolerate setting. If this is set to 1, the configuration is RAID-5. If this is set to 2, then the configuration is a RAID-6.

 

 

Storage Policy Based Management - Raid 5/6 (Erasure coding)

 

First we need to create a VM Storage Policy that will define the Failure Tolerance method of Raid 5/6.

  1. From the Home page of the vSphere Web Client
  2. Select Policies and Profiles

 

 

Storage Policy Based Management - Raid 5/6 (Erasure coding )

 

  1. Select VM Storage Policies
  2. Select Create a New VM Storage policy

 

 

Storage Policy Based Management - Raid 5/6 (Erasure coding )

 

Create a new VM Storage Policy using the following name :

PFTT=1-Raid5

 

Click Next

Click Next on the Policy structure information page

Click Next on the 2a Common Rules

 

 

Storage Policy Based Management - Raid 5/6 (Erasure coding)

 

Create a new Rule-Set using the following information :

Select VSAN as the Storage Type and add rules for the Primary level of failures to tolerate and the Failure tolerance method.

Storage Type: VSAN
Rule 1 : Primary level of failures to tolerate = 1
Rule 2 : Failure tolerance method = Raid-5/6 (Erasure Coding) - Capacity

Before you click next, check out the following :

Change the Failure tolerance method = RAID-1 (Mirroring) - Performance

Review the Storage Consumption Model on the right hand side of the screen. Notice that the Storage space that would be used would be 200 GB based on a virtual disk of 100 GB.

Now change the Failure tolerance method = Raid-5/6 (Erasure Coding) - Capacity and you will see that the Storage space will now be reduced to 133 GB.

Click Next
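The same policy can be created with PowerCLI's SPBM cmdlets. A minimal sketch, assuming the vSAN capability names VSAN.hostFailuresToTolerate and VSAN.replicaPreference (capability names are version-dependent; verify them with Get-SpbmCapability in your lab):

# Build the two rules from the policy above (capability names assumed)
$ftt    = New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.hostFailuresToTolerate') -Value 1
$method = New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.replicaPreference') `
    -Value 'RAID-5/6 (Erasure Coding) - Capacity'
# Combine both rules into one rule set and create the policy
$ruleSet = New-SpbmRuleSet -AllOfRules @($ftt, $method)
New-SpbmStoragePolicy -Name 'PFTT=1-Raid5' -AnyOfRuleSets $ruleSet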

 

 

Storage Policy Based Management

 

The Storage compatibility will be determined based on the VM Storage Policy.

Here we can see that the vsanDatastore is compatible with the VM Storage Policy that we are about to create.

Depending on how many vSAN Disk Groups you created on each ESXi host in your vSAN Cluster, the Total Capacity of the vSAN Datastore may be different.

Click Next

 

 

Storage Policy Based Management - Raid 5/6 (Erasure coding)

 

Review the VM Storage Policy Settings

Click Finish

 

 

Storage Policy Based Management - Raid 5/6 (Erasure coding)

 

  1. Select PFTT=1-Raid5
  2. Select Manage
  3. Select Rule-Set-1:VSAN

Here we can see the rules that make up our VM Storage Policy.

 

 

Virtual SAN Capacity - Raid 5/6 (Erasure coding )

 

  1. Select Home -> Hosts and Clusters
  2. Select RegionA01-COMP01
  3. Select Monitor
  4. Select vSAN
  5. Select Capacity

Make a note of the capacity figures here. ( Pretty much our vsanDatastore is empty )

Depending on how many vSAN Disk Groups you created on each ESXi host in your vSAN Cluster, the Total Capacity of the vSAN Datastore may be different.

 

 

Clone VM to VSAN datastore - Raid 5/6 (Erasure coding )

 

We will clone the VM called core-A ( which currently resides on a Local VMFS datastore ) to the vSAN Datastore and apply the VM Storage Policy ( PFTT=1-Raid5 ) that we have just created.

  1. Expand the ESXi host called esx-07a.corp.local and right click the VM called core-A
  2. Select Clone
  3. Select Clone to Virtual Machine

 

 

Clone VM to VSAN datastore - Raid 5/6 (Erasure coding )

 

Give the Virtual Machine a name called :

PFTT=1-Raid5

Click Next

 

 

Clone VM to VSAN datastore - Raid 5/6 (Erasure coding )

 

Expand the Compute resource called RegionA01-COMP01

Select the ESXi host called esx-01a.corp.local

Click Next

 

 

Clone VM to VSAN datastore - Raid 5/6 (Erasure coding )

 

  1. For the VM Storage Policy, select PFTT=1-Raid5

The resulting list of compatible datastores will be presented, in our case the vsanDatastore

In the lower section of the screen we can see that the Virtual SAN storage consumption would be 133.33 MB disk space and 0.00 B reserved Flash space.

Since we have a VM with a 100 MB disk and a Raid 5 VM Storage Policy, the vSAN disk consumption will be 133.33 MB (RAID 5 writes three data components plus one parity component, so capacity consumed is 4/3 of the virtual disk size).

Click Next

Click Next on the Select clone options

 

 

Clone VM to VSAN datastore - Raid 5/6 (Erasure coding )

 

Click Finish

Wait for the Clone operation to complete.

Check the Recent Tasks for a status update on the Clone virtual machine task.

 

 

Clone VM to VSAN datastore - Raid 5/6 (Erasure coding )

 

Once the clone operation has completed,

  1. Select the VM called PFTT=1-Raid5
  2. Select Summary
  3. Select Related Objects

The VM is now residing on the vsanDatastore

  1. Select VM Storage Policies

 

 

 

Disk Policies - FTT=1 Raid 5

 

  1. Select the VM PFTT=1-Raid5
  2. Select Monitor
  3. Select Policies
  4. Select Hard Disk 1
  5. Select Physical Disk Placement

Notice with this VM Storage Policy, we have a Raid 5 disk placement, made up of 4 Components.

There is one component residing on each host in the Cluster.

 

 

Virtual SAN Capacity - Raid 5/6 (Erasure coding )

 

To allow administrators to track where the storage consumption is occurring a Capacity View is available.

  1. Select RegionA01-COMP01
  2. Select Monitor
  3. Select vSAN
  4. Select Capacity

If we focus on the Capacity Overview first of all, we can see the full size of the VSAN datastore. This is approximately 80 GB in size. We can also see Deduplication and compression overhead.

The amount of space Used – Total on the VSAN datastore refers to how much space has been physically written (as opposed to logical size). This is a combination of Virtual disks, VM home objects, Swap objects, Performance management objects and Other items that may reside on the datastore. Other items could be ISO images, unregistered VMs, or templates, for example.

The Deduplication and Compression overview on the top right gives administrators an idea around the space savings and deduplication ratio that is being achieved, as well as the amount of space that might be required if an administrator decided that they wanted to disable the space efficiency features on VSAN and re-inflate any deduplicated and compressed objects.

The space savings ratio increases with the more “similar” VMs that are deployed.

This is telling us that without deduplication and compression, it would have required ~11 GB of capacity to deploy the current workloads. With deduplication and compression, we’ve achieved it with ~3.6 GB. (The Capacity values in your lab environment may be different.)

 

 

Virtual SAN Capacity - Raid 5/6 (Erasure coding )

 

Towards the bottom of the Capacity Screen, we will get a breakdown of the objects.

  1. Select RegionA01-COMP01
  2. Select Monitor
  3. Select vSAN
  4. Select Capacity

Group by Object Types:

Performance management objects: Capacity consumed by objects created for storing performance metrics when Performance service is enabled

File system overhead: Any overhead taken up by the on-disk file system (VirstoFS) on the capacity drives, which is neither attributed to deduplication, compression or checksum overhead. When deduplication and compression is enabled, file system overhead is increased 10X to reflect the increase in the logical size of the vSAN datastore.

Deduplication and compression overhead: Overhead incurred to get the benefits of deduplication and compression. This includes the associated mapping table, hash tables, and other mechanisms required for deduplication and compression.

Checksum overhead: Overhead to store all the checksums. When deduplication and compression are enabled, checksum overhead is increased 10X to reflect the increase in the logical size of the VSAN datastore. When a VM and a template are deployed on the vSAN datastore, more objects appear:

Virtual disks: Capacity consumed by Virtual machine disks (VMDKs) objects that reside on the vSAN datastore

VM home objects: Capacity consumed by VM home namespace objects (containing virtual machine files) residing on the vSAN datastore

Swap objects: Capacity consumed by VM Swap space that reside on the vSAN datastore when a Virtual Machine is powered on.

Vmem: Capacity consumed by memory objects, created as a result of taking a snapshot of the VM which included the VM memory, or from suspended virtual machines. Note that this will only be visible on VMs that are using a minimum of Virtual Hardware V10.

Other: Capacity consumed by VM templates, unregistered VMs, standalone VMDKs not associated with VMs, manually created vSAN objects, manually created directories that store ISOs for example.

 

 

Implement Raid 6 - Disk Policies

 

Your Lab environment is currently running a 4 Node vSAN Cluster. To implement Raid 6, you would require a minimum of 6 hosts in the vSAN Cluster.

The VM Storage Policy will have a Failure Tolerance Method of Raid 5/6 - ( Erasure Coding ) - Capacity and the Primary Level of failures to tolerate set to 2.

In a Raid-6 configuration, you will consume 1.5 times the storage assigned to the VM.

 

 

Implement Raid 6 - Disk Policies

 

Here is an example of a VM with a Raid 6 VM Storage Policy Configuration.

In the Raid 6 configuration, there are 6 components and they are spread out across the 6 ESXi hosts in the Cluster.

This is for example purposes only.

 

New Sparse VM Swap Object


The VM swap object also has its own special policy settings. For the VM swap object, its policy always has number of Failures to tolerate set to 1. The main reason for this is that swap does not need to persist when a virtual machine is restarted. Therefore, if vSphere High Availability restarts the virtual machine on another host elsewhere in the cluster, a new swap object is created. Therefore, there is no need to add additional protection above tolerating one failure.

By default, swap objects are provisioned 100% up front, without the need to set object space reservation to 100% in the policy. This means, in terms of admission control, vSAN will not deploy the VM unless there is enough disk space to accommodate the full size of the VM swap object. In vSAN 6.2, a new advanced host option SwapThickProvisionDisabled has been created to allow the VM swap option to be provisioned as a thin object. If this advanced setting is set to true, the VM swap objects will be thinly provisioned.


 

New Sparse VM Swap Object

 

To show this example, the only VM that we need powered on in our environment is the VM called PFTT=1-Raid5 that we created earlier. If the VM is powered-off, power it on now.

If you have other VM's running in the RegionA01-COMP01 cluster, power them off now.

In the VM called PFTT=1-Raid5, we can see that we have 256 MB memory assigned.

Note the ESXi host that the VM is running on, it may be different than shown here.

 

 

New Sparse VM Swap Object

 

Now switch to the Capacity View.

  1. Select RegionA01-COMP01
  2. Select Monitor
  3. Select vSAN
  4. Select Capacity

Scroll to the bottom of the CapacityView to the Used Capacity Breakdown section.

Here we can see the Swap Objects are taking around 548 MB

 

 

Power Off VM

 

  1. Right click the VM called PFTT=1-Raid5
  2. Select Power
  3. Select Power Off

 

 

New Sparse VM Swap Object

 

Now switch back to the Capacity View.

  1. Select RegionA01-COMP01
  2. Select Monitor
  3. Select vSAN
  4. Select Capacity

Scroll down to the Used Capacity Breakdown

As expected, there are no VM swap objects consuming space on the vSAN datastore, as the virtual machine is powered off.

 

 

New Sparse VM Swap Object

 

  1. Select the Virtual Machine called PFTT=1-Raid5
  2. Select Summary
  3. Make a note of the ESXi host that the VM is registered against.

 

 

New Sparse VM Swap Object

 

Open a PuTTY session to the ESXi host that the PFTT=1-Raid5 VM is registered on.

The first thing to note is that this advanced setting needs to be set on each ESXi host that is in the vSAN cluster.

In our environment, we will set it only on the ESXi Host that will run the VM.

Note : You can drag and drop the command from the manual or use the "send text" top menu option.

The setting is called SwapThickProvisionDisabled, and is disabled by default:

esxcfg-advcfg -g /VSAN/SwapThickProvisionDisabled

To enable it:

esxcfg-advcfg -s 1 /VSAN/SwapThickProvisionDisabled
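If you prefer not to use an SSH session, the same advanced option can be read and set from PowerCLI (the host name here is whichever ESXi host runs the VM; the option appears as VSAN.SwapThickProvisionDisabled):

# Read the current value (0 = thick swap objects, the default)
$vmhost = Get-VMHost -Name 'esx-01a.corp.local'
Get-AdvancedSetting -Entity $vmhost -Name 'VSAN.SwapThickProvisionDisabled'
# Enable sparse (thin) VM swap objects
Get-AdvancedSetting -Entity $vmhost -Name 'VSAN.SwapThickProvisionDisabled' |
    Set-AdvancedSetting -Value 1 -Confirm:$false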

 

 

Power On VM

 

  1. Right click the VM called PFTT=1-Raid5
  2. Select Power
  3. Select Power On

 

 

New Sparse VM Swap Object

 

Return to the Capacity View Screen.

  1. Select RegionA01-COMP01
  2. Select Monitor
  3. Select vSAN
  4. Select Capacity

Now we can see that the Swap objects are only consuming 36 MB of disk, instead of the original 584 MB.

This feature can provide a considerable space-saving on capacity space consumed.

This depends on how many VMs you have deployed, and how large the VM swap space is (essentially the size of unreserved memory assigned to the VM).

 

 

 

Disable Sparse VM Swap Object setting

 

Since we only enabled this vSAN advanced setting on one ESXi host, the vSAN Health Check will report this as vSAN Configuration out of sync.

We will disable this Advanced vSAN Configuration option again.

Return back to the PuTTY Session and run the following command to disable the setting :

esxcfg-advcfg -s 0 /VSAN/SwapThickProvisionDisabled

 

Conclusion


In this module we demonstrated some of the VM Storage Policies features that are in VMware vSAN 6.6.

We started by showing the Failure tolerance method where we could specify whether the data replication method optimizes for Performance or Capacity. If you choose Performance, vSAN uses more disk space to place the components of objects but provides better performance for accessing the objects. If you select Capacity, vSAN uses less disk space, but reduces the performance.

We then looked at the Sparse VM Swap Object. This new feature can provide a considerable space-saving on capacity space consumed, meaning the VM swap objects will be thinly provisioned.


 

You've finished Module 3

Congratulations on completing Module 3!

Proceed to any module below which interests you most.

Module 4 - iSCSI Target (30 minutes, Beginner)

Module 4 demonstrates the iSCSI feature in the v6.6.1 release and the use cases. This feature provides block storage for physical and virtual workloads using the iSCSI protocol.

Module 5 - vSAN Encryption (45 Minutes, Intermediate)

Module 5 explores how to add a Key Management Server and how to enable vSAN Encryption.

Module 6 - vSAN PowerCLI /esxcli (45 Minutes, Intermediate)

Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been introduced to help automate, manage and monitor VMware Virtual SAN Environments.

Module 7 - 2-Node Direct Connect (30 minutes, Advanced)

Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness Traffic Separation.

 

 

 

How to End Lab

 

If you would like to end your lab click on the END button.

 

Module 4 - vSAN iSCSI Target (30 Minutes, Beginner)

Introduction


iSCSI SANs use Ethernet connections between computer systems, or host servers, and high performance storage subsystems. The SAN components include iSCSI host bus adapters (HBAs) or Network Interface Cards (NICs) in the host servers, switches and routers that transport the storage traffic, cables, storage processors (SPs), and storage disk systems.

iSCSI SAN uses a client-server architecture. The client, called iSCSI initiator, operates on your host. It initiates iSCSI sessions by issuing SCSI commands and transmitting them, encapsulated into iSCSI protocol, to a server. The server is known as an iSCSI target. The iSCSI target represents a physical storage system on the network. It can also be provided by a virtual iSCSI SAN, for example, an iSCSI target emulator running in a virtual machine. The iSCSI target responds to the initiator's commands by transmitting required iSCSI data.

Specifically, "iSCSI targets on vSAN" are managed the same as other objects with Storage Policy Based Management (SPBM) which means functionality such as deduplication, compression, mirroring, and erasure coding can be utilized.

This Module contains the following lessons:

 

iSCSI Storage System Types

Active-active storage system

Allows access to the LUNs simultaneously through all the storage ports that are available without significant performance degradation. All the paths are active at all times, unless a path fails.

Active-passive storage system

A system in which one storage processor is actively providing access to a given LUN. The other processors act as backup for the LUN and can be actively providing access to other LUN I/O.

I/O can be successfully sent only to an active port for a given LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.

Asymmetrical storage system

Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide different levels of access per port. ALUA allows hosts to determine the states of target ports and prioritize paths. The host uses some of the active paths as primary and others as secondary.

Virtual port storage system

Allows access to all available LUNs through a single virtual port. These are active-active storage devices, but hide their multiple connections through a single port. ESXi multipathing does not make multiple connections from a specific port to the storage by default. Some storage vendors supply session managers to establish and manage multiple connections to their storage. These storage systems handle port failover and connection balancing transparently. This is often referred to as transparent failover.

 


 

iSCSI Target use cases

 

vSAN provides both enterprise-class scale and performance as well as new capabilities that broaden its applicability to a wide variety of use cases. vSAN is well suited to be the storage for all your VMs, and with vSAN iSCSI targets you can now expand this to physical workloads.

The most common use cases shared by our customers include

 

 

Lab Preparation

 

If you have completed the previous modules by following the steps as outlined, you can skip the next few steps that prepare your environment for this lesson.

Click here to go to the lesson.

If you have skipped to this module, we will use our Module Switcher PowerCLI Application to prepare the environment.

 

 

Module Switcher

 

If you have not completed the previous lessons, the Module Switcher is the way in which we prepare the lab environment for you to carry out the steps in this lesson.

  1. Launch the Module Switcher called HOL-1808-HCI on the Windows Desktop

 

 

Module 4 Start

 

Click the Module 4 Start button

This Startup Routine can take a few minutes to complete - thank you for your patience!

 

 

Monitor Progress

 

Monitor Progress until Complete.

• Press Enter to continue (and close the PowerCLI Window)

 

 

Lab Prep Complete

 

Your Lab has been successfully prepared for Module 4 !

  1. Click Window Close to safely stop the Module Switcher

Please note that you cannot 'go back' and take Modules prior to the one you are currently in unless you end the lab and start it over again.

For example: If you Start Module 4, you cannot use the Module Switcher to Start Labs 1, 2 or 3.

 

iSCSI Target Configuration


The vSAN iSCSI target service is enabled with just a few mouse clicks. CHAP and Mutual CHAP authentication are supported. vSAN objects that serve as iSCSI targets are managed with storage policies just like virtual machine objects.


 

Enable iSCSI Target Service on the vSAN Cluster

 

To enable vSAN iSCSI Target Service on a vSAN Cluster :

  1. From the Home menu of the vSphere Web Client
  2. Select Hosts and Clusters

 

 

Enable iSCSI Target Service on the vSAN Cluster

 

Enable the vSAN iSCSI Target Service

  1. Select Cluster RegionA01-COMP01
  2. Select Configure tab
  3. Select vSAN > General
  4. Click on Edit next to "vSAN iSCSI Target Service" section

 

 

Enable iSCSI Target Service on the vSAN Cluster

 

  1. Check Enable Virtual SAN iSCSI Target Service
  2. The Default iSCSI network is vmk3 in our environment, so we can leave this as the default.
  3. We will use the vSAN Default Storage Policy as the Storage policy for the home object.

Click OK

Note: Before proceeding to the next step, please make sure ALL of the tasks are completed. Review the vSphere Web Client tasks for proper status.

 

 

Enable iSCSI Target Service on the vSAN Cluster

 

Verify the vSAN iSCSI Target Service is enabled.

 

 

Create iSCSI Targets

 

To create an iSCSI Target:

  1. Select Cluster RegionA01-COMP01
  2. Select Configure tab
  3. Select vSAN -> iSCSI Targets
  4. Click on Add (+)

 

 

iSCSI Target Details

 

  1. Define an Alias for the Target - TGT-01
  2. Check Add your first LUN to the iSCSI Target
  3. Enter a LUN ID of 10
  4. Enter an Alias for the LUN as Lun-10
  5. Define the LUN size (2 GB)

Click OK

Note: The Target IQN (iSCSI Qualified Name) may differ from lab to lab

 

 

iSCSI Target Details

 

 

The vSAN iSCSI Target and LUN will be created.

 

 

Create iSCSI Device Access List

 

The last step is adding initiator names to an initiator group, which controls access to the target, as shown here.

  1. Select Cluster RegionA01-COMP01
  2. Select Configure tab
  3. Select vSAN -> iSCSI Initiator Groups
  4. Click on Add (+)

 

 

New Virtual SAN iSCSI Initiator Group

 

  1. Define the name of the Initiator Group ( Initiator-Group-01)
  2. Click OK

 

 

New Virtual SAN iSCSI Initiator Group

 

Verify that the vSAN iSCSI Initiator Group has been created

 

 

Setup iSCSI initiator on Windows Main Console vm

 

This lesson will emulate a physical server connecting to an iSCSI target on a vSAN cluster.

Note: This is for demonstration purposes only, since attaching a vSAN iSCSI volume is supported only for physical hosts

  1. Open the iSCSI initiator application that is located on your Desktop
  2. Select the Configuration tab
  3. Copy (CTRL-C) the Initiator Name string into the clipboard

Example:

iqn.1991-05.com.microsoft:controlcenter.corp.local
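
As a hedged alternative to the GUI, the same IQN can be read with the Windows Storage module's Get-InitiatorPort cmdlet, assuming the module is available on the Main Console (a sketch, not a required lab step):

(Get-InitiatorPort | Where-Object {$_.ConnectionType -eq 'iSCSI'}).NodeAddress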

 

 

Configure iSCSI initiator for vSAN iSCSI Target

 

  1. Select Cluster RegionA01-COMP01
  2. Select Configure tab
  3. Select vSAN -> iSCSI Initiator Groups
  4. Select the Initiator Group called Initiator-Group-01
  5. Select Initiators tab
  6. Click on Add (+)

 

 

Configure iSCSI initiator for vSAN iSCSI Target

 

 

  1. Paste (CTRL-V) the Initiator Name string into the Member initiator name field and click Add

Click OK

 

 

Configure iSCSI initiator for vSAN iSCSI Target

 

Verify that the Initiator has been added to the vSAN iSCSI Initiator Group

 

 

Add Accessible Targets

 

  1. Select Cluster RegionA01-COMP01
  2. Select Configure tab
  3. Select vSAN -> iSCSI Initiator Groups
  4. Select the Initiator Group called Initiator-Group-01
  5. Select Accessible Targets tab
  6. Click on Add (+)

 

 

Add Accessible Targets

 

  1. Select the Target IQN
  2. Click OK

 

 

Connect Host to iSCSI Target

 

  1. Locate one of the ESXi hosts in the vSAN Cluster. In this example, we will use esx-01a.corp.local.
  2. Select Configure
  3. Select Networking -> VMkernel Adapters
  4. Note the IP Address of the vSAN VMkernel port group.  ( 192.168.130.51 )

 

 

Connect Host to iSCSI Target

 

Open the iSCSI initiator application that is located on the Desktop

  1. Select the Discovery tab
  2. Select Discover Portal

 

 

Connect Host to iSCSI Target

 

  1. Enter the IP associated with vmk3 from host esx-01a.corp.local ( 192.168.130.51 )

Click OK

 

 

Connect Host to iSCSI Target

 

1. Click the Targets tab

Notice that the Target is Inactive

2. Click Connect

 

 

 

Connect Host to iSCSI Target

 

  1. Click Enable multi-path
  2. Click OK

Verify that the Discovery Target is Connected.
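
For reference, the discovery and connection steps above can also be scripted with the built-in Windows iSCSI cmdlets. This is a minimal sketch, assuming the same portal IP used in the previous steps; the lab itself uses the GUI:

# Register the vSAN iSCSI portal, list the targets it exposes, then connect with multi-path enabled
New-IscsiTargetPortal -TargetPortalAddress 192.168.130.51
Get-IscsiTarget
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true
Get-IscsiConnection   # confirms the session is established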

 

 

Initialize iSCSI Disk

 

Open the Computer Management Tool by double-clicking the Computer Management shortcut on the desktop

  1. Click on Storage > Disk Management
  2. Windows will prompt you to Initialize Disk 1; (check Taskbar for pop-up)
  3. Validate information/options match and click OK

 

 

New Simple Volume

 

Create a Simple Volume to start using the new disk (a scripted equivalent is sketched after these steps)

  1. Right click on Disk 1, the unallocated disk (2 GB), and select New Simple Volume
  2. Accept all Wizard defaults and give the volume a new name (vSAN iSCSI Volume)
  3. Click on Next and Finish to complete the wizard
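
For reference, the same initialize/partition/format sequence can be scripted with the Windows Storage module. This is a sketch only, assuming Disk 1 is the only RAW disk present; it auto-assigns the next free drive letter rather than forcing E: as the wizard does.

Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "vSAN iSCSI Volume"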

 

 

Format the New Volume (E:)

 

Note: Windows may prompt you (see the Taskbar) to format the disk; click "Cancel" as you have already formatted the disk.

 

 

 

Test iSCSI-based volume in Windows

 

Open Windows Explorer (shortcut on the taskbar) and navigate to the C:\ folder

Right click and copy the "Software" Folder

 

 

Copy data to iSCSI-based drive

 

Navigate to the new volume ( E: )

Right click on the root of the volume and select Paste

 

 

Review iSCSI Target usage in Web Client

 

  1. Open vSphere Web Client
  2. Select Storage
  3. Click on vsanDatastore
  4. Select Files
  5. Expand the .iSCSI-CONFIG folder, then expand the targets folder and select the folder inside it.
  6. Validate the vSAN VMDK object exists

 

Conclusion


In this module we demonstrated the new iSCSI feature that is part of the latest VMware vSAN release. We started by showing you how to enable the iSCSI target service and how to configure the iSCSI initiator. Once these were configured correctly, we demonstrated how to attach a host to the iSCSI volume, which can provide additional, simple, cost-effective storage for physical hosts.


 

You've finished Module 4

Congratulations on completing Module 4.

Proceed to any module below which interests you most.

Module 5 - vSAN Encryption (45 Minutes, Intermediate)

Module 5 explores how to add a Key Management Server and how to enable vSAN Encryption.

Module 6 - vSAN PowerCLI /esxcli (45 Minutes, Intermediate)

Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been introduced to help automate, manage and monitor VMware Virtual SAN Environments.

Module 7 - 2-Node Direct Connect (30 minutes, Advanced)

Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness Traffic Separation.

 

 

How to End Lab

 

If you would like to end your lab click on the END button.

 

Module 5 - vSAN Encryption (30 Minutes, Beginner)

Introduction


You can use data at rest encryption to protect data in your Virtual SAN cluster.

Virtual SAN can perform data at rest encryption. Data is encrypted after all other processing, such as deduplication, is performed. Data at rest encryption protects data on storage devices in case a device is removed from the cluster.

Using encryption on your Virtual SAN cluster requires some preparation. After your environment is set up, you can enable encryption on your Virtual SAN cluster.

Virtual SAN encryption requires an external Key Management Server (KMS), the vCenter Server system, and your ESXi hosts. vCenter Server requests encryption keys from an external KMS. The KMS generates and stores the keys, and vCenter Server obtains the key IDs from the KMS and distributes them to the ESXi hosts.

vCenter Server does not store the KMS keys, but keeps a list of key IDs.

This Module contains the following lessons:


 

Lab Preparation

If you have completed the previous modules by completing the steps as outlined, then you can skip the next few steps to prepare your  environment for this lesson.

Click here to go to the lesson.

If you have skipped to this module, we will use our Module Switcher PowerCLI Application to prepare the environment.

 

 

Module Switcher

 

Double-Click the Module Switcher Desktop Shortcut called HOL-1808-HCI

 

 

Module 5 Start

 

  1. Click the Module 5 Start button

 

 

Module 5 Progress

 

Monitor Progress until Complete.

• Press Enter to continue (and close the PowerCLI Window)

 

 

Lab Prep Complete

 

Your Lab has been successfully prepared for Module 5

  1. Click Window Close to safely stop the Module Switcher

Please note that you cannot 'go back' and take Modules prior to the one you are currently in unless you end the lab and start it over again.

For example: if you start Module 4, you cannot use the Module Switcher to start Modules 1, 2 or 3.

 

Configuring the Key Management Server


A Key Management Server (KMS) cluster provides the keys that you can use to encrypt the Virtual SAN datastore.

Before you can encrypt the vSAN Datastore, you must set up a KMS cluster to support encryption. That task includes adding the KMS to vCenter Server and establishing trust with the KMS.

The vCenter Server provisions encryption keys from the KMS cluster.

The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard.


 

vSAN Encryption at a glance

 

vSAN Encryption is data at rest encryption; this means data is encrypted at rest on both the caching and capacity devices. Enabling encryption is a single-click operation and is designed to work seamlessly with any of the other vSAN and vSphere features (for example vMotion, HA and DRS). It is the first hyper-converged infrastructure offering included in a DISA STIG.

Technically, vSAN encryption is designed to work with any KMS server that communicates via KMIP 1.1 (or above). However, we explicitly certify KMS servers from our partners to provide a consistent user experience.

 

 

How vSAN Encryption Works

When you enable encryption, Virtual SAN encrypts everything in the Virtual SAN datastore. All files are encrypted, so all virtual machines and their corresponding data are protected. Only administrators with encryption privileges can perform encryption and decryption tasks.

Virtual SAN uses encryption keys as follows:

When a host reboots, it does not mount its disk groups until it receives the KEK. This process can take several minutes or longer to complete. You can monitor the status of the disk groups in the Virtual SAN health service, under Physical disks > Software state health.

 

 

Design Considerations for Virtual SAN Encryption

When you collect a support bundle, you can specify a password. The password re-encrypts core dumps that use internal keys so that they use keys based on the password instead. You can later use the password to decrypt any encrypted core dumps that might be included in the support bundle. Unencrypted core dumps or logs are not affected.

 

 

Setting up domain of trust

 

There are three parties involved in vSAN encryption: (1) the Key Management Server or KMS (the entity that generates the keys), (2) vCenter and (3) the vSAN host or ESXi host.

Before we attempt to encrypt any data on vSAN, the first step is to set up a domain of trust among the three parties (KMS, vCenter and the vSAN host).

Setting up the domain of trust follows standard Public Key Infrastructure (PKI) based management of digital certificates. The exact steps are dependent on the KMS provider.

Once the domain of trust is set up, the KMS, vCenter and the vSAN host can begin communicating with each other. The key exchange happens between the vSAN host and the KMS server.

The vSAN host provides a key reference or key ID to the KMS server, and the KMS server in response provides the key that is associated with that key ID.

 

 

Review Key Management Server settings

You add a Key Management Server (KMS) to your vCenter Server system from the vSphere Web Client.

vCenter Server creates a KMS cluster when you add the first KMS instance. If you configure the KMS cluster on two or more vCenter Servers, make sure you use the same KMS cluster name.

Note: Do not deploy your KMS servers on the Virtual SAN cluster you plan to encrypt. If a failure occurs, hosts in the Virtual SAN cluster must be able to communicate with the KMS.

 

 

Add Key Management Server settings

 

A Key Management Server (KMS) cluster provides the keys that you can use to encrypt the Virtual SAN datastore.

Before you can encrypt the Virtual SAN datastore, you must set up a KMS cluster to support encryption.

That task includes adding the KMS to vCenter Server and establishing trust with the KMS. vCenter Server provisions encryption keys from the KMS cluster.

  1. From the Home page of the vSphere Web Client,
  2. Select Hosts and Clusters

 

 

 

Review Key Management Server settings

 

To add a Key Management Server to vCenter Server:

  1. Select the vCenter Server called vcsa-01a.corp.local
  2. Select the Configure tab
  3. Select Key Management Servers
  4. Click Add KMS ...

To use vSAN Encryption, a Key Management Server (KMS) is required. Nearly all KMIP 1.1-compliant KMS vendors are compatible, with specific testing completed for vendors such as HyTrust®, Gemalto®, Thales e-Security®, CloudLink®, and Vormetric®. These solutions are commonly deployed in clusters of hardware appliances or virtual appliances for redundancy and high availability.

 

 

Add Key Management Server

 

Enter the following information to create the KMS Cluster:

KMS Cluster : <Create new cluster>
Cluster name : KMS-Cluster
Server alias : KMS01
Server Address : kms-01a.corp.local
Server port : 5696

The remaining settings can be left blank.

Click OK

 

 

Add Key Management Server

 

To set the KMS Server as the default, click Yes

 

 

Add Key Management Server

 

On the Trust Certificate dialog box, click Trust

 

 

Add Key Management Server

 

The Key Management Server called KMS01 will be added.

Verify the Connection Status is Normal and the Certificate Status has a valid certificate that will expire some time in the future.

 

 

Establish Trust with Key Management Server

 

After you add the KMS to the vCenter Server system, you must establish a trusted connection. The exact process depends on the certificates that the KMS accepts, and on company policy.

  1. Select the KMS instance with which you want to establish a trusted connection.  ( KMS01 )
  2. Click Establish trust with KMS ...

 

 

Establish Trust with Key Management Server

 

Select the option appropriate for your server and complete the steps.

  1. Select Root CA Certificate

Different KMS vendors require different means to trust the digital certificates of vCenter and ESXi hosts. You should contact your Key Management Server vendor for your Certificate option.

Click OK on Establish Trust With KMS

Click OK on the Download Root CA Certificate

 

 

 

Review Key Management Server settings

 

  1. After establishing the trust with the Key Management server, click the Refresh icon to update the Web Client status

Verify the Connection Status is Normal and the Certificate Status has a valid certificate that will expire some time in the future for both the KMS Cluster and the KMS Server.

 

 

Enabling vSAN Encryption


In vSAN 6.6, we are introducing another option for native data-at-rest encryption, vSAN Encryption.

vSAN Encryption is the industry's first native HCI encryption solution; it is built right into the vSAN software. With a couple of clicks, it can be enabled or disabled for all items on the vSAN datastore, with no additional steps.

Because it runs at the hypervisor level and not in the context of the virtual machine, it is virtual machine agnostic, like VM Encryption.

And because vSAN Encryption is hardware agnostic, there is no requirement to use specialized and more expensive Self-Encrypting Drives (SEDs), unlike the other HCI solutions that offer encryption.


 

Enabling vSAN Encryption

 

You can enable encryption by editing the configuration parameters of an existing Virtual SAN cluster.

  1. Select Home page of the vSphere Web Client
  2. Select Hosts and Clusters

Turning on encryption is a simple matter of clicking a checkbox. Encryption can be enabled when vSAN is enabled or afterward, with or without virtual machines (VMs) residing on the datastore.

Note that a rolling disk reformat is required when encryption is enabled.

This can take a considerable amount of time, especially if large amounts of existing data must be migrated as the rolling reformat takes place.
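
The same operation can be scripted from PowerCLI. The sketch below assumes the encryption parameters added to Set-VsanClusterConfiguration in the PowerCLI 6.5.x storage module; verify the exact cmdlet and parameter names in your environment with Get-Help before relying on it. It is not a lab step.

# Hedged sketch: enable encryption using the KMS cluster configured earlier in this module
$kms = Get-KmsCluster -Name KMS-Cluster
Get-Cluster RegionA01-COMP01 | Set-VsanClusterConfiguration -EncryptionEnabled $true -KmsCluster $kms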

 

 

Enabling vSAN Encryption

 

Let's review the current state of our vSAN Cluster:

  1. Select the cluster called RegionA01-COMP01
  2. Select Configure
  3. Select vSAN -> General
  4. From this screen we can verify the following :
    • vSAN is Enabled
    • Deduplication and compression is Enabled
    • Encryption is Disabled
    • Our Disk Format Version is 5.0
  5. Let's now enable vSAN Encryption; click Edit

 

 

Enabling vSAN Encryption

 

Enabling vSAN Encryption is a one click operation.

  1. Click to enable Encryption
  2. Verify the KMS-Cluster is selected; if you have multiple KMS clusters in your environment, you can choose one here.
  3. Select option to Allow Reduced Redundancy

Enabling vSAN Encryption has an option to Erase disks before use. Do not enable this option.

Click the information button (i) next to these options for additional details.

Click OK

The Erase disks before use option significantly reduces the possibility of data leakage and increases an attacker's cost to reveal sensitive data. It also increases the time required to bring the disks into use.

 

 

Enabling vSAN Encryption

 

You can monitor the vSAN Encryption process from the Recent Tasks window.

If you get an error and vSAN Encryption fails, turn off vSAN Encryption and enable it again. In this lab environment we are using an open source KMIP server to showcase this feature. In a customer production environment with a supported Key Management Server, you should not see this.

To enable vSAN Encryption, the following operations take place:

This process is repeated for each of the Disk Groups in the vSAN Cluster.

 

 

 

Enabling vSAN Encryption

 

You can also monitor the vSAN Encryption process from Configure -> vSAN -> General.

Enabling vSAN Encryption will take a little time; each of the Disk Groups in the vSAN Cluster has to be removed and recreated.

 

 

Enabling vSAN Encryption

 

Once the rolling reformat of all the disk groups has completed, encryption of data at rest is enabled on the Virtual SAN cluster.

Virtual SAN encrypts all data added to the Virtual SAN datastore.

You have the option to generate new encryption keys, in case a key expires or becomes compromised.

 

 

Verifying vSAN Encryption is enabled

 

To show that vSAN Encryption is enabled on the disks, we can use the following command:

esxcli vsan storage list

From the output we can verify that Encryption is enabled and the Disk Key is loaded:
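
The same check can be run remotely from PowerCLI via Get-EsxCli. This is a sketch; the property names surfaced by the -V2 interface may differ slightly from the console output:

$esxcli = Get-EsxCli -VMHost esx-01a.corp.local -V2
$esxcli.vsan.storage.list.Invoke() | Select-Object Device, Encryption, DiskKeyLoaded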

 

vSAN Encryption Health Check



 

vSAN Encryption Health Check

 

There are vSAN Health Checks to verify that your vSAN Encryption is enabled and healthy.

  1. Select the Cluster called RegionA01-COMP01
  2. Select Monitor
  3. Select vSAN
  4. Select Health
  5. You can use the Retest button to rerun the vSAN Health Check 

 

 

vSAN Encryption Health Check

 

Scroll down through the list of vSAN Health Checks

  1. Select Encryption and expand the individual tests.
  2. Select CPU AES-NI is enabled on hosts

This check verifies whether the ESXi hosts in the vSAN cluster have the CPU AES-NI feature enabled.

Advanced Encryption Standard Instruction Set (or the Intel Advanced Encryption Standard New Instructions; AES-NI) is an extension to the x86 instruction set architecture for microprocessors from Intel and AMD. The purpose of the instruction set is to improve the speed of applications performing encryption and decryption using the Advanced Encryption Standard (AES).

 

 

vSAN Encryption Health Check

 

  1. Select "vCenter and all hosts are connected to Key Management Servers"
  2. Select vCenter KMS status

This vSAN Health Check verifies that the vCenter Server can connect to the Key Management Servers

 

 

vSAN Encryption Health Check

 

  1. Select Hosts KMS status

This vSAN Health Check verifies that the ESXi hosts can connect to the Key Management Servers.

 

 

Conclusion


With the addition of vSAN Encryption in vSAN 6.6, and with VM Encryption introduced in vSphere 6.5, native data-at-rest encryption can easily be accomplished on hyper-converged infrastructure (HCI) powered by vSAN storage or any other vSphere storage.

While vSAN Encryption and VM Encryption meet similar requirements, they do so a bit differently, each with use cases they excel at.

Most importantly, they provide customers with choice when deciding how to provide data-at-rest encryption for their vSphere workloads.

Congratulations on completing Module 5!

Proceed to any module below which interests you most.

Module 6 - vSAN PowerCLI /esxcli (45 Minutes, Intermediate)

Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been introduced to help automate, manage and monitor VMware Virtual SAN Environments.

Module 7 - 2-Node Direct Connect (30 minutes, Advanced)

Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness Traffic Separation.


 

How to End Lab

 

If you would like to end your lab click on the END button.

 

Module 6 - vSAN PowerCLI and ESXCLI (30 Minutes, Beginner)

Introduction


In this Module you will learn about the latest release of VMware PowerCLI (6.5 Release 1) and the enhancements that have been introduced to help automate, manage and monitor VMware Virtual SAN Environments.

We promise that there will be no, "Hello, World!" examples for you to work through. :)

This Module contains the following lessons:


 

Lab Preparation

We will use our Module Switcher PowerCLI Application to prepare the environment.

 

 

Module Switcher

 

Double-Click the Module Switcher Desktop Shortcut called HOL-1808-HCI

 

 

Module 6 Start

 

  1. Click the Module 6 Start button

 

 

Module 6 Progress

 

Monitor Progress until Complete.

• Press Enter to continue (and close the PowerCLI Window)

 

 

Lab Prep Complete

 

Your Lab has been successfully prepared for Module 6!

  1. Click Window Close to safely stop the Module Switcher

Please note that you cannot 'go back' and take Modules prior to the one you are currently in unless you end the lab and start it over again.

For example: if you start Module 4, you cannot use the Module Switcher to start Modules 1, 2 or 3.

 

PowerCLI Overview


VMware PowerCLI is a command-line and scripting tool built on Windows PowerShell, and provides more than 500 cmdlets for managing and automating vSphere, vSAN, Site Recovery Manager, vRealize Operations Manager, vSphere Automation SDK, vCloud Director, vCloud Air, vSphere Update Manager and VMware Horizon environments.

In this lesson we will examine our Lab PowerCLI environment and perform a few vSphere Administrative Tasks.


 

Launch PowerCLI

 

PowerCLI has been pre-installed in the Lab.

1. Double-click the Desktop VMware PowerCLI Icon

 

 

Confirm Version

 

  1. Type the following cmdlet name to retrieve our PowerCLI version information:
Get-PowerCLIVersion

You will notice that the Get-PowerCLIVersion cmdlet is being deprecated, so let's run the Get-Module cmdlet instead.

Get-Module

You'll notice that we are running the latest PowerCLI release/build and you can also see a list of installed VMware Components. These components contain the various cmdlets to manage their respective areas, for example, the vSAN cmdlets that you'll be using in this Lab are contained within the 'VMware Storage PowerCLI component'.

Note: PowerCLI commands are not case sensitive. You can also press Tab at any time to attempt autocompletion; this is a valuable timesaver, so take advantage of it!

 

 

Connect to vCenter

 

  1. Type the following to connect to our Lab vCenter:
Connect-VIServer vcsa-01a.corp.local

The Connect-VIServer cmdlet can be used to connect and query across multiple vCenter instances.

 

 

PowerShell Providers

 

  1. Type the following command to list the PowerShell Providers that are available for usage:
Get-PSProvider

Let's examine the Inventory and Datastore providers in more detail.

 

 

Inventory Provider

 

The Inventory Provider is designed to expose an unfiltered inventory view of the inventory items from a server. It enables navigation and file-style management of the VMware vSphere inventory. By creating a PowerShell drive based on a managed object (such as a data center), you can obtain a view of its contents and the relationships between the items. In addition, you can move, rename, or delete objects by running commands from the PowerCLI console.

When you connect to a server with Connect-VIServer, the cmdlet builds two default inventory drives: vi and vis. The vi inventory drive shows the inventory on the last connected server. The vis drive contains the inventory of all vSphere servers connected within the current PowerCLI session. You can use the default inventory drives or create custom drives based on the default ones.

Let's examine the Inventory Drive.

  1. Change Directory (cd) to the inventory drive:
cd vi:
  2. List contents:
ls
  3. Change Directory (cd) to the Datacenter:
cd RegionA01
  4. List contents:
ls
  5. Change Directory (cd) to Virtual Machines:
cd vm
  6. List contents:
ls

 

 

Datastore Provider

 

The Datastore Provider is designed to provide access to the contents of one or more datastores. The items in a datastore are files that contain configuration, virtual disk, and the other data associated with a virtual machine. When you connect to a server with Connect-VIServer, the cmdlet builds two default datastore drives: vmstore and vmstores. The vmstore drive provides a list of the datastores available on the vSphere server that you last connected to.

If you establish multiple connections to the same vSphere server, the vmstore drive is not updated. The vmstores drive contains all datastores available on all vSphere servers that you connected to within the current PowerCLI session. You can use the default datastore drives or create custom drives based on the default ones.

Let's examine the Datastore Provider.

  1. Change Directory (cd) to the Datastore Provider:
cd vmstore:
  2. List contents:
ls
  3. Change Directory (cd) to the Datacenter:
cd RegionA01
  4. List contents (notice that we have two Datastores present -- a shared iSCSI Datastore and our vsanDatastore):
ls
  5. Change Directory (cd) back to the C:\ volume:
cd c:

 

 

PowerCLI Cmdlets

 

We used the 'Connect-VIServer' cmdlet earlier. Cmdlets are small programs that are pre-compiled for your usage.

Let's use a few cmdlets to examine our vCenter environment by typing these commands (remember that you can use the Tab key to autocomplete if desired).

  1. Retrieve available vCenter Datacenter(s):
Get-Datacenter
  2. Retrieve vCenter Cluster(s):
Get-Cluster
  3. Retrieve Virtual Machines:
Get-VM
  4. Retrieve available vCenter Datastores:
Get-Datastore

 

 

Cmdlets, cont.

 

You can pipe commands together to create a pipeline.

A pipeline is a series of commands separated by the pipe operator |. Each command in the pipeline receives an object from the previous command, performs some operation on it, and then passes it to the next command in the pipeline. Objects are output from the pipeline as soon as they become available.

  1. Type the following command to pipe the output of Get-VM to the Format-Table cmdlet and return only the Name and PowerState columns:
Get-VM | Format-Table Name, PowerState
  2. We can also pipe the result of Get-VM to the Where-Object cmdlet to filter on specific information (like power state):
Get-VM | Where-Object {$_.PowerState -eq 'PoweredOn'}

 

 

Get-Help

 

You can use the Get-Help cmdlet to view the description, syntax information and examples for any cmdlet that you are interested in learning more about.

Tip: You will need to scroll up the page to get to the beginning of the help text.

Example 1 (not shown in screenshot)

Get-Help Get-VM

Example 2 (not shown in screenshot)

Get-Help Get-VM -examples

 

 

Clone a Virtual Machine

 

For the final step of this Lesson we will clone an existing VM using the New-VM cmdlet (this VM will be used in a later automation lesson on using Storage Policy Based Management).

  1. Type the following command and monitor the clone progress (you can also simply highlight the entire command in your manual then drag and drop it into your PowerCLI window if you prefer):
New-VM -Name PowerCLI-VM -VM core-A -Datastore vsanDatastore -ResourcePool esx-01a.corp.local

 

PowerCLI vSAN Commands


The previous version of PowerCLI had six vSAN-specific cmdlets that could be utilized:

Get-VsanDisk

Get-VsanDiskGroup

New-VsanDisk

New-VsanDiskGroup

Remove-VsanDisk

Remove-VsanDiskGroup

Let's look at what has been added with PowerCLI 6.5 R1, next.


 

What's New

 

  1. Use Get-Command to view cmdlets containing 'vsan' in their name:
Get-Command *vsan*
  2. Storage Policy Based Management (SPBM) is also an important capability when it comes to vSAN (not shown in screenshot):
Get-Command *spbm*
  3. You can also view all of the vSphere Storage related cmdlets contained within the VMware.VimAutomation.Storage Module if desired (not shown in screenshot):
Get-Command -module VMware.VimAutomation.Storage

Let's examine a few of the new vSAN cmdlets, next.

 

 

vSAN Configuration Information

 

  1. To make things easier, let's create a Variable named $cluster and set it equal to the value of the Get-Cluster cmdlet:
$cluster = Get-Cluster
  2. Output the Variable contents:
$cluster
  3. Pass the $cluster variable to the new Get-VsanClusterConfiguration cmdlet:
Get-VsanClusterConfiguration $cluster

Note that we can see a few high level properties of our vSAN Cluster (vSAN is enabled, Stretched Cluster is not, etc.)

 

 

Get-VsanClusterConfiguration

 

Let's see what additional information is available via this cmdlet.

  1. Set a variable named $vsanConfig equal to the results of Get-VsanClusterConfiguration (you can up-arrow once then left-arrow over to insert the variable name):
$vsanConfig = Get-VsanClusterConfiguration $cluster
  2. Pipe $vsanConfig into the Get-Member cmdlet to see all of the Methods and Properties that are available:
$vsanConfig | Get-Member

 

 

Get-VsanClusterConfiguration, cont.

 

  1. You can directly view individual Properties by appending their name to your $vsanConfig variable. For example, try one or more of these:
$vsanConfig.HealthCheckEnabled
$vsanConfig.PerformanceServiceEnabled
$vsanConfig.VsanDiskClaimMode
  2. To view all of the Properties and their results you can simply pass the $vsanConfig variable to the Format-List cmdlet:
$vsanConfig | Format-List

 

 

vSAN Health Tests

 

The ability to Test vSAN Health and Performance was previously available within the vSphere Web Client -- this functionality has now been made accessible via PowerCLI 6.5 as well.

These Health Tests check all aspects of a Virtual SAN configuration including hardware compatibility, networking configuration and operations, advanced Virtual SAN configuration options, storage device health as well as virtual machine object health.

The health check will provide two main benefits to administrators of Virtual SAN environments:

Let's introduce a failure condition in our vSAN Cluster by running an existing PowerCLI Script. We will then use one of our new 'Test-' cmdlets to troubleshoot the condition.

1. Change into the c:\hol directory:

cd c:\hol

2. Type the following (script name) then press enter:

.\module4break.ps1

 

 

Test-VsanClusterHealth

 

  1. Let's set a variable named $vsanHealth equal to the result of running the Test-VsanClusterHealth cmdlet against our vSAN Cluster:
$vsanHealth = Test-VsanClusterHealth -Cluster $cluster

Note: In our shared lab environment it is possible for this cmdlet to take a few minutes to complete (thank you for your patience)!

  2. Output the result of this test by typing the $vsanHealth variable and pressing enter:
$vsanHealth

Notice that our OverallHealthStatus is indicated as 'Failed'.

 

 

Test-VsanClusterHealth, cont.

 

We know that the test Failed; however, we still do not understand the specific reason why.

  1. Let's dig deeper and examine the Properties that Test-VsanClusterHealth is aware of by using the Get-Member cmdlet:
$vsanHealth | Get-Member
  2. Examine the OverallHealthDescription by appending this property to your $vsanHealth variable (remember that you can use the Tab key to autocomplete):
$vsanHealth.OverallHealthDescription

Things are getting interesting! Notice that we have a 'Network misconfiguration' somewhere...

 

 

Test-VsanClusterHealth, cont.

 

  1. Examine the NetworkHealth Property to shed more light, by typing:
$vsanHealth.NetworkHealth

Notice that we are getting a False result for VsanVmknicPresent (each vSphere Host participating in a vSAN Cluster must have a vmknic adapter enabled for vSAN Traffic).

 

 

Test-VsanClusterHealth, cont.

 

  1. To identify the culprit Host, examine the results of NetworkHealth.HostResult (you might need to scroll up in your PowerCLI window):
$vsanHealth.NetworkHealth.HostResult

Ah-ha! Notice that the Host, 'esx-03a.corp.local' does not have a vSAN vmknic configured (you can compare this against one of the other Hosts returned).

 

 

Fix Host

 

Let's run a script to re-enable vSAN traffic on our impacted vSphere Host.

  1. Change into the c:\hol directory:
cd c:\hol
  2. Type the following (script name) then press enter:
.\module4fix.ps1

Note: The command that is utilized to enable vSAN Traffic on the impacted host is output in the console (and is also shown in the Lab Manual screenshot above).

Extra Credit (Optional): Re-run the previous steps beginning with the Test-VsanClusterHealth cmdlet to confirm that the vSAN Cluster is now healthy and that the vmknic has been properly enabled for the impacted host. You may receive a 'warning' result via Test-VsanClusterHealth (this is expected as we are running vSAN in a nested ESXi environment on virtual hardware).

 

 

Test-VsanVMCreation

 

This test creates a very simple, tiny virtual machine on every ESXi host in the Virtual SAN cluster. If that creation succeeds, the virtual machine is deleted and it can be concluded that a lot of aspects of Virtual SAN are fully operational (the management stack is operational on all hosts, the Virtual SAN network is plumbed and is working, the creation, deletion and I/O to objects is working, etc.).

By performing this test, an administrator can reveal issues that the passive health checks may not be able to detect. By doing so systematically, it is also very easy to isolate any particular faulty host and then take steps to remediate the underlying problem.

  1. Create a $testVM variable and assign it to the result of our Test-VsanVMCreation cmdlet:
$testVM = Test-VsanVMCreation $cluster
  2. Output the result of this test by typing the $testVM variable and pressing enter:
$testVM

Notice that the Test Results indicated, 'Passed'.

 

 

Test-VsanVMCreation, cont.

 

  1. Examine the Properties that Test-VsanVMCreation is aware of by using the Get-Member cmdlet:
$testVM | Get-Member
  2. Examine the HostResult property by appending this to your $testVM variable:
$testVM.HostResult

Notice that the Test Virtual Machine was successfully created on each vSphere Host.

 

 

Test-VsanNetworkPerformance

 

Warning: This test should only be run while the Virtual SAN cluster (or even the physical switch attached to the Virtual SAN cluster) is not running in production. It is advisable to run it during a maintenance window or before placing the Virtual SAN cluster into production. The reason for this is that the test will flood the network with multicast packets while trying to find where an issue lies. If other users need bandwidth, they may not get enough bandwidth while this test is running.

This test is designed to assess connectivity and multicast speed between the hosts in the Virtual SAN cluster. It verifies that the multicast network setup can satisfy Virtual SAN's requirements.

  1. Create a $testNetwork variable and assign it to the result of our Test-VsanNetworkPerformance cmdlet:
$testNetwork = Test-VsanNetworkPerformance $cluster

  2. Output the result of this test by typing the $testNetwork variable and pressing enter:
$testNetwork

Note: This Network test may report a 'Failed' status if the cloud environment where our Lab is running is overly busy. The command can be run multiple times if needed.

 

 

Test-VsanStoragePerformance

 

The last Test cmdlet that we will review is Test-VsanStoragePerformance.

There are two primary use cases for this test:

  1. Create a $testStorage variable and assign it to the result of our Test-VsanStoragePerformance cmdlet (although there are over 10 different Workload tests, we will be utilizing a 'LowStressTest' workload so that we do not overly tax our lab environment).

You can highlight this command then drag and drop it into the PowerCLI window if you prefer:

$testStorage = Test-VsanStoragePerformance $cluster -Workload LowStressTest -TestDurationSeconds 10 -Policy "Virtual SAN Default Storage Policy"
  2. Output the result of this test by typing the $testStorage variable and pressing enter:
$testStorage

Notice that the Test Results indicated, 'Passed'.

 

 

Test-VsanStoragePerformance, cont.

 

  1. Examine the Properties that Test-VsanStoragePerformance is aware of by using the Get-Member cmdlet:
$testStorage | Get-Member
  2. Examine the HostResult property by appending this to your $testStorage variable (you may need to scroll up in your PowerCLI window):
$testStorage.HostResult

Notice that we gain visibility into all sorts of interesting information: IssueFound, Latency, IOPS, etc.

 

PowerCLI vSAN Automation


VMware Virtual SAN is an ideal candidate for robust Automation via PowerCLI. This includes automating the entire end-to-end build out of a vSAN Environment ('Day 0' task) or performing day to day Virtual Administrator tasks ('Day 1' or 'Day 2' activities).

Virtual SAN APIs can also be accessed through PowerCLI cmdlets. IT administrators can automate common tasks such as assigning storage policies and checking storage policy compliance. Consider a repeatable task such as deploying or upgrading two-node Virtual SAN clusters at 100 retail store locations. Performing each one manually would take a considerable amount of time. There is also a higher risk of error leading to non-standard configurations and possibly downtime. vSphere PowerCLI can instead be used to ensure all of the Virtual SAN clusters are deployed with the same configuration. Lifecycle management, such as applying patches and upgrades, is also much easier when these tasks are automated.
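
As a flavor of what such automation looks like, here is a minimal sketch that loops over every cluster visible to the current session and reports overall vSAN health, using only cmdlets covered in this module (the VsanEnabled property name is assumed from the Get-VsanClusterConfiguration output):

foreach ($c in Get-Cluster) {
    $config = Get-VsanClusterConfiguration $c
    if ($config.VsanEnabled) {
        $health = Test-VsanClusterHealth -Cluster $c
        "{0} : {1}" -f $c.Name, $health.OverallHealthStatus
    }
}

Run against 100 remote-site clusters, a loop like this turns an afternoon of clicking into a single report.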

In this Lesson, we will walk through a few automation examples and will also highlight referencing links for further information on end-to-end vSAN automation.


 

Daily Management Tasks

 

In Lesson 2, "PowerCLI vSAN Commands", we explored some of the new Get-VsanClusterConfiguration and Test-VsanClusterHealth PowerCLI 6.5 cmdlets. These cmdlets can be used to monitor for changes in the environment (config drift) or health challenges and alert accordingly.

Additional 'management' examples include leveraging these cmdlets:

Update-VsanHclDatabase (NEW with PowerCLI 6.5, can be used to update and check vSAN Hardware Compatibility )

Get-VsanSpaceUsage (NEW with PowerCLI 6.5, can be used to monitor vSAN disk capacity )

Export/Import-SpbmStoragePolicy ( to backup and restore vSAN Storage Policies, can also be used when migrating to a new vCenter instance )
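
As a quick illustration of the last pair, the sketch below exports the default vSAN policy to a file and re-imports it under a new name. The file path and restored policy name are hypothetical; check Get-Help Export-SpbmStoragePolicy for the exact parameters in your build.

Get-SpbmStoragePolicy "Virtual SAN Default Storage Policy" | Export-SpbmStoragePolicy -FilePath c:\hol\vsan-default-policy.xml
Import-SpbmStoragePolicy -Name "vSAN Default Policy - Restored" -FilePath c:\hol\vsan-default-policy.xml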

 

 

Update-VsanHclDatabase

 

As its name implies, the Update-VsanHclDatabase cmdlet can be used to grab the latest vSAN HCL Database file either online (requires Internet access) or imported from a locally staged .json file.

Once updated, you can use the Test-VsanClusterHealth cmdlet that we learned about previously to validate compatibility.

Since our Lab does not have external Internet access, we have staged an HCL .json file locally.

  1. Update your vSAN HCL:
Update-VsanHclDatabase -filepath "c:\hol\all.json"
  2. Test vSAN Cluster Health using the updated HCL:
$testVSAN = Test-VsanClusterHealth -Cluster RegionA01-COMP01
  3. Examine the HclInfo Property returned via the Test-VsanClusterHealth cmdlet (date/time stamp results may vary vs. those shown in screenshot):
$testVSAN.HclInfo
  4. Examine the test result if desired (note that we receive a 'Warning' in our Lab since we are running vSAN in a nested Virtualized Environment):
$testVSAN

 

 

Get-VsanSpaceUsage

 

Let's examine the Get-VsanSpaceUsage cmdlet in more detail.

  1. Set a variable named $vsanUsage equal to the result of the Get-VsanSpaceUsage cmdlet:
$vsanUsage = Get-VsanSpaceUsage
  2. Output the result by typing the variable name:
$vsanUsage

Note: The CapacityGB size may be different in your lab environment, depending on how many disks in each ESXi host are consumed to create the vSAN Datastore

 

 

Get-VsanSpaceUsage, cont.

 

  1. Examine the Properties that are available to the Get-VsanSpaceUsage cmdlet:
$vsanUsage | Get-Member

 

 

Get-VsanSpaceUsage, cont.

 

  1. Enter this simple script to check the amount of free disk space and respond accordingly:
if ($vsanUsage.FreeSpaceGB -gt 50)
{ Write-Host -ForegroundColor Yellow "You have plenty of disk remaining!" }
else
{ Write-Host -ForegroundColor Yellow "Time to order more disk!" }

Note: You can highlight then drag and drop the above script contents to your PowerCLI window if you prefer.

 

 

Storage Policy Based Management

 

Storage Policy Based Management (SPBM) enables precise control of storage services. Virtual SAN provides services such as availability level, striping for performance, and the ability to limit IOPS. Policies that contain one or more rules can be created using the vSphere Web Client and/or PowerCLI.

These policies are assigned to virtual machines and individual objects such as a virtual disk. Storage policies can easily be changed and/or reassigned if application requirements change. These changes are performed with no downtime and without the need to migrate (Storage vMotion) virtual machines from one location to another.

 

 

Virtual Machine Prep

 

Applying new Storage Policies could be very cumbersome if you had to apply them manually to individual Virtual Machines. In this section we will create a new Storage Policy and illustrate how easy it is to apply it to multiple Virtual Machines.

This new Storage Policy will set an IOPS Limit of 500 per VM -- this could be helpful if you wanted to prioritize certain VMs over others.

To prepare our Virtual Machines, please perform the following steps:

  1. Create another VM in your environment:
New-VM -Name PowerCLI-VM-01 -VM core-A -Datastore vsanDatastore -ResourcePool esx-02a.corp.local
  2. Set a Variable named $vms equal to all Virtual Machines whose names start with 'PowerCLI', then confirm the variable contents:
$vms = Get-VM -name PowerCLI*
$vms
  3. Power on each VM:
Start-VM $vms

 

 

New-SpbmStoragePolicy

 

  1. Create a new Storage Policy that sets an IOPS Limit of 500:
New-SpbmStoragePolicy -Name vSAN-IOPSlimit -RuleSet (New-SpbmRuleSet -Name "vSAN-IOPSlimit" -AllOfRules @((New-SpbmRule -Capability VSAN.iopslimit 500)))
  2. View Storage Policies:
Get-SpbmStoragePolicy -requirement -namespace "VSAN" | Select Name, Description

 

 

Set-SpbmStoragePolicy

 

1. Apply the newly created Storage Policy to our multiple Virtual Machines:

foreach ( $vm in $vms ) { $vm, (Get-HardDisk -VM $vm) | Set-SpbmEntityConfiguration -StoragePolicy "vSAN-IOPSlimit" }

 

Note: This command may take a while to complete in our Lab environment. In the meantime, please feel free to continue on to our final section of this Lesson.
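
Once the task completes, you can optionally confirm that the policy was applied and check compliance with the Get-SpbmEntityConfiguration cmdlet (a verification sketch, not a required lab step):

$vms | Get-SpbmEntityConfiguration
Get-HardDisk -VM $vms | Get-SpbmEntityConfiguration

Each line returns the entity, its assigned storage policy and its compliance status.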

 

ESXCLI Enhancements


VMware Virtual SAN has several documented ESXCLI commands that can be used to explore & configure individual ESXi hosts.

In this lesson, we will provide some useful commands to use with Virtual SAN. Feel free to follow along. Do note that if you run any commands outside the scope of this lesson, you could potentially have an adverse effect on the lab and may not be able to continue with any remaining modules or the remainder of this module. We will use some of these commands later in this module, too.

Two new esxcli namespaces were added in vSAN 6.6:


 

Launch PuTTY

 

Launch the PuTTY application from the Windows Taskbar.

 

 

Choose esx-01a.corp.local

 

  1. Select the ESXi host called esx-01a.corp.local
  2. Select Load
  3. Select Open

 

 

esxcli vsan Commands

 

Type the following:

esxcli vsan

This will give you a list of all the possible esxcli commands related to Virtual SAN, with a brief description for each.

 

 

esxcli vsan cluster command

 

  1. To view details about the Virtual SAN Cluster, like its Health or whether it is a Master or Backup Node, you can type the following:
esxcli vsan cluster get

Please note that the UUID typically used to reference the VSAN cluster is listed as the "Sub-Cluster UUID".

If you ever were to issue the corresponding "esxcli vsan cluster join" command you would furnish this value for the UUID.

 

 

esxcli vsan network command

 

  1. To view networking details, you can execute this command:
esxcli vsan network list

Here we can see that the Network VmkNic is vmk3 and the Traffic Type on this VMKernel port is vsan.

By the way, if you run esxcli vsan network list, multicast information will still be displayed even though it may not be used.

 

 

esxcli vsan storage command

 

To view the details on the physical storage devices on this host that are part of the vSAN Cluster, you can use this command:

esxcli vsan storage list

Please note that this command does NOT list the storage devices available in the ESXi host - it only reports those storage devices that have already been assigned to VSAN as part of a VSAN Disk Group. If no disks are configured for vSAN on the ESXi host, then the output from this command will be blank.

This command tells us a lot of information:

  1. Whether the disk is an SSD or a spinning disk.
  2. Whether vSAN Dedupe and Compression is enabled.
  3. Whether the disk is used for Cache or Capacity.
  4. What the On-disk format version is.
  5. Whether vSAN Encryption is enabled.

 

 

 

esxcli vsan policy command

 

  1. To view the Policies in effect, such as how many failures the Virtual SAN can tolerate, the command can be executed:
esxcli vsan policy getdefault

Notice that the policy may contain different capabilities for different VSAN object types - here this is reflected as specifying the additional capability of "forceProvisioning" exclusively for the vmswap object. This makes sense for the vmswap object type since it is not a permanent attribute of the VM and will be recreated if the VM needs to migrate to another host in the cluster (vMotion, DRS, etc.)

 

 

esxcli vsan health command

 

The following two ESXCLI commands have been added to support vSAN Health Checks on an individual ESXi host:

  1. To get a summary view of all vSAN Health Checks, you can run the following command:
esxcli vsan health cluster list

 

 

esxcli vsan health command contd...

 

 

 

  1. To check the Host vSAN Health service installation:
esxcli vsan health cluster get -t "ESXi vSAN Health service installation"
  2. To check for hosts with no vSAN vmknic configured:
esxcli vsan health cluster get -t "All hosts have a vSAN vmknic configured"

 

 

esxcli vsan debug command

 

A new esxcli command to assist with troubleshooting has also been added to the latest vSphere release.

esxcli vsan debug

 

 

 

esxcli vsan debug command cont

 

Use "vsan debug vmdk" command to check all of VMDKs status

esxcli vsan debug vmdk list

The output shown is from a vSAN iSCSI target/lun configuration.

 

 

esxcli vsan debug command cont

 

This output will be quite lengthy, depending on how many objects are present in your lab environment.

Use the "vsan debug object" command to get the health of the vSAN components, the component configuration, the owner host information and other details such as the VM Storage Policy, Component State and Type:

esxcli vsan debug object list

 

 

esxcli vsan cluster unicastagent command

 

The following new esxcli command will tell you which hosts are using unicast (note, however, that it does not list the host where the command is being run):

esxcli vsan cluster unicastagent list

 

Conclusion


In this Module you have spent time learning about PowerCLI 6.5 and how it can be used to monitor, manage and automate VMware Virtual SAN.

We hope that this information has sparked ideas around how you can utilize PowerCLI in your own environments.

As you would expect, there is a wealth of additional information available to assist you in your PowerCLI with vSAN journey.

Please Click ahead for a full listing!


 

You've finished Module 6

Congratulations on completing Module 6!

Proceed to any module below which interests you most.

Module 7 - 2-Node Direct Connect (30 minutes, Advanced)

Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness Traffic Separation.

Additional Useful Sites:

Cormac Hogan Blog

Jase McCarty Blog

Jeff Hunter Blog

 

 

How to End Lab

 

If you would like to end your lab click on the END button.

 

Module 7 - vSAN Stretched Cluster (30 Minutes, Beginner)

Introduction


In this Module we will learn about how to setup a 2 Node vSAN Stretched Cluster with Witness Traffic Separation.

This Module contains the following lessons:


vSAN 6.6.1 - Stretched Cluster Overview


Before delving into the installation of a vSAN Stretched Cluster, there are a number of important features to highlight that are specific to stretch cluster environments.


 

2-Node Direct Connect vSAN Cluster

 

vSAN 6.5 introduced support for the use of network crossover cables in 2-node configurations. This is especially beneficial in use cases such as remote office and branch office (ROBO) deployments, where it can be cost prohibitive to procure, deploy, and manage 10GbE networking equipment at each location. This configuration also reduces complexity and improves reliability. In the VMware Hands On Labs platform we aren't able to fully simulate this configuration, but the steps in this lab module show how to prepare a 2-node stretched cluster and separate the Witness VM traffic just as one would do in a direct-connect cluster.

 

 

What is a Preferred Domain/Preferred Site?

Preferred domain/preferred site is simply a directive for vSAN. The "Preferred" site is the site that vSAN wishes to remain running when there is a failure and the sites can no longer communicate. One might say that the "Preferred" site is the site expected to have the most reliability.

Since virtual machines can run on either of the two sites, if network connectivity is lost between site 1 and site 2, but both still have connectivity to the Witness, the preferred site is the one that survives and its components remain active, while the storage on the non-preferred site is marked as down and components on that site are marked as absent.

 

 

What is Read Locality?

Since virtual machines deployed on vSAN Stretched Cluster will have compute on one site, but a copy of the data on both sites, vSAN will use a read locality algorithm to read 100% from the data copy on the local site, i.e. the same site where the compute resides. This is not the regular vSAN algorithm, which reads in a round-robin fashion across all replica copies of the data.

This new algorithm for vSAN Stretched Clusters will reduce the latency incurred on read operations.

If latency is less than 5ms and there is enough bandwidth between the sites, read locality could be disabled. However please note that disabling read locality means that the read algorithm reverts to the round robin mechanism, and for Virtual SAN Stretched Clusters, 50% of the read requests will be sent to the remote site. This is a significant consideration for sizing of the network bandwidth. Please refer to the sizing of the network bandwidth between the two main sites for more details.

The advanced parameter VSAN.DOMOwnerForceWarmCache can be enabled or disabled to change the behavior of read locality. This advanced parameter is hidden and is not visible in the Advanced System Settings view of the vSphere Web Client. It is only available via the CLI.

Read locality is enabled by default when vSAN Stretched Cluster is configured; it should only be disabled under the guidance of VMware's Global Support Services organization, and only when extremely low latency is available across all sites.
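
If you ever need to inspect this hidden parameter, one hedged way is through the esxcli interface exposed by PowerCLI's Get-EsxCli; the option path below is assumed from the parameter name and should not be changed outside of GSS guidance:

$esxcli = Get-EsxCli -VMHost esx-01a.corp.local -V2
$opt = $esxcli.system.settings.advanced.list.CreateArgs()
$opt.option = "/VSAN/DOMOwnerForceWarmCache"   # assumed option path for the hidden setting
$esxcli.system.settings.advanced.list.Invoke($opt)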

 

 

Launch vSphere Web Client

 

If not already open from a prior Lab Module, launch the vSphere Web Client using the Google Chrome icon in the Windows Taskbar.

  1. Select the "Use Windows session authentication" checkbox to log in as "CORP\Administrator".
  2. Click Login

 

 

Witness Host must not be part of the vSAN Cluster

 

When configuring your vSAN stretched cluster, only data hosts must be in the cluster object in vCenter.

  1. The vSAN Witness Host must remain outside of the cluster, and must not be added to the cluster at any point. In your lab environment, we have already deployed the vSAN Witness host.

Thus, for a 1 (host) + 1 (host) + 1 (witness) configuration, there is one ESXi host at each site and one ESXi witness host.

 

 

Networking

 

The vSAN Witness Appliance contains two network adapters that are connected to separate vSphere Standard Switches (VSS).

The vSAN Witness Appliance Management VMkernel is attached to one VSS, and the WitnessPG is attached to the other VSS. The Management VMkernel (vmk0) is used to communicate with the vCenter Server for appliance management. The WitnessPG VMkernel interface (vmk1) is used to communicate with the vSAN Network. This is the recommended configuration. These network adapters can be connected to different, or the same, networks, provided they have connectivity to their appropriate services.

The Management VMkernel interface could be tagged to include vSAN Network traffic as well as Management traffic. In this case, vmk0 would require connectivity to both vCenter Server and the vSAN Network. In many nested ESXi environments (such as the platform VMware uses for this Hands On Lab), there is a recommendation to enable promiscuous mode to allow all Ethernet frames to pass to all VMs that are attached to the port group, even if it is not intended for that particular VM. The reason promiscuous mode is enabled in many nested environments is to prevent a virtual switch from dropping packets for (nested) vmnics that it does not know about on nested ESXi hosts.

The Witness has a pre-defined portgroup called witnessPg. Here the VMkernel port to be used for vSAN traffic is visible. If there is no DHCP server on the vSAN network (which is likely), then the VMkernel adapter will not have a valid IP address. Verify the configuration as follows (a PowerCLI check is sketched after these steps):

  1. Select the ESXi host called esx-08a.corp.local
  2. Select Configure
  3. Select Networking -> VMkernel adapters
  4. Select vmk1 to view the properties of the witnessPg.
  5. Validate that "vSAN" is an enabled service as depicted in the screenshot.
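
The same properties can be checked from PowerCLI. This sketch uses standard cmdlets; the VsanTrafficEnabled column reflects the vSAN service tag on each VMkernel adapter:

Get-VMHost esx-08a.corp.local | Get-VMHostNetworkAdapter -VMKernel | Select-Object Name, IP, VsanTrafficEnabled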

 

 

 

Default Gateways and Static Routes

The final step before a vSAN Stretched Cluster can be configured is to ensure there is connectivity among the hosts in each site and the Witness host. It is important to verify connectivity before attempting to configure vSAN Stretched Clusters.

When using vSAN 6.1, 6.2, or 6.5 (without a specified gateway), administrators must implement static routes. Static routes, as highlighted previously, tell the TCPIP stack to use a different path to reach a particular network. Now we can tell the TCPIP stack on the data hosts to use a different network path (instead of the default gateway) to reach the vSAN network on the witness host. Similarly, we can tell the witness host to use an alternate path to reach the vSAN network on the data hosts rather than via the default gateway.

Note once again that the vSAN network is a stretched L2 broadcast domain between the data sites as per VMware recommendations, but L3 is required to reach the vSAN network of the witness appliance. Therefore, static routes are needed between the data hosts and the witness host for the vSAN network, but they are not required for the data hosts on different sites to communicate to each other over the vSAN network.

In vSphere 6.5, a default gateway can be specified for each VMkernel interface and does not require static routes when specifying a default route for the vSAN tagged VMkernel interfaces.

The esxcli command used to add a static route is:

esxcli network ip route ipv4 add -n <remote network> -g <gateway to use>

Other useful commands are esxcfg-route -n, which will display the network neighbors on various interfaces, and esxcli network ip route ipv4 list, which displays gateways for various networks. Make sure this step is repeated for all hosts.
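
If you prefer to push the static routes from PowerCLI rather than an SSH session, the same esxcli call can be invoked through Get-EsxCli. The network and gateway values below are hypothetical placeholders; substitute the witness vSAN network and the appropriate gateway for your environment.

$esxcli = Get-EsxCli -VMHost esx-05a.corp.local -V2
$routeArgs = $esxcli.network.ip.route.ipv4.add.CreateArgs()
$routeArgs.network = "192.168.110.0/24"   # hypothetical witness vSAN network
$routeArgs.gateway = "192.168.130.1"      # hypothetical gateway on the data hosts' vSAN network
$esxcli.network.ip.route.ipv4.add.Invoke($routeArgs)

esxcli network ip route ipv4 list (or the equivalent .list.Invoke() call) can then be used to confirm the route on each host.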

 

Creating a New vSAN 6.6.1 - 2 Node Stretched Cluster


Creating a vSAN stretched cluster from a group of hosts that do not already have vSAN configured is relatively simple. A new vSAN cluster wizard makes the process very easy.

In this lesson, we will step you through each of these steps.

 


 

Create a new vSphere Cluster

 

The first step is to create a vSphere Cluster for the 2 ESXi hosts that we will use to form the 2 Node vSAN Stretched Cluster.

  1. Right click the Datacenter called RegionA01
  2. Select New Cluster...

 

 

Create vSphere Cluster

 

1. Give the vSphere Cluster a name:

2-Node-Stretched-Cluster

Click OK

 

 

 

Move Hosts into Cluster

 

Once we have the vSphere Cluster created, move the 2 ESXi hosts called esx-05a.corp.local and esx-06a.corp.local into the vSphere Cluster.

This can be achieved in either of two ways:

  1. Drag the ESXi host and drop it on top of the vSphere cluster called 2-Node-Stretched-Cluster

or

  1. Right click the ESXi host and select Move To..., select the vSphere cluster called 2-Node-Stretched-Cluster and click OK

 

 

Take Hosts out of Maintenance Mode

 

Take the ESXi hosts esx-05a.corp.local and esx-06a.corp.local out of Maintenance Mode.

  1. Right click the ESXi host esx-05a.corp.local
  2. Select Maintenance Mode
  3. Select Exit Maintenance Mode

Repeat these steps for the other ESXi host in the vSphere cluster called 2-Node-Stretched-Cluster
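Maintenance mode can also be toggled from the command line over SSH, if you prefer; a minimal sketch:

# Check whether the host is currently in maintenance mode
esxcli system maintenanceMode get

# Exit maintenance mode
esxcli system maintenanceMode set --enable false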

 

 

Verify vSphere Environment

 

Verify that your 2-Node-Stretched-Cluster looks like the screenshot before we continue.

Verify that you have a vSphere Cluster containing 2 ESXi hosts and that they are not in Maintenance Mode.

 

 

Verify Networking

 

Verify that each of the ESXi hosts has a VMkernel port for vSAN and that the vSAN traffic service is enabled.

  1. Select ESXi host called esx-05a.corp.local
  2. Select Configure
  3. Select Networking -> VMkernel Adapters
  4. Select vmk3 (the vSAN-enabled VMkernel adapter)
  5. Verify that the vSAN service is enabled on the port-group.

 

 

Verify Storage

 

Verify that each of the ESXi hosts has storage devices available to create the vSAN Disk Groups and enable the creation of a vSAN Datastore. (A command-line alternative for listing these devices is sketched after the steps below.)

As shown in the screenshot, we will use the 2 x 5 GB disks for the cache tier and the 4 x 10 GB disks for the capacity tier when creating the vSAN Disk Groups.

  1. Select ESXi host called esx-05a.corp.local
  2. Select Configure
  3. Select Storage -> Storage Devices
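The same devices can also be listed from the command line; a minimal sketch over SSH to each host:

# List all storage devices visible to this host; look for the 5 GB and 10 GB disks
esxcli storage core device list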

 

 

Witness Traffic Separation

 

VMware vSAN 6.5 and later supports the ability to directly connect two vSAN data nodes using one or more crossover cables.

This is accomplished by tagging an alternate VMkernel port with a traffic type of "Witness". The data and metadata communication paths can now be separated.

Metadata traffic destined for the witness host's vSAN VMkernel interface can be sent through an alternate VMkernel port. This capability is called "Witness Traffic Separation" (WTS).

With the ability to directly connect the vSAN data network across hosts, and send witness traffic down an alternate route, there is no requirement for a high speed switch for the data network in this design.

This lowers the total cost of infrastructure to deploy 2 Node vSAN. This can be a significant cost savings when deploying vSAN 2 Node at scale.

 

 

Preparing your ESXi hosts for Witness Traffic Separation (WTS)

 

To prepare the ESXi hosts for the 2 Node vSAN Stretched Cluster, open a PuTTY session to the following hosts.

You will find the PuTTY application on the taskbar of your Main Console.

esx-05a.corp.local
esx-06a.corp.local

 

 

Preparing your ESXi hosts for Witness Traffic Separation (WTS)

 

Let's first have a look at which traffic types are configured.

  1. Run the following command on hosts esx-05a.corp.local and esx-06a.corp.local:
esxcli vsan network list
  2. Note that a Traffic Type of vsan is configured on each host.

 

 

Preparing your ESXi hosts for Witness Traffic Separation (WTS)

 

To be used for vSAN traffic today, VMkernel ports must be tagged with the "vsan" traffic type. This is easily done in the vSphere Web Client.

To tag a VMkernel interface for "Witness" traffic, however, it currently has to be done at the command line.

To add a new interface with Witness traffic as the type, the command is:

esxcli vsan network ipv4 add -i vmk0 -T=witness
  1. Run this command on both esx-05a.corp.local and esx-06a.corp.local

Note: Remember, it is the Management Network that we are going to use for the Witness Traffic, which in our environment is vmk0.
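If you accidentally tag the wrong interface, the tagging can be removed again; a minimal sketch, assuming vmk0 was the interface tagged:

# Remove the vSAN/witness traffic tagging from vmk0
esxcli vsan network remove -i vmk0

You can then re-run the add command against the correct interface.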

 

 

Preparing your ESXi hosts for Witness Traffic Separation (WTS)

 

Let's look at which traffic types are configured now.

  1. Run the following command on hosts esx-05a.corp.local and esx-06a.corp.local:
esxcli vsan network list

Here you will see that both Traffic Type: vsan and Traffic Type: witness are now configured on each host.

Now that we have configured the networking, let's create our 2 Node vSAN Stretched Cluster.

 

 

Create a 2 Node vSAN Cluster

 

The following steps should be followed to install a new vSAN stretched cluster. This example is a 1+1+1 deployment, meaning one ESXi host at the preferred site, one ESXi host at the secondary site, and one witness host.

To set up vSAN and configure the stretched cluster, navigate to:

  1. The cluster called 2-Node-Stretched-Cluster
  2. Select Configure
  3. Select vSAN -> General
  4. Click Configure to begin the Virtual SAN wizard.

 

 

Configure vSAN as a Stretched Cluster

 

The initial wizard allows you to choose various options, such as the disk claiming method, enabling Deduplication and Compression (All-Flash architectures only, with Advanced or greater licensing), and configuring fault domains or a stretched cluster.

  1. Select Configure two host vSAN cluster

Click Next

 

 

Validate Network

 

Network validation will confirm that each host has a VMkernel interface with vSAN traffic enabled.

Select Next.

 

 

Claim Disks

 

Disks will be selected for their appropriate role (cache or capacity) in the vSAN cluster.

As shown in the screenshot, the 5 GB disks from each of the ESXi hosts have been selected as the cache tier, and the 10 GB disks have been selected for the capacity tier.

Select Next

 

 

Create Fault Domains

 

In the Create fault domains screen, move the host esx-06a.corp.local to the Secondary fault domain.

  1. Select the host esx-06a.corp.local
  2. Click the >> button to move the ESXi host to the Secondary fault domain

Click Next

 

 

Select Witness host

 

The Witness host detailed earlier must be selected to act as the witness to the two Fault Domains.

  1. Expand the Datacenter RegionA01 and select esx-08a.corp.local

Click Next

 

 

Claim Disks for Witness host

 

Just like physical vSAN hosts, the witness needs a cache tier and a capacity tier.

Note: The witness does not actually require SSD backing and may reside on a traditional mechanical drive.

  1. Select the cache tier disk
  2. Select the capacity tier disk

Click Next

 

 

Ready to Complete

 

Review the vSAN Stretched Cluster configuration for accuracy.

Select Finish.

 

 

Monitor Tasks

 

You can monitor the tasks in the Recent Tasks window.

You will see tasks for Reconfigure Virtual SAN cluster, Creating disk groups, Converting to Stretched Cluster, and Adding disks to the Disk groups.

 

 

vSAN Cluster Created

 

Let's now verify that we have created the vSAN stretched cluster.

  1. Select 2-Node-Stretched-Cluster
  2. Select Configure
  3. Select vSAN > General
  4. We can see that vSAN is turned on and the on-disk format version is 5.0
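You can also confirm this from the command line on any of the hosts; a minimal sketch:

# Show this host's vSAN cluster membership, including its role and the member UUIDs
esxcli vsan cluster get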

 

 

Disk Management

 

Let's now have a look at the Disk Groups that have been created.

  1. Select 2-Node-Stretched-Cluster
  2. Select Configure
  3. Select vSAN > Disk Management
  4. We can see that we have a disk group on each of the ESXi hosts called esx-05a.corp.local and esx-06a.corp.local. We also have a disk group on esx-08a.corp.local, which is the vSAN witness host in our Stretched Cluster configuration.
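The disks claimed by vSAN on a given host can also be listed over SSH; a minimal sketch:

# List the disks claimed by vSAN on this host, including their disk group and tier
esxcli vsan storage list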

 

 

Fault Domains and Stretched Cluster

 

Let's now have a look at the Fault Domains and Stretched Cluster configuration.

  1. Select 2-Node-Stretched-Cluster
  2. Select Configure
  3. Select vSAN > Fault Domains and Stretched Cluster
  4. vSAN Stretched Cluster is enabled with the witness host esx-08a.corp.local.
  5. We can also see the 2 Fault Domains that have been created and their respective ESXi hosts.

 

 

Conclusion

This concludes the lesson on creating a vSAN 6.6.1 2 Node Stretched Cluster with Witness Traffic Separation.

 

Monitoring a vSAN 6.6.1 Stretched Cluster


One of the ways to monitor your vSAN environment is to use the vSAN Health Check.

The vSAN Health Check runs a comprehensive set of tests against your vSAN environment to verify that it is running correctly, alerts you to any inconsistencies it finds, and provides options on how to fix them.
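The health checks can also be queried from the command line of an ESXi host; a minimal sketch, assuming the esxcli vsan health namespace is available in your build (it was introduced alongside vSAN 6.6):

# List the vSAN health check results as seen by this host
esxcli vsan health cluster list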


 

vSAN Health Check

 

Let's have a look at how the health check works and what we can report on.

  1. Select 2-Node-Stretched-Cluster
  2. Select Monitor
  3. Select vSAN
  4. Select Health

Here you will see the high-level list of the vSAN Health checks that can be performed.

5. To re-run the vSAN Health Check at any time, you can click the Retest button.

 

 

vSAN Health Check

 

Let's drill deeper into the individual tests.

  1. Expand Stretched Cluster
  2. Select Site latency health

Towards the bottom of the screen, you will see the results of these tests.

Spend some time having a look at the other tests and the data that they return.

 

 

Conclusion

The vSAN Health Check is a great help for digging deeper into the performance and health of vSAN installations. It should be the first place you go to monitor your vSAN environment.

It is good practice to rerun the vSAN Health Check regularly so that you see the current state of the environment.

 

vSAN Site Affinity


Many workloads in a datacenter have built-in application-level availability or redundancy.

However, typical production workloads require multi-site protection for better data redundancy.

How do we cater to workloads that do not require copies to be stored on different sites?

 


 

Introducing Local Affinity

With Local Affinity, customers can use policies to keep data on a single site. In this case, Primary Failures To Tolerate (PFTT) = 0.

This ensures that objects are not replicated to the secondary site, thereby reducing the bandwidth required between sites.

Additionally, by using Affinity rules, customers can set VM/VMDK assignments to specific hosts.

 

For example, to test local affinity, you can set PFTT = 0, SFTT = 2, FTM = RAID 5. The expected outcome is that all I/Os are served locally and not on the secondary site. This way, you can seamlessly achieve host/disk protection for objects that do not require site protection.

A few housekeeping rules for local affinity:

  1. Affinity will only be available when Stretched Clusters is enabled
  2. DRS/HA rules should be aligned with Data Locality
  3. RAID 0/RAID 1 are supported for Hybrid, and RAID 0/RAID 1/RAID 5/RAID 6 are supported for All-Flash

 

 

Storage Policy Based Management - Local Affinity

 

In this lesson, we will create a VM Storage Policy for Local Affinity.

  1. From the Home page of the vSphere Web Client
  2. Select Policies and Profiles

 

 

Storage Policy Based Management - Local Affinity

 

  1. Select VM Storage Policies
  2. Select Create VM Storage Policy

 

 

Storage Policy Based Management - Local Affinity

 

Enter a name for the VM Storage Policy:

Single Site with Mirroring

Click Next

Click Next on Policy Structure

Click Next on Common rules for data services provided by hosts

 

 

Storage Policy Based Management - Local Affinity

 

  1. For the Storage Type, select VSAN
  2. Add the following 3 rules:
Primary level of failures to tolerate: 0
Failure tolerance method: Raid-1 (Mirroring) - Performance
Affinity: Preferred Fault Domain

Click Next

 

 

Storage Policy Based Management - Local Affinity

 

From the Compatible storage, select vsanDatastore (1)

Click Next

 

 

 

Storage Policy Based Management - Local Affinity

 

At the Ready to complete, click Finish

 

 

Storage Policy Based Management - Local Affinity

 

Verify the VM Storage Policy is created.

  1. Select the VM Storage Policy called Single Site with Mirroring
  2. Select Manage
  3. Select Rule-set-1: VSAN

 

 

Create a VM with Local Affinity SPBM

 

  1. From the Home button on the vSphere Web Client,
  2. Select Hosts and Clusters

 

 

Create a VM with Local Affinity SPBM

 

  1. Expand the ESXi host called esx-07a.corp.local
  2. Right click the VM called core-A
  3. Select Clone
  4. Select Clone to Virtual Machine

 

 

 

Create a VM with Local Affinity SPBM

 

  1. Give the VM a name; we will call it:
Local Affinity VM
  2. Select the datacenter called RegionA01

Click Next

 

 

Create a VM with Local Affinity SPBM

 

  1. In the Select a compute resource, expand the cluster called 2-Node-Stretched-Cluster
  2. Select the ESXi host called esx-05a.corp.local

Click Next

 

 

Create a VM with Local Affinity SPBM

 

  1. For the VM Storage policy, select Single Site with Mirroring

The vsanDatastore (1) will be compatible with this Storage Policy.

Click Next

Click Next on the Clone options

At the Ready to complete click Finish

 

 

 

Create a VM with Local Affinity SPBM

 

  1. Verify that the Virtual Machine called Local Affinity VM has been created in the 2-Node-Stretched-Cluster.
  2. Select Summary tab
  3. Verify that the VM Storage Policy called Single Site with Mirroring has been applied to the VM and the policy is Compliant.

 

 

vSAN Fault Domain

 

  1. Select the cluster called 2-Node-Stretched-Cluster
  2. Select Configure
  3. Select vSAN -> Fault Domains & Stretched Cluster
  4. Note the Preferred fault domain and the ESXi host in the Preferred Fault Domain.

In the example shown here, the ESXi host called esx-06a.corp.local is in the Preferred fault domain.

 

 

 

 

Policy Compliance

 

  1. Select the VM called Local Affinity VM
  2. Select Monitor
  3. Select Policies
  4. Select Hard disk 1
  5. Select Physical Disk Placement
  6. Notice that the Hard disk 1 component has been placed on the Preferred fault domain, which is the ESXi host called esx-06a.corp.local
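Component placement can also be inspected from the command line; a minimal sketch, assuming the esxcli vsan debug namespace is available (vSAN 6.6 and later), and noting that the output can be lengthy:

# List vSAN objects with their component placement and policy compliance
esxcli vsan debug object list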

 

 

Modify VM Storage Policy

 

  1. From the Home button of the vSphere Web Client, select Policies and Profiles
  2. Select VM Storage Policies
  3. Select Single Site with Mirroring
  4. Click Edit

 

 

 

Modify VM Storage Policy

 

  1. Select Rule-Set 1
  2. Change the Affinity setting from Preferred Fault Domain to Secondary Fault Domain

Click OK

 

 

Modify VM Storage Policy

 

The VM Storage Policy is already in use by some virtual machines.

  1. Change the Reapply to VMs to Now

Click Yes to save changes.

 

 

Verify VM Storage Policy

 

  1. From the Home page of the vSphere Web Client, select  Hosts and Clusters
  2. Select the VM called Local Affinity VM
  3. Select Monitor
  4. Select Policies
  5. Select Hard Disk 1
  6. Select Physical Disk Placement
  7. Verify that the Component is now on the Secondary Fault Domain, which in our case is the ESXi host called esx-05a.corp.local

In your lab environment, the ESXi host in the Secondary Fault Domain may be esx-06a.corp.local

 

Conclusion


In this lesson, we looked at how to configure a 2 Node vSAN Stretched Cluster. We gave you some background and covered some important features to understand before you configure your stretched vSAN Cluster environment.

One of the features that we wanted to show here was the separation of Witness and vSAN data traffic. We showed you how to configure a Management VMkernel port for Witness traffic.

We then completed a 2 Node vSAN Stretched Cluster configuration. Finally, we showed you how to monitor vSAN health and how to run the vSAN Health Checks.


 

You've finished Module 7

Additional information on vSAN Clusters and vSAN Stretched Clusters is available here:

VMware Blogs

VMware vSAN

vSAN on YouTube

 

 

How to End Lab

 

If you would like to end your lab, click on the END button.

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-1808-01-HCI

Version: 20180425-080148