VMware Hands-on Labs - HOL-SDC-1608


Lab Overview - HOL-SDC-1608 - What's New with Virtual SAN 6?

Lab Guidance


This lab covers the A to Z of Virtual SAN 6.0.  

The key takeaways include:

The lab is broken up into nine modules that can be taken in any order. The modules are:

Module 1 - Virtual SAN 6.0 Setup and Enablement (30 minutes)

Module 2 - Virtual SAN 6.0 with vMotion, Storage vMotion and HA Interoperability (60 minutes)

Module 3 - Virtual SAN 6.0 Storage Level Agility (60 minutes)

Module 4 - Useful CLI Commands (30 minutes)

Module 5 - Virtual SAN 6.0 Monitoring (30 minutes)

Module 6 - Virtual SAN 6.0 Troubleshooting (60 minutes)

Module 7 - Extending Virtual SAN with Additional File Services (60 minutes)

Module 8 - Securing Virtual SAN Data with Encryption (45 minutes)

Module 9 - Identify and Resolve Virtual SAN Issues (30 minutes)

You have a total of 90 minutes per lab session, and each module takes between 30 and 60 minutes to complete, though your time may vary depending on experience. A single session should accommodate up to three modules.

Lab Captains: John Browne, Ken Osborn, Tony Okwechime, Jitender Rohilla


 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  If you have any questions or concerns, please feel free to use the support made available to you either at VMworld in the Hands-on Labs area, in your Expert-led Workshop, or online via the survey comments, as we are always looking for ways to improve your hands-on lab experience.

 

Module 1 - Virtual SAN Setup and Enablement (30 Min)

Virtual SAN Overview


The software-defined data center (SDDC) is changing the way that data centers are built and operated to be more agile, automated and efficient than ever before. Software-defined storage (SDS) is a key component of this architectural shift, and Virtual SAN 6.0 represents the current state of the art.

The key idea behind SDS is to create a storage platform that allows for the dynamic composition of services, abstracted from the underlying hardware. Traditionally, storage services have been provided by external storage arrays. With VSAN, the hypervisor and server-resident storage devices work together to provide storage services -- but more flexibly and cost-effectively than before.


 

What is Virtual SAN

As a component of vSphere, VSAN extends the hypervisor to pool and abstract server-based storage resources, much the way vSphere pools and abstracts compute resources. It is designed to be much simpler and more cost-effective than traditional external storage arrays. Users of vSphere should be able to learn VSAN and become productive quickly.

VSAN is fully integrated with vSphere, and supports almost all popular vSphere functionality: DRS, HA, vMotion and more. VSAN is also integrated with the vRealize suite for larger automated environments.

Administrators define storage policies and assign them to VMs. A storage policy defines availability, performance and provisioning requirements (e.g. thin provisioning). When a VM is provisioned, VSAN interprets the storage policy and configures the underlying storage devices to satisfy it automatically (e.g. RAID 1). When the storage policy is changed, VSAN automatically reconfigures resources to satisfy the new policy.
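To make the policy-to-layout relationship concrete, here is a small sketch (hypothetical helper, not a VMware API) of how a Number of failures to tolerate setting maps to the mirrored layout VSAN builds:

```python
# Sketch: how a "Number of failures to tolerate" (FTT) policy setting
# translates into a mirrored object layout. Hypothetical helper for
# illustration only -- not a VMware API.

def layout_for_policy(failures_to_tolerate):
    """Return the RAID-1 layout built for a given FTT value."""
    ftt = failures_to_tolerate
    return {
        "raid_level": "RAID 1",      # mirroring across hosts
        "replicas": ftt + 1,         # one extra full copy per failure
        "witnesses": ftt,            # tiebreaker components (minimum)
        "min_hosts": 2 * ftt + 1,    # hosts needed to place everything
    }

# FTT=1 (the default): two replicas plus a witness, across three hosts.
print(layout_for_policy(1))
```

With FTT=1, changing the policy to FTT=2 would prompt VSAN to build a third replica and require five hosts.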

Key points:

Technical characteristics:

 

 

 

Customer Benefits

Simple

Compared to traditional storage solutions, Virtual SAN is exceedingly simple to install and operate day-to-day. Storage is presented as a natural extension of the vSphere management experience. Policy-based management dramatically simplifies the provisioning of storage services for VMs.

High Performance

VSAN's deep integration with the vSphere kernel and use of flash dramatically improves application performance as compared to traditional storage solutions. Applications that require even higher levels of predictable performance can use all-flash configurations.

Lower TCO

VSAN can lower TCO by up to 50% by using a streamlined management model as well as cost-effective server storage components. Expanding either capacity or performance involves simply adding more resources to the cluster: flash, disks or servers.

 

 

 

 

Primary Use Cases

 

 

Virtual SAN Requirements


The following section details the hardware and software requirements necessary to create a Virtual SAN cluster.


 

vCenter Server

Virtual SAN 6.0 requires ESXi 6.0 and vCenter Server 6.0. Virtual SAN can be managed by both the Windows version of vCenter Server and the vCenter Server Appliance (VCSA).

VSAN is configured and monitored via the vSphere Web Client, which must also be version 6.0.

 

 

vSphere ESXi

Virtual SAN requires at least 3 vSphere hosts (where each host has local storage) in order to form a supported Virtual SAN cluster. This allows the cluster to meet the minimum availability requirement of tolerating at least one host failure. The vSphere hosts must be running vSphere 6.0. With fewer hosts, there is a risk to the availability of virtual machines if a single host goes down. The maximum number of hosts supported is 64.

Each vSphere host in the cluster that contributes local storage to Virtual SAN must have at least one hard disk drive (HDD) and at least one solid state disk drive (SSD).
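These host and disk minimums can be summarized in a short sketch; the data model below is purely illustrative:

```python
# Sketch: validate that a proposed cluster meets the basic VSAN 6.0
# minimums described above. Hypothetical data model, not a VMware API.

def cluster_eligible(hosts):
    """hosts: list of dicts like {"name": ..., "ssds": n, "hdds": n}.
    A host with no disks may still join the cluster, but at least three
    hosts are required, and every host contributing storage needs at
    least one SSD and one HDD."""
    if len(hosts) < 3:
        return False, "need at least 3 hosts"
    for h in hosts:
        contributes = h["ssds"] + h["hdds"] > 0
        if contributes and (h["ssds"] < 1 or h["hdds"] < 1):
            return False, f"{h['name']} needs >= 1 SSD and >= 1 HDD"
    return True, "ok"

hosts = [
    {"name": "esx-01a", "ssds": 1, "hdds": 2},
    {"name": "esx-02a", "ssds": 1, "hdds": 2},
    {"name": "esx-03a", "ssds": 1, "hdds": 2},
]
print(cluster_eligible(hosts))
```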

 

 

Disk and Network

 

IMPORTANT : All components ( hardware, drivers, firmware ) must be listed on the vSphere Compatibility Guide for VSAN. All other configurations are unsupported.

The VMkernel port is labeled Virtual SAN. This port is used for inter-cluster node communication, and also for reads and writes when one of the vSphere hosts in the cluster owns a particular virtual machine but the actual data blocks making up the virtual machine files are located on a different vSphere host in the cluster. In this case, I/O must traverse the network configured between the hosts in the cluster.
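A minimal sketch of that idea, with a hypothetical data model (component placement is decided by VSAN itself):

```python
# Sketch: whether a VM's I/O must traverse the Virtual SAN network.
# Hypothetical data model for illustration only.

def io_crosses_network(vm_host, component_hosts):
    """True if any data component lives on a host other than the one
    running the VM, in which case I/O uses the VSAN VMkernel network."""
    return any(h != vm_host for h in component_hosts)

# The VM runs on esx-01a, but its replicas sit on esx-02a and esx-03a,
# so its reads and writes cross the VSAN network.
print(io_crosses_network("esx-01a", ["esx-02a", "esx-03a"]))
```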

 

Virtual SAN Setup Guide


In this section, we outline the steps required to enable Virtual SAN, followed by a video showing the steps involved in setting up Virtual SAN.


 

Setup process overview

 

Virtual SAN setup can be broken down into the following steps :

1. Set up the Virtual SAN Network - This is a VMkernel network with the Virtual SAN traffic network service enabled.

2. Enable Virtual SAN on the Cluster.

3. Create your Virtual SAN disk groups by selecting either Automatic or Manual mode.

 

 

Setup video

The following video shows the steps involved in setting up Virtual SAN in the sequence shown above.

Click the play button to view the setup video.

 

 

Setup Virtual SAN Network

 

Begin by launching the Firefox browser and logging in to the vSphere Web Client.

We are using the appliance-based vCenter, so your login will be Administrator.

FYI - Virtual SAN is supported on both the Windows and Appliance versions of vCenter Server.

Select the checkbox for "Use Windows session authentication".

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

Click Login

 

 

Navigate to Hosts and Clusters view

 

Select Home tab

Select Hosts & Clusters

 

 

Navigate to esx-01a.corp.local

 

In this task we will show you how to add a Virtual SAN VMkernel port

Expand the navigation on the left side and click esx-01a.corp.local

Select Manage -> Networking -> VMkernel adapters

We must now add a VMkernel adapter for the Virtual SAN traffic.

Click the "Add host networking" icon to add a new VMkernel adapter.

 

 

Virtual SAN traffic

 

Select VMkernel Network Adapter.

Click Next

 

 

Select Target Device

 

We have already attached each host to a distributed switch and created some VSAN Port Groups.

You must select the port group to use for this host.

Click 'Browse'.

 

 

Select Network

 

Select the VSAN-PG-vmk3 port group and click 'OK'.

 

 

Target Device Selected

 

After the port group is selected your screen should look like the above.

Click Next

 

 

Specify Virtual SAN for the Port Group

 

Keep the default settings but enable the Virtual SAN traffic service.

Click Next

 

 

Use static IPv4 address

 

Select Use static IPv4 settings.

Use the following network information for the VMkernel port

IPv4 address : 192.168.130.51
Subnet mask : 255.255.255.0

Click Next
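If you want to double-check addressing before committing, here is a quick sketch using Python's standard ipaddress module (the addresses mirror the lab values and are otherwise illustrative) that verifies a set of VSAN VMkernel ports share one subnet:

```python
# Sketch: sanity-check that the VSAN VMkernel addresses for the hosts
# all fall in the same IPv4 subnet (192.168.130.0/24 in this lab), so
# the cluster nodes can reach one another over the VSAN network.
import ipaddress

def same_subnet(addresses, netmask="255.255.255.0"):
    nets = {
        ipaddress.IPv4Network(f"{a}/{netmask}", strict=False)
        for a in addresses
    }
    return len(nets) == 1

# esx-01a through esx-03a vmk3 addresses from the lab
vmk3_addrs = ["192.168.130.51", "192.168.130.52", "192.168.130.53"]
print(same_subnet(vmk3_addrs))
```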

 

 

Ready to complete

 

Click Finish if your screen looks like the above

 

 

Virtual SAN Network Added

 

You should now see vmk3 added for the VSAN Network.

A VMkernel adapter for Virtual SAN Traffic must be added to each host in the cluster.

We have already repeated the above steps for esx-02a, esx-03a, esx-04a, esx-05a and esx-06a for you.

Feel free to click on each host to see the Virtual SAN VMkernel adapter.

If you don't add this to each host, a "Network Misconfiguration" warning will appear on the Summary page.

 

 

Adding another VMkernel portgroup

 

A later lab exercise will require an additional VMkernel Port with VSAN traffic enabled.

Add another VMkernel Portgroup using the following information.

Connection Type : VMkernel Network Adapter
Target device : VSAN-PG-vmk4
Enable services : Virtual SAN traffic
Use Static IPv4 settings
IPv4 address : 192.168.210.51
Subnet mask : 255.255.255.0

Once you have entered this information, your screen should look like the following.

In your environment, you may have to scroll to the right to see the Virtual SAN Traffic column.

 

Prepare Virtual SAN Cluster


To use Virtual SAN, you must create a host cluster and enable Virtual SAN on the cluster.

A Virtual SAN cluster can include hosts with storage disks and hosts without storage disks.

Follow these guidelines when you create a Virtual SAN cluster.

After you enable Virtual SAN, the Virtual SAN storage provider is automatically registered with vCenter Server and the Virtual SAN datastore is created.


 

Enable Virtual SAN

 

Once our network adapters are in place we can turn on Virtual SAN at the Cluster level.

Select Cluster Site A then navigate to Manage > Settings > Virtual SAN > General > Edit

 

 

Turn On Virtual SAN

 

Check "Turn ON Virtual SAN" and, for the Add disks to storage option, select Automatic.

Click OK.

The Automatic option will add all disks on the hosts to be claimed by Virtual SAN.

 

 

Refresh Display

 

Click the Refresh icon to see the changes. ( If you see Misconfiguration detected you may need to Refresh a couple of times )

After the refresh you should see all 3 hosts in the Virtual SAN cluster

Note : In all-flash configurations, flash devices used as capacity must be manually tagged prior to use. This is detailed in the VSAN 6.0 Product Documentation.

 

 

Virtual SAN - Disk Management

 

Select Disk Management

The Virtual SAN Disk Groups on each of the ESXi hosts are listed.

You may have to scroll down through the list to see all the disk groups.

Towards the lower part of the screen, you can see the Flash and HDD drive types that make up these disk groups.

 

 

 

vsanDatastore Properties

 

A vsanDatastore has also been created.

To see the capacity, navigate to Datastores > Manage > Settings > vsanDatastore > General

The capacity shown is an aggregate of the capacity devices taken from each of the ESXi hosts in the cluster (less some vsanDatastore overhead).

The flash devices used as cache are not considered when the capacity calculation is made.
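The capacity arithmetic can be sketched as follows; the disk sizes and overhead fraction below are illustrative, not the lab's exact figures:

```python
# Sketch: how the vsanDatastore capacity is derived -- capacity (HDD)
# devices are summed across all hosts, cache-tier flash is excluded,
# and some per-datastore overhead is subtracted. The overhead fraction
# here is illustrative, not an exact VSAN figure.

def vsan_capacity_gb(hosts, overhead_fraction=0.01):
    total = 0.0
    for disks in hosts.values():
        for disk in disks:
            if disk["tier"] == "capacity":   # cache flash is not counted
                total += disk["size_gb"]
    return total * (1 - overhead_fraction)

hosts = {
    "esx-01a": [{"tier": "cache", "size_gb": 8},
                {"tier": "capacity", "size_gb": 40},
                {"tier": "capacity", "size_gb": 40}],
    "esx-02a": [{"tier": "cache", "size_gb": 8},
                {"tier": "capacity", "size_gb": 40},
                {"tier": "capacity", "size_gb": 40}],
    "esx-03a": [{"tier": "cache", "size_gb": 8},
                {"tier": "capacity", "size_gb": 40},
                {"tier": "capacity", "size_gb": 40}],
}
print(round(vsan_capacity_gb(hosts), 1))
```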

 

 

Verify Storage Provider Status

 

For each ESXi host to be aware of the capabilities of Virtual SAN, and for vCenter to communicate with the storage layer, a Storage Provider is created. Each ESXi host has a storage provider once the Virtual SAN cluster is formed.

The storage providers will be registered automatically with SMS (Storage Management Service) by vCenter. However, it is best to verify that the storage provider on one of the ESXi hosts has successfully registered and is active, and that the storage providers from the remaining ESXi hosts in the cluster are registered and in standby mode.

Navigate to the vCenter server > Manage > Storage Providers to check the status.

In this three-node cluster, one of the Virtual SAN providers is online and active, while the other two are in Standby. Each ESXi host participating in the Virtual SAN cluster will have a provider, but only one needs to be active to provide Virtual SAN datastore capability information.

Should the active provider fail for some reason one of the standby storage providers will take over.
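The active/standby behavior can be sketched as a simple failover rule; this is an illustrative model, not the actual SMS selection logic:

```python
# Sketch: active/standby storage providers. One provider is active;
# if its host fails, a standby provider takes over. The promotion
# order here (registration order) is a simplifying assumption.

def active_provider(providers, failed_hosts):
    for host in providers:            # walk in registration order
        if host not in failed_hosts:
            return host               # first healthy host is active
    return None                       # no providers left

providers = ["esx-01a", "esx-02a", "esx-03a"]
print(active_provider(providers, set()))         # no failures
print(active_provider(providers, {"esx-01a"}))   # active host lost
```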

 

Module 2 - Virtual SAN with vMotion, Storage vMotion and HA Interoperability (60 Min)

Preparing Virtual SAN (Optional)


If proceeding from Module 1 then you may skip this section and move forward to the section called VM Placement with Tag and VM Storage Policy


 

Execute Virtual SAN preparation script

 

*** If you already formed the VSAN Cluster earlier in your lab session, you do NOT need to perform these steps again - please skip forward to the section called VM Placement with Tag and VM Storage Policy ***

Double click on the "Prepare VSAN Cluster" script shortcut on the Desktop.

 

 

Monitor execution progress

 

The script takes less than a minute to execute; monitor the progress.

 

 

Review and exit result

 

  1. Ignore the warning messages
  2. Type exit at the prompt and press Enter key

 

 

Open Mozilla Firefox browser

 

Launch the Mozilla Firefox browser from the desktop

 

 

Login to vSphere Web Client

 

Check the box to "Use Windows session authentication"

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

Click "Login" button

 

 

Hosts and Clusters view

 

  1. Select the Home tab
  2. Select Hosts and Clusters view under Inventories

 

 

Verify Virtual SAN status

 

  1. Select Cluster Site A from the object navigation pane
  2. Select Manage tab in the Content pane
  3. Select Settings button
  4. Select General under Virtual SAN section
  5. Verify that Virtual SAN is Turned ON and Add disks to storage is Automatic

If Add disks to storage is not set to Automatic, click the Edit button and change it to Automatic.

 

VM Placement with Tag and VM Storage Policy


In this section we will create a Tag Category and a Tag, and assign the tag to the vsanDatastore.

We will create QoS Tiers within the disk group and take a look at VSAN Storage Policies.


 

Create a New Category

 

Click on the Home page

Select Tags from menu list.

 

 

Create a Tag Category

 

Before we can create Tags, we need to create Tag Categories.

You use categories to group tags together and define how tags can be applied to objects.

Select Items

Select Categories

Select New Category

 

 

Create a Tag Category

 

Enter a Category Name of VCDS

Deselect All objects

Select the Associable Object Type of Datastore

Click OK

 

 

Create a Tag Category

 

Your Category will be created.

Select Hosts and Clusters

 

 

Create A Silver Tag

 

You use tags to add metadata to inventory objects. You can record information about your inventory objects in tags and use the tags in searches.

  1. Select Storage tab from the Object Navigator
  2. Expand Datacenter Site A and select vsanDatastore
  3. Select the Manage tab
  4. Select Tags
  5. Select New Tag

 

 

Define Tag Properties

 

  1. Name the tag Silver
  2. For the Category, select VCDS
  3. Click OK

 

 

Tag details

 

Your Tag is now created and assigned to the vsanDatastore object

 

 

VM Storage Policies

 

  1. Go to the Home page
  2. Select the Home tab
  3. Select VM Storage Policies view under Monitoring section

 

 

Create VM storage policy

 

Click the icon with the plus sign to Create a new storage policy.

 

 

Create a new rule for a Print Server

 

In this example we will create a new storage policy rule for a Print Server

In the Name field enter PrintServer, then click Next to continue

 

 

Rule-Sets

 

Spend a moment reading this page to learn about rule-sets.

Click Next to proceed

 

 

Create a Rule based on tags

 

In this section we will show you how to create a rule based on a Tag.

Click "Add tag-based rule..."

 

 

Select the VCDS Category and the Silver tag

 

  1. Click the Categories drop down and select VCDS.
  2. Select the Silver tag
  3. Click OK

 

 

Rule based on Tag added

 

You can now see your tag based rule added.

Click Next to proceed

 

 

Matching Resources

 

  1. A list of Compatible Datastores with matching Tag definitions is displayed.
  2. Here we can see that the vsanDatastore is compatible with our Storage Policy.
  3. Click Next

 

 

Ready to complete

 

Review the rules added and click Finish

 

 

Print Server VM Storage Policy complete

 

The storage policy called PrintServer is created and appears in the list of VM Storage Policies.

The purpose of this task was to show you that Tags can be used in the storage selection criteria for a VM Storage Policy.
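The matching logic behind tag-based rules boils down to a subset check, sketched here with a hypothetical data model:

```python
# Sketch: tag-based placement. A datastore is "compatible" with a
# policy when it carries every (category, tag) pair the policy's
# rule requires. Hypothetical data model for illustration only.

def compatible_datastores(datastores, required_tags):
    """datastores: dict of name -> set of (category, tag) pairs."""
    return [name for name, tags in datastores.items()
            if required_tags <= tags]   # subset test

datastores = {
    "vsanDatastore": {("VCDS", "Silver")},   # tag assigned in this lab
    "ds-site-a-nfs01": set(),                # untagged NFS datastore
}
print(compatible_datastores(datastores, {("VCDS", "Silver")}))
```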

 

VM Storage Policy and Virtual SAN Capability


Now we will show you the relationship between VM Storage Policies and VSAN Availability Capabilities


 

Open the VM Storage Policies

 

  1. Go to the Home page
  2. Select VM Storage Policies under Monitoring on the Home tab

 

 

Create VM storage policy

 

Click the icon with the plus sign to Create a new storage policy.

 

 

Create a new VM Storage Policy for Tier 2 Apps

 

In the Name field enter Tier 2 Apps

Click Next to continue

 

 

Rule-Sets

 

Rule-sets are a way of using storage from different vendors; for example, you can have a single “Tier 2 Apps” policy with one VSAN rule-set and one 3rd-party storage vendor rule-set. When “Tier 2 Apps” is chosen as the storage service level at VM deployment time, both VSAN and the 3rd-party storage will match the requirements in the policy.

Spend a moment reading this page to learn more about rule-sets.

Click Next when ready

 

 

Create a Rule on Number of Failures to Tolerate

 

This policy will protect the associated workload against at least one component failure (host, network or disk).

  1. Select VSAN from the Rules based on data services list
  2. Select Number of failures to tolerate

 

 

How many failures to tolerate?

 

1. Set the Number of failures to tolerate to 1.

2. On the right hand side of the screen, you will see the Storage Consumption Model

From the Storage Consumption model, you can review the virtual disk size available for use and the corresponding flash and HDD capacity including the reserved storage space your virtual machines would potentially consume when you apply the specified storage policy.

Click Next
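The arithmetic behind the Storage Consumption Model is simple to sketch; with mirroring, each failure tolerated adds a full copy of the virtual disk (witness components are tiny and ignored here):

```python
# Sketch: raw datastore space consumed by a mirrored virtual disk is
# the disk size times (FTT + 1). Witness components are negligible in
# size and omitted from this illustration.

def raw_consumption_gb(vmdk_gb, failures_to_tolerate):
    return vmdk_gb * (failures_to_tolerate + 1)

# A 40 GB virtual disk under a policy with FTT=1 consumes 80 GB raw.
print(raw_consumption_gb(40, 1))
```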

 

 

Matching Resources

 

As you can see, the vsanDatastore is capable of understanding the requirements that are defined in the VM Storage Policy.

Note that there is no guarantee that the datastore has the resources available to satisfy the requested VM Storage Policy. It simply means that the requirements in the storage policy can be understood by the datastores which show up in the matching resources.

Click Next.

 

 

Ready to complete

 

Review the rules added and click Finish

 

 

Tier 2 Apps Rule Ready

 

The Tier 2 Apps VM Storage Policy is created and available. Now we simply tell the storage layer what our requirements are by selecting the appropriate VM Storage Policy during VM deployment and the storage layer takes care of deploying the VM in such a way that it meets those requirements.

 

vMotion & Storage vMotion Interoperability


In this section we will examine the interoperability of VSAN with core vSphere features such as vMotion & Storage vMotion. You will power on a virtual machine called linux-micro-01a which resides on host esx-01a.corp.local.


 

VSAN Interoperability

 

VSAN is fully integrated with many of VMware's storage and availability features.

In this module we will turn on HA and use vMotion, but you will notice that many other availability features are supported.

Storage I/O Control (SIOC) is not applicable because VSAN takes its performance requirements from policy settings.

Storage DRS is not applicable because VSAN is a single datastore that can provide multiple service levels, depending on the requested policy.

Distributed Power Management (DPM) is not supported because hosts in a VSAN cluster contribute storage, and powering off a host could impact compliance with the storage policy.

 

 

Hosts and Clusters view

 

  1. Go to the Home page
  2. Select Hosts and Clusters view under Inventories on the Home tab

 

 

Storage vMotion from NFS to vsanDatastore

 

  1. Select the linux-micro-01a virtual machine from Object Navigation pane

 

 

Migrate linux-micro-01a VM

 

  1. Right click linux-micro-01a virtual machine
  2. Select Migrate...

 

 

Change datastore

 

  1. Choose the option to Change storage only
  2. Click "Next"

 

 

Select Tier 2 Apps policy

 

  1. In the VM Storage Policy drop down select Tier 2 Apps
  2. Select vsanDatastore from list of compatible Datastores

Based on your input, the storage consumption mechanism calculates the amount of space that is required for a virtual disk that will reside on this storage entity.

Click Next

 

 

Review and Finish

 

On the Review Selections section, Click Finish

 

 

Complete The Migration

 

  1. Click Refresh icon to monitor the migration progress
  2. Let the migration finish and navigate to the linux-micro-01a Summary page

 

 

Review the new destination

 

Notice in the Summary screen the VM Storage Policy of Tier 2 Apps and the VM is now residing on the vsanDatastore.

Note: You may have to minimize a few Panels within the Summary View in order to confirm the new Storage Policy and Storage Location information.

This demonstrates that you can migrate from traditional datastores such as NFS or VMFS to vsanDatastore.

Note: You might need to refresh the VM Storage Policies view for the properties to update, or click the Check Compliance link.

 

Module 3 - Virtual SAN Storage Level Agility (60 Min)

Preparing Virtual SAN (Optional)


If proceeding from Module 2 then you may skip this section and move forward to the section called Defining your VM Storage Policies


 

Execute Virtual SAN preparation script

 

*** If you have already performed the "Prepare Cluster" script earlier in your lab session, you do NOT need to do it again - you can skip forward to the section called Defining your VM Storage Policies ***

Double click on the "Prepare VSAN Cluster" script shortcut on the Desktop.

 

 

Monitor execution progress

 

The script takes less than a minute to execute; monitor the progress.

 

 

Review and exit result

 

  1. Ignore the warning messages
  2. Type exit at the prompt and press Enter key

 

 

Open Mozilla Firefox browser

 

Launch the Mozilla Firefox browser from the desktop

 

 

Login to vSphere Web Client

 

Check the box to "Use Windows session authentication"

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

Click "Login" button

 

 

Hosts and Clusters view

 

  1. Select the Home tab
  2. Select Hosts and Clusters view under Inventories

 

 

Verify Virtual SAN status

 

  1. Select Cluster Site A from the object navigation pane
  2. Select Manage tab in the Content pane
  3. Select Settings button
  4. Select General under Virtual SAN section
  5. Verify that Virtual SAN is Turned ON and Add disks to storage is Automatic

If the Add disks to storage is set to Manual, click the Edit button and change to Automatic

 

Defining your VM Storage Policies


This lesson will walk through Defining Storage Polices for your Virtual Machines.


 

Decisions when creating a VM Storage Policy

 

 

 

VM Storage Policy Best Practices

 

 

 

Creating a policy

 

  1. Go to the Home page
  2. Select VM Storage Policies in the Monitoring section

 

 

Creating a new VM Storage Policy

 

Click the Create a new VM Storage Policy icon

 

 

Name and Description

 

Enter "VDI-Desktops" as the Name, and enter "VM Storage Policy for VDI Desktops" for the description

Click Next to continue

 

 

Description of Rule-Sets

 

Rule-sets are a way of using storage from different vendors; for example, you can have a single “bronze” policy with one VSAN rule-set and one 3rd-party storage vendor rule-set.

When “bronze” is chosen as the storage service level at VM deployment time, both VSAN and the 3rd-party storage will match the requirements in the policy.

Spend a moment reading this page to learn more about rule-sets.

Click Next when ready

 

 

Configure Rule-Set 1

 

  1. Select VSAN from vendor-specific capabilities list.
  2. Select Number of failures to tolerate.

The capability "Number of failures to tolerate" defines the number of host, disk or network failures a storage object can tolerate.

 

 

Set failures to tolerate

 

Set the Number of failures to tolerate to 1 (One)

From the Storage Consumption model, you can review the virtual disk size available for use and the corresponding flash and HDD capacity including the reserved storage space your virtual machines would potentially consume when you apply the specified storage policy.

Click Next

 

 

Matching Resources

 

As you can see, the vsanDatastore is capable of understanding the requirements that are defined in the VM Storage Policy

Select the Compatible storage

Click Next

Click Finish once you have reviewed the rules

 

 

Policy created

 

Complete the creation of the VM Storage Policy.

This new policy should now appear in the list of VM Storage Policies.

 

Create a Virtual Machine and apply VM Storage Policy



 

Navigate to Hosts and Clusters

 

  1. Click Home
  2. Click Hosts and Clusters

 

 

Create new Virtual Machine

 

Create a virtual machine that uses the VDI-Desktops profile created earlier.

Right click the ESXi host called esx-01a.corp.local in the Cluster and select New Virtual Machine > New Virtual Machine

 

 

Creation type

 

Select "Create a new virtual machine"

Click Next

2a. Name: Windows2008 and Folder: Datacenter Site A, Click Next

2b. Compute Resource: Any Host within the Cluster, Click Next

 

 

 

Select Storage policy

 

When it comes to selecting storage, you can now specify a VM Storage Policy (in this case VDI-Desktops).

This will show that vsanDatastore is Compatible as a storage device, meaning once again that it understands the requirements placed in the storage policy.

It does not mean that the vsanDatastore will implicitly be able to accommodate the requirements just that it understands them. This is an important point to understand about Virtual SAN.

Click Next

2d. Select Compatibility: Keep default, Click Next

2e. Select a Guest OS: Keep default, Click Next

 

 

 

Customize Hardware

 

Continue with the creation of this Virtual Machine, selecting the defaults for the remaining steps, including compatibility with ESXi 6.0 and later and Windows 2008 R2 (64-bit) as the Guest OS.

When you get to the 2f. Customize hardware step, in the Virtual Hardware tab, expand the New Hard Disk virtual hardware and you will see VM storage policy set to VDI-Desktops.

Reduce the Memory to 512 MB.

Reduce the Hard Disk size to 1 GB in order for it to be replicated across hosts (the default size is 40 GB; we reduce this because this is a small lab environment, but needless to say the default will work just fine in a physical environment)

Click Next and click Finish

 

 

Review created machine summary

 

Complete the wizard. When the VM is created, look at its Summary tab and check the compliance state in the VM Storage Policies window.

It should say Compliant with a green check mark.

You will also notice that the storage the VM resides on is the vsanDatastore

 

 

View Physical Disk Placement of the VM

 

As a final step, you might be interested in seeing how your virtual machine’s objects have been placed on the vsanDatastore.

To view the placement, select your Virtual Machine > Monitor > Policies

If you select one of your objects, the Physical Disk Placement will show you on which host the components of your objects reside, as shown in the example.

The RAID 1 indicates that the VMDK has a replica. This is to tolerate a failure, per the Number of failures to tolerate value of 1 that was set in the policy.

So we can continue to run if there is a single failure in the cluster. The witness is there to act as a tiebreaker. If one host fails, and one component is lost, then this witness allows a quorum of storage objects to still reside in the cluster.

Notice that all three components are on different hosts for this exact reason. At this point, we have successfully deployed a virtual machine with a level of availability that can be used as the base image for our VDI desktops.

Examining the layout of the object above, we can see that a RAID 1 configuration has been put in place by VSAN, placing each replica on a different host.

This means that in the event of a host, disk or network failure on one of the hosts, the virtual machine will still be available.
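The quorum idea can be sketched in a few lines; the hosts and placement below are illustrative:

```python
# Sketch: why the witness matters. An object stays available while a
# majority (more than 50%) of its components are reachable. With FTT=1
# there are two replicas and one witness, each on a different host.

def object_available(component_hosts, failed_hosts):
    alive = [h for h in component_hosts if h not in failed_hosts]
    return len(alive) > len(component_hosts) / 2   # strict majority

components = ["esx-01a",   # replica 1
              "esx-02a",   # replica 2
              "esx-03a"]   # witness (tiebreaker)

print(object_available(components, {"esx-01a"}))             # one failure
print(object_available(components, {"esx-01a", "esx-02a"}))  # two failures
```

One host failure leaves two of three components alive, so the object survives; losing two hosts drops below quorum, which is exactly what an FTT=1 policy does not protect against.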

 

Cluster Capacity Scale Out


In this lesson we explore the two methods for managing and scaling out capacity on the VSAN cluster: Manual and Automatic scale out.  We will achieve this by adding hosts esx-04a and esx-05a in Manual mode and esx-06a in Automatic mode.

 

Let's start with manual mode.


 

Set VSAN Cluster to Manual Mode

 

Select the Cluster Site A

 

 

Edit Virtual SAN Cluster setting

 

  1. Select Cluster Site A
  2. Select Manage tab
  3. Select Settings button
  4. Select General under Virtual SAN section
  5. Click Edit button

Move to the next step in the Lab document.

 

 

Set scale out to Manual

 

Set the "Add disks to storage" to Manual and click OK

 

 

Mark Storage Devices as Flash (SSD)

 

If ESXi does not automatically recognize its devices as flash/SSD, you can mark them as flash/SSD devices.

ESXi does not recognize certain devices as flash when their vendors do not support automatic flash disk detection. The Drive Type column for the devices shows HDD as their type.

For all-flash configurations, the marking of flash devices as capacity must be currently done manually, outside of the vSphere Web Client. Refer to the VSAN 6.0 Product Documentation.

Note : Marking HDD disks as flash disks could deteriorate the performance of datastores and services that use them. Mark disks as flash disks only if you are certain that those disks are flash disks.

  1. Select the ESXi host called esx-04a.corp.local
  2. Select Manage
  3. Select Storage
  4. Select Storage Devices
  5. Select the 8 GB device ( This is the device that we want to mark as the Flash (SSD) disk )
  6. Click the Mark as Flash Disk icon

Click Yes to save your changes

 

 

Mark Storage Devices as Flash (SSD)

 

The type of the devices changes to Flash.

 

 

Add Additional nodes to cluster

 

Drag and drop esx-04a.corp.local and esx-05a.corp.local into Cluster Site A

Select the default option "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the hosts will be deleted." and click OK

Monitor the task to completion

You may see warning messages against the ESXi hosts already in the Cluster, but these messages will clear after a short while.

 

 

Exit maintenance mode

 

Right click on esx-04a.corp.local and select Maintenance Mode > Exit Maintenance Mode

Right click on esx-05a.corp.local and select Maintenance Mode > Exit Maintenance Mode

 

 

Verify additional nodes

 

Select Cluster Site A

Select Manage

Select Settings

Select General

You should now see that the host count has increased, but the number of disks in use has not.

This means that the disks (1 SSD and 2 HDDs) from the newly added hosts are not yet claimed for Virtual SAN disk group usage.

 

 

Claim eligible disk to capacity

 

  1. Select the Cluster called Cluster Site A
  2. Select Manage
  3. Select Settings
  4. Select Disk Management in the Virtual SAN section ( Scroll down to esx-04a.corp.local, you will see that there are 0 of 3 disks used by VSAN )
  5. Click the Claim Disks button

 

 

 

Select available disks

 

Click the Select all eligible disks button to instruct Virtual SAN to select the usable disks.  Note: You can also select the disks by clicking each checkbox next to a disk.

Notice that although we added 2 ESXi Hosts to the Cluster, only esx-04a.corp.local is presenting storage to the VSAN Cluster. The ESXi host esx-05a.corp.local is not presenting local storage to the VSAN Cluster but can still participate in the VSAN Cluster and access the vsanDatastore.

Double-click the dialog box to resize it so that the OK / Cancel buttons are visible

Click OK.

 

 

Verify VSAN cluster

 

Select General

Wait for the tasks to finish claiming the disks and expanding the capacity of the cluster.

You may have to refresh the vSphere Web Client for the resources to get updated.

 

 

Set VSAN Cluster to Automatic Mode

 

Click the Edit button

 

 

Edit Cluster Settings

 

Set the mode to Automatic and click OK.

Wait for the task to complete.

 

 

Add node to Cluster

 

Drag and drop esx-06a.corp.local into Cluster Site A

Select the default option "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the hosts will be deleted." and click OK

 

 

 

Exit maintenance mode

 

Right click on esx-06a.corp.local and select Maintenance Mode > Exit Maintenance Mode

 

 

Verify additional nodes

 

Select Cluster Site A

Select Manage

Select Settings

Select General

Once the exit task is complete, you can refresh the page.

You will see that the summary count has increased to 6 ESXi hosts and that additional disks have been automatically added to a disk group on ESXi host esx-06a.corp.local.

 

Changing VM Storage Policies on the fly


In this lesson, we will once again modify the VM Storage Policy and make some more changes.

This time we will apply the VM Storage Policy manually and watch what happens to the physical disk layout of the VM.


 

VM Storage Policies

 

  1. Click the Home icon
  2. Open the VM Storage Policies

 

 

Modify the VM Storage Policy (1)

 

Select the VDI-Desktop policy and click on ‘Edit’.

 

 

Add disk stripes capability

 

In Rule-Set 1, add a new capability called Number of disk stripes per object and set the value to 3.

This is the number of physical disks across which each replica of a storage object is striped.

Note : Changing policies dynamically may lead to a temporary increase in the amount of space consumed on the Virtual SAN datastore.

Click OK

 

 

Modify the VM Storage Policy (2)

 

You will see a popup stating that the VM Storage Policy is in use. We will need to synchronize the virtual machine with the policy after saving the changes.

  1. Select Manually later
  2. Click Yes

 

 

Resync Virtual Machine with Policy Changes (1)

 

Change back to the Hosts and Clusters view, Home -> Hosts and Clusters

  1. Select the Windows 2008 Virtual Machine that you created earlier.
  2. Select the Monitor tab.
  3. Select Policies button.

Since we changed the VM Storage Policy capabilities, you will notice that the Compliance Status is now Out of Date.

Click on the Reapply the VM Storage Policy to all out of date entities icon (3rd from left) to reapply policy.

 

 

Reapply VM Storage Policy

 

Answer Yes to the popup.

The compliance state should now change once the updated policy is applied.

 

 

Resync Virtual Machine with Policy Changes (2)

 

Now we can see that the disk layout has changed significantly.

Because we have requested a stripe width of three (3), the components that make up the stripe are placed in a RAID-0 configuration.

And since we still have our failures to tolerate requirement, these RAID-0s must be mirrored by a RAID-1.

Note : You will have to scroll down to see the additional components.
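To see why the layout grew, you can work out the component count directly from the policy values. The sketch below is illustrative only: it counts data components (witness components are placed in addition to these and their count depends on placement, so they are not modeled here).

```python
# Illustrative sketch: number of *data* components a Virtual SAN object
# needs for a given policy. Each mirror copy (RAID-1) is itself a RAID-0
# stripe across 'stripe_width' disks. Witness components are extra and
# are intentionally not counted here.

def data_components(failures_to_tolerate, stripe_width):
    replicas = failures_to_tolerate + 1   # RAID-1 mirror copies
    return replicas * stripe_width        # each mirror is a RAID-0 stripe

# Defaults (FTT=1, SW=1): 2 data components
print(data_components(1, 1))  # 2
# This lesson's policy (FTT=1, SW=3): 6 data components in two RAID-0 stripes
print(data_components(1, 3))  # 6
```

This is why the physical disk layout view now shows two RAID-0 stripe sets of three components each, mirrored under a RAID-1.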

 

 

Edit VM Storage Policy

 

If your storage requirements for the applications on the virtual machine change, you can edit the storage policy that was originally applied to the virtual machine.

You can edit the storage policy for a powered-off or powered-on virtual machine.

When changing the VM storage policy assignment, you can apply the same storage policy to the virtual machine configuration file and all its virtual disks. If storage requirements for your virtual disks and the configuration file are different, you can associate different storage policies with the VM configuration file and the selected virtual disks.

  1. Select the linux-micro-01a Virtual Machine
  2. Select the Manage tab.
  3. Select Policies button.
  4. Click Edit VM Storage Policies ...

 

 

 

Manage VM Storage Policies

 

From the VM Storage policy dropdown, select VDI-Desktops and click Apply to all

Click OK

 

 

Resync Components Dashboard

 

 

  1. Select the cluster called Cluster Site A
  2. Select Monitor Tab
  3. Select Virtual SAN
  4. Select Resyncing Components
  5. You may have to refresh the display

Here you will see the number of components that are resyncing, the bytes left to sync and the ETA to compliance.

Significant policy changes in production can generate a considerable resync load, so ideally they should be made during maintenance hours.

This operation may happen very quickly, and since we may not have a lot of data to resync, the Resyncing Components screen may show blank values. The screenshot provided here is an example of the data you could see in a production environment.
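The ETA shown on this screen is essentially the bytes left divided by the current resync rate. A rough sketch of that calculation follows; the dashboard's exact sampling and smoothing are internal to Virtual SAN, so treat this as an approximation only:

```python
# Approximate the resync ETA the dashboard reports: remaining bytes
# divided by the observed resync throughput, expressed in minutes.

def resync_eta_minutes(bytes_left, bytes_per_second):
    if bytes_per_second <= 0:
        raise ValueError("throughput must be positive")
    return bytes_left / bytes_per_second / 60

# e.g. 18 GB left at 100 MB/s -> about 3 minutes
print(round(resync_eta_minutes(18 * 1024**3, 100 * 1024**2), 1))  # 3.1
```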

 

 

Resync Components Dashboard

 

Once the screen shows blank values, the resync is complete.

 

Virtual SAN Fault Domains


If your Virtual SAN cluster spans across multiple racks or blade server chassis in a data center and you want to make sure that your hosts are protected against rack or chassis failure, you can create fault domains and add one or more hosts to it.

A fault domain consists of one or more Virtual SAN hosts grouped together according to their physical location in the data center.

When configured with the minimum requirement of 3, fault domains enable Virtual SAN to tolerate the failure of an entire physical rack as well as the failure of a single host, storage disk, network link or network switch dedicated to a fault domain.

Note : Fault Domains should not be confused with Stretched Clusters.


 

Create a new Fault Domain

 

Your Hands-on Lab environment may show that the hosts esx-04a.corp.local, esx-05a.corp.local and esx-06a.corp.local need a reboot. There is no need to reboot the ESXi hosts; the labs will continue to function. The reboot request appears because these hosts were just added to the VSAN Cluster and the VSAN Health Check plugin needs to be installed on them.

Select the cluster called Cluster Site A

Click the Manage tab

Click Settings.

Click Fault Domains

Click the Create a new fault domain icon.(+)

 

 

 

Create a new Fault Domain

 

Give the Fault Domain a name; let's call it FD-1

Select the hosts esx-01a.corp.local and esx-02a.corp.local to be added to the Fault Domain

Click OK

 

 

Create a new Fault Domain

 

You will now see your first Fault Domain that you created and the 2 ESXi hosts that we added to that Fault Domain.

 

 

Create a new Fault Domain

 

Create another Fault Domain and name it FD-2

Select the hosts esx-03a.corp.local and esx-04a.corp.local to be added to the Fault Domain called FD-2

Click OK

 

 

Create a new Fault Domain

 

You should now have 2 Fault Domains created with 2 ESXi hosts in each Fault Domain.

To tolerate n host failures, 2n + 1 hosts are required; likewise, to tolerate n fault domain failures, 2n + 1 Fault Domains are required.
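The 2n + 1 rule is easy to check for any n; the same arithmetic applies to hosts and to fault domains:

```python
# The 2n + 1 sizing rule: to tolerate n failures, Virtual SAN needs
# 2n + 1 hosts (or, with fault domains configured, 2n + 1 fault domains).

def minimum_domains(failures_to_tolerate):
    return 2 * failures_to_tolerate + 1

for n in (1, 2, 3):
    print(n, minimum_domains(n))
# n=1 -> 3, n=2 -> 5, n=3 -> 7
```

With n = 1, three fault domains are needed, which is why the two fault domains created so far are not yet sufficient and a third (FD-3) is created next.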

 

 

Create a new Fault Domain

 

Create another Fault Domain and name it FD-3

Select the host esx-05a.corp.local to be added to the Fault Domain called FD-3

Click OK

 

 

Add ESX host to Fault Domain with CLI

 

Open the Putty application on your task bar

 

 

Add ESX host to Fault Domain with CLI

 

Select the host called esx-06a.corp.local and click Open

You should be automatically logged into the ESXi host, if not the root password is VMware1!

 

 

Add ESX host to Fault Domain with CLI

 

Run the following esxcli commands to add the ESXi host to a Fault Domain

1. Check if the ESXi host is in a Fault Domain

esxcli vsan faultdomain get

2. Add the ESXi host to the Fault Domain called FD-3

esxcli vsan faultdomain set --fdname=FD-3

3. Confirm the ESXi host has been added to the Fault Domain called FD-3

esxcli vsan faultdomain get

 

 

 

 

Add ESX host to Fault Domain with CLI

 

Return to your vSphere Web Client session, if the Fault Domains in Host list is not updated, click the Refresh icon.

You should now have 3 Fault Domains created with 2 ESXi hosts in each Fault Domain.

You can group Virtual SAN hosts that could potentially fail together by creating a fault domain and assigning one or more hosts to it. Failure of all hosts within a single fault domain is treated as one failure. If fault domains are specified, Virtual SAN will never put more than one replica of the same object in the same fault domain.
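The placement rule above — never more than one replica of the same object in the same fault domain — can be expressed as a simple validity check. This is an illustrative sketch, not Virtual SAN's actual placement code:

```python
# Illustrative check of the fault-domain placement rule: a replica layout
# is valid only if no two replicas of the same object land in the same
# fault domain, because all hosts in a domain may fail together.

def placement_is_valid(replica_fault_domains):
    """True if every replica sits in a distinct fault domain."""
    return len(replica_fault_domains) == len(set(replica_fault_domains))

print(placement_is_valid(["FD-1", "FD-2"]))  # True: replicas are split
print(placement_is_valid(["FD-1", "FD-1"]))  # False: same domain twice
```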

 

 

Remove Fault Domains

 

Since we will not be doing any further operations with Fault Domains, we will now remove them.

Select Fault Domains

Remove the 3 Fault Domains called FD-1, FD-2 and FD-3

Select a Fault Domain and click the red X to delete it.

 

 

 

Remove Fault Domain

 

All hosts in the fault domain are moved out and the selected fault domain is deleted from the Virtual SAN cluster.

Each host is now available as a single independent host.

 

 

Modify VM Storage Policy

 

Now let us change the VM Storage Policy to ensure compatibility with existing nodes.

Select VM Storage Policies

 

 

Edit VDI-Desktop Policy

 

  1. Select the VDI-Desktops policy
  2. Click the Edit button

 

 

Remove Disk Stripe Capability

 

Click the delete button to remove the capability from the policy

Click OK

 

 

Reapply VM Storage Policy

 

  1. Set the schedule for when to apply the policy to Now
  2. Click Yes

 

 

Reapply policy

 

To view the changes

  1. Select the Windows 2008 virtual machine
  2. Go to Monitor tab
  3. Select Policies
  4. Select Hard Disk 1

The hard disk is now back in compliance with the Failures to tolerate = 1 policy

 

 

Host Maintenance Mode

 

  1. Right click on the esx-06a.corp.local
  2. Select Maintenance Mode
  3. Select Enter Maintenance Mode

 

 

Data Migration

 

  1. Select Full data migration from the data migration options
  2. Click OK

Click on the Information button to see a description of the data migration methods.

 

 

Host Maintenance Mode

 

 

Right click on esx-05a.corp.local

Select Maintenance Mode and then Enter Maintenance Mode

For the Virtual SAN data migration, select Full data migration

 

 

Remove host from cluster

 

Drag and drop the ESXi hosts esx-05a.corp.local and esx-06a.corp.local from Cluster Site A into Datacenter Site A

When the task is complete, you will notice that the cluster capacity reflects the change, i.e. 4 hosts are now participating in the VSAN Cluster

 

Module 4 - Useful CLI Commands (30 Min)

Introduction


There are multiple approaches to scripting and enabling automation in VMware Virtual SAN Environments.  

In this Module we will illustrate three different methods:

  1. PowerCLI
  2. ESXCLI
  3. Ruby vSphere Console (RVC)

Preparing for CLI Lab


In this section we will run a PowerCLI script to properly prepare the Lab Environment.


 

Execute "Prepare CLI Lab" script

 

Double click on the "Prepare CLI Lab" script shortcut on the Desktop.

 

 

Monitor execution progress

 

The script takes roughly a minute to execute; monitor its progress.

 

 

Review and exit result

 

  1. Type exit at the prompt and press Enter key

 

PowerCLI / ESXCLI


In this section, we will use PowerCLI release (6.0 R1) to perform the following activities:


 

No Longer Just a Fling

 

The PowerCLI Virtual SAN cmdlets that were previously available as part of a VMware Technical Preview Fling have been officially introduced into the latest release of PowerCLI and are readily available after PowerCLI installation.

To examine these cmdlets, launch VMware vSphere PowerCLI by single-clicking the PowerCLI toolbar Shortcut Icon, then type:

help vsan

In the steps that follow, we will leverage these new cmdlets along with other PowerCLI pre-existing cmdlets to perform Virtual SAN related activities.  Along the way we will provide screen shots of what you would see via the vSphere Web Client as a visual frame of reference.

 

 

Examine VSAN Cluster

Let's connect to our Virtual Center instance and begin our Virtual SAN PowerCLI exploration.

Hint: The PowerCLI commands in this Lab are not case sensitive. If you prefer to copy and paste these commands, you will find a full listing of them within the 'README.txt' file on the Desktop.  Alternatively, you can simply highlight the command in the manual (copying it to the Clipboard) and then use the Hands On Labs 'Send Text' feature that is available in your Lab's upper left-hand corner.

Connect-VIServer -Server vcsa-01a.corp.local -User administrator@corp.local -Password VMware1!

Create a variable for the first vSphere Host in the Virtual SAN Cluster in order to make our PowerCLI commands easier to manage.

$vmhost = "esx-01a.corp.local"

 

 

Check Cluster Configuration

 

Examine the enabled state of VSAN and the VSAN Disk Claim Mode by issuing this command:

Get-Cluster | Select VsanEnabled, VsanDiskClaimMode

 

 

 

Check Network Configuration

 

Virtual SAN network communication between the Hosts in our Virtual SAN Cluster is enabled via redundant vmkernel ports that are configured within a single vSphere Distributed Switch.

Let's examine this configuration:

Get-VDSwitch -VMHost $vmhost

Now, examine the Port Groups (press the Up Arrow once to repeat the previous command and pipe in the cmdlet below):

Get-VDSwitch -VMHost $vmhost | Get-VDPortgroup

Check to see if Virtual SAN Traffic is enabled on the vmkernel ports dedicated to Virtual SAN:

Get-VMHostNetworkAdapter -VMhost $vmhost | select PortGroupName, name, VsanTrafficEnabled | Format-Table -autosize

 

 

Check Disk Configuration

 

Examine the Datastores that are available to our vSphere Host by using the Get-Datastore cmdlet.  We will filter the results on any Datastores containing "VSAN" in their name.

Get-Datastore -VMHost $vmhost | where-object {$_.Name -like "vsan*"} 

Create a variable that contains our Virtual SAN Disk Groups.  We will use the same $vmhost (esx-01a.corp.local) variable as utilized previously.

$dg = Get-VsanDiskGroup -VMHost $vmhost

Examine the contents of the newly created variable.

$dg

Let's identify the disks that are a part of these Virtual SAN Disk Groups (note the mix of SSD and Magnetic Drives via the 'IsSsd' column):

$dg | Get-VsanDisk

 

 

 

VSAN Storage Policies

 

Examine the Virtual SAN Storage Policies that are available in our environment:

Get-SpbmStoragePolicy -requirement -namespace "VSAN" | Select Name, Description

 

 

New Policy Scenario

 

We are running a Virtual Machine that uses VMware's newest Linux based Open Source Operating System, "Photon".  To keep things simple, we have named this Virtual Machine, "Photon-01a".

In this section we will create a new Storage Based Policy that increases Stripe Width from '1' to '2'.

This new Policy could be leveraged to improve performance since it creates a RAID-0 stripe set across two disks, thereby increasing the amount of Storage I/O available.  This Policy will also inherit the "Failures to Tolerate = 1" setting which indicates that any VM leveraging this policy can survive at least one VSAN component failure in the environment.

First, let's create a few variables to make our commands easier to type:

$vm = "Photon-01a"
$vmHdd = Get-HardDisk $vm

Check the current Storage Policy that is applied to our Photon Virtual Machine and its virtual Hard Disks:

Get-SpbmEntityConfiguration $vm, $vmHdd

 

 

Create New Policy

 

Reminder: PowerCLI commands are available for Copy/Paste via the README.txt file on the Desktop.

Create the new Storage Policy (using Stripe Width = 2):

New-SpbmStoragePolicy -Name sw=2 -RuleSet (New-SpbmRuleSet -Name "sw=2" -AllOfRules @((New-SpbmRule -Capability VSAN.stripeWidth 2)))

Let's add a description to our new Policy:

Get-SpbmStoragePolicy -Name sw=2 | Set-SpbmStoragePolicy -Description "Sets VM Stripe Width equal to 2"

Up Arrow a few times and repeat this command to confirm policy creation:

Get-SpbmStoragePolicy -requirement -namespace "VSAN" | Select Name, Description

 

 

 

Apply New Policy

 

Apply the newly created storage policy using the previously created variables:

Set-SpbmEntityConfiguration $vm, $vmHdd -StoragePolicy "sw=2"

Note:  The Virtual Machine Hard Disk will initially return a 'nonCompliant' status.  Virtual SAN is configuring additional stripes per the newly applied policy and this resyncing operation can take several minutes to complete.  

 

 

ESXCLI Commands

Historically, vSphere Administrators have had the ability to SSH directly to an individual vSphere Host and issue ESXCLI commands.

With Virtual SAN 6 there are new ESXCLI command options that can be run within the ESXCLI Virtual SAN namespace.  In this section, we will 'wrap' these remote ESXCLI commands via PowerCLI (utilizing the 'Get-ESXCLI' Powershell cmdlet).  This can be done within the existing PowerCLI command window we have open and will remove the necessity to SSH to a remote Host.

 

 

Create Get-EsxCli Object

Let's define a variable that we can run future commands against:

$esxcli = Get-EsxCli -VMhost $vmhost

Input our new variable and press Enter or Return to view all of the available namespaces:

$esxcli

Append the vsan element to our variable to view the vsan specific namespaces.  This will give us a list of all of the possible esxcli commands related to Virtual SAN.

$esxcli.vsan

 

 

 

Methods Exposed

Drill down further by examining the 'cluster' element:

$esxcli.vsan.cluster

Notice that methods are now available for us to utilize ("get","join", "leave","new" and "restore").  Let's utilize the get method with an empty parameter set to examine more details about our Virtual SAN Cluster, including our vSphere Host Health State:

$esxcli.vsan.cluster.get()

Examine the Network configuration:

$esxcli.vsan.network.list()

Retrieve the Virtual SAN Datastore name:

$esxcli.vsan.datastore.name.get()

 

 

Maintenance Mode

Let's use ESXCLI to place our vSphere Host into maintenance mode.  With Virtual SAN 6, Maintenance Mode options have been made available via cmdline ("Ensure Accessibility", "Full Data Migration" and "No Data Migration").  

Examine the system configuration options that are available:

$esxcli.system

Drill down to the Maintenance Mode element:

$esxcli.system.maintenanceMode

We can check the current Maintenance Mode state of the Host (enabled or disabled) by using the get parameter:

$esxcli.system.maintenanceMode.get()

Place the vSphere Host into Maintenance Mode using the 'Ensure Accessibility' option:

$esxcli.system.maintenanceMode.set($true, 0, "ensureObjectAccessibility")

Note: Since we are passing through ESXCLI commands to our vSphere Host, the 'ensureObjectAccessibility' option is Case Sensitive.

We built the above command to pass in a boolean value of '$true' to turn on Maintenance Mode, selected a wait time of 0 seconds and indicated our Virtual SAN maintenance option.  By using Ensure Accessibility, Virtual SAN guarantees that all Virtual Machines on this host remain accessible.  This is a good option if you plan on taking the host out of the cluster temporarily to install upgrades, for example.

You can check that the vSphere Host has entered Maintenance Mode via the Web Client or by re-issuing this command:

$esxcli.system.maintenanceMode.get()

Let's put the vSphere Host back into service:

$esxcli.system.maintenanceMode.set($false)

Note:  If you would like to re-check the compliance status for the Photon VM and the new Storage Policy that you applied, issue this command:

Get-SpbmEntityConfiguration $vm, $vmHdd -CheckComplianceNow  

 

Ruby vSphere Console (RVC)


The Ruby vSphere Console (RVC) is an interactive command-line console user interface for VMware vSphere and Virtual Center. The Ruby vSphere Console is based on the popular RbVmomi Ruby interface to the vSphere API and has been an open source project for the past 2-3 years. RbVmomi was created with the goal to dramatically decrease the amount of coding required to perform routine tasks, as well as increase the efficiency of task execution, all while still allowing for the full power of the API when needed.

The Ruby vSphere Console comes bundled with both the vCenter Server Appliance (VCSA) and the Windows version of vCenter Server. The RVC is quickly becoming one of the primary tools for managing and troubleshooting Virtual SAN environments.


 

Features

RVC has a lot of the capabilities you’d expect from a modern command-line interface.

 

 

Advantages

 

 

Usage

 

The Ruby vSphere Console is free of charge and comes bundled with both the vCenter Server Appliance (VCSA) and vCenter Server for Windows.  In this Lab, we will connect to our VCSA instance and explore a few RVC Virtual SAN related capabilities.

  1. Launch Putty via the Toolbar Shortcut
  2. Scroll down, select our VCSA ('vcsa-01a.corp.local') then click Load
  3. Click Open

Start the RVC by issuing the following command:

rvc localhost 

Enter 'Y' if prompted with, "Are you sure you want to continue connecting?"

Enter the password when prompted:  VMware1!

 

 

 

Navigation

 

The vSphere and Virtual SAN infrastructure is presented to the user as a virtual file system that can be navigated with traditional directory listing (ls) and change directory (cd) commands. This virtual file system mirrors the hierarchy of the vSphere infrastructure and allows RVC commands to be issued on each of the manageable entities and their individual components (i.e. vCenter, Datacenter, Cluster, Storage, Hosts, Networks, Datastores, VMs).

Navigate through the directory structure by using 'cd' and 'ls' commands.  The text in parenthesis below is for reference only and should not be typed.  The 'cd' commands utilize the numerals '1' and '0':

cd 1  (localhost)

List out the Datacenters that are available via the 'ls' command:

ls

Change to the Datacenter directory and list out the resources that are available:

cd 0  (Datacenter Site A)
ls

Change to the Computers directory and list out the vSphere Clusters and standalone vSphere Hosts that are available:

cd 1  (computers)
ls

Change to the vSphere Cluster directory and list out the Hosts and Resource Pools within this Cluster:

cd 0  (Cluster Site A)
ls

Change to our Hosts directory and list out the individual vSphere Hosts in this Cluster along with their CPU and Memory details:

cd 0  (Hosts)
ls

 

 

VSAN Host Information

 

Examine the Host information for a single Host in our Cluster by entering:

vsan.host_info 0

 

 

VSAN Disk Information

 

Examine the Disk Information for a single Host:

vsan.disks_info 0

Note: You may need to increase your screen resolution and maximize your Putty window in order for the Table Result Set to output properly.  To increase your resolution, right-click any open space on the Desktop and select, "Screen Resolution" (choose a new resolution via the drop-down box and click apply).

 

 

VSAN Disk Stats

 

Retrieve disk information including Capacity and Utilization metrics:

vsan.disks_stats 0

 

 

VSAN Cluster Level Information

 

As mentioned in the previous "Advantages" Section, the RVC is not confined to working with one single vSphere Host at a time.  For example, you can gather disk stats for all of the Hosts in the Cluster by cd-ing back two directories to the 'computers' directory and then re-issuing your disk stats command:

cd ..
cd ..
ls
vsan.disks_stats 0

 

 

VSAN VM Information

 

Let's cd back to the VM directory and examine performance statistics for our Photon Virtual Machine:

cd ..
ls
cd 4
ls
vsan.vm_perf_stats 2 -i 0

Note: We passed in the '-i 0' command to specify a capture interval of zero seconds.  By default, this command takes two samples 20 seconds apart.  It is expected to see little (or no) performance information for this Virtual Machine in our Lab Environment.

 

 

Summary

 

There are over 40 different Virtual SAN Namespace commands that can be used to manage the environment via the Ruby vSphere Console.  You can view the Virtual SAN Namespace commands by issuing this command:

help vsan

To see all of the Namespaces that are available to manage via the Ruby vSphere Console, simply issue this command:

help

When you are finished exploring, enter 'exit' to close your Ruby vSphere Console session and then type 'exit' again to close your Putty SSH Session.

 

Conclusion


As illustrated in this Module, there are a variety of CLI tools at your disposal to programmatically interact with Virtual SAN.

Choosing the best tool that works for you will help enable a key tenet of the Software Defined Datacenter:  Automation.

Here are a few links for further reference:

PowerCLI

ESXCLI

Ruby vSphere Console (RVC):


Module 5 - Virtual SAN Monitoring (30 Min)

Preparing Virtual SAN (Optional)


If proceeding from Module 4 then you may skip this section and move forward to the section called Using the Virtual SAN Health Check Plugin


 

Execute Virtual SAN preparation script

 

*** If you have already run the "Prepare VSAN Cluster" script earlier in your lab session, you do NOT need to run it again; skip forward to the section called Using the Virtual SAN Health Check Plugin ***

Double click on the "Prepare VSAN Cluster" script shortcut on the Desktop.

 

 

Monitor execution progress

 

The script takes less than a minute to execute; monitor its progress.

 

 

Review and exit result

 

  1. Ignore the warning messages
  2. Take note of the message stating that the VSAN cluster has been prepared
  3. Type exit at the prompt and press the Enter key

 

 

Verify Virtual SAN Status

 

Launch the Mozilla Firefox browser from the desktop

 

 

Provide login credentials

 

  1. Enter the username as administrator@vsphere.local and password of VMware1!
  2. Click "Login" button

 

 

 

Hosts and Clusters view

 

  1. Select the Home tab
  2. Select Hosts and Clusters view under Inventories

 

 

Verify Virtual SAN is turned on

 

  1. Select Cluster Site A from the object navigation pane
  2. Select Manage tab in the Content pane
  3. Select Settings button
  4. Select General under Virtual SAN section
  5. Verify that Virtual SAN is Turned ON and Add disks to storage is Automatic

 

Virtual SAN Health Check Plugin


This lesson will walk through the Virtual SAN Health Check Plugin for monitoring the health of VSAN environments.


 

Virtual SAN Health Check Plugin Overview

The Virtual SAN Health check plugin checks all aspects of a Virtual SAN configuration.

It implements a number of checks on hardware compatibility, networking configuration and operations, advanced Virtual SAN configuration options, storage device health and also virtual machine object health.

The health check provides two main benefits to administrators of Virtual SAN environments:

  1. It will give administrators peace of mind that their Virtual SAN deployment is fully supported, functional and operational
  2. It will provide immediate indications to a root cause in the event of a failure, leading to speedier resolution times

It is recommended that the Health check plugin be utilized for initial triage of any Virtual SAN problems. It also provides users with the ability to join VMware’s Customer Experience Improvement Program (“CEIP”).

The Health check plugin also includes a new feature called the Virtual SAN Support Assistant. This allows administrators to upload VSAN log bundles directly to a support request (SR) opened with VMware Global Support Services (GSS).

 

 

Verify Virtual SAN Health Check Plugin Status

 

  1. Select the Home tab
  2. Select Hosts and Clusters view under Inventories

 

 

Verify that the Virtual SAN Health Check Plugin is installed

 

  1. Select Cluster Site A from the object navigation pane
  2. Select Manage tab in the Content pane
  3. Select Settings button
  4. Select Health under the Virtual SAN section
  5. Verify that the Health service status is Enabled

 

 

Additional Virtual SAN Health Check Plugin Features

 

  1. The Customer Experience Improvement Program option is disabled by default. This feature allows customers to opt in to sending VMware anonymous information on how they are using VSAN.
  2. The HCL Database section enables customers to update the HCL database manually from a file or automatically from the VMware website
  3. The Support Assistant enables administrators to upload VSAN log bundles directly to a support request (SR) opened with VMware Global Support Services (GSS)
  4. The External Proxy Settings option enables administrators to configure a proxy server for Health check features such as the Customer Experience Improvement Program and the HCL Database in environments where vCenter Server does not have direct access to the internet.

 

 

Changes to General View with Health Check Plugin

 

The Health check plugin adds a few pieces of additional information to the Virtual SAN > General view.

  1. Select General under the Virtual SAN section
  2. The On-disk Format Version displays how many of the disks are on version 2, the VirstoFS on-disk format included with Virtual SAN 6.0.

In the environment above, all disks are at version 2. If disks are discovered with an outdated on-disk format, the "Upgrade" button will be available and may be used to update them to version 2.

 

 

Navigate to the Virtual SAN Health Check Plugin

 

  1. Select the Monitor tab in the Content pane
  2. Select the Virtual SAN button
  3. Select Health to display the Virtual SAN Health Check Plugin interface

 

 

Virtual SAN Health Check Plugin Categories

 

The plugin can be used to validate the health of a Virtual SAN cluster by performing a series of individual checks to determine if there are any issues with the Virtual SAN cluster in any of the categories listed below.

  1. Cluster health
  2. Data health
  3. Limits health
  4. Network health
  5. Physical disk health

Each health check category is composed of a series of individual checkpoints that can be run directly from the vSphere Web Client at any time by clicking on the Retest button.

***Note. DO NOT click the Retest button at this time.***

We will explore each category in greater detail in subsequent sections of this manual.
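Conceptually, each category's status is a roll-up of its individual checkpoints. The sketch below shows one plausible roll-up scheme (worst result wins); the plugin's actual internals are not exposed, so the severity ordering and function names here are invented for illustration:

```python
# Hypothetical roll-up of individual health-check results into a single
# category status, worst-result-wins. The green/yellow/red severity
# ordering is an assumption for illustration, not the plugin's real code.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def category_status(check_results):
    """Return the worst individual result as the category's status."""
    return max(check_results, key=lambda status: SEVERITY[status])

print(category_status(["green", "green", "green"]))   # green
print(category_status(["green", "yellow", "green"]))  # yellow
```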

 

Health Check Tests via the User Interface


In this lesson we will walk through the health checks available in the different categories and run them via the user interface.
There are too many tests to cover each individually in this manual, so we will review a few tests in each category.


 

Virtual SAN HCL health

 

Expand the Virtual SAN HCL health group to reveal the underlying checks.

The checks in this group focus on the SCSI controller, the controller driver, and the Virtual SAN HCL Database.
Click on each check in the group to see a description of the action it carries out and to view the results.

***Note. Disregard the displayed Warning status of the tests in this group. The warnings occur as a result of running in a nested environment.***

 

 

Cluster Health - ESX Virtual SAN Health service installation

 

  1. Expand the Cluster health group to reveal the underlying checks.
  2. Select ESX Virtual SAN Health service installation
  3. A description of the task carried out by the check is provided.
  4. The results of the check are displayed and all 3 hosts can be seen to have passed the check

 

 

Data Health - Virtual SAN object health

 

  1. Expand the Data health group to reveal the underlying checks.
  2. Select Virtual SAN Object Health
  3. A description of the task carried out by the check is provided.
  4. The results of the check are displayed in the table below the description

 

 

Limits Health - Current cluster situation

 

  1. Expand the Limits health group to reveal the underlying checks.
  2. Select Current cluster situation
  3. A description of the task carried out by the check is provided.
  4. The results of the check are displayed in the table below the description

 

 

Network Health - All hosts have matching multicast settings

 

  1. Expand the Network health group to reveal the underlying checks.
  2. Select All hosts have matching multicast settings
  3. A description of the task carried out by the check is provided.
  4. The results of the check are displayed and it can be seen that all 3 hosts have matching multicast settings

 

 

Physical disk health - Overall disks health

 

  1. Expand the Physical disk health group to reveal the underlying checks.
  2. Select Overall disks health
  3. A description of the task carried out by the check is provided.
  4. The results of the check are displayed and it can be seen that there are no disks with issues in the VSAN cluster

 

Health Check Tests via the Command Line Interface


The health check plugin provides checks via both the command line and the user interface. The command line checks are part of the Ruby vSphere Console (RVC). This lesson will walk through running the Health check tests via the command line.


 

RVC Health Check Commands

 

The full list of RVC health check commands is provided here.
All of the commands can be run with a -h (help) option for further information.

 

 

Running RVC Health Check Commands

 

Launch the PuTTY SSH client

 

 

Login to vCenter

 

  1. Scroll to the bottom of the list to select vcsa-01a.corp.local
  2. Click Load
  3. Click Open to log in. Credentials are not required, as the PuTTY session has been configured to log in to the "root" account automatically

 

 

Access the Ruby vSphere Console (RVC)

 

Do the following to access the RVC

  1. Type rvc then press enter
  2. When prompted for the Host to connect to (user@host): type administrator@vsphere.local@localhost
  3. When prompted for the password type VMware1!

 

 

Display Health

 

Type vsan.health.health_summary /localhost/"Datacenter Site A"/computers/"Cluster Site A"

 

 

View Display Health result

 

Scroll down to see the entire output of the Display Health check. The Overall health is yellow as a result of the same Virtual SAN HCL warning that we noticed in the UI earlier.
You can ignore this warning. It occurs as a result of running in a nested environment.

***Note: If you get a RuntimeError when you run this command, type exit to log out of RVC and log back into RVC again.***

 

 

Check Status

 

Type vsan.health.cluster_status /localhost/"Datacenter Site A"/computers/"Cluster Site A"

 

 

View Check Status result

 

The Health check VIBs have been successfully installed on all hosts in the cluster

***Note: If you get a RuntimeError when you run this command, type exit to log out of RVC and log back into RVC again.***

 

 

Close vcsa-01a PuTTY session

This concludes the VSAN Health Check Plugin module.

 

Module 6 - Virtual SAN Troubleshooting (60 Min)

Introduction


In this module, we will illustrate two different methodologies for troubleshooting Virtual SAN:

  1. Virtual SAN Observer, and
  2. vRealize Operations Manager (Virtual SAN Plug-In)

If you are already familiar with one or the other please feel free to skip directly to the section that interests you the most.

 


Preparing Virtual SAN (Optional)


If proceeding from Module 5 then you may skip this section and move forward to the section called Virtual SAN Observer


 

Execute Virtual SAN preparation script

 

*** If you have already performed the "Prepare VSAN Cluster" script earlier in your lab session, you do NOT need to do it again; you can skip forward to the section called Virtual SAN Observer ***

Double click on the "Prepare VSAN Cluster" script shortcut on the Desktop.

 

 

Monitor execution progress

 

The script takes less than a minute to execute; monitor its progress.

 

 

Review and exit result

 

  1. Type exit at the prompt and press the Enter key

 

 

Verify Virtual SAN Status

 

Launch the Mozilla Firefox browser from the desktop

 

 

Provide login credentials

 

  1. Enter the username as administrator@vsphere.local and password of VMware1!
  2. Click "Login" button

 

 

 

Hosts and Clusters view

 

  1. Select the Home tab
  2. Select Hosts and Clusters view under Inventories

 

 

Verify Virtual SAN is turned on

 

  1. Select Cluster Site A from the object navigation pane
  2. Select Manage tab in the Content pane
  3. Select Settings button
  4. Select General under Virtual SAN section
  5. Verify that Virtual SAN is Turned ON and Add disks to storage is Automatic

 

Virtual SAN Observer


VSAN Observer is designed to capture performance statistics for a Virtual SAN cluster and provide access to them via a web browser, either for customer use or for VMware Technical Support. For any vSphere/VSAN/storage administrator wanting to dig deeper and analyze performance issues, the Virtual SAN Observer is a valuable tool. It provides an in-depth snapshot of IOPS, latencies at different layers of VSAN, read cache hits and misses, outstanding I/Os, congestion, etc. This information is provided at different layers in the VSAN stack to help troubleshoot storage performance.

The VSAN Observer is packaged with the vSphere 6.0 vCenter Server as part of the Ruby vSphere Console (RVC), an interactive command line shell for vSphere management included with both the Windows and Linux versions of vCenter Server in vSphere 6.0. Initially, the VSAN engineering team used the VSAN Observer exclusively for early internal troubleshooting, but the utility is now available to all VMware customers using vSphere 6.0.


 

Virtual SAN Observer UI

 

This is how the landing page of the Virtual SAN Observer UI looks.

 

 

How to start Virtual SAN Observer Via Ruby vSphere Console (RVC) In vCenter Server

Please do not start the RVC Observer yet or type any commands in the vcsa CLI. These commands are shown for reference only; the actual execution of RVC commands happens several steps later.

rvc username@VC-FQDN

%PROGRAMFILES%\VMware\Infrastructure\VirtualCenter Server\support\rvc\rvc.bat

cd vcsa-01a

cd VSAN

 

 

 

Different Modes of VSAN Observer Can Be Run

Please do not start the RVC Observer yet or type any commands in the vcsa CLI. These commands are shown for reference only; the actual execution of RVC commands happens several steps later.

We can start Virtual SAN Observer in 3 different modes.

Live monitoring for a VSAN cluster named VSAN

vsan.observer ~/computers/VSAN --run-webserver --force

vsan.observer ~/computers/VSAN --run-webserver --force --generate-html-bundle /tmp --interval 30 --max-runtime 1

Offline monitoring

vsan.observer ~/computers/VSAN --generate-html-bundle /tmp

Full raw stats bundle

vsan.observer ~/computers/VSAN --filename /tmp/vsan-observer-file-sample.json

The vCenter Server retains the entire history of the observer session in memory until the session is stopped with Ctrl+C.

While the command is running, we can view live statistics by navigating to the vCenter Server at the specified port number using the URL:

http://vCenterServer_hostname_or_IP_Address:8010

 

 

Open Mozilla Firefox browser

 

Launch the Mozilla Firefox browser from the desktop

 

 

Login to vSphere Web Client

 

  1. Enter a User name of administrator@corp.local and a password of VMware1!
  2. Click "Login" button

 

 

 

Hosts and Clusters view

 

  1. Select the Home tab
  2. Select Hosts and Clusters view under Inventories

 

 

Verify Virtual SAN status

 

  1. Select Cluster Site A from the object navigation pane
  2. Select Manage tab in the Content pane
  3. Select Settings button
  4. Select General under Virtual SAN section
  5. Verify that Virtual SAN is Turned ON and Add disks to storage is Automatic

 

 

Log In To VCSA

 

We have gone through the lessons of "how to start VSAN Observer in RVC" in the previous steps

  1. Now that we have enabled VSAN, open the PuTTY application on the desktop
  2. Select vcsa-01a.corp.local
  3. Double click on it and this will automatically log into vcsa-01a.corp.local

Enabling VSAN in vSphere Web Client is a prerequisite step to view the graphs in the Observer tool

 

 

Start Live VSAN Observer In Lab Environment

 

After logging into vCenter Server

  1. Type "rvc Administrator@corp.local@vcsa-01a"
  2. Enter password "VMware1!"
  3. Type cd vcsa-01a/Datacenter\ Site\ A/  (you can press tab button after you type vcsa-01a)
  4. Enter vsan.observer ~/computers/Cluster\ Site\ A/ --run-webserver --force --max-runtime 1

The above command will start VSAN Observer for 1 hour.

--max-runtime is specified in hours; if you do not specify it, the default runtime is 2 hours.

 

 

Log In To View VSAN Observer Graphs In Browser

 

Open Mozilla Firefox browser from desktop and type the following URL in the address bar

  1. https://vcsa-01a:8010
  2. Type Password: VMware1!

 

 

View VSAN Observer In Browser

 

In the following chapters, we will dig deeper into most of these tabs in the Observer UI. The VSAN Observer default port number is 8010

 

 

Observer UI Walk-through

 

We will now walk through the available tabs in VSAN observer UI and explain what information each tab holds

 

 

VSAN Disk Tabs

 

When we connect to VSAN Observer using a browser, we see the VSAN Client page as shown above. There are four tabs dedicated to per-host disk performance counters.

 

 

CPU & Memory Tabs

 

The PCPU and Memory tabs provide per-host CPU and memory statistics.

 

 

Distribution tab

 

This tab shows how well VSAN is balancing objects (VMDKs, delta disks, witnesses) and components across hosts in the cluster. Each object is broken down into components, and the components themselves are distributed across hosts in the cluster. Each host has a 3,000 component limit.
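As a rough sketch of why component counts grow, the function below estimates the component footprint of a single object. The function name and witness counts are illustrative assumptions; actual witness placement depends on the cluster and failure-domain layout.

```python
# Rough sketch of VSAN component counting (illustrative assumptions:
# RAID-1 mirroring, with a witness count supplied by the caller; the
# real witness count depends on cluster and failure-domain layout).

def min_components(ftt, stripe_width=1, witnesses=1):
    """Approximate minimum components for one object.

    ftt          -- number of failures to tolerate
    stripe_width -- stripes per replica
    witnesses    -- witness components assumed for this layout
    """
    replicas = ftt + 1                 # mirrored copies of the data
    return replicas * stripe_width + witnesses

# FTT=1, stripe width 1: two mirrors plus one witness = 3 components.
print(min_components(ftt=1, witnesses=1))
# FTT=2 with stripe width 2: six stripe components plus two witnesses = 8.
print(min_components(ftt=2, stripe_width=2, witnesses=2))
```

Multiplying this per-object estimate by the number of objects per VM shows how quickly a cluster can approach the per-host component limit.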

 

 

VMs Tab

 

Each VM has a directory that holds non-VMDK files, such as the VM configuration and VM log files. In addition, each VM can have one or more virtual disks, and each of these virtual disks represents a single entity; the presence of snapshots transforms each virtual disk into multiple backing entities. This tab provides storage performance statistics as seen by each VM: latency, IOPS, read cache hit rates, and eviction statistics are provided at the VM directory, virtual disk, and backing disk levels, making this tab a convenient place to drill down at the VM level.

 

 

About Tab

 

The "About" tab provides various hardware and software information about ESX hosts in the cluster. We can get a quick glimpse of disks, memory, CPU, and software information on each ESX host in the cluster.

 

Understanding Virtual SAN Storage Performance


In this chapter we will cover some of the key concepts and terminology pertaining to storage performance as applicable to Virtual SAN.

Everything explained in this chapter and the following chapters, which analyze different benchmarks using VSAN Observer graphs, is based on vSphere 5.5 U1 (VSAN 1.0).


 

IOPS

 

IOPS gives a measure of the number of Input/Output Operations Per Second of a storage system. An I/O operation is typically a read or a write of a given size; I/O sizes can vary from a few bytes to several megabytes.

 

 

Outstanding I/O

 

When a virtual machine requests I/O operations (reads or writes), these requests are sent to storage devices. Until these requests complete, they are termed outstanding I/Os. A large number of outstanding I/Os can have an adverse effect on device latency. Storage controllers with a large queue depth can handle more outstanding I/Os.
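The relationship between outstanding I/O, IOPS, and latency can be made concrete with Little's Law. This is a minimal sketch; the numbers and function name are ours, not taken from the lab.

```python
# Little's Law at steady state: OIO = IOPS x latency,
# so latency = OIO / IOPS. Hypothetical numbers for illustration only.

def avg_latency_ms(outstanding_io, iops):
    """Average latency (ms) implied by Little's Law at a steady state."""
    return outstanding_io / iops * 1000.0

# A device sustaining 20,000 IOPS with 4 outstanding I/Os sits near 0.2 ms.
print(avg_latency_ms(4, 20000))
# Quadrupling OIO to 16 without more IOPS quadruples the implied latency.
print(avg_latency_ms(16, 20000))
```

This is why piling on outstanding I/Os beyond what a device can service in parallel only inflates latency rather than adding throughput.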

 

 

Latency

 

Latency gives a measure of how long it takes to complete one I/O operation from an application's viewpoint. Since I/O sizes can vary from a few bytes to several megabytes, it follows that latency can vary with the size of the I/O. VSAN uses SSDs to reduce the effective latency as seen by a virtual machine, so relatively high latency in VSAN can indicate, for example, that I/O is being serviced from magnetic disks rather than the SSD tier, or that devices are saturated with outstanding I/Os.

 

 

Bandwidth

 

Bandwidth measures the data rate that a storage system can sustain. IOPS and bandwidth are related through I/O size: bandwidth = IOPS × I/O size.

When troubleshooting storage performance, we have to look at IOPS, I/O sizes, outstanding I/Os, and latency to get a complete picture.
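The IOPS/bandwidth relationship can be sketched as follows. The figures are illustrative and the function name is ours.

```python
# bandwidth = IOPS x I/O size: small I/Os yield high IOPS but low MB/s,
# while large I/Os yield low IOPS but high MB/s. Numbers are illustrative.

def bandwidth_mb_s(iops, io_size_bytes):
    """Throughput in MB/s for a given IOPS rate and I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# 10,000 IOPS at a 4 KB I/O size is only about 39 MB/s...
print(bandwidth_mb_s(10000, 4096))
# ...while just 500 IOPS at a 1 MB I/O size already moves 500 MB/s.
print(bandwidth_mb_s(500, 1024 * 1024))
```

This is why a high IOPS number alone says little about a workload until the I/O size is known.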

 

 

Congestion

 

Congestion in VSAN typically happens when lower layers fail to keep up with the I/O rate of higher layers. For example, if VMs perform a lot of write operations, the write buffers can fill up. These buffers have to be destaged to magnetic disks, which can only absorb writes at a slower rate than the SSDs. This causes VSAN to artificially introduce latency in the VMs to slow down writes so that the write buffers can be freed up. Congestion is not normal; in most cases it will be close to zero.
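A toy model of the destaging imbalance described above is sketched below. All figures are assumptions for illustration; VSAN's actual congestion mechanism is more sophisticated than a simple fill-rate calculation.

```python
# Toy model of write-buffer congestion: if VMs write into the SSD buffer
# faster than it can be destaged to magnetic disk, the buffer fills and
# VSAN must throttle writers by injecting latency. Figures are assumed.

def minutes_until_full(buffer_gb, ingest_mb_s, destage_mb_s):
    """Time until the write buffer fills under a sustained imbalance."""
    net_mb_s = ingest_mb_s - destage_mb_s
    if net_mb_s <= 0:
        return None  # destaging keeps up; congestion does not build
    return buffer_gb * 1024 / net_mb_s / 60

# Destaging keeps up with ingest: no congestion builds.
print(minutes_until_full(100, 100, 150))
# VMs write 300 MB/s but disks destage only 100 MB/s: the 100 GB buffer
# fills in roughly eight and a half minutes.
print(minutes_until_full(100, 300, 100))
```

The takeaway: a write burst can be absorbed by the SSD buffer, but a *sustained* write rate above the destage rate must eventually be throttled.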

 

 

Putting It All Together

We will now put all the concepts discussed earlier into an example. Consider a typical mid-range SATA SSD with the following characteristics.

Write latency can be calculated as:

Read latency can be calculated with the same formula. We leave it as an exercise for the user to calculate read latency.

If OIO is increased to 16, the latency increases; we can find the new value using the same formula.

Increasing I/O size can roughly increase latency proportionally. There are exceptions to this rule, however, for simplicity we will take this as a rough guideline.
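The worked example above referred to SSD characteristics and a formula shown on screen. As a hedged sketch using an assumed figure of 25,000 sustained write IOPS, the calculation (latency = OIO / IOPS) looks like this:

```python
# Worked example of latency = OIO / IOPS (Little's Law). The 25,000
# write IOPS figure is an assumption for illustration, not the exact
# number from the lab screenshot.

ASSUMED_WRITE_IOPS = 25000

def write_latency_ms(oio, iops=ASSUMED_WRITE_IOPS):
    return oio / iops * 1000.0

print(write_latency_ms(4))    # ~0.16 ms with 4 outstanding I/Os
print(write_latency_ms(16))   # ~0.64 ms: 4x the OIO, 4x the latency

# Rough guideline: latency also scales about proportionally with I/O size,
# so a 16 KB write at OIO=4 would be roughly 4x the 4 KB latency.
print(write_latency_ms(4) * (16 / 4))
```

Read latency follows the same formula with the device's read IOPS figure substituted in.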

 

 

Virtual SAN Architecture

 

There are four main high level components in Virtual SAN

VSAN Client

DOM Owner

VSAN Disks layer

Disk deep-dive/physical devices

These four main blocks are directly mapped in the Virtual SAN observer UI to aid in better analysis of performance statistics at different layers of Virtual SAN.

 

Virtual SAN Observer Analysis - 1


In this chapter we will try to analyze a 4k 80% read workload and understand the behavior and impact at different layers.

All performance data represented here is based on vSphere 6.0 (VSAN 2.0)


 

4k IO Size (80% Read, 4 Outstanding IOs)

 

 

VSAN Client Tab

 

We will now walk through each graph in this screenshot

Latency

IOPS

Outstanding IO

Latency Standard Deviation

Under each of the hosts there is a link to full-size graphs of what is visible in the thumbnail. Click on the Analysis_1.html link on the desktop, included as part of the lab, and open it in a browser to explore the full-size graphs.

 

 

VSAN Disks Tab

 

We know from a previous chapter that the VSAN disks layer is responsible for servicing I/O from local disks and may reside on different nodes in the cluster. We will analyze this layer from the graphs presented by Virtual SAN Observer.

Latency & IOPS

Outstanding IO

In general this host is performing well under this workload with all parameters well under threshold. Next, let us look at the physical disk layer.

 

 

VSAN Disks (Deep-Dive) Tab

 

We are now looking at the physical disk layer of host esx-02a

 

 

PCPU Tab

 

We show a partial screenshot of the PCPU tab encompassing a few graphs on host esx-01a. This tab shows the CPU usage of the VSAN client, the component server (VSAN disks layer), the different component owners, and LSOM (VSAN disks deep-dive layer).

 

 

Memory

 

This tab shows the memory consumption of different memory pools. The most important graph to look at is congestion: if memory consumption of the congestion pool is very high, it usually indicates an underlying performance problem.

 

 

Distribution Tab

 

In this tab the distribution of components is shown graphically. Each host has a 3,000 component limit. A uniform line indicates a well-balanced system. Removal of disk groups results in a re-balancing of components, which shows up as fluctuations in these graphs.

 

 

VMs Tab

 

All VMs are listed under VMs tab. Selecting a particular VM (HOL_esx-01a-vsanDatastore-rhel6-64-vmwpv-p-0001) provides more details about the object layout of that VM. There are two sections under each VM -- VM Home and Virtual Disk

VM Home

 

 

 

VMs Tab (Contd)

 

We now turn our attention to the second main section under each VM

Virtual Disk

 

 

VMs Tab (Contd)

 

This expanded view of backing disk shows a lot of information.

Moving further down we come to the RAID tree view.

 

 

DOM Owner

 

We explained earlier that every object in VSAN has an owner, which is responsible for providing RAID and resync services to ensure correctness. VSAN tries to co-locate the owner and the client so as not to incur an additional network hop.

The main purpose of this tab is for use by VMware GSS and VSAN developers. We do not recommend that users delve into the full graphs of the DOM owner tab, as advanced analysis and manual correlation are needed to understand the data in this layer.

 

Virtual SAN Observer Analysis - 2


We will continue our analysis in this chapter, but with a write-intensive workload as opposed to the read-intensive one we analyzed in the last chapter.

All performance data represented here is based on vSphere 6.0 (VSAN 2.0)


 

4k IO Size (80% Write, 4 Outstanding IO)

 

 

VSAN Client Tab

 

We see that

 

 

VSAN Disks Tab

 

 

 

VSAN Disks (Deep-Dive)

 

Peeking into one of the hosts, esx-02a

 

 

VMs Tab

 

From the analysis we can possibly infer

We have touched on a few troubleshooting examples via Virtual SAN Observer.  

In our next Section, we will examine troubleshooting steps via the vRealize Operations Management Plug-In for Virtual SAN.

 

vRealize Operations Manager Virtual SAN Plug-in Offline Demo


The vRealize Operations Management Pack for Storage Devices provides you with a complete view of your entire storage topology, from your host, through your storage network, and out to the storage array. With this solution, you can use vRealize Operations Manager to monitor and troubleshoot capacity and performance problems on different components of your storage area network.

The management pack features advanced insight into Virtual SAN through advanced analytics, enabling rapid troubleshooting and cluster optimization. It provides out-of-the-box dashboards for Virtual SAN troubleshooting, cluster insights, device insights, entity usage, heat maps, and more.

In this offline interactive demo, we will showcase important alerts and their solutions in a Virtual SAN environment.

Click here to view an interactive demo


Module 7:  Extending Virtual SAN with Additional File Services (60 Min)

Nexenta VSAN-based File Share Services



 

About NexentaConnect for VMware Virtual SAN

NexentaConnect for VMware Virtual SAN, aka NexentaConnect for VSAN, is a Nexenta product that provides network file services for VMware Virtual SAN. NexentaConnect for VSAN provides a user interface that enables you to manage NFS and SMB shares from the VMware vSphere Web Client, as well as monitor datastore performance and utilization.

NexentaConnect for VSAN provides the following features:

• Network Attached Storage (NAS) for VMware Virtual SAN

• NFSv3, NFSv4, and SMB 1.0 support

• User Interface integrated into VMware vSphere Web Client

• Performance and health monitoring

• Folder level snapshots

• Capacity savings by utilizing data reduction capability

• Support for Windows previous versions of files and folders (restore points)

• VSAN reservation for file services

 

 

 

NexentaConnect for VMware Virtual SAN Requirements

You must have the following components installed in your environment:

• VMware vCenter Server 5.5 U1 or later

• VMware vSphere 5.5 or later

• VMware VSAN 1.0 or later

 

 

 

Preparing for the NexentaConnect Lab (1)

 

Double click on the "Prepare Nexenta Lab" PowerCLI script shortcut on the Desktop.

This will prepare your Lab for the NexentaConnect Exercises.

 

 

Preparing for the NexentaConnect Lab (2)

 

Type exit to close the PowerCLI window.

Your Lab environment will continue to prepare for another minute or two, but you can continue through the next steps in the Lab manual.

 

 

Login to vSphere Web Client

 

Check the box to "Use Windows session authentication"

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

Click "Login" button

 

 

NexentaConnect for VMware Virtual SAN Components

 

From the Home page of the vSphere Web Client, select Hosts and Clusters

• NexentaConnect for VMware vSphere Web Client plugin

Provides a user interface that enables you to manage NexentaConnect VSA folders from the VMware vSphere Web Client. The plugin is provided as an MSI installer (for Windows) and as an installation script (for Linux).

• NexentaConnect Manager

Processes all operations between VMware Virtual SAN and NexentaConnect for VSAN Web Client plugin.

• Nexenta IO Engine

Provides file services, such as NFS and SMB, as well as snapshot creation capability and the Auto-Snap service. Managed by NexentaConnect Manager. Deployed from the Nexenta IO Engine Image.

 

 

NexentaConnect Settings

 

Let's first have a look at the NexentaConnect Settings.

We have preinstalled the NexentaConnect software for you, so let's look at the settings we would otherwise have to configure.

Click the NexentaConnect Settings icon under Administration

 

 

NexentaConnect Settings (1)

 

The NexentaConnect Settings will be displayed.

Here we can see the NexentaConnect Manager IP Address and the Manager Port Number.

We can also see the name of the Nexenta IO Engine Image. This is the VM template that gets cloned to provide the NFS and SMB file services.

 

 

NexentaConnect Settings (2)

 

Click VMware vCenter Settings.

Here you will provide the VMware vCenter Server connection settings.

 

 

NexentaConnect Settings (3)

 

Click Configure Network.

Here we can configure either a Domain or Workgroup.

If you want to integrate with Active Directory for SMB File services permissions, you would provide the AD credentials here.

 

 

NexentaConnect Settings (4)

 

Click Register NexentaConnect

Here is where you register NexentaConnect software.

 

 

 

Overview of the Summary Tab

 

Click the Home button in the vSphere Web Client.

Select NexentaConnect for Virtual SAN

 

 

Overview of the Summary Tab

 

Select vsanDatastore and click the Summary Tab

Performance Summary displays the aggregated NexentaConnect filer vCPU, read cache utilization, and I/O performance. The Acceleration ratio compares real user IOPS against the IOPS executed by Virtual SAN.

Usage Summary displays the total number of NFS and SMB file shares and their respective utilized capacity with physical free and logical free space. Data Reduction ratio shows the capacity savings.

File services allows user to create and manage folders. Use the control icons on the top left corner of the table to add, delete and edit file shares. Detailed information is displayed below when a folder is selected.

 

 

Nexenta File Services

 

We have already created an initial NFS Share in the Lab environment.

When you create the very first share (NFS or SMB), the Nexenta IO Engine Template is cloned to create the Nexenta_IO_Engine VM.

Select the share, and towards the bottom of the screen you will see the details of the NFS Share that has been created.

Use the scroll bar to see all the details of the NFS Share that has been created.

 

 

Creating an NFS Share (1)

 

The following steps will demonstrate how to create an NFS Share.

Click the Add Folder icon.

 

 

Creating an NFS Share (2)

 

Enter the following information to create the NFS Share :

 Use the exact settings outlined below for the best experience.

Folder Name : NFS-1
Description : My Second NFS Share
Filer Network Settings : Leave defaults ( Once the Nexenta_IO_Engine is running, these cannot be changed )
Share Type : NFS
Storage Policy : NFS-1
Max Folder Size : 1 GB

Note : The Storage Policy needs to be already defined in the vSphere Web Client in the VM Storage Policies.

Click Create

 

 

Creating an NFS Share (3)

 

The File Services window shows the progress of the task.

What is happening in the background is that a Disk of 1 GB has been attached to the NexentaConnect IO Engine.

While that task is completing, you can move to the next step where we will review the VM Storage Policies that are being used.

 

 

 

Reviewing VM Storage Policies

 

On the top of the vSphere Web Client, select the Home button and select Policies and Profiles.

 

 

Reviewing VM Storage Policies

 

Select VM Storage Policies and select NFS-1.

This is the VM Storage Policy that we used to create the previous NFS Share.

 

 

Reviewing VM Storage Policies

 

Click the Edit button  ( the pencil icon ) and select the option Rule-Set-1

For the purposes of the lab, we have created a very simple VM Storage Policy with Number of failures to tolerate = 1, which defines the number of host, disk, or network failures a storage object can tolerate. You can use any combination of the 5 VSAN capability rules available to you.

Click Cancel as we are not going to change anything here.

 

 

Creating an NFS Share (4)

 

Select the Home icon at the top of the vSphere Web Client and return to NexentaConnect for Virtual SAN

 

 

Creating an NFS Share (5)

 

Select vsanDatastore under Nexenta Filers and select Summary Tab.

Scroll down the page and you will see that the NFS Share called NFS-1/NFS-1 has been created.

 

 

Creating an NFS Share (6)

 

Select the NFS Share ( NFS-1 ) that you just created and scroll down to see the details of the NFS Share.

 

 

Mounting NFS Datastore to ESXi Hosts (1)

 

We will now mount this NFS Share to the ESXi hosts in our Lab Environment.

Select the Home menu and select Hosts and Clusters

Select the ESXi host called esx-01a.corp.local

Right click and select Storage and New Datastore

 

 

Mounting NFS Datastore to ESXi Hosts (2)

 

For the New Datastore Type, select NFS

Click Next

 

 

Mounting NFS Datastore to ESXi Hosts (3)

 

Select NFS 3

Click Next

 

 

Mounting NFS Datastore to ESXi Hosts (4)

 

Use the following information to create the NFS Datastore. These details are available from the properties of the share that you just created.

Datastore name : NFS-1
Folder : /volumes/NFS-1/NFS-1
Server : 192.168.100.201

Click Next

 

 

Mounting NFS Datastore to ESXi Hosts (5)

 

Review the details and Click Finish

 

 

Mounting NFS Datastore to ESXi Hosts (6)

 

For the ESXi host called esx-01a.corp.local, select Related Objects

Select Datastores

Here you will see the NFS datastore (NFS-1) mounted on the ESXi host.

Additional Step : Right click the Datastore and select option to Mount Datastore to Additional Hosts

Select ESXi hosts called esx-02a.corp.local and esx-03a.corp.local

 

 

 

Mounting NFS Datastore to ESXi Hosts (7)

 

Select the Storage section in the vSphere Web Client Navigator

Select the NFS Datastore called NFS-1

Select Related Objects

Select Hosts

Here we will see the NFS Share called NFS-1 mounted on the 3 ESXi Hosts.

 

 

Creating an SMB Share (1)

 

In this Lab we will create an SMB share

Select NexentaConnect for Virtual SAN from the Home screen of the vSphere Web Client

 

 

Creating an SMB Share (2)

 

From the Nexenta Filers, select vsanDatastore

Select the Summary tab.

Click the Add Folder icon to create the SMB share

 

 

Creating an SMB Share (3)

 

Enter the following information to create the SMB Share

Folder Name : SMB-1
Description : My First SMB Share
Filer Network Settings : Leave defaults
Share Type : SMB
Storage Policy : SMB-1
Max Folder Size : 1 GB

Note : The Storage Policy needs to be already defined in the vSphere Web Client.

Click Create

 

 

Creating an SMB Share (4)

 

The SMB share called SMB-1 will be created

 

 

Creating an SMB Share (5)

 

Select the SMB share called SMB-1/SMB-1 and scroll down to see the details of the share that you have just created.

 

 

Assigning Permissions to SMB Shares (1)

 

Click User Configuration

After you create an SMB folder, you may want to specify access permissions for the folder. By default, the smb user has access to the shared folder. However, you can specify additional access permissions from Windows clients using the Microsoft Management Console (MMC) snap-in.

Default password for the smb user is nexenta.

 

 

Assigning Permissions to SMB Shares (2)

 

To open Microsoft Management Console (MMC) - Click Start, click Run, type mmc, and then click OK

 

 

 

Assigning Permissions to SMB Shares (3)

 

The empty MMC console appears.

Click File > Add/Remove Snap-in.

In the Available Snap-ins list, select Shared Folders.

Click Add

 

 

Assigning Permissions to SMB Shares (4)

 

In the Shared Folders dialog window, select Another Computer.

Type the NexentaConnect VSA IP address 192.168.100.201

Under View, select Shares.

Click Finish.

Click OK.

 

 

Assigning Permissions to SMB Shares (5)

 

Under Console Root, select Shares> smb-1_smb-1

 

 

Assigning Permissions to SMB Shares (6)

 

Double click the share named smb-1_smb-1 to open the Properties

Select Share Permissions

 

 

Assigning Permissions to SMB Shares (7)

 

Click Add button

Enter Administrator and click Check Names

Click OK

 

 

Assigning Permissions to SMB Shares (8)

 

The default permissions are Read

Assign the Full Control and Change to the Administrator user.

This is a very simple exercise on SMB Share Permissions, refer to Microsoft Documentation for more complex scenarios.

Click Apply

Click OK

 

 

Mapping the SMB Share to Windows host

 

Click Start> Run

Enter the Following information :

\\192.168.100.201\smb-1_smb-1

Click OK

The SMB Share will be opened.

 

 

Create Data on SMB Share

 

Right click on the SMB Share and create a text document; let's call it Nexenta Snapshot.txt

Open the document called Nexenta Snapshot.txt and enter some text, something like This is version 1.

 

 

NexentaConnect Snapshots (1)

 

A snapshot is a read-only copy of a dataset. You can roll back datasets to previous versions. Initially, snapshots do not occupy any disk space; as the dataset changes, they start to consume space by referencing the old data. NexentaConnect for VMware Virtual SAN provides the capability to manage snapshots and snapshot schedules for NexentaConnect VSA folders. Using the NexentaConnect for VSAN user interface, you can perform standard operations with snapshots, such as create and delete.

You can create instant folder snapshots using the NexentaConnect for VSAN user interface. Use these snapshots to protect and restore your data.

Click Data Protection.

Under Snapshots, click the Create Snapshot icon.

 

 

NexentaConnect Snapshots (2)

 

A Snapshot will be created

 

 

Recover Share from Snapshot

 

Open the file called Nexenta Snapshot.txt and add another line of text, something like This is version 2.

 

 

Create another Snapshot

 

You should now have 2 snapshots created for the SMB Share.

 

 

Mapping the SMB Share

 

It may be easier to do this exercise with the SMB share mapped to a drive letter.

 

 

Mapping the SMB Share

 

Use the following information to map the SMB share to a drive letter.

Drive : Z:
Folder : \\192.168.100.201\smb-1_smb-1
Reconnect at sign-in : <deselect>

Click Finish

 

 

Restore Previous Versions

 

Right click on the mapped SMB share and select the option to Restore previous versions.

 

 

Restore Previous Versions

 

Select the Snapshot with the older time stamp and select the Restore option.

Click Restore

Click OK

Wait for the message "The Folder has been successfully restored to the Previous version".

Click OK

 

 

Restore Previous Versions

 

Open the file called Nexenta Snapshot.txt and you should see version 1 of the file that you created.

There should be just one line of text saying "This is version 1".

 

 

NexentaConnect Snapshot Schedules (1)

 

NexentaConnect for VSAN uses the native Nexenta Auto-Snap service to create snapshot schedules. You can create multiple schedules for any folder.

Under File Services, select the SMB-1/SMB-1 folder.

Click Data Protection.

Under Snapshot Scheduler, click the Create Snapshot Schedule icon.

 

 

 

NexentaConnect Snapshot Schedules (2)

 

In the Schedule Snapshot window, specify a schedule and retention policy.

Use the following information :

Interval : Every Minute
Run every N Minutes : 5
Retention Policy : 10

Click Create.

 

 

NexentaConnect Snapshot Schedules (3)

 

The Snapshot Schedule will be created.

 

 

NexentaConnect Monitoring  - Tasks (1)

 

You can monitor tasks performed by components of NexentaConnect for VMware Virtual SAN.

To view tasks, using the NexentaConnect for VSAN UI:

Select a registered VSAN datastore.

Click the Monitor tab.

Select the Tasks tab.

View the tasks; these should correspond to the tasks that you just completed in the lab exercises.

Note : If the tasks do not appear, minimize the Navigator pane by clicking on the pin (shown as 4 on the screen). The Navigator can be restored again by clicking the Navigator box.

 

 

NexentaConnect Monitoring (2)

 

Use system logs for troubleshooting NexentaConnect for VMware Virtual SAN.

To view system logs, using the NexentaConnect for VSAN UI:

Select a registered VSAN datastore.

Click the Monitor tab.

Select the System Logs tab.

You can also export the System Logs if requested by Nexenta Technical Support.

 

 

NexentaConnect Monitoring (3)

 

The Service Health tab displays information about the status of NexentaConnect Manager and Nexenta IO Engine.

To monitor service health, using the NexentaConnect for VSAN UI:

Select a registered VSAN datastore.

Click the Monitor tab.

Select the Service Health tab.

We have highlighted the 3 Shares that have been created.

 

This completes the Lab exercises on Nexenta VSAN-based File Share Services

 

 

 

NexentaConnect for VMware VSAN

 

What's included?

Learn more at:

nexenta.com/VSAN

vmware.com/products/virtual-san/

100% Software. Total Freedom. All Love

 

Module 8:  Securing Virtual SAN Data with Encryption (45 Min)

Preparing for Hytrust Lab



 

Execute "Prepare Hytrust" script

 

Double click on the "Prepare Hytrust Lab" script shortcut on the Desktop.

 

 

Monitor execution progress

 

The script takes less than a minute to execute; monitor the progress as per the output from the screenshot above.

Once complete, type exit at the prompt and press the Enter key.

While the Lab is being prepared in the background, continue through the next few steps to get the background on Hytrust.

 

HyTrust DataControl-based VM Encryption



 

Introduction

 

The figure provides a high level view of the main architectural components of the HyTrust KeyControl and DataControl solution.

HyTrust provides encryption and key management for virtual and physical machines. The major components of the HyTrust solution are:

HyTrust KeyControl Nodes and clusters - supporting an active-active cluster, the KeyControl cluster stores keys, policies and configuration data related to the cluster and to any number of virtual machines where the HyTrust DataControl Policy Agent is installed. Administration of the system is through a web-browser-based GUI or through a set of REST-based APIs. Communication between the browser and the KeyControl cluster is over HTTPS. Since this is a full active-active cluster, the browser can point at any KeyControl node in the cluster. Any changes made are immediately reflected on all cluster nodes.

HyTrust DataControl Policy Agent - the HyTrust DataControl Policy Agent (the DataControl agent) is a software module that runs inside Windows and Linux virtual machines, either local or in a private, public or hybrid cloud, providing encryption of virtual disks and individual files. The DataControl agent is typically used to provide encryption of virtual machines (or physical servers) in the data center. All VMs that have the DataControl agent installed can also securely share encrypted files. Encryption keys (keyIDs) can be used by selected VMs to encrypt and decrypt files. Encrypted files can also be sent to cloud storage such as Amazon S3 and only accessed by the selected VMs where the DataControl agent is installed.

The KeyControl nodes run HyTrust's FreeBSD-based core operating system.

 

 

HyTrust KeyControl Nodes and Clusters

 

At the heart of every DataControl deployment is an active-active cluster of KeyControl nodes that manage encryption within individual virtual/physical machines. All administration takes place from a standard web browser to any node in the KeyControl cluster or from a set of REST-based APIs. KeyControl nodes typically reside in your data center but could be run out of the public cloud as well.

KeyControl features include:

 

 

Administration model

 

The administration model provides for:

Security Admin: 

Domain Admin:

Cloud Admin:

 

 

 

Support for VM in-guest encryption using the HyTrust DataControl Policy Agent

 

The HyTrust DataControl Policy Agent (the DataControl agent) provides for encryption within a virtual machine.

There are a number of features provided in the DataControl agent including:

 

 

General Overview

 

To understand the major components of the system consider the following figure:

A "Cloud Administrator" controls one or more "Cloud VM Sets" through the KeyControl cluster. Each Cloud VM Set can be considered a logical grouping of related VMs, for example "Amazon EC2 VMs," "Savvis VMs," "Legal Dept VMs," and so on. In addition to this logical grouping, authentication between the VMs and the KeyControl cluster requires the use of a per-VM certificate which is used during registration of the VM with the KeyControl cluster. Certificates can be revoked post-install which de-authenticates the VM that is tied to the certificate.

 

 

 

Installation of the HyTrust DataControl Policy Agent

 

Although operation of the installed DataControl agent is similar across Windows and Linux, installation and adding devices to be encrypted is different.

Installation of the KeyControl cluster is as described in the KeyControl Installation, Upgrade and Configuration Documentation. When installing the KeyControl cluster we recommend that you mirror the system drives, have at least two KeyControl nodes for High Availability / DR purposes, establish sound KeyControl backup procedures, and ensure that the master key parts are well protected.

The overall steps to be followed to install and get going are shown in the following figure:

 

 

Managing Devices From the KeyControl Cluster

 

The following figure shows the state of disks as they go from "FREE" (out of HyTrust control), to within HyTrust control and accessible by applications (attached), or to a detached state in which case access to decrypted data is not available.

Once devices have been added and attached, they can be viewed from within the webGUI. Select the Cloud VM Set and the VM from the file browser. Expand the tab for the VM and you will see the Disks tag. Expand this and the list of disks will be displayed.

 

Hytrust User Interface



 

Open Mozilla Firefox Browser

 

Launch the Mozilla Firefox browser from the desktop.

 

 

Login to vSphere Web Client

 

Check the box to "Use Windows session authentication"

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

Click "Login" button

 

 

Hosts and Clusters view

 

1. Select the Home tab

2. Select Hosts and Clusters view under Inventories

 

 

Verify Hytrust Lab Status

 

Verify that the Hytrust-VM01 has been registered and powered on.

The Hytrust-VM01 should have an IP Address of 192.168.110.91.

 

 

Hytrust User Interface

 

1. Open a new tab in Firefox - ( Click on the + sign)

2. Click the Hytrust Cloud Administration quick link in the Firefox browser or enter the following link : https://hytrust.corp.local

There may be situations where you are asked to recover the Master Key for the Hytrust Cloud Administration portal. This happens because the VMworld labs are deployed to different cloud instances, and the Hytrust software determines that it has moved to a different host.

If you encounter this, there is a README.txt file on your desktop. At the bottom of the README.txt file, we have copied this Hytrust Master Key. Copy the entire key and paste it into the Recover Master Key page and click the Apply button.

You will then be presented with the Hytrust Cloud Administration portal.

 

 

Hytrust User Interface

 

Login with username of secroot and a password of VMware1! and select Sign In

 

 

Hytrust Security

 

1. Select the SECURITY tab

2. Expand Users

3. Here we can see the initial secroot user of the KeyControl Domain.

From here you can add additional users and assign them to Groups.

For the purposes of this lab, we will continue and use the secroot user.

 

 

Hytrust Domains

 

1. Select the DOMAINS tab

2. Expand Domains and expand KeyControl Domain

3. Here we can see the initial hytrust.corp.local KeyControl node.

Additional KeyControl Nodes can be added by installing another KeyControl Node and joining the KeyControl Cluster. KeyControl nodes can operate singly, but usually operate as part of an active-active cluster. You can log into any node in the cluster and make changes that will be automatically replicated across all other nodes in the cluster.

 

 

Hytrust Cloud

 

A "Cloud Administrator" controls one or more "Cloud VM Sets" through the KeyControl cluster. Each Cloud VM Set can be considered a logical grouping of related VMs, for example "VMworld US",  "VMworld Europe"  and so on. In addition to this logical grouping, authentication between the VMs and the KeyControl cluster requires the use of a per-VM certificate which is used during registration of the VM with the KeyControl cluster.

1. Select the CLOUD tab

2. Select Cloud VM Sets

3. Select Create Cloud VM Set

 

 

 

Hytrust Create Cloud VM Set

 

You will then be presented with the following screen:

Enter the following information to create the Cloud VM Set

Name :  VMworld-HOL-VMs
Group : Cloud Admin Group
Description : < any description >

Click Apply

 

 

Hytrust Create Cloud VM Set

 

The Cloud VM Set called VMworld-HOL-VMs will be created.

 

 

Create KeyControl Mapping

 

You can maintain a list of KeyControl mappings from within KeyControl itself. Each mapping is a list of KeyControl <IP, port number> pairs. You can have multiple lists. For example, if you have a KeyControl cluster in the data center, you may have direct access from your VMs within the data center. However, your VMs in the public cloud may come through a firewall.
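As a purely illustrative sketch (the mapping name and the second node below are hypothetical; the lab itself uses a single node, 192.168.110.90 on port 443), a KeyControl mapping boils down to a named list of <IP, port number> pairs:

```text
Name : Datacenter-Direct           (hypothetical mapping name)
192.168.110.90 : 443               (the lab's KeyControl node)
192.168.110.91 : 443               (hypothetical second node in the same mapping)
```

VMs that register using such a mapping can reach KeyControl through any of the listed pairs, which is what makes a separate list useful when, as noted above, cloud VMs must come through a firewall.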

1. Select the CLOUD tab

2. Select KeyControl Mappings

3. Select Create KeyControl Mapping

 

 

Create KeyControl Mapping

 

Enter the following information to create the KeyControl Mapping :

Name : VMworld
Group : Cloud Admin group
Description : VMworld KeyControl Mapping
Description : SanFrancisco
Server Name / IP Address : 192.168.110.90
Port : 443

Click Apply

 

 

Create KeyControl Mapping

 

Your new KeyControl Mapping will be listed with the number of servers.

 

 

Open PuTTY connection to Hytrust-VM01 VM

 

Open a PuTTY connection to hytrust-vm01.corp.local

Log in as root with a password of VMware1!

 

 

 

Write some data to the disk

 

Before we encrypt the disk, let's write some data to it.

First we will mount the disk, then create a folder and a data file on it :

mkdir /diskb
mount /dev/sdb1 /diskb
mkdir /diskb/Datadisk
touch /diskb/Datafile.txt
ls -al /diskb
umount /diskb

 

 

 

Installing Hytrust agent

 

We have already copied the Hytrust Agent for Linux to the VM for you.

Change to the following folder :

cd /Hytrust/hcs-client-agent/linux/
ls

The Hytrust agent is located in the folder.

 

 

 

Installing Hytrust agent

 

To install the Hytrust agent, run the following command :

sh hcs-client-agent-2.7.1-6930M.run

 

 

Installing Hytrust agent

 

The status of the VM can be checked at any time using the hcl status command.

Run the following command

hcl status

This will give us some information about the disks in our host.

Notice the following :

Summary
--------------------------------------------------------------------------------
KeyControl: None
Status: Not registered

and also :

Available Devices

--------------------------------------------------------------------------------
Disk Name           Device Node                     Size (in MB)
--------------------------------------------------------------------------------
sdb1                /dev/sdb1                       50
sdc                 /dev/sdc                        50

 

 

 

Registering the VM with Hytrust

 

Here, we invoke hcl register to  register the VM with Hytrust, but pass the -a option. First, hcl prompts for your username and password. It authenticates against KeyControl and then lists the available Cloud VM Sets. All you need to do now is to select the Cloud VM Set in which to place the VM. A certificate is created, copied to the VM and registration/authentication takes place.

Enter the following command :

hcl register -a 192.168.110.90

Use the following information :

Username : secroot
Password : VMware1!
Cloud VM Set : VMworld-HOL-VMs
KeyControl Mapping : 1

 

 

 

Registering the VM with Hytrust

 

The status of the VM can be checked at any time using the hcl status command.

For example, following a successful registration, run the command as follows:

hcl status

The status is shown as Connected. The list of devices detected is shown along with information about whether the device is available or in use. It displays all physical devices and logical volumes that are in use or available.

 

 

Registering the VM with Hytrust

 

If the Hytrust Cloud Administration portal times out, log in as secroot with a password of VMware1!

You can then find your VM in the correct Cloud VM Set as the following figure shows.

1. Select CLOUD

2. Expand Cloud VM sets and highlight the hytrust-vm01

Here we can see our VM called hytrust-vm01 is registered.

 

 

Registering the VM with Hytrust

 

Click Disks

As yet, we don't have any disks under the control of Hytrust.

A disk can be in one of two states: either it is free, or it is under HyTrust control, in which case it must be attached before applications can access it.

 

 

Encrypt the contents of a disk

 

You can use hcl add if you are starting with a new disk. If you already have data on your disk (for example, a filesystem or files) you can use hcl encrypt which will encrypt the contents.

Note that you should partition disks first.

To encrypt the contents of a disk, run the following command :

hcl encrypt sdb1

 

 

 

Encrypt the contents of a disk

 

HyTrust DataControl requires disks to be self-identifying. This is achieved by writing a private region near the start of the disk through which we can associate keys and key versions.

Go to the KeyControl GUI and see the disks encrypted by expanding the hytrust-vm01 tab in the file browser.

 

 

HCL Status

 

The unencrypted path to the /dev/sdb1 disk is through /dev/mapper/clear_sdb1.

Note however that the unencrypted path is not accessible when a disk is detached.

 

 

Mounting Unencrypted path

 

To mount the unencrypted disk, run the following commands :

mount /dev/mapper/clear_sdb1 /diskb
ls -la /diskb
umount /diskb

Here you should see the Datadisk folder and the Datafile.txt file that you created earlier.

 

 

HCL Encryption

 

The following figure summarizes the layers at which encrypted and unencrypted data is available.

Once the encrypted device has been set up you should NOT access the unencrypted device through /dev/sdb1.

The Linux kernel caches disk data, and that cache may be flushed to disk periodically.

If you write to the raw device without going through the /dev/mapper interface, you could end up with corrupted data.

 

 

Moving a Disk to another Virtual Machine


HyTrust supports the migration of disks between one or more Linux VMs or between one or more Windows VMs. All VMs to which the disk will be moved must be in the same Cloud VM Set. We must therefore ensure that disks are self-identifying by having a GUID stamped on the disk.

 


 

Viewing the Disk GUID

 

Run the following command to see the GUID details :

hcl status -g

We can now see the associated GUIDs. At this point you are able to copy this disk to another VM in the same Cloud VM Set.

On hytrust-vm01, run the following command to power off the VM :

shutdown -h now

You can close the PuTTY session.

 

 

Attaching the disk to another VM

 

In the Hosts and Clusters view of the vSphere Web Client, locate the VM called Hytrust-VM02

Right click the VM and select Edit Settings

 

 

Attaching the disk to another VM

 

In the New Device section of the Edit Settings, select Existing Hard Disk and click Add

 

 

Attaching the disk to another VM

 

Expand the vsanDatastore, select the folder Hytrust-VM01 and select the disk called Hytrust-VM01_1.vmdk

Click OK

 

 

Attaching the disk to another VM

 

Click OK to add the disk.

Once the disk is added to the Hytrust-VM02, power on the VM called Hytrust-VM02.

Wait until the VM is fully powered on. 

 

 

Open Putty connection to Hytrust-VM02

 

Open the Putty application from the taskbar.

Select hytrust-vm02.corp.local and click Open.

 

 

Verify Disk is attached to the Hytrust-VM02 VM

 

Run the following command to verify that the disk was added to the VM.

fdisk -l

Here we can see that the disk was added as /dev/sdc

 

 

Verify Disk is attached to the Hytrust-VM02 VM

 

Let's try to mount the disk /dev/sdc on hytrust-vm02

Run the following commands

cd /
mkdir /diskc
mount /dev/sdc1 /diskc

The mount fails with an unrecognized filesystem type, because the contents of the disk are still encrypted.

 

 

 

Install the Hytrust agent

 

Change to the following folder where we have copied the Hytrust agent software.

Run the following commands :

cd /Hytrust/hcs-client-agent/linux
ls -la

 

 

 

Install the Hytrust agent

 

To install the Hytrust agent, run the following command :

sh hcs-client-agent-2.7.1-6930M.run

The Hytrust agent will be installed.

 

 

Register the VM with Hytrust Keystore

 

Run the following command to register the VM with Hytrust Keystore:

hcl register -a 192.168.110.90

Use the following information :

Username : secroot
Password VMware1!
Cloud VM Set : VMworld-HOL-VMs
KeyControl Mapping : 1

 

 

Registering the VM with Hytrust

 

The status of the VM can be checked using the hcl status command.

For example, following a successful registration, run the command as follows:

hcl status

The status is shown as Connected. The list of devices detected is shown along with information about whether the device is available or in use. It displays all physical devices and logical volumes that are in use or available.

 

 

Registering the VM with Hytrust

 

If the Hytrust Cloud Administration portal times out, log in as secroot with a password of VMware1!

You can then find your VM in the correct Cloud VM Set as the following figure shows.

1. Select CLOUD

2. Expand Cloud VM sets and highlight the hytrust-vm02

Here we can see our VM is registered.

 

 

Registering the VM with Hytrust

 

Expand hytrust-vm02

Click Disks

As yet, we don't have any disks under the control of Hytrust.

A disk can be in one of two states: either it is free, or it is under HyTrust control, in which case it must be attached before applications can access it.

 

 

Add the Disk to Hytrust

 

To add the disk, run the following command :

hcl add sdc1

 

 

Viewing the Disk GUID

 

Run the following command to see the GUID details :

hcl status -g

We can now see the associated GUIDs. This should be the same GUID that was returned when we ran "hcl status -g" on the other VM earlier in the lab.

 

 

Mounting Unencrypted path

 

To mount the unencrypted disk, run the following commands :

mount /dev/mapper/clear_sdc1 /diskc
ls -la /diskc
umount /diskc

Here you should see the Datadisk folder and the Datafile.txt file that you created earlier.

 

This concludes the Lab module on Moving a Disk to another Virtual Machine.

 

Data Forensics Lab


In this lab we will use some data forensics tools to look at the data on the disk. You will simulate "stealing" a vmdk and then load it into Autopsy, which shows how real and deleted files inside the vmdk can be seen. We will then do the same thing with an encrypted version of the vmdk, where everything is obfuscated by the encryption and nothing is detected at all.

 


 

Launching Autopsy application

 

We have already installed the Autopsy application on the Hytrust-VM02 VM for you.

Open a Putty session to the Hytrust-VM02 VM

Launch the Autopsy application as follows :

autopsy -d /datadisk/Documents/Evidence_Locker 192.168.110.10

Keep this PuTTY session open throughout this lab exercise. The URL displayed on your screen will be different from the one shown here.

 

 

 

Launching Autopsy application

 

Copy the URL and paste it to a blank tab in the FireFox browser.

(To copy, just highlight the URL; be careful, as Ctrl-C may end the Autopsy application. If the Autopsy application ends, just start it again, and note that you may be presented with a new URL.)

Leave this web page open as we will be returning to it later.

 

 

Hytrust Disk Status

 

Open another Putty session to the VM Hytrust-VM02.

 

 

Hytrust Disk Status

 

Run the following command to get the current status of the disks in our environment.

hcl status

Here we can see that the disk is Attached and Registered with Hytrust.

 

 

Detach the disk from Hytrust control

 

Run the following command to detach the disk from Hytrust control

hcl detach sdc1

 

 

 

Detach the disk from Hytrust control

 

Let's confirm with the following command.

hcl status

Here we can see that the device is still registered, but detached.

 

 

Create a DD image of the disk

 

Run the following command to create a dd image of the disk

dd if=/dev/sdc1 bs=1M > /datadisk/sdc1-encrypted.img
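Assuming the encryption worked, the captured image should look like featureless data with no filesystem signature. The quick check below uses a locally generated random-byte file as a stand-in so it can be run anywhere; in the lab you would point `file` at /datadisk/sdc1-encrypted.img instead.

```shell
# Sanity check: an encrypted partition image carries no recognizable
# filesystem magic, so `file` typically reports it as generic data
# rather than, say, an ext4 filesystem.
head -c 1048576 /dev/urandom > sample.img   # stand-in for the encrypted image
file sample.img
```

This matches what Autopsy will conclude later in the exercise: the image is detected as a raw partition with no readable filesystem.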

 

 

 

 

Opening the Autopsy Case

 

Return to the Autopsy Web Page that you opened in your browser previously and click New Case

 

 

Opening the Autopsy Case

 

Give the case a name; we have called it SDC1-Encrypted.

The other boxes on the screen are optional.

Click New Case

 

 

Opening the Autopsy Case

 

Click Add Host

 

 

Opening the Autopsy Case

 

Enter Hytrust-VM02 as the Host Name

Click Add Host

 

 

Opening the Autopsy Case

 

Click Add Image

 

 

Opening the Autopsy Case

 

Click Add Image File

 

 

Opening the Autopsy Case

 

Enter the location of the disk image that you created earlier; in our case it was :

/datadisk/sdc1-encrypted.img

For the Type, select Partition

For the Import Method, select Symlink

Click Next

 

 

Opening the Autopsy Case

 

Here we can see the Analysis of the Disk.

A few things to call out here :

1. Warning: The file system type of the volume image file could not be determined. If this is a disk image file, return to the previous page and change the type.

2. Analysis of the image file shows the following partitions: File System Type : raw

Click Add

 

 

 

Opening the Autopsy Case

 

Click OK

 

 

Opening the Autopsy Case

 

At this stage there is nothing more we can do here. We cannot read the encrypted filesystem on the disk, as it has been detected as raw.

This ends the Data Forensics Lab.

 

Module 9:  Identify and Resolve Virtual SAN Issues (30 Min)

Troubleshoot Random Errors injected real time into your VSAN Lab Environment


The VSAN Health Check training tool is a web-based application running in the vCenter Server.

It is designed to inject VSAN fault scenarios to teach a user how to fix VSAN-related issues according to the VSAN health check result.

This lab uses a challenge format: VSAN failures will be injected into the environment, and you will be asked to resolve them with the help of the VSAN Health Check plugin.


Preparing Virtual SAN



 

Execute Virtual SAN preparation script

 

We have just a few steps to complete before we get on our way: enabling the Virtual SAN cluster for your environment and preparing an ESXi host to participate in the VSAN Cluster.

Double click on the "Prepare VSAN Cluster" PowerCLI script shortcut on the Desktop of your control center VM.

This will prepare your Lab environment for the Virtual SAN exercises to follow.

 

 

Monitor execution progress (1)

 

The script takes less than a minute to execute; monitor the progress.

Wait for the script to finish preparing your Virtual SAN environment.

Type exit at the prompt and press the Enter key.

 

 

Open Mozilla Firefox browser

 

Launch the Mozilla Firefox browser from the desktop, this will launch the VMware vSphere Web Client.

 

 

Login to vSphere Web Client

 

Check the "Use Windows session authentication" box and click Login

Alternatively, enter a user name of administrator@corp.local and a password of VMware1!

 

 

 

Hosts and Clusters view

 

First, let's have a look around the environment to familiarize yourself with the Virtual SAN setup.

1. Select the Home tab

2. Select Hosts and Clusters view under Inventories

 

 

Hosts and Clusters View

 

Note : For this lab to function, you need 4 ESXi hosts in the cluster. If you have additional hosts in "Datacenter Site A", don't worry; these do not come into play for these lab exercises. Depending on which previous module you took, you may have additional VMs registered in your environment.

If you have only 3 ESXi hosts in the Cluster, select the ESXi host called esx-04a.corp.local and move it into the cluster called Cluster Site A.

This can be done by choosing one of the following options :

1. Drag the esx-04a.corp.local and drop it on the cluster called Cluster Site A

or

2. Right click the esx host called esx-04a.corp.local and select Move To... (select the cluster called Cluster Site A and take the default options presented)

Once that task has completed, take the host out of Maintenance Mode.

Right click the esx host called esx-04a.corp.local and select Maintenance Mode -> Exit Maintenance Mode

 

 

Hosts and Cluster View

 

The ESXi host esx-04a.corp.local, needs to have one of its disks converted to a Flash disk to be able to participate in the Virtual SAN Cluster.

You may have completed this task in a previous lab; check that the 8 GB disk has a disk type of Flash.

Select esx-04a.corp.local -> Manage -> Storage -> Storage Devices

Select the disk with the 8 GB capacity and click the "F" icon to convert it to a Flash Disk.

 

 

Verify Virtual SAN Status

 

1. Select VSAN Cluster from the object navigation pane

2. Select Manage tab in the Content pane

3. Select Settings button

4. Select General under Virtual SAN section

5. Verify that Virtual SAN is Turned ON and Add disks to storage is set to Automatic

6. Verify that the 4 hosts under Resources are as per the screenshot and that the Network Status is Normal. The total capacity of the Virtual SAN datastore may differ from the screenshot presented here.

 

 

VSAN Health User Interface


Let's now look at some of the areas in the vSphere Web Client that you will use in the following scenarios to help troubleshoot the Virtual SAN failures that will be presented to you.

Virtual SAN 6.0 introduces a new health check plug-in. The plug-in includes several preconfigured health tests to monitor, troubleshoot, and diagnose the cause of cluster component problems.

The plug-in includes the following key features:


 

VSAN Health Service

 

To enable the VSAN Health Service, vSphere DRS Automation needs to be in Fully Automated mode.

1. Select Cluster Site A from the object navigation pane

2. Select Manage tab in the Content pane

3. Select Settings button

4. Select vSphere DRS

5. Click Edit and change the DRS Automation mode to Fully Automated.

 

 

VSAN Health Service

 

After you add or remove hosts that are configured for the VSAN Health Service to or from your Virtual SAN cluster, the Retry button is enabled.

1. Select Cluster Site A from the object navigation pane

2. Select Manage tab in the Content pane

3. Select Settings button

4. Select Health under Virtual SAN section

5. Click Retry. Virtual SAN initiates automatic removal or installation of the VSAN health service plug-in and restarts the hosts that are removed from the cluster in order to complete the uninstall process.

 

 

VSAN Health Service

 

The following message appears. It explains the process that is going to take place, in summary :

1. The VSAN Health Extension VIB will be installed on any host that does not have it running.
2. Those hosts will be automatically rebooted.
3. The DRS Automation mode is checked to ensure that it is in Fully Automated mode.
4. A rolling reboot takes place, one host at a time, and ensures that all production workloads remain running.

Click Yes

This process may take a couple of minutes. Monitor the tasks until it reports that the VSAN Health Service Status is Enabled.

You can continue to the next step where we have links to some VSAN reference / troubleshooting documentation.

 

 

VSAN Health Check

Virtual SAN reference / troubleshooting documentation.

 

 

VSAN Health Check

 

With the VSAN Health Check Plugin installed, it can be used to verify the health of the Virtual SAN environment.

The first step is to verify that the health services are enabled. Once they are enabled, the individual checks can be examined to see if there are any issues with this Virtual SAN deployment.

1. Select VSAN Cluster from the object navigation pane

2. Select Monitor tab in the Content pane

3. Select Virtual SAN button

4. Select Health  

In this screen you will be able to run the VSAN Health check to see the status of your VSAN environment. You can run the Retest at any time to refresh the Virtual SAN Health status.

The Virtual SAN HCL health will always show as Warning as we are running these labs in a virtualized environment.

 

 

VSAN Proactive Tests

 

VSAN Health Services comes with a set of Proactive Tests that can be run from the vSphere Web Client.

VM Creation Test

This test creates a very simple, tiny VM on every ESXi host in the Virtual SAN cluster. If that creation succeeds, the VM is deleted and it can be concluded that many aspects of Virtual SAN are fully operational.

Multicast Performance Test

This test is designed to assess connectivity and multicast speed between the hosts in the Virtual SAN cluster. It verifies that the multicast network setup can satisfy VSAN's requirements.

Storage Performance Test

There are two primary use cases for this test:

  1. Burn-in hardware to detect faulty hardware. As the test is very stressful to all aspects of the Virtual SAN stack, including the network, flash devices, storage capacity devices and storage controllers, it should be able to detect unreliable hardware.
  2. A simple-to-use tool to assess the performance characteristics of a Virtual SAN cluster. The test can run a number of different workloads, varying between random and sequential, small and large I/O, high or low outstanding I/O, and different mixes of read and write I/O.

Since we are running in a lab environment rather than on physical hardware, these tests may fail, but it is still an interesting exercise to review the results.

 

VSAN - Scenario Failure Injection


Tips for Troubleshooting :

You are allowed to use any means to resolve this issue, e.g. the vSphere Web Client, the Virtual SAN Health Check Plugin or RVC.

Note that this environment has the VSAN Health Services installed. This should be used as a starting point for troubleshooting Virtual SAN issues.

There is also a significant amount of documentation available.

These include:


 

Injecting a Failure Scenario

 

Connect to the VSAN Training portal 

It is available as a second tab in the Firefox browser, or you can click the VSAN Training button quick link.

Read the following instructions before proceeding with the tests:

Inject a new scenario, assuming the cluster is currently in perfect health and has been returned to the default configuration. Your assignment is to identify the injected issue, resolve it, and restore the VSAN cluster to full health.

 

 

 




 


 

 

  1. Each scenario launches a task. Once the task completes successfully, the scenario is active.
  2. It is now your job to troubleshoot and resolve the situation.

Enter a scenario ID, and press Submit.

Read the message for each scenario; it changes from scenario to scenario and gives you a clue to the issue.

Please take the scenarios in the following order:

 

Solution 1 - VMkernel Nics not configured for Virtual SAN traffic



 


 

To participate in a Virtual SAN cluster, and form a single partition of fully connected ESXi hosts, each ESXi host in a VSAN cluster must have a vmknic (VMkernel NIC or VMkernel adapter) configured for Virtual SAN traffic. This check ensures each ESXi host in the Virtual SAN Cluster has a vmknic configured for Virtual SAN traffic.

Note: Even if an ESXi host is part of the Virtual SAN cluster, but is not contributing storage, it must still have a VMkernel NIC configured for Virtual SAN traffic.

Ensure that each ESXi host participating in the Virtual SAN cluster has a vmknic enabled for VSAN traffic.

This can be done using the vSphere Web Client, where each ESXi host’s networking configuration can easily be checked.

Navigate to Hosts and Clusters > ESXi host > Manage > Networking > VMkernel Adapters, check the Virtual SAN Traffic column and ensure that at least one vmknic is Enabled for Virtual SAN traffic.

It can also be checked from the CLI using the command:

 esxcli vsan network list

For example:

 [root@ESXi-h01:~] esxcli vsan network list
 Interface
    VmkNic Name: vmk2
    IP Protocol: IPv4
    Interface UUID: 264ed254-5aa5-0647-9cc7-001f29595f9f
    Agent Group Multicast Address: 224.2.3.4
    Agent Group Multicast Port: 23451
    Master Group Multicast Address: 224.1.2.3
    Master Group Multicast Port: 12345
    Multicast TTL: 5

In the preceding output, the VMkernel NIC vmk2 is used for Virtual SAN traffic.

KB article 2108062 has instructions on how to enable VSAN networking.
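As a sketch of what that KB describes (assuming the host should use vmk2 for VSAN, as in the example output above), tagging a VMkernel interface for Virtual SAN traffic from the ESXi shell looks like this:

```shell
# Tag an existing VMkernel interface for Virtual SAN traffic
# (vmk2 is an assumption based on the example output above):
esxcli vsan network ipv4 add -i vmk2

# Confirm the interface is now listed for Virtual SAN traffic:
esxcli vsan network list
```

Remember that this must be done on every host that shows the failed check, not just one.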

 

Return

 

Solution 2 - Incorrect multicast address for VMkernel nics



 


 

This solution provides steps to change the multicast address for each VMware Virtual SAN cluster. If there are multiple Virtual SAN clusters on the same Layer 2 network, each host receives all multicast messages. In order to reduce the amount of multicast traffic for each VSAN cluster, it is necessary to change the multicast address for each VMware Virtual SAN cluster.

Warning: If you change the multicast address on an active Virtual SAN cluster, it can lead to network partitioning until all of the ESXi hosts in the cluster are on the same multicast network. It is recommended to organize downtime before making this change.

In order to change the multicast address for VMware Virtual SAN, perform these steps on each ESXi host within the Virtual SAN Cluster.

To change the multicast address on an ESXi host configured for Virtual SAN:

Open an SSH connection to the ESXi host and log in as root.

Identify the VMkernel interface configured for Virtual SAN.

To identify the VMkernel interface, run this command on the ESXi hosts:

   esxcli vsan network list

   You see output similar to:

   Interface
       VmkNic Name: vmk1
       IP Protocol: IPv4
       Interface UUID: 28b52f53-69c1-c193-eabe-005056885a94
       Agent Group Multicast Address: 224.2.3.4
       Agent Group Multicast Port: 23451
       Master Group Multicast Address: 224.1.2.3
       Master Group Multicast Port: 12345
       Multicast TTL: 5

   To change the multicast address on each ESXi host in the cluster, run this command:

   esxcli vsan network ipv4 set -i <vmkernel interface> -d <multicast agent group address> -u <multicast master group address>

   For example, to set the Master Group Multicast Address to 224.1.2.3 and the Agent Group Multicast Address to 224.2.3.4, run this command on each ESXi host in this particular VSAN cluster:

   esxcli vsan network ipv4 set -i vmk3 -d 224.2.3.4 -u 224.1.2.3
   esxcli vsan network ipv4 set -i vmk4 -d 224.2.3.4 -u 224.1.2.3

KB article 2075451 has instructions on how to change the multicast address for a VMware Virtual SAN cluster.

 

Return

 

Solution 3 - Incorrect VSAN.ClomRepairDelay value



 


 

The VSAN.ClomRepairDelay setting specifies the amount of time Virtual SAN waits before rebuilding a disk object.

By default, the repair delay value is set to 60 minutes; this means that in the event of a failure that renders components ABSENT, Virtual SAN waits 60 minutes before rebuilding any disk objects.

This is because Virtual SAN is not certain if the failure is transient or permanent.

Changing this setting requires a clomd service restart.

Note: If a failure in a physical hardware component is detected, such as a Solid State Disk (SSD) or Magnetic Disk (MD), the affected components are marked as DEGRADED and Virtual SAN immediately responds by rebuilding them.

To change the repair delay time using the VMware vSphere Web Client, run these steps on each ESXi host in the VSAN cluster:

Log in with admin credentials to the VMware vCenter Server using the vSphere Web Client.

Select the VSAN Cluster and highlight the ESXi host > Manage > Settings.

Select Advanced System Settings > VSAN.ClomRepairDelay.

Click Edit.

Modify the VSAN.ClomRepairDelay value in minutes (default: 60) as required.

Restart the Cluster Level Object Manager (CLOM) service clomd to apply the changes by running this command:

   /etc/init.d/clomd restart

Note: Restarting the clomd service briefly interrupts CLOM operations. The length of the outage should be less than one second. However, if a virtual machine is being provisioned at the time the clomd service is restarted, that provisioning task may fail.
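Alternatively, the same setting can be changed per host from the ESXi CLI. A minimal sketch, assuming the standard advanced option path /VSAN/ClomRepairDelay (the value is in minutes):

```shell
# Check the current repair delay (in minutes):
esxcli system settings advanced list -o /VSAN/ClomRepairDelay

# Set it back to the default of 60 minutes:
esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 60

# The clomd service must still be restarted for the change to take effect:
/etc/init.d/clomd restart
```

As with the Web Client procedure, run this on each ESXi host in the VSAN cluster so the setting stays consistent across hosts.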

KB article 2075456 has instructions on how to change this setting.

 

Return

 

Solution 4 - VSAN CLOM service stopped



 


 

CLOMD (Cluster Level Object Manager Daemon) plays a key role in the operation of a VSAN cluster.

It runs on every host and is responsible for new object creation, initiating repair of existing objects after failures, all types of data moves and evacuations (e.g. Enter Maintenance Mode, Evacuate data on disk removal from Virtual SAN), maintaining balance and thus triggering rebalancing, implementing policy changes, etc.

In this scenario, the CLOM service has been stopped.
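Before restarting it, you can confirm from the ESXi shell that the daemon really is stopped; a minimal check:

```shell
# Report whether the clomd daemon is currently running:
/etc/init.d/clomd status

# Or look for the process directly:
ps | grep clomd
```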

Restart the Cluster Level Object Manager (CLOM) service clomd by running this command:

/etc/init.d/clomd restart

Note: Restarting the clomd service briefly interrupts CLOM operations. The length of the outage should be less than one second. However, if a virtual machine is being provisioned at the time the clomd service is restarted, that provisioning task may fail.

 

Return

 

Solution 5 - Virtual SAN Disk format downgrade



 


 

The On-disk Format Version displays how many of the disks have been upgraded to version 2, the VirstoFS on-disk format included with Virtual SAN 6.0.

Customers should upgrade to version 2 to leverage new performance and scalability improvements with snapshots and clones.

The upgrade can be done online via a rolling-upgrade mechanism.

In the above example, 3 of the disks are at the v1 on-disk format.

If disks are discovered with an outdated on-disk format, the “Upgrade” button will be available and may be used to update them to version 2.
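If you prefer the CLI, the Ruby vSphere Console (RVC), mentioned in the troubleshooting tips earlier, can report per-disk statistics, including the on-disk format version. A sketch (the cluster path below is illustrative; substitute your own datacenter and cluster names):

```shell
# From an RVC session on the vCenter Server, show per-disk statistics,
# including the on-disk format version for each disk (path is illustrative):
vsan.disks_stats /localhost/Datacenter/computers/VSAN-Cluster
```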

 

Return

 

Solution 6 - VM Storage Policies - Disk Failures to Tolerate



 


 

The VM Storage Policy capability Number Of Failures To Tolerate defines how many failures can occur in the VSAN cluster while still providing a full copy of the data, allowing a virtual machine to remain available.

The formula is as follows: to tolerate n failures, you need 2n + 1 ESXi hosts in the VSAN cluster.

This table should make it clearer:

Number of Failures To Tolerate    Number of Hosts in the Cluster

               1                                 3

               2                                 5

               3                                 7
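The 2n + 1 relationship above can be sketched as a one-line calculation (a trivial illustration, not part of the lab exercise):

```shell
# Minimum number of ESXi hosts needed to tolerate n failures: 2n + 1
hosts_required() {
  echo $(( 2 * $1 + 1 ))
}

hosts_required 1   # 3
hosts_required 2   # 5
hosts_required 3   # 7
```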

So what happens if you try to deploy a virtual machine using the Virtual SAN Default Storage Policy with a particular Number of Failures to Tolerate capability but the VSAN Cluster does not contain enough ESXi hosts to satisfy this request?

The above message is displayed when you try to deploy the VM (this example was trying to deploy a VM using the Virtual SAN Default Storage Policy with Number of Failures to Tolerate set to 2 on a VSAN cluster with only 4 ESXi hosts).

The solution here is to reduce the Number of Failures to Tolerate to 1 in the Virtual SAN Default Storage Policy; the VM should then be created successfully.

 

Return

 

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-SDC-1608

Version: 20150923-054732