VMware Hands-on Labs - HOL-1827-01-HCI


Lab Overview - HOL-1827-01-HCI - Virtual Volumes (VVOLs) and Storage Policy Based Management

Lab Guidance


Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time.  The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Virtual Volumes (VVols) is an integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure.

VVols simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed. It simplifies the delivery of storage service levels to individual applications by providing finer control of hardware resources and native array-based data services that can be instantiated with virtual machine granularity.

With Virtual Volumes (VVols), VMware offers a new paradigm in which an individual virtual machine and its disks, rather than a LUN, becomes a unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system.

This lab will guide you through the introduction, provisioning, and advanced features of VVols.

Lab Module List:

 Lab Captains:

 Platform Partners:

 

 

This lab manual can be downloaded from the Hands-on Labs Document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

Location of the Main Console

 

  1. The area in the RED box contains the Main Console.  The Lab Manual is on the tab to the right of the Main Console.
  2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
  3. Your lab starts with 90 minutes on the timer.  The lab cannot be saved.  All your work must be done during the lab session.  However, you can click EXTEND to increase your time.  If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes; each click gives you an additional 15 minutes.  Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes; each click gives you an additional hour.

 

 

Alternate Methods of Keyboard Data Entry

During this module, you will input text into the Main Console. Besides typing it in directly, there are two very helpful methods that make it easier to enter complex data.

 

 

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.  

 

 

Accessing the Online International Keyboard

 

You can also use the Online International Keyboard found in the Main Console.

  1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

 

 

Activation Prompt or Watermark

 

When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.  

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform.  The Hands-on Labs utilize this benefit, allowing us to run the labs out of multiple datacenters.  However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements.  The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation.  Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.  

 

 

Look at the lower right portion of the screen

 

Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Module 1 - Overview and Requirements (30 minutes)

Introduction


Welcome to the Virtual Volumes Lab!

In this Module, you will be given an overview of Virtual Volumes and add your first VVol to vCenter. You will also learn about the benefits that Virtual Volumes provide as well as the requirements necessary to utilize VMware Virtual Volumes (VVols).

 


Lesson 1: Overview and Requirements


Virtual Volumes (VVols) is an integration and management framework for external storage (SAN and NAS). This framework allows customers to easily assign and manage storage capabilities on a per-application (per-VM) basis at the hypervisor level using Storage Policy-Based Management (SPBM).

Virtual Volumes is an industry-wide initiative that allows customers to leverage the unique capabilities of their current storage investments and transition - without disruption - to a simpler and more efficient operational model optimized for virtual environments that works across all storage types.


 

Did you know?

With Virtual Volumes, there are two big changes to how Virtual Machine Snapshots work:

  1. Snapshots are no longer managed by vSphere.  You still create them the same way via the vSphere client; however, all of the snapshots are actually managed on the Storage Array.
  2. With traditional VMFS-based Virtual Machine snapshots, the base disk becomes read-only and all changes are written to delta files.  Sounds familiar, right?  Once you delete a snapshot, those delta files all have to be merged back into the base disk, which is both resource-intensive and time-consuming.  With Virtual Volumes, the base disk remains read/write when a snapshot is taken (the delta snap files hold the original data when a change is made).  When you delete a snapshot, you simply discard the snap delta files since the base disk already contains all of the latest data.

This results in an incredible benefit in performance and efficiency (with an additional benefit being that you no longer have to worry about keeping 'too many' snapshots around)!
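The difference between "merge on delete" and "discard on delete" can be sketched in a few lines of Python. This is an illustrative model only, not VMware code: the base disk stays writable, and each snapshot delta preserves the original data for any block that changes afterward.

```python
# Illustrative sketch of the VVol snapshot model described above (not VMware
# code): the base disk is always read/write, and the snapshot delta preserves
# the *original* data for any block that changes, so deleting a snapshot is
# just discarding the delta -- no merge back into the base disk.

class VVolStyleDisk:
    def __init__(self, blocks):
        self.base = dict(blocks)   # base disk: always read/write
        self.deltas = []           # one dict of preserved blocks per snapshot

    def snapshot(self):
        self.deltas.append({})     # new, empty delta; base is untouched

    def write(self, block, data):
        # Preserve the pre-change data in the newest delta, then write the base.
        if self.deltas and block not in self.deltas[-1]:
            self.deltas[-1][block] = self.base[block]
        self.base[block] = data

    def delete_snapshot(self):
        # Base already holds the latest data; just throw the delta away.
        self.deltas.pop()

    def revert_snapshot(self):
        # Restore the preserved blocks, undoing changes since the snapshot.
        for block, old in self.deltas.pop().items():
            self.base[block] = old

disk = VVolStyleDisk({0: "A", 1: "B"})
disk.snapshot()
disk.write(0, "A2")               # delta now preserves the original "A"
disk.delete_snapshot()            # cheap: no merge needed
print(disk.base[0])               # -> A2 (base already current)
```

Reverting a snapshot simply writes the preserved blocks back, and deleting one is a constant-time discard rather than a resource-intensive merge.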

 

 

Storage Optimized for Virtualized Environments

 

Virtual Volumes enable Virtual Machine aware storage and policy based management across heterogeneous arrays.

Let's jump right in and create a Virtual Volume!

 

 

Open Chrome Browser from Windows Quick Launch Task Bar

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

vCenter Login

 

  1. Select the checkbox for "Use Windows session authentication".
  2. Click Login

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

 

 

Host and Clusters

 

  1. Hover over the Home Menu
  2. Select Hosts and Clusters.

 

 

Add Storage

 

Expand vcsa-01a.corp.local if necessary

  1. Select the RegionA01 Cluster
  2. Right-click RegionA01-COMP01
  3. Select Storage > New Datastore...

 

 

Location

 

  1. Click Next

 

 

Select Type

 

  1. Select VVOL
  2. Click Next

 

 

Name and container selection

 

In this case, we are creating a VVol datastore that communicates with a Storage Array via an iSCSI-based protocol.

  1. Enter the Datastore Name:  VVol-FileBased-Datastore
  2. Click Next

 

 

Select Hosts Accessibility

 

  1. Select all three of the ESXi Hosts
  2. Click Next

 

 

Ready to complete

 

  1. Review settings then click Finish

 

 

Confirm VVol Creation

 

  1. Select the Datastores Tab
  2. Notice that the newly created Virtual Volume is showing Normal (online) status.

The Virtual Volume is ready to be used by Virtual Machines.

 

 

Congratulations!

In just a few simple steps, you have added a Virtual Volume to your vSphere lab environment.

If it feels like the exercise you just went through was nothing more than adding a New Datastore to vCenter, don't worry.  Working with Virtual Volumes is meant to provide a familiar storage experience to Virtual Infrastructure Administrators; however, they provide powerful new capabilities that have not existed previously.

In later modules of this lab you will learn about each of the Architectural components that drive Virtual Volumes and see first-hand how Storage Administrators perform the necessary steps to provision storage resources that are consumable via Virtual Volumes.

Virtual Volumes are enabled by features that are exposed via the back-end Storage Array (QoS, Disk Performance Tiers, Snapshotting, Replication, De-Duplication, etc.).  Virtual Machines that use Virtual Volumes can leverage the features that you choose via Software Based Policies, on the fly, without having to make any changes to the Storage Array or re-format LUNs via vSphere.

In the next Lesson you will learn about the benefits that Virtual Volumes provide.  You will then complete the Module with a review of the Software and Hardware Requirements necessary for Virtual Volumes.

 

Lesson 2: Benefits


vSphere Virtual Volumes implements the core tenets of the VMware Software-Defined Storage vision to enable a fundamentally more efficient Operational Model for external storage in virtualized environments, centering it on the Application instead of the physical infrastructure.

Virtual Volumes enables Application-specific requirements to drive storage provisioning decisions while leveraging the rich set of Capabilities provided by existing Storage Arrays.

Read on to learn about some of the primary benefits delivered by Virtual Volumes.


 

Streamlined Storage Operations

Virtual Volumes simplifies storage operations by automating manual tasks and eliminating operational dependencies between the vSphere Admin and the Storage Admin. Provisioning is faster, and change management is simpler as the new operational model is built upon policy-driven automation. 


 

 

Finer Controls

Virtual Volumes simplifies the delivery of storage service levels to applications by providing administrators with finer control of storage resources and data services at the VM level that can be dynamically adjusted in real time. 


 

 

Improved Resource Utilization

Virtual Volumes improves resource utilization by enabling more flexible consumption of storage resources, when needed and with greater granularity. The precise consumption of storage resources eliminates over-provisioning. The Virtual Datastore defines capacity boundaries, access logic, and exposes a set of data services accessible to the virtual machines provisioned in the pool. 


 

 

Flexible Consumption Model

Virtual Datastores are purely logical constructs that can be configured on the fly, when needed, without disruption and don’t require formatting with a file system. 


 

 

Benefits Summary

 

Historically, vSphere storage management has been based on constructs defined by the storage array: LUNs and filesystems. A storage administrator would configure array resources to present large, homogeneous storage pools that would then be consumed by vSphere administrators.

Since a single, homogeneous storage pool would potentially contain many different applications and virtual machines, this approach resulted in needless complexity and inefficiency. vSphere administrators could not easily specify requirements on a per-VM basis.

Changing service levels for a given application usually meant relocating the application to a different storage pool. Storage administrators had to forecast well in advance what storage services might be needed in the future, usually resulting in the over-provisioning of resources.

With Virtual Volumes, this approach is fundamentally changed. vSphere administrators use policies to express application requirements to a storage array. The storage array responds with an individual storage container that precisely maps to application requirements and boundaries.

 

 

Summary, cont.

 

Typically, the virtual datastore is the lowest granular level at which data management occurs from a storage perspective. However, a single virtual datastore contains multiple virtual machines, which might have different requirements. With the traditional approach, differentiation on a per virtual machine level is difficult.

The Virtual Volumes functionality allows for the differentiation of virtual machine services on a per-application level by offering a new approach to storage management.

Rather than arranging storage around features of a storage system, Virtual Volumes arranges storage around the needs of individual virtual machines, making storage virtual machine centric.

Virtual Volumes map virtual disks and their respective components directly to objects, called virtual volumes, on a storage system. This mapping allows vSphere to offload intensive storage operations such as snapshot, cloning, and replication to the storage system.

Now that you understand the Benefits, let's review the Requirements that are necessary to use Virtual Volumes in our final Lesson of this Module.

 

Lesson 3: Requirements


Virtual Volumes has the following Software, Hardware and Licensing requirements.


 

Software

The use of Virtual Volumes requires the following software components:

 

 

Hardware

 

There are over 150 array models certified for Virtual Volumes from multiple Partners with even more on the way.

The use of vSphere Virtual Volumes requires the following hardware components:

The VMware Compatibility Guide for Virtual Volumes makes it easy to answer requirement questions like:

 

 

License

The use of vSphere Virtual Volumes requires the following license:

 

Conclusion


In this module you learned about the benefits that Virtual Volumes provides, the requirements necessary for Virtual Volumes and also added your first Virtual Volume to vSphere.


 

You've finished Module 1

Congratulations on completing Module 1!

If you are looking for additional information on a Virtual Volumes Overview:

The Virtual Volumes Compatibility Guide can be found here:

Proceed to any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 2 - Virtual Volumes Architecture (30 minutes)

Introduction


In this Module you will learn about all of the Key Components that enable Virtual Volumes.  

You will then familiarize yourself with each of those working components within our Lab Environment.

 


Key Components


The following is a summarized description and definition of the key components of vSphere Virtual Volumes.


 

Virtual Volumes (VVols)

 

Virtual Volumes are considered a type of virtual machine object, which are created and stored natively on the storage array. VVols are stored in storage containers and mapped to virtual machine files/objects such as VM swap, VMDKs and their derivatives.

There are five different Virtual Volumes object types, and each maps to a specific virtual machine file.
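One commonly documented breakdown of the five object types (per VMware's vSphere Storage guide; exact naming varies slightly between vSphere versions) can be summarized as:

```python
# Illustrative summary of the five VVol object types and the VM files they
# typically map to, per VMware's vSphere Storage documentation. Treat the
# descriptions as a study aid, not an authoritative specification.
VVOL_TYPES = {
    "Config-VVol":   "VM home/metadata: .vmx, descriptor, log and nvram files",
    "Data-VVol":     "Virtual disk contents (.vmdk data)",
    "Swap-VVol":     "VM swap file, created when the VM is powered on",
    "Snapshot-VVol": "Memory state captured with a VM snapshot",
    "Other-VVol":    "Vendor-specific solution data",
}

for vvol_type, maps_to in VVOL_TYPES.items():
    print(f"{vvol_type:14} -> {maps_to}")
```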

 

 

Vendor Provider (VP)

 

The vendor provider, also known as the vSphere Storage APIs for Storage Awareness (VASA) provider, is a storage-side software component that acts as a storage awareness service for vSphere and mediates out-of-band communication between vCenter Server and ESXi hosts on one side and a storage system on the other. Storage vendors exclusively develop vendor providers.

ESXi hosts and vCenter Server connect to the Vendor Provider and obtain information about available storage topology, capabilities, and status.

Subsequently vCenter Server provides this information to vSphere clients, exposing the capabilities around which the administrator might craft storage policies in SPBM.

Vendor Providers are typically set up and configured by the vSphere administrator in one of two ways:

 

 

Storage Container (SC)

 

Unlike traditional LUN and NFS based vSphere storage, the Virtual Volumes functionality does not require preconfigured volumes on the storage side.

Instead, Virtual Volumes uses a storage container, which is a pool of raw storage capacity and/or an aggregation of storage capabilities that a storage system can provide to virtual volumes.

Depending on the storage array implementation, a single array may support multiple storage containers. Storage Containers are typically set up and configured by storage administrators.
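As a rough mental model, a storage container is raw capacity plus a capability set, with virtual volumes carved out on demand and no filesystem formatting. The following Python sketch is illustrative only (not an array API); the capability names and sizes are hypothetical:

```python
# Rough model (assumptions, not an array API) of a storage container: a pool
# of raw capacity plus a set of capabilities, with no pre-created LUNs and no
# filesystem formatting -- virtual volumes are carved out of it on demand.

class StorageContainer:
    def __init__(self, capacity_gib, capabilities):
        self.capacity_gib = capacity_gib
        self.capabilities = capabilities    # advertised to vSphere via VASA
        self.vvols = {}                     # vvol name -> size in GiB

    def free_gib(self):
        return self.capacity_gib - sum(self.vvols.values())

    def create_vvol(self, name, size_gib):
        # A vvol is created on demand; only the capacity boundary applies.
        if size_gib > self.free_gib():
            raise ValueError("container out of capacity")
        self.vvols[name] = size_gib

sc = StorageContainer(100, {"snapshots", "dedupe"})   # hypothetical values
sc.create_vvol("vm01-data", 40)
print(sc.free_gib())    # -> 60
```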

Containers are used to define:

 

 

Virtual Datastore

 

A Virtual Datastore represents a storage container in a vCenter Server instance and the vSphere Web Client. A Virtual Datastore represents a one-to-one mapping to the storage system’s storage container.

The storage container (or Virtual Datastore) represents a logical pool where individual Virtual Volumes, such as VMDKs, are created.

Virtual Datastores are typically set up and configured by vSphere administrators.

 

 

Protocol Endpoints (PE)

 

Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate.

ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.

Protocol Endpoints are compatible with all SAN/NAS industry standard protocols:

Protocol Endpoints are set up and configured by Storage Administrators.
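The on-demand data path described above can be modeled as a simple bind/unbind exchange. The sketch below is illustrative only; the class, method names, and the identifier format are hypothetical, not an ESXi or VASA API:

```python
# Minimal model (assumptions, not an API) of the on-demand bind described
# above: the ESXi host never addresses a virtual volume directly; it asks for
# a bind through a protocol endpoint and gets back a data path for I/O.

class ProtocolEndpoint:
    """Logical I/O proxy: one visible target, many virtual volumes behind it."""
    def __init__(self, name):
        self.name = name
        self.bound = {}            # vvol id -> per-bind identifier (data path)

    def bind(self, vvol_id):
        # The array returns a per-bind identifier; the format here is made up.
        self.bound[vvol_id] = f"{self.name}:bind-{len(self.bound) + 1}"
        return self.bound[vvol_id]

    def unbind(self, vvol_id):
        self.bound.pop(vvol_id)    # data path torn down when no longer needed

pe = ProtocolEndpoint("naa.600a098038303530")   # hypothetical PE identifier
path = pe.bind("vm01-data-vvol")   # e.g. at VM power-on
print(path)                        # host now has a data path for I/O
pe.unbind("vm01-data-vvol")        # e.g. at VM power-off
```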

 

 

Putting it all together

 

Here you can see the relationships between the Key Virtual Volumes Components, including views from a top-down (VI Admin) perspective and bottom-up (Storage Admin) perspective.  

Published Capabilities will vary depending on Array types.

Once all of these components are in place, VI Admins can easily create VM Storage Policies to take advantage of Array capabilities as part of automating the Software Defined Datacenter.

 

Configuration Review


Now that you know the different components that make up the Virtual Volumes Architecture, let's examine how they are configured and utilized in our Lab Environment.

Our Lab is using nested virtualization technology from Nimble and EMC so that you can compare different Vendor approaches to Virtual Volumes:

Nimble

EMC


 

Open Chrome Browser from Windows Quick Launch Task Bar

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

vCenter Login

 

  1. Select the checkbox for "Use Windows session authentication".
  2. Click Login

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

 

 

Examine Vendor (VASA) Providers

 

  1. Hover over the Home Menu
  2. Select Storage

 

 

Examine EMC Storage Containers

 

  1. Open a new Tab in your browser and select the EMC Unisphere on the bookmarks bar.

Login with the User: admin | Password: VMware1!  

NOTE: Upon login, you may receive a notification indicating "Software Upgrade Successful" - you may safely close and ignore this message and continue the lab.

 

 

Examine Nimble Storage Containers

 

  1. Open a new Tab in your browser and select the Nimble-OS1 on the bookmarks bar.

Login with the User: vmadmin | Password: VMware1!  

 

Note: When logging into the Nimble arrays in this lab you may be presented with a Usage Warning Message - Simply Click "OK" to continue.

 

 

Examine Virtual Datastores

 

  1. Hover over the Home menu
  2. Select Storage

 

 

Datastores

 

  1. Select RegionA01
  2. Click Datastores  
  3. Select Datastores

Note that there are multiple Virtual Datastores of type "VVOL" present that were added to vCenter by a VI Admin.

These Virtual Volumes are a one-to-one mapping to their respective EMC and Nimble Storage Containers and are ready for Virtual Machine usage.

Note: Based on the Modules you have completed in this Lab, you will see different datastores available. It is safe to continue to your next step if your datastores are not identical to those listed in the screenshot.

You have confirmed all of the key VVol Components within our Lab Environment.  In a later Lab Module, you will create new Virtual Volumes all the way from End-to-End (Storage Array to vSphere).

 

Conclusion


In this Module you learned about the Key Components that drive Virtual Volumes Architecture and also examined them in action within our Lab Environment.


 

You've finished Module 2

Congratulations on completing Module 2!

If you are looking for additional information on Virtual Volumes Architecture, try one of these:

Proceed to any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 3 - Provisioning Virtual Volumes (45 minutes)

Introduction


In this Module you will learn how Virtual Volumes are provisioned from end-to-end (Storage Array to vSphere) utilizing multiple Storage Vendor Appliances.

 


Working with EMC Virtual Volumes


This Lesson will show how Virtual Volumes (VVols) can be configured on the EMC Unity platform.

Historically, virtual storage management has been defined by the presented storage system resources of an array, such as LUNs and File systems. These large storage resources could potentially contain many different applications and virtual machine data, resulting in complex management and inefficient usage of resources, where VI administrators could not specify requirements on a per-VM basis.

As a result, changing service levels meant administrators would need to relocate VMs to different LUNs/Storage Pools, which adds significant management overhead. VVols is an integration and management framework that delivers a new model for provisioning storage (Block or File) for virtual machine (VM) objects. This means that instead of managing VMs at the LUN or file system level, individual VMs can be managed using this new architecture.

VVols provides administrators finer control over their storage resources and data services at the VM level, which can be adjusted dynamically to the needs of the application.


 

Open Chrome Browser from Windows Quick Launch Task Bar

 

 

EMC - Storage Admin Steps

 

In this lesson you will see the end-to-end provisioning of a new EMC UnityVSA based Virtual Volume beginning with the steps that a Storage Administrator must perform on the Array and finishing with the steps that a VI Admin performs in order to add the VVol to vCenter.

  1. Open a New Tab in your Browser
  2. Select the EMC Unisphere shortcut on the bookmarks bar.

Login with the User: admin | Password: VMware1!  

 

 

VI Admin Steps

 

Now that the Unity Array Storage Container and Protocol Endpoints have been created, you can create a Virtual Datastore within vCenter to complete our Virtual Volumes provisioning exercise.

  1. Click the vSphere Web Client Browser Tab to return to vCenter.

If you need to re-authenticate, you can select the checkbox for "Use Windows session authentication" and click Login.

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

 

Working with Nimble Virtual Volumes



 

Nimble - Storage Admin Steps

 

In this lesson you will see the end-to-end provisioning of a new Nimble Virtual Volume beginning with the steps that a Storage Administrator must perform on the Array and finishing with the steps that a VI Admin performs to add the VVol to vCenter.

  1. Open a New Tab in your Browser
  2. Select the Nimble-OS1 shortcut on the bookmarks bar.  

Login with the User: vmadmin | Password: VMware1!

 

 

 

VASA Provider (VVols)

 

We will be provisioning a Block VVol Datastore on the Nimble Array that is accessible via vCenter using the iSCSI protocol. We will first verify that the Nimble array can communicate with vCenter through VASA (this was pre-configured for you).

  1. Select Administration from the menu bar.
  2. Click VMWARE INTEGRATION

 

 

 

Test VASA on Nimble

 

Note that there is currently a registration for vcsa-01a, and the checkbox labeled VASA Provider (VVols) is currently checked.

  1. Click Test STATUS

 

 

 

Nimble Folders

 

Nimble utilizes folders, also referred to as storage containers, to house the virtual volumes. To view the current folder configuration:

  1. Select Manage from the menu bar
  2. Click Data Storage

 

 

View Nimble Folder and Properties

 

Here you can see the default Folder that was created labeled NimOS1-VVol.

Once vSphere starts using this container for virtual volumes you will see VMDK and other files stored natively within the folder.

 

 

Performance Policies

 

Our final step is to view the Performance Policies

  1. Select Manage from the menu bar
  2. Click Performance Policies

 

 

Performance Policies

 

There are a variety of pre-created Performance Policies, as well as the capability to create our own. These policies will be used in vSphere during the creation of the VM Storage Policy.

 

 

Create a New VVol on the Nimble Storage Array

 

We will now create a new VVol on the Nimble Array:

  1. Select Manage on the menu bar
  2. Click Data Storage

 

 

You are presented with the above screen.  VVols on the Nimble Array are created in Folders.  

  1. Click Folders.

 

 

We'll use the Nimble VVols wizard to create our volume

  1. Click the "+"

 

You are presented with the "Create Folder" dialog box.  Follow the example screenshot and instructions below to complete the required fields.

  1. Enter "NimOS1-VVol-2" in the Name field
  2. Choose "VMware virtual volumes" Management Type
  3. Choose "vcsa-01a" as the vCenter Server
  4. Enter 10 GiB Usage Limit
  5. Click Create

 

 

Upon creation, you are directed back to the Folder View. Notice NimOS1-VVol-2 is now available.

 

 

VI Admin Steps

 

We will now return to the vSphere Web Client to provision our Nimble storage based Virtual Volume.

  1. Click vSphere Web Client on the Windows Task Bar

 

Conclusion


In this Module you learned how to successfully provision Virtual Volumes from end-to-end on multiple Storage Arrays and the vSphere Platform.


 

You've finished Module 3

Congratulations on completing Module 3!

If you are looking for additional information on Virtual Volumes, try this:

Proceed to any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 4 - Storage Policy Based Management With Virtual Volumes (30 minutes)

Introduction


In this Module you will see the power of the Software Defined Datacenter in action.  

By creating and applying Storage Policies to a Virtual Volumes based Virtual Machine you will learn how easy it is to change the storage characteristics of this VM, on-the-fly, all without requiring an outage.


Creating Storage Policies


Storage Policies are used for Storage Policy-Based Management (SPBM). They are authored by the vSphere admin to describe the desired storage attributes of a Datastore. VMs are deployed using these policies to ensure they are stored on a compliant Datastore. This enables vSphere admins to maintain the appropriate service level, performance, data services and so on, without the need to track the specific storage details of each individual Datastore.

The Storage capabilities themselves are configured and managed on the storage systems by Storage Administrators. Storage capabilities are presented to vSphere via the VASA APIs (aka the Vendor Provider) in the form of data services and unique storage system features.

A vSphere admin maps the storage capabilities presented to vSphere and organizes them into a set of rules designed to capture the quality of service requirements for Virtual Machines and their applications. These rules are saved in vSphere in the form of a VM Storage Policy.

Virtual Volumes utilize these Storage Policies for management related operations such as Virtual Machine placement decisions on different disk speeds, RAID levels, quality of service levels, disk encryption capabilities, and so on.

The power of Storage Policies is that they can be applied to a Virtual Machine on-the-fly, with no manual intervention required to reconfigure Storage resources and no interruption to the Virtual Machine's availability.
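The matching logic behind SPBM placement can be modeled in a few lines of Python. This is a toy sketch, not the actual SPBM API; the datastore names and capability keys below are hypothetical:

```python
# A toy model (not the SPBM API) of the matching described above: a policy is
# a set of required capabilities, each datastore advertises what its array
# exposes via VASA, and placement selects only compliant datastores.

def compliant_datastores(policy_rules, datastores):
    """Return datastores whose advertised capabilities satisfy every rule."""
    return [
        name for name, capabilities in datastores.items()
        if all(capabilities.get(k) == v for k, v in policy_rules.items())
    ]

datastores = {                      # hypothetical capability sets
    "VVol-Nimble": {"snapshots": True, "encryption": False, "tier": "flash"},
    "VVol-Unity":  {"snapshots": True, "encryption": True,  "tier": "hybrid"},
}
gold_policy = {"snapshots": True, "tier": "flash"}

print(compliant_datastores(gold_policy, datastores))   # -> ['VVol-Nimble']
```

Because the policy, not the datastore, carries the requirements, changing a VM's service level is a matter of assigning a different policy rather than manually relocating it and reconfiguring storage.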


 

Open Chrome Browser from Windows Quick Launch Task Bar

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Lab Storage

Let's examine the Storage that is available in our Lab Environment so we know what we have to work with prior to creating our new Storage Policies.

 

 

vSphere Web Client

 

Let's examine the Virtual Volume within vCenter.

  1. Click the vSphere Web Client Browser Tab
  2. Hover over the Home Menu
  3. Click Storage

If you need to re-authenticate to vSphere:

  1. Select the checkbox for "Use Windows session authentication".
  2. Click Login

Alternatively, you can enter a User name of administrator@corp.local and a password of VMware1!

 

Assigning Storage Policies to Virtual Machines and Migrating Between VVols


In this lesson you will create a new Virtual Machine that utilizes both of our new Storage Policies.


 

Create Virtual Machine

 

  1. Hover over the Home Menu
  2. Select Hosts and Clusters

 

 

vSphere Web Client

 

  1. Click the vSphere Web Client Browser Tab to return to vCenter

 

Do not make any changes to this screen.

  1. Click Next

 

 

Managing Virtual Machines

 

  1. Hover over the Home Menu
  2. Select Hosts and Clusters

In our last remaining exercise we will migrate (storage vMotion) our recently built Virtual Machine between the UnityVSA and Nimble Storage Array.  The Storage Policies that we have just created will dictate storage placement.

 

 

Nimble Storage Manager

 

  1. Select the Nimble-OS1 shortcut on the bookmarks bar.

Login with the User: vmadmin | Password: VMware1!

 

 

 

vSphere Web Client

 

We will view the Storage Policy that we migrated to

  1. Select vSphere Web Client

 

Conclusion


In this Module you learned how to create and utilize Storage Policies for Virtual Volumes based Virtual Machines.

The ability to leverage Policies for automation is a Key Tenet of VMware's Software Defined Datacenter Vision.


 

You've finished Module 4

Congratulations on completing Module 4!

If you are looking for additional information on Virtual Volumes Storage Policy-Based Management, try one of these:

Proceed to any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Module 5 - Virtual Volumes New Features (45 minutes)

Introduction


VVols in vSphere 6.5 introduced support for array-based replication through Storage Policy-Based Management and failover through PowerCLI scripts.  In this lesson we will work through setting up replication and failing over a VM.


Array Based Replication with Nimble Storage



 

Introduction

While many vendors support VVol array-based replication, in the lab we will be working with the Nimble storage array.  Setting up replication with the Nimble array and vCenter requires a few steps, all of which the lab will cover.

  1. First, we will establish replication partners between two VVol folders on the Nimble arrays.
  2. Next, we will create a new storage-based policy for the Nimble array that controls the replication schedule of the VM.
  3. Then, we will create a basic VM to which we will assign the storage-based policy.
  4. Finally, we will review the VM on the array to ensure replication occurred.
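The setup above hinges on mutual registration: the partner test only succeeds once both arrays know each other and the shared secrets match. A small Python sketch (illustrative, not Nimble's actual API) models this behavior:

```python
# Sketch (illustrative, not Nimble's API) of why the partner test fails until
# both arrays are configured: each side must register the other with a
# matching shared secret before the connection can be verified.

class ReplicationArray:
    def __init__(self, name):
        self.name = name
        self.partners = {}                  # partner name -> shared secret

    def add_partner(self, partner_name, secret):
        self.partners[partner_name] = secret

    def test_partner(self, partner):
        # Succeeds only if both sides registered each other with the same secret.
        return (partner.name in self.partners
                and self.name in partner.partners
                and self.partners[partner.name] == partner.partners[self.name])

os1 = ReplicationArray("group-nimble-os1")
os2 = ReplicationArray("group-nimble-os2")

os1.add_partner("group-nimble-os2", "VMware1!")
print(os1.test_partner(os2))    # -> False: second array not yet configured

os2.add_partner("group-nimble-os1", "VMware1!")
print(os1.test_partner(os2))    # -> True: both sides registered
```

This is exactly why the "Test" you run after configuring only the first array reports a failure, and why it succeeds after the second array is set up.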

 

 

Establish Replication Partners on the Nimble Array

 

  1. Open a new tab in your browser and select the Nimble-OS1 shortcut on the bookmarks bar.

 

 

Login with the User: vmadmin | Password: VMware1!  

 

You are presented with the Nimble array dashboard.  

  1. Choose Manage from the menu bar
  2. Click Data Protection

 

On the Data Protection page

  1. Click Replication Partners

 

 

On the Replication Partners page we will establish a connection to our second Nimble array.  

  1. Click on the "+" to add a new connection.

 

You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our second Nimble array.  

  1. Partner Name: group-nimble-os2
  2. Hostname/IP: 192.168.110.203
  3. Shared Secret: VMware1!
  4. Inbound Location: Click the arrow next to default and choose the folder labeled NimOS1-VVol
  5. Click Next

 

 

In the QoS dialog, you have the option of creating a policy that sets bandwidth limits on the replication.  In production environments, this might be a requirement.  In the lab, we will not create a policy, thus using all available bandwidth for the replication.

  1. Click Create

 

You are returned to the "Replication Partners" screen.  Initially your connection will have a status of "unreachable".  

  1. Select the checkbox to the left of the replication partner you just created
  2. Click Test

 

This test will fail because we haven't yet configured the replication partner on the second array.  

  1. Click OK

 

 

 

Set Up Replication On The Second Nimble Array

 

Open a new tab in the Google Chrome browser and click the "Nimble-OS2" bookmark.

Login with the User: vmadmin | Password: VMware1!

 

 

We will repeat the exact same steps as we previously did on Nimble-OS1 to create the replication partner. Upon login, you are presented with the Nimble Array dashboard.  

  1. Choose Manage from the menu bar
  2. Click Data Protection

 

On the Data Protection page

  1. Click Replication Partners

 

 

On the Replication Partners page we will establish a connection to our first Nimble array.

  1. Click on the "+" to add a new connection.

 

You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our first Nimble array.  

  1. Partner Name: group-nimble-os1
  2. Hostname/IP: 192.168.110.201
  3. Shared Secret: VMware1!
  4. Inbound Location: Click the arrow next to default and choose the folder that is displayed.
  5. Click Next

 

 

  1. Click Create on the "QoS Policy" screen.

 

 

You are returned to the "Replication Partners" screen.  Initially the status should be listed as "Unreachable", just as in the setup on the first array.

  1. Check the box next to the replication partner you just set up.  
  2. Click Test

 

The status should now show "Alive", because both arrays are configured as replication partners.

 

 

We'll now return to the first array and run the same test.

  1. Click on the Nimble-OS1 tab in the browser

 

 

  1. Select the checkbox next to the replication partner you set up
  2. Click Test  

 

 

This will change the status to "Alive", as the first array is now set up to communicate with the second array.

This completes the replication setup requirements on the arrays.  We now need to return to vCenter to complete the setup and the exercise.

 

 

Create Replication Storage Policy

 

  1. Click the vSphere Web Client browser tab to return to vCenter
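An equivalent replication policy can also be built from the command line with PowerCLI's SPBM cmdlets. This is a hedged sketch: Nimble capability names are vendor-specific, so the code discovers them at runtime rather than hard-coding them, and the policy name is an assumption.

```powershell
# Hedged sketch: creating a replication-capable storage policy with PowerCLI SPBM cmdlets.
# Capability names vary by VASA provider, so discover them instead of assuming exact names.
$caps = Get-SpbmCapability | Where-Object { $_.Name -match 'Nimble' }
$caps | Select-Object Name, ValueType          # inspect what the Nimble provider advertises

# Pick a replication-related capability (exact name and value type are environment-specific)
$repCap = $caps | Where-Object { $_.Name -match 'replication' } | Select-Object -First 1
$rule   = New-SpbmRule -Capability $repCap -Value $true   # value depends on the capability's type

# Build the policy from a single rule set containing the rule
New-SpbmStoragePolicy -Name 'Nimble-Replication' `
    -Description 'Replicate VVols to the partner array' `
    -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $rule)
```

In the lab you will create the policy through the vSphere Web Client wizard instead; the sketch only shows how the same rule structure (rule sets of capability rules) maps onto cmdlets.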

 

Automating Fail Over with Nimble Storage



 


 

View the VM before failover; note the Related Objects and VM Policies under the Summary tab.

  1. View Storage Location
  2. View Storage Policies

 

 

Navigate back to Nimble-OS1 to view the source side prior to failover

  1. Note the volumes are online

 

Navigate to Nimble-OS2 to view the destination side prior to failover

  1. Note the volumes are offline

 

Before beginning the failover process, we have to take an initial snapshot at the Volume Collection level.  

  1. Make certain you are on the Nimble-OS1 array
  2. Select Manage
  3. Choose Data Protection
  4. Click on "Volume Collections".

 

 

  1. Check the box next to the volume collection.

 

 

  1. Click the Take Snapshot button.

 

 

  1. Name: Initial
  2. Replication: Checked
  3. Choose OK

 

 

We'll fail over using a PowerShell script:

  1. Open PowerShell from the taskbar

 

  1. Enter .\FailoverNimble.ps1 at the command prompt

 

  1. Enter the name of the VM: Nimble-ReplicationVM
  2. Press Enter

 

 

  1. View the execution of the script

 

  1. Notice the VM powers off while the script runs

 

 

Monitor the script and wait until it completes before continuing. (It may be necessary to press Enter multiple times for the script to fully complete.)

You will see a C:\> prompt when the failover script has completed.
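The lab's FailoverNimble.ps1 is provided pre-built, so its exact contents are not shown in this manual. A VVol failover of this kind is typically driven by the SPBM replication cmdlets introduced in PowerCLI 6.5; the sketch below is a hedged reconstruction of such a flow, not the actual lab script.

```powershell
# Hedged sketch of a VVol replication failover flow (not the actual FailoverNimble.ps1).
$vm = Get-VM -Name 'Nimble-ReplicationVM'
Stop-VM -VM $vm -Confirm:$false                    # the VM powers off during failover

# Locate the source replication group protecting the VM and its target-side pair
$srcGroup = Get-SpbmReplicationGroup -VM $vm
$pair     = Get-SpbmReplicationPair -Source $srcGroup
$tgtGroup = $pair.Target

# Prepare the source side, then fail over on the target side;
# the failover cmdlet returns the recovered VM's config file paths
Start-SpbmReplicationPrepareFailover -ReplicationGroup $srcGroup
$vmxPaths = Start-SpbmReplicationFailover -ReplicationGroup $tgtGroup -Confirm:$false

# Register the recovered VM in the compute cluster and power it on
$pool  = Get-Cluster -Name 'RegionA01-COMP02' | Get-ResourcePool | Select-Object -First 1
$newVm = New-VM -VMFilePath $vmxPaths[0] -ResourcePool $pool
Start-VM -VM $newVm
```

This mirrors what you observe in the next steps: the source volumes go offline, the destination volumes come online, and the VM is registered and powered on in the RegionA01-COMP02 cluster.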

 

Navigate to the Nimble-OS1 array and note the Volumes on Nimble-OS1 are now Offline

 

Navigate to the Nimble-OS2 array and note the Volumes on Nimble-OS2 are now Online

 

Navigate to the RegionA01-COMP02 cluster in the vSphere Web Client and note the Nimble-ReplicationVM Virtual Machine is back online.

 

Conclusion


In this Module you learned how to configure Array Based Replication with Nimble Storage as well as automate the failover of a VM.


 

You've finished Module 5 and the Virtual Volumes (VVOLs) and Storage Policy Based Management Hands On Lab!

Congratulations on completing Module 5 and the HOL-1827-01-HCI - Virtual Volumes (VVOLs) and Storage Policy Based Management Hands On Lab!

If you are looking for additional information on Virtual Volumes Storage Policy-Based Management, try one of these:

Review any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab click on the END button.  

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-1827-01-HCI

Version: 20170920-132955