VMware Hands-on Labs - HOL-2205-02-HCI


Lab Overview - HOL-2205-02-HCI - VMware vSphere Virtual Volumes and Storage Policy Based Management

Lab Guidance


Note: It may take more than 90 minutes to complete this lab. You may only finish 2-3 of the modules during your time.  However, you may take this lab as many times as you want. The modules are independent of each other so you can start at the beginning of any module and proceed from there. Use the Table of Contents to access any module in the lab. The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.

Virtual Volumes (vVols) is an integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure.

vVols simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed. It simplifies the delivery of storage service levels to individual applications by providing finer control of hardware resources and native array-based data services that can be instantiated with virtual machine granularity.

With Virtual Volumes (vVols), VMware offers a new paradigm in which an individual virtual machine and its disks, rather than a LUN, becomes a unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system.

This lab will guide you through the introduction, provisioning, and advanced features of vVols.

Lab Module List:

  • Module 1 - Overview and Requirements – Higher Level Concepts (15 minutes)
  • Module 2 - Virtual Volumes Architecture – Exploring the Details (15 minutes)
  • Module 3 - Provisioning Virtual Volumes (30 minutes)
  • Module 4 - Storage Policy Based Management with Virtual Volumes (30 minutes)
  • Module 5 - Virtual Volumes Advanced - SRM Integration and Failover (45 minutes)
  • Module 6 - vVol Storage Vendor Partners (15 minutes)

Lab Captain:

  • Tim Koishor - Staff Systems Engineer. Dallas, TX, USA

This lab manual can be downloaded from the Hands-on Labs document site found here:

http://docs.hol.vmware.com

This lab may be available in other languages.  To set your language preference and view a localized manual deployed with your lab, utilize this document to guide you through the process:

http://docs.hol.vmware.com/announcements/nee-default-language.pdf


 

First time using Hands-on Labs?

 

Welcome! If this is your first time taking a lab, navigate to the Appendix in the Table of Contents to review the interface and features before proceeding.

For returning users, feel free to start your lab by clicking Next in the manual.

 

 

You are ready... is your lab?

 

Please verify that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes.  If after 5 minutes your lab has not changed to "Ready", please ask for assistance.

 

Module 1 - Overview and Requirements – Higher Level Concepts (15 minutes)

Introduction


Welcome to the Virtual Volumes Lab!

In this Module, you will be given an overview of Virtual Volumes (vVols) and add your first vVol to vCenter. You will also learn about the benefits that vVols provide, as well as the requirements necessary to utilize VMware Virtual Volumes.

 


Overview and Requirements


vSphere Virtual Volumes, or vVols, implements the core tenets of the VMware Software Defined Storage vision to enable a fundamentally more efficient operational model for external storage in virtualized environments, centering it on the application instead of the physical infrastructure.

vVols enables application-specific requirements to drive storage provisioning decisions while leveraging the rich set of capabilities provided by existing storage arrays. Some of the primary benefits delivered by vVols are focused around operational efficiencies and flexible consumption models on a per-application (per-VM) basis at the hypervisor level using Storage Policy-Based Management (SPBM).

Virtual Volumes is an industry-wide initiative that allows customers to leverage the unique capabilities of their current storage investments and transition - without disruption - to a simpler and more efficient operational model optimized for virtual environments that works across all storage types.


 

Did you know?

With Virtual Volumes, there are two big changes to how Virtual Machine Snapshots work:

  1. Snapshots are no longer managed by vSphere.  You still create them the same way via the vSphere client; however, all of the snapshots are actually managed on the Storage Array.
  2. With traditional VMFS based Virtual Machine snapshots, the base disk becomes read-only and all changes are written to delta files.  Sounds familiar, right?  Once you delete a snapshot, those delta files all have to be merged back into the base disk which is both resource intensive and time consuming.  With Virtual Volumes, the base disk remains read/write when a snapshot is taken (the delta snap files hold the original data when a change is made).  When you delete a snapshot, you simply discard the snap delta files, since the base disk already contains all of the latest data.

This results in an incredible benefit in performance and efficiency (with an additional benefit being that you no longer have to worry about keeping 'too many' snapshots around)!
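If you prefer to drive snapshots from the API rather than the vSphere client, the call is exactly the same one used for VMFS-backed VMs; the array-side behavior described above is transparent to the caller. Below is a minimal pyVmomi sketch using this lab's default vCenter address and credentials; the VM name is a placeholder.

```python
# Minimal pyVmomi sketch: create, then delete, a snapshot on a vVol-backed VM.
# The VM name below is a placeholder; credentials are this lab's defaults.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcsa-01a.corp.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "my-vvol-vm")  # placeholder

    # Same vSphere call as for VMFS; on vVols the array creates the snapshot
    # objects and the base disk stays read/write.
    WaitForTask(vm.CreateSnapshot_Task(name="before-change",
                                       description="demo snapshot",
                                       memory=False, quiesce=False))

    # Deleting the snapshot discards the delta vVols; nothing is merged back
    # into the base disk.
    snap = vm.snapshot.rootSnapshotList[0].snapshot
    WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))
finally:
    Disconnect(si)
```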

 

 

Storage Optimized for Virtualized Environments

 

Virtual Volumes enable Virtual Machine aware storage and policy based management across heterogeneous arrays.

Let's jump right in and create a Virtual Volume!

 

 

Open Chrome Browser from Windows Quick Launch Task Bar

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Launch vSphere using the HTML5 Client

 

For the purposes of this Hands On Lab, we will be using the HTML5-based vSphere client.

  1. Select RegionA
  2. Click vcsa-01a Web Client

 

 

Bypass Chrome pop-up (If Applicable)

 

If you experience the Chrome "cip" pop-up, you can safely choose to Always open links of this type and continue.

  1. Select the Checkbox
  2. Click Open vmware-cip-launcher.exe

 

 

vCenter Login

 

  1. Click Login

If the username and password are not pre-populated, enter the following

Username: administrator@vsphere.local

Password: VMware1!

 

 

Navigate to Storage

 

  1. Left-Click on "Menu"
  2. Select Storage.

 

 

Add Storage

 

Expand vcsa-01a.corp.local if necessary

  1. Right-click RegionA01 datacenter
  2. Select Storage
  3. Select New Datastore...

 

 

Select Type

 

  1. Select vVOL
  2. Click Next

 

 

Name and container selection

 

In this case, we are creating a vVol that communicates to a Storage Array via an iSCSI based protocol.

  • A Storage Administrator has performed the necessary work in our lab to make this particular Storage Container accessible.  Notice that this disk is being made available via a Nimble Virtual Storage Appliance (VSA).
  1. Enter the Datastore Name:  nimblea-module1
  2. Choose the "nimblea-vvol-module1" as the Backing Storage Container
  3. Click Next

 

 

Select Hosts Accessibility

 

  1. Select the esx-01a and esx-02a hosts.
  2. Click Next

 

 

Ready to complete

 

  1. Review settings and click Finish

 

 

Confirm vVol Creation

 

  1. Select RegionA01
  2. Select the Datastores Tab
  3. Notice that the newly created Virtual Volume (nimblea-module1) is showing Normal (online) status.

The Virtual Volume is ready to be used by Virtual Machines!
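For readers who automate, the wizard you just completed maps to a single API call per host. The sketch below is a hedged pyVmomi illustration using the HostDatastoreSystem.CreateVvolDatastore method from the vSphere API reference; the storage container UUID (scId) is normally reported by the array's VASA provider, so the value shown is only a placeholder.

```python
# Hedged pyVmomi sketch of the New Datastore wizard: mount a vVol storage
# container as a datastore on the selected hosts. The scId is a PLACEHOLDER;
# a real value comes from the array's VASA provider.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcsa-01a.corp.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    spec = vim.host.DatastoreSystem.VvolDatastoreSpec(
        name="nimblea-module1",
        scId="vvol:00000000000000000000000000000000")  # placeholder scId
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        if host.name in ("esx-01a.corp.local", "esx-02a.corp.local"):
            # One call per host, mirroring the wizard's host-accessibility step.
            host.configManager.datastoreSystem.CreateVvolDatastore(spec)
finally:
    Disconnect(si)
```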

 

 

Congratulations!

In just a few simple steps, you have added a Virtual Volume to your vSphere lab environment.

If it feels like the exercise you just went through was nothing more than adding a New Datastore to vCenter, don't worry.  Working with Virtual Volumes is meant to provide a familiar storage experience to Virtual Infrastructure Administrators; however, they provide powerful new capabilities that have not existed previously.

In later modules of this lab you will learn about each of the architectural components that drive Virtual Volumes and see first-hand how Storage Administrators perform the necessary steps to provision storage resources that are consumable via Virtual Volumes.

Virtual Volumes are enabled by features that are exposed via the back-end Storage Array (QoS, Disk Performance Tiers, Snapshotting, Replication, De-Duplication, etc.).  Virtual Machines that use Virtual Volumes can leverage the features that you choose via Software Based Policies, on the fly, without having to make any changes to the Storage Array or re-format LUNs via vSphere.

In the next Lesson you will learn about the benefits that Virtual Volumes provide.  You will then complete the Module with a review of the Software and Hardware Requirements necessary for Virtual Volumes.

 

Benefits


vSphere Virtual Volumes implements the core tenets of the VMware Software-Defined Storage vision to enable a fundamentally more efficient Operational Model for external storage in virtualized environments, centering it on the Application instead of the physical infrastructure.

Virtual Volumes enables Application-specific requirements to drive storage provisioning decisions while leveraging the rich set of Capabilities provided by existing Storage Arrays.

Read on to learn about some of the primary benefits delivered by Virtual Volumes.


 

Streamlined Storage Operations

Virtual Volumes simplifies storage operations by automating manual tasks and eliminating operational dependencies between the vSphere Admin and the Storage Admin. Provisioning is faster, and change management is simpler as the new operational model is built upon policy-driven automation.

 

 

Finer Controls

Virtual Volumes simplifies the delivery of storage service levels to applications by providing administrators with finer control of storage resources and data services at the VM level that can be dynamically adjusted in real time.

 

 

Improved Resource Utilization

Virtual Volumes improves resource utilization by enabling more flexible consumption of storage resources, when needed and with greater granularity. The precise consumption of storage resources eliminates over-provisioning. The Virtual Datastore defines capacity boundaries, access logic, and exposes a set of data services accessible to the virtual machines provisioned in the pool.

 

 

Flexible Consumption Model

Virtual Datastores are purely logical constructs that can be configured on the fly, when needed, without disruption and don’t require formatting with a file system.

 

 

Benefits Summary

 

Historically, vSphere storage management has been based on constructs defined by the storage array: LUNs and filesystems. A storage administrator would configure array resources to present large, homogeneous storage pools that would then be consumed by the vSphere administrator.

Since a single, homogeneous storage pool would potentially contain many different applications and virtual machines, this approach resulted in needless complexity and inefficiency. vSphere administrators could not easily specify requirements on a per-VM basis.

Changing service levels for a given application usually meant relocating the application to a different storage pool. Storage administrators had to forecast well in advance what storage services might be needed in the future, usually resulting in the over-provisioning of resources.

With Virtual Volumes, this approach has fundamentally changed. vSphere administrators use policies to express application requirements to a storage array. The storage array responds with an individual storage container that precisely maps to application requirements and boundaries.

 

 

Summary, cont.

 

Typically, the virtual datastore is the lowest granular level at which data management occurs from a storage perspective. However, a single virtual datastore contains multiple virtual machines, which might have different requirements. With the traditional approach, differentiation on a per virtual machine level is difficult.

The Virtual Volumes functionality allows for the differentiation of virtual machine services on a per-application level by offering a new approach to storage management.

Rather than arranging storage around features of a storage system, Virtual Volumes arranges storage around the needs of individual virtual machines, making storage virtual machine centric.

Virtual Volumes map virtual disks and their respective components directly to objects, called virtual volumes, on a storage system. This mapping allows vSphere to offload intensive storage operations such as snapshots, cloning, and replication to the storage system.
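As a concrete illustration of that offloading, here is a minimal clone sketch in pyVmomi (assuming a connection si, a source VM vm, and a target vVol datastore ds have already been looked up as in the earlier sketches). When the target is a vVol datastore, the copy can be satisfied natively by the array rather than by the host moving blocks.

```python
# Sketch: a full clone issued through vSphere. On a vVol datastore the array
# can service the copy natively. Assumes `vm` (source) and `ds` (target vVol
# datastore) were looked up earlier; the clone name is a placeholder.
from pyVmomi import vim
from pyVim.task import WaitForTask

relocate = vim.vm.RelocateSpec(datastore=ds)
clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)
WaitForTask(vm.CloneVM_Task(folder=vm.parent,   # keep it in the same folder
                            name="demo-clone",  # placeholder name
                            spec=clone_spec))
```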

Now that you understand the Benefits, let's review the Requirements that are necessary to use Virtual Volumes in our final Lesson of this Module.

 

Requirements


Virtual Volumes has the following software, hardware, and licensing requirements.


 

Software

The use of Virtual Volumes requires the following software components:

  • vCenter Server 6.5 or later Appliance (VCSA) or vCenter Server 6.5 or later for Windows
  • ESXi 6.5 or later
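A quick way to confirm these versions from a script, as a minimal sketch against this lab's vCenter (the same check can be pointed at an ESXi host):

```python
# Check the vCenter version against the vVols software requirement (6.5+).
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcsa-01a.corp.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
about = si.content.about
print(about.fullName)  # e.g. "VMware vCenter Server ..."
major, minor = (int(x) for x in about.version.split(".")[:2])
assert (major, minor) >= (6, 5), "vVols requires vCenter Server 6.5 or later"
Disconnect(si)
```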

 

 

Hardware

 

There are over 150 array models from multiple partners that have been certified for Virtual Volumes, with even more on the way.

The use of vSphere Virtual Volumes requires the following hardware components:

  • Any server certified for vSphere 6.5 or later that is listed on the VMware Compatibility Guide
  • A third party storage array system that supports vSphere Virtual Volumes and is able to integrate with vSphere through the VMware APIs for Storage Awareness (VASA)
  • Depending on the vendor specific implementation, the storage array system may or may not require a firmware upgrade in order to support vSphere Virtual Volumes.  Check with your storage vendor for detailed information and configuration procedures.

The VMware Compatibility Guide for Virtual Volumes makes it easy to answer requirement questions like:

  • Which array models support vVols?
  • Which VASA Provider supports which vVols features?
  • Which array vendors support iSCSI vVols?
  • ...and many more!

 

 

License

The use of vSphere Virtual Volumes requires one of the following licenses:

  • vSphere Standard
  • vSphere Enterprise Plus

 

Conclusion


In this module you learned about the benefits that Virtual Volumes provides, the requirements necessary for Virtual Volumes and also added your first Virtual Volume to vSphere.


 

You've finished Module 1

Congratulations on completing Module 1!

If you are looking for additional information, a Virtual Volumes overview and the Virtual Volumes Compatibility Guide can be found on the VMware website.

Proceed to any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 2 - Virtual Volumes Architecture – Exploring the Details (15 minutes)

Introduction


In this Module you will learn about all of the Key Components that enable Virtual Volumes.  

You will then familiarize yourself with each of those working components within our Lab Environment.


Key Components


The following module contains descriptions and definitions of the key components of vSphere Virtual Volumes.


 

Virtual Volumes (vVols)

 

Virtual Volumes are virtual machine objects that are created and stored natively on the storage array. vVols are stored in storage containers and mapped to virtual machine files/objects such as VM swap, VMDKs, and their derivatives.

There are five Virtual Volumes object types, and each maps to a specific type of virtual machine file:

  • Config - VM Home, Configuration files, logs
  • Data - Equivalent to a Virtual Machine Disk File (VMDK)
  • Memory - Snapshots
  • SWAP - Virtual machine memory swap
  • Other - vSphere solution specific object
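One way to see the raw material for these objects is a VM's file layout. The sketch below groups a VM's files by their layoutEx type, reusing the si and vm variables from the earlier sketches; the correspondence to vVol object types noted in the comment is an illustrative approximation, not an API field.

```python
# Group a VM's files by layoutEx type. Rough, illustrative correspondence:
#   config / log / nvram        -> Config vVol
#   diskDescriptor / diskExtent -> Data vVol
#   swap                        -> SWAP vVol
#   snapshotMemory              -> Memory vVol
from collections import defaultdict

by_type = defaultdict(list)
for f in vm.layoutEx.file:
    by_type[f.type].append(f.name)

for ftype, names in sorted(by_type.items()):
    print(f"{ftype}: {len(names)} file(s)")
```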

 

 

Vendor Provider (VP)

 

The vendor provider, also known as the vSphere APIs for Storage Awareness (VASA) provider, is a storage-side software component that acts as a storage awareness service for vSphere and mediates out-of-band communication between vCenter Server and ESXi hosts on one side and a storage system on the other. Vendor providers are developed exclusively by storage vendors.

ESXi hosts and vCenter Server connect to the Vendor Provider and obtain information about available storage topology, capabilities, and status.

Subsequently, vCenter Server provides this information to vSphere clients, exposing the capabilities around which the administrator might craft storage policies in Storage Policy Based Management (SPBM).

Vendor Providers are typically set up and configured by the vSphere administrator in one of two ways:

  • Automatically via the array vendors plug-in
  • Manually through the vCenter Server
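Registered providers can also be read programmatically through vCenter's Storage Monitoring Service (SMS) endpoint. The sketch below follows the recipe used in the pyVmomi community samples; the session-cookie handoff and the version string are assumptions that can vary between releases.

```python
# Hedged sketch: list registered VASA providers via the SMS API endpoint.
# The cookie handoff and "sms.version.version14" follow pyVmomi community
# samples and may differ by release.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import sms, SoapStubAdapter, VmomiSupport

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcsa-01a.corp.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    # Reuse the authenticated vCenter session for the SMS endpoint.
    VmomiSupport.GetRequestContext()["vcSessionCookie"] = \
        si._stub.cookie.split('"')[1]
    sms_stub = SoapStubAdapter(host="vcsa-01a.corp.local", path="/sms/sdk",
                               version="sms.version.version14", sslContext=ctx)
    sms_si = sms.ServiceInstance("ServiceInstance", sms_stub)
    for provider in sms_si.QueryStorageManager().QueryProvider():
        info = provider.QueryProviderInfo()
        print(info.name, getattr(info, "status", ""), getattr(info, "url", ""))
finally:
    Disconnect(si)
```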

 

 

Storage Container (SC)

 

Unlike traditional LUN and NFS-based vSphere storage, the Virtual Volumes functionality does not require preconfigured volumes on a storage side.

Instead, Virtual Volumes uses a storage container, which is a pool of raw storage capacity and/or an aggregation of storage capabilities that a storage system can provide to virtual volumes.

Depending on the storage array implementation, a single array may support multiple storage containers. Storage Containers are typically set up and configured by storage administrators.

Containers are used to define:

  • Storage capacity allocations and restrictions
  • Storage policy settings based on data service capabilities on a per-virtual-machine basis


 

 

Virtual Datastore

 

A Virtual Datastore represents a storage container in a vCenter Server instance and the vSphere Web Client. A Virtual Datastore represents a one-to-one mapping to the storage system’s storage container.

The storage container (or Virtual Datastore) represents a logical pool where individual Virtual Volumes VMDKs are created.

Virtual Datastores are typically set up and configured by vSphere administrators.
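That one-to-one mapping is visible in the API as well. This minimal sketch (reusing si from the earlier sketches) lists vVol datastores with the ID of the storage container backing each one; the info.vvolDS.scId property path is taken from the vSphere API reference and should be treated as an assumption.

```python
# List vVol datastores and the storage-container ID each one is backed by.
# Assumes an authenticated `si` as in earlier sketches.
from pyVmomi import vim

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.summary.type == "VVOL":  # vVol datastores report type "VVOL"
        print(ds.name, "->", ds.info.vvolDS.scId)
```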

 

 

Virtual Datastore Mapping

 

A one-to-one mapping is created between a vVols datastore and a storage container on the array. If another vVols datastore is needed, a new storage container must be created.

 

 

Protocol Endpoints (PE)

 

Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate.

ESXi uses protocol endpoints to establish an on-demand data path from virtual machines to their respective virtual volumes.

Protocol Endpoints are compatible with all SAN/NAS industry standard protocols:

  • iSCSI
  • NFS v3
  • Fibre Channel (FC)
  • Fibre Channel over Ethernet (FCoE)

Protocol Endpoints are set up and configured by Storage Administrators.

 

 

Putting it all together

 

Here you can see the relationships between the key Virtual Volumes Components, including views from a top-down (VI Admin) perspective and bottom-up (Storage Admin) perspective.  

Published Capabilities will vary depending on array types.

Once all of these components are in place, VI Admins can easily create VM Storage Policies to take advantage of Array capabilities as part of automating the Software-Defined Datacenter.

 

Configuration Review


Now that you know the different components that make up the Virtual Volumes Architecture, let's examine how they are configured and utilized in our Lab Environment.

Our Lab is using nested virtualization technology from HPE: the Nimble VSA

Nimble

  • The Nimble virtual array is a virtualized instance of NimbleOS developed for use in the VMware HOL. It provides block storage and VMware vVols support.


 

Open Chrome Browser from Windows Quick Launch Task Bar

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Launch vSphere using the HTML5 Client

 

This Hands On Lab will be using the HTML5-based vSphere client.

  1. Select Region A
  2. Click vcsa-01a Web Client

 

 

vCenter Login

 

  1. The username and password should be auto-saved. If they are not, use a username of administrator@vsphere.local and a password of VMware1!
  2. Click Login

 

 

 

Examine Vendor (VASA) Providers

 

  1. Hover over the Home Menu
  2. Select Storage

 

 

Storage Providers

 

  1. Make certain the focus is on the vCenter level: vcsa-01a.corp.local
  2. Click Configure
  3. Select Storage Providers

The Nimble VSA Storage Providers have already been added to the Lab and they are in Online and Active status.

Virtual Volumes uses vSphere APIs for Storage Awareness (VASA) version 3.0.

As a reminder, these VASA providers are responsible for mediating out-of-band communication between vSphere and Storage Systems.  ESXi hosts and vCenter connect to the Vendor Provider to obtain information about available storage topology, capabilities and status.

Note:  The terms "Storage Provider", "Vendor Provider" and "VASA Provider" can be used interchangeably.

 

 

 

Examine Nimble Storage Folders

 

  1. Open a new Tab in your browser and select the NimbleA shortcut on the bookmarks bar

 

Login

 

  1. Login with the User: admin | Password: VMware1! 

Inspect Storage

 

  1. Select Manage
  2. Select Data Storage

Nimble Folder Inspection

 

  1. In the left-hand navigation menu, select "nimblea-vvol-default" (this is the Nimble Folder that was used to create the Datastore nimblea-vvol01 in vSphere)

Storage Containers are pools of raw storage capacity that a storage system can provide to Virtual Volumes.

Review the usage statistics on the Nimble Storage Container for vVols and note the Volume Type and Protocol indicates vVols via iSCSI.

This Storage Container was created by a Storage Administrator and was used in a previous module of this lab. As you've seen, this Container is visible to a vSphere Administrator when adding storage via vCenter (done via the 'New Datastore' Wizard when selecting 'VVol' as the Virtual Datastore type).  

Examine Nimble Protocol Endpoints

 

  1. From the Manage menu choose Data Access

Depending on your work thus far, there will be two or three Protocol Endpoints of type iSCSI presented from this Nimble VSA to vSphere. These represent the in-band connections to both esx-01a and esx-02a.

Protocol Endpoints are used to establish the Data path between Virtual Machines and their respective Virtual Volumes.  These Protocol Endpoints are using the iSCSI protocol; however, it is possible to use other SAN/NAS industry standard protocols like NFS, Fibre Channel and Fibre Channel over Ethernet. In this case, iSCSI protocol endpoints can utilize any iSCSI interface or Fibre Channel connection for IO.

We'll now examine the Nimble configurations...

 

Examine Nimble Storage Containers

 

The Nimble vSphere Plug-in has been installed on both vCenter Servers for you. You may navigate back to your vSphere tab in Chrome to view the components. This Plug-in will be valuable for later modules in this lab. To view the Plug-in, follow the steps below:

  1. Return to your vCenter tab and navigate to the Menu button
  2. Choose HPE Alletra 6000 and Nimble Storage.

We'll now inspect the full web interface.

 (Note: Due to the nature of the Hands-on-Lab environment, it may be necessary to adjust the zoom settings on the Chrome browser window to view this plug-in properly - see below for an illustration on this setting. 80% zoom is recommended)

 

 

 

Examine Nimble Storage Containers

 

 

  1. Select nimblegroupa group

 

 

 

  1. Select VVOL

Here you can see the vVols that are available to vSphere via the Nimble VSA. Depending on the modules of this lab that you have completed, your display may have different vVols listed.

 You may inspect the vVols further before continuing to the next steps.

 

 

Managing Protocol Endpoints in vSphere

 

The Nimble Protocol Endpoints are managed in vCenter. In the next steps we'll see where these are visible in vSphere

  1. Hover over the Menu  
  2. Select Hosts and Clusters

 

 

Expand Contents

 

  1. Click the RegionA01 down arrow to expand its contents
  2. Click the RegionA01-COMP01 down arrow to expand its contents.

 

 

Manage Protocol Endpoints

 

  1. Select esx-02a.corp.local
  2. Select the Configure Tab
  3. Under the Storage category, Click Protocol Endpoints

Notice that the Nimble VSA Protocol Endpoint is configured to use SCSI (leveraging iSCSI). As you learned earlier, Protocol Endpoints are used to establish the Data path between Virtual Machines and their respective Virtual Volumes.

 

 

Examine Virtual Datastores

 

  1. Hover over the Menu
  2. Select Storage

 

 

Datastores

 

  1. Select RegionA01
  2. Click Datastores  

Note that there are multiple Virtual Datastores present with type "vVol" that were added to vCenter by a VI Admin.

These Virtual Volumes are a one-to-one mapping to their Nimble Storage Containers and are ready for Virtual Machine usage.

Note: Based on the Modules you have completed in this Lab, you will see different datastores available. It is safe to continue to your next step if your datastores are not identical to those listed in the screenshot.

You have confirmed all of the key vVol Components within our Lab Environment.  In a later Lab Module, you will create new Virtual Volumes all the way from End-to-End (Storage Array to vSphere).

 

Conclusion


In this Module you learned about the Key Components that drive Virtual Volumes Architecture and also examined them in action within our Lab Environment.


 

You've finished Module 2

Congratulations on completing Module 2!

If you are looking for additional information on Virtual Volumes Architecture, try one of these:

Proceed to any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 3 - Provisioning Virtual Volumes (30 minutes)

Introduction


In this Module you will learn how Virtual Volumes are provisioned from end-to-end (Storage Array to vSphere) utilizing multiple Storage Vendor Appliances.


Working with Nimble Virtual Volumes


 

Nimble - Storage Admin Steps

 

In this lesson you will see the end-to-end provisioning of a new Nimble Virtual Volume beginning with the steps that a Storage Administrator must perform on the Array and finishing with the steps that a VI Admin performs to add the vVol to vCenter.

  1. Open a New Tab in your Browser
  2. Select the NimbleA shortcut on the bookmarks bar.  

Login with the User: admin Password: VMware1!

 

 

 

VASA Provider (vVols)

 

We will be provisioning a Block vVol Datastore on the Nimble Array that is accessible via vCenter using the iSCSI protocol. We will first verify the Nimble array can communicate with vCenter through VASA (This was pre-configured for you)

  1. Select Administration from the menu bar.
  2. Click VMware Integration

 

 

 

Test VASA on Nimble

 

Note: there are currently registrations for vcsa-01a.corp.local and vcsa-01b.corp.local

 

Scroll to the bottom of the status window for vcsa-01a.corp.local and vcsa-01b.corp.local. The checkboxes labeled VASA Provider (vVols) and Web Client are currently checked

  1. Scroll to the bottom for vcsa-01a.corp.local
  2. Click Test Status for vcsa-01a.corp.local

This step was completed in the setup of this lab. This is a critical step for vVols operations. Recall that VASA (vSphere APIs for Storage Awareness) is a set of APIs that enables vSphere vCenter to recognize the capabilities of storage arrays. You should receive a successful status message.

 

 

 

Nimble Folders

 

Nimble utilizes folders, also referred to as storage containers, to house the virtual volumes. To view the current folder configuration:

  1. Select Manage from the menu bar
  2. Click Data Storage

 

 

View Nimble Folder and Properties

 

Here you can see the default Folders that were created for this lab.

Once vSphere starts using this container for virtual volumes you will see VMDK and other files stored natively within the folder.

 

 

Performance Policies

 

Our final step is to view the Performance Policies

  1. Select Manage from the menu bar
  2. Click Performance Policies

 

 

Performance Policies

 

There are 25 pre-configured Performance Policies, as well as the capability to create your own. These policies will be used in vSphere during the creation of the VM Storage Policy.

 

 

Create a New vVol on the Nimble Storage Array

 

We will now create a new vVol on the Nimble Array:

  1. Select Manage on the menu bar
  2. Click Data Storage

 

 

View Folders

 

vVols on the Nimble Array are created in Folders. Change to the Folder view:

  1. Click Folders

 

Add Folder

 

We'll use the Nimble vVols wizard to create our volume

  1. Click the "+"

Create Folder

 

You are presented with the "Create Folder" dialog box.  Follow the example screenshot and instructions below to complete the required fields.

  1. Enter "RegionA01-HOL" in the Name field
  2. Choose "VMware virtual volumes(VVols)" Management Type
  3. Scroll down once page
  4. Choose "vcsa-01a" as the vCenter Server
  5. Enter 5 GiB Usage Limit
  6. Click Create

 

View Created Folder

 

Upon creation, you are directed back to the Folder View. Notice RegionA01-HOL is now available.

 

VI Admin Steps

 

We will now return to the vSphere Web Client to provision our Nimble storage based Virtual Volume.

  1. Click vSphere Web Client on the Windows Task Bar

NOTE - If you need to re-authenticate to vSphere:

  1. Select the checkbox for "Use Windows session authentication".
  2. Click Login

Alternatively, you can enter a User name of administrator@vsphere.local and a password of VMware1!

 

vSphere Web Client

 

  1. On the vSphere Web Client Browser Tab
  2. Click Menu and select Shortcuts

Navigate to Storage

 

  1. Click on Storage

ReScan Storage

 

Here you will see a list of all existing datastores that have been provisioned.  Before adding a new Datastore, you need to rescan the storage bus to capture any newly created devices.

  1. Right-click on RegionA01
  2. Select Storage
  3. Choose Rescan Storage

 

1. Click "OK" to complete the rescan.

Create New Datastore

 

Here you will see a list of all existing datastores that have been provisioned.  

  1. Right-click on RegionA01
  2. Select Storage
  3. Choose New Datastore

Choose Type: VVol

 

In this dialog, we will choose the type of datastore to create.

  1. Select the vVol radio button
  2. Click Next

 

Name your Datastore

 

You will need to give your new datastore a name that will be displayed in vCenter.  For tracking purposes, name the datastore the same as you previously did on the array.

  1. Enter RegionA01-HOL in the Datastore name field
  2. Scroll-down in Backing Storage Container pane and select RegionA01-HOL
  3. Click Next

 

Choose ESX hosts

 

We'll need to select the hosts that have access to this newly created Datastore.

  1. Select the 2 checkboxes next to esx-01a.corp.local and esx-02a.corp.local.  (Note: esx-03a.corp.local may have been removed in your copy of the lab - this is expected and you may proceed)
  2. Click Next

 

Confirm and Finish

 

You are presented with a review screen.  Review the contents to be sure they match the screen above.

  1. Click Finish

 

View Newly Created Datastore

 

You will be returned to the vCenter view and your newly provisioned datastore will be in the list. If the Status is unknown, you can refresh the view after a short while.

Congratulations!  You've just created a new vVol on the Nimble storage array and presented it to 2 hosts in your vCenter!

 

Confirm datastore provisioning to hosts

 

Let's confirm that the newly created datastore is presented to our hosts.

  1. Click Menu
  2. Click Hosts and Clusters

View Datastore

 

  1. Click the host esx-02a.corp.local
  2. Choose Datastores tab from the menu bar
  3. View RegionA01-HOL

 

At this point your newly created RegionA01-HOL Datastore is ready for use in vSphere!


Conclusion


In this Module you learned how to successfully provision Virtual Volumes from end-to-end on the Nimble Storage Array and the vSphere Platform.


 

You've finished Module 3

Congratulations on completing Module 3!

If you are looking for additional information on Virtual Volumes, try this:

Proceed to any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 4 - Storage Policy Based Management with Virtual Volumes (30 minutes)

Introduction


In this module, you will see the power of the Software Defined Datacenter in action.  

By creating and applying storage policies to a Virtual-Volumes-based virtual machine, you will learn how easy it is to change the storage characteristics of this VM, on the fly, all without requiring an outage.

Before we create Storage Policies, we'll need to set up Nimble array replication.


Array-Based Replication with Nimble Storage


 

Introduction

While array-based replication with vVols is supported by many vendors, in this lab we will be working with the Nimble storage array.  Setting up replication with the Nimble array and vCenter requires a few steps, all of which the lab will cover:

  1. Establish replication partners between two vVol folders on the Nimble array.
  2. Create two new storage-based policies for the Nimble array that control the replication schedule of the VM.
  3. Create a basic VM on Site B to which we will assign the storage-based policy.
  4. Inspect replication success and recovery options

 

 

Establish Replication Partners on the Nimble Array

 

Open Chrome Browser from Windows Quick Launch Task Bar

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

Launch NimbleA

 

  1. Open a new Tab in your browser and select the NimbleA on the bookmarks bar.

 

Nimble Login

 

Login with the User: admin | Password: VMware1!  

Data Protection

 

You are presented with the Nimble array dashboard.  

  1. Choose Manage from the menu bar
  2. Click Data Protection

Replication Partner

 

On the Data Protection page

  1. Click Replication Partners (This could take about a minute to load the Replication Partners page)
  2. Select the "+"

 

Please wait for the Replication Partners screen to appear, this may take between 10-20 seconds.

Enter Partner Information

 

You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our second Nimble array.  

  1. Partner Name: nimblegroupb
  2. Hostname/IP: 192.168.220.205 (note: the 3rd octet is "220" as the destination array is on the secondary side)
  3. Shared Secret: VMware1!
  4. Shared Secret: VMware1!
  5. Inbound Location: Select the arrow next to default and choose the folder labeled nimblea-vvol-default
  6. Click Next

 

Create Replication Partner

 

In the next dialog, you have the option to create a policy that sets bandwidth limits on the replication.  In production environments, this might be a requirement.  In the lab, we will not create such a policy, thus using all available bandwidth for the replication.

  1. Click Create

You will be returned to the "Replication Partners" screen.  Initially your connection will have a status of "unreachable" and connection tests will fail until we've configured the replication from the secondary array.

 

Set Up Replication On The Second Nimble Array

 

Open a new tab in the Google Chrome browser and click the NimbleB bookmark.

 

 

Nimble Login

 

Login with the User: admin | Password: VMware1!  

Data Protection

 

You are presented with the Nimble array dashboard.  

  1. Choose Manage from the menu bar
  2. Click Data Protection

Replication Partner

 

On the Data Protection page

  1. Click Replication Partners
  2. Select the "+"

 

Please wait for the Replication Partners screen to appear, this may take between 10-20 seconds.

Enter Partner Information

 

You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our first Nimble array.  

  1. Partner Name: nimblegroupa
  2. Hostname/IP: 192.168.120.205 (note: the 3rd octet is "120" as the destination array is on the primary side)
  3. Shared Secret: VMware1!
  4. Shared Secret: VMware1!
  5. Inbound Location: Select the arrow next to default and choose the folder labeled nimbleb-vvol-default
  6. Click Next

 

Create Replication Partner

 

  1. Click Create

 

Test Connection on NimbleB

 

You are returned to the "Replication Partners" screen.  Initially the status should be listed as "Unreachable" just as in the set up on the first array.

  1. Check the box next to the partner you set up.
  2. Click Test

Test Results on NimbleB

 

You should be presented with an "Alive" status now.  This is because we now have configured both arrays as replication partners.

 

Return to NimbleA

 

1. Return to the NimbleA tab

Test Replication Results on NimbleA

 

  1. Select the check box next to the replication partner you set up
  2. Click Test  

Both source and destination should be showing "Alive"

 

 

Create Replication Storage Policy

 

  1. Select RegionA
  2. Right-click vcsa-01a Web Client, and select "open in new tab"

 

 

Bypass Chrome pop-up (If Applicable)

 

If you experience the Chrome "cip" pop-up, you can safely choose to Always open links of this type and continue.

  1. Select the Checkbox
  2. Click Open vmware-cip-launcher.exe

 

 

Login to vCenter

 

  1. Username: administrator@vsphere.local
  2. Password: VMware1!
  3. Click Login

 

Storage Policy Creation

 

In our final section, we will create Storage Policies.

  1. Left-click Menu
  2. Select Policies and Profiles

VM Storage Policies

 

  1. In the left navigation pane, click VM Storage Policies
  2. Click Create

Enter Policy Name

 

  1. Enter Name: NimbleA-vVol-Replication
  2. Click Next

View Policy Structure

 

  1. Select Enable rules for "HPE Alletra 6000 and Nimble Storage" under Datastore specific rules
  2. Click Next

Generate Storage Rules

 

(Note this screen will update as you make selections)

  1. Select Placement
  2. Select Select "ADD RULE"
  3. Choose Protection schedule (minutely)
  4. Choose Next

Configure the Rules: Placement

 

(Note this screen will update as you make selections)

  1. Snapshot frequency: Every 5 minutes
  2. Replication partner: nimblegroupb
  3. Replication frequency: 1
  4. Snapshots to retain on replica: 1
  5. Delete replicas from partner: No

DO NOT click "Next" yet

Configure the Rules: Replication

 

(Note this screen will update as you make selections)

  1. Select the Replication tab
  2. Select Custom
  3. Choose Next

Storage Compatibility

 

  1. Confirm storage is available (nimblea-vvol01)
  2. Choose Next

Ready to Complete

 

Review the settings

  1. Click Finish
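For completeness, an equivalent policy can be created through the Storage Policy (PBM) API. The sketch below is heavily hedged: the PBM connection recipe follows the pyVmomi community samples, and the Nimble capability namespace, IDs, and values are placeholders; real ones are discovered from your VASA provider's capability metadata.

```python
# Hedged sketch: create a capability-based VM storage policy via the PBM API.
# The capability namespace/IDs/values are PLACEHOLDERS; query your VASA
# provider's capability metadata for the real ones.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import pbm, SoapStubAdapter, VmomiSupport

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcsa-01a.corp.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

# Reuse the vCenter session against the /pbm/sdk endpoint (community-sample recipe).
VmomiSupport.GetRequestContext()["vcSessionCookie"] = si._stub.cookie.split('"')[1]
pbm_stub = SoapStubAdapter(host="vcsa-01a.corp.local", path="/pbm/sdk",
                           version="pbm.version.version2", sslContext=ctx)
pbm_si = pbm.ServiceInstance("ServiceInstance", pbm_stub)
pm = pbm_si.RetrieveContent().profileManager

# One vendor-specific rule (placeholder namespace/ID/value).
rule = pbm.capability.CapabilityInstance(
    id=pbm.capability.CapabilityMetadata.UniqueId(
        namespace="com.example.nimble",            # placeholder namespace
        id="protectionSchedule"),                  # placeholder capability ID
    constraint=[pbm.capability.ConstraintInstance(
        propertyInstance=[pbm.capability.PropertyInstance(
            id="protectionSchedule",
            value="Every-5-minutes")])])           # placeholder value

spec = pbm.profile.CapabilityBasedProfileCreateSpec(
    name="NimbleA-vVol-Replication",
    description="Created via the PBM API sketch",
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    constraints=pbm.profile.SubProfileCapabilityConstraints(
        subProfiles=[pbm.profile.SubProfileCapabilityConstraints.SubProfile(
            name="Rule-Set 1", capability=[rule])]))
profile_id = pm.PbmCreate(createSpec=spec)
print("Created policy:", profile_id.uniqueId)
```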

Create A Replication Policy for Reverse Replication

 

We need to create a policy on the secondary side that matches the primary side (for reverse replication needs during a failover). As our vCenters are in Enhanced Linked Mode, this task is done in the same interface, following the same steps as before, with three important differences in configuration listed below:

  1. The vCenter Server in focus will be VCSA-01B.CORP.LOCAL
  2. Name of the Replication: NimbleB-vVol-Replication
  3. Replication Partner: nimblegroupa

(Jump back to the "Create Replication Storage Policy" step earlier in this manual and follow those instructions again, using the three different values above.)

Please proceed to create the policy via previous steps and reference the above image to confirm the policy is created accurately.

Select Finish to continue

 

Create a Replication Virtual Machine

 

To take full advantage of all replication and DR options, we will be creating our test VM on the secondary site:

  1. Select Menu
  2. Select Hosts and Clusters

New Virtual Machine

 

 

  1. Right-click RegionB01-COMP01
  2. Select New Virtual Machine...

Select a Creation Type

 

  1. Confirm that Create a new virtual machine is selected
  2. Click Next

Enter VM name

 

  1. Enter Name: Nimble-ReplicationVM
  2. Confirm that RegionB01 is selected
  3. Click Next

Select Compute Resource

 

  1. Select esx-01b.corp.local
  2. Click Next

Select Storage Policy

 

  1. Left-click the Datastore Default dropdown
  2. Select NimbleB-vVol-Replication

Select vVol

 

Notice how the available Datastores are automatically separated into Compatible or Incompatible datastores. nimbleb-vvol01 is listed as Compatible because we configured Replication and tagged these volumes properly in previous steps. Note: Depending on what modules you have completed in this lab, you may see different compatible volumes.

  1. Select nimbleb-vvol01
  2. Verify Automatic is set for Replication Group
  3. Click Next

Select Compatibility

 

  1. Leave the default setting and click Next

Select a Guest OS

 

  1. Leave the default setting and click Next

Customize Hardware

 

Since we are using a nested virtualization lab environment with limited resources, we will be configuring a very small VM

  1. Set CPU to 1
  2. Set Memory to 32 MB
  3. Set New Hard Disk to 32 MB
  4. Click Next

Review Settings

 

  1. Review the Virtual Machine Summary and click Finish
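The policy you picked in the wizard corresponds to the vmProfile field of the VM's configuration in the API. As a minimal, hedged sketch, the same policy can also be applied to an existing VM, assuming a vm object as in the earlier sketches and a profile ID string such as the uniqueId returned by the PBM sketch a few steps back:

```python
# Attach a storage policy to a VM's home object via ReconfigVM_Task.
# `profile_id` is a PBM profile uniqueId string. To cover virtual disks too,
# set the same DefinedProfileSpec on each disk's VirtualDeviceConfigSpec.profile.
from pyVmomi import vim
from pyVim.task import WaitForTask

policy = vim.vm.DefinedProfileSpec(profileId=profile_id)
spec = vim.vm.ConfigSpec(vmProfile=[policy])
WaitForTask(vm.ReconfigVM_Task(spec=spec))
```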

Return to NimbleB

 

  1. Select the NimbleB shortcut on the bookmarks bar.

 

If your session has timed out, re-login with the User: admin | Password: VMware1!

Open Data Storage

 

We will view the newly-created Virtual Machine files on the Nimble Array in the Folders View.

  1. Select Manage from the menu bar
  2. Click Data Storage

View VM files

 

Notice that the Nimble-ReplicationVM files are now on NimbleB and in an Online state.

Return to NimbleA

 

Return to NimbleA in the Google Chrome browser

 

Open Data Storage

 

We will view the Virtual Machine files that were replicated from NimbleB to NimbleA in the Folders View.

  1. Select Manage from the menu bar
  2. Click Data Storage

View VM files

 

Here you can see the VM files that have been replicated. They will remain "Offline" until a failover is triggered.

 

Switch to the vSphere Web Client

 

1. Click the vSphere Web Client browser tab to return to The RegionA vCenter

Explore the vVol Options via the Nimble Plug-in

 

  1. Select Menu
  2. Select HPE Alletra 6000 and Nimble Storage

 

Select your Nimble Group

 

  1. Click nimblegroupa

 

Replicated vVol VMs

 

  1. Select VVol VMs
  2. Select Replicated

Due to the nature of the Hands-on-Lab environment, it may be necessary to adjust the zoom settings on the Chrome browser window to view this plug-in properly - see below for an illustration on this setting. 75% zoom is recommended

 

Select your Action

 

You may choose from three actions to take on your VM once it is selected and available:

  • Clone
  • Revert to earlier Point in time snapshot
  • Delete

This lab will continue with a Clone

  1. Check the box next to your VM
  2. Select the Clone icon

Note that options 2 and 3 will be grayed out as you cannot take action on a VM that is actively being replicated from the upstream VM.

Clone the Replicated VM

 

The Clone VM dialog box will appear providing you with Day and Time options for cloning.

  1. Identify a time (there should only be one option in our lab - recall we configured this in our SPBM policy with the "Snapshots to retain on replica = 1" setting)
  2. Choose your location/name
  3. Click Clone

 

Confirm the Clone Operation

 

  1. Choose Yes

 

Return to NimbleA

 

1. Return to the NimbleA tab

If your session has timed out, re-login with the User: admin | Password: VMware1!

Confirm the Clone on Primary Array

 

  1. Returning to the NimbleA array, you can now see our newly-created clone available and online

Note: It may be necessary to Refresh the table to see the newly cloned VM

As shown in the screenshot, you may hover over the items listed to show that the Cloned VM is in the Online state while the original Replicated VM remains offline

 

Switch to the vSphere Tab

 

1. Click the vSphere Web Client Browser Tab to return to RegionA vCenter

Hosts and Clusters

 

 

  1. Select Menu
  2. Select Hosts and Clusters

Confirm the Clone on Primary vCenter

 

  1. Left-click the arrow to expand out RegionA01
  2. Left-click the arrow to expand out RegionA01-COMP01
  3. You should see the Nimble-ReplicationVM_Clone in the list of VMs.

At this point, your VM is available for a variety of actions including:

  • SPBM assignment
  • Power On and use
  • Reverse Replication
  • Additional Cloning, Snapshots, and Deletion
  • etc.

Please feel free to explore these options before continuing to complete this module.

Conclusion


In this Module, you learned how to create and utilize Storage Policies for Virtual Volumes based Virtual Machines.

The ability to leverage policies for automation is a key tenet of VMware's Software Defined Datacenter vision.


 

You've finished Module 4

Congratulations on completing Module 4!

If you are looking for additional information on Virtual Volumes storage, try the links below:

Proceed to any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 5 - Virtual Volumes Advanced - SRM Integration and Failover (45 minutes)

Introduction


SRM and vVols: SRM 8.4 is fully integrated with vVols using array-based replication.  Users will have access to all the capabilities of SRM, combined with the simplicity, manageability and performance of vVols. As this integration utilizes vVol replication, it supports a protection granularity as small as a single VM; when multiple VMs are replicated together in a replication group, that group is maintained as a consistency group.  As an added benefit, the VASA provider replaces the SRA (Storage Replication Adapter), so ongoing operations and maintenance are easier than ever. To start with, Pure, HPE 3PAR and Nimble are all design partners for vVol integration with SRM. Other vendors are working on their integrations and will be added soon.


SRM and vVols Demo


Note: This is an advanced module;  familiarity with SPBM and vSphere Replication is strongly recommended before continuing.


<img src="assets/a9ffbe39-1a4a-4a82-9b21-4d251173a0d2.png" height="156" width="409" />

We will create an SPBM (Storage Policy Based Management) policy and replication pairing for this integration. Module 4 of this lab has detailed instructions on how to complete these setup steps. We will assume that you are familiar with these configurations, as this module will not provide detailed setup instructions.


 

 

Note from module 4

If you just completed module 4, please end this lab and restart it to begin module 5.  

 

 

Establish Replication Partners on the Nimble Array

 

  1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.

 

 

Launch NimbleA

 

  1. Open a new Tab in your browser and select the NimbleA on the bookmarks bar.

 

 

 

Nimble Login

 

Login with the User: admin | Password: VMware1!  

 

 

Data Protection

 

You are presented with the Nimble array dashboard.  

  1. Choose Manage from the menu bar
  2. Click Data Protection

 

 

Replication Partner

 

On the Data Protection page

  1. Click Replication Partners (This could take about a minute to load the Replication Partners page)
  2. Select the "+"

 

Please wait for the Replication Partners screen to appear, this may take between 10-20 seconds.

 

 

Enter Partner Information

 

You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our second Nimble array.  

  1. Partner Name: nimblegroupb
  2. Hostname/IP: 192.168.220.205 (note: the 3rd octet is "220" as the destination array is on the secondary side)
  3. Shared Secret: VMware1!
  4. Shared Secret: VMware1!
  5. Inbound Location: Select the arrow next to default and choose the folder labeled nimblea-vvol-srm
  6. Click Next

 

 

 

Create Replication Partner

 

In the next dialog, you have the option to create a policy that sets bandwidth limits on the replication.  In production environments, this might be a requirement.  In the lab, we will not create such a policy, thus using all available bandwidth for the replication.

  1. Click Create

You will be returned to the "Replication Partners" screen after a brief delay.  Initially your connection will have a status of "unreachable" and connection tests will fail until we've configured the replication from the secondary array.

 

 

Set Up Replication On The Second Nimble Array

 

Open a new tab in the Google Chrome browser and click the NimbleB bookmark.

 

 

Nimble Login

 

Login with the User: admin | Password: VMware1!  

Data Protection

 

You are presented with the Nimble array dashboard.  

  1. Choose Manage from the menu bar
  2. Click Data Protection

Replication Partner

 

On the Data Protection page

  1. Click Replication Partners
  2. Select the "+"

 

Please wait for the Replication Partners screen to appear, this may take between 10-20 seconds.

Enter Partner Information

 

You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our first Nimble array.  

  1. Partner Name: nimblegroupa
  2. Hostname/IP: 192.168.120.205 (note: the 3rd octet is "120" as the destination array is on the primary side)
  3. Shared Secret: VMware1!
  4. Shared Secret: VMware1!
  5. Inbound Location: Select the arrow next to default and choose the folder labeled nimbleb-vvol-srm
  6. Click Next

 

Create Replication Partner

 

  1. Click Create

 There may be a brief delay while the connection is established. You should be returned to the Replication Partners screen once the connection is created.

Test Connection on NimbleB

 

You are returned to the "Replication Partners" screen.  Initially the status should be listed as "Unreachable" just as in the set up on the first array.

  1. Check the box next to the partner you set up.
  2. Click Test

Test Results on NimbleB

 

You should be presented with an "Alive" status now.  This is because we now have configured both arrays as replication partners.

 

Return to NimbleA

 

1. Return to the NimbleA tab

Test Replication Results on NimbleA

 

  1. Select the check box next to the replication partner you set up
  2. Click Test  

Both source and destination should be showing "Alive"

 

 

Create Replication Storage Policy

 

  1. Select RegionA
  2. Right-click vcsa-01a Web Client, and select "open in new tab"

 

 

Login to vCenter

 

  1. Username: administrator@vsphere.local
  2. Password: VMware1!
  3. Click Login

 

Storage Policy Creation

 

In our final section, we will create Storage Policies.

  1. Left-click Menu
  2. Select Policies and Profiles

VM Storage Policies

 

  1. In the left navigation pane, click VM Storage Policies
  2. Click Create

Enter Policy Name

 

  1. Enter Name: NimbleA-vVol-SRM
  2. Click Next

View Policy Structure

 

  1. Select Enable rules for "HPE Alletra 6000 and Nimble Storage" under Datastore specific rules
  2. Click Next

Generate Storage Rules

 

(Note this screen will update as you make selections)

  1. Select Placement
  2. Select Select "ADD RULE"
  3. Choose Protection schedule (minutely)
  4. Choose Next

Configure the Rules: Placement

 

(Note this screen will update as you make selections)

  1. Snapshot frequency: Every 5 minutes
  2. Replication partner: nimblegroupb
  3. Replication frequency: 1
  4. Snapshots to retain on replica: 1
  5. Delete replicas from partner: No

DO NOT click "Next" yet

Configure the Rules: Replication

 

(Note this screen will update as you make selections)

  1. Select the Replication tab
  2. Select Custom
  3. Choose Next

Storage Compatibility

 

  1. Confirm storage is available (nimblea-vvol-srm)
  2. Choose Next

Ready to Complete

 

Review the settings

  1. Click Finish

Create A Replication Policy for Reverse Replication

 

We need to create a policy on the secondary side that matches the primary side (for reverse replication needs during a failover). As our vCenters are in Enhanced Linked Mode, this task is done in the same interface, following the same steps as before, with three important differences in configuration listed below:

  1. The vCenter Server in focus will be VCSA-01B.CORP.LOCAL
  2. Name of the Replication: NimbleB-vVol-SRM
  3. Replication Partner: nimblegroupa

Please proceed to create the policy via previous steps and reference the above image to confirm the policy is created accurately.

Select Finish to continue

 

 

Hosts and Clusters

 

  1. Select Menu
  2. Select Hosts and Clusters

 

 

Rename and Recreate nimble-srm VM

 

 

Due to updates to the Hands-on Labs environment since the creation of this lab, we will be re-creating the "nimble-srm" VM. Please follow the steps below.

  1. Right-click on nimble-srm
  2. Choose Rename
  3. Rename it to 'nimble-srm_old'

 

 

Clone nimble-srm_old to nimble-srm

 

  1. Right-click the nimble-srm_old vm
  2. Choose Clone
  3. Select Clone to Virtual Machine...

 

 

Clone nimble-srm_old to nimble-srm (cont.)

 

  1. Name the clone nimble-srm
  2. Follow all defaults until the "Select Storage" screen
  3. On the "Select Storage" Screen Choose Datastore "nimblea-vvol-srm".
  4. Complete the wizard by selecting Next, Next, and Finish

 

 

Edit Storage Policy

 

  1. Right-click the nimble-srm vm
  2. Hover over VM Policies
  3. Select Edit VM Storage Policies...

 

 

Configure Storage Policy

 

  1. Click the VM storage policy drop-down
  2. Select your newly created storage policy NimbleA-vVol-SRM
  3. Click Configure next to Replication groups

 

 

Configure VM Replication Group

 

  1. Choose Automatic from the Select Replication Group
  2. Click OK

 

 

Assign Storage Policy

 

  1. Click OK

 

 

Site Recovery Manager

 

  1. Left-click Menu
  2. Select Site Recovery

 

 

Open Site Recovery Manager

 

  1. Click Open Site Recovery

 

 

View Details

 

  1. Click View Details

 

 

Enter SRM Credentials

 

  1. If necessary - enter the SRM login credentials

 

 

Storage Policy Mapping

 

  1. Select Storage Policy Mapping
  2. Click New

 

 

Creation Mode

 

  1. Select Prepare mappings manually
  2. Click Next

 

 

Recovery Storage Policies

 

  1. Check NimbleA-vVol-SRM on the left
  2. Select NimbleB-vVol-SRM on the right
  3. Click Add Mappings
  4. Click Next

 

 

Reverse Mapping

 

  1. Check the box next to vcsa-01b.corp.local to select all
  2. Click Next

 

 

Complete the Mapping

 

  1. Click Finish

 

 

Creating a Protection Group

 

  1. Select the Protection Groups tab
  2. Click  New

 

 

Protection Group Name

 

  1. Enter the name vVols-SRM-PG
  2. Click Next

 

 

Protection Group Configurations

 

  1. Select Virtual Volumes (vVol replication)
  2. Select nimblegroupa fault domain radio button
  3. Click Next

 

 

Replication Groups

 

  1. Click the checkbox next to replication group name nimble-srm-<hex_characters>
  2. Click the drop down arrow
  3. Verify you see our nimble-srm vm listed with status OK

(NOTE: You may not see the nimble-srm VM if you've completed the previous steps quickly; recall there is a 5-minute replication schedule. You may safely restart the creation of your Protection Group - waiting 1-2 minutes should be sufficient)

  1. Click Next

 

 

Skip Recovery Plan

 

  1. Select Do not add to recovery plan now
  2. Click Next

 

 

Complete the Missing Network mappings

 

  1. Expand the menus as necessary and Select ESXi-RegionA01-vDS-COMP
  2. Select ESXi-RegionB01-vDS-Comp
  3. Click Add Mappings
  4. Click Next

 

 

Complete the Missing folder mappings

 

  1. Expand the menus as necessary and Select RegionA01
  2. Select RegionB01
  3. Click Add Mappings
  4. Click Next

 

 

Review and Complete Protection Group Settings

 

 

  1. Click Finish

 

 

Wait until Protection Status is OK

 

  1. Wait until you see the Protection Status as OK

 

 

Creating a Recovery Plan

 

  1. Select Recovery Plans
  2. Click New

 

 

Recovery Plan Name

 

  1. Enter the name vVols-SRM-RP
  2. Click Next

 

 

Protection Group Selection

 

  1. Select the vVols-SRM-PG protection group
  2. Click Next

 

 

Network Mappings

 

  1. Map each drop-down on the right to the exact name on the left
  2. Click Next

This is helpful when using SRM in your real environment, as you can bring the VMs online in an isolated network if desired.

 

 

Finish the Recovery Plan

 

  1. Click Finish

 

 

Open the Recovery Plan

 

  1. Click on the vVols-SRM-RP recovery plan to open it up

 

 

Testing the Recovery Plan

 

  1. Click Test

 

 

Test Confirmation Options

 

  1. Click Next

 

 

Begin Test

 

  1. Click Finish

 

 

Monitor Test

 

  1. Select Recovery Steps

From here you can monitor the test and watch the steps as it proceeds.

 

 

Viewing the Data on the Nimble Arrays

While the recovery plan is running, let's go view the data on the Nimble arrays.

 

 

Return to NimbleA

 

1. Return to the NimbleA tab

If your session has timed out, re-login with the User: admin | Password: VMware1!

 

 

Data Protection

 

  1. Click Manage
  2. Select Data Protection

 

 

NimbleA Volume Collections

 

Here you can see the volumes on the Nimble arrays that are being replicated.  When looking at NimbleB, you will see the Replica tag attached to the alternate volume.

 

 

Return to NimbleB

 

1. Return to the NimbleB tab

If your session has timed out, re-login with the User: admin | Password: VMware1!

 

 

Data Protection

 

  1. Click Manage
  2. Select Data Protection

 

 

NimbleB Volume Collections

 

Here you can see the volumes on the Nimble arrays that are being replicated.  When looking at NimbleB, you will see the Replica tag attached to the alternate volume.

 

 

Go back to Site Recovery Manager

 

  1. Click the Site Recovery tab in Chrome

 

 

Recovery Plan Test Completed!

 

  1. You should see that the recovery plan has been completed (it does take some time to run).  
  2. You may need to click the Refresh button to see this updated.

NOTE: You may notice a task timeout related to VMware Tools not responding. This is expected behavior, as the VM we are testing is only a shell without an OS. This message can be safely ignored.

At this point you have completed the configuration of vVols with SRM!

 

Conclusion


In this Module you learned how to configure SRM and vVols integration.


 

You've finished Module 5

If you are looking for additional information on Virtual Volumes storage, try the links below:

Proceed to any module below which interests you most.

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Module 6 - vVol Storage Vendor Partners (15 minutes)

Introduction


vVols represents the collaboration of VMware and the storage ecosystem towards a new storage operational model centered on the VM. Major vendors participated in the design of vVols. The ecosystem continues to grow and strengthen with solutions ready for vVols. You have reached the end of the interactive portion of this hands-on lab. The remainder of this lab contains video links to these vendor solutions.


Storage Partners


vVols adoption continues to grow and is accelerating in 2021, and it's easy to see why. vVols eliminates LUN management, accelerates deployments, simplifies operations, and enables utilization of all of your array's functionality. VMware and our storage partners continue to develop and advance vVols and its functionality. In vSphere 7, more features and enhancements have been added, showing the continued commitment to the program.

We have included videos from our vVols partners below. Please feel free to review them.


 

Dell/EMC

 

 

Hitachi

 

 

NetApp

 

 

Pure

 

Conclusion


In this Module you learned about some of our other partners' vVols solutions.


 

You've finished Module 6

Congratulations on completing Module 6!

Review any module below which interests you most.

 

 

 

How to End Lab

 

To end your lab, click on the END button.  

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-2205-02-HCI

Version: 20221022-014657