Lab Overview - HOL-2205-02-HCI - VMware vSphere Virtual Volumes and Storage Policy Based Management
Note: It may take more than 90 minutes to complete this lab. You may only finish 2-3 of the modules during your time. However, you may take this lab as many times as you want. The modules are independent of each other so you can start at the beginning of any module and proceed from there. Use the Table of Contents to access any module in the lab. The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.
Virtual Volumes (vVols) is an integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure.
vVols simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed. It simplifies the delivery of storage service levels to individual applications by providing finer control of hardware resources and native array-based data services that can be instantiated with virtual machine granularity.
With Virtual Volumes (vVols), VMware offers a new paradigm in which an individual virtual machine and its disks, rather than a LUN, becomes a unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system.
This lab will guide you through the introduction, provisioning, and advanced features of vVols.
Lab Module List:
Lab Captain:
This lab manual can be downloaded from the Hands-on Labs document site found here:
This lab may be available in other languages. To set your language preference and view a localized manual deployed with your lab, utilize this document to guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
Welcome! If this is your first time taking a lab, navigate to the Appendix in the Table of Contents to review the interface and features before proceeding.
For returning users, feel free to start your lab by clicking next in the manual.
Please verify that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
Module 1 - Overview and Requirements – Higher Level Concepts (15 minutes)
Welcome to the Virtual Volumes Lab!
In this Module, you will be given an overview of Virtual Volumes (vVols) and add your first vVol to vCenter. You will also learn about the benefits that vVols provide, as well as the requirements necessary to utilize VMware Virtual Volumes.
vSphere Virtual Volumes, or vVols, implements the core tenets of the VMware Software Defined Storage vision to enable a fundamentally more efficient operational model for external storage in virtualized environments, centering it on the application instead of the physical infrastructure.
vVols enables application-specific requirements to drive storage provisioning decisions while leveraging the rich set of capabilities provided by existing storage arrays. Some of the primary benefits delivered by vVols are focused around operational efficiencies and flexible consumption models on a per-application (per-VM) basis at the hypervisor level using Storage Policy-Based Management (SPBM).
Virtual Volumes is an industry-wide initiative that allows customers to leverage the unique capabilities of their current storage investments and transition - without disruption - to a simpler and more efficient operational model optimized for virtual environments that works across all storage types.
With Virtual Volumes, there are two big changes to how Virtual Machine Snapshots work:
This results in an incredible benefit in performance and efficiency (with an additional benefit being that you no longer have to worry about keeping 'too many' snapshots around)!
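The performance difference can be illustrated with a small model. This is a conceptual sketch only (the class and method names are invented for illustration, not VMware code): with traditional snapshots, the running point moves onto a redo-log chain that reads may have to traverse, while with vVols the base object stays the running point and each snapshot is an array-managed copy off the read path.

```python
# Conceptual sketch of the two snapshot models (illustrative names only).

class TraditionalSnapshotDisk:
    """VMFS-style snapshot: the running point moves onto a redo-log chain."""
    def __init__(self):
        self.chain = [{}]          # base disk, then one delta per snapshot

    def snapshot(self):
        self.chain.append({})      # a new delta becomes the running point

    def write(self, block, data):
        self.chain[-1][block] = data

    def read(self, block):
        # A read may have to walk the whole chain, newest delta first.
        for delta in reversed(self.chain):
            if block in delta:
                return delta[block]
        return None


class VVolSnapshotDisk:
    """vVols-style snapshot: the base object stays the running point."""
    def __init__(self):
        self.base = {}
        self.array_snapshots = []  # offloaded copies kept by the array

    def snapshot(self):
        self.array_snapshots.append(dict(self.base))  # handled array-side

    def write(self, block, data):
        self.base[block] = data    # writes always land on the base object

    def read(self, block):
        return self.base.get(block)  # reads never traverse a delta chain


trad, vvol = TraditionalSnapshotDisk(), VVolSnapshotDisk()
for disk in (trad, vvol):
    disk.write(0, "a")
    disk.snapshot()
    disk.write(1, "b")

print("traditional read-path objects:", len(trad.chain))   # grows per snapshot
print("vVols read-path objects:", 1)                       # always just the base
```

In the model, every snapshot lengthens the traditional read path, while the vVols read path stays constant regardless of how many snapshots exist, which is why keeping many snapshots around stops being a performance concern.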
Virtual Volumes enable Virtual Machine aware storage and policy based management across heterogeneous arrays.
Let's jump right in and create a Virtual Volume!
For the purposes of this Hands On Lab, we will be using the HTML5-based vSphere client.
If you experience the Chrome "cip" pop-up, you can safely choose to Always open links of this type and continue.
If the username and password are not pre-populated, enter the following
Username: administrator@vsphere.local
Password: VMware1!
Expand vcsa-01a.corp.local if necessary
In this case, we are creating a vVol that communicates to a Storage Array via an iSCSI based protocol.
The Virtual Volume is ready to be used by Virtual Machines!
In just a few simple steps, you have added a Virtual Volume to your vSphere lab environment.
If it feels like the exercise you just went through was nothing more than adding a New Datastore to vCenter, don't worry. Working with Virtual Volumes is meant to provide a familiar storage experience to Virtual Infrastructure Administrators; however, they provide powerful new capabilities that have not existed previously.
In later modules of this lab you will learn about each of the architectural components that drive Virtual Volumes and see first-hand how Storage Administrators perform the necessary steps to provision storage resources that are consumable via Virtual Volumes.
Virtual Volumes are enabled by features that are exposed via the back-end Storage Array (QoS, Disk Performance Tiers, Snapshotting, Replication, De-Duplication, etc.). Virtual Machines that use Virtual Volumes can leverage the features that you choose via Software Based Policies, on the fly, without having to make any changes to the Storage Array or re-format LUNs via vSphere.
In the next Lesson you will learn about the benefits that Virtual Volumes provide. You will then complete the Module with a review of the Software and Hardware Requirements necessary for Virtual Volumes.
vSphere Virtual Volumes implements the core tenets of the VMware Software-Defined Storage vision to enable a fundamentally more efficient Operational Model for external storage in virtualized environments, centering it on the Application instead of the physical infrastructure.
Virtual Volumes enables Application-specific requirements to drive storage provisioning decisions while leveraging the rich set of Capabilities provided by existing Storage Arrays.
Read on to learn about some of the primary benefits delivered by Virtual Volumes.
Virtual Volumes simplifies storage operations by automating manual tasks and eliminating operational dependencies between the vSphere Admin and the Storage Admin. Provisioning is faster, and change management is simpler as the new operational model is built upon policy-driven automation.
Virtual Volumes simplifies the delivery of storage service levels to applications by providing administrators with finer control of storage resources and data services at the VM level that can be dynamically adjusted in real time.
Virtual Volumes improves resource utilization by enabling more flexible consumption of storage resources, when needed and with greater granularity. The precise consumption of storage resources eliminates over-provisioning. The Virtual Datastore defines capacity boundaries, access logic, and exposes a set of data services accessible to the virtual machines provisioned in the pool.
Virtual Datastores are purely logical constructs that can be configured on the fly, when needed, without disruption and don’t require formatting with a file system.
Historically, vSphere storage management has been based on constructs defined by the storage array: LUNs and filesystems. A storage administrator would configure array resources to present large, homogeneous storage pools that would then be consumed by the vSphere administrator.
Since a single, homogeneous storage pool would potentially contain many different applications and virtual machines, this approach resulted in needless complexity and inefficiency. vSphere administrators could not easily specify requirements on a per-VM basis.
Changing service levels for a given application usually meant relocating the application to a different storage pool. Storage administrators had to forecast well in advance what storage services might be needed in the future, usually resulting in the over-provisioning of resources.
With Virtual Volumes, this approach has fundamentally changed. vSphere administrators use policies to express application requirements to a storage array. The storage array responds with an individual storage container that precisely maps to application requirements and boundaries.
Typically, the datastore is the lowest level of granularity at which data management occurs from a storage perspective. However, a single datastore contains multiple virtual machines, which might have different requirements. With the traditional approach, differentiation on a per virtual machine level is difficult.
The Virtual Volumes functionality allows for the differentiation of virtual machine services at the per-application level by offering a new approach to storage management.
Rather than arranging storage around features of a storage system, Virtual Volumes arranges storage around the needs of individual virtual machines, making storage virtual machine centric.
Virtual Volumes map virtual disks and their respective components directly to objects, called virtual volumes, on a storage system. This mapping allows vSphere to offload intensive storage operations such as snapshot, cloning, and replication to the storage system.
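This offload can be sketched in a few lines. The sketch below is illustrative only (the `MockArray` class and `clone_vvol` name are invented stand-ins for the array-side clone primitive): without offload, the host reads and rewrites every block over the data path; with offload, vSphere issues a single management call and the array duplicates the object internally.

```python
# A sketch contrasting a host-side copy with an array-offloaded clone
# (illustrative classes; "clone_vvol" stands in for the array's clone primitive).

class MockArray:
    def __init__(self):
        self.vvols = {}
        self.mgmt_calls = 0

    def create(self, vid):
        self.vvols[vid] = {}

    def clone_vvol(self, src, dst):
        # Offloaded: the array duplicates the object internally in one call.
        self.vvols[dst] = dict(self.vvols[src])
        self.mgmt_calls += 1

def host_side_clone(array, src, dst):
    """Without offload, the host reads and rewrites every block itself."""
    array.create(dst)
    data_path_ios = 0
    for block, value in array.vvols[src].items():
        array.vvols[dst][block] = value
        data_path_ios += 2          # one read plus one write per block
    return data_path_ios

array = MockArray()
array.create("src")
array.vvols["src"] = {b: b for b in range(1000)}

ios = host_side_clone(array, "src", "copy1")     # 2000 data-path I/Os
array.clone_vvol("src", "copy2")                 # 1 management call
print(ios, array.mgmt_calls)
```

The same contrast applies to snapshots and replication: because each virtual disk is an addressable object on the array, these operations become array-internal rather than host-driven block copies.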
Now that you understand the Benefits, let's review the Requirements that are necessary to use Virtual Volumes in our final Lesson of this Module.
Virtual Volumes has the following software, hardware, and licensing requirements.
The use of Virtual Volumes requires the following software components:
There are over 150 array models from multiple partners that have been certified for Virtual Volumes, with even more on the way.
The use of vSphere Virtual Volumes requires the following hardware components:
The VMware Compatibility Guide for Virtual Volumes makes it easy to answer requirement questions like:
The use of vSphere Virtual Volumes requires the following license:
In this module you learned about the benefits that Virtual Volumes provides, the requirements necessary for Virtual Volumes and also added your first Virtual Volume to vSphere.
Congratulations on completing Module 1!
If you are looking for additional information on a Virtual Volumes Overview:
The Virtual Volumes Compatibility Guide can be found here:
Proceed to any module below which interests you most.
To end your lab, click on the END button.
Module 2 - Virtual Volumes Architecture – Exploring the Details (15 minutes)
In this Module you will learn about all of the Key Components that enable Virtual Volumes.
You will then familiarize yourself with each of those working components within our Lab Environment.
The following module contains descriptions and definitions of the key components of vSphere Virtual Volumes.
Virtual Volumes are considered a type of virtual machine object, which are created and stored natively on the storage array. vVols are stored in storage containers and mapped to virtual machine files/objects such as VM swap, VMDKs and their derivatives.
There are five different Virtual Volumes object types, and each maps to a specific virtual machine file.
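The five object types and their mappings can be summarized as a small sketch (the descriptions are a summary; the helper function and its parameters are invented for illustration, and the exact set of objects a VM owns varies with its configuration):

```python
# The five vVol object types and the VM files they typically back.

VVOL_TYPES = {
    "Config-vVol": "VM home files: .vmx, logs, descriptor files",
    "Data-vVol":   "virtual disk contents (one per virtual disk)",
    "Swap-vVol":   "VM swap file, created when the VM is powered on",
    "Memory-vVol": "memory state saved with a snapshot or suspend",
    "Other-vVol":  "vendor- or solution-specific objects",
}

def vvols_for_vm(disks, powered_on=False, memory_snapshot=False):
    """Return the vVol objects a VM would own under these assumptions."""
    objs = ["Config-vVol"] + [f"Data-vVol:{d}" for d in disks]
    if powered_on:
        objs.append("Swap-vVol")       # exists only while powered on
    if memory_snapshot:
        objs.append("Memory-vVol")     # exists only with a memory snapshot
    return objs

example = vvols_for_vm(["disk0", "disk1"], powered_on=True)
print(example)
```

Note that even a minimal powered-on VM owns several distinct objects on the array, which is what makes per-VM (and even per-disk) data services possible.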
The vendor provider, also known as the vSphere Storage APIs for Storage Awareness (VASA) provider, is a storage-side software component that acts as a storage awareness service for vSphere and mediates out-of-band communication between vCenter Server and ESXi hosts on one side and a storage system on the other. Storage vendors exclusively develop vendor providers.
ESXi hosts and vCenter Server connect to the Vendor Provider and obtain information about available storage topology, capabilities, and status.
Subsequently vCenter Server provides this information to vSphere clients, exposing the capabilities around which the administrator might craft storage policies in Storage Policy Based Management (SPBM).
Vendor Providers are typically set up and configured by the vSphere administrator in one of two ways:
Unlike traditional LUN and NFS-based vSphere storage, the Virtual Volumes functionality does not require preconfigured volumes on a storage side.
Instead, Virtual Volumes uses a storage container, which is a pool of raw storage capacity and/or an aggregation of storage capabilities that a storage system can provide to virtual volumes.
Depending on the storage array implementation, a single array may support multiple storage containers. Storage Containers are typically set up and configured by storage administrators.
Containers are used to define:
A Virtual Datastore represents a storage container in a vCenter Server instance and the vSphere Web Client. A Virtual Datastore represents a one-to-one mapping to the storage system’s storage container.
The storage container (or Virtual Datastore) represents a logical pool where individual Virtual Volumes VMDKs are created.
Virtual Datastores are typically set up and configured by vSphere administrators.
A one-to-one mapping is created between a vVols datastore and a storage container on the array. If another vVols datastore is needed, a new storage container must be created.
Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate.
ESXi uses protocol endpoints to establish an on-demand data path from virtual machines to their respective virtual volumes.
Protocol Endpoints are compatible with all SAN/NAS industry standard protocols:
Protocol Endpoints are set up and configured by Storage Administrators.
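The on-demand binding described above can be modeled in a short sketch. This is a conceptual illustration only (the class, IDs, and method names are invented; the real mechanism is the VASA bind operation): the host sees one protocol endpoint, and each vVol is addressed through it via a secondary identifier returned at bind time.

```python
# Conceptual sketch of on-demand binding through a protocol endpoint
# (invented names; the real mechanism is the VASA bind operation).

class ProtocolEndpoint:
    """The in-band I/O proxy: one visible LUN/mount, many vVols behind it."""
    def __init__(self, pe_id):
        self.pe_id = pe_id
        self.bindings = {}                   # secondary ID -> vVol

    def bind(self, vvol_id):
        secondary = len(self.bindings) + 1   # sub-LUN style secondary ID
        self.bindings[secondary] = vvol_id
        return (self.pe_id, secondary)       # the host addresses I/O with this pair

    def io(self, secondary, op):
        # All data-path traffic flows through the PE to the bound vVol.
        return f"{op} -> {self.bindings[secondary]} via {self.pe_id}"

pe = ProtocolEndpoint("naa.600a0b80-pe1")
path = pe.bind("vm1-data-vvol")
print(pe.io(path[1], "READ"))
```

The design choice here is scalability: instead of presenting one LUN per virtual disk, the array presents a handful of protocol endpoints that multiplex I/O to thousands of vVols.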
Here you can see the relationships between the key Virtual Volumes Components, including views from a top-down (VI Admin) perspective and bottom-up (Storage Admin) perspective.
Published Capabilities will vary depending on array types.
Once all of these components are in place, VI Admins can easily create VM Storage Policies to take advantage of Array capabilities as part of automating the Software-Defined Datacenter.
Now that you know the different components that make up the Virtual Volumes Architecture, let's examine how they are configured and utilized in our Lab Environment.
Our Lab uses nested virtualization technology from HPE: the Nimble VSA.
This Hands On Lab will be using the HTML5-based vSphere client.
The Nimble VSA Storage Providers have already been added to the Lab and they are in Online and Active status.
Virtual Volumes uses vSphere APIs for Storage Awareness (VASA) version 3.0.
As a reminder, these VASA providers are responsible for mediating out-of-band communication between vSphere and Storage Systems. ESXi hosts and vCenter connect to the Vendor Provider to obtain information about available storage topology, capabilities and status.
Note: The terms, "Storage Provider", "Vendor Provider" and "VASA Provider" can be used interchangeably.
Storage Containers are pools of raw storage capacity that a storage system can provide to Virtual Volumes.
Review the usage statistics on the Nimble Storage Container for vVols and note the Volume Type and Protocol indicates vVols via iSCSI.
This Storage Container was created by a Storage Administrator and was used in a previous module of this lab. As you've seen, this Container is visible to a vSphere Administrator when adding storage via vCenter (done via the 'New Datastore' Wizard when selecting 'VVol' as the Virtual Datastore type).
There will be 2 or 3 Protocol Endpoints of the type iSCSI from this Nimble VSA to vSphere depending on your work thus far. These represent the in-band connections to both esx-01a and esx-02a.
Protocol Endpoints are used to establish the data path between Virtual Machines and their respective Virtual Volumes. These Protocol Endpoints are using the iSCSI protocol; however, it is possible to use other SAN/NAS industry standard protocols like NFS, Fibre Channel, and Fibre Channel over Ethernet. In this case, iSCSI protocol endpoints can utilize any iSCSI interface or Fibre Channel connection for IO.
We'll now examine the Nimble configurations...
The Nimble vSphere Plug-in has been installed on both vCenter Servers for you. You may navigate back to your vSphere tab in Chrome to view the components. This Plug-in will be valuable for later modules in this lab. To view the Plug-in, follow the steps below:
We'll now inspect the full web interface.
(Note: Due to the nature of the Hands-on-Lab environment, it may be necessary to adjust the zoom settings on the Chrome browser window to view this plug-in properly - see below for an illustration on this setting. 80% zoom is recommended)
Here you can see the vVols that are available to vSphere via the Nimble VSA. Depending on the modules of this lab that you have completed, your display may have different vVols listed.
You may inspect the vVols further before continuing to the next steps.
The Nimble Protocol Endpoints are managed in vCenter. In the next steps, we'll see where these are visible in vSphere.
Notice that the Nimble VSA Protocol Endpoint is configured to use SCSI (leveraging iSCSI). As you learned earlier, Protocol Endpoints are used to establish the Data path between Virtual Machines and their respective Virtual Volumes.
Note that there are multiple Virtual Datastores present with type "vVol" that were added to vCenter by a VI Admin.
These Virtual Volumes are a one-to-one mapping to their Nimble Storage Containers and are ready for Virtual Machine usage.
Note: Based on the Modules you have completed in this Lab, you will see different datastores available. It is safe to continue to your next step if your datastores are not identical to those listed in the screenshot.
You have confirmed all of the key vVol Components within our Lab Environment. In a later Lab Module, you will create new Virtual Volumes all the way from End-to-End (Storage Array to vSphere).
In this Module you learned about the Key Components that drive Virtual Volumes Architecture and also examined them in action within our Lab Environment.
Congratulations on completing Module 2!
If you are looking for additional information on Virtual Volumes Architecture, try one of these:
Proceed to any module below which interests you most.
To end your lab, click on the END button.
Module 3 - Provisioning Virtual Volumes (30 minutes)
In this Module you will learn how Virtual Volumes are provisioned from end-to-end (Storage Array to vSphere) utilizing multiple Storage Vendor Appliances.
In this lesson you will see the end-to-end provisioning of a new Nimble Virtual Volume beginning with the steps that a Storage Administrator must perform on the Array and finishing with the steps that a VI Admin performs to add the vVol to vCenter.
Login with the User: admin Password: VMware1!
We will be provisioning a Block vVol Datastore on the Nimble Array that is accessible via vCenter using the iSCSI protocol. We will first verify that the Nimble array can communicate with vCenter through VASA (this was pre-configured for you).
Note: there are currently registrations for vcsa-01a.corp.local and vcsa-01b.corp.local
Scroll to the bottom of the status window for vcsa-01a.corp.local and vcsa-01b.corp.local. The checkboxes labeled VASA Provider (vVols) and Web Client are currently checked.
This step was completed in the setup of this lab. This is a critical step for vVols operations. Recall that VASA (vStorage APIs for Storage Awareness) is a set of APIs that enables vSphere vCenter to recognize the capabilities of storage arrays. You should receive a successful status message.
Nimble utilizes folders, also referred to as storage containers, to house the virtual volumes. To view the current folder configuration:
Here you can see the default Folders that were created for this lab.
Once vSphere starts using this container for virtual volumes you will see VMDK and other files stored natively within the folder.
Our final step is to view the Performance Policies
There are 25 pre-configured Performance Policies, as well as the capability to create your own. These policies will be used in vSphere during the creation of the VM Storage Policy.
We will now create a new vVol on the Nimble Array:
vVols on the Nimble Array are created in Folders. Change to the Folder view.
We'll use the Nimble vVols wizard to create our volume
You are presented with the "Create Folder" dialog box. Follow the example screenshot and instructions below to complete the required fields.
Upon creation, you are directed back to the Folder View. Notice RegionA01-HOL is now available.
We will now return to the vSphere Web Client to provision our Nimble storage based Virtual Volume.
NOTE - If you need to re-authenticate to vSphere:
Alternatively, you can enter a User name of administrator@vsphere.local and a password of VMware1!
Here you will see a list of all existing datastores that have been provisioned. Before adding a new Datastore you need to rescan the storage bus to capture any newly created devices.
1. Click "OK" to complete the rescan.
Here you will see a list of all existing datastores that have been provisioned.
In this dialog, we will choose what type of datastore you want to create.
You will need to give your new datastore a name that will be displayed in vCenter. For tracking purposes, give the datastore the same name you previously used on the array.
We'll need to select the hosts that have access to this newly created Datastore.
You are presented with a review screen. Review the contents to be sure they match the screen above.
You will be returned to the vCenter view and your newly provisioned datastore will be in the list. If the Status is unknown, you can refresh the view after a short while.
Congratulations! You've just created a new vVol on the Nimble storage array and presented it to 2 hosts in your vCenter!
Let's confirm that newly created datastore is presented to our hosts.
At this point your newly created RegionA01-HOL Datastore is ready for use in vSphere!
In this Module you learned how to successfully provision Virtual Volumes from end-to-end on the Nimble Storage Array and the vSphere Platform.
Congratulations on completing Module 3!
If you are looking for additional information on Virtual Volumes, try this:
Proceed to any module below which interests you most.
To end your lab, click on the END button.
Module 4 - Storage Policy Based Management with Virtual Volumes (30 minutes)
In this module, you will see the power of the Software Defined Datacenter in action.
By creating and applying storage policies to a Virtual-Volumes-based virtual machine, you will learn how easy it is to change the storage characteristics of this VM, on the fly, all without requiring an outage.
Before we create Storage Policies, we'll need to create the Nimble Array Replication.
While array-based replication with vVols is supported by many vendors, in this lab we will be working with the Nimble storage array. Setting up replication with the Nimble array and vCenter requires a few steps, all of which the lab will cover:
Login with the User: admin | Password: VMware1!
You are presented with the Nimble array dashboard.
On the Data Protection page
Please wait for the Replication Partners screen to appear, this may take between 10-20 seconds.
You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our second Nimble array.
In the next dialog, you have the option to create a policy that sets bandwidth limits on the replication. In production environments, this might be a requirement. In the lab, we will not create such a policy, thus using all available bandwidth for the replication.
You will be returned to the "Replication Partners" screen. Initially your connection will have a status of "unreachable" and connection tests will fail until we've configured the replication from the secondary array.
Open a new tab in the Google Chrome browser and click the NimbleB bookmark.
Login with the User: admin | Password: VMware1!
You are presented with the Nimble array dashboard.
On the Data Protection page
Please wait for the Replication Partners screen to appear, this may take between 10-20 seconds.
You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our first Nimble array.
You are returned to the "Replication Partners" screen. Initially the status should be listed as "Unreachable" just as in the set up on the first array.
You should be presented with an "Alive" status now. This is because we now have configured both arrays as replication partners.
1. Return to the NimbleA tab
Both source and destination should be showing "Alive"
If you experience the Chrome "cip" pop-up, you can safely choose to Always open links of this type and continue.
In our final section, we will create Storage Policies.
(Note this screen will update as you make selections)
(Note this screen will update as you make selections)
DO NOT click "Next" yet
(Note this screen will update as you make selections)
Review the settings
We need to create a policy on the secondary side that matches the primary side (for reverse replication needs during a failover). As our vCenters are in Enhanced Linked Mode, this task is done in the same interface, following the same steps as before, with three important differences in configuration listed below:
(Click here to jump back 9 pages in this manual to follow these instructions again using the 3 different variables above):
Please proceed to create the policy via previous steps and reference the above image to confirm the policy is created accurately.
Select Finish to continue
To take full advantage of all replication and DR options, we will be creating our test VM on the secondary site:
Notice how the available Datastores are automatically separated into Compatible or Incompatible datastores. nimbleb-vvol01 is listed as Compatible because we configured Replication and tagged these volumes properly in previous steps. Note: Depending on what modules you have completed in this lab, you may see different compatible volumes.
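The Compatible/Incompatible separation follows a simple rule that can be sketched as follows. This is an illustrative model of the idea, not the actual SPBM API (the function, capability names, and datastore names other than nimbleb-vvol01 are invented): a policy is a set of required capabilities, and a datastore is compatible when the capabilities its VASA provider advertises satisfy all of them.

```python
# Sketch of how SPBM separates datastores into Compatible/Incompatible
# (illustrative names; not the actual SPBM API).

def check_compatibility(policy_requirements, datastores):
    """Mark each datastore Compatible if it advertises every required capability."""
    required = set(policy_requirements)
    return {name: ("Compatible" if required <= set(caps) else "Incompatible")
            for name, caps in datastores.items()}

# Hypothetical capabilities advertised by each datastore's VASA provider.
datastores = {
    "nimbleb-vvol01": {"thinProvisioning", "replication"},
    "nimbleb-vvol02": {"thinProvisioning"},
}

result = check_compatibility({"replication"}, datastores)
print(result)
```

Because the check runs against capabilities the array itself advertises, compatibility updates automatically as providers and containers change, with no manual tagging by the VI Admin.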
Since we are using a nested virtualization lab environment with limited resources, we will configure a very small VM.
If your session has timed out, re-login with the User: admin | Password: VMware1!
We will view the newly-created Virtual Machine files on the Nimble Array in the Folders View.
Notice the Nimble-ReplicationVM files are now on NimbleB and in an Online state
Return to NimbleA in the Google Chrome browser
We will view the Virtual Machine files that were replicated from NimbleB to NimbleA in the Folders View.
Here you can see the VM files that have been replicated. They will remain "Offline" until a failover is triggered.
1. Click the vSphere Web Client browser tab to return to The RegionA vCenter
Due to the nature of the Hands-on-Lab environment, it may be necessary to adjust the zoom settings on the Chrome browser window to view this plug-in properly - see below for an illustration on this setting. 75% zoom is recommended
You may choose three actions to take on your VM once it is selected and available
This lab will continue with a Clone
Note that options 2 and 3 will be grayed out as you cannot take action on a VM that is actively being replicated from the upstream VM.
The Clone VM dialog box will appear providing you with Day and Time options for cloning.
1. Return to the NimbleA tab
If your session has timed out, re-login with the User: admin | Password: VMware1!
Note: It may be necessary to Refresh the table to see the newly cloned VM
As shown in the screen shot you may hover over the items listed to show that the Cloned VM is in the Online state while the original Replicated VM remains offline
1. Click the vSphere Web Client Browser Tab to return to RegionA vCenter
At this point, your VM is available for a variety of actions including:
Please feel free to explore these options before continuing to complete this module.
In this Module, you learned how to create and utilize Storage Policies for Virtual Volumes based Virtual Machines.
The ability to leverage policies for automation is a key tenet of VMware's Software Defined Datacenter vision.
Congratulations on completing Module 4!
If you are looking for additional information on Virtual Volumes storage, try the links below:
Proceed to any module below which interests you most.
To end your lab, click on the END button.
Module 5 - Virtual Volumes Advanced - SRM Integration and Failover (45 minutes)
SRM and vVols: SRM 8.4 is fully integrated with vVols using array-based replication. Users have access to all the capabilities of SRM, combined with the simplicity, manageability, and performance of vVols. Because this integration utilizes vVol replication, it supports a protection granularity as small as a single VM. When multiple VMs are replicated together in a replication group, that group is maintained as a consistency group. As an added benefit, the VASA provider replaces the SRA (Storage Replication Adapter), so ongoing operations and maintenance are easier than ever. To start with, Pure, HPE 3PAR, and Nimble are all design partners for vVol integration with SRM. Other vendors are working on their integrations and will be added soon.
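The replication-group idea above can be sketched with a small model (illustrative only; the class, state names, and VM names are invented, not SRM or VASA objects): a single VM can be protected on its own, and VMs placed in the same replication group fail over together as one consistent unit.

```python
# Sketch of per-VM protection granularity with consistency groups
# (illustrative model; not SRM or VASA code).

class ReplicationGroup:
    """VMs replicated together; the group fails over as one unit."""
    def __init__(self, name):
        self.name = name
        self.vms = []
        self.state = "source"

    def add(self, vm):
        self.vms.append(vm)

    def failover(self):
        # All members change state together: the replication group
        # is maintained as a consistency group.
        self.state = "failedOver"
        return [(vm, self.state) for vm in self.vms]

# A group can hold a single VM (per-VM granularity) or several related VMs.
group = ReplicationGroup("rg-app1")
group.add("web-vm")
group.add("db-vm")
result = group.failover()
print(result)
```

The practical consequence is that protection boundaries follow applications: one VM per group for fine-grained protection, or a multi-tier application per group when its VMs must recover to a mutually consistent point.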
Note: This is an advanced module; familiarity with SPBM and vSphere Replication is strongly recommended before continuing.
We will create an SPBM (Storage Policy Based Management) policy and replication pairing for this integration. Module 4 of this lab has detailed instructions on how to complete these setup steps. We will assume that you are familiar with these configurations, as this module will not provide detailed setup instructions.
If you just completed module 4, please end this lab and restart it to begin module 5.
Login with the User: admin | Password: VMware1!
You are presented with the Nimble array dashboard.
On the Data Protection page
Please wait for the Replication Partners screen to appear, this may take between 10-20 seconds.
You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our second Nimble array.
In the next dialog, you have the option to create a policy that sets bandwidth limits on the replication. In production environments, this might be a requirement. In the lab, we will not create such a policy, thus using all available bandwidth for the replication.
You will be returned to the "Replication Partners" screen after a brief delay. Initially your connection will have a status of "unreachable" and connection tests will fail until we've configured the replication from the secondary array.
Open a new tab in the Google Chrome browser and click the NimbleB bookmark.
Login with the User: admin | Password: VMware1!
You are presented with the Nimble array dashboard.
On the Data Protection page
Please wait for the Replication Partners screen to appear, this may take between 10-20 seconds.
You will be presented with the "Partner Creation" dialog in which we will enter all required information to connect to our first Nimble array.
There may be a brief delay while the connection is established. You should be returned to the Replication Partners screen once the connection is created.
You are returned to the "Replication Partners" screen. Initially the status should be listed as "Unreachable" just as in the set up on the first array.
You should be presented with an "Alive" status now. This is because we now have configured both arrays as replication partners.
1. Return to the NimbleA tab
Both source and destination should be showing "Alive"
In our final section, we will create Storage Policies.
(Note this screen will update as you make selections)
(Note this screen will update as you make selections)
DO NOT click "Next" yet
(Note this screen will update as you make selections)
Review the settings
We need to create a policy on the secondary side that matches the primary side (for reverse replication needs during a failover). As our vCenters are in Enhanced Linked Mode, this task is done in the same interface, following the same steps as before, with three important differences in configuration listed below:
Please proceed to create the policy via previous steps and reference the above image to confirm the policy is created accurately.
Select Finish to continue
Due to updates to the Hands-on Labs environment since the creation of this lab, we will be re-creating the "nimble-srm" VM. Please follow the steps below.
(NOTE: You may not see the nimble-srm VM if you've completed the previous steps quickly; recall there is a 5-minute replication schedule. You may safely restart the creation of your Protection Group - waiting 1-2 minutes should be sufficient)
This is helpful when using SRM in your real environment, as you can bring the VMs online in an isolated network if desired.
From here you can monitor the test and watch the steps as it proceeds.
While the recovery plan is running, let's go view the data on the Nimble arrays.
1. Return to the NimbleA tab
If your session has timed out, re-login with the User: admin | Password: VMware1!
Here you can see the volumes on the Nimble arrays that are being replicated. When looking at NimbleB, you will see the Replica tag attached to the alternate volume.
1. Return to the NimbleB tab
If your session has timed out, re-login with the User: admin | Password: VMware1!
Here you can see the volumes on the Nimble arrays that are being replicated. When looking at NimbleB, you will see the Replica tag attached to the alternate volume.
NOTE: You may notice a task timeout related to VMware Tools not responding. This is expected behavior, as the VM we are testing with is only a shell without an OS. This message can be safely ignored.
At this point you have completed the configuration of vVols with SRM!
In this Module you learned how to configure SRM and vVols integration.
If you are looking for additional information on Virtual Volumes storage, try the links below:
Proceed to any module below which interests you most.
To end your lab, click on the END button.
Module 6 - vVol Storage Vendor Partners (15 minutes)
vVols represents the collaboration of VMware and the storage ecosystem towards a new storage operational model centered on the VM. Major vendors participated in the design of vVols. The ecosystem continues to grow and strengthen with solutions ready for vVols. You have reached the end of the interactive portion of this hands-on lab. The remainder of this lab contains video links to these Vendor solutions.
vVols adoption continues to grow and is accelerating in 2021, and it's easy to see why. vVols eliminates LUN management, accelerates deployments, simplifies operations, and enables utilization of all of your array's functionality. VMware and our storage partners continue to develop and advance vVols and its functionality. In vSphere 7, more features and enhancements have been added, showing the continued commitment to the program.
We have included videos from our vVols partners below. Please feel free to review them.
In this Module you learned about some of our partners' vVols solutions.
Congratulations on completing Module 6!
Review any module below which interests you most.
To end your lab click on the END button.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-2205-02-HCI
Version: 20221022-014657