Lab Overview - HOL-2101-06-CMP - vRealize Operations Advanced Topics
Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time. The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.
The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.
This lab is a collection of feature-based modules that are designed to go into some depth in using several of the common components within vRealize Operations. The modules are all intended to be taken as stand-alone topics for people who want to become more familiar with using and getting value from vRealize Operations.
Lab Module List:
Lab Captain:
This lab manual can be downloaded from the Hands-on Labs Document site found here:
This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.
You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.
You can also use the Online International Keyboard found in the Main Console.
In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.
Notice the @ sign entered in the active console window.
When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Without full access to the Internet, this automated process fails and you see this watermark.
This cosmetic issue has no effect on your lab.
Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
Module 1 - Creating and Sharing Dashboards (30 minutes)
This Module contains the following lessons:
This lab environment is running two different instances of vRealize Operations, each used for a different purpose. The lab instances are:
In this lesson, we will be using the Live instance of vRealize Operations.
If you are not currently logged into any instance of vRealize Operations, continue to the next page, but if you are already logged into the live (not historical) instance of vRealize Operations, click here to skip ahead.
If your browser isn't already open, launch Google Chrome
The browser Bookmarks Bar has links to the different instances of vRealize Operations that are running in the lab.
1. The Live instance of vRealize Operations has already been started for you.
vRealize Operations is integrated with VMware Identity Manager which we will use for user authentication in this lab.
VMware Identity Manager should be pre-selected as the identity source. However, if it is not, select it.
For the Live instance of vRealize Operations, the default username and password should already be entered. However, if needed, type them in.
username: holadmin
password: VMware1!
You should be at the vRealize Operations Home screen and ready to start the module.
In this lesson, we will learn how to clone an existing dashboard and modify it to make it our own.
vRealize Operations 8.1 has numerous out-of-the-box dashboards that were created by industry experts who have a deep understanding of vRealize Operations as well as the characteristics and behavior of the underlying objects being managed. However, personalizing a Dashboard to fit a specific role or consolidate other information into a single view is a common use case for most administrators.
To start, we will clone an existing dashboard and make some simple changes to create a custom Overview Dashboard for our administrators. For this example, we will clone the existing Operations Overview dashboard and add the Scoreboard Health, Object Relationship and Top Alerts widgets. We will also minimize the three Top-15 widgets that are in the default dashboard so we will have more screen real estate in the dashboard.
Cloning the existing dashboards to create a new or modified dashboard is considered a best practice to ensure your custom content is not affected during an upgrade of vRealize Operations.
NOTE: If we are already on the Dashboard tab, we can skip this step.
We can now see the Operations Overview dashboard, which will be the basis for our own customized version of this dashboard. In order to modify this existing dashboard, we will first want to "clone" it and then modify the cloned version. We do not want to edit the default out-of-the-box dashboard so we don't potentially break the content and flow. We ALWAYS want to clone a dashboard and edit the clone or just create a brand new custom dashboard from scratch as a best practice!
In our custom dashboard, we want to minimize the three Top-15 widgets.
We see that since we cloned an existing dashboard, there are already relationships created from the "Select a Datacenter (DC)" object to the previously included widgets (Cumulative Up-Time of all Clusters, Alert Volume and the (3) Top-15 widgets).
Now we need to connect the "Select a Datacenter (DC)" widget to the (3) widgets that we just added (Scoreboard Health, Object Relationship and Top Alerts). We'll do this in the next step.
Here is where we need to connect and create the relationships between the "Select a Datacenter (DC)" widget and the three new widgets we have added. We will do this by dragging and dropping from the "Select a Datacenter (DC)" icon to each of the three icons in the new widgets we added.
We should now see the lab environment match the screen capture.
As we see here, we have connecting relationship lines from the Select a Datacenter (DC) widget to each of the 3 new widgets we added.
We see that the top (2) rows of widgets are the original ones that were in the default Operations Overview Dashboard.
Congratulations, we just completed the Clone and Modify Existing Dashboards lesson!
In this lesson, we started out by cloning the Operations Overview Dashboard and then customized the cloned dashboard. We minimized the three Top-15 columns and then added the Scoreboard Health, Object Relationship and Top Alerts widgets to our custom dashboard.
Up next, is the lesson on Creating a New Dashboard.
In this lesson, we will learn how to create a new dashboard from scratch.
We will create a brand new dashboard from scratch that will contain an Object List for a list of virtual machines. We will then add the following widgets to the dashboard as well:
NOTE: If we are already on the Dashboard tab, we can skip this step.
If you are starting with a blank Dashboard, you can select the action from the Getting Started section.
We now have to create the relationships between the widgets. We want to be able to click on a virtual machine in the Object List widget and have the rest of the widgets present the data associated with what we selected in the Object List.
After completing the previous steps, we should now see the connecting line from the Object List to the Object Relationship, Metric Chart and Health widgets. We will not be connecting the Object List to the (3) Top-N widgets since we want them to show the Top 10 virtual machines with contention for CPU, Memory and Disk Space. We will see this later once we are done configuring the entire dashboard.
We now need to go into the settings of the widgets to make some configuration changes so that they will present the appropriate data in each of the widgets.
Congratulations, we have completed the lesson on Creating a New Custom Dashboard!
In this lesson, we created a brand new custom dashboard that contained an Object List of virtual machines that had relationships to all the other widgets. However, we did not create the relationship from the virtual machine in the Object List widget to the Top-N widgets. This ensures that no matter which virtual machine we selected from the Object List widget, the Top-N widgets will always show the Top-10 VMs with CPU contention, Memory contention and Disk latency.
Up next is the lesson on Sharing Dashboards.
In this lesson, we will learn how to share the numerous dashboards available in vRealize Operations 8.1.
There are several very useful options for administrators to share dashboards with other personnel in their company. Now we can share a dashboard using a URL that can be given to ANYONE in our organization and they don't even need to be able to access our vRealize Operations environment. This is a super useful feature when we need to share performance or capacity information with others in the organization, but don't want them logging into our vRealize Operations instance.
We will see that we can also set an expiration time frame for the shared dashboard to be available. This is also really useful when you just want to give someone a view into a specific portion of the infrastructure for a limited period of time.
We can also share a dashboard through an email just by selecting the SMTP instance we have already set up in vRealize Operations and entering the email address of the recipient who should receive the new dashboard. Like with the other sharing options, we can also set an expiration time frame for the email.
We can even embed the dashboard into any other web page by simply copying the HTML code provided and pasting it into any system like Confluence or our own internal intranet portal.
Group sharing is simply giving dashboard access to any group that currently is set up through the authentication source we already have configured in vRealize Operations.
The final option gives us the ability to export the dashboard and move it to any other vRealize Operations environment. This is very useful when we have multiple vRealize Operations instances or we have a Development instance that we use to play with and make our custom content.
There is also a great new website called the Dashboard Exchange where we can contribute cool dashboards we have made. We can get to the Dashboard Exchange quickly at https://vrealize.vmware.com/sample-exchange/.
We have commonly seen the Network Operations Center (NOC) of an IT organization share the Operations Overview dashboard on the large monitors in their NOC. They have created web pages that contain various bits of information from various monitoring systems in order to minimize the number of monitors they have to have in the NOC. We can easily give them what they need by providing an embedded link to the dashboard that they can add to their existing web portal. That way they don't have to add an additional monitor to house the vRealize Operations dashboard. We will use this scenario in this lesson to learn how to share the Operations Overview dashboard with them.
NOTE: If we are already on the Dashboard tab, we can skip this step.
In our example, we want to share the Operations Overview dashboard with the Network Operations Center (NOC), so let's go to the Operations Overview dashboard.
To recap this scenario, the NOC personnel want to have the Operations Overview dashboard showing in the NOC at all times so they can monitor the virtual environment after hours. We need to share this dashboard with them, but remember they have a web portal that they use. Therefore, we will need to provide them the embedded link that they can simply add to their existing web portal.
In this example we can simply create a URL to provide to anyone so they can view the dashboard. For Link Expiry, we have the options to select (1) Day, (1) Week, (1) Month, (3) Months and Never Expire. We see that the link to the dashboard is already filled in. We would then click on the COPY LINK button to copy it to the computer's clipboard, allowing us to paste it into a file, email, etc.
In this example, we want to send the dashboard link to someone via an email address directly from the vRealize Operations interface. As a note, we won't actually be sending the link to the dashboard to an email address. We will just run through the steps as though we are going to.
Again, we have the options to select (1) Day, (1) Week, (1) Month, (3) Months and Never Expire. In this lab environment, we do not have an SMTP instance configured. In a production environment, we would configure this with the company email server information by clicking on the CONFIGURE button if it wasn't already configured within vRealize Operations. Then we would type the email address of the individual we are sending the link to. Finally, we would click the SEND button to send the email with the link to the dashboard to the receiver.
In the introduction of this lesson, we discussed the scenario of the Network Operations Center (NOC) having a web page that they wanted to embed the Operations Overview dashboard in. We will now go through the steps to provide them with the embedded dashboard.
In this example, we want to give only a previously established security group in vRealize Operations access to this dashboard. Currently the Everyone group has access to this dashboard, but we want to limit it to a specified group and remove the Everyone group.
We would then click on the INCLUDE button to give the group (or groups) access to the dashboard.
Let's pretend that the dashboard we are currently in is a custom dashboard that an administrator built and is not a default out-of-the-box dashboard. We want to export this dashboard because we have another instance of vRealize Operations in a Disaster Recovery (DR) datacenter and want to have the same dashboard in that instance as well. So we need to export the dashboard and then import it into the instance in the DR datacenter.
We see that it will download the dashboard as a ZIP file. We could then copy this ZIP file over to the DR site and then import it into that vRealize Operations instance.
That's it, we have gone through all the options for sharing dashboards in vRealize Operations 8.1!
Congratulations, we have just completed the Sharing Dashboards lesson which is the last lesson of Module #1 - Creating and Sharing Dashboards!
In this lesson, we learned how to share vRealize Operations 8.1 dashboards through various methods. We can share them via a URL, Email, Embedded file, Groups or Export the dashboard to import into another instance of vRealize Operations.
Up next is Module #2 - Creating and Modifying Views and Reports.
In this module, we first looked at how to clone an existing dashboard and then modify it to our needs. We did this because it is never a good idea to modify one of the default dashboards in vRealize Operations. Cloning and modifying the cloned copy ensures that nothing happens to the default pre-built dashboards.
Next, we created a new custom dashboard that had all the specific widgets we wanted in it. We added various types of widgets, configured each of them and then created the relationships between them.
Lastly, we reviewed the various options for sharing dashboards, which included copying the URL, sending via email, embedding the HTML, assigning specific groups and exporting the dashboard.
Congratulations on completing the lab module.
If you are looking for additional general information on vRealize Operations 8.1, try one of these:
From here you can:
Module 2 - Creating and Modifying Views and Reports (45 minutes)
This Module contains the following lessons:
This lab environment is running two different instances of vRealize Operations, each used for a different purpose. The lab instances are:
In this lesson, we will be using the Live instance of vRealize Operations.
If you are not currently logged into any instance of vRealize Operations, continue to the next page, but if you are already logged into the live (not historical) instance of vRealize Operations, click here to skip ahead.
If your browser isn't already open, launch Google Chrome
The browser Bookmarks Bar has links to the different instances of vRealize Operations that are running in the lab.
1. The Live instance of vRealize Operations has already been started for you.
vRealize Operations is integrated with VMware Identity Manager which we will use for user authentication in this lab.
VMware Identity Manager should be pre-selected as the identity source. However, if it is not, select it.
For the Live instance of vRealize Operations, the default username and password should already be entered. However, if needed, type them in.
username: holadmin
password: VMware1!
You should be at the vRealize Operations Home screen and ready to start the module.
In this lesson, we will create a view. A view can be used in dashboards and reports. A view is also viewable as its own content in the Details section of the vRealize Operations interface.
The view for this lesson is a starting point and intended to be a simple example to create. It will contain some basic metrics and properties for virtual machines.
The view creation wizard starts. Create a view with the following:
Name and Description
We've been working with Virtual Machine Properties, now we need to select Virtual Machine metrics.
Scroll down and Expand Memory
Clicking the box will toggle 'Select all' and 'De-select All'.
Make sure your screen matches the image. Nothing should be selected at this point.
Scroll down to find the following:
After clicking Save you will be in the view area again. The data we just selected will be displayed.
You should see the three properties and two metrics we selected. At this point, your view is created and saved.
The sum is for all the Virtual Machines contained in the view.
Because we used Virtual Machines as our subject matter, the view can be utilized for a single VM or anything that contains Virtual Machines like Hosts, Groups, Clusters, Datacenters, Applications, etc.
Feel free to navigate to a Host or any object that contains virtual machines to see the flexibility of a View.
This completes the Simple View creation. In the next lesson, we will show how to create a view with variable data.
In this lesson, we are going to create a custom view. The view will concentrate on Virtual Machine data but can be applied to any resource collected in vRealize Operations.
Views can be used within reports and dashboards. They also allow vRealize Operations users to see data within vRealize Operations.
Create a view with the following data:
Section 1. Name
Note: You may need to manipulate the screen by scrolling down in the configuration area.
You may have to scroll down to see the metric correlation area. There will be a link to select the correlated metric.
In the pop-up window:
With this correlation, we are going to see the value of CPU Ready (%) when the CPU Demand (%) is at a maximum.
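For example (with hypothetical numbers): if a VM's CPU Demand peaked at 85% during the selected period and its CPU Ready was 6% at that same collection time, the correlated column in the view would show 6% for that VM.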
In the center of the screen:
In the center of the screen:
Switch from Metrics to Properties using the drop down lists for all of these properties.
We now have a view that shows us the last CPU Demand collected for each Powered ON Virtual Machine. We also show the Maximum CPU Demand as a percentage for the last 30 days. The last value in our view shows us what the Ready % was when the demand was at maximum during the same 30 day period.
This is a very powerful feature of the product. While we are showing the Ready % when CPU demand is at its highest, you may wish to see what disk latency looks like when network transmissions are high. You can correlate any two metrics that are being collected in vRealize Operations.
This completes this lesson. In the next lesson, we will create a view with trended data.
In this lesson, we continue the concept of creating custom views. This time, we will create a view with data that is trended over a period of time.
NOTE: Be aware that properties cannot be trended, only metrics.
Once the metric is in the view, single-click on it and change the following:
You now have a view that shows the selected virtual machines' read latency trended over the last 30 days. While we unchecked the forecast data option, leaving it checked would have trended the forecast of the selected metrics for up to a year.
You have completed this lesson. The next lesson will show how to create a view with distribution data.
If you've completed the previous lessons in this module, you have already created various views. In this lesson, we continue creating custom views with data transformation. With data transformation, we can represent the maximum value as well as use expressions to show datacenter VM growth.
For this datapoint we are adding our own expression for growth. To show the growth of VMs per datacenter we will use this expression: (((last-first)/first)*100). This will give us the percentage of VM growth for the time period of this view.
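As a quick sanity check with made-up numbers: if a datacenter started the view's time range with 10 VMs (first) and ended with 12 VMs (last), the expression evaluates to (((12-10)/10)*100) = 20, meaning 20% VM growth for the period.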
When we look at the preview data, it is always best to ensure the view is working correctly and showing the data we want to represent. Notice that the VM current and Max values are the same, and we have no VM growth. Now we will make a change in the environment to make our new expression work!
Note: it may take a minute for the next collection cycle to refresh the content.
You will now see your new report at the bottom of the report list.
Now we have a report that includes details about the growth of VMs in each datacenter. We can send this to leadership to identify the growth trends each month, each week, or every day!
Let's restart the web-01a VM that we shut down earlier because this VM will be needed in future lessons.
This concludes the Create a View that shows VM Growth Lesson.
If you've completed the previous lessons in this module, you have already created various views. In this lesson, we continue creating custom views with the Distribution view. The distribution view gives us the ability to create pie charts based on data from a selected object type.
Our view is blank because we haven't selected a data source.
We will now have a distribution of the VMware Tools versions in the environment!
We have completed this lesson on Creating a View with Distribution Data! In the next lesson, we take you through the process to put views and dashboards into reports.
In this lesson, we show how to create custom reports using views and dashboards.
We have the ability to include the following:
For a Cover Page:
Table of Contents
Footer
Don't make any changes here, we will use the default settings.
Each view and dashboard can be oriented to portrait or landscape mode. For dashboards in a report, landscape will likely be a better choice to simulate the aspect ratio of a monitor. Some dashboards require scrolling. When a dashboard is too large to be displayed on the screen, it will not fit into a report very well either. Make sure Assess Cost is set to Landscape.
We can now see our new report in the Reports List.
The report will be available as a PDF for viewing. Note, it may take a moment to complete before the PDF is available.
Clicking the PDF icon will allow you to save it to the local drive. Feel free to save and open the PDF to see the results.
You have completed the last lesson in this module. You should now have an understanding of how to create new views. You also now have the tools to create reports from any view or dashboard.
In this module you explored a few approaches for creating new and customizing views and reports in vRealize Operations.
Congratulations on completing the lab module.
If you are looking for additional general information on vRealize Operations 8.1, try one of these:
From here you can:
Module 3 - Use Symptoms and Recommendations to Create Alerts (30 minutes)
vRealize Operations Alerts are similar to rules used for years in monitoring critical IT resources. However, previous rule-based systems tended to be static and difficult to build, deploy, and maintain. vRealize Operations leverages built-in analytics and pre-defined content to provide a dynamic, effective, and scalable approach for identifying and resolving issues in your environment.
Alert Definitions consist of the following components that raise alerts, provide recommendations, and take automated actions to resolve the issues:
Symptoms
Symptoms define conditions that trigger when they evaluate to true; they are based on metrics or super metrics, message events, or fault events. A symptom set combines one or more symptom definitions by applying an Any or All condition that can trigger the alert.
Recommendations
Recommendations are the remediation options provided to resolve the issues. Recommendations are provided by domain experts and can be extended to include tribal knowledge, local procedures, etc.
Actions
Actions are accessible in several places inside of vRealize Operations. They can link to recommendations for the user to execute after review, or be fully automated to execute when the alert is triggered.
This Lab
Upon completing this lab, you will be able to:
This lab environment is running two different instances of vRealize Operations, each used for a different purpose. The lab instances are:
In this lesson, we will be using the Live instance of vRealize Operations.
If you are not currently logged into any instance of vRealize Operations, continue to the next page, but if you are already logged into the live (not historical) instance of vRealize Operations, click here to skip ahead.
If your browser isn't already open, launch Google Chrome
The browser Bookmarks Bar has links to the different instances of vRealize Operations that are running in the lab.
1. The Live instance of vRealize Operations has already been started for you.
vRealize Operations is integrated with VMware Identity Manager which we will use for user authentication in this lab.
VMware Identity Manager should be pre-selected as the identity source. However, if it is not, select it.
For the Live instance of vRealize Operations, the default username and password should already be entered. However, if needed, type them in.
username: holadmin
password: VMware1!
You should be at the vRealize Operations Home screen and ready to start the module.
For this lesson, we will start by creating a Symptom Definition. Symptom Definitions enable vRealize Operations to identify problems with objects in your environment. These Symptom Definitions will then trigger Alerts when conditions qualify as problems. In this scenario, the condition to monitor is the high CPU workload on the virtual machine "app-01a". Creating one or more Symptoms enables them to be added to an Alert Definition. When a symptom is triggered, vRealize Operations will then issue an alert. In this lesson, we'll go through this in more detail.
Alert, Symptom and Recommendation definitions are all managed under the Alerts tab.
Configure the Symptom Definition with the following parameters.
CPU|USAGE
and hit Enter.
High CPU
for the symptom name.
95
as the value the symptom must exceed to be triggered.
Alert Definitions are a combination of symptoms and recommendations you can use to identify problem areas in your environment and generate alerts.
To create Alert Definitions:
High CPU Alert
for the alert name.
Alert Impact
Alert Impact settings and their definitions are shown below. These settings determine how your alert will be classified and triggered.
Note: The default settings will be used in this scenario.
For Criticality, you can select one of the following values:
Finally, choose settings for your cycle, which are data collection intervals.
high cpu
and press the Enter key to filter the Symptom Definitions to what we created in a previous step.
Now, we will define a new Recommendation for our custom alert based on our organization's policies.
For Production Virtual Machines, please assess the trend and add CPU Resources if trend is high.
All development machines are shut down and the developer is notified.
production
in the filter field and hit Enter.
Verify that the Alert exists.
High CPU
in the Alert Definitions quick filter and then press the Enter key to reduce the Alert Definition list. Now that our symptoms and alert have been configured, we're ready to test it out!
We will now redirect /dev/zero to /dev/null to generate CPU load so that we can see the impact on the VM in vRealize Operations.
cat /dev/zero > /dev/null
and press the Enter key to start the CPU load. Leave this PuTTY window open; we'll come back to it later in the lesson.
app-
to search for objects beginning with "app-".
Set up the CPU graphs by completing the following:
As shown here, we can see quite a bit of information about this particular object that we've selected. Under Active Alerts, we see we have 2 Critical Alerts.
In the Alerts Tab, we see all of the alerts related to this vm app-01a.
Note: You may see additional alerts for this VM as there are other alerts active within our environment. If it does not show as Critical, you may need to hit Refresh in the top right corner.
From this Alerts screen we can see details about the alert. We can see our Recommendation text we entered earlier, and again we see the POWER OFF VM ACTION button where we could manually kick off the action we configured earlier which was the shut down the VM.
From here we can see the CPU chart and we see the timing and details of this alert. We will now stop the CPU load so that we can complete some additional configuration to enable the automation of our configured recommendation for this High CPU alert.
We've seen how we can manually create Alerts and Recommended Actions based on Symptom Definitions. Now let's end this part of the lesson and look at how we can automate these Recommended Actions by using the vRealize Operations Policies.
Return to your open PuTTY window. Closing this PuTTY session will end the CPU load, and the alert will clear.
Here, we will create a custom policy for test VMs to enable the system to act based on the VM's policy assignment. In this case, we will automatically power off test VMs that spike CPU usage to prevent them from causing resource constraints in the virtual environment. By using the HOL Policy, all settings in that policy will be applied if they are not explicitly set in our new policy.
HOL Test Policy
in the Name field.
The policy allows us to set the action to be run at the time of an alert. In this case, we are adding resources, so it is important to know that the OS must support hot add for the changes to take effect. The critical VM in our lab does, so the change should take effect and the alert should clear. In cases where the OS does not support the change, the action would run but not take effect until the system is rebooted.
high cpu
in the filter box and hit Enter.
We will now create a new group for test VMs and apply our HOL Test Policy to it. In this lab, we only have one test VM, but we will be able to configure the group to add additional machines dynamically and apply our policy.
Test VMs
in the Name field.
app-01a
for the selection criteria.
Verify our critical VM has the newly assigned policy.
app-
Redirect /dev/zero to /dev/null to generate CPU load again; this will trigger the alert and show how it behaves with the new policy.
cat /dev/zero > /dev/null
and press the Enter key to start the CPU load. Again, leave this PuTTY window open.
Let the CPU load command run for a couple minutes, and then return to vRealize Operations and check the alerts from the Alerts screen.
Note, you may need to hit refresh in the upper right hand corner. The High CPU Alert will not show until the next collection cycle runs.
We looked at the alert previously, so now we'll check the recent tasks and check the status of the action.
Let's take a look at the VM in the vSphere Client to ensure that the action has turned off our app-01a VM.
After completing the last lessons, you may still have vCenter open in a separate tab. If not, we need to open it now.
Automating actions in vRealize Operations is a key part of creating a Self Driving Datacenter!
Let's restart the app-01a VM as it may be needed in later lessons.
Let's keep the default Power On Recommendation.
Return to your open PuTTY window. Closing this PuTTY session will end the CPU load.
This concludes the Using Symptoms and Alerts to Trigger Recommendations and Actions Lesson.
Thank You.
Self-driving operations by VMware vRealize Operations automates and simplifies IT operations management and provides unified visibility from applications to infrastructure across physical, virtual and cloud environments. We hope in this module you learned how Intelligent Remediation helps predict, prevent and even take action to resolve issues upon detection. VMware vRealize Operations allows for faster troubleshooting with actionable insights that correlate metrics and logs, and unified visibility from applications to infrastructure.
Congratulations on completing the lab module.
If you are looking for additional general information on vRealize Operations 8.1, try one of these:
From here you can:
Module 4 - Create Super Metrics Using the Super Metric Editor (45 minutes)
In this module you will learn about super metrics in vRealize Operations - how to create them and how to choose where they are calculated.
Super metrics have been available in vRealize Operations since the first version of the product. However, in newer versions VMware introduced a new way in the user interface to create, edit and apply super metrics to object types.
What is a Super Metric?
In vRealize Operations a super metric is a mathematical formula that contains one or more metrics or properties. It is a custom metric that you design to help track combinations of metrics or properties, either from a single object or from multiple objects. If a single metric does not inform you about the behavior of your environment, you can define a super metric.
After you define it, you assign the super metric to one or more object types. This action calculates the super metric for the objects of that object type and simplifies the metrics display. For example, you define a super metric that calculates the average CPU usage on all virtual machines, and you assign it to a cluster. The average CPU usage on all virtual machines in that cluster is reported as a super metric for the cluster.
You can define whether or not super metrics are calculated for a given group of objects by enabling or disabling them in the policy that is applied to that group of objects.
Because super metric formulas can be complex, plan your super metric before you build it. The key to creating a super metric that alerts you to the expected behavior of your objects is knowing your own enterprise and data.
This lab environment is running two different instances of vRealize Operations, each used for a different purpose. The lab instances are:
In this lesson, we will be using the Live instance of vRealize Operations.
If you are not currently logged into any instance of vRealize Operations, continue to the next page, but if you are already logged into the live (not historical) instance of vRealize Operations, click here to skip ahead.
If your browser isn't already open, launch Google Chrome
The browser Bookmarks Bar has links to the different instances of vRealize Operations that are running in the lab.
1. The Live instance of vRealize Operations has already been started for you.
vRealize Operations is integrated with VMware Identity Manager which we will use for user authentication in this lab.
VMware Identity Manager should be pre-selected as the identity source. However, if it is not, select it.
For the Live instance of vRealize Operations, the default username and password should already be entered. However, if needed, type them in.
username: holadmin
password: VMware1!
You should be at the vRealize Operations Home screen and ready to start the module.
Before we jump into creating super metrics, it is first important to understand that vRealize Operations maintains several hierarchical relationship trees. Whenever you install additional management packs for extensibility, each management pack will add at least one additional hierarchy in vRealize Operations.
This is important to understand in the context of super metrics because, unless you are creating a new metric on an object or object type that is based only on metrics from that same object/object type, you will need to know where in the hierarchy the related object types are. For example, in the vSphere Hosts and Clusters hierarchy, a virtual machine is a child of a host. If you want to create a super metric for hosts that shows the average CPU usage across all virtual machines that are running on a given host, you need to write your super metric formula with the proper syntax to look one level down from the host to the virtual machines for the metric inputs to the super metric.
We will focus here on the vSphere Hosts and Clusters hierarchy because that's the one we will be using for the examples in this lab module. The hierarchy is shown in the graphic. There would also be other object types in the hierarchy if they existed in our lab vCenter server (for example, resource pools).
For this hierarchy you can see that virtual machines are two levels below clusters. And that vSphere hosts are one or two levels above datastores (this dual relationship can be found in other places as well). In the super metric formulas, the relationship distance (number of hops) is represented by the depth parameter and we will use that parameter in some examples later in this module.
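To make the depth parameter concrete, here is the general shape of such a formula; the metric name is illustrative only, and the real formulas are built step by step in the lessons that follow:
avg({Virtual Machine: CPU|Usage (%), depth=1})
Applied to a host, depth=1 loops over the virtual machines one level below it; applied to a cluster, you would need depth=2 to reach the virtual machines two levels down.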
To see another way of looking at the vSphere Hosts and Clusters hierarchy within vRealize Operations:
The levels of indentation in this view indicate the relative depth of each object type.
To see the available hierarchies within vRealize Operations:
As stated earlier, if additional management packs were installed for extensibility (for example, NetApp or Dell EMC storage) hierarchies for those objects would also be here.
In this first example, we will create a simple super metric and explore the depth parameter in a super metric formula.
Your first assignment is to create a super metric that will calculate the average memory utilization across all virtual machines running on a vSphere host or in a vSphere cluster. This is an example of creating a metric on an object (host or cluster) that is based on metrics from related objects (virtual machines).
If you recall from a previous lesson, we learned that virtual machines are children of hosts and "grandchildren" of clusters in the vSphere Hosts and Clusters hierarchy. So if we create a super metric on the cluster object type and on the host object type and have it look one or two levels down the hierarchy to average the metric representing memory usage on virtual machines, we will have completed the assignment for this lesson.
Before we get started with the super metric, let's understand which virtual machine metric we will be using for this lesson. Since we want to average a vm metric (memory utilization), let's go find a vm to see what metrics are available. We will take a look at the web-01a virtual machine.
Now that we know which virtual machine metric we will be using, let's navigate to the new super metric editor window. The new super metric workspace can be found in the Administration section of vRealize Operations.
You will note that there are already some super metrics defined here. They are used in a different lab and can be ignored for this lesson.
Let's enter some basic information about the super metric. You want to create a name that is descriptive enough so you or others will understand what it is calculating when you use it later in dashboards or reports or alert definitions. It is also a good idea to include the unit of measure in the metric name - in this case we will calculate the value in gigabytes (GB).
If you haven't used a recent version of vRealize Operations this screen may look a little different. We'll cover some of the newer features and differences as we build out our super metric.
The previous editor workspace had buttons above for Show Formula Description and for Preview.
Note that if you do expand the Legacy section and want to get back to the new editor, just click the arrow on the Legacy header to minimize it.
The list includes looping functions (avg, combine, count, max, min and sum) that work on more than one input value and can return either a single value or a set of values depending on the formula syntax. The remainder of the functions are single functions. They work on only a single value or a single pair of values.
To better understand the concept of looping functions, think about the example metric we are going to create in this lesson. We want to look for all descendant virtual machines (could be one or could be many), get the value for memory usage for each of those virtual machines, and then calculate an average of those values, which we will then store as a single super metric on our object (in this case a vSphere host or cluster). In the process, we will use a looping function to "loop through" all of the descendant virtual machines to get the memory usage metric value for each one.
Note: The product documentation for super metric functions and operators can be found here.
Operators are mathematical symbols and text to enclose or insert between functions. There are numerical operators and string operators
Note that string operators are only valid in 'Where' condition clauses. We will take a look at some different operators in the upcoming pages.
The editor is context sensitive and has hints to help guide you when creating a formula. To see some helpful hints,
We will be making use of the ctrl+space combination in the following steps.
Recall that we want to create an average of the memory usage across all virtual machines on our host or in our cluster, so let's start by adding the avg function to our formula.
Note you are presented with three choices here:
You could scroll down the list of object types until you see Virtual Machine but let's make use of some filtering to make things easier.
Recall that we want to create an average of the memory utilization across all virtual machines on our host or in our cluster.
You could scroll down the list of metrics until you see Memory|Utilization but let's make use of some filtering to make things easier.
Note that the units of memory utilization are in KB but we want our super metric value to be in GB. That's OK because we can just add the additional math to the formula to do the conversion from KB to GB.
In order to convert our resulting value in kilobytes (KB) to gigabytes (GB), we need to divide the resultant average by 1024 to get to megabytes (MB) and then divide again by 1024 to get to GB.
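Assuming the formula has been built as described so far (metric names shown as they were selected in the editor), it should look roughly like this:
avg({Virtual Machine: Memory|Utilization, depth=1})/1024/1024
The avg(...) portion returns the average VM memory utilization in KB, and the two divisions convert that value to GB.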
Here you can see your super metric definition. Now we want to see if it calculates as expected. Luckily, the wizard allows you to preview the metric applied to any object.
Let's test our metric on the esx-02a.corp.local vSphere host. There are two VMs running on that host so we should see the average memory utilization across the two VMs.
You should see a preview of your super metric on the esx-02a.corp.local host. Note that your values will likely be different and you may or may not see the graph cover the entire time period depending on how long your lab environment has been running before you started this lesson.
Since we wanted our super metric to show the average vm memory utilization for hosts and clusters, let's test our metric on the Compute Cluster A vSphere cluster. There are five VMs running in that cluster so we should see the average memory utilization across the five VMs, right?
The depth parameter in a super metric formula is used to tell vRealize Operations how far down (or up) the object hierarchy to look for the objects and their metrics when performing the calculation. As mentioned earlier, within vRealize Operations there are multiple hierarchies (or traversal specs). Each adapter type will usually have at least one hierarchy. For example, the vCenter adapter creates vSphere Hosts and Clusters, vSphere Networking and vSphere Storage hierarchies.
If we look at the vSphere Hosts and Clusters hierarchy, it goes (from top to bottom): vSphere World --> vCenter Server(s) --> vSphere Datacenter(s) --> vSphere Cluster(s) --> vSphere Host(s) --> Virtual Machines --> Datastores. So in our case we want to calculate our super metric based on one (host --> vm) or two (cluster --> host --> vm) levels down the hierarchy. If you look at our super metric formula, you see that depth=1 was added automatically, which is why the preview worked on the esx-02a host (the VMs were one level below the host) but not for the Compute Cluster A cluster (the VMs were two levels below the cluster).
Something else you might notice about the depth parameter is that a positive value (1 in this case) will look down the hierarchy. If we wanted to look up the hierarchy, we would need to use a negative value for the depth parameter. That might seem opposite from what you would expect but you just need to remember that rule: positive depth = look down, negative depth = look up.
So let's update our formula to get it to look two levels down the hierarchy.
Now the formula is calculating the average VM memory utilization for our cluster. But does that mean it won't work for hosts any longer? Since it is looking two levels down the hierarchy for VMs, will it look past the VMs when applied to a host? The good news is that it will still work for hosts. In fact, a depth of 2 means it will look down one level and two levels. A depth of 5 would look down one, two, three, four and five levels for VMs (or whatever object type is in the formula).
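With that change, the formula (sketched here with the same assumptions as before) becomes:
avg({Virtual Machine: Memory|Utilization, depth=2})/1024/1024
and it will now produce a value when previewed against either a host or a cluster.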
Next we need to tell vRealize Operations what are the valid object types that our new super metric can be assigned to. This will limit the scope of available object types where the super metric can be calculated.
You can click the Select an Object Type drop-down and traverse through the hierarchies to find your object types or to save time, filter the list of object types and then select what you want.
The final (optional) step is to enable the super metric for the object types in one or more policies. If you don't enable the metric calculation in a policy here, you will have to go edit the policy(ies) where you want to enable the calculation later in the policy editor.
In our lab we only have one policy that is being used (the Hands On Lab Policy). In a production environment you might have several or more policies active in vRealize Operations. If you have multiple active policies you will see all of them listed on this screen and you can select which policies you want to activate the super metric calculation in for each object type.
Congratulations! You have created your first super metric and applied it to two object types in the active policy in your lab environment. There are a few more lessons ahead where we will explore creating other super metrics to learn about some additional super metric features. If you want to skip ahead and see the results of your work, use the Table of Contents at the top of the lab manual to jump past the other super metric creation lessons.
Let's create another super metric. For this example, the assignment is to use a super metric to calculate the percentage of a datastore's capacity that is being used to store virtual machine snapshots.
If you recall from a previous lesson, we learned that a datastore is a child of hosts and of virtual machines in the vSphere Hosts and Clusters hierarchy. In this case, we will be using the VM <--> datastore relationship. Note in the graphic (and in our lab environment) that the RegionA01-ISCSI0 datastore supports seven virtual machines and four hosts (4 objects in the Host System graphic). So if we create a super metric on the datastore object type and have it look one level up the hierarchy to create the sum of the metric representing snapshot space on virtual machines, we will have completed the assignment for this lesson.
Before we get started with the super metric, let's understand which virtual machine metric we will be using for this lesson. Since we want to sum a VM metric (disk snapshot space), let's go find a VM to see what metrics are available. We will again take a look at the web-01a virtual machine.
Repeat the process to launch the wizard for creating a new super metric. From Administration --> Configuration --> Super Metrics:
The formula will be: The sum of the snapshot space from all VMs on the datastore divided by the total capacity of the datastore.
Start typing your formula - let's find the sum of the snapshot space from all VMs on the datastore that this super metric will be calculated on. Or you can select the function from the Functions drop-down.
We need the virtual machine object type.
Remember if you need hints during the formula creation, use the ctrl+space key combination.
Let's select the metric. Remember from earlier in this lesson that we will be using the Disk Space|Snapshot Space (GB) metric
Note that the metric we want to use shows up both in the Metric Type category and the Metric category. Metric Type is a general attribute and should be used any time there might be more than one instance of the metric on an object (for example a CPU core's usage where there are multiple cores in the host. Or the space used by individual snapshots when there are multiple snapshots on the virtual machine). In this case, the Disk Space|Snapshot Space is just a single metric that represents the total snapshot space used by the VM across all snapshots (if there are more than one).
We have the numerator of our formula (the sum of the snapshot space from all VMs on the datastore). Let's add the division operator and get ready to add the denominator.
What happens when depth=0?
Let's take the example we are working on from the perspective of the datastore. The metric will be applied to datastore objects and we want to know for each datastore, what is the sum of the disk snapshot space from all of the VMs attached to that datastore (VMs are the parents) and then divide the sum by a metric on the datastore itself (the total capacity of the datastore). So if we are going to create a metric that will be attached to datastore objects and one of the calculation inputs is a metric from the datastore object itself, can we just say object type = datastore and depth = 0 in the super metric formula? Actually, there is special syntax for this type of situation ... instead of saying depth=0, it entails prefacing the metric or metric attribute with 'This Resource' and there is a special way of building that into the metric definition - the THIS button in the editor.
Let's see how it works.
We know we want the object type to be Datastore.
Select the Total Capacity metric.
The result at this point will be a ratio of the sum of the snapshot space metric for all of the VMs on a datastore divided by the total capacity of that datastore. To convert it to a percentage, we just need to multiply by 100.
Let's see how our super metric looks.
Do you remember the relationship hierarchy between datastores and VMs? Do you remember how the depth parameter works in a super metric formula?
In this case, virtual machines are parents of datastores. Our depth parameter on the datastore object is 1. Remember that a depth of 1 means one level down the hierarchy. But here we need to look up the hierarchy one level - from the datastore to the VM. So instead of depth=1, we need to have depth=-1.
Remember? Positive depth means look down the hierarchy. Negative depth means look up the hierarchy.
Let's fix the depth parameter and try the preview again.
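After the depth correction, the completed formula should look something like the following sketch; the snapshot metric is the one we selected earlier, and the capacity metric label is approximate, so match it to whatever you picked in the editor:
sum({Virtual Machine: Disk Space|Snapshot Space (GB), depth=-1}) / {This Resource: Capacity|Total Capacity (GB)} * 100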
Assign the super metric to the datastore object type.
Just like in the last lesson, we need to enable the super metric in one or more policies if we want it to actually be calculated and then we can finish the process.
This topic confounds many people when they first start creating super metric formulas so it's worth spending some time to understand when you might run into this issue and how you can work around it. If you remember back in the lesson where we created our first super metric, there was a discussion about super metric functions and it was stated that the list of available functions includes looping functions (avg, combine, count, max, min and sum) that work on more than one input value and can return either a single value or a set of values depending on the formula syntax. The topic of this lesson centers on that notion of "either a single value or a set of values" depending on the syntax.
If you think back to the discussion about hierarchies in vRealize Operations, you will recall for example that in the vSphere Hosts and Clusters hierarchy, virtual machines are children of hosts and that a virtual machine's parent is a host. We understand that a host can have one or more VMs as children but that a VM can only have a single host as its parent. But if we think about the relationship between hosts and datastores, we realize that a host can have one or more datastores as descendants and a datastore can have one or more hosts as ascendants. We know this because we understand vSphere enough to realize that. However, vRealize Operations really has no way to know whether relationships between particular objects or object types are one-to-one or one-to-many. This is the thing that can cause confusion when creating a super metric formula until you understand the concept and how to work with it.
In this lesson we will explore this concept by creating a super metric that can be applied to virtual machines. It will calculate the percentage of a vSphere cluster's usable memory that the VM is using. For example, if a cluster has 200 GB of usable memory and a VM in that cluster was demanding 4 GB of memory, our value should be 4/200*100 (to make the ratio into a percentage). The assignment will require us to use some concepts that we covered in the previous lessons and will address the issue discussed above.
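Working through those hypothetical numbers: 4 / 200 = 0.02, and 0.02 * 100 = 2, so the super metric would report a value of 2 (percent) for that VM.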
Click ADD to create a new super metric (not shown this time but follow the same procedure as the previous lessons).
Since the super metric will be applied to virtual machines and the first metric (the numerator) in the formula is the vm's memory demand we will again use the THIS button here.
Since we want the VM's memory utilization metric,
Before continuing, notice that the THIS button is still depressed. If we leave that toggled on and end up with "This Resource: ..." for the cluster metric then the formula isn't going to work because we will be applying it to virtual machines. So we need to remember to toggle that button off when we are done with it.
Be sure to select the correct metric here. There are a lot of similarly named metrics returned by the filter.
Remembering what we learned earlier about the depth parameter and knowing that vSphere clusters are two levels above VMs in the hierarchy, we need to adjust the value. Remember for the depth parameter, a positive number means look down the hierarchy while a negative number means look up.
OK. We're done, right? Let's preview the super metric by selecting a virtual machine in our inventory. In the Preview section,
Uh oh. We got an error - Cannot convert aggregated result to number. This is the issue that was discussed at the beginning of the lesson. Remember that while we know there can only be one cluster as an ascendant (2 levels up) from the VM, vRealize Operations doesn't have any way of knowing that. As far as vRealize Operations knows, there could be a set of cluster objects that are two levels above the VM.
So how do we handle this? We need to modify the formula using a looping function. If you recall from the beginning of the lesson, it was reiterated that looping functions (avg, combine, count, max, min and sum) work on more than one input value and can return either a single value or set of values depending on the formula syntax. What does that mean in this context? It means we can use many of those looping functions to convert the results of the cluster portion of the formula to a single value. Essentially we can tell vRealize Operations to take the avg or min or max or sum of the values from all clusters above the VM and return a single number representing the calculation. What is the average or minimum or maximum or sum of a single number? It's that number.
In this case, we will use the max function (to find the maximum value from a set of one).
To add the function to the formula,
For reference, here is the completed formula so far: {This Resource: Memory|Utilization} / max({Cluster Compute Resource: Memory|Usable Memory, depth=-2})
There is only one thing left to do to complete this formula.
The formula is returning the ratio of VM memory utilization to cluster memory capacity, but the assignment was to calculate the value as a percentage, so we need to multiply the result by 100.
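For reference, after adding the multiplication, the completed formula should look similar to this (the exact placement of the parentheses may differ slightly from what you end up with):
({This Resource: Memory|Utilization} / max({Cluster Compute Resource: Memory|Usable Memory, depth=-2})) * 100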
You can refresh the preview below again to see the value as a percentage now.
Assign the super metric to the virtual machine object type.
Just like in the previous lessons, we need to enable the super metric in one or more policies if we want it to actually be calculated and then we can finish the process.
Super metrics can also include some logic in the formula. In this lesson we will look at using the "where" clause and a string operator to evaluate a VM property (the guest OS).
The task this time is to determine the total number of VMs in our datacenter that are running some variant of the CentOS operating system.
The following string operators are available for use in a super metric formula. Note that string operators are valid only when used in a "where" clause, to evaluate whether or not the specified text exists in the metric or property string.
String Operators | Description
---|---
equals | Returns true if metric/property string value is equal to specified string.
contains | Returns true if metric/property string value contains specified string.
startsWith | Returns true if metric/property string value starts with the specified prefix.
endsWith | Returns true if metric/property string value ends with the specified suffix.
!equals | Returns true if metric/property string value is not equal to specified string.
!contains | Returns true if metric/property string value does not contain specified string.
!startsWith | Returns true if metric/property string value does not start with the specified prefix.
!endsWith | Returns true if metric/property string value does not end with the specified suffix.
Let's first take a look at the VM property we are going to use in this super metric formula.
The guest operating system name is contained in the Guest OS Full Name property for a VM that is running VMware Tools.
On the app-01a object page:
We will create a super metric that counts all of the VMs with the text "CentOS" in that property field and then we can apply the super metric to our datacenter object type.
Let's go create the super metric using that Guest OS Full Name VM property.
Remember that we want to count the number of VMs running the CentOS operating system so we will use the count looping function.
At the cursor position (between the parenthesis):
At the cursor position:
Remember that we are going to want to apply this metric at the vSphere Datacenter object level. Going back to our discussion earlier about depth, we will need to set the depth to Datacenter --> Cluster --> Host --> VM or three levels down. Traversing down the hierarchy means a positive depth parameter so:
At the cursor position (just to the right of the 3 you typed), type the following. Note the leading comma, the quotation marks and the exact case. The syntax may not seem intuitive but that is the way it needs to be written. It might be easiest to just highlight the text below and drag it to the HOL console.
, where = "contains CentOS"
Let's see how this super metric works for our vSphere datacenter.
Here you can see:
Assign the super metric to the Datacenter object type and enable it in the HOL Policy (not shown but follow the same process as our previous lessons).
Note that you can also assign the super metric to Host System and Cluster Compute Resource object types with good results since this formula will look down 1, 2 and 3 levels to find Virtual Machine object types and check the operating system property for CentOS.
Super metric formulas also support the use of the ternary operator, also known as if/then/else conditional logic. The format for the ternary operator here is expression_condition ? expression_if_true : expression_if_false
For this lesson, the assignment will be to define a super metric for host objects that has a value of zero when the sum of allocated memory for all VMs on that host is less than the total memory capacity of the host and one when the sum is greater than total memory capacity. In other words, zero if memory is not over-allocated. One if it is.
Instead of building this super metric formula from scratch, we will look at how formulas can be imported and exported for portability between different vRealize Operations instances. The super metric for this assignment has already been created and exported in JSON format. You are going to import it and then we will take a look at the syntax of the formula.
To import a saved or downloaded super metric:
Note that you could also export a super metric here that you have defined if you want to use it in a different vRealize Operations Instance.
The dialog box should open in the correct directory. If not, you will have to browse to the directory.
Notice that there is now a new super metric - Host Memory is Overallocated. Let's take a look at it.
As noted in the lesson introduction, the format for the ternary operator here is expression_condition ? expression_if_true : expression_if_false. In this case:
expression_condition evaluates whether the ratio of the total memory allocated to all VMs divided by the physical memory on the host is greater than 1
expression_if_true is the number 1 in this case. We want the super metric value to be 1 if the allocated VM memory is greater than the physical host memory.
expression_if_false is the number 0 in this case. We want the super metric value to be 0 if the allocated VM memory is not greater than the physical host memory.
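As a rough sketch only, the structure of such a formula is shown below. The metric names are placeholders, not the exact keys used in the imported super metric:
(sum({Virtual Machine: <allocated memory metric>, depth=1}) / {This Resource: <host memory capacity metric>}) > 1 ? 1 : 0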
In our lab environment, none of the four hosts has memory overallocated so the super metric will evaluate to zero for all of them. If you want to, you can open the vSphere client and change the memory allocation on one of the VMs to something that makes the total VM allocated memory on a host greater than 10 GB since that's the configured memory on our hosts. If you do try this, you will need to wait at least one collection/analytics cycle (set at one minute in the HOL pod) before you will see the super metric value change from zero to one.
All that's needed now is to enable the super metric in our policy and save it.
We just created several super metrics. Let's check to make sure they are being calculated on the appropriate objects in our environment.
Let's first take a look at the Compute Cluster A vSphere cluster's metrics.
To see the calculated value of the cluster super metric:
It is important to understand that super metric values will only be stored in the database from the time you create the metric and enable it in the appropriate policy.
We also have the ability to visualize what a super metric value would have been for time frames prior to when the metric was created.
Note that the historical super metric calculation will be limited to the time range available for the metric(s) that are used in the super metric formula. In this lab environment, you may see large gaps in the data because of when the environment was created and the fact that the lab pod sits dormant (powered off) until shortly before you logged in and took this lab. Also note that while we have set a non-standard data collection interval of one minute in this lab pod (see frequency of data points in the top graph), the historical preview uses the standard 5-minute interval for calculations.
If you are interested, you can select VM, host, datacenter and datastore objects in this environment and confirm that the super metrics we created and enabled for each of those object types are also being calculated.
In this module we discovered how to use the super metrics editor. We also walked through creating several super metrics in order to show many concepts that are important to understand when working with super metrics. Finally, we saw how to import and export super metrics and then verified that the metrics we created were being calculated as expected.
Congratulations on completing the lab module.
If you are looking for additional general information on vRealize Operations 8.1, try one of these:
From here you can:
Module 5 - Managing Users and Roles (30 minutes)
In this module we will take a deeper look at the part that users and roles play in vRealize Operations. We will look at the built-in role based access controls and how you can create additional roles with extremely granular control. We will look at how to grant access to objects within the environment, review dashboard sharing between user groups, and see how to manage content that is orphaned when the owner of that content is removed from vRealize Operations.
This lab environment is running two different instances of vRealize Operations. We have the different vRealize Operations instances for different use cases. The lab instances are:
In this lesson we will be using the live Instance of vRealize Operations, but we will need to login with the Local Admin account to modify the roles and permissions.
If you are already logged into the live (not historical) instance of vRealize Operations, please log out and follow the below instructions to log in as Local Admin.
If your browser isn't already open, launch Google Chrome
The browser Bookmarks Bar has links to the different instances of vRealize Operations that are running in the lab.
1. The Live instance of vRealize Operations has already been started for you.
VMware Identity Manager should be pre-selected as the identity source. However, we will be changing that so we can log in as a Local Admin.
You should be at the vRealize Operations Home screen and ready to start the module.
To ensure security of the objects in your vRealize Operations Manager instance, as a system administrator you can manage all aspects of user access control. You create user accounts, assign each user to be a member of one or more user groups, and assign roles to each user or user group to set their privileges.
One of the most requested features of vRealize Operations Manager is the ability to create user-specific content, for example a dashboard for leadership, where the user can see that content but no other information. This module will walk you through assigning content to a user, using the example of a user or group specific dashboard. We will create a user that has access to a single dashboard.
You can authenticate users in vRealize Operations Manager in several ways:
Most customers use VMware Identity Manager - this is the preferred single sign-on (SSO) source for VMware solutions, and enables simple SSO configuration and management between the vRealize solutions.
VMware Identity Manager (vIDM) is a service that extends on-premises directory infrastructure to provide a seamless Single Sign-On (SSO) experience. It is supported for all vRealize solutions, as well as many other VMware and non-VMware products.
VMware Identity Manager does not replace Active Directory, it integrates with it. Microsoft Active Directory integration will be configured in VMware Identity Manager instead of in the individual products.
Why not go directly to AD? Active Directory is an identity provider. vIDM is an identity service, which can connect to multiple identity providers, including AD.
To configure or view authentication sources within vRealize Operations Manager:
Additional sources can be added here, but this lab has only one - a vIDM identity source.
Here you can see the configuration settings that have been set up for the vIDM identity source in the lab. Further configuration is done on the vIDM appliance itself, including the AD or LDAP sources and users, groups and application entitlements. Once you have reviewed the options, cancel out of this screen:
Once the identity source is added, users and groups from that source can be granted access to vRealize Operation Manager. You are logged into vRealize Operations as holuser@corp.local. Let's take a quick look at this configuration:
holuser is an Active Directory (AD) user that has access to vRealize Operations Manager via the following configurations:
vRealize Operations Manager provides user group-based security. With group-based security, you control the access privileges for each user group. You add users to user groups, and assign access privileges to user groups. For example, one user group might be able to view only dashboards and another user group might be able to configure objects. This simplifies privilege management significantly.
You must have privileges to access specific features in the vRealize Operations Manager user interface. The roles associated with your user account determine the features you can access and the actions you can perform. Roles are covered in the next lesson.
In this module we are going to create a user with access to a single dashboard and no other area in the tool. Let's pretend we have a summer intern, whose role it will be to check the Operations Overview dashboard each day and verify the number of running VMs in the environment.
The steps for this lesson will be:
Now we will create a User Group. Again, for simplicity's sake, we will create a local group. This could have been done in Active Directory and then imported into vRealize Operations.
Create the new user group with the following properties:
From the Members tab, we could select local or directory-based users. However, we have not yet created our new intern user so we will skip this for now.
The Objects tab is where we assign roles and objects to the group. There are a number of pre-defined roles in vRealize Operations. You can edit, clone or delete these roles, or you can create new roles from scratch. For now we will use the pre-defined ReadOnly role for this group.
This is where you can assign permissions for this user group to have access to objects in vRealize Operations.
Notice in the left pane you will see all of the different hierarchies that are defined within vRealize Operations. If the environment had additional management packs installed (such as the Dell EMC storage management pack or the Blue Medora management pack for Microsoft SQL) you would see hierarchies for the objects that were discovered and imported into vRealize Operations from those management packs.
Note that if you are using vCenter as the authentication source for your vRealize Operations users, the object-level permissions set in vCenter for those users will override any object-level permissions set here for the vSphere hierarchies.
Here you can see that there is now an "interns" group. It has no members. It is a Local group. It has access to all objects in the inventory.
Now that we have a user group defined, let's add the user. Note that you could have defined the user first and assigned the user to a group at the time of group creation. In this case we have already created the group so we will assign this user to that group as part of the user creation process.
Create the new user with the following properties:
Since we have already created a user group for this user, we just need to make the group assignment here.
Note that we could directly assign a role and object-level access on the Objects tab of this wizard, but you will typically want to do that at a group level and then just assign the user to a group like we are doing here.
What you don't see here is that the user (and all users) is also assigned to a local group called "Everyone". That is important from a dashboard sharing perspective as we will see later.
Because you didn't assign any object permissions directly to this user, you will get a warning message about that. However, since you added them to the group that we created earlier, the user will inherit the role and object permissions from that group so you can ignore this warning.
An incognito window in the Chrome browser allows us to create a new session that does not have any browser session cookies from the existing login. We want to test logging in as the new user without having to log out of our main session.
In the new private browser window, log in as the user we just created.
The new user is able to log in with read-only access. You will find that the user can't create or edit content or do many of the things on the Administration page. Feel free to explore the UI and see what this user can and cannot do. We will look at further restricting this user's access in a little while.
When you are ready to proceed,
In this lesson we touched on the different authentication sources that are supported in vRealize Operations Manager. We also created a new user group and a new user then added the user to the group.
In the next lesson, to illustrate the power of role based access in vRealize Operations Manager, we will look at modifying some of the permissions for this group and user. We will also look at dashboard management, and how to share dashboards with users.
You must have privileges to access specific features in the vRealize Operations user interface. The roles associated with your user account determine the features you can access and the actions you can perform.
vRealize Operations provides several predefined roles to assign privileges to users. You can also create your own roles.
Each predefined role includes a set of privileges for users to perform create, read, update, or delete actions on components such as dashboards, reports, administration, capacity, policies, problems, symptoms, alerts, user account management, and adapters.
When we created the interns user group, we chose the pre-defined ReadOnly role. Here we will explore roles a bit more.
There are several pre-defined roles in vRealize Operations. In addition, you can create your own roles. Roles are an efficient way to configure a standard set of privileges to a user or group of users.
Predefined Roles:
For more detailed information on these roles and the privileges assigned to them, refer to the documentation for vRealize Operations.
To configure or view User Accounts and Roles within vRealize Operations:
We will use one of the pre-defined roles to explore how permissions are managed in vRealize Operations.
Now let's look at the ReadOnly role:
In the Assign Permissions To Role window,
In the last lesson, User Access and Privileges, we created the user group interns and assigned it the ReadOnly role, added the user intern1 and added the user to the group. Now we are going to create a new role that only has permissions to view dashboards within vRealize Operations and then change the interns user group to have this new role.
First we create a role and assign permissions to it. Make sure you are still on the Roles tab in Access control.
Your new role should be visible in the list of roles now. To see the Users, Groups and Permissions assigned to a role, highlight the role. The information in the bottom pane "Details for Role" will change to the context of the highlighted role. Let's assign permissions to your new role.
First, we must configure this role to be able to log into vRealize Operations Manager interactively. This permission is under Administration > Login Interactively.
Stay in this pane for the next step.
Now, we must configure this role to be able to see dashboards. This permission is under Dashboard > Dashboard Management > View Dashboards List.
Stay in this pane for the next step.
Next, we must allow this role to render views, or the user will not be able to see widgets in the dashboards. This permission is under Dashboards > Views Management > Render.
Stay in this pane for the next step.
Finally, we must allow this role to have access to the environment, in this case we only want the interns to have access to the vCenter Adapter Inventory Tree. This permission is under Environment > Inventory Trees > vCenter Adapter.
It may seem like there were a lot of steps here, but in reality we assigned only 5 permissions:
Login Interactively:
View Dashboards:
Search inventory tree:
We are now going to assign the new role to the group that we created earlier.
Verify that you are editing the correct group. We are not going to make any changes to the user information, so we can click Next to move to Assign Members and Permissions:
We are not going to make any changes to the Members of this group, but you can verify that intern1 is checked. Select the Objects tab to assign the new role:
Groups and users can be assigned to multiple roles. When that happens, the effective permissions will be the union of all selected role permissions. In this case, we don't want the group to keep the permissions from the ReadOnly role, we only want it to have the permissions from our new InternRole.
Now it all comes together! On the Objects tab, we are going to assign the role (the what) and objects (the where) to the user:
While still on the User Groups tab, select the interns group and verify that the Intern Role was applied and that the group now has permissions to objects:
An incognito window in the Chrome browser allows us to create a new session that does not have any browser session cookies from the existing login. We want to test logging in as the new user without having to log out of our main session.
In the new private browser window, log in as the user we just created.
If we configured the role correctly, the intern1 user should only see dashboards.
When you are ready to proceed,
In this lesson we discovered how granular we can get with the vRealize Operations permissions, using roles to determine which content and objects a user can access.
In the next lesson we will look at dashboard sharing, and how it can be used to target specific content to user groups.
In vRealize Operations you can share a dashboard with one or more user groups. When you share a dashboard, it becomes available to all of the users in that group. This is very useful when creating custom content for specific groups.
Most out-of-the-box dashboards are shared to the Everyone user group so all users see them when they log in to vRealize Operations. A very common use case, however, is that a vRealize Operations administrator will want some users to only see some dashboards but not others. Maybe they want to share high-level operational dashboards with the management team but not allow that team to see the detailed infrastructure troubleshooting dashboards. Another scenario might be to share application-specific dashboards with a line of business team but not give that team visibility to the infrastructure or to dashboards for other lines of business.
To share dashboards with all users, they should be shared to the Everyone group. To restrict access to dashboards, they should not be shared to the Everyone group. Out of the box, vRealize Operations has dozens of dashboards that are shared with the Everyone group, so if you decide that some logged-in users should not see all dashboards, the first step is to unshare all dashboards from the Everyone group. You will then need to selectively share various dashboards with different user groups. This is the process we will follow in this lesson.
Note: Some content packs will share their dashboards to the Everyone group on install. If you do decide to limit content to certain users, you will need to keep track of what you expect to be shared and to whom, and verify those sharing settings after adding additional content or upgrading vRealize Operations.
Before going down the path of configuring dashboard sharing, it is worth noting that in vRealize Operations versions 7 and later, you can share dashboards via direct link without requiring users to have an account or log into vRealize Operations. This makes for an easy way to give access to dashboards either via a URL or an iframe for embedding in a web page outside of vRealize Operations.
Let's take a quick look at how you can do that.
In the upper-right corner of the dashboard,
From this Share Dashboard pop-up you can:
For now we are not going to use this feature but it's worth knowing about to provide options other than having users log into vRealize Operations to see dashboards.
Before unsharing all of the dashboards from the Everyone group, you will first want to make sure that those dashboards are shared with your infrastructure team. Typically this would be an already-imported AD group but for simplicity, we are going to use the local user group called HOL Admin Group. Our holuser@corp.local and holadmin@corp.local users are members of this group already.
We are going to use our already logged in local admin user and assign all relevant dashboards to our HOL Admin Group and then remove all dashboards from the Everyone group. Finally, we will share a dashboard with the interns group and then verify that when the intern1 user logs in they will only see the dashboard that has been shared with their group.
Select the Dashboards tab and then select the Actions dropdown, and the Manage Dashboards menu item:
As mentioned above, if you just unshare all of the dashboards from the Everyone group, then nobody will see any dashboards when they log in. So you will first want to share the dashboards at least with your own team (HOL Admin Group in this case).
You should now have about the first 10 dashboards shared with the HOL Admin Group. Note that you can select more than 10 dashboards and attempt to share them but there is a limitation in the UI that won't let you share too many dashboards at a time this way. For the purposes of this exercise, you can just share that first group of dashboards with the HOL Admin Group. In a real-world environment you would want to scroll down and repeat the process of selecting about 10 dashboards at a time and dragging them to the group you want to share with until you have shared all of them.
Now that you have shared some (or all) of those dashboards with your infrastructure team's group, it's time to unshare all of them from the Everyone group.
Verify your work.
Note that we could have used this same Share Dashboards dialog box to share the individual dashboard(s) with our interns group but unless you are sharing multiple dashboards at the same time, it is much easier to do that sharing from the dashboards themselves.
Let's find a dashboard to share.
As we saw earlier, the first three options here allow you to share the dashboard with people without requiring them to log in to the vRealize Operations UI. In this case we want to share the dashboard with a user group in vRealize Operations so that when an associated user logs in, the dashboard will be visible to them.
Log out as the admin user so we can test shared dashboards:
Log into vRealize Operations as intern1:
Now when the intern1 user logs in, they only see the one dashboard that has been shared with them.
Log out as the intern1 user:
In this lesson we touched on dashboard sharing - how to share or unshare dashboards to a user group. Most customers use the sharing feature to share new, custom created content with groups of users. We are also starting to see more administrators using the feature to restrict content to users.
Dashboards and report schedules that are created by a user are owned by that user object in vRealize Operations. If a user leaves the organization or is otherwise removed from vRealize Operations that content becomes orphaned and can't be managed until it is assigned a new owner.
Orphaned content can only be managed by the local admin user.
If you are already logged in as the local admin, you can skip this step.
Let's view the orphaned content page.
Let's delete a user. The user Jason has left the company. As the vRealize Operations administrator you want to delete his account.
Confirm the user deletion by,
Let's assign Jason's dashboard ownership to our intern1 user.
Note that you can also take ownership of the dashboard as the admin user or you can discard (delete) the dashboard.
To assign ownership to the intern1 user:
The jason user’s dashboard no longer shows up as orphaned content and that dashboard has been reassigned to the user intern1. However, Jason also had scheduled a report to be run monthly.
Let's assume that we don't need to keep any of jason's report schedules.
You just went through the process of deleting a user in vRealize Operations. Since that user owned some content when the account was deleted (a dashboard and a report schedule), the content was "orphaned". You learned how to manage that orphaned content.
At times you might need to provide documentation as evidence of the sequence of activities that took place in your vRealize Operations Manager environment. Auditing allows you to view the users, objects, and information that is collected. To meet audit requirements, such as for business critical applications that contain sensitive data that must be protected, you can generate reports on the activities of your users, the privileges assigned to users to access objects, and the counts of objects and applications in your environment.
Auditing reports provide traceability of the objects and users in your environment.
There are 4 preconfigured audit reports in vRealize Operations Manager to provide documentation to support traceability of objects and users in your environment:
They can all be found under Administration > History > Audit. We are going to look at each of these reports in this lesson.
The User Activity Audit report shows user related logging activity such as login, actions run, changes made and log out.
The options available from the options bar (from left to right) are:
You can filter the log entries by various fields, including User ID, User Name, Auth Source, Session, Category and Message. The filter is in the top right of the window.
The User Permissions Audit report shows permissions assigned to a user.
The only option for this report is to download it in PDF or XLS format.
The report will show the following information about a user:
Scroll down to the bottom of the report to see the user that we created in the last lesson, intern1. Do the permissions look correct?
The System Audit report shows object types, metrics, super metrics, applications and custom groups in your environment, including counts of each. This report can help you to understand the scale of your environment.
The only option for this report is to download it in PDF or XLS format.
The System Component Audit report shows every component installed in the system, including version information.
The only option for this report is to download it in plain text format.
In this lesson we looked at the different audit reports included with vRealize Operations Manager, and examined them in the context of our new user. These reports provide an easy way to document the activities that have taken place in your environment and the permission levels of your users. Auditing reports provide traceability of the objects and users in your environment.
In this module we walked through the Users and Roles in vRealize Operations. Access Control is an important part of a robust operational environment, and you should now be comfortable using users, groups, roles and object permissions to make sure users only have access to the content and objects needed.
We then looked at dashboard sharing - how to control which users see which dashboards.
We saw how you can manage content that is orphaned when the owner of the content is removed from vRealize Operations.
Finally, we reviewed how to audit user actions and configured permissions as well as some system configurations.
Congratulations on completing the lab module.
If you are looking for additional general information on vRealize Operations 8.1, try one of these:
From here you can:
Module 6 - Managing vRealize Operations with PowerCLI (30 minutes)
VMware PowerCLI contains modules of cmdlets based on Microsoft PowerShell for automating vSphere, VMware Site Recovery Manager, vSphere Automation SDK, vRealize Automation, vRealize Operations, and VMware Horizon administration. VMware PowerCLI provides a PowerShell interface to the VMware product APIs.
The module contains the following lessons:
Microsoft PowerShell Basics:
PowerCLI is based on Microsoft PowerShell and uses the PowerShell basic syntax and concepts.
Microsoft PowerShell is both a command-line and scripting environment, designed for Windows. It uses the .NET object model and provides administrators with system administration and automation capabilities. To work with PowerShell, you run commands, named cmdlets.
PowerCLI Concepts:
PowerCLI cmdlets are created to automate VMware environments administration and to introduce some specific features in addition to the PowerShell concepts.
Launching VMware Modules:
Before executing commands in a script or interacting at the PowerShell command prompt, we must load the VMware modules. The preferred method to import these modules is to simply execute a 'Connect-..' command. By connecting to a VMware Infrastructure (VI) server, PowerShell automatically imports the relevant PowerCLI modules for that VI provider.
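For example, the Connect-VIServer command used later in this module both opens the connection and triggers the automatic import of the relevant modules:
Connect-VIServer -Server vcsa-01a -User administrator@corp.local -Password VMware1!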
Optional Areas to Execute Commands:
When executing PowerCLI commands, the user has a number of options of where they might want to execute the command(s) to affect the environment. The user may opt to execute a single command in an ad-hoc PowerShell window, use the ISE to execute commands so that variables may be stored, or they may choose to execute commands in an automated approach using scripts. Before scripts can be automated, it's important to first test and validate all commands in your environment to ensure proper compatibility and functionality. In this module we will help you through various steps of testing PowerCLI commands in both the native PowerShell window and with the ISE. We will only use the ISE when we expect to store a value in a variable that we plan to use in a subsequent command.
In this lesson you will learn:
In vSphere 6.7 you can install PowerCLI by running a Windows PowerShell command. You can install all official modules with a single command, or install modules individually.
The PowerCLI modules are available on the PowerShell Gallery website. When you run Install-Module from the Windows PowerShell prompt, the command downloads and installs the specified module. For a list of available PowerCLI modules, see the PowerShell Gallery website.
There is no need to follow this procedure since it was already done in the lab for you.
It is possible to get help on how to use any cmdlet and its parameters in Windows PowerShell using a cmdlet called "Get-Help". Since our focus is on VMware PowerCLI, let's do this using the cmdlet "Get-VM" as the parameter.
Get-Help Get-VM
and hit enter.
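If you also want to see usage examples for a cmdlet, Get-Help supports the standard -Examples switch:
Get-Help Get-VM -Examples
and hit Enter.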
get-module
to check what modules are loaded and their respective versions.
VMware PowerCLI includes a module for interacting with vRealize Operations. Let's see what are the available cmdlets to manage a vRealize Operations environment.
Get-Command -Module VMware.VimAutomation.vROps
and hit Enter. You can see that there is a function called "Get-vROpsCommand" in the listed cmdlets. That function has the same effect as the previous command and simplifies the listing of available vRealize Operations cmdlets, so you can use it at any time without having to write the whole syntax over and over.
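If you want to try that shortcut now, type:
Get-vROpsCommand
and hit Enter. It returns the same list of vRealize Operations cmdlets.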
The VMware PowerCLI modules provide more than 500 cmdlets for managing vSphere, SRM, vRA, and vROps. You can view the available PowerCLI cmdlets by typing "Get-VICommand". This will list all PowerCLI commands. As the list is quite large, you may want to narrow it down to something more specific, for example to list the commands related to VMs:
Get-VICommand *VM
and hit Enter. Please note that all of the Windows PowerShell commands and parameters can be auto-completed with the TAB key. Just start typing the first letters of the command and/or parameter and press the TAB key for auto-completion. Also note that when there is more than one possible command, you can press TAB again to cycle through the available commands.
Up to this point you have been using the basic PowerShell command prompt. There is also the PowerShell ISE Script Pane. The advantage of using the ISE Script Pane is the ability to store variables and run multiple commands; this is an important capability when testing and validating a script for automation purposes. When using the ISE, a script can be executed by pressing the F5 key or by clicking on the "Run Script (F5)" icon.
Once the Windows PowerShell ISE is launched you will see two panes by default, the Script Pane on the top and the Command Pane on the bottom. You can choose whether to show the Command Add-on window or not.
You can change how the panes are displayed in the Windows PowerShell ISE interface. You can resize the panes as well as hide the Command Add-on window to make the interface look a bit cleaner. Feel free to customize it as much as you want until you are comfortable with it.
In this lesson you will learn how to connect to a vRealize Operations server using VMware PowerCLI.
You will be using a vRealize Operations server with a database containing historical data on it, also known as vROps "HVM" mode.
You might want to increase the size of the PowerShell ISE by dragging the corners of the window or clicking the maximize button in the upper right-hand corner of the ISE window.
Connect-OMServer -Server 192.168.110.71 -User admin -Password VMware1!
to connect to the historical instance of vRealize Operations.
In this lesson you will learn:
Get-OMAlert -Status Active -Criticality Critical -Impact Health
and hit Enter. NOTE: it will take a few seconds for the full list to populate.
If you take a detailed look at the previous command's result, you will see that there are columns for Type and Subtype, which can be used as input parameters for the Get-OMAlert cmdlet (when developing a script). There are also cmdlets provided for those specific parameters (Get-OMAlertType and Get-OMAlertSubtype). Using these cmdlets without input parameters returns a list of all valid types and subtypes on the server.
Get-OMAlertType
and press enter.
There are subtypes of alerts for the type Virtualization/Hypervisor Alerts. Let's get that list.
Get-OMAlertSubType -AlertType 'Virtualization/Hypervisor Alerts'
and hit enter.
For the remainder of this lesson you will work with the Live Instance of vRealize Operations. Let's connect to that server.
Connect-OMServer -Server vr-operations -User admin -Password VMware1!
Let's do a search on active alerts that contain snapshot in the name. This is a quick method to find all the old snapshots so we can then decide if we need to delete them.
Get-OMAlert -Status Active -Criticality Warning -Name *snapshot* -Resource *moad*
to get the snapshot list for all VMs with 'moad' in their name.
Get-OMAlert -Status Active -Criticality Warning -Name *snapshot* -Resource Moad-Web | Format-List
Notice we have all the details on the Alert. The next step is to connect to vCenter and then review the Snapshot information.
Connect-VIServer -Server vcsa-01a -User administrator@corp.local -Password VMware1!
Get-Snapshot -VM moad-web
Get-Snapshot -VM moad-web -Name Before* | Format-list
Now we have all the details on the snapshot and could use Remove-Snapshot with this data to remove it, but using stored variables is much simpler.
Now let's write a script that stores the snapshot we want to remove and then shows its details in a formatted list.
$snapshot = Get-Snapshot -VM moad-web -Name Before*
$snapshot | Format-list
Note that we used the "Format-List" cmdlet after a pipe to indicate to PowerShell that we want to see the list of all properties of this particular snapshot.
Remove-Snapshot $snapshot
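Note that Remove-Snapshot will prompt for confirmation before deleting the snapshot. If you were automating this in a script, you could suppress the prompt with the standard -Confirm parameter, for example:
Remove-Snapshot $snapshot -Confirm:$false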
That concludes this lesson on how to use PowerCLI to work with vRealize Operations Alerts. In the next lesson we'll look at how to work with vRealize Operations Statistics.
In this lesson you will learn how to retrieve metric data (or statistics) from vRealize Operations using VMware PowerCLI.
Until now we were only working with the cmdlets from the vRealize Operations module. We are now going to work with PowerCLI VI module cmdlets to perform operations on a VM. We will use the Live version of vRealize Operations, so we will need to connect to the vCenter Server that exists in the lab environment.
vRealize Operations server:
Server: vr-operations
User: admin
Password: VMware1!
We will now connect to the live instance of vRealize Operations Manager. NOTE, make sure to delete anything that you may still have in the Script Pane from the previous lesson.
Connect-OMServer -Server vr-operations -User admin -Password VMware1!
Many times customers will ask if they can export metric data from vRealize Operations for usage in other analytical tools or reports. While there are other methods, the PowerCLI module offers a really elegant way to extract that data.
The cmdlet Get-OMStat will provide the metric data output but it is useful to review the cmdlet Get-OMStatKey first. vRealize Operations stores hundreds of metrics for CPU, memory, disk, networking and other items. Each of these metrics is contained in a construct called statKeys. To retrieve these statKeys programmatically you need to use the cmdlet Get-OMStatKey.
For the remaining steps in this lesson we will still be using the VM named "web-01a".
(note that the VM name must be enclosed in double quotes since it is a string)
$vmresource = "web-01a"
Get-OMStatKey -Name cpu* -Resource $vmresource
and press ENTER
$statkey = Get-OMStatKey -Name "cpu|workload" -Resource $vmresource
$statkey | Format-List
and press ENTER
In this example we are going to list the "cpu|workload" metric average by minute for the last hour. Since this is a live instance of vRealize Operations, we don't have much data to work with over a broader time range.
Get-OMStat -Resource $vmresource -Key $statkey -From ([DateTime]::Now).AddHours(-1) -IntervalType Minutes -IntervalCount 1 -RollupType Avg
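If you want to take this data into another tool for analysis or reporting, as mentioned at the start of this lesson, you can pipe the same command to the standard Export-Csv cmdlet. The file path below is just an example; adjust it for your environment:
Get-OMStat -Resource $vmresource -Key $statkey -From ([DateTime]::Now).AddHours(-1) -IntervalType Minutes -IntervalCount 1 -RollupType Avg | Export-Csv -Path C:\Temp\web-01a-cpu-workload.csv -NoTypeInformation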
There is a lot more capability than we have seen here, but hopefully this gives you a good start. For customers who have deep expertise in PowerShell and PowerCLI the vRealize Operations integration can be a huge help.
This lab environment is running two different instances of vRealize Operations. We have the different vRealize Operations instances for different use cases. The lab instances are:
In this lesson we will be using the live Instance of vRealize Operations.
If you are not currently logged into any instance of vRealize Operations, continue to the next page, but if you are already logged into the live (not historical) instance of vRealize Operations, click here to skip ahead.
If your browser isn't already open, launch Google Chrome
The browser Bookmarks Bar has links to the different instances of vRealize Operations that are running in the lab.
1. The Live instance of vRealize Operations has already been started for you.
vRealize Operations is integrated with VMware Identity Manager which we will use for user authentication in this lab.
VMware Identity Manager should be pre-selected as the identity source. However, if it is not you will choose it.
For the Live instance of vRealize Operations, the default username and password should already be entered. However, if needed type them in.
username: holadmin
password: VMware1!
You should be at the vRealize Operations Home screen and ready to start the module.
In this lesson you will learn:
To assist with the process of identifying and interacting with Operations Manager alerts via PowerShell, we will modify an alert symptom definition for a 70% CPU load and add a recommendation which will be included when the alert is generated. In order to trigger the alert, there is a CPU load script on the server 'app-01a'. We will connect to this server using PuTTY and issue the commands to start the load script after modifying the symptom and the alert.
We will now redirect /dev/zero to /dev/null to generate CPU load on this VM.
cat /dev/zero > /dev/null
and press the Enter key to start the CPU load. Leave this PuTTY window open; we'll come back and stop this script later in the lesson.
There are 2 lines to type into the script pane. The first logs in to the vRealize Operations server, just in case the connection is not active. The second line looks for critical active alerts on app-01a.
Connect-OMserver -server vr-operations -user admin -password VMware1!
For line 2 type: Get-OMAlert -Status Active -Criticality critical -Resource app-01a
Note: It takes a couple of minutes for the alert to appear, so you may have to rerun the script before it shows up as expected. Now to review the recommendations.
Included with the vRealize Operations alerts are recommendations that guide you to possible solutions. Earlier in the lesson you added a second recommendation to "Add more CPU". To see the recommendations you need to retrieve them from the alert you found on the last page.
Get-OMRecommendation -Alert "_App Server CPU Usage" | Format-List
and press Enter
There are two lines of script needed to change the number of CPUs for app-01a. The first is to connect to the vCenter server and the second is the PowerShell command to make the change.
Connect-VIServer -Server vcsa-01a -User administrator@corp.local -Password VMware1!
Get-VM -name app-01a | set-VM -NumCpu 2
Now we need to stop the CPU load script we have running on the app-01a VM. Return to your open PuTTY window. Closing this PuTTY session will end the CPU load script.
Now let's return to the PowerShell ISE to check for alerts.
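If the CPU change took effect and the load script has been stopped, re-running the same check from earlier in the lesson should eventually return no active critical alerts for app-01a (allow a collection cycle or two for the alert to cancel):
Get-OMAlert -Status Active -Criticality critical -Resource app-01a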
This completes this lesson and module. We hope that you were able to learn some new skills around script writing and automation of vRealize Operations alerts, definitions, and recommendations.
In this module we learned the basics of the Windows PowerShell and VMware PowerCLI and also:
Congratulations on completing the lab module.
If you are looking for additional general information on vRealize Operations 8.1, try one of these:
From here you can:
Module 7 - Assess Your vSphere Configuration for Compliance With Industry or Custom Standards (30 minutes)
Compliance is about ensuring that objects in your environment meet industrial, governmental, regulatory, or internal standards. The standards are made up of rules about how objects should be configured to comply with best practices and to avoid security threats.
In this Module we will step through the complete configuration and use-cases to ensure your VMware environment is always compliant.
This lab environment is running two different instances of vRealize Operations. We have the different vRealize Operations instances for different use cases. The lab instances are:
In this lesson we will be using the live Instance of vRealize Operations.
If you are not currently logged into any instance of vRealize Operations, continue to the next page, but if you are already logged into the live (not historical) instance of vRealize Operations, click here to skip ahead.
If your browser isn't already open, launch Google Chrome
The browser Bookmarks Bar has links to the different instances of vRealize Operations that are running in the lab.
1. The Live instance of vRealize Operations has already been started for you.
vRealize Operations is integrated with VMware Identity Manager which we will use for user authentication in this lab.
VMware Identity Manager should be pre-selected as the identity source. However, if it is not you will choose it.
For the Live instance of vRealize Operations, the default username and password should already be entered. However, if needed type them in.
username: holadmin
password: VMware1!
You should be at the vRealize Operations Home screen and ready to start the module.
In this module we will turn on vSphere hardening guidelines that can be enabled with your instance of vRealize Operations for free! We will step you through a scenario of a company first turning on vSphere level compliance and understanding the impact of misconfiguration(s) inside their environment.
Notice we now get a message saying the initial assessment is running. This may take a few minutes to run and will depend on the size of your environment. We're going to go make a few changes to this policy, so we'll come back to this later in the lesson.
Notice now our initial assessment has completed and we have 26 items in our HOL environment that are flagged as non-compliant. NOTE: Assessment takes about 5 minutes to run, if it still says 'Running initial assessment', you may have to wait a few minutes for it to finish. Hit refresh in the top right corner of the vRealize Operations screen to refresh the compliance status.
We can again see that we have 26 assets that do not meet the vSphere Security Configuration Guide. The red number is the number that are out of compliance, and the other number is the total number of objects.
By using the vRealize Operations Management pack for vRealize Orchestrator, we're actually able to modify the configuration templates. We can then modify the items we want to check as part of this vSphere Host Security Configuration Guide.
We won't make changes to the policy in this lab, but let's take a look at how to modify it.
https://vr-automation.corp.local/orchestration-ui
and hit Enter.
Now notice we have a recommendation and an action on this alert to APPLY HOST SECURITY CONFIGURATION RULES.
This concludes this lesson on applying vSphere security configuration guides.
In the next lesson we'll walk through creating a custom Security Policy.
We are continuing to enhance the configuration management and regulatory compliance capabilities within vRealize Operations that were introduced in previous releases. We can now manage the configuration of the entire SDDC stack, including vSphere, NSX-T and vSAN. In addition to common compliance templates like PCI, HIPAA, DISA, ISO, CIS, and FISMA, we can also create our own custom compliance standards and activate automated drift remediation with out-of-the-box workflows using the VMware vRealize Orchestrator integration. We can also monitor compliance for VMs in VMware Cloud on AWS. We can import custom compliance standards and can now export them as well. In this module we'll walk through some of these compliance capabilities and create a new custom compliance standard.
Next on the compliance page we have Custom Benchmarks. This section allows us to create our own custom compliance policies.
Now our ESXi Hosts Only Policy should be done with its initial assessment. If it's not, you may need to click refresh in the top corner to update the status.
Now we can see the details of our policy and we see that all 5 of our hosts are non-compliant.
This concludes the lesson on custom and regulatory benchmarks.
In this module we walked through compliance remediation that you can automate inside your organization on your VMware infrastructure. This is a key component of turning your vRealize Operations environment from a read-only solution into one that takes automated remediation actions to ensure you stay compliant and free of issues.
Congratulations on completing the lab module.
If you are looking for additional general information on vRealize Operations 8.1, try one of these:
From here you can:
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-2101-06-CMP
Version: 20200803-144554