VMware Hands-on Labs - HOL-Lab-Development-Guide


vPod Development Guide

Welcome and Overview


Welcome to the VMware Hands-On Lab program.

This is your guide to developing a VMware Hands-On Lab. We will keep the document concise and easy to read in small chunks. This first article is a quick overview of the process and includes links to important documents.

A VMware Hands-On Lab begins with a compelling story showing the technical and business value of a solution leveraging VMware software. The Content Lead, Lab Principal and Lead Lab Captain submit the Development Plan in the Lab Roadmap on SharePoint (internal to VMware employees) where all design documents related to a lab are archived.  https://vmshare.vmware.com/marketing/VMWorld/hol/default.aspx

The Content Lead, Lab Principal and Lead Lab Captain develop the storyboard for the lab in ScreenSteps.

The Lead Lab Captain and Lab Principal submit a completed Lab Configuration Document (LCD) to the Lab Roadmap on SharePoint.  The LCD template documents are in the same SharePoint location as above. Choose the LCD that most closely aligns with your starting vPod: Single Site, Dual Site, SDDC or MBL. There is also a Hybrid LCD, used if you have received permission from the Core Team for your vPod to access external resources. Those external resources must be identified so that proper firewall rules can be constructed.  VLP also provides a callback notification mechanism that can assist in the creation of the external sandbox for a user's session. There is also a nested VM tab to complete.  Please note the virtual ESXi host that will host nested VMs and the expected configuration of those nested VMs.  Please do not oversubscribe virtual ESXi hosts.

To begin vPod development in the HOL-Dev environment, you must complete your Development Plan, Storyboard and LCD.  If requested by your Lab Principal, you must also pass a short test covering material in this HOL Lab Development Guide.

VMware Hands-On Labs consist of a lab manual and a virtual run-time environment. Both components are presented in a modern web browser during the event. The lab manual can access Internet resources such as YouTube videos, but the virtual run-time environment is isolated in most cases. External access to the virtual environment is accomplished using a VM console proxy connection embedded in a client browser control. Please review the slide at the end of this article. Note that your lab can use more than one VM console; you may specify which VM is the primary console and should be displayed first.  Also note that the primary console is usually the Main Console (controlcenter), but an alternate VM can be used for the primary desktop.

The lab's virtual run-time environment is referred to as a vPod. Generally, a vPod includes virtual ESXi hosts that run nested on physical ESXi hosts in VMware's vCloud Director virtual data centers. For more information on getting started building the vPod for your lab, please review the other articles in this series, including vPod Configuration and Building Blocks: Creating your vPod.

The lab manual is created using ScreenSteps. The lab manual is exported and then imported to the Hands-on Lab portal. Please alert your Lab Principal when lab manual updates are needed. Updates are not automatically published to the portal. During VMworld, lab manual updates can only be done when VMworld HOL is closed.

Milestones for vPod Development can be reviewed on the HOL SharePoint site.  https://vmshare.vmware.com/marketing/VMWorld/hol/default.aspx

Functional testing for your lab will commence in the HOL portal immediately following the draft deadline for your lab, which in most cases is Release Window 1. Release Window 2 is permitted by Core Team approval only if additional time is needed for vPod completion. Please line up your peer reviewers; the Core Team will get them access to your lab and manual in the VLP portal using a restricted Pre-Release catalog. Please do NOT wait for a test event to get testers engaged and providing feedback.

Key milestones for vPod Development:

1. Functional Test Candidate vPod. This vPod contains all of your VMs and nearly all of your content, and it is functional. We will replicate this vPod to all available clouds. Do a round-trip smoke test from _WorkInProgress and then move (do NOT copy) your vPod to HOL-Staging. Only one of your vPods should be in HOL-Staging at any time; do NOT keep a copy of this vPod in _WorkInProgress. The Core Team will publish your one and only Functional Test Candidate vPod, along with the lab manual communicated by the Lead Lab Captain, to a restricted Pre-Release catalog for functional testing among your peers. Get them testing now and do not wait for a test event.

2. Peer Review Checklist. Your peers will take your lab and review your manual to validate that the lab works and the manual is acceptable. Feedback from your peers will be sent to you so you can incorporate changes in your vPod and lab manual.  They will complete the Peer Review Checklist.  Here is a link to the checklist: https://vmshare.vmware.com/marketing/VMWorld/hol/_layouts/xlviewer.aspx?id=/marketing/VMWorld/hol/Shared%20Documents/2015/Templates%20for%20Development%20Cycle%202016/HOLPeerReview-Checklist.xlsx

3. Release Candidate vPod. This vPod is your final version. Please move (do NOT copy) your final vPod to the HOL-Staging catalog. No further edits are allowed. This version will be published for user-attended scale testing events and VMworld.  You will submit a vPod_Checklist which verifies that you have met the requirements in your vPod.  The Core Team will review your vPod and vPod_Checklist.  This is required in order for your lab to be available in the HOL Catalog. There is a tab in the vPod_Checklist to provide all the user names and passwords for your vPod.  There is also a tab for you to provide the shutdown instructions for your vPod.  The Lead Lab Captain and Lab Principal need to complete the vPod_Checklist.  Please be sure to include your names. https://vmshare.vmware.com/marketing/VMWorld/hol/_layouts/xlviewer.aspx?id=/marketing/VMWorld/hol/Shared%20Documents/2015/Templates%20for%20Development%20Cycle%202016/vPod_Checklist-2015.xlsx

4. User-attended Scale Testing events. There are at least three user-attended scale testing events leading up to VMworld. This is an opportunity to test your lab at scale prior to VMworld and for your lab to be reviewed by multiple users to get feedback. These events primarily stress test the back-end infrastructure prior to VMworld. Functional testing of your vPod and lab manual is primarily completed at the Peer Review Checklist milestone.

Labs must be included in at least TWO user-attended Scale Tests prior to a Tier 1 event such as VMworld.  The Lab Policy document on SharePoint further explains the operational restrictions and lab availability policies during the event if this mandate is not met. https://vmshare.vmware.com/marketing/VMWorld/hol/Shared%20Documents/2015/VMworld%20US/Program%20Management/LabPolicyHOL_20140711.docx

In addition to providing user support for your lab at VMworld, please be aware that you are expected to provide on-going support and maintenance of the vPod and lab manual for one year past VMworld.

We appreciate your dedication to this effort and know that together we will present a high-quality and compelling HOL experience.

HOL Core Team

 


 

Hands-on Lab Architecture

 

 

Hands-on Labs Limitations and Guidelines


Below are some limitations and guidelines on what cannot be shown in a Hands-on Lab.  Where applicable, alternative methods are given.


 

Hands-on Labs (HOLs) are not well suited to show any of the following

 

 

 

Tips for what works in labs

 

 

HOL Development FAQ


Let's review some of the more common questions and, we hope, their answers.


 

I need to submit a Lab Configuration Document but how do I fill it out?

The LCD for your lab needs to be approved by the Core Team before you gain access to HOL-Dev. The Core Team needs to review your plans for building the vPod for your lab. This is not a build document (although that would be nice) but rather your best estimate of the vCPU, memory, storage and networking needed for your lab. It is fine to use more resources during the build to make the build faster and easier. However, you need to reduce the resources to match your initial LCD, or submit an updated as-built LCD based on performance testing in HOL-Dev. The LCD is not a negotiation: do not pad your request in the hope of being approved for more than you actually require. If testing shows that more resources are required to provide a reasonable lab experience, contact the Core Team with an updated LCD and the reasons for requesting more resources. Everyone wants to right-size the vPod for a lab use case.

The first step is to choose the LCD that matches your likely vPod: Single Site (most common basic vSphere), Dual Site (advanced lab), SDDC (includes vRA, NSX, Log Insight and VROPs), MBL (includes Horizon products) or Hybrid (if external resources are required).

Choose the Layer 1 tab and update your lab SKU and title.  Save the LCD as 18XX-LCD-v1.xlsm, where XX is your lab number.

The next step is to list the Layer 1 resources that you believe will be configured in the final vPod. Those resources are added up on the Summary page and are flagged based on Core Team limits.

Finally, list the Layer 2 VMs you will need on the "Nested VMs" tab. Note which module(s) use each vVM and which vESXi host runs it. Try not to overcommit vESXi hosts much (if at all); nested ESXi hosts do not behave like physical hosts in this regard. The total number of nested VMs is reflected on the Summary page.

The Hybrid LCD includes two more tabs which concern network destinations and external sandbox calls from VLP. If your lab is an approved hybrid lab, expect more training and discussion on this information.
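As an illustrative sanity check before filling in the Nested VMs tab, you can tally the vVM resources assigned to each vESXi host and flag any over-commitment. This is a hypothetical Python sketch, not an official Core Team tool; the host capacities and vVM figures below are made up for illustration.

```python
# Hypothetical sketch: tally nested-VM resources per vESXi host so you can
# spot over-commitment before submitting the LCD. All numbers are invented.

def overcommitted_hosts(hosts, nested_vms):
    """Return (host, total_vcpu, total_ram_gb) for hosts whose assigned
    vVMs exceed the host's vCPU or RAM capacity."""
    flagged = []
    for name, (host_cpu, host_ram) in hosts.items():
        vms = [vm for vm in nested_vms if vm["host"] == name]
        cpu = sum(vm["vcpu"] for vm in vms)
        ram = sum(vm["ram_gb"] for vm in vms)
        if cpu > host_cpu or ram > host_ram:
            flagged.append((name, cpu, ram))
    return flagged

# Illustrative capacities: (vCPU, RAM in GB) per vESXi host.
hosts = {"esx-01a": (4, 16), "esx-02a": (4, 16)}
nested_vms = [
    {"host": "esx-01a", "vcpu": 2, "ram_gb": 8},
    {"host": "esx-01a", "vcpu": 4, "ram_gb": 12},  # pushes esx-01a over
    {"host": "esx-02a", "vcpu": 2, "ram_gb": 4},
]
print(overcommitted_hosts(hosts, nested_vms))  # → [('esx-01a', 6, 20)]
```

The same arithmetic happens on the LCD Summary page; running it yourself first avoids a round of Core Team review feedback.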

 

 

How do I log in and change my password in vCloud Director?

We use vCloud Director local accounts with a default initial password which you may change in the "Preferences" area (upper right corner). Simply log out and log back in after changing your password.

While logged into VMware Workspace ONE, use this link to reset your HOL-Dev password: https://web.hol.vmware.com/hol-dev-mgmt/MainPage.aspx

Note that if your account does not exist or is disabled, you will need to work with the Core Team to gain access to HOL-Dev.

 

 

How do I license my vPod?

Product licenses for HOL are available in a spreadsheet on the HOL SharePoint site or from your Lab Principal. All products must be licensed through the end of the year following VMworld; labs break when licenses expire.  No evaluation licenses, please, including for vRealize Orchestrator (see below). Do NOT use internal licenses, permanent licenses or other non-HOL licenses.

 

 

Licenses are visible in the HOL.  Is this a problem?

This is not a problem.  VMware Legal has accepted HOL as a worthwhile program and an acceptable risk for the use of expiring licenses. Unfortunately, there is no way today to hide licenses in many VMware products. While it is possible to partially hide the vSphere host license using OEM license keys, no other VMware products provide this feature. Please include the description "FOR VMWARE HANDS-ON LAB USE ONLY" for each license you use in your vPod.

 

 

Do I need to license vRealize Orchestrator?

Yes, if Orchestrator is used in your vPod. vRO assumes a 90-day evaluation license during initial installation. The best practice is to copy and paste the vCenter license into the vRO configuration web site (port 8283) for independent licensing. Verify that vRO is licensed through the end of the year following VMworld.  Your lab will break if vRO is not licensed properly. Note that the embedded vRO included with vRA shares the license with vRA so no further action is required.

 

 

What are the best practices for developing Hands-on Labs?

Here are a few best practices:

Make deadlines and keep them. The dates for VMworld will not be adjusted.  Your lab will not be available if it is not ready and thoroughly tested.

Leverage peer reviews and user testing feedback to correct vPod issues, lab manual typos and so on. Do this in the HOL tenant "Release Window" catalogs.  Assign people to review specific modules of your lab in the VLP environment and provide good-quality feedback.  Quality is better than quantity in terms of feedback, so this should occur BEFORE the large user-attended tests.  The larger test events primarily look for issues when different back-end clouds with dissimilar hardware are used, and for issues that arise at scale. DO NOT WAIT FOR THE TEST EVENT TO HAVE YOUR LAB TESTED. Test early and often.

Make the vPods as small as possible.

Leverage the C:\HOL\LabStartup.ps1 script to ensure reliable startup rather than relying on extensive vApp start-up timings. The LabStartup script will notify the Core Team if there is an issue during startup; in many cases, we can correct the issue before a student begins. The script is responsible for indicating that the lab is ready.  Remember that the ready state is only as good as what you actually test.

Use different screen colors to differentiate desktops in the lab.

Preserve the look and feel of the Main Console.  Do not remove items from the Quick Launch menu. Do not change the characteristics of Windows Explorer (for example, file extensions remain shown for known file types). Do not change the characteristics of the Command window; the Core Team sets the background for the Command window to white, so please do not change it to black.

Use RDP/PCoIP/SSH (PuTTY) to connect to other VMs and vVMs during your lab where possible, instead of using a vSphere VM console. This approach allows copy/paste and avoids mouse/cursor lock (ALT-ENTER to release).

Enable SSH public key authentication for Linux remote logins to ALL machines in your vPod. Include a PuTTY session for every Linux machine in your vPod, whether or not it is needed in the lab. This helps the Core Team when remediation is needed, and it helps your colleagues when using the lab in OneCloud. It also eliminates the need to document the password. Use the "account@" convention in the PuTTY session name so it is clear which account to use when logging in. The exceptions to this PuTTY session rule are the vPodRouterHOL, the vRNI appliance and any other appliance whose owner objects to simple command-line access from the Main Console.

Enable auto-login on Windows 2008 and Windows 7 VMs/vVMs so that skipRestart runs automatically when the machine starts. Typically this dismisses the Windows restart dialog before a user connects. If you build a Windows 2008 or Windows 7 machine, include skipRestart to avoid issues with dissimilar back-end hardware. It is better to use the machines built by the Core Team, which already include skipRestart and auto-login.

Minimize username/password key-ins using shortcut properties, command line tricks and browser-supplied field values including Windows pass-through credentials in the vSphere Web Client. Be sure the CORP\Administrator user has sufficient rights in vCenter to complete your lab. This includes access to NSX functions.

Use your Color Group's Staging vPod as a staging area for downloading and then uploading OVFs and software. This approach avoids upload timeouts over your local client network. The Staging vPod has a scratch disk on the Main Console for more space, plus more CPU and memory for better productivity.  The Staging vPod includes InfraRecorder (http://infrarecorder.org/) for creating media ISOs to upload to the HOL-Dev-Resources catalog. The Staging vPod also includes the GlobalProtect VPN on the Windows 8.1 VM. Use the RDP shortcut on the Main Console to connect to the Windows 8.1 VM and then enable the VPN using your RSA credentials. This is an unmanaged GlobalProtect VPN machine.
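The LabStartup idea above — poll each service until it responds, and only then declare the lab ready — can be sketched in Python. The real script is PowerShell at C:\HOL\LabStartup.ps1; the check callables, names and timings below are illustrative stand-ins, not the actual HOL checks.

```python
# Illustrative sketch of the LabStartup readiness pattern: retry every
# check until it passes or the time budget runs out, and report "Ready"
# only when all checks have passed.
import time

def wait_until_ready(checks, timeout_s=30, interval_s=1):
    """checks: dict of name -> zero-arg callable returning True when healthy."""
    deadline = time.time() + timeout_s
    pending = dict(checks)
    while pending and time.time() < deadline:
        for name, check in list(pending.items()):
            if check():
                del pending[name]          # service is up; stop polling it
        if pending:
            time.sleep(interval_s)
    return "Ready" if not pending else "Failed: " + ", ".join(sorted(pending))

attempts = {"n": 0}
def flaky_service():                        # stand-in: succeeds on the third poll
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until_ready({"vcenter": flaky_service, "web": lambda: True},
                       timeout_s=10, interval_s=0))  # → Ready
```

The key point mirrors the guidance above: the "Ready" state is only as meaningful as the checks you actually run.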

 

 

Why does it work in HOL-Dev but not during the large test event?

The point of coordinated test events is to test your labs at scale (including contention) and when running on different clouds.

The short answer is that failures are caused by "dissimilar hardware" and sometimes "resource contention".

Hardware differences cause 3rd-party appliances to see a new disk ID and/or UUID. This can cause licensing issues or even boot failure. Always run 3rd-party appliances as Layer 2 VMs and include the uuid.action = "keep" setting in the VMX file for the vVM.  Windows 2008 and Windows 7 machines will prompt for a reboot when processor differences are detected, so always include the skipRestart utility. The templates provided by the Core Team include this utility.

Windows 2012 using MSDN licensing will sometimes show an "Activate Windows" watermark on the desktop. This is purely cosmetic and is a consequence of Windows being isolated from the Microsoft licensing sites on the Internet.

Resource contention can cause services to fail during vPod boot-up. Sometimes this is simply because they fail to start (timeouts); other times it is because a dependent service is not available when needed. The best approach is to use the LabStartup script to start services. The Core Team verifies vApp startup timings in the base templates, so it is best to leave those "as is". Use the Connect-Restart functions for vCenter and for Horizon View Connection Servers. These functions will attempt to remediate issues during vPod boot.

 

 

Does the Main Console have to be the only desktop for the environment?

For HOL, you can use a different desktop, and you can expose multiple VM consoles. However, for other use cases at VMware, due to the routing rules in the vPodRouter, labs will be taken from the Main Console by default.

 

 

How do I get licenses for products in my HOL?

Most licenses are available in the licensing spreadsheet on SharePoint. If a license that you need is not there, just ask Bill Call in the Core Team to get it for you. Be sure to specify the quantity you need. VMware products are licensed in different ways. Please choose the smallest quantity required for your lab. Here is the link to the HOL License spreadsheet:

https://vmshare.vmware.com/marketing/VMWorld/hol/_layouts/xlviewer.aspx?id=/marketing/VMWorld/hol/Shared%20Documents/Licensing/HOL-vmware-licenses-exp-12-31-2018.xlsx

DO NOT use evaluation, NFR, or any other type of license in your lab. If you require a license that is not on the above spreadsheet, please contact Doug Baer so that he may acquire a license approved for use in the Hands-on Labs.

Licenses are visible in the HOL and so must be temporary in duration. Please include the label "FOR VMWARE HANDS-ON USE ONLY".  If people want to cheat, they will find a way; HOL provides a valuable resource for future sales and is worth the risk.

 

 

Should I enable Guest Customization for HOL Layer 1 VMs?

No, unless you understand what vCloud Director will do to your VM.  For the HOL use case it is RARELY a good idea to enable Guest Customization.  This is one of those times where you do NOT want the vCloud Director default when prompted.

 

 

What does DHCP on a VM network adapter mean in vCloud Director?

This is often confusing for lab captains. The net effect of setting a network adapter to DHCP in vCloud Director is that vCloud Director will not attempt to do anything and will trust that the guest OS has the network configured. This is actually what you want. If you have DHCP configured in the VM, then the vPodRouter will assign an IP in the 192.168.100.100-250 range. If you have a static IP defined, then that is what you will get. Do not set a manual IP in vCloud Director, because the vApp has the wrong IP address range defined.

 

 

Do I need to license and activate Windows VMs and vVMs?

Yes. MSDN account information is on the HOL SharePoint site. Use your lab's MSDN credentials to login and retrieve an appropriate Windows license.

To activate Windows you will need to wire up your vPod for Internet access. It is best to verify Internet access in the Windows machine prior to applying the license and activating. Setting the Windows license will automatically activate Windows by contacting Microsoft.

Never use internal Windows volume licenses.  MSDN licenses are approved for use in HOL per VMware legal.

There are a limited number of "Ultimate" MSDN licenses that include Microsoft Office products. If your lab needs to use Office products, please contact the Core Team. Never use VMware internal licenses in HOL vPods.

Sometimes there may be a desktop watermark indicating that Windows needs to be activated. This happens because different clouds present dissimilar hardware to Windows, and since the vPod has no access to the Microsoft activation service, Windows does not realize that the image is already activated.

 

 

How should I save my work in HOL-Dev?

At various times you will want to checkpoint your vPod work. Do not use snapshots. Do not suspend your vPod. Gracefully shut down your entire vPod (please see the instructions later in this document). Then add your vPod to the _WorkInProgress catalog. You must select the "Make identical" option, which is not the default, or your saved vPod will be unusable for HOL.

For milestone checkpoints, disconnect the external network and then capture to _WorkInProgress as usual, but then perform a "round-trip smoke test".  Copy your vPod back out from _WorkInProgress (the round trip) and, without connecting the external network, power it up and verify that the vPod boots correctly and the lab environment looks OK (the smoke test). Once you are satisfied, MOVE (do not copy) your vPod from _WorkInProgress to HOL-Staging. Moving the vPod to HOL-Staging consolidates the linked-clone disks. If a disk chain is longer than 15, consolidation is sometimes not possible due to timeouts on vCenter.

The vPod version in HOL-Staging is the one and only master version right after a milestone checkpoint, so remember to DELETE all temporary "smoke-test" vPods.  Do NOT use a "smoke-test" vPod as your master vPod for continuing work (its disk chains are still long).  Always copy out a NEW master vPod from HOL-Staging for ongoing vPod work. Remember to choose the "DMZ" VDC when deploying from HOL-Staging. Do not have more than one master vPod with the same name; your vPod will NOT be published if a vPod with the same name exists in both HOL-Staging and _WorkInProgress.

If you find that your vPod has not been consolidated after 10 round trips to _WorkInProgress, your team should round-trip the vPod through HOL-Staging: move the vPod from _WorkInProgress to HOL-Staging, and then move the vPod from HOL-Staging back to _WorkInProgress. Now you can continue development safely for another 10 vPod versions.

 

 

What's the story with the README.TXT file on the Main Console desktop?

No discussion of the README.TXT file would be complete without understanding that the vPods you create for Hands-on Labs are made available not just at VMworld and online via the VMware Learning Platform, but also to internal audiences via OneCloud published catalogs. In the past, the README.TXT has served two main purposes:

  1. Provide a high-level description of the contents of the vPod
  2. Serve as a repository for long command lines and other commands and passwords that may be confusing (e.g. Do you know what a backtick does on a Linux command line and why it is important to use it instead of a single quote?) or difficult to type on international keyboards that have been remapped to a standard US mapping within the Main Console.

The Send Text and, more recently, drag-and-drop text features of the VMware Learning Platform enable text to be easily pulled from the manual into the vPod environment, eliminating much of the requirement for #2. Because of these advancements, it is no longer necessary to maintain commands in both the lab manual and the README.TXT within the vPod. By taking the commands out of the README.TXT, fewer vPod changes will be needed for minor adjustments.

For 2016 and onward, the README.TXT file on the desktop of the Main Console should be used to store static information about the vPod that would be useful to someone who had acquired the pod without the accompanying manual. The lab SKU, name, a description of the purpose of the pod, and any non-standard login accounts and passwords should be placed in this document. When the vPods are published into OneCloud, the Description of the pod will include a brief description of the pod, but any additional helpful information should be included in the README.TXT file. 

When populating this file, consider information that you would like if you were looking for more information about the contents of the pod and its intended functionality. Envision that you have deployed and powered up this pod, without the manual, and you are looking to do something useful with the pod.

Aim to keep things simple: a paragraph describing the pod's purpose and its contents, followed by a list of any nonstandard logins or passwords, and a note for the user to download the accompanying HOL manual from http://docs.hol.vmware.com/ for more information.

The Core Team will NOT replicate vPods because of a change to the README.TXT file.

 

 

My vESXi host can't start vVMs!

 

There are times when you are working on your vPod and may notice that your vESXi host is unable to power on vVMs. Rebooting the host and trying again seems to resolve the issue, but deploying a new copy of the pod produces the "broken" behavior each time.

DON'T PANIC - This is a known issue that occurs when a vPod has been improperly shut down prior to being captured.

To prevent this behavior, ensure that you Shut Down your ESXi hosts within the vPod and wait until every ESXi host reports "Not connected" (1) within vCenter. Once every ESXi host is down and vCenter shows "Not connected," proceed to shut down the vCenter and remaining components in the vPod. This step takes a little more time, but ensures that the captured pod comes up properly. When using the web client, you may need to click on the Refresh icon (2) periodically to have the latest data show up. Be patient. It can take 5-10 minutes for all hosts to shut down and for vCenter to detect that they are no longer available.

Failure to wait for vCenter to detect hosts as "Not connected" will frequently result in a disagreement between the host and the vCenter, which can prevent VMs from being powered up on the host, even though vCenter shows the host online and ready.

If you find your pod in this situation, powering up the pod, rebooting each of the affected vESXi hosts, allowing them to come up and report into vCenter, and then shutting everything down correctly will resolve the issue. It can often also be corrected with the "broken" hosts online by restarting the management agents on each affected vESXi host.
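The wait-for-"Not connected" step above can be sketched as a simple polling loop. This is an illustrative Python stand-in, not a real vCenter query; the get_state callable represents whatever mechanism reports host connection state.

```python
# Illustrative sketch of the shutdown ordering: poll each host's connection
# state (as vCenter would report it) and proceed with shutting down vCenter
# only once EVERY vESXi host shows "Not connected".
import time

def wait_for_hosts_disconnected(hosts, get_state, timeout_s=600, poll_s=5):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        states = {h: get_state(h) for h in hosts}
        if all(s == "Not connected" for s in states.values()):
            return True                  # safe to shut down vCenter now
        time.sleep(poll_s)               # be patient; this can take 5-10 minutes
    return False                         # do NOT capture the pod in this state

calls = {"n": 0}
def get_state(host):                     # stand-in: hosts finish powering off later
    calls["n"] += 1
    return "Not connected" if calls["n"] > 2 else "Connected"

print(wait_for_hosts_disconnected(["esx-01a", "esx-02a"], get_state,
                                  timeout_s=5, poll_s=0))  # → True
```

Returning False here corresponds to the failure mode described above: capturing before all hosts report "Not connected" produces a pod whose hosts cannot power on vVMs.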

 

 

"The VDC associated with this vApp does not have the required network resources to start this vApp."

 

If you deploy a copy of your vPod into HOL-DEV and notice it takes a REALLY LONG time to deploy and you receive a "Cannot start" message (1) after it deploys, you may have deployed the vPod into the wrong virtual datacenter.

There are two distinct storage policies in the HOL-DEV environment, one is used mainly to RUN vPods and the other is used to STORE vPods and prepare them for publishing. When a vPod has been deployed to the global catalog VDC, it cannot be started and the reason given is "The VDC associated with this vApp does not have the required network resources to start this vApp."

When in doubt, you can look at the "VDC" column (2) and be sure that the VDC name does not contain "GC" -- for "Global Catalog" -- since this indicates a VDC that is used for storage and disk chain consolidation.

The main VDC provides faster template checkins and checkouts to facilitate development, and the GC VDC is used to collapse the disk chain and maintain the integrity of the template's component virtual machines. It takes longer, but its use ensures that we are able to process and publish your work effectively and in a timely fashion.

 

 

Deploying to the correct VDC

 

When deploying a vPod from the catalog, ensure that the Virtual datacenter that is selected by default is the correct one. In the HOL-DEV environment, you want the one containing DMZ and not GC. The few seconds it takes to verify this can save you many minutes spent waiting for something to deploy to the GC and then having to delete it because it will not power up.

Note that a deployment to the GC virtual datacenter is a full copy, while a deployment to the DMZ virtual datacenter uses a linked clone. You can usually tell the difference by how long the deployment takes.

 

What's New with vPods?


In this article, the Core Team will detail what has changed with vPods since the last development cycle.

This release contains the updates described below.

Read on to learn about these important changes.


 

Periodic vPod readiness check - LabCheck

 

While not as much of a concern during VMworld shows, in day-to-day operation prepops can remain running for extended periods of time.  This can sometimes result in a customer taking a lab that was ready after its initial start but has since developed issues: services stopped, VMs crashed, and so on.

 

 

Running LabStartup as a scheduled task

 

To verify vPod readiness, the updated LabStartup script configures a Windows Scheduled Task called "LabCheck" to run LabStartup.ps1 at a specified interval; the default is one hour. One run condition is that the Main Console must have been idle for 10 minutes. Unfortunately, Windows 2012 R2 does not respect this idle run condition, so the LabCheck task will run even if someone is actively taking the lab. However, LabCheck detects recent input activity or changes in desktop screen resolution and will exit immediately in those cases.
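The guard logic above can be sketched as follows. This is a hypothetical Python stand-in for what is actually a PowerShell script; the ten-minute threshold comes from the text, while the resolution values and timestamps are invented for illustration.

```python
# Hypothetical sketch of the LabCheck guard: because Windows 2012 R2 does not
# honor the scheduled task's idle run-condition, the check itself bails out
# when it sees recent input or a changed screen resolution.
import time

def should_run_labcheck(last_input_ts, idle_required_s,
                        baseline_resolution, current_resolution, now=None):
    now = time.time() if now is None else now
    if now - last_input_ts < idle_required_s:
        return False                 # user is active; exit immediately
    if current_resolution != baseline_resolution:
        return False                 # console is attached; exit immediately
    return True                      # safe to run the readiness checks

# Idle for 15 minutes at an unchanged (illustrative) resolution: run the check.
print(should_run_labcheck(last_input_ts=0, idle_required_s=600,
                          baseline_resolution=(1024, 768),
                          current_resolution=(1024, 768), now=900))  # → True
```

In other words, the guard compensates in software for a scheduler condition the OS ignores.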

 

 

How to tell LabCheck has run

 

Check the top of C:\HOL\LabStartup.log to determine if LabCheck has run.

 

 

vPodRouterHOL Capabilities

 

The Core Team has replaced vPodRouter-v6.1 with a new vPodRouterHOL version for production use in HOL. This router is identical to the previous version except that it allows vPod external network access to be enabled with dynamically loaded iptables firewall rules.  This use case is for special workshops that might require external access. The vPodRouterHOL has logic to retrieve firewall rules from AWS S3; if the retrieval succeeds, a non-standard root password is used. Please do not ask us what the vPodRouterHOL root password is, because we will not tell you. If no firewall rules can be downloaded, the standard HOL password is set for root about 5 minutes after boot. The standard vPodRouter-v6.1 will be used when your vPod is replicated to OneCloud for field use.

Note: Please do not make changes to the vPodRouterHOL without consulting with the Core Team. Please see the Add-Route function below.
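The boot-time decision described above can be sketched as follows. Every callable and string in this Python sketch is an illustrative stand-in; no real S3 bucket, rule set or password appears here, and the actual router logic is not Python.

```python
# Hedged sketch of the vPodRouterHOL boot flow: try to fetch firewall rules;
# on success apply them and use the non-standard root password, otherwise
# fall back to the standard HOL password (after ~5 minutes in the real router).
def configure_router(fetch_rules, apply_rules, set_root_password):
    rules = fetch_rules()                    # e.g. download iptables rules from S3
    if rules:
        apply_rules(rules)                   # load the firewall
        set_root_password("non-standard")    # stand-in label, not a real password
        return "rules-applied"
    set_root_password("standard")            # stand-in label, not a real password
    return "no-rules"

applied = []
print(configure_router(lambda: ["-A INPUT -j DROP"],   # pretend S3 returned one rule
                       applied.extend,
                       lambda pw: None))               # → rules-applied
```

The fallback path is why a lab without downloadable rules still becomes accessible with the standard root password a few minutes after boot.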

 

 

By default no firewall is active in vPodRouterHOL

There is no firewall active by default in the vPodRouterHOL. If you have a lab with external access and need the firewall off during development, just ask the Core Team to disable the firewall for you temporarily. We do not want changes made to the vPodRouterHOL itself; it is important to keep the router secure if your vPod is wired up for external network access.

 

 

Add-Route Function

 

To remove the need to modify the vPodRouterHOL, use the Add-Route function in your LabStartup.ps1 if additional routes are required, instead of editing the /etc/network/interfaces file on the vPodRouterHOL.  An even better idea is to use one of the many existing routes that the vPodRouterHOL already knows about!

 

 

Connect-Restart-HVCS Function

 

In production, Horizon View Connection Servers sometimes have issues coming up during vPod boot. This function will attempt to remediate issues with each Horizon View Connection Server URL in the $HVCSURLs array. If a reboot of a Horizon View Connection Server is needed, additional time is added to $maxMinutesBeforeFail.  Only one reboot of each View Connection Server will be performed.  This function works whether your View Connection Server is Layer 1 or Layer 2.
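The behavior described above — reboot at most once, extend the time budget, keep retrying — can be sketched in Python. The real function is PowerShell inside LabStartup.ps1; the callables, attempt counts and return strings below are illustrative stand-ins, not the actual Connect-Restart-HVCS implementation.

```python
# Hedged sketch of the Connect-Restart pattern: if a service will not
# connect, remediate with a single reboot, allow extra attempts afterwards,
# and keep retrying until the budget is exhausted.
def connect_with_one_restart(try_connect, reboot, max_attempts=5):
    rebooted = False
    attempts = max_attempts
    i = 0
    while i < attempts:
        if try_connect():
            return "connected"
        if not rebooted:
            reboot()                 # remediate once only
            rebooted = True
            attempts += 3            # allow extra time after the reboot
        i += 1
    return "failed"

state = {"healthy": False}
def try_connect():
    return state["healthy"]
def reboot():
    state["healthy"] = True          # pretend the reboot fixed the service

print(connect_with_one_restart(try_connect, reboot))  # → connected
```

Extending the attempt budget after a reboot mirrors the extra minutes added to $maxMinutesBeforeFail in the real script.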

 

vPod Configuration


The goal of this article is to explain basic vPod structure and configuration. For detailed base information, please see the engineering Wiki dedicated to vPods - https://wiki.eng.vmware.com/VPod

Note that the HOL base vPods are derived from the pods described here, and the HOL team works closely with that team, but our vPods do have some additional features that provide value in the HOL environment.


 

What do I do first?

Whether you are a veteran HOL captain or this is your first time joining us, welcome to the Hands-on Labs.

The Core Team has been working to improve the overall attendee and content creator experience and has made adjustments to some of the procedures and configurations used in past years.

 

 

DesktopInfo - Set your SKU

 

When you deploy your first copy of an HOL Base Template into the HOL-DEV environment and power it up, the first task we would like you to perform is opening the C:\DesktopInfo\desktopinfo.ini file and entering your lab SKU. DesktopInfo uses this information to display the SKU for your lab on the desktop in white text. The important line (somewhere near line 25) looks like this:

COMMENT=active:1,interval:60,color:FFFFFF,style:b,text:HOL-17XX

and, if you are working in lab HOL-1799, you should modify it to look like this:

COMMENT=active:1,interval:60,color:FFFFFF,style:b,text:HOL-1799

Save the file and wait a few seconds. Your SKU should show up in the DesktopInfo area on the desktop, as in the picture. In addition to helping identify your pod when you RDP into it, this value is used to report readiness status of deployed copies of your pod to the AutoLabStatus dashboard and helps detect failed deployments.

The "Host" value is read from C:\hol\hostname.txt. We feel that "Main Console" is friendlier than "ControlCenter", which is the actual hostname.

 

 

LabStartup - Enable LabStartup

 

After configuring the SKU for your lab in DesktopInfo, please enable the LabStartup script. More information about this script is available in the appendix of this document, but the simple change that you should make is to allow the script to run. This script is located in C:\HOL\LabStartup.ps1

In the default configuration, the script is set to exit around line 217:

#ATTENTION: Remove the next two lines when you implement this script for your pod
Set-Content -Value "Not Implemented" -Path $statusFile
Exit

when you take ownership of your shiny new base vPod, either remove these lines or comment them out as in the following example:

#ATTENTION: Remove the next two lines when you implement this script for your pod
#Set-Content -Value "Not Implemented" -Path $statusFile
#Exit

Whenever the Main Console machine is booted and the Administrator user auto logs in, the script will check the status of the base vPod's components: vCenter, ESXi hosts and storage, and report using the Lab Status line in the DesktopInfo section of the desktop. Note that this text will display in Red until the pod passes all checks and is deemed Ready, at which point it will transition to Green.

As you add components to your vPod, you will need to extend the checks performed by this script. Please see the appendix of this article for more information. The article Scripts - SkipRestart, LabStartup and LabLogoff is dedicated to the scripts we run in the vPods and contains a lot more details. If you run into any issues or have any questions, please do not hesitate to ask Bill or Doug for assistance.

 

 

Base vPod Structure

 

Rest assured that you do not need to start from scratch when building a lab for Hands-on Labs. The core team has created baseline pods configured according to a lab standard. If you find yourself in need of extra components and believe that you may need to build something from scratch, please ask first. We have some predefined components that are not included in the base pods and it is much easier to support standardized components than custom, one-off installations.

The construct, which provides a user’s lab environment, is called a vPod. A vPod is currently implemented as a VMware vCloud Director vApp template. Our cloud management engine, VMware Learning Platform (formerly Project NEE), deploys a copy of this template to support each user of a particular lab.

It is important to note that a vPod entirely contains the user’s lab environment. This means that there are no external dependencies, no external access, and all services within the vPod are transient.

REMEMBER: If you do not put it into your vPod, it is not available when the lab is deployed for the user.

A vPod is not designed to run for long periods of time, and is not designed to preserve any state. Think of it like a paper towel: pull one off the roll, use it, and then throw it away. This is important to understand when sizing components to run within the vPod. We are not dealing with Enterprise datacenters here. That is not to say that a vPod should not be able to tolerate being run for extended periods. In order to optimize user experience, vPods will be pre-provisioned and started, perhaps hours or days in advance of being used. There should be no jobs or logging activities that consume large amounts of disk space or otherwise fill up allocated space within a vPod.

A vPod usually contains one or more virtual vSphere hosts in a simulated datacenter along with vCenter and storage shared over Ethernet.  A first-class or layer 1 VM exists as a VM within the vCD vApp.  A nested or layer 2 VM (sometimes referred to as a vVM), runs on one of the virtual vSphere hosts. The next section contains information about the core components of a vPod.

 

 

Base vPod Inventory

This section provides an overview of the components used to build the vPod environment. If you are mostly interested in using the vPod, that information is covered in a different document, but you can still use this section as a reference regarding where “stuff” is within the vPod. The most basic vPods are constructed using two items: the vPodRouter, and the Main Console. These are not the most interesting vPods, but we have to start somewhere.

 

 

vPodRouter v6.x

This is the networking “brain” of the vPod and allows multiple IP networks to be used within the vPod without consuming multiple vCD vApp networks. Leveraging a single vApp network means only one portgroup must be created per deployed instance of the vPod. This speeds deployment, allows greater scalability, and simplifies configuration of the vPod.

The vPodRouter acts as the default gateway (192.168.x.1) for a set of IP networks defined within the vPod, performs routing between these networks, and provides simple DHCP services.  The vPodRouter is a Debian Linux machine with routing, port forwarding, NTP and DHCP installed. In general, messing with it is discouraged unless you have specific needs, at which point you will want to engage the core team for assistance.

Note that the DHCP scope served on the NICs we attach in HOL is on the 192.168.100.0/24 network and covers addresses 192.168.100.100 through 192.168.100.250.

For more information about the vPodRouter, please see the vPodRouter for the HOL Use Case article in this guide.

 

 

Main Console (controlcenter.corp.local)

We decided to change the name of controlcenter to Main Console this year, based on feedback, to make it clearer to lab takers. However, the Main Console hostname is still controlcenter and its A record in DNS is controlcenter.corp.local. There is a CNAME alias, mainconsole, but to minimize issues we left the real hostname alone.

The Main Console is usually the main interface to a vPod. This first-class VM runs Windows 2012 R2 and provides, among other things, a familiar Windows desktop interface for accomplishing lab tasks. If you are familiar with the concept of a “jump server,” this VM more or less serves that purpose.

When a vPod is connected to an external network during development, the vPodRouter directs all RDP traffic to the Main Console VM.  When a vPod is published via VLP for user access, one or more first-class (Layer 1) VMs may be accessed through the browser-embedded VM console, but the Main Console is the primary one.

The Main Console provides a web browser (Internet Explorer, Firefox, Chrome), terminal client (PuTTY), file transfer utilities (WinSCP), and other tools for interacting with other machines and services within the vPod. When uploading files to the vPod during development, the Main Console is the recommended staging area in your group's Staging vPod. From the Main Console, any other L1 VM can be accessed.

Perhaps more importantly, the Main Console (controlcenter) is an Active Directory domain controller and DNS server for the CORP (corp.local) domain. This domain is entirely contained within the pod, and the DNS records for frequently used services are pre-defined and documented. Generally, all in-pod VMs should point at controlcenter.corp.local for DNS resolution unless there is a special case being demonstrated. External DNS forwarders are configured to use Google public DNS (8.8.8.8 and 8.8.4.4). Additional DNS records or Active Directory objects can be created on a per-lab basis.

If your application/service/component needs to run on a Windows machine, and can coexist with a Domain Controller, it should be installed here. This helps keep the lab footprint small. Otherwise, an additional Windows VM may be added at the first layer by using one of the available pre-built templates.

 

 

 

Datacenter-in-a-box

In general, think of the vPod as your own private workspace and the Main Console VM as your window into that workspace. Everything you need should be available, or can be added. Removing components from the Main Console is not allowed and will cause support issues later on. Furthermore, VMware Hands-on Labs strives to provide a consistent look and feel for lab takers. Do NOT change the Windows Explorer settings or remove shortcuts from the desktop or Quick Launch menu.

The primary interface to the vPod's component VMs is the VMRC console in vCD. Some of you may remember that the VMRC in vCD 5.1 did not work on Mac OS X, so workarounds were needed if you used a Mac. With vCD 5.5, it is possible to open a VMRC console using Firefox or Google Chrome on Mac OS X.

 

 

Passwords

We use a standard password for all accounts and services within every vPod. This ensures that any current or future member of the team can maintain the vPods. The password is VMware1! and satisfies complexity requirements for most modern operating systems, appliances, and applications. This password is set for the root account on all Linux machines and appliances in the base vPods, it is the local Administrator password on all Windows VMs, and is the Domain Administrator password for the in-vPod Active Directory domain (corp.local). If your application requires a more complex password, you may use VMware1!VMware1! as needed.

 

 

 

Logging In

Once you have a vPod deployed and can get to the console of the Main Console machine, launch a web browser from the desktop and you will be taken to the vSphere Web Client for the vCenter at Site A (vcsa-01a). Log in with the Windows credentials, or with the user administrator@vsphere.local and password VMware1!

Note that the Main Console machine is configured for AutoAdminLogon. This is a Hands-on Labs standard and helps to improve user experience as well as enabling various “warm up” tasks to be performed prior to users logging in to take the lab.

 

 

Handling International Keyboards

 

The Main Console and WebMKS console assume a US QWERTY keyboard is attached. If the user has a different keyboard, the keys do not always map properly and typing certain special characters can be challenging. This is especially problematic for passwords and certain command strings. To help users enter these characters, there are three main options:

  1. Send Text function of VLP (VMware Learning Platform) - this is PREFERRED. (Also can use drag and drop from lab manual to VM console.)
  2. Windows On-screen keyboard
  3. README "Copy-Paste" file on Desktop

The Send Text option via VLP allows a user to input text using their native keyboard and send that into the controlcenter VM. This requires some familiarization, but is pretty simple to use. Lab takers can also select some text in the lab manual and then drag and drop it on to the active window in the VM console.

The Windows On-screen keyboard presents a user with a clickable US keyboard. This makes inputting a special character or two possible, but is more than a little cumbersome and it consumes a lot of screen real estate.

The last option, the README "Copy-Paste" file, should be stored on the desktop of the Main Console and named "README.txt". Generally, keep this file fairly generic.

The Core Team will NOT replicate a vPod based on changes to the README.

 

 

Provided Software

In addition to Internet Explorer, we have provided some commonly used tools for your convenience.

You are logged into the Main Console and the in-vPod CORP domain with full administrative access and can update or add other software that you need for your lab. However, please be mindful of licensing requirements and restrictions for any software you add to your vPod.

 

 

Virtual ESXi (vESXi) Hosts

All but the most basic vPods include three virtual ESXi hosts, a vCenter, and Virtual SAN that provides shared storage. This configuration provides a common starting point for constructing most product or solution demonstrations. Unless your solution requires more than this pod offers, this is a great starting point.

When the base vPods are created, they are built using a default configuration, with some basic settings applied to make the environment functional and eliminate common warning messages. As such, we configure a syslog server (udp://vcsa-01a:514) for each vESXi host, attach shared storage, and configure IP storage and vMotion interfaces on each host. In addition, we install an Enterprise Plus license, instantiate a vDS and migrate all host networking onto the vDS.

 

 

Storage

In the Single Site base vPod, all shared storage is provided via VSAN. Storage is provided via 5 GB "SSD" VMDKs and 40 GB "HDD" VMDKs attached to each host.

Raw Capacity: 118 GB

HOL Virtual SAN Storage Policy - The default policy on the Virtual SAN datastore has been changed so that the Failures To Tolerate (FTT) is 0. This means that only one copy of a vVM is stored on the datastore. Since this is a lab, there should be no general need to replicate the blocks onto multiple hosts to accommodate a failure or a host going offline. In the event that this is necessary for some vVMs or certain lab workflows, it can be applied per-VM using the power of SPBM.
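For example, if a particular vVM does need protection against a host failure, one way to apply a different policy per VM is with the PowerCLI SPBM cmdlets. This is a sketch only, not part of the lab scripts; the policy name and VM name below are hypothetical, and it assumes a policy with FTT=1 has already been created in vCenter:

```powershell
# Sketch using PowerCLI SPBM cmdlets (policy and VM names are hypothetical).
# Assumes a storage policy with Failures To Tolerate = 1 already exists.
$policy = Get-SpbmStoragePolicy -Name 'FTT1-Policy'

# Apply the policy to one nested VM's storage configuration
Get-VM -Name 'linux-base-01a' |
	Get-SpbmEntityConfiguration |
	Set-SpbmEntityConfiguration -StoragePolicy $policy
```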

The names of the vSAN datastores have been aligned with the recommendations from the VMware Validated Designs (VVD).

Datastore Name: RegionA01-VSAN-COMP01 (RegionB01-VSAN-COMP01 for the second site/region in the Dual-Site vPod)

Adding more capacity to the Virtual SAN datastore involves attaching additional VMDKs to the ESXi hosts and explicitly flagging them as HDD devices. The process for flagging devices is covered in detail in the article, Flagging ESXi SCSI LUNs for HOL. Please ensure that this process is followed or the VSAN will be non-functional in some clouds.

Note that VSAN has been configured in Manual mode to prevent it from automatically grabbing VMDKs for capacity as they are presented to the vESXi hosts. This allows you the chance to flag the devices as "SSD" to create additional VSAN disk groups, or to use the VMDK as a local datastore on the vESXi host -- for NSX controller storage, for example. The process for adding a new disk to the existing VSAN is covered in the article Virtual SAN: Adding Storage to the VSAN Datastore.

 

 

vCenter(s)

The base vPods use the vCenter Server Appliance (VCSA). For the 2016 cycle, we have opted to use an embedded Platform Services Controller (PSC) and 4 vCPUs to speed up vPod cold boot. The default services have been configured, but AutoDeploy, the Dump Collector and other optional services have not been activated in the base.

 

 

Appendix

Additional configuration information is provided here.

 

 

Networking - hosts

 

Each vESXi host has three preconfigured vmkernel interfaces, as indicated in the table.

The last octet for each host is 50 + the host number, and is consistent across all 3 networks. For example, host esx-01a has addresses 192.168.110.51, 10.10.20.51 and 10.10.30.51.
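The addressing convention can be expressed as a small helper function, shown here only to illustrate the pattern (this function is not part of the base scripts):

```powershell
# Illustration only: compute the three vmkernel addresses for a host
# using the "50 + host number" convention described above.
function Get-VmkAddresses ([int]$HostNumber) {
	$octet = 50 + $HostNumber
	@("192.168.110.$octet", "10.10.20.$octet", "10.10.30.$octet")
}

Get-VmkAddresses 1   # 192.168.110.51, 10.10.20.51, 10.10.30.51
```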

The vMotion vmknic is configured to use the new vSphere 6 "vMotion" TCP/IP stack while all other traffic traverses the default stack.

There are two network adapters assigned to each vESXi host. Both vmnic0 and vmnic1 are assigned to the vDS. If necessary, you may add more network adapters in vCloud Director. Be sure to leave the network properties in vCloud Director set to DHCP. Remember, this just means that you are taking care of the configuration manually in the host.

 

 

Scripts and Utilities

Some lab initialization and utility scripts are provided. These scripts are stored in the C:\HOL directory and some are called automatically via shortcuts in the All Users Startup Items directory (C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup).

More information is available in the articles describing the scripts' use within the pod. Detailed information about the SkipRestart, LabStartup/LabStartupFunctions and LabLogoff scripts is available in the article, Scripts - SkipRestart, LabStartup and LabLogoff.

 

Scripts - SkipRestart, LabStartup, LabCheck and LabLogoff


We try to ensure that users receive vPods that have not been freshly deployed. This is not always possible if the demand is greater than the configured pre-deployed labs ("pre-pops"). Sometimes, a user receives a vPod that has just been recently deployed and is not yet initialized and ready to go.

The LabStartup scripts are intended to be a standardized vPod bootstrap and startup automation system with available user progress notification. This is provided as a framework for your use. This document is intended to explain our thought process and the current implementation.

The goals of this script are 1) to ensure that a vPod is "READY" for a user to begin using it, 2) to communicate the vPod's readiness to the user in a clear and consistent manner and 3) communicate the readiness to our upstream monitoring system so that we may address failures prior to users being given "bad" pods.

Note that the LabStartup script runs each hour after the vPod is ready, using the labcheck argument. (Running LabStartup with the labcheck argument is referred to as "LabCheck.") If something fails after initial boot, LabCheck will change the vPod status from READY to FAILED and alert our upstream monitoring to replace the pre-pop.


 

SkipRestart

The main function of this utility is to automatically dismiss the “Windows Restart” message that shows up when a Windows 7, 2003, or 2008 virtual machine boots up and detects a CPU that it has not seen previously. This dialog causes user confusion and results in a poor lab experience, so we want to eliminate it wherever possible.

This utility consists of two parts: SkipRestart.bat and SkipRestart.ps1, both of which are located in the C:\HOL directory on the Main Console.

More information on SkipRestart is available in a separate article dealing with L2 VMs.

 

 

DesktopInfo

While not technically a script, the DesktopInfo program is used to add vPod information to the desktop for easy access by users and administrators. This program has been configured via the C:\DesktopInfo\desktopinfo.ini file to read the C:\HOL\startup_status.txt file and put the text found there into its "Lab Status" field.

The configuration file has a line that looks like this to report status from the script:

FILE=active:1,interval:10,color:3A3AFA,type:text,text:Lab Status,file:C:\hol\startup_status.txt

In addition, the configuration displays the CO-OP ID portion of the Lab SKU or agreed-upon title for your lab on the desktop in white text. A vPod is typically used by multiple labs so only the CO-OP ID portion should be used and not an individual lab SKU. The line looks like this:

COMMENT=active:1,interval:60,color:FFFFFF,style:b,text:HOL-XXX-18XX

It is very important that you configure the last part of this line with the CO-OP ID portion of your Lab SKU replacing the HOL-XXX-18XX text because this value is displayed to the customer.
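For example, if your CO-OP ID were HOL-SDC-1800 (a made-up value for illustration), the edited line would read:

```ini
COMMENT=active:1,interval:60,color:FFFFFF,style:b,text:HOL-SDC-1800
```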

A lab SKU variable is defined in the LabStartup.ps1 script.  This value is used in our pre-pop monitoring application.

 

 

LabStartup

 

The C:\HOL\LabStartup.bat file is called using a shortcut in the All Users Startup Items directory (C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup). This shortcut specifies a Minimized window type and spawns a hidden PowerShell window. This reduces the chances that a user will be impacted by the script's execution and prevents them from accidentally interrupting its very important work.

The script is also run by LabCheck using a Windows Scheduled Task.  The "labcheck" command line argument signals LabStartup that it is a repeated run and not the initial start of the vPod.

The initial steps of the script perform various cleanup, setup, and environment "sanity check" tasks before checking for the presence of an AutoLab script. AutoLab is used in scale testing and will preempt the LabStartup run in order to hand control over to the AutoLab testing engine. If the environment is sane and there is no AutoLab script present, LabStartup continues to Phase 2.

 

 

Main LabStartup Flow

 

Several functions are provided by the HOL core team for use in the LabStartup script. The main skeleton of the script has been created to verify that the resources are available in the base vPod templates. The order in which these checks are performed has been refined over the past few years and works for a wide variety of use cases, but it is possible to customize the flow to meet the needs of a given environment.

Understanding the function of our LabStartup script is very important for not only the Hands-on Labs use case, but for reuse of the vPods by other users within the VMware Private Cloud environment. This is where I tell you that the circumference of the Earth is roughly 24,901 miles. If you ran a marathon each day, it would take over 950 days (~2.6 years) to run that distance! That may be useful to you later, although probably not for the running.

Each of the main steps is designed as a blocking task, and execution will not proceed without successful completion of each task along the way. This mirrors most infrastructure: it does not make sense to try and start VMs on a host unless the host has completed booting and its datastore is reporting Connected.

 

 

LabStartup Output

Outputs of the script are:

  1. C:\HOL\LabStartup.log - incremental progress for the current run of the script will be logged here. This is most useful for troubleshooting the script during development, or for identifying why a deployed pod has not entered the "READY" state.
  2. DesktopInfo desktop badge: "Lab Status" reads the C:\HOL\startup_status.txt file. This is the primary interface with our users.
  3. The IP address of eth5 on the vpodrouter is set to a value which encodes the lab SKU and the pod's current status. This is used to communicate status with the monitoring system.

In a base vPod, the DesktopInfo "Lab Status" badge will be initialized to "Implement LabStartup" until you modify the script to check and report readiness for your vPod. When you make the changes, the value will be initialized to "Not Ready" and then change to "Ready" with a time stamp once the script has completed execution. Incremental updates to this value, as described later in this article, are encouraged to help users see that progress is being made.

Each time the LabStartup.bat file is run, the C:\HOL\LabStartup.log file will be overwritten with the current run -- there really is no value in preserving past run logs, except during debugging.

 

 

LabCheck

We refer to LabCheck when LabStartup is run with the "labcheck" command line argument. At the end of the initial LabStartup run, a Windows Scheduled Task is created to run labcheck.bat each hour.

You can test for a labcheck run by checking the boolean $labcheck variable. You may want to avoid certain actions during a LabCheck run or you may want other actions to be taken.

If keyboard or mouse input has been detected since the last run, LabCheck will exit immediately and will not run again. You can test for LabCheck and skip actions if desired, or take additional actions in the LabStartup script as needed.
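A LabStartup.ps1 customization might branch on the $labcheck boolean like this (the warm-up task named here is a hypothetical placeholder):

```powershell
# Sketch: branch on the $labcheck boolean inside LabStartup.ps1.
# "Invoke-WarmUp" is a hypothetical placeholder for a one-time task.
if( $labcheck ) {
	Write-Output "LabCheck run: skipping one-time warm-up tasks"
} else {
	Invoke-WarmUp   # hypothetical: only needed on the initial boot
}
```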

 

 

LabLogoff

This script is executed each time the user logs off of the Main Console, including during a shutdown. It has these main functions:

  1. Empty the contents of the Recycle Bin
  2. Reset the color of the lab status line in DesktopInfo.ini from Green to Red
  3. Remove the "parent.lock" file that Firefox uses to indicate that its profile is in use
  4. Delete the LabCheck scheduled task that re-runs the LabStartup script

We have found that there are times when the Recycle Bin was not emptied prior to a pod being checked in as final. To keep things tidy, this script handles that task automatically. Be aware that you should not store files temporarily in the Recycle Bin.

 

 

What do I do first?

In the HOL base vPods, a shortcut to the LabStartup trigger batch file has been placed in the All Users Startup Items directory (C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup). The LabStartup.ps1 script itself is stored in the C:\HOL directory and will perform only basic environment checks before reporting "Implement LabStartup" and exiting.

Our thinking is that most teams are going to take our base vPods and add or remove resources in order to build the lab environments for each lab. When the base pod is modified, the LabStartup script must be configured to check the new resources and to NOT check the removed resources. During initial vPod construction, leaving the script set to "Implement LabStartup" is a reminder that the script must be implemented based on the final pod's configuration. Having the script exit also prevents ControlCenter from looking for resources that may no longer exist, attempting remediation of components it may detect as having failed, and eventually timing out or reporting a failure.

 

 

Set your SKU 1: DesktopInfo.ini

 

The first step is to configure your lab's SKU or agreed-upon title in the DesktopInfo.ini file so that DesktopInfo can display it on the desktop of the Main Console. The file is located at C:\DesktopInfo\DesktopInfo.ini. Go to the COMMENT line (it should be on line 25 of the file) and replace the existing placeholder SKU, after the "text:" marker, with the SKU or agreed-upon title for your lab.

Save the file and wait a couple of minutes while the desktop refreshes. Your new SKU (CO-OP ID) or title should show up.

 

 

Set your SKU 2: LabStartup.ps1

 

The LabStartup.ps1 script uses the lab's SKU for reporting status to the monitoring system and to ensure that the correct vPod has been mapped to the lab record in the VMware Learning Platform.

To set the SKU in the LabStartup script, open the file C:\HOL\LabStartup.ps1 and scroll down to around line 56, or search for the $vPodSKU variable name. Change the value of the variable from the existing placeholder SKU to the proper SKU for your pod and save the file.

If multiple labs are using the same vPod, use the SKU of the first lab to use the vPod.
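Using the HOL-1799 example SKU from earlier in this guide, the edited line would look like this:

```powershell
# Around line 56 of C:\HOL\LabStartup.ps1 -- replace the placeholder SKU
$vPodSKU = 'HOL-1799'
```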

 

 

LabStartup - Enable LabStartup

 

After configuring the SKU for your lab, please proceed to enable the LabStartup script. Open the file C:\HOL\LabStartup.ps1 and make the required change.

In the default configuration, the script will exit around line 210:

##ATTENTION: Remove the next three lines when you implement this script for your pod
Set-Content -Value "Implement LabStartup" -Path $statusFile
Write-Output "LabStartup script has not been implemented yet. Please ask for assistance if you need it."
Exit

when you take ownership of your shiny new base vPod, either remove these lines or comment them out by putting a hash mark (#) at the beginning of the line, as in the following example:

##ATTENTION: Remove the next three lines when you implement this script for your pod
#Set-Content -Value "Implement LabStartup" -Path $statusFile
#Write-Output "LabStartup script has not been implemented yet. Please ask for assistance if you need it."
#Exit

Whenever the Main Console machine is booted and the Administrator user logs in, the script will check the status of the base vPod's components: vCenter, ESXi hosts and FreeNAS storage, and report using the Lab Status line in the DesktopInfo section of the desktop. Note that this text will display in Red until the pod passes all checks and is deemed Ready, at which point it will transition to Green.

As you add components to your vPod, you will need to extend the checks performed by this script. We realize that not everyone is comfortable with PowerShell, so please do not hesitate to ask the Core Team if you require assistance.

 

 

User Variables - what to check

The main script logic is contained in the LabStartup.ps1 file. Beginning around line 55, there are some user variables that should be populated. Sensible defaults have been provided based on the configuration of the base vPod.

Once the Exit line has been removed to enable the script's primary functionality, the script should run to completion with a base pod that has been completely powered up. This will report on the base components but does not necessarily indicate that the lab is ready for someone to begin taking your lab unless you require nothing beyond a base pod. The example checks must be augmented with values appropriate for your lab. Please ask Bill and Doug if you have any questions or require any assistance. We are constantly looking to improve this process.

In PowerShell, the "#" character begins a comment. We include example entries that begin with "#", meaning they are commented out. Delete the "#" to uncomment a line. If your vPod no longer contains a resource such as 'esx-02a.corp.local', simply add a "#" to the beginning of the line containing 'esx-02a.corp.local'. Leave all the other lines as they are.

#FQDN(s) of vCenter server(s)
$vCenters = @(
	'vcsa-01a.corp.local:linux'
)
# Will test ESXi hosts are responding on port 22
# be sure to enable SSH on all HOL vESXi hosts
$ESXiHosts = @(
	'esx-01a.corp.local:22'
	'esx-02a.corp.local:22'
)
# FreeNAS NFS datastore names in vCenter
$datastores = @(
	'stga-01a.corp.local:ds-site-a-nfs01'
)
# Windows Services to be checked / started
# uncomment and edit if service is present in your lab
$windowsServices = @(
	#'controlcenter.corp.local:VMTools'
	#'srm-01a.corp.local:vmware-dr-vpostgres' # Site A SRM embedded database
	#'srm-01a.corp.local:vmware-dr' # Site A SRM server
	#'srm-01b.corp.local:vmware-dr-vpostgres' # Site B SRM embedded database
	'srm-01b.corp.local:vmware-dr' # Site B SRM server
)
#Linux Services to be checked / started
$linuxServices = @(
   'vcsa-01a.corp.local:vsphere-client'  # include this entry if using a vCenter appliance
)
# Nested Virtual Machines to be powered on
# if multiple vCenters, specify the FQDN of the owning vCenter after the colon
# optionally indicate a pause with the "Pause" record.  In this case the number 
#  after the colon is the number of seconds to wait before continuing.
$VMs = @(
#	'linux-base-01a'
#	'Pause:30'
#	'linux-desk-01a:vcsa-01a.corp.local'
	)
# as with vVMs, the format of these entries is VAPPNAME:VCENTER
$vApps = @(
#	'example vApp:vcsa-01a.corp.local'
)
#TCP Ports to be checked
$TCPservices = @(
#	'vcsa-01a.corp.local:443'
)
#URLs to be checked for specified text in response
$URLs = @{
	'https://vcsa-01a.corp.local:9443/vsphere-client/' = 'vSphere Web Client'
	'http://stga-01a.corp.local/account/login' = 'FreeNAS'
	'https://psc-01a.corp.local/websso/' = 'Welcome'
	}
# IP addresses to be pinged
$Pings = @(
	#'192.168.110.1'
)

This inventory of your lab is used by the LabStartup script to report "readiness" of your vPod, both to the user and upstream to our monitoring system. It is therefore very important that you understand which services must be running in your lab, along with their dependencies and the remediation options available if the lab does not initialize properly.

The base script has 6 main steps it goes through to validate the pod. There is a "step 0" that is only used for automation testing -- please do not remove or comment out the following line:

If( Start-AutoLab ) { exit } Write-Host "No autolab.ps1 found, continuing."

Once the script reaches the end, it will report READY and terminate. If the pod exceeds the $maxMinutesBeforeFail counter, the script will mark the pod as FAILED and terminate. The default value is 30 minutes, which is a pretty long time to wait for a pod to initialize. If the script times out or fails, our pre-pop monitoring application will let us know and we will take a look at the pre-pop.
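The timeout behaves roughly as follows. This is a simplified sketch, not the script's actual implementation; the variable names mirror the documented `$maxMinutesBeforeFail` setting, but everything else is illustrative:

```powershell
# Simplified sketch of the LabStartup timeout logic (illustrative only --
# the real script's structure and variable names may differ)
$startTime = Get-Date
$maxMinutesBeforeFail = 30   # default documented above

Function Test-Timeout {
	# Called periodically between validation steps
	If ( ((Get-Date) - $startTime).TotalMinutes -gt $maxMinutesBeforeFail ) {
		Write-Output 'FAILED'   # surfaced to the user and to monitoring
		Exit
	}
}
```

The practical takeaway: every check you add to the script consumes part of that 30-minute budget, so checks should poll efficiently rather than sleep for long fixed intervals.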

 

Building Blocks: Creating your vPod


While we provide many base components to help you get started with vPod development, there are very likely going to be additional components you require to tell your story. The section on vPod Remote and Internet Access describes how to attach your vPod to the Internet and access various resources to pull components into your vPod.

Internet-based services like Dropbox can be very helpful here, but are probably not the best choice for restricted material, and won't help you if you need to deploy your solution as a Layer 1 VM.

A Quick Note on BuildWeb

Buildweb can be a challenge, not least because our development environment (HOL-DEV) runs within OneCloud. This means that VPN access is required to pull bits from the buildweb environment into a vPod. We recommend against installing the VPN software onto the Main Console; if you absolutely must do so, you must remove that software prior to checking in the pod. The recommended approach is to use the VPN software in your group's Staging vPod.

If there is something that you need from buildweb, and you are having issues, please do not hesitate to ask the core team for assistance.


 

Attach ISO images via VMRC

 

There are times when you need to attach an ISO image to a Layer 1 VM in the environment. In these cases, you have a few options, and the one you choose has to do a lot with the size of the data, your Internet connection characteristics, urgency, and your level of patience.

  1. Attach ISO via VMRC on Windows client
  2. Upload ISO to vCD Catalog

If you need to attach an ISO image to a VM that is already running a guest operating system, and you have the ISO image on your desktop, the simplest method is probably to open a VMRC console and attach the ISO image via the VMRC connection.  However, this is only possible if your VMRC client runs on Windows. This option is not available on OS X.

To use this method, open the VMRC to the machine you want to use, click the CD icon in the upper right (1), click the radio button next to the blank space and type the path (2), then click the OK button. Unfortunately, there is no "Browse" option, so you need to type the whole path to the ISO image on your local machine.

If your data set is large (bigger than a CD-ROM), your Internet connection is slow or unreliable, or you have to do this more than once, this may not be the mechanism for you. It is very easy, though.

When you're finished, just click the CD icon in the VMRC, click the None radio button, then OK to disconnect. Note: The ISO will also automatically disconnect when you close the VMRC.

Remember that your group's Staging vPod is local to HOL-Dev and is the best place to utilize the techniques discussed here.

 

 

Upload ISO to Catalog

 

If you have a large ISO image or you need to perform an installation into multiple machines, it is generally more efficient to upload the ISO to the HOL-Dev-Resources catalog and attach it from there. In general, this is also the preferred option if you want to boot from an ISO image -- to install a new version of ESXi, for example.

Navigate to the HOL-Dev-Resources catalog (3), then click on the Media tab (4).

STOP

Before you upload that ISO, check to see if what you need is already in the catalog. After many years of doing this, we have found that many teams use the same software and build versions. You can save yourself a lot of time if someone else has already uploaded the ISO and you just need to attach it to your VM.

PROCEED

If you have checked the catalog and do not see what you need, proceed to upload your ISO: click the upload icon (5) to launch the uploader.

Remember that your group's Staging vPod is local to HOL-Dev and is the best place to utilize the techniques discussed here.

 

 

No More Java!

 

In prior versions of vCloud Director, a Java applet was used to upload media. As new versions of Java were released, different versions began causing "weird" issues in various browsers on a mix of operating systems. This was not pleasant.

As of vCloud Director 5.5, this Java applet has been replaced with OVFTOOL, which has its own quirks, but is much more reliable and performs better than the applet. It does, however, require that the vCD client integration plugin has been loaded so that the OVFTOOL binaries are present on your system.

 

 

Uploading

 

In general, you want to specify a Local file as the source (1). Click the Browse button (2) and find the ISO you want to upload. If you wait a second after selecting your file, the Name (3) will be automatically populated with the file name of the ISO. This is a good choice since it helps everyone: others who might need the same software can see that it has already been uploaded and it saves time. Finally, click the Upload button (4) to begin the process.

 

 

Attaching the ISO

 

Once the upload has completed, attaching the ISO is very simple: open your vPod to the vApp Diagram view, click on the VM that you want to target for attachment (1), then click the CD icon (2) in the toolbar.

In the resulting dialog, select your uploaded media and click the Insert button to connect.

You may notice that the little grey CD icon under the miniature console screen on the VM turns blue. This is a vCD 5.5 indicator that media from the catalog is attached to that VM. Unfortunately, it does not indicate which image has been attached, but it is better than nothing.

 

 

Disconnecting the ISO

 

IMPORTANT: When you are finished using the media, please remember to disconnect the ISO. Failure to do so will result in failed export and replication to the hosting cloud(s).

As of vCD 5.5, you can see that an ISO from the catalog is attached because the little "CD" logo under the miniature console turns from grey to blue. As long as all of these icons are grey, you should not have any attached ISOs.

To disconnect the ISO, right-click on the VM (1) and select Eject CD/DVD (2) from the context menu.

 

 

Layer 1 VMs - OVF

vCloud Director 5.5 allows uploading of virtual appliances that are in the OVF format. The upload follows the same process as for ISOs, but into the HOL-Dev-Resources catalog with the vApp Templates tab selected instead of Media.

Once the appliance has been successfully uploaded to the catalog, you can add its component VMs to your vPod.

 

 

Adding VMs to Your vPod

 

To add a VM from a catalog item into your deployed vPod, click on the gear icon (1) in the vApp Diagram view and select Add VM... (2) from the context menu.

 

 

Find the VM you want

 

Locate the VM you want to add by selecting My organization's catalogs (1), selecting Name (2), entering part of the VM's name (3), and clicking the Refresh button (4).

Your VM should show up in the middle list. Click on the VM you want (5), then click the Add button (6) and the Next button (7) to continue.

Click the Next button on the Configure Resources screen as well.

 

 

Configure Networking for the added VM

 

Take a moment to remove the "-0" from the end of the Computer Name -- this gets added automatically to help prevent duplicates, but is generally unnecessary unless you have multiple copies of this VM in your vPod.

On the Configure virtual machines screen, change all Networks to your vAppNet name and change all IP Assignment to DHCP.

Note that the DHCP setting here does not enable DHCP but indicates to vCD that it should not mess with the VM's networking configuration.

Click the Finish button to complete adding the VM to your vPod. Wait a few minutes while the VM is copied into your vPod, then you should be ready to go.

 

 

Layer 1 VMs - OVA

vCloud Director 5.5 is being used for VMworld 2016 development. This version allows direct importing of appliances in OVA format.

 

 

Layer 2 VMs

If you need to add an appliance to your vPod and want to run it nested -- as a VM on the vESXi host in the pod -- you can wrap the OVF or OVA in an ISO and upload the ISO to the HOL-Dev-Resources catalog. From there, you can mount the ISO to the Main Console and import the machine into the lab via vCenter.

Another option is to add a scratch drive to the Main Console, wire your vPod up for Internet access, download the OVF/OVA to the scratch drive, and import to your vCenter from here. Please remember to remove the scratch drive from the Main Console before the next Milestone Check-in. We don't want to replicate your scratch files.

If you need assistance with either of these processes, please ask. We are happy to discuss the most efficient way to get files into your vPods.

 

 

Creating an ISO image from a folder

On a Mac OS X machine, you can easily create an ISO image from a folder by going to the Terminal and executing the hdiutil command:

$ hdiutil makehybrid -o MYSTUFF.iso MYSTUFF -iso -joliet

This will create the MYSTUFF.iso file in the current directory with the contents of the MYSTUFF folder.

From a Windows machine, we usually use something like MagicISO or InfraRecorder to create ISO images. Your group's Staging vPod includes InfraRecorder.

 

vPod Milestones: Seeding the Clouds


In order to efficiently replicate vPods between geographically disparate clouds, we leverage a form of delta replication. This has allowed us to provide more time for development and has resulted in significant time savings, especially for staging the pods to the datacenters in EMEA and APAC. This mechanism relies upon the presence of similar data in the target cloud location. The initial "draft" data is an export of your template, but then we leverage your incremental development versions throughout the development cycle to reduce the time required for the final replication.

To receive maximum benefit from this mechanism, we have established a few milestones along the vPod development path:

  1. Draft
  2. Release Candidate

Beginning your lab development using one of our base vPods is required.  We copy the base pods to all of our hosting clouds and these can be used as the initial "seeds" for all pods created from them. We call this process "cloud seeding." This is similar to the concept of WAN acceleration or de-duplication. For some pods, the base pod is as much as 95% similar to the final pod. For others, it may be closer to 20%. Either way, leveraging the base pod saves replication time.

The first check-in we ask for is a Draft check-in. Software has been installed and configuration work has been ongoing. We ask for a check-in at this time so that we have a delta that is closer to the final. Again, this gives us as much time as possible to copy the potentially large volume of data that has changed during the configuration of your vPod. The Draft vPod is typically feature complete and is ready to be used for peer functional testing. Do not wait for user-attended scale tests to begin functional testing.

Release Candidate is the version that will be presented at the conference and at user-attended scale test events unless a critical issue is discovered. The replication and preparation of the final versions still consumes time -- at least 12 hours per vPod for minimal changes.  If possible, the Core Team will schedule time to review your vPod prior to testing events and VMworld.

 


 

Lab Configuration Document (LCD)

The Lab Configuration Document (LCD) is a high-level schema for your vPod. The core team uses this document for capacity planning and resource allocation activities. When we are playing "resource Tetris," we need to know the size and shape of each of the pieces involved.  The LCD can be found on SharePoint. Use the LCD that corresponds to the base vPod template you will use to begin developing the vPod for your lab.

The LCD also provides critical information about which products and versions you are using and which nested VMs are present. It is important to understand which vESXi hosts will be used for the nested VMs. Virtualized ESXi hosts must not be over-subscribed the way that physical ESXi hosts can be. Determining when to nest, or not to nest, is tricky. The LCD helps the Core Team assist with that decision, and your lab's performance depends on it.

An optional Hybrid LCD provides information for vPods that leverage external resources.  The core team needs to understand what firewall rules to establish and what callbacks to create in the VMware Learning Platform.  This is only required if your lab requires outbound Internet access.  The Hybrid LCD is available on request.  (Just ask the Core Team.)

 

 

Catalogs and Resources

In the HOL-DEV environment, we maintain several resource and utility catalogs.

Resource Catalogs

HOL-Base-Templates - Location of the base templates for beginning new vPod development, along with media, OVFs, and template VMs for building vPods. All labs for the development cycle must begin with one of the base templates. These resources are provided by the Core Team.

HOL-Dev-Resources - You may upload media (ISOs and OVFs), but please check first. Someone else may have already uploaded what you need. Please use the proper naming convention in order to facilitate item location by others and prevent duplicates from consuming valuable space. A proper name preserves the original file name which includes the product, version and build number.  If the product, version and build you need is already uploaded, you save time and it saves HOL-Dev storage space.

HOL-Released Labs - The vPods for labs currently running in the public HOL are available in this catalog for reference purposes. If you are unable to find a pod with components that you require, please contact the core team and we will get it transferred for you. Expect at least two days for the transfer to occur because the originals are currently stored in a different section of OneCloud.

Additional catalogs - there are no additional catalogs available in this portion of OneCloud. If you need something, please ask.

Utility Catalogs

_WorkInProgress - Use this catalog to store your incremental progress. Shut your pod down cleanly, check it in, then test deployment to ensure it has been captured properly. We recommend doing this at least once a week or after major changes to the vPod to help minimize potential data loss. Please store at most 2 versions.

HOL-Staging - Check your vPod in here at required ‘milestone’ intervals: Draft and Release Candidate. At most, one version of each vPod should be in this catalog at one time. This is the catalog used for replication to the hosting clouds. Please unwire your vPod before capturing to HOL-Staging. Note that HOL-Staging can also be used to consolidate your master working vPod every 10 versions to control disk chain lengths. Move the vPod from HOL-Staging to _WorkInProgress when the capture is complete unless this is a milestone check-in.

Note: Captures are much faster to _WorkInProgress and HOL-Dev-Resources because HOL-Staging does not use "Fast Provisioning".  These catalogs, plus HOL-Dev-Resources, are the only catalogs to which you have write access.

 

 

vPod Naming

For deployed vPods, use something that is meaningful to you and your team. It is often helpful to include a portion of the lab SKU, a version number and your user ID in the name. For example, I might use dbaer_1808-v0.1 when working on a new HOL-1808 vPod.

For incremental check-ins to the _WorkInProgress catalog, please use the following naming convention:

Lab SKU, all uppercase, followed by a version number. For example, HOL-1801-v0.2

This helps everyone on your team understand which checked-in version is the most current. Your Lead Captain will identify the current Master Working vPod version.

Use "dot" versions in development from the last production version.  Use version suffixes -v0.1, -v0.2...-v0.10, -v0.11, -v0.12...-v0.99, -v0.100, etc. Development version labels will be used through VMworld US. Once the vPod is actually published in production on the HOL public portal, the Core Team will rename the vPod to -v1.0 and increment from there. The Core Team will record your last development version in the Description field for the vPod.

When moving your vPod into the HOL-Staging catalog for a milestone check in, please use the following naming convention:

Lab SKU, all uppercase, followed by the full version number:

Use of this naming convention ensures that we have the latest version of your vPod replicated to the hosting clouds and available for the testing events. Failure to follow this naming convention may result in the vPod not being replicated or presented properly.

At times, the Core Team will need to make minor updates in-place as "hot fixes".  We record those changes in our documentation for your vPod and will usually append a letter to the version number.

Once your vPod is approved for final publishing, the Core Team will assign a production version number which will be used in the HOL-Masters catalog throughout OneCloud.  In most cases this will be the production version designation with your development version in the description.

 

 

Capturing your Milestone vPod to a vCloud Director Catalog

 

Now that you understand the purpose of the different catalogs and the naming convention, it is time to capture or add your powered off vPod to a catalog in vCloud Director.

Your vPod is UNWIRED and captured with the Make Identical option, right?

The diagram above is the Milestone Workflow. Chain length is the enemy. If the chain length is too long (greater than 15), your capture to HOL-Staging may timeout and fail. YOUR WORK IS LOST. This has actually happened so this is not an idle threat. The Milestone Workflow is designed to reset the chain length and verify your captured vPod has a valid start state.

If the same vPod name is used in more than one catalog, Core Team automation will break. This is intentional: a duplicate name indicates a copy rather than a move to HOL-Staging. Your milestone vPod must be in HOL-Staging ONLY and in no other catalog. If the Core Team publishes the "wrong" vPod, that is on you. No one wants to "guess" whether the vPod to use is the one in HOL-Staging or _WorkInProgress.

 

 

Begin the process

 

  1. Select the vPod you wish to add to the catalog and right-click to open the "Actions" menu.  This is also available using the "Actions" icon above.  (arrow)
  2. Choose "Add to Catalog..." on the menu

 

 

Choose the catalog

 

  1. Select the catalog from the drop down.  The default catalog is _WorkInProgress and is the correct choice.
  2. Always use the _WorkInProgress catalog for the initial capture, including the release candidate vPod.

 

 

Set the name, description and "Make identical copy" option

 

  1. Set the name for the vApp Template.  Retain our development version "v0.99".  (And the next version would be v0.100 if you're curious.)
  2. Include any description you like.  The Core Team will change this description when published and replicated throughout OneCloud.  We use the description from SharePoint (limited to 256 characters)
  3. Choose the "Make identical copy" option.  This is not the default.
  4. Click the "OK" button.

NOTE: If "Make identical copy" is not selected, your vPod is WORTHLESS. It will not deploy properly.

 

 

Re-deploy the newly captured vPod from the catalog

 

Now you need to "round trip" the capture.  That is, add your newly captured vPod from the catalog into "My Cloud" and then perform a quick test ("smoke" test) to verify.

This means you deploy out as "HOL-1685-v0.99-smoke".  Do NOT wire it up.  

 

 

Check for "smoke"

 

Power up the vPod.  Open a VMRC to the Main Console. Review C:\hol\labstartup.log and verify that your vPod achieves "Ready" state in less than 20 minutes.  Once your "smoke" test is proven good, DELETE that vPod. Do NOT use it any more; its chain length is too long.
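To watch startup progress live during the smoke test, you can tail the log from a PowerShell window on the Main Console. This is a convenience, not a required step:

```powershell
# Follow the LabStartup log as the vPod initializes (Ctrl+C to stop).
# Requires PowerShell 3.0 or later for -Tail.
Get-Content C:\hol\labstartup.log -Tail 20 -Wait
```

The last lines of the log show the current check being performed, which makes it easy to see where a pod is stuck if it never reaches "Ready".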

 

 

MOVE your Final RC to HOL-Staging

 

Now that your "smoke" test is good, MOVE your vApp Template from _WorkInProgress to HOL-Staging.  Do NOT copy.

There must be one and only one milestone vPod for your lab.  Your next batch of updates must begin with a new deploy from HOL-Staging.  (Don't forget to choose the "DMZ" OrgVDC when you deploy; it is not the default.) Your vPod disks are consolidated and this safeguards your work. The first deploy from HOL-Staging to the DMZ virtual data center will take longer as the initial disk shadows are created.

There is NO reason to have a copy of this vPod in _WorkInProgress.  It is a waste of space. It will break our automation and it is confusing.  Deploy from HOL-Staging.

 

 

vPod Checklist

 

You must verify and complete the vPod Checklist using your Final vPod.  The vPod_Checklist spreadsheet can be found on SharePoint:

HOL2017-vPod-Checklist

Enter the names of the Lead Captain and Principal performing the vPod checks. Enter an "X" after each of you has verified that the vPod complies with each of these standards. The spreadsheet includes additional details and other important reminders. Look for vPod Checklist training materials (PowerPoint deck and WebEx recording) on SharePoint.

Please complete the user names and passwords tab as well as the specific shut down instructions for your vPod. The checklist includes a sample shutdown order that must be customized for your vPod.

Upload your completed vPod_Checklist to your lab's Roadmap entry on SharePoint.

 

 

Run the vPodChecker to automate some checks

 

The core team has been working on a tool to check some aspects of the vPod's compliance against the checklist.

Automated vPod Checking

Get the latest version if necessary

Edit C:\hol\Tools\vPodChecker.ps1 and copy the $URLs entries from your LabStartup.ps1.

Choose your version of PowerCLI (line 80 or 83) so that the modules can be loaded.

Wait for your vPod to show Ready. (The script may not function properly if your vPod is not Ready.)

Start PowerShell

PS C:\> C:\hol\Tools\vPodChecker.ps1

Note that the script will attempt to correct the uuid.action and keyboard.typematicMinDelay values. In the event that it cannot do so, you may need to correct them manually.

 

 

Core Team vPod Review

 

The Core Team will review your vPod documents and walk through your vPod. We need to be certain that licenses, SSL certificates, passwords and password policies will not expire prematurely. All URLs must use valid SSL certificates unless replacing the SSL certificate is part of your lab. Please note that in your vPod Checklist.

When run in different clouds, L2 VMs can sometimes trigger a "move or copy" question and refuse to start until that question is answered. Adding uuid.action = "keep" to the advanced configuration options prevents that question. Without the keyboard.typematicMinDelay setting, Linux L2 VMs and appliances will exhibit keyboard repeat issues on the console.  This is very frustrating for students.
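If you need to apply these settings manually, a PowerCLI sketch along the following lines should work. The VM name and vCenter FQDN are examples from the base pod, and the typematic delay value is the commonly documented one; verify both against your own vPod before relying on this:

```powershell
# Connect to the vCenter that manages the nested VM (example FQDN)
Connect-VIServer vcsa-01a.corp.local

# Example L2 VM name -- substitute your own
$vm = Get-VM 'linux-desk-01a'

# Suppress the "moved or copied" question on first boot in a new cloud
New-AdvancedSetting -Entity $vm -Name 'uuid.action' -Value 'keep' -Confirm:$false -Force

# Prevent keyboard auto-repeat issues on Linux consoles
# (2000000 microseconds is the commonly documented value)
New-AdvancedSetting -Entity $vm -Name 'keyboard.typematicMinDelay' -Value 2000000 -Confirm:$false -Force
```

The vPodChecker script attempts these corrections automatically, so treat this as a fallback for the cases where it cannot.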

Due to the nature of HOL, Linux machines will perform file system checks on every deploy once the re-check time has passed. This slows boot and adds considerably to infrastructure I/O. Linux file system checks must be disabled.

When run in different clouds, processor changes can cause Windows 7, 2003 and 2008 to trigger a hardware detection request for reboot.  The skipRestart utility dismisses this dialog. Be sure to enable autologin and run the skipRestart from the startup menu.

When run in different clouds, Windows can sometimes detect the network as new which could trigger firewall rules if not disabled. Disable the Windows firewall for all network profiles.

 

 

Core Team vPod Review continued

 

Due to the nature of HOL, it is important to set the date and time using NTP. The vPodRouter provides NTP on ntp.corp.local (192.168.100.1). Some products require their own time settings such as vR OPs HVM. Horizon recommends using VMware Tools. Please note exceptions to HOL standard NTP settings on your vPod Checklist.

The Core Team reviews your LabStartup script and verifies that more than minimal checking is implemented. All machines and URLs for your lab need to be checked. Use the LabCheck test to avoid potentially disruptive actions after the initial vPod boot.

Please do not alter your vCloud Director vApp Startup timings and delays. We have learned that setting fixed delay times may work in HOL-Dev but will most likely NOT work at VMworld when there is considerable load on the infrastructure.  Use the LabStartup script to test for dependencies and then continue once confirmed. As a general rule, Windows Active Directory must be available before vCenter for AD logins to work and vCenter needs to be available as NSX Manager starts. These timings will be tested in the base vApp Templates for HOL.
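A dependency test in LabStartup is a polling loop rather than a fixed sleep. The sketch below illustrates the idea with the AD-before-vCenter dependency mentioned above; the Test-TcpPort function and port choice (LDAP, 389) are illustrative assumptions, not the script's actual helpers:

```powershell
# Illustrative helper: returns $true once a TCP port accepts connections.
# LabStartup ships its own similar functions; names here are assumptions.
Function Test-TcpPort ($server, $port) {
	Try {
		$socket = New-Object Net.Sockets.TcpClient($server, $port)
		$socket.Close()
		Return $true
	} Catch {
		Return $false
	}
}

# Wait for Active Directory (LDAP on the Main Console) before
# running checks that depend on AD logins
While ( -not (Test-TcpPort 'controlcenter.corp.local' 389) ) {
	Start-Sleep -Seconds 10
}
# ...now proceed with vCenter and other AD-dependent checks
```

Because the loop continues as soon as the dependency responds, this approach works both in lightly loaded HOL-Dev and under heavy VMworld load, where a fixed delay would be either wasteful or too short.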

Your vPod should go to Ready state in 20 minutes or so. If longer, please ask the Core Team to take a look.

Be sure to attend to the "fit and finish" of your vPod. Remove your scratch disk from the Main Console. Don't increase the C: drive unless needed. Don't copy in large files and then delete them; this creates "dirty" blocks that increase replication times. Remove all of your temporary files.

Browser issues are common due to the nature of HOL. A browser update could break your product's UI and start a cascade of upgrades to correct it. Some plugins have expiration dates, but we need to allow them to always run. The Chrome link on the Main Console desktop includes the command line argument to allow running of outdated plugins. If you are using Firefox, the per-plugin option you want to set is "Always Activate" and is available on the "about:addons" page.

Disable any "report usage information to the developer" options in your products. This adds to network traffic and is not needed.

Always use static IPs unless you are demonstrating DHCP or Dynamic DNS. Using DHCP and referencing the IP address you happen to get in HOL-Dev in your lab manual will most likely be wrong at some point in the future.

 

 

Core Team vPod Review continued

 

Different clouds present different types of storage. For Virtual SAN labs, you MUST explicitly declare ALL disks as either SSD or HDD.  And if all flash, set flash capacity. Core Team can assist.

Unless your lab requires external connectivity, test your lab unwired using the VMRC only. Some products have external dependencies that must be addressed for use in isolated HOL vPods.

To assist the Core Team in supporting your vPod, enable SSH key authentication where possible and create PuTTY session entries for ALL machines in your vPod, whether used in the lab or not. Use the account@server.corp.local format in your PuTTY sessions.  Please note exceptions in your vPod Checklist.  The vPodRouterHOL is an exception: it requires neither SSH key authentication nor a PuTTY session.

 

 

Core Team vPod Review continued

 

VMware no longer has the same agreement with SuSE that it once did.  To be legally safe, the Core Team prefers to avoid SLES except for VMware appliances that have not yet transitioned to Photon.

Please work within the existing IP ranges. The new base templates include routing for storage networks. Do not add secondary IPs to the Main Console.

L2 VM reservations do not accomplish anything and can cause issues. The vESXi host has no ability to reserve resources in the backing infrastructure. Different clouds have different processors and a reservation in one cloud may not be able to be met in another cloud. Your L2 VM may not boot and your lab will break.

The Core Team will ask you about the I/O requirements of the lab. Are there L2 VM deployments, vMotions, etc.? How big is the VM? Remember that a 5-minute operation in HOL-Dev can easily take 30 minutes or more under load at VMworld.

If the vPod includes vCloud Director, storage leases need to be set to never expire for both templates and vApps. Otherwise, once the storage lease has expired, the L2 vApp will not be available and your lab will break.

Horizon uses an expiring one-year SSL Certificate for an internal tunnel with the broker. If proper steps are not taken (as documented), your Horizon application will cease to work one year after installation. Note that this is due to the nature of HOL. In production, the SSL Certificate will renew automatically as long as the product continues to run.

 

 

"Emergency" Repairs

 

If all goes according to plan, the milestones outlined previously have provided us with enough time to get all of the vPods replicated from the HOL-DEV environment to the hosting clouds. Sometimes we cut it pretty close, but we have usually gotten it done.

When things go wrong after the process has been followed -- maybe a bug was found in the beta code deployed into the lab, or a license is set to expire in the middle of a conference -- we have to look at our options. Typically, there are three:

  1. "Fix" the manual to either direct the user around the issue or alert them that there is an issue and explain the workaround.
  2. Repair the vPod in each of the hosting cloud(s) and re-shadow, then re-publish in VMware Learning Platform (VLP)
  3. Repair the vPod in HOL-DEV and reprocess completely (export, replicate, import, shadow, publish)

If the issue is not catastrophic, Option 1 is generally preferred after the "drop-dead" date for a show. This option typically takes the least amount of time since it just requires updating and re-publishing the manual. The time required here is on the order of minutes to hours.

If the issue affects vPod functionality but requires a simple fix -- installing a license key, marking an account as non-expiring, or resetting a password -- we may opt to make the change in place in each of the hosting clouds (Option 2). For smaller events, there may be only one hosting cloud, so this is easier than a show like VMworld where there may be as many as twelve hosting clouds. When we do this, we also have to ensure that this change makes it back into the HOL-DEV environment so that it does not hit us again. The time required here is on the order of a half to a full day for the return to service and another day to "reverse process" the vPod back to HOL-DEV.

If the issue affects vPod functionality but requires "major surgery" or Internet access, we're pretty much stuck with Option 3. In this case, it is almost a "back to the drawing board" option whereby the last good version of the vPod is deployed in HOL-DEV, cleaned up, shut down, checked in, exported, then replicated, and imported to each hosting cloud, shadowed across all OrgVDCs, and finally published in VLP. The time required here is generally on the order of days. Sometimes it can be done in one day if there is not significant load on the system, but we can't count on that.

Note that all of the time estimates above are optimistic estimates which assume there is only one "emergency fix" in flight at a time. This is why we appreciate everyone's attention to detail when creating and vetting these labs. The sooner we find a problem, the more time we have to correct it.

 

 

Help Us Help You

Over the years, we have collected some tips, tricks, and requests that should help make your vPod development experience more pleasant. That information is collected in this section.

 

 

 

Beyond

Once your vPod has been checked in, the core team will replicate it to the OneCloud hosting environment for VMworld. After replication, your vPod will be published via the VMware Learning Platform alongside your manual and should be ready for the vetting process. (The core team will copy your pod to an adjacent cloud org and publish it to a restricted pre-release catalog for more immediate testing.)

At this point, please run through your entire lab while following the manual, note any discrepancies, and address them quickly.

Thank you for your hard work and the commitment of your time, experience, and knowledge. If there is anything you need to be successful, or any suggestions to help us streamline this process, please do not hesitate to let us know.

 

vPod Shutdown Guidelines



 

vPod VMs PowerOff instead of Shutdown

 

Our vPods are defined with “Power Off” rather than “Shutdown Guest OS” VM stop actions in order to facilitate a quick cleanup of environments that are deployed during events. There is no need to wait for the VMs to shut down gracefully since they are deleted immediately following the vCD vApp Stop procedure.  

During your vPod development cycles, you will need to shut down your vPods in order to check them into the WorkInProgress or HOL-Staging catalogs to preserve progress and enable replication. We strongly recommend against simply stopping the vPod using the "Stop" action in vCD: this produces "crash-consistent" virtual machines and nested virtual machines, and teams have lost virtual machines in the past by stopping a vApp while VMs were still running.

Storage inconsistency is the leading cause of "vPod weirdness." Shut down all of your vESXi VMs gracefully from vCenter and wait for vCenter to report that they are disconnected before proceeding as documented below.

So, please take the time to shut your vPod down cleanly in order to maintain the integrity of the virtual machines and nested VMs. We have documented the preferred order and process for shutting down the components of a base vPod and offer guidelines for handling the virtual machines or appliances that you may have added. See the following step for more information.

 

 

Mind the dependencies

We have found that it helps to think of a vPod as a self-contained datacenter. In a real environment, for example, the storage would never be shut down before all of the machines that use it -- at least, not intentionally. The same rules apply here and are described in the following recommended power-down order:

  1. Top-level user applications
  2. View Manager
  3. All nested (Layer 2) virtual machines (sometimes called vVMs)
  4. vCloud Director cell(s)
  5. NSX Manager(s)
  6. vESXi host(s)
  7. PSC(s) - if you have an external PSC in your pod, shut it down before shutting down the vCenter, just in case.
  8. vCenter(s) - wait until all attached vESXi hosts show Disconnected in the Web Client
  9. Storage appliance (if present)
  10. Any remaining Layer 1 virtual machines (except the vpodrouter)
  11. Main Console
  12. vPodRouterHOL (right-click and choose shut down guest OS)

Many of these components have different shut down processes and ways that they may be shut down cleanly. This document offers our suggestions, but the most important take away is that you should always shut down your vPod cleanly in order to preserve its integrity.

You must document your process on the Shutdown tab of your vPodChecklist.

 

 

WARNING: Attached Media

 

BEFORE shutting down your vPod, EJECT attached media.  Look for the blue disk on the VM icon.

Media can get "stuck" if the VM is shut down while still inserted.  Always eject the media while the VM is powered on.

If one of the VMs within your vPod has attached media, attempting to check it in to the catalog may fail. Even if the check-in succeeds, the vApp will fail when deploying across OrgVDCs as we do at VMworld or in the public HOL. When it does, the error message provided is spectacularly unhelpful:

Unable to perform this action. Contact your cloud administrator.

To prevent this issue, ensure that you disconnect any CD/DVD or Floppy media from your VMs once you have finished using them. If you are unsure about a possible media connection, it does not hurt to go through each of your VMs and execute the “Eject Media” operations whenever you are ready to shut down your vPod.

To do so, right-click on the virtual machine and select Eject CD/DVD or Eject Floppy if you happen to be using floppy disk images.

 

 

vPod Shutdown Process

The following provides some detail regarding the shutdown process for common components of the base vPods.

REMEMBER TO EJECT MEDIA BEFORE PROCEEDING. Failure to do so can result in media being "stuck" in the VM.

 

 

View Manager

In labs that contain a View Manager, it should be shut down first. If not, it will attempt to keep the layer 2 desktop virtual machines powered on and prevent a clean shutdown of the vESXi hosts.

View Manager may be installed as a layer 1 VM or it may be layer 2 (nested). Opening a console via RDP or VMRC and selecting Shut Down from the Start menu is the simplest way to shut it down.

 

 

All Layer 2 VMs

 

Any VMs running on the vESXi hosts in your vPod should be shut down following whatever process works best for you and your software.

The Core Team provides various Linux and Windows machines on ISO media, all with the VMware Tools installed. For Layer 2 VMs, selecting Shut Down Guest OS from each VM's context menu is usually acceptable, although you may prefer to open a console to each powered-on virtual machine and shut it down according to the guest OS.

Ultimately, the call is yours. As mentioned before, we don’t recommend simply pulling the plug on the host.

 

 

vCloud Director Cells

If a vCloud Director cell is included in your vPod, you can shut it down at this point. The easiest way to shut down the cell is to open the console, log in as root, and execute init 0 or any other graceful Linux shutdown command you prefer.

Note that this is not a recommended way to shut down a production vCD cell, but the cells in the vPods are single-user and not busy; there is no need to suspend the scheduler and wait for all tasks to complete prior to shutting them down.  You may do this, however, if you prefer.

 

 

NSX Shutdown Process

NSX components should be shut down in the following order:

  1. Any NSX Edges, including ESGs and DLR Control VMs. Highlight the VM, go to Power, select Shut Down Guest.
  2. Validate all Edges are shut down.
  3. Shut down the three NSX Controllers. Highlight the VM, go to Power, select Shut Down Guest.
  4. Validate all Controllers are shut down.
  5. Shut down the NSX Manager. Highlight the VM, go to Power, select Shut Down Guest.
  6. Validate the NSX Manager is completely shut down.

 

 

 

vESXi Hosts

Once all nested virtual machines on a host are shut down, the host can be shut down. Note that putting the host into Maintenance Mode is a bad idea unless you expect it to come up that way when your lab starts.

 

 

Shut down using Web Client

 

While opening a console to each ESXi host and shutting it down from there works, it is generally faster to perform the shutdowns from the managing vCenter. Odds are you'll already be there shutting down the Layer 2 VMs. Just use the vSphere Web Client on the Main Console.

Right-click each host and select Shut Down, or select all of the hosts and do them all at once.

 

 

Answer the prompt

 

Complete the form -- you can leave the "Enter a reason here" text alone if you like.

Note that if you select multiple hosts, the dialog has an additional radio button that toggles between shutting down ONLY hosts in Maintenance Mode (the default), or ALL hosts. You want the second option.

Note that this step also quiesces Virtual SAN if present.

 

 

Wait until all vESXi hosts show disconnected

 

If you do not wait for hosts to show disconnected before shutting down vCenter, the hosts will look like they are connected at vPod boot, but you won't be able to start any nested VMs. This is because vCenter sees the hosts as connected when the hosts themselves are actually not. Correcting this requires disconnecting and re-connecting all of the vESXi hosts to synchronize the connection state.

It is MUCH easier to simply wait until all the vESXi hosts report disconnected in vCenter. If you don't do this, you will be re-capturing your vPod.  You have been warned.

 

 

vCenters

REMINDER: Please wait until each of the vESXi hosts shows Disconnected before shutting down the vCenter server. We have seen vESXi connectivity issues when the pods start if this process is not followed.

We typically have two kinds of vCenters available in the vPods: the vCenter virtual appliance and one built on Windows 2012.

 

 

vCenter Server Appliance

 

The vCenter appliances include VMware Tools, meaning the VCSA can be shut down by highlighting the VMs in the vCD interface's vApp Diagram tab, right-clicking, and selecting Shut Down Guest OS. No login is required.

You could SSH in as root and shut down using init 0.

You could use the VCSA DCUI and follow the prompts.

Just wait for them to be completely down before moving on to the next component.

 

 

Windows 2012 vCenter

 

Shutting down the Windows 2012 vCenter involves logging into the console as Administrator, running sconfig.cmd from a command prompt, then entering option 14, pressing Enter, then confirming the request by clicking the Yes button in the resulting dialog.

 

 

Storage Appliance - FreeNAS

Before turning off the storage, ensure that all vESXi hosts have been shut down completely.

 

 

Access the Admin UI and Login

 

Open a web browser and navigate to the storage appliance management interface. The base vPods contain a link to this interface on the toolbar (1). This link should be removed prior to providing the vPod to end users.

The login screen should automatically pop up to the center of the screen. If it does not, click on the Log In button in the upper left corner under the FreeNAS logo.

Log in as the user root with the password VMware1!

Note: If you are working with a vPod that has already been prepared for deployment and is missing the shortcut to the storage appliance, the main URL you can use is http://stga-01a.corp.local/

 

 

 

Click Shutdown in the lower left corner of the pane on the left. Note that you may need to scroll down or collapse other items in this pane to find the link.

 

 

Request Shutdown

 

Click the Shutdown button to confirm that you really want to stop the appliance. Remember that all of your vESXi hosts should be shut down before you do this.

 

 

Log out

 

When this screen shows up, you can close the browser window. It takes a few minutes for the appliance to completely shut down, but you can check for the "black screen" on the VM in your vPod's vApp Diagram view.

 

 

Any Remaining L1 VMs

 

At this point, all nested VMs and their hosts are shut down, as well as the shared storage and any vCenters present in the vPod. Take a look at the vApp Diagram in vCloud Director and see which VMs are still powered up. It should look something like the picture here.

If you have more "light blue screens" than the Main Console and the vpodrouterHOL, then you should follow the appropriate procedures to shut those VMs down before proceeding.

 

 

Main Console

 

The Main Console is just a Windows 2008 R2 server machine.

From the console, choose Shut Down from the Start menu.

 

 

vpodrouterHOL

Typically the last device remaining, the vpodrouterHOL does not have a traditional interface and gets powered off by selecting the VM in vCloud Director and choosing Shut Down Guest OS.

 

 

vApp Stop

 

Once all of the virtual machines in the vPod have been shut down, it is necessary to stop the vApp itself. This is accomplished by clicking on the vApp’s Stop widget in the vApp Diagram view.

This widget powers off any running virtual machines and marks all of the virtual machines "Powered Off." You will be prompted to ensure this is your intent. Click Yes in the dialog to indicate that you wish to Stop your vApp.

 

 

The End

Thank you for reading. Now you can safely add your vPod to the vCloud Director Catalog for safekeeping.

Team vPod

 

vPod Remote and Internet Access


The vPods created for the Hands-on Labs do not have external network access. This significantly simplifies networking within the vPods, but can be challenging for lab content development, software installation and configuration.

Depending on the cloud in which the pod is hosted, it is sometimes possible to temporarily enable a vPod to access external network resources, such as the Internet, as well as to enable RDP access to the Main Console virtual machine within the vPod. Typically, the clouds used to host the pods for VMworld events do not allow external access to the pods.

Keep in mind that this configuration is available for use only during the vPod development process and will not be available for actual lab deployment and usage.


 

"Wiring up" a vPod

 

By the magic of vCloud Director, a vPod may access the external world by attaching its vApp network to a vCloud External Network. This process requires that the vApp be in the Stopped state. This means that all VMs contained within the vApp are powered off and the vApp itself has been Stopped.

 

 

Switch to Networking View

 

With a Stopped vApp, click on the Networking tab (1) and select the appropriate external network from the Connection drop-down menu (2). The name of the external network varies depending on the cloud and organization currently hosting the vPod, but there will normally be only one option besides None in the list. That is the one you want to use.

 

 

Configure Services

 

You can uncheck the “Firewall” option (under the Routing column) to keep things simple. Then with the network selected:

  1. click the Actions wheel
  2. choose "Configure Services..."

 

 

Verify the NAT type

 

In the resulting window, click the NAT tab (1) and ensure that IP Translation (2) is selected from the NAT type drop-down menu. While you're at it, verify that the Enable NAT checkbox is checked.

NOTE: There may be more than one way to accomplish this step. This path has been tested extensively. It works and provides good compatibility between clouds.

 

 

Add a New NAT Rule

 

Click the Add… button (1) and follow the steps to add a Port Forwarding rule to forward all TCP traffic to nic0 on the vpodrouter (2). This should be the only NIC in the list that does not have (DHCP) listed for its IP address. The vpodrouter uses iptables to forward all traffic to 192.168.110.10 (controlcenter.corp.local), which is routed through nic1 as configured in the /etc/network/interfaces file. Use the "ip addr" command to see all IP addresses on eth1; note that ifconfig is deprecated on Debian Linux.

Click the OK button (3) to complete configuring the NAT service.

Click the OK button (4) to complete configuring the Networking services.

 

 

Apply Network Changes

 

You must then click the Apply button on the vApp’s Networking tab to actually make the changes. If you switch tabs before doing so, you get to start over.

 

 

Start your vApp

Switch back to the vApp Diagram view and Start your vApp.

Once it comes up, open the Main Console virtual machine using the VMRC console within vCD. Launch a browser and verify that you are able to access the Internet or other external resources.

Your mileage may vary here. Depending on the source of your vPod and the current cloud, you may be able to access external resources without changing anything else, or you may need to make a few adjustments. See the next step for more information.

 

 

DNS Resolution

You may find that, even though you have completed the steps outlined in the previous section, you remain unable to access Internet-based resources. In this case, it is likely that you are not able to resolve DNS names and must alter the DNS resolution mechanism within the vPod.

It is important to make the change in the proper location so as to preserve the vPod’s configuration.

Our standard is to use Google Public DNS, so if you are not using it, please switch now.

In the HOL-DEV cloud, this change can be accomplished from the command line on the Main Console as an administrator:

DNSCMD /resetforwarders 8.8.8.8 8.8.4.4

Follow any DNSCMD /resetforwarders command with a reboot or the following two commands to clear out “stale” cached records:

DNSCMD /clearcache

IPCONFIG /flushdns

Test DNS resolution by issuing a ping:

PING www.google.com

If all goes well, you should resolve www.google.com to an IP address and receive responses to your pings.

 

 

Locating the IP Address for RDP

 

Once a vPod has been properly connected to the external network following the procedure described in this article, the Main Console VM can be accessed via an RDP client from a machine on the VMware network (internal network or VPN) by pointing it at the "external IP address" of the router.

To find this address, switch to the "Virtual Machines" tab and locate the only external IP address.

 

 

Open Your RDP Client

 

Direct your RDP client to the External router IP address -- note that you must be running your RDP client from a machine on the VMware corporate network (or logged in via VPN). External access via RDP is not allowed.

You can ignore the certificate warnings and then log into the Main Console using CORP\Administrator and password VMware1! as usual.

 

 

Do you use Mac OS X?

 

If you are a Mac user, you need to use the Microsoft Remote Desktop client available from the App Store rather than the Remote Desktop Connection application that comes installed on your Mac.

You want the one with the red/orange icon instead of the one with the colored Windows logo and satellite dish.

 

 

Log into the Main Console

 

… and login as administrator or CORP\Administrator with the vPod password: VMware1!

 

 

Problems?

If you are not able to access the Main Console machine via RDP, ensure that you are on the VMware network and double check that you configured the Port Forwarding as described in the “Wiring Up” a vPod section.

Sometimes you may need to try the RDP connection multiple times (2 or 3) depending on whether the Main Console is fully booted or due to an ARP bug in the vpodrouterHOL.

 

 

"Un-wiring" a vPod

 

Prior to checking a Milestone vPod into the catalog, it must be disconnected from the external network. This process is straightforward and involves essentially undoing what was done to “wire up” the vPod in the first place.

As with the “wiring up” procedure, the vApp must be in the Stopped state in order to make this change. With the vApp stopped, the process involves only 3 steps:

  1. Click on the vApp’s Networking tab (1)
  2. Click on the currently configured external network in the Connection column and select None (2)
  3. Click the Apply button (3)

 

vPodRouter for the HOL Use Case


This section will describe the vPodRouter in the context of VMware Hands-on Labs.  Additional documentation for other general use cases is located here: https://wiki.eng.vmware.com/VPod/vPodRouter

Before making changes to the vPodRouterHOL, please check with Core Team.


 

Solution Overview

The vPodRouter version 6.x for HOL is open and configurable, based on a Debian Linux 6.0.8 VM. It is pre-configured to provide IPv4/IPv6 routing within the vPod and IPv4 NAT capabilities external to the vPod. Using a widely-known Linux distribution allows for extended functionality. Although IPv6 is configured in the vPodRouter version 6.x, VMware Hands-on Labs do not typically use it; DHCPv6 is likewise disabled.

 

 

vPodRouter Networks

 

The vPodRouter includes 6 vNICs although only vNIC0 and vNIC1 should be connected for HOL use.

vNIC 0 provides external NAT to the Main Console (controlcenter.corp.local 192.168.110.10) and is generally only used during lab development or for hybrid labs with external networking.

vNIC 1 is connected to the lab's vApp Network and provides 8 class C networks with a single DHCP range. The vast majority of vPod networking flows through vNIC 1 over the single vApp network. This technique conserves resources and speeds vApp deployments in vCloud Director. A single vApp Network means a single DHCP range, which is 192.168.100.100 - 192.168.100.250.

The remainder of the vNICs should only be considered for use outside of VMware Hands-on Labs.

vNIC 5 must never be used.  The Core Team uses eth5 to communicate the vPod startup status for prepop failure alerts.

Although the diagram above shows DHCP on eth2, eth3 and eth4, multiple DHCP ranges require a separate, dedicated vApp network for each range. This is not allowed in HOL.

 

 

vPodRouter Network Conventions

The routed networks configured on the vPodRouter have the following use conventions:

192.168.100.0/24 - DHCP on 100 - 250

192.168.110.0/24 - RegionA Management

192.168.120.0/24 - RegionA Nested VMs

192.168.130.0/24 - Available

192.168.200.0/24 - Available

192.168.210.0/24 - RegionB Management

192.168.220.0/24 - RegionB Nested VMs

192.168.230.0/24 - Available

10.10.30.0/24 - RegionA vMotion

10.20.30.0/24 - RegionB vMotion

The storage networks are not routed:

10.10.20.0/24 - RegionA Storage

10.20.20.0/24 - RegionB Storage

 

 

Dynamic Routing in the vPodRouter

As of version 6.1, the vPodRouter has dynamic routing capabilities using Quagga, which allows for BGP and OSPF routing. This is compatible with the dynamic routing used in NSX, including configurations with ECMP enabled.

 

 

 

 

Initial OSPF Config & Changes

The vPodRouter is configured with the following OSPF settings, although these can be adjusted if needed with Core Team approval.

The default network that is set up under OSPF is 192.168.100.0/24 under Area 10. Any NSX Edge placed in this range and using Area 10 will become an OSPF neighbor and share routes, assuming NSX is configured correctly.

The vPodRouter is also set to be the point of default originate and will distribute its default route.

If you need to make changes to this default network or add others, all configuration settings for OSPF are under /etc/quagga/ospfd.conf

Core Team MUST be consulted before making changes. The preferred method is to use LabStartup to make the change. See the Add-Route function for the approach to use instead.

For example, if you need to add another segment to the OSPF area, you would add the following line to the config file:

network 192.168.200.0/24 area 10

This assumes this network already exists and does not create it.

Upon making any changes, you must restart the service with the command /etc/init.d/quagga restart

NOTE - Since this vPodRouter is set to 1600 MTU for the interface, any edges connected to a Distributed Port Group must be set to "Ignore MTU Mismatch" when setting Area to Interface Mapping on OSPF.
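Putting the settings described above together, a minimal ospfd.conf fragment might look like the sketch below. This is illustrative only, not the shipped file verbatim -- confirm against the actual /etc/quagga/ospfd.conf in your vPodRouter, and remember that Core Team approval is required before changing it:

```
! illustrative /etc/quagga/ospfd.conf fragment (not the shipped file)
router ospf
 ! default HOL network in Area 10
 network 192.168.100.0/24 area 10
 ! example: an additional segment added to the same area
 network 192.168.200.0/24 area 10
 ! advertise this router's default route to neighbors
 default-information originate always
```

After editing, the quagga service must be restarted as described above for the change to take effect.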

 

 

 

Initial BGP Config & Changes

The vPodRouter is configured with the following BGP settings, although these can be adjusted if needed.

BGP Router AS 65002

Edge 1 IP - 192.168.100.3 - AS 65001  (These are the IPs and AS numbers of the NSX Edge devices you would deploy to communicate.)

Edge 2 IP - 192.168.100.4 - AS 65003

The vPodRouter is also set to be the point of default originate and will distribute its default route.

If you need to make changes to this default network or add others, all configuration settings for BGP are under /etc/quagga/bgpd.conf

Core Team MUST be consulted before making changes. The preferred method is to use LabStartup to make the change. See the Add-Route function for the approach to use instead.

For example, if you need to add an additional edge, you would need to add the following lines to the config file:

neighbor {edge-IP} remote-as {edge AS}

neighbor {edge-IP} default-originate

Upon making any changes, you must restart the service with the command /etc/init.d/quagga restart
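Expressed as a bgpd.conf fragment, the defaults above might look like this sketch. It is illustrative only -- verify against the actual /etc/quagga/bgpd.conf shipped in your vPodRouter before making changes:

```
! illustrative /etc/quagga/bgpd.conf fragment (not the shipped file)
router bgp 65002
 ! NSX Edge peers as described above
 neighbor 192.168.100.3 remote-as 65001
 neighbor 192.168.100.3 default-originate
 neighbor 192.168.100.4 remote-as 65003
 neighbor 192.168.100.4 default-originate
```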

 

 

Monitoring OSPF & BGP from vPod Router

In order to monitor the status, routes, and neighbors of OSPF & BGP, you must telnet into that particular module.

OSPF

  1. SSH to the vPodRouter or access its console.
  2. From the command line on the vPodRouter, type "telnet localhost 2604"
  3. Password is VMware1!
  4. Type in various commands
  5. Type "exit" to return to the vPodRouter base CLI

Typing ? will show the available commands.
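A few standard ospfd show commands that are useful at this prompt (the trailing comments describe what each displays and are not typed):

```
show ip ospf neighbor     ! adjacency state of each OSPF neighbor
show ip ospf route        ! routes learned via OSPF
show ip ospf database     ! contents of the link-state database
```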

BGP

  1. SSH to the vPodRouter or access its console.
  2. From the command line on the vPodRouter, type "telnet localhost 2605"
  3. Password is VMware1!
  4. Type in various commands
  5. Type "exit" to return to the vPodRouter base CLI

Typing ? will show the available commands.
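A few standard bgpd show commands that are useful at this prompt (the trailing comments describe what each displays and are not typed):

```
show ip bgp summary       ! peer list with session state
show ip bgp               ! the BGP table
show ip bgp neighbors     ! detailed per-neighbor information
```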

 

 

Questions on Quagga

For any specifics on the vPod Router dynamic routing configuration, please contact Joe Silvagi jsilvagi@vmware.com

 

 

How to...

Help for specific tasks germane to HOL.

 

 

Disable DHCPv4 on a network segment

To disable DHCPv4 on a network segment, comment out the corresponding subnet declaration in the vPodRouter's DHCP server configuration (typically /etc/dhcp/dhcpd.conf) and restart the DHCP service. For example, the following disables the 192.168.150.0/24 range:

#subnet 192.168.150.0 netmask 255.255.255.0 {
#        interface "eth3";
#        range 192.168.150.100 192.168.150.250;
#        option subnet-mask 255.255.255.0;
#        option broadcast-address 192.168.150.255;
#        option routers 192.168.150.1;
#}

Core Team MUST be consulted before making changes. The preferred method is to use LabStartup to make the change. See the Add-Route function for the approach to use instead.

 

 

Add a static route

Let's assume you want to add a static route for the network 10.1.10.0/24 via the IP address 192.168.100.3. This means that the vPodRouter will forward (or "route") all traffic destined for IP addresses in the 10.1.10.0/24 subnet to the host 192.168.100.3. The host 192.168.100.3 is then in charge of delivering these packets -- either directly to the destination host or by routing them further.

Add the following lines to the end of the iface section that contains the subnet on which the remote hop (here 192.168.100.3) resides. You will find something like:

iface eth1 inet6 static
 address fdba:dd06:f00d:a000::1
 netmask 64
 up /sbin/ip addr add fe80::1/64 dev eth1
 up /sbin/ip addr add fd53::1/64 dev eth1
 up /sbin/ip addr add 192.168.100.2/24 dev eth1
 ...

At the end of this section add:

 post-up route add -net 10.1.10.0 netmask 255.255.255.0 gw 192.168.100.3
 pre-down route del -net 10.1.10.0 netmask 255.255.255.0 gw 192.168.100.3 
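Changes to /etc/network/interfaces only take effect the next time the interface is brought up. To apply the same route on a running vPodRouter without bouncing eth1, you can run the route command by hand as root (a sketch mirroring the post-up line above):

```shell
# add the route immediately (same syntax as the post-up line)
route add -net 10.1.10.0 netmask 255.255.255.0 gw 192.168.100.3

# confirm the route is present in the kernel routing table
route -n | grep 10.1.10.0
```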

Core Team MUST be consulted before making changes. The preferred method is to use LabStartup to make the change. See the Add-Route function for the approach to use instead.

 

 

Enable PXE-Boot for VMware Autodeploy

subnet 192.168.100.0 netmask 255.255.255.0 {
        interface eth1;
        range 192.168.100.100 192.168.100.250;
        if ((exists user-class) and (option user-class = "gPXE")) {
                filename "https://autodeployserver.corp.local:6501/vmw/rbd/tramp";
        } else {
                filename "undionly.kpxe.vmw-hardwired";
        }
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.100.255;
        option routers 192.168.100.1;
        next-server 192.168.100.1;
}

 

 

Configure DHCP reservations

Core Team MUST be consulted before making changes. The preferred method is to use LabStartup to make the change. See the Add-Route function for the approach to use instead.

host esx-01a {
       hardware ethernet 00:11:22:33:44:55;
       fixed-address 192.168.100.123;
}

Make sure to replace the example MAC address and IP address with your own values, then restart the DHCP service so the reservation takes effect.

The result will look like:

subnet 192.168.100.0 netmask 255.255.255.0 {
        interface eth1;
        range 192.168.100.100 192.168.100.250;
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.100.255;
        option routers 192.168.100.1;
        next-server 192.168.100.1;
        host esx-01a {
                hardware ethernet 00:11:22:33:44:55;
                fixed-address 192.168.100.123;
        } 
}

 

 

Capture traffic for troubleshooting

Because the vPodRouter v6 is a Debian Linux machine, it is possible to capture traffic on an L2 network segment connected to the vPodRouter. This captured traffic can then be inspected using a utility such as Wireshark for further analysis.

For the desired interface, run a tcpdump packet capture. An explanation of the tcpdump options can be found at http://www.tcpdump.org/manpages/tcpdump.1.html

Here is an example for eth0:
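For example, the following captures all traffic on eth0 to a file that can later be opened in Wireshark (the output path is illustrative):

```shell
# -i eth0 : capture on interface eth0
# -n      : skip name resolution
# -s 0    : capture full packets rather than the default snap length
# -w FILE : write raw packets to a file readable by Wireshark
tcpdump -i eth0 -n -s 0 -w /tmp/eth0-capture.pcap
```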

Note: The Core Team will need to assist if running a Hybrid vPod.

 

RDP JumpServer for External Remote Access


The vApps in the HOL-DEV environment are behind a firewall, so in order to directly access the Main Console VM of a vApp in HOL-DEV via RDP, you will need to be on a system that is behind the firewall. The HOL-DEV RDP JumpServer (hol-dev-jump.hol.vmware.com) can be used as a jump box to get you behind the firewall and provide you direct access to the HOL-DEV vApp network.

Note: Direct access to HOL-DEV vApps should be available from inside the VMware internal network.  To access vApps from inside the corporate network you should just need to follow the “vPod Remote and Internet Access”  guide and directly RDP into the Main Console from your desktop running on the VMware internal network.  You may still use the HOL-DEV RDP Jumpserver from the VMware internal network, but it should not be necessary.

Note:  The RDP JumpServer is not regularly backed up.  Please make sure your important files are backed up on a different server.


 

Jump Server IP / Hostname details

Hostname: hol-dev-jump.hol.vmware.com

External IP: 207.189.188.203

Internal / vApp Network IP: 10.149.100.246

Port: 3389 (standard RDP port)

Note: If connecting from a Mac, you may need to upgrade your RDP client in order to access the RDP JumpServer.  Older Mac RDP clients may generate an error due to the licensing and RDP protocol used by the RDP JumpServer.  To correct the issue you can download a newer RDP client from the iTunes store https://itunes.apple.com/gb/app/id715768417?mt=12&affId=2064962

 

 

Requesting an Account / Lost Passwords

The RDP JumpServer in HOL-DEV uses its own local accounts and is independent from your vCD or ScreenSteps account/login details.

To request an account, simply contact your Lab Principal and/or Core Team member.

Note:  You can also send an email to hands-on-lab-beta@vmware.com to request access if your Lab Principal and/or Core Team members are unavailable.

 

 

Resources Available on the Jump Server

The JumpServer should have all the common software components you need in order to develop your lab. If you require additional software products to be installed, simply contact your Lab Principal or Core Team member.

Currently Installed Products:

 

 

Tips and Tricks Using the HOL-DEV RDP JumpServer

 

 

File Storage Space and Sharing Files

 

There is a 1.5+ TB E:\ drive on the RDP JumpServer that you can use to store your large temporary files. Be aware that the E:\ drive is a shared resource and all users on the system can access and view any files stored on it.

Please do not store confidential/sensitive files on the RDP JumpServer.

To mount the E drive on another Windows system in the HOL-DEV Org environment (i.e., the VMs in your lab/vApp running in HOL-DEV), use the RDP JumpServer's internal IP, for example:

    \\10.149.100.246\e
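For example, from a Windows command prompt inside one of your vApp VMs, the share above could be mapped to a drive letter (the X: drive letter here is an arbitrary choice):

```
net use X: \\10.149.100.246\e
```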

 

 

Copying Files In and Out of the RDP JumpServer

Transferring files to and from the RDP JumpServer is as easy as Ctrl+C and Ctrl+V.

To copy a file into the RDP JumpServer, select the file on your PC and press Ctrl+C (or select Copy). Then click inside the Remote Desktop session, navigate to the location where you want to save the file, and press Ctrl+V (or select Paste). The process also works in reverse if you wish to copy files from the RDP JumpServer to your local PC.

 

 

 

Mounting Local Drives

 

You can mount your local PC's hard drives to your Remote Desktop session as a simple way to move small files into and out of the RDP JumpServer.

To select which local PC drives are mounted, go to the Local Resources tab of the RDP client before you connect to the RDP JumpServer.

1. Start your RDP client.

2. If the Local Resources tab is not visible, click the Options button to expand the visible options.

3. Select the Local Resources tab.

4. Press the More... button to bring up the Local devices and resources screen.

5. Select the local drives on your PC that you wish to mount in the RDP session, and click OK.

You can then click Connect to connect to the RDP JumpServer. From the My Computer or Explorer window in the RDP session, you will be able to see your mounted local hard drives.

Note: This option is a good, easy way to transfer relatively small files into and out of the RDP JumpServer. Larger files (1 GB+ in size) are probably better transferred via FTP, SFTP, or some other mechanism.

 

Key-based SSH from the Main Console


One of the basic tenets of Hands-on Labs is to reduce the menial tasks required of the people taking the labs. Repetitive and non-productive tasks like entering passwords and clicking through dialog boxes should be minimized where possible.

To that end, we have generated a public and private key pair on the Main Console for controlcenter.corp.local. With a little configuration, this key pair can be used to authenticate to any Linux machine in the vPod and eliminate the need for users to enter names and passwords when doing so.


 

Generate the keys

 

The first step in any SSH work is to acquire keys. This step is performed using the PuTTYgen utility and has already been performed on the Main Console.

The files containing the appropriate keys are available in the C:\HOL\SSH\keys directory on the Main Console machine:

Of these four files, only two will be used for the purposes of this document: cc_authkey.txt and cc_private.ppk.

 

 

Enable SSH on Host

In general, most appliances will have SSH enabled by default. You can test this by opening PuTTY on the Main Console, entering the IP address or DNS name of the appliance, clicking the “SSH” radio button, and attempting to connect.

If you cannot connect, SSH may be disabled on your appliance or Linux machine, or it may not be installed. Handling every case is beyond the scope of this document as there are many possibilities, most of which depend on the flavor of Linux or appliance you’re dealing with. Your first step is to get SSH running.

Common tests are as follows:

# service sshd status

which should report something like,

Checking for service sshd                                            running

If that returns

service: no such service sshd

or

status: unrecognized service

Then, you can try

# invoke-rc.d ssh status

which should return

sshd is running.

If not, you need to install the SSH software. It is usually called openssh-server and can be installed using whichever package management utility is available for your Linux platform.

For CentOS-based systems, as long as your VM or vVM has Internet access, you should be able to use:

# yum -y install openssh-server
# chkconfig sshd on
# service sshd start

If you are using a Debian variant like Ubuntu, you can try

# apt-get install openssh-server 
# invoke-rc.d ssh start

which may work, depending on your version.

Also note that some distributions have a firewall in place by default. This may need to be disabled (a fine choice for an HOL pod) or modified to allow port 22 inbound (a much better idea for production machines).

 

 

Establish Trust

To establish trust, you need to put the Main Console’s (controlcenter.corp.local) public key into the authorized_keys file on the Linux host or appliance:

  1. Open PuTTY and log in to the Linux machine or appliance as the root user (or whichever account the lab will use)
  2. Ensure that the .ssh directory exists in the user’s home directory -- note the leading “.” on the directory name.
# ls -la ~
  3. If the .ssh directory does not exist, create it and assign the appropriate permissions.
# mkdir ~/.ssh
# chmod 700 ~/.ssh
  4. Open the cc_authkey.txt file on the Main Console machine using Notepad, Wordpad, or Notepad++ and copy all of the text onto the clipboard. Ensure you get everything, including the trailing “controlcenter”
  5. Append the public key from the clipboard to the authorized_keys file. To do so, replace PASTED_KEY in the following command with the text on the clipboard. Right-click in the PuTTY window to paste from the clipboard.
# echo PASTED_KEY >> ~/.ssh/authorized_keys

NOTE: If the authorized_keys file did not exist previously, you may also need to set the proper permissions on this file. It does not hurt to do it again since incorrect permissions will cause the process to fail:

# chmod 600 ~/.ssh/authorized_keys 

Log out of the Linux machine or appliance

# exit
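The trust-establishment steps above can be condensed into a short shell sketch to run on the target machine. PASTED_KEY here is a hypothetical placeholder; paste the real public-key text copied from cc_authkey.txt.

```shell
# Run on the Linux machine or appliance, as the account the lab will use.
# PASTED_KEY is a placeholder for the key text from cc_authkey.txt.
PASTED_KEY="ssh-rsa AAAAB3...example... controlcenter"
mkdir -p "$HOME/.ssh"                               # create .ssh if it does not exist
chmod 700 "$HOME/.ssh"                              # sshd requires restrictive permissions
echo "$PASTED_KEY" >> "$HOME/.ssh/authorized_keys"  # append, don't overwrite existing keys
chmod 600 "$HOME/.ssh/authorized_keys"              # incorrect permissions cause key auth to fail
```

After running this, a PuTTY session presenting cc_private.ppk should log in without a password prompt.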

 

 

Configuring ESXi

To configure key-based authentication on ESXi hosts, follow the same procedures as above, but use a slightly different file path. The authorized_keys file is expected in a different location in ESXi than Linux:

/etc/ssh/keys-root/authorized_keys

instead of

~/.ssh/authorized_keys

 

 

 

Configure PuTTY

NOTE: In the 2016 base vPods, Pageant is running with the Main Console's controlcenter.corp.local private key loaded, so any new PuTTY sessions will automatically try to authenticate using this key. If you have configured the host properly with the public key, everything should work seamlessly.

Now that the Linux machine or appliance trusts the key, we need to configure PuTTY to specify a login name and present the trusted key during login. We do this using a saved session in PuTTY, and it requires three steps:

  1. Create a PuTTY saved session
  2. Specify the user name
  3. Specify the key to present

 

 

Create a PuTTY Saved Session

 

Launch PuTTY and ensure that the default Session node (1) is selected.

If you already have a saved PuTTY session that you want to modify, select it in the Saved Sessions list and click the Load button.

Otherwise, enter the host's IP address or DNS name in the Host Name (or IP Address) field (2), preceded by "username@" where username is the name of the user you want to use to connect to the host. Typically, this will be root in the lab.

Enter just the hostname in the Saved Sessions box and then click the Save button to save it in the list.

 

 

[Background] Specify the key to present via Pageant

 

NOTE: This has already been done in the HOL Base vPods. It has been included here for reference purposes.

Instead of configuring each session with the same key, you can leverage the PuTTY Authentication Agent (Pageant.exe). By default, all sessions will attempt to use the keys loaded into Pageant, so you can do this just once.

The following command (all one line) will start Pageant and load the key:

"C:\Program Files (x86)\PuTTY\pageant.exe" C:\hol\ssh\keys\cc_private.ppk

In the HOL Base vPods, we have created a shortcut in the All Users Startup directory ( C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp ) that executes this command at every startup of the vPod, so there is no need to worry about doing this.

 

 

Test it out

 

With everything configured and saved, select your host’s name from the Saved Sessions list and click the Open button. Without any prompting, you should be presented with a PuTTY screen that looks like the image in this step.

 

Working with Layer 2 VMs


The Core Team provides sample layer 2 (nested) VMs on the ISOs in the HOL-Base-Templates catalog, which include the configurations documented here. Simply insert the desired ISO into the Main Console and deploy the OVF template to your vCenter. Whenever possible, use these nested VMs as-is. However, there are times when a lab has unique requirements. This short document discusses some additional steps that need to be taken when including new layer 2 VMs in a vPod. The key issue to consider is how the guest OS and/or vSphere will deal with changes to the underlying storage hardware and processor when running in different clouds.


 

Edit the vVM Advanced Configuration

 

At times, a virtual ESXi host will detect a change in the underlying storage hardware and ask to generate a new identity for the nested VMs. When the nested VM is powered on, vSphere will ask whether the VM was moved or copied in order to avoid an “identity conflict” in the datacenter. Unfortunately, the VM will not power on until this question is answered, which confuses users and creates extra documentation work if the question appears. To avoid this issue, always define the advanced configuration option “uuid.action” with the value “keep”. vPods are isolated, so there is never a reason to generate a new UUID, and doing so may cause other issues in the lab.

With the layer 2 VM powered off, use the vSphere Web Client to edit the VM settings, then go to the “VM Options” tab (1), expand the “Advanced” section (2) and click the “Edit Configuration” button (3).

 

 

 

Click the “Add Row” Button

 

Click the "Add Row" button (1) and then enter the Name: uuid.action (2) and Value: keep (3).

Other configuration parameters may be needed as well.  For instance, the “keyboard.typematicMinDelay” parameter and “2000000” value helps users see keyboard “echo” or repeating characters when entering text into a Linux VM console.
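Setting these options through the vSphere Web Client writes the corresponding entries into the nested VM's .vmx file, which would look like this:

```
uuid.action = "keep"
keyboard.typematicMinDelay = "2000000"
```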

 

 

Windows Restart Now or Later Dialog Box

 

When Windows 7 and Windows 2008 guest OSes detect a change in the hardware processor, a dialog is displayed prompting the user to “Restart Windows” either “Now” or “Later”.

This dialog confuses users and makes more work documenting the event if it occurs. Restarting now delays the user’s progress in the lab and may affect the lab in other ways. Restarting later dismisses the dialog and has never caused any issue in the lab.

 

 

Open the Startup folder for all users

 

To automate the dismissal of the Windows Restart dialog, the Core Team developed a small PowerShell script called “skipRestart” which sends a simulated keyboard command to the in-focus dialog within a second or so if the dialog appears. This dismisses the dialog so the user can continue the lab without interruption. The following procedure applies to all Layer 1 Windows 7 and Windows 2008 VMs. This step has been completed on the Main Console, although it is not strictly needed there since the Main Console runs Windows 2012.

Because the Main Console automatically logs in, this dialog is dismissed before the user even sees it and has a chance to set keyboard focus somewhere else. Therefore, for Windows 2008 and Windows 7 VMs and vVMs in your lab, you should ALWAYS enable auto logon at boot as well as install the skipRestart utility.

Copy the “skipRestart.bat” and “skipRestart.ps1” files to all Windows 7 and Windows 2008 VMs in the vPod. These files reside on the Main Console VM in C:\HOL as well on the 2016-HOL-Utility media ISO in the HOL-Dev-Resources catalog.

In the target VM, create a Start->Programs->Startup shortcut to the “skipRestart.bat” file for all users.

Open the hidden C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup folder. Click in the address field of Windows Explorer near the top and key in the full path. (1)

 

 

 

Create New Shortcut

 

Right-click in the white space and choose New > Shortcut.

 

 

Enter the location to the skipRestart.bat file

 

The "Create Shortcut" wizard prompts for the location of the item. Browse to the location of the skipRestart.bat file, which is usually placed in C:\HOL.

 

 

Name the Shortcut

 

Name the shortcut "skipRestart".

 

 

Adjust the properties on the skipRestart Shortcut

 

Right-click the shortcut, open its properties, and set it to run in a minimized window. This is less noticeable to the user.

 

 

Set-ExecutionPolicy in PowerShell

 

Open a command prompt and enter "powershell" (1) to start the PowerShell environment. Then enter "Set-ExecutionPolicy remoteSigned"  (2) so that PowerShell will actually run the skipRestart script when a user logs in. Finally, be sure to exit PowerShell. (3)

 

 

Test the skipRestart shortcut from the Startup menu

 

Click on the Start menu, then "All Programs" and open the "Startup" menu. Choose your "skipRestart" shortcut in order to test your work.

 

 

Verify that PowerShell is running

 

Start Windows Task Manager and verify that the "powershell.exe" process is running. You should see it run for about 2 minutes and then it will exit.

 

 

Enable Auto Login so the skipRestart Utility runs at boot

The precise details for enabling the Windows auto login feature vary slightly depending on the version of Windows. Just search the Internet to find the Windows Registry settings to enable auto login for your lab. It is best to have the Windows restart dialog display and be dismissed BEFORE people log in and see it.

 

FreeNAS: Adding Storage to the iSCSI Datastore


The Single Site Base vPods include a simple FreeNAS appliance that presents an ~80 GB iSCSI LUN. After formatting overhead, that 80 GB turns into about 78.6 GB of usable space. There are times when 78 GB is not enough and lab teams need to add storage to the FreeNAS. This article covers the addition of more storage to the existing iSCSI datastore.

IMPORTANT: It is to your benefit to plan the size of your datastore prior to adding data to it. It is possible to add capacity to a datastore containing data, but there are performance impacts.

Adding new iSCSI datastores is out of scope for this article, but the core team can assist you if that is a requirement for your lab.


 

In the Beginning...

 

In this example, I need more than the 78.65 GB of free space remaining on the datastore in my pod. Based on the way that FreeNAS handles storage, there are three main options:

  1. Add new, larger VMDKs, resilver the existing data onto the larger disks, and then remove the original disks
  2. Add a new VMDK that is the same size as the current ones and add it into the current pool by striping the volume onto it
  3. Add new VMDKs to the FreeNAS and create a new iSCSI LUN for presentation to your hosts

While any of these can be used, there are use cases which may drive you to choose one over the others.

Whenever you are altering storage configurations, always work on a backup copy and have all nonessential services shut down.

In general, option 2 would be preferred if you just need a little more storage above what is provided in the base pod... say, 20-30 GB more. It is also the option that we would use if the storage needs within a pod were miscalculated and we are too late in the development cycle to handle things properly. Using this as an emergency patch has performance implications that will be covered later.

At the start of development, option 1 is the preferred method for expanding the existing datastore and option 3 is even simpler if your use case can handle multiple datastores.

If you require complex storage configurations, or assistance with these processes, please do not hesitate to ask the core team for assistance. Should you wish to go it alone, please work on a copy of your pod and test the configuration thoroughly. Let Doug Baer on the core team know what you're thinking and it may save you time.

General FreeNAS documentation is available here http://doc.freenas.org/9.3/freenas.html

 

 

 

Shut Down all vESXi Hosts

 

Before reconfiguring the storage appliance (stgb-01a) in your vPod, please ensure that any hosts that are using the storage have been shut down properly. For instructions on cleanly shutting down your vESXi hosts, please see the article that describes shutting down a vPod.

It is possible to make some of these changes while the hosts and FreeNAS are online, but it is safer to perform the reconfiguration with the hosts offline.

 

 

Shut Down the Storage Appliance

 

Shutting down the FreeNAS appliance is as simple as logging in via the web interface, clicking the Shutdown option, and confirming.

To begin, launch one of the web browsers included in the vPod and click the "stgb-01a Admin" link. In the 2016 base pods, this link is located on the web browser toolbars in a folder called HOL Admin. At the login prompt, login as root with password VMware1!

NOTE: Internet Explorer 11 seems to be "FreeNAS-GUI-hostile," so we recommend using Chrome or Firefox to manage the FreeNAS.

If necessary, click on the System tab (1) then click on the Shutdown option (2) in the left pane. Note that you may need to scroll down or collapse other items in that pane if someone has been working in the FreeNAS management console previously.

 

 

Confirm Shutdown

 

Click the Shutdown button to confirm the shutdown.

NOTE: Please ensure that all vESXi hosts are powered off before doing this or you may corrupt the datastore.

 

 

Wait for shutdown

 

From the vApp Diagram tab of your vPod, verify that the FreeNAS appliance (and any vESXi hosts) are powered down.

 

 

You have a plan, right?

 

Before you proceed any further, you need to know how large your datastore needs to be.

Just like production, do not assume that you will be filling your datastore 100% because you need space for log files, swap files, and growth of any thin provisioned volumes.

With the base vPod's datastore, there are 5 x 20 GB VMDKs presented for a total of 100 GB. This results in a 79.75 GB datastore with 78.65 GB usable. How did that happen?

  1. VMFS requires about 1 GB overhead for formatting and on-disk structures ( -1 GB )
  2. ZFS, which is used to aggregate the disks, performs best with 20% free space in the pool (-17.9 GB)
  3. Formatting the raw devices so they may participate in the pool consumes about 2.1 GB per disk, so each 20 GB device contributes only about 17.9 GB (-10.5 GB)

That leaves us with 70.6 GB usable. However, it is generally not a good practice to "red line" any file system, so the extra 8 GB or so (roughly 10%) is a capacity buffer: VMFS sees some extra space that should not actually be used. It could be used, but that might result in performance issues at the FreeNAS layer. Ultimately, it is a game we are playing to minimize overall overhead by sharing the overhead space between layers.
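The overhead accounting can be re-derived with a few lines of awk, using the numbers from the base vPod:

```shell
# Re-derive the base vPod datastore math: five 20 GB VMDKs -> ~70.6 GB usable.
usable=$(awk 'BEGIN {
    pool = 5 * 17.9     # each disk contributes ~17.9 GB after formatting overhead
    zfs  = pool * 0.8   # ZFS performs best with 20% of the pool kept free
    vmfs = zfs - 1      # ~1 GB of VMFS formatting and on-disk structures
    printf "%.1f", vmfs
}')
echo "$usable GB usable"
```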

If you just need a small bump, adding one or two new 20 GB VMDKs will do the trick, but I would prefer not to see a massive chain of 20 GB drives when we can get by with a set of 5 "right-sized" devices.

The simple way to look at it is:

  1. Figure out how much space you really need to use on your datastore. (This is actual used space, not thin provisioned allocation.)
  2. Add 10%
  3. Divide by 4 and round up to the nearest GB. (Use your judgment here. This ends up being a bit of art mixed with science.)
  4. You need 5 VMDKs of this size to create an iSCSI datastore to contain your data.
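The sizing steps above can be sketched as shell arithmetic. NEED_GB is a hypothetical requirement chosen for illustration; the rounding mimics "round up to the nearest GB":

```shell
# Sizing sketch: how big should each of the 5 VMDKs be?
NEED_GB=60                                      # step 1: actual space you need
WITH_BUFFER=$(( (NEED_GB * 110 + 99) / 100 ))   # step 2: add 10%, rounding up
PER_VMDK=$(( (WITH_BUFFER + 3) / 4 ))           # step 3: divide by 4, rounding up
echo "Create 5 VMDKs of ${PER_VMDK} GB each"    # step 4: five VMDKs of this size
```

For a 60 GB requirement this yields 17 GB per VMDK; remember this is art mixed with science, so round generously.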

Armed with this information, you are ready to move on.

 

 

A note on striping and extending storage

 

When a striped volume is created, the data written to that volume is spread evenly across all of the disks that make up that volume.

When a new disk is added to the volume, like Disk 4 on the right, new data will be written across all of the devices now available. However, the existing data continues to reside only on the volumes that were present when it was written. This results in an imbalance.

That might be fine... up until the original set of drives fills up. When this happens, the new drive is the only place new data is written. This is also the only drive that can be used to read that new data. This can cause a performance bottleneck and undesirable, inconsistent results.

Production storage systems often provide the ability to "re-level" or redistribute the data across an expanded pool. That process runs in the background on the array and can take days or weeks to complete. We do not have that kind of time in HOL, and FreeNAS does not offer an automatic way to do this.

This is why you want to extend the storage before you put anything on it. You will avoid this issue.

 

 

Reconfigure the Storage Appliance VM

The next step in the process is to add one or more new VMDKs to the FreeNAS storage appliance.

We do not want to resize the existing VMDK since this requires action as sysadmin in vCloud Director.

Instead, we will add a new VMDK, extend the ZFS pool, grow the zvol, and extend the iSCSI-based datastore.

If that makes no sense, don't worry, we'll walk through it.

 

 

Edit VM Properties

 

Right-click on the powered-off FreeNAS appliance (stgb-01a) and select Properties (1) from the context menu.

 

 

Add New VMDK

 

Once the VM properties screen is open, click on the Hardware tab and scroll down to the Hard Disks section (1).

Click on the Add button (2) in the lower right corner of the Hard Disks section.

Enter the size of the new VMDK (3). Note that this is roughly the amount of storage that will be added to your datastore. There will be some formatting overhead, usually on the order of a couple of GB. Here, we are adding a 20 GB VMDK, which will provide approximately 19 GB of additional storage. If you are extending by striping (Option 2), you must match the size of the existing VMDKs for the expansion to work.

Depending on the option you select, you may need to add more than one VMDK. Repeat the process to add as many devices as you require.

Be sure to change the Bus Number and Unit Number. This isn't a strict requirement but helps keep things organized:

I use Bus Number 0 for the OS, Bus Number 1 for LUNs on Site/Region A and Bus Number 2 for LUNs on Site/Region B. The Unit Numbers should increment automatically when you change the Bus Number. If you exceed the number of devices on a given bus, it may make sense to look at an alternate configuration.

If you need a custom storage configuration, please contact the core team for assistance.

Click the OK button to apply the change. Wait for the Busy label to disappear from the mini-console screen on the FreeNAS VM, then power it up. For now, you can leave the vESXi hosts powered off.

 

 

Option 1 - Extend a Volume by Replacing the Disk(s)

 

This article was written with NFS as an example, but growing a zvol by replacing the disks with larger ones follows much the same process. If you intend to follow this process, do it before there is data on the drives. Without data to move around, the process will take seconds instead of hours (or days!) to complete. The examples here were performed with data on the volume. With empty drives, the replacement time is barely noticeable.

At this point, the new VMDKs must have been added to the FreeNAS. If you have not completed the steps, take care of that before continuing.

  1. Click on the Storage item in the toolbar
  2. Click on the name of the Volume to be extended
  3. Click on the Volume Status button -- it shows up once the volume has been selected

If you are replacing the disks with larger ones, you must replace ALL of the existing disks with new ones of a uniform size.

 

 

Begin the Disk Replacement

 

  1. Select the disk to be replaced. To help identify the disks, you can use the View Disk button off the main Storage tab
  2. Click the Replace button

 

 

Select the replacement disk

 

The current disk will be listed in bold and identified as "In-place" in the list.

  1. Select the replacement device from the drop down list. In this case, the 60 GB da3 is being used to replace the 20 GB da2.
  2. Click the Replace Disk button to begin the process

 

 

Monitor Progress

 

In FreeNAS, this replacement process is called resilvering and can be monitored from the web interface, although it does not update very frequently. It is OK to navigate away from this page; the process will continue.

 

 

WARNING - Don't worry

 

While the disk is being replaced, you may see a CRITICAL warning. Note that this is just letting you know that a device is being replaced and that performance may be degraded during the copy. You're not trying to do anything else at this point anyway, right? No need to worry...

 

 

Wait for completion

 

To refresh the status of the resilvering process, go back to the Storage tab, select the volume, and click on the Volume Status button again.

 

 

Verify the Extension

 

You will be returned to the main Storage tab.

Note that the Available number should have increased by roughly the amount of storage you added. In this example, that was about 40 GB.

 

 

Almost there

You are not finished yet. Please jump to Grow the zvol in this article.

 

 

Option 2 - Extend the Volume by Striping (small bump, emergency use)

 

Once the appliance comes back up, go to its web interface from the web browser on the Main Console and click on the Storage button (1). Note the amount of space in the Available column (86.6 GiB in the example) on the ISCSI01 line.

Click on the name of the volume you wish to extend (2) and click the Volume Manager button (3) to work on that volume.

 

 

Manual Configuration

 

In the labs, we are using a striped set with zero redundancy. This might sound scary, but these "disks" are VMDKs sitting on high-end storage systems. This helps us make efficient use of the limited storage and processing resources available, and is a very uncommon configuration for production data. The built-in wizards are not completely aware of this configuration, so we must use manual mode to extend the stripe set.

NOTE: Even if we configured redundancy here with various RAID options, the VMDKs containing the recovery data would be on the same datastore behind the scenes. A failure of that datastore means everything is gone, so there is no point in adding the compute/IO/and storage overhead.

In the Volume Manager dialog, click the Manual setup button in the lower right corner.

 

 

Select the volume, disk, and add the disk to the volume

 

Select the COMP01 volume in the Volume to extend box (1)

In the Single Site Base vPod, the new disk should show up as "da6" in the Member disks section (2). Click on the disk to select it

Finally, click the (confusingly-named) Add Volume button to apply the change.

 

 

Warning/Error Message ?

 

Note that by completing the extension you are actually growing a ZFS pool. When you add a new device, you stripe the data across the new device(s). This only works if the new and existing devices are the same size and only in very specific configurations.

If you need a custom storage configuration, please do not hesitate to contact the core team and we can help you get what you need.

The red message in the top window results from an attempt to add a single device to a pool that is already configured with multiple devices.

Adding a new device to an existing stripe set requires the use of Manual setup, as outlined in the previous step. If you are seeing this message, odds are good that you are attempting to use the assisted setup.

 

 

Verify the Extension

 

You will be returned to the main Storage tab.

Note that the Available number should have increased by roughly the amount of storage you added.

 

 

Grow the zvol

 

It does not matter whether you grew the pool by replacing the drives with larger ones (Option 1) or by striping onto a new drive (Option 2); the process from here on is the same.

Now that the new disk has been added to the pool, we have to grow the zvol -- this is the FreeNAS construct that represents the iSCSI LUN.

Click on the ISCSI01 device in the Name column on the Storage > Volumes tab (1).

Click on the wrench icon (2) to edit the zvol

 

 

Change the size

 

In the Edit zvol window, change the Size field to the new, larger LUN size.

Note that you should keep roughly 20% of the added space unallocated in the pool to prevent possible performance issues.

The previous size was 80G and I added a 20 GB VMDK.

80% of 20 is 16, and 80 + 16 = 96, so enter 96G.

Make sure you include the "G" to indicate gigabytes.
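The arithmetic above, using the values from this example:

```shell
# New zvol size = old size + 80% of the newly added capacity
OLD_GB=80                                     # current zvol size
ADDED_GB=20                                   # capacity of the VMDK just added
NEW_GB=$(( OLD_GB + ADDED_GB * 80 / 100 ))    # keep 20% of the new space free in the pool
echo "New zvol size: ${NEW_GB}G"
```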

Click the Edit ZFS Volume button to commit the change

 

 

Verify the Change

 

Within the FreeNAS interface, make sure that the ISCSI01 device shows the correct size. The system will adjust the allocation to be a multiple of the block size, so don't worry if it is a little different than what you specified.

Note that you're probably not going to fill up the entire VMFS volume, so being a little over the 80% threshold here should not be a problem.

 

 

Trigger a Rescan of the device on the Hosts

 

Within the vSphere client, go to the Hosts and Clusters view (1), right-click on the RegionA01-COMP01 cluster (2), select Storage (3) and Rescan Storage... (4)

This is required for the hosts to recognize the new size of the LUN.

 

 

Increase the size of the Datastore

 

Switch to the Datastore view and select the RegionA01-ISCSI01-COMP01 datastore.

Click on the Manage tab (1), then Settings (2), and General (3)

Click the Increase button to begin the wizard.

 

 

Select/Verify the device's new size

 

On the Select Device screen, ensure that the Capacity reflects the new capacity of the LUN. If not, close this, go back and rescan the storage.

Select the device by clicking on it and then click the Next button.

 

 

Grow the partition

 

  1. From the drop down menu, select the Use free space... item to have the partition fill the available free space on the LUN.
  2. Click the Next button

 

 

Execute the change

 

Verify that everything looks as it should and click the Finish button.

 

 

Verify from the Host

 

Once the process completes, check that the datastore reflects the additional space (1).

If it does not, you may need to click on the Refresh button (2) to have the host re-query the storage.

 

 

Clean up after yourself!

If you selected Option 1 and have not yet removed the old VMDKs from the FreeNAS appliance, now is the time to do that.

Shut down the vVMs, then the vESXi hosts and any other systems that are using the FreeNAS, and finally shut down the FreeNAS itself.

 

 

Shut Down FreeNAS and Remove the old disks

 

Once the resilvering has completed, once again shut down the FreeNAS and remove the old disk from the VM by opening the VM's properties and navigating to the Hardware tab, as described in the Reconfigure the Storage Appliance VM step.

It should be simple to identify the proper disk as it will be the smaller disk in the list. Make sure you are removing the correct disk as this operation is destructive.

Click the Delete button and confirm the deletion of this old disk. Click the OK button to close the VM's properties and then power the FreeNAS VM up and ensure that everything looks as expected.

 

 

BONUS: Zeroing out the "Dirty Blocks"

 

NOTE: This process is not necessary unless you have added vVMs to your datastores and removed them prior to finishing up your development. The datastores created for the development cycle are already clean and space is only consumed by the vVMs and files that you have added to them.

After you have worked on your story, created your vPod and removed any vVMs that you did not need, it helps out a lot if you take some time to zero out the "dirty blocks" on the datastore. These "dirty blocks" still contain the vVM's data and must be exported and replicated when we move your pod to our other clouds.

If you take the time to zero them out in the development environment, the OVF export process will automatically exclude the unused blocks (it knows they're unused because they have zeroes written to them) from the export, which makes the export, replication, and import processes much more efficient.

Please note that you should do this at the END of your development cycle since this effectively inflates the thin provisioned disks to something more like eager-zeroed thick (EZT) disks, and we don't need a bunch of pods in DEV eating up real space with a bunch of zeroes!

 

 

Great idea! How do I do that?

NOTE: If you have compression enabled on your datastores, like we do on the default LUNs, this process will never complete. Just don't do it. The volume will never fill up because the zeroes are all compressed down to a single zero...

To do what we want, you need to log in to the console of the FreeNAS appliance. You can do this from the Main Console machine, or by opening the VMRC console to the FreeNAS VM in vCD. If your pod is already up and running, I find it easier to do this from the Main Console.

Before you do anything else, you should shut down all of your vVMs. I like to have all of the ESXi hosts shut down as well since I'm extra paranoid, but it is not strictly required.

You could use PuTTY to connect to the FreeNAS via SSH, but you would first need to log in to the FreeNAS web interface to enable SSH. So, we will start with the web interface.

Log in to the FreeNAS web interface from the Main Console by opening a web browser, pointing it at http://192.168.110.60 and logging in as root with the VMware1! password.

 

 

Quick Aside: Compression

 

The core team has disabled compression on the NFS datastores presented from the FreeNAS storage appliance. However, if you create a new mount point, it will default to having compression enabled.

Compression is important here because having it enabled will effectively render the "zero dirty blocks" process useless. When you write a bunch of zeroes to a compressed volume, the compression algorithm effectively collapses the zeroes and will not fill the empty space with zeroes.

To check whether an NFS share is compressed, click on the Storage button (1) in the FreeNAS web interface, then locate the line with your volume name (2) in the table. Check the corresponding entry in the Compression column and verify that it is off.

Prior to undertaking the steps that follow, ensure that you are not dealing with a compressed volume. The time you save will be your own.

You may notice that the volume shows a compression ratio of greater than 1.0 even though compression has been disabled. Compression was initially enabled on this volume, but has since been disabled. The effect is that the current data is compressed but the incoming data will not be. Inbound compression was disabled on these volumes because we are unsure of the performance impacts on CPU and memory when that feature is enabled and the vPods are deployed at scale.

 

 

Launch the Shell from the FreeNAS web interface

 

I think using the integrated Shell option in the FreeNAS management interface is simpler here than going through the process of enabling SSH, launching PuTTY, and connecting. Just click on Shell and you will be presented with a root console. Simple.

NOTE: You can go to the Services node and enable SSH instead -- you will need to enable the "Login as Root with Password" and "Allow Password Authentication" options. This is useful if you will be messing with the storage a lot in the future and would rather use the command line than the web interface. Go nuts, but be careful!

 

 

Determine the mount points

 

If you have neither added nor changed any mount points on the storage appliance, you will likely only have the basic /mnt/NFSA mount point and your job will be very easy. To verify this, execute the following command

# df -h | grep /mnt

and it will show the mounted devices. You can ignore the ones ending with .samba4

 

 

Write the zeroes

 

For each of the relevant mount points, create a new file full of zeroes that consumes all of the remaining space on the device.

# dd if=/dev/zero of=/mnt/NFSA/z.txt

Once that is done, remove the file.

# rm /mnt/NFSA/z.txt

Now, all free space is both zeroed and unallocated.

These can be combined into one command:

# dd if=/dev/zero of=/mnt/NFSA/z.txt ; rm /mnt/NFSA/z.txt

Note that the dd command is not very chatty and you don't get any feedback until it fills up the disk. It is best to just let it run and go eat lunch or something.

At the end, dd will output something similar to the picture to indicate that the device is full. That's OK because you remembered to include the deletion step as well. :)

To make extra sure, run the df -h command again and ensure that the number in the Capacity column for this mount point is not 100%.

Repeat with any other NFS mount points you have defined, then shut down your pod and check it in to the catalog for replication and publishing.
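The zero-then-remove pattern above can be wrapped in a small helper and repeated per mount point. This is a hedged sketch only: the bounded write (bs/count) and the scratch directory are there so the example is safe to run anywhere. In the lab you would omit bs and count so dd fills the device, and you would pass your real mount points, such as /mnt/NFSA.

```shell
# Sketch: zero out free space on a mount point, then remove the file.
zero_free_space() {
    mp="$1"
    # Write zeroes. Bounded here for safety; in the lab, drop bs/count
    # so dd runs until the device is full.
    dd if=/dev/zero of="$mp/z.txt" bs=1024 count=16 2>/dev/null
    # The file full of zeroes has served its purpose; remove it so the
    # space is both zeroed and unallocated.
    rm -f "$mp/z.txt"
}

# Demo against a scratch directory rather than a real NFS mount:
demo=$(mktemp -d)
zero_free_space "$demo"
ls "$demo"    # prints nothing: the zero file was removed
```

In the lab, you would call the helper once for each mount point that df reported.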

 

FreeNAS: Adding a new iSCSI Datastore


The Single Site Base vPods include a simple FreeNAS appliance that presents an ~80 GB iSCSI LUN. After formatting overhead, that 80 GB turns into about 78.6 GB of usable space. There are times when 78 GB is not enough and lab teams need to add storage to the FreeNAS. The process of expanding the LUN and datastore is covered in the article, FreeNAS: Adding Storage to the iSCSI Datastore.

This article covers the creation of a new iSCSI datastore on the FreeNAS. Much of the content is similar, but this article details the specific processes involved with building and presenting the new datastore.

Even if you have years of experience with FreeNAS, please at least skim through this document. We do things differently in HOL than you would do in production, or even in your own lab.

NOTE: Plan the size of your datastore prior to adding data to it. It is possible to add capacity to a datastore containing data, but there are performance impacts.

As always, the core team can assist you if you have questions or require clarification. We would rather get it right than have to rework things later. Storage migration in a nested environment takes a LONG time.


 

Do you need a new datastore?

If you are reading this article, odds are that you have identified the need for an additional datastore. Keep in mind that the requirements for adding a datastore to a lab can be different from production and we attempt to balance the requirement with the overhead incurred in the vPod. Later in this article, when you see the math involved, you will gain a better understanding of what this means.

While adding new LUNs and datastores can be done online, adding new SCSI controllers (even virtual ones) typically requires downtime. Furthermore, whenever you alter storage configurations, always work on a backup copy. We like to try things out in a test copy of the pod, document the process, then deploy a fresh copy and walk through the validated configuration steps to produce a clean result.

If you require complex storage configurations, or assistance with these processes, please do not hesitate to ask the core team. We are happy to have you do the work on your own, but letting the core team know what you are planning may save you time.

For Hands-on Labs, we currently use FreeNAS 9.3, which is the last of the 9.x branch. General FreeNAS documentation is available at http://doc.freenas.org/9.3/freenas.html

 

 

Shut Down all vESXi Hosts

 

The preferred method of adding a new LUN to FreeNAS for HOL involves adding a SCSI controller to host the new disks. Doing this requires that the FreeNAS be taken offline. Because the ESXi hosts in the vPod use the FreeNAS, they should be shut down prior to working on the FreeNAS.

Before reconfiguring the storage appliance (stgb-01a) in your vPod, please ensure that any hosts that are using the storage have been shut down properly. For instructions on cleanly shutting down your vESXi hosts, please see the article that describes shutting down a vPod.

 

 

Shut Down the Storage Appliance

 

Shutting down the FreeNAS appliance is as simple as logging in via the web interface, clicking the Shutdown option, and confirming.

To begin, launch one of the web browsers included in the vPod and click the "stgb-01a Admin" link. In the 2017 base pods, this link is located on the web browser toolbars in a folder called HOL Admin. At the login prompt, log in as root with the password VMware1!

NOTE: Internet Explorer 11 seems to be "FreeNAS-GUI-hostile," so we recommend using Chrome or Firefox to manage the FreeNAS.

If necessary, click on the System tab (1) then click on the Shutdown option (2) in the left pane. Note that you may need to scroll down or collapse other items in that pane if someone has been working in the FreeNAS management console previously.

 

 

Confirm Shutdown

 

Click the Shutdown button to confirm the shutdown.

NOTE: Please ensure that all vESXi hosts are powered off before doing this or you may corrupt the datastore.

 

 

Wait for shutdown

 

From the vApp Diagram tab of your vPod, verify that the FreeNAS appliance and all vESXi hosts are powered down.

 

 

You have a plan, right?

 

Before you proceed any further, you need to know how large your datastore needs to be.

Just as in production, do not assume that you will fill your datastore to 100% capacity, because you need space for log files, swap files, and growth of any thin provisioned volumes.

With the base vPod's datastore, there are 5 x 20 GB VMDKs presented for a total of 100 GB. This results in a 79.75 GB datastore with 78.65 GB usable. How did that happen?

  1. VMFS requires about 1 GB overhead for formatting and on-disk structures ( -1 GB )
  2. ZFS, which is used to aggregate the disks, performs best with 20% free space in the pool (-17.9 GB)
  3. The raw devices are formatted so that they may participate in the pool, which allows each one to contribute about 17.9 GB (-10.5 GB)

That leaves us with 70.6 GB usable. However, it is generally not a good practice to "red line" any file system, so the extra 8 GB or so is roughly 10% for a capacity buffer to allow VMFS to see some extra space that won't be used. It could be used, but might result in performance issues at the FreeNAS layer. Ultimately, it is a game we are playing to minimize overall overhead by sharing the overhead space between layers.
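For reference, the arithmetic above can be checked directly. The figures are the ones quoted for the base vPod's 5 x 20 GB layout:

```shell
# Work through the overhead figures for the 5 x 20 GB base layout.
usable=$(awk 'BEGIN {
    raw  = 5 * 20          # five 20 GB VMDKs presented to the FreeNAS
    pool = raw - 10.5      # per-device formatting overhead (~2.1 GB each)
    zfs  = pool - 17.9     # keep ~20% of the pool free for ZFS
    vmfs = zfs - 1         # VMFS formatting and on-disk structures
    printf "%.1f", vmfs
}')
echo "${usable} GB usable"   # prints "70.6 GB usable"
```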

The 5 devices may seem like an arbitrary number, but it was selected to make the math easy. The simple way to look at it is:

  1. Figure out how much space you really need to use on your datastore. (This is actual used space, not thin provisioned allocation.)
  2. Add 10%
  3. Divide by 4 and round up to the nearest GB. (Use your judgment here. This ends up being a bit of art mixed with science.)
  4. You need 5 VMDKs of this size to create an iSCSI datastore to contain your data.
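As a worked example of that recipe, assuming a hypothetical requirement of 60 GB of actual data:

```shell
# Sizing recipe from the steps above (NEED is a hypothetical figure;
# substitute your own actual-used-space estimate).
NEED=60                                  # step 1: GB of real data
PADDED=$(( (NEED * 110 + 99) / 100 ))    # step 2: add 10%, rounding up
VMDK=$(( (PADDED + 3) / 4 ))             # step 3: divide by 4, rounding up
echo "Create 5 VMDKs of ${VMDK} GB each" # step 4
```

For 60 GB of data this yields five 17 GB VMDKs.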

Armed with this information, you are ready to move on.

 

 

Why do I need to plan ahead? My array lets me extend the LUN later!

 

When a striped volume is created, the data written to that volume is spread evenly across all of the disks that make up that volume.

When a new disk is added to the volume, like Disk 4 on the right, new data will be written across all of the devices now available. However, the existing data continues to reside only on the volumes that were present when it was written. This results in an imbalance.

That might be fine... up until the original set of drives fills up. When this happens, the new drive (Disk 4) is the only place new data is written. This is also the only drive that can be used to read that new data. This causes a performance bottleneck and undesirable, inconsistent results.

Production storage systems often provide the ability to "re-level" or redistribute the data across an expanded pool. That process runs in the background on the array and can take days or weeks to complete. We do not have that kind of time in HOL, and FreeNAS does not offer an automatic way to do this.

This is why you want to extend the storage before you put anything on it: you avoid this issue entirely.

 

 

Reconfigure the Storage Appliance VM

The next step in the process is to add one or more new VMDKs to the FreeNAS storage appliance.

We will add a new SCSI controller, add VMDKs, create a new ZFS pool, create a new zvol, present the new LUN, and format the iSCSI-based datastore in vSphere.

If that makes no sense, don't worry, we'll walk through it.

 

 

Edit VM Properties

 

Right-click on the powered-off FreeNAS appliance (stgb-01a) and select Properties (1) from the context menu.

 

 

Add New VMDKs

 

Once the VM properties screen is open, click on the Hardware tab and scroll down to the Hard Disks section (1).

Click on the Add button (2) in the lower right corner of the Hard Disks section.

Enter the size of each new VMDK (3). Here, we are adding 10 GB VMDKs (Disks 6-10), which will produce a datastore of size ~38 GB, of which ## GB would be considered usable.

Repeat the process to add as many devices as you require.

Be sure to change the Bus Number and Unit Number. This isn't a strict requirement, but it helps keep things organized and consistent.

The Unit Numbers should increment automatically when you change the Bus Number. If you exceed the number of devices on a given bus, it may make sense to look at an alternate configuration.

If you need a custom storage configuration, please contact the core team for assistance.

Click the OK button to apply the change. Wait for the Busy label to disappear from the mini-console screen on the FreeNAS VM, then power it up. For now, you can leave the vESXi hosts powered off.

 

 

Configure the Storage Pool

 

 

Create Storage Volume

 

Once you've logged back into the FreeNAS Appliance:

  1. Click Storage at the top of the screen
  2. Click the Volume Manager button

 

 

Configure Storage Volume

 

  1. In the screen that pops up, provide a Volume Name. In the example, I specified COMP01-2 because this is the second pool for COMP01.
  2. Click the + button in the Available disks section for the set of disks you wish to add to the volume
  3. For Volume layout, click the dropdown and select Stripe
  4. Make note of the Capacity (40 GB in this example) - you will need this when defining the Extent
  5. Click the Add Volume button when finished

NOTE: There is no real benefit to using any of the RAID or mirroring options in FreeNAS when running nested as a vCloud Director vApp.

 

 

Select the Volume

 

  1. Click on the name of the new storage pool to select it
  2. Click the Edit Options icon

 

 

Disable Access Time (atime)

 

Newly created volumes have access time (atime) tracking enabled by default. For performance reasons, we disable this.

  1. Click the Off button in the Enable atime section
  2. Click Edit Dataset to save the setting

As tempting as it may look, do not enable ZFS Deduplication. The resources required to do so far exceed what we have available in the nested environment.

 

 

Create the volume (zvol)

An iSCSI LUN is represented in FreeNAS using a "zvol" object. It can also be backed by a file ("extent") on the ZFS file system, but this is not as efficient.

 

 

Select the Volume

 

  1. Click on the name of the new storage pool to select it
  2. Click the Create zvol icon

 

 

Create the zvol

 

  1. Provide a name for the zvol. I like to use the unique portion of the datastore name here. Just use something short that makes sense and helps you distinguish this one from the other one(s).
  2. The size here is the raw, unformatted size that the LUN will report to the guest operating system. It is specified here in gibibytes. Uh, what?!?

Right... The gibibyte is a multiple of the unit byte for digital information. The binary prefix "gibi" means 2^30, so one gibibyte is 1,073,741,824 bytes, while the gigabyte (GB) is defined as 10^9 bytes (1,000,000,000 bytes). So, 1 GiB ≈ 1.074 GB.

For all practical purposes here, 1 GiB ≈ 1 GB
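The difference is easy to check in a shell:

```shell
# GiB vs GB, in concrete numbers:
GIB=$(( 1 << 30 ))    # 2^30 bytes in one gibibyte
GB=1000000000         # 10^9 bytes in one gigabyte
echo "1 GiB = ${GIB} bytes"
ratio=$(awk -v a="$GIB" -v b="$GB" 'BEGIN { printf "%.3f", a/b }')
echo "1 GiB = ${ratio} GB"    # prints "1 GiB = 1.074 GB"
```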

The number you want to use here is 4 x the size of the VMDKs you presented to the FreeNAS to create this LUN. So, in our example, in which we used 10 GB VMDKs, we would use 40 GiB. However, because the volume here is so small, the overhead incurred by the FreeNAS becomes significant enough to prevent that -- it reserves 2 GB of capacity from each disk provided. I will be using 30 GiB for the remainder of this example.
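As a rough sanity check of those numbers: the 2 GB per-disk reserve and the 20% free-space target come from the discussion earlier in this guide, and the exact overhead FreeNAS imposes may differ, which is why the example rounds down to a tidy 30 GiB.

```shell
# Hedged approximation of zvol sizing for 5 x 10 GB backing VMDKs.
VMDK=10                                  # GB per VMDK backing this pool
RULE=$(( 4 * VMDK ))                     # rule of thumb: 4x VMDK size
APPROX=$(( 5 * (VMDK - 2) * 8 / 10 ))    # ~2 GB/disk reserved, ~20% pool free
echo "rule of thumb: ${RULE} GiB; with overhead: ~${APPROX} GiB"
```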

 

 

CRITICAL: Delete the automatic scrub

The FreeNAS automatically creates a scheduled task to verify the consistency of the new zvol every 35 days. While this works well for production environments, in our vPods, once that 35-day threshold has been reached, the volume will be scanned EVERY TIME any copy of the pod boots. This is an intensive operation and can cause significant stress on our clouds.

Please follow these instructions to disable the scheduled "scrub" of any new volumes.

 

 

Remove the scrub

 

Once the new zvol has been created,

  1. Click on the Scrubs option
  2. Locate the existing scheduled scrub and click on its name to select it
  3. Click the Delete button to remove it
  4. Repeat for any remaining scheduled scrubs; when you are finished, there should be none.

 

 

iSCSI Target Configuration

 

 

 

Sharing via iSCSI

 

The HOL base vPods already have iSCSI configured on the FreeNAS, so most of what is needed is presentation of the new LUN to the proper set of hosts.

  1. Click the Sharing icon on the FreeNAS toolbar
  2. Click Block (iSCSI) to enter iSCSI configuration
  3. Click on the Extents item

If required, you can review the Target Global Configuration, Portals, Initiators, Authorized Access, and Targets sections, but these generally do not require changes once they are working.

 

 

Extents

 

Extents provide the mapping between your iSCSI target and the actual storage on the appliance.

  1. Notice the existing Extent, COMP01-ISCSI01, which is the LUN presented to the hosts at site A in the vPod.
  2. Click the Add Extent button to begin mapping in your new zvol

 

 

Configure the Extent

 

  1. Set the Extent Name to something that makes sense. In this case, I used COMP01-ISCSI02 because it is the second LUN presented to the COMP01 cluster.
  2. If you are adding multiple volumes, select the proper one from this list. If you are just creating one, it will likely be the default selection.
  3. For the LUN RPM, select something other than the default of "SSD" to prevent the device from showing up as flash storage within ESXi. The nesting confuses the software, and this gives it a little nudge in the right direction.
  4. To create the extent, scroll down and click the OK button (not shown)

Click Associated Targets to finish up the configuration

 

 

Associated Targets

 

There is not a lot to see, so just get to it:

  1. Click the Add Target / Extent button
  2. Click the Target dropdown and select sitea to present this new LUN to the hosts in Site A of the vPod (unless you have a dual-site pod and want to present this LUN to the Site B hosts, this is the option you want)
  3. Select a static LUN ID that is not currently in use. Unless you like losing access to your volumes, do not pick Auto here.
  4. Click the Extent dropdown and select your extent, COMP01-ISCSI02 (or whatever you named yours)
  5. Click OK

Now that the target has been created and presented, it is time to switch focus to the ESXi hosts. If they are not powered up, power one of them on and log in using the vSphere Web Client.

 

 

ESXi Host Configuration

 

 

Add iSCSI Storage Adapter

 

NOTE: This step has already been completed for each host in the HOL Base vPods, but may be required if you added more hosts to your vPod.

To access iSCSI datastores, each ESXi host must have the Software iSCSI adapter added.

  1. Using the vSphere Web Client, select an ESXi host
  2. Click Manage
  3. Click Storage
  4. Click Storage Adapters
  5. Click the Green + and select Software iSCSI adapter

Click OK on the confirmation screen that appears

 

 

Adjust Adapter Details

 

Once the iSCSI adapter has been added, you may be required to add Targets and rescan for storage devices.

  1. Select the newly added iSCSI Software Adapter. Note that this is vmhba65 for vSphere 6.5 hosts rather than vmhba33 as in this screen shot.
  2. Click Targets
  3. If 10.10.20.60:3260 has not been added, click the Add button and add that now (note the IP address and Port matches the address set for Portals on the FreeNAS)

IMPORTANT: 10.20.20.60 should be used in place of the 10.10.20.60 address for "Site B" hosts.

 

 

Rescan Host Storage Adapter

 

  1. With the iSCSI Software Adapter still selected, click the Rescan Host Storage Adapter button to discover new storage devices
  2. You should see the 30 GB (or whatever size you specified) volume show up in the Devices list with the LUN that you assigned (2, in the case of this example)

 

 

Create Datastore

 

You may now create a Datastore on your host.

  1. Click the Datastores item under Storage on the host's Configure tab
  2. Click the New Datastore icon and follow the wizard

Note that there is a "VVD-inspired" naming convention used within the HOL vPods, so this new Datastore would be called RegionA01-ISCSI02-COMP01

Once the datastore creation and formatting are complete, rescan the storage adapters on each host to locate the new datastore. As a shortcut, you can right-click on the cluster object in the Web Client and select Storage > Rescan Storage... from the context menu.

 

 


 

Virtual SAN: Adding Storage to the VSAN Datastore


NOTE: Due to some late-breaking developments, the VSAN-based Base HOL vPods are not recommended for use. Unless you have a requirement to showcase VSAN in your pod, please use one of the NOVSAN base vPods instead.

The Single Site Base vPods include a 3-node VSAN datastore made up of 5 GB "SSD" devices and 40 GB "HDD" devices, resulting in approximately 118 GB of usable space. There are times when this is not enough and lab teams need to add storage. In the base vPods, the VSAN is in Manual mode. This article covers the addition of more storage to the existing VSAN datastore.


 

In the Beginning...

 

In this example, I need more than the 118 GB free space remaining on the VSAN datastore. Adding capacity to a VSAN can be a simple process, especially if the VSAN is in Automatic mode. In the HOL vPods, VSAN is in Manual mode because the disks are virtual rather than physical and must be properly prepared before adding them.

Whenever you are altering storage configurations, always work on a backup copy and try to have all nonessential services shut down.

 

 

Shut Down vESXi Hosts

Before reconfiguring the storage in your vPod, please ensure that any hosts that are part of the VSAN have been shut down properly and Powered Off in vCloud Director. If there are multiple clusters in the pod and only one of them is being expanded, it is only required to shut down the hosts that are members of the cluster being expanded.

For instructions on cleanly shutting down your vESXi hosts, please see the article that describes shutting down a vPod.

 

 

Reconfigure the vESXi Hosts

The next step is to add new VMDKs to the ESXi hosts. Note that we do not want to resize the existing VMDKs; we will add new VMDKs and extend the VSAN onto them.

 

 

Edit VM Properties

 

Right-click on one of the powered-off ESXi hosts and select Properties (1) from the context menu.

 

 

Add New VMDK

 

Once the VM properties screen is open, click on the Hardware tab and scroll down to the Hard Disks section (1).

Click on the Add button (2) in the lower right corner of the Hard Disks section.

Enter the size of the new VMDK (3). Note that this is roughly the amount of storage that will be added to your VSAN datastore from this host. For a 3-host cluster, plan to add about 1/3 the storage from each host unless you have special requirements. Here, we are adding a new 50 GB VMDK. If you need a custom storage configuration, please contact the core team for assistance.

Click the OK button to apply the change. Wait for the Busy label to disappear from the mini-console screen on the ESXi VM, then move on to the other hosts in the cluster and repeat the process.
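The per-host split described above amounts to simple division. As a hedged sketch, with a hypothetical target of 150 GB of additional VSAN capacity across a 3-host cluster:

```shell
# Split the desired extra capacity evenly across the cluster's hosts.
EXTRA=150    # hypothetical: GB of additional VSAN capacity wanted
HOSTS=3      # hosts contributing storage to the VSAN
PER_HOST=$(( (EXTRA + HOSTS - 1) / HOSTS ))   # divide, rounding up
echo "Add one ${PER_HOST} GB VMDK to each of the ${HOSTS} hosts"
```

For 150 GB across 3 hosts, that is one 50 GB VMDK per host, matching the example in this step.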

 

 

Flag the Disks as Hard Disk Devices

Flagging the disks is required because the VMDKs are not physical devices, and their "personalities" must be explicitly configured in the lab to indicate whether they are for capacity (HDD), cache (SSD), or Flash-Capacity. This process is covered in detail in the article, Flagging ESXi SCSI LUNs for HOL.

For this example, the following PowerShell code is used to mark all 50 GB VMDKs on all hosts managed by vcsa-01a.corp.local as HDD devices. First, open a PowerCLI window and then execute the following commands:

# Connect to the vCenter Server that manages the hosts
Connect-VIServer vcsa-01a.corp.local -User administrator@corp.local -Password VMware1!
# Load the HOL SCSI flagging module
Import-Module C:\HOL\Tools\HOL-SCSI.psm1
# Flag every 50 GB LUN on every host as an explicit HDD device
Get-VMHost | Get-ScsiLun | Where-Object { $_.CapacityGB -eq 50 } | Set-ScsiLunFlags -ExplicitHDD

Do not proceed without following the proper process or your vPod will not work in all of the clouds.

 

 

Add the Disks to the VSAN

Once the devices have been properly flagged, they can be added to the VSAN datastore and take on their proper role. In this example, we are just adding capacity drives (HDD) to the existing VSAN. The process is similar for cache devices, although additional cache devices cannot be added without corresponding capacity devices.

 

 

Launch the Claim Disks Wizard

 

Navigate to the Datastore view in the Web Client and select the VSAN datastore from the Navigator window.

Select the Manage tab (1) and click on the Settings section (2)

Click on Device Backing (3) and then click on the Claim Devices icon (4)

 

 

 

Select the Disks

 

If necessary, expand the window so that the detected disks are visible in the top pane.

To claim all disks at once, select the top node (1) and click the Claim for capacity tier icon (2) to mark the devices for the capacity tier. The "Claim For" column will change from Do not claim to Capacity tier.

Click the OK button to proceed.

 

 

See the disk added to the group

 

After a few seconds, the bottom pane of the Device Backing screen will update and show the newly-claimed devices

 

 

Verify

 

To complete the verification, navigate to the Storage section (1) then browse to the RegionA01-VSAN-COMP datastore (2) then click on the Summary tab (3) and check that the datastore reflects the additional space (4). If it does not, you may need to click on the Refresh link (5) to have the host query the storage.

 

SSL: Certificate Management with vSphere 6


Much has changed with respect to SSL certificates and vSphere 6. Primarily, there is a Certificate Authority (CA) on the Platform Services Controller (PSC) that manages SSL certificates for the infrastructure components. Currently, that means vCenter, the PSC, and the ESXi hosts, but that may extend to other solutions as they are updated to integrate with vSphere 6.

In the Hands-on Labs, we have chosen a simple integration which involves extracting the root CA certificate from the PSC (running on the vCenter server within each "region" - vcsa-01a.corp.local or vcsa-01b.corp.local in the base vPods) and adding it to the Trusted Root Certification Authorities store on Control Center. This certificate has also been added to the Default Domain Policy, which will add it to the store on all domain members. This means that any certificate issued to a machine by the PSC's CA will be trusted by any member of the corp.local domain that is running a Windows operating system.

Linux machines require a little more work and the process is slightly different depending on both the flavor and version of Linux.


 

Certificate Management Settings on vcsa-01a.corp.local

 

The Certificate Authority runs on the Platform Services Controller (PSC). In the base vPods, this service is running embedded on vcsa-01a.corp.local. The certificates' properties are configured on the vCenter server host under Manage > Advanced Settings and have the certmgmt string in the options' names - the list above has been filtered (1). For the Hands-on Labs, the defaults are being used with the exception that the organizationalUnitName has been changed to "Hands-on Labs" (2). Note the 1825 day certificate lifetime (3); this is about 5 years and good enough for the expected lifetime of an HOL vPod.
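That lifetime works out as follows:

```shell
# 1825 days at 365 days per year:
YEARS=$(( 1825 / 365 ))
echo "${YEARS} years"   # prints "5 years"
```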

 

 

Host Certificate Configuration

 

After changing any certificate option on the vCenter Server, the certificates must be renewed on each host in order for the changes to take effect. The current certificate's properties for a host (1) can be viewed by selecting the host and going to Manage > Settings > Certificate

In addition to viewing the current certificate, there is an option (2) to renew the certificate on this screen. See the next step for a shortcut to renewing certificates for multiple hosts.

 

 

Renewing Multiple Hosts' Certificates

 

To simultaneously renew certificates for multiple hosts, click on an upper level container object -- the picture shows a cluster -- and click Related Objects > Hosts. Select all of the hosts that need renewed certificates, right-click and select Certificates > Renew Certificate. Confirm the action and the certificates will be renewed with the new settings. The new certificates can be viewed as in the previous step.

 

 

Control Center Certificate Authority (CA)

 

The Control Center machine runs a Microsoft CA server. In order to save resources, the web interface for the CA is not installed. To facilitate issuing certificates, a simple PowerShell module has been provided on Control Center at C:\hol\SSL\hol-ssl.psm1.

To use this module, open a PowerShell prompt and type:

Import-Module C:\hol\SSL\hol-ssl.psm1

This will load the module's four functions into the current shell:

 

 

 

Get-CaCertificate

 

This has already been done in the lab and the CA certificate has been stored in the default location: C:\hol\SSL\CA-Certificate.cer

For example:

Get-CaCertificate

When called with no parameters, the function exports the root CA certificate to the default location. Specifying the -CA_Certificate parameter with a file path as its value stores the certificate in a different location.

The CA certificate file is used by the New-HostSslCertificate and New-WildSslCertificate functions to create the PEM and PFX certificate bundles.
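The PEM bundle those functions build is typically just a concatenated trust chain: the issued host certificate followed by the root CA certificate. A minimal sketch, using placeholder file contents so the commands are runnable anywhere (real files would contain actual base64-encoded certificate data):

```shell
# Sketch: assemble a PEM trust chain -- issued certificate first, then the
# root CA certificate. Placeholder bodies stand in for real certificate data.
printf -- '-----BEGIN CERTIFICATE-----\n(host certificate)\n-----END CERTIFICATE-----\n' > myhost.cer
printf -- '-----BEGIN CERTIFICATE-----\n(root CA certificate)\n-----END CERTIFICATE-----\n' > CA-Certificate.cer
# Concatenate host cert + root CA cert to form the chain:
cat myhost.cer CA-Certificate.cer > myhost-chain.pem
grep -c 'BEGIN CERTIFICATE' myhost-chain.pem
```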

 

 

New-HostSslCertificate

 

This function will create a private key, build the Certificate Signing Request (CSR), request the CA to issue the certificate, download the certificate, and build the PEM and PFX versions of the certificate.

The required options are -HOST_SHORTNAME and -HOST_IPv4, which are the shortname and IP v4 address of the host. Additional options are -HOST_FQDN and -CA_Certificate. If -HOST_FQDN is not specified, the FQDN is assumed to be HOST_SHORTNAME.corp.local. The default value for -CA_Certificate is C:\hol\SSL\CA-Certificate.cer. This is the root CA certificate used to build the PEM and PFX files' trust chains.

For example:

New-HostSslCertificate -HOST_SHORTNAME myhost -HOST_IPv4 192.168.120.200

All of the files will be stored in the C:\hol\SSL\host\HOSTNAME directory, where HOSTNAME is the short name of the host as specified on the command line.

Note that the password on the PFX file is testpassword -- it has to be set to something and that one seems to be commonly used at VMware for PFX files. Don't ask me...

 

 

New-HostSslCertificateFromCsr

 

There are times when a solution manages its own private keys, and it is simpler to just have that solution create its own CSR. In this case, simply feed the resulting CSR to this function and it will pop out a valid certificate. This one isn't very exciting, but it works. In this example, the CSR was generated by the vShield Manager and downloaded to ControlCenter. I created a new directory to hold the CSR and certificate file at C:\hol\SSL\host\vsm-01a and then pulled the ripcord:

New-HostSslCertificateFromCsr -CSR vShieldCert.csr

This function takes a single parameter, -CSR, which is the path to the CSR file.

Once the certificate has been created, it must be uploaded to the appliance or solution. Typically, these solutions require that the root CA certificate be uploaded before the issued certificate will be accepted as valid. In this case, just grab the C:\hol\SSL\CA-Certificate.cer file and upload it as the root CA certificate before proceeding with the actual certificate for the appliance.

 

 

New-WildSslCertificate

 

(experimental)

Some people have asked about using wildcard certificates in the labs. In basic testing, the certificates created using this function seem to be usable in that capacity. Note that your mileage may vary and it is generally recommended to issue certificates per host/service.

I am fairly certain that SRM will not work properly with such wildcard certificates, and most customers will not be using wildcard certificates, but feel free to try it if you like.

By default, this function will create a certificate for *.corp.local and store it in C:\hol\SSL\wild

In the following example, this is specified explicitly using the -WILD_FQDN parameter. Be sure to single-quote the name since it must have the asterisk (*) in it to be used as a wildcard certificate:

New-WildSslCertificate -WILD_FQDN '*.corp.local'

The other parameter this function takes is the -CA_Certificate parameter that points to the root CA certificate used to build the PEM and PFX. By default, it uses C:\hol\SSL\CA-Certificate.cer.

 

Working with SDDC Base vPod Template


The SDDC Base vPod has been prepared by a team of current and former captains, each a specialist in one of the respective products, to give you a huge jump start on building out your labs. Your lab may include any or all of these SDDC products:

Each of these products has been installed with its base configuration completed, so it is in a ready-to-be-used state.  This article walks you through details of the baseline configuration and suggestions on how to use it in the vPod you are developing, as well as calling out anything that is specifically NOT there and anything you should NOT do.  These products have also been added to the lab startup scripts, so those details are included in each section.

In addition to the above, the following components have been added to this vPod:

If you need some, but not ALL, of the products for the lab you are building, it should be fine to remove unneeded components. Review the sections for the components you need to remove to see if there are any special instructions from the content developers on this topic.  The content developers are called out in each section below, so you know who to reach out to for assistance or questions.


 

NSX

Details on NSX in this vPod below

 

 

Content Owner

For any questions on this section, please contact Scottie Ray - sray@vmware.com

NOTE: The LabStartup.ps1 script must be enabled to allow for the starting of the NSX Controllers. This is done by commenting out the Set-Content -Value "NOT Implemented" -Path $statusFile and Exit lines found on Lines 222 and 223 of the script (the line numbers may be off depending on what else has been done to the script). You must also set your lab SKU in C:\DesktopInfo\desktopinfo.ini.

 

 

NSX Component Information

NSX Manager:

NSX Controllers

NSX VTEPs

NOTE: The LabStartup.ps1 script must be enabled to allow for the starting of the NSX Controllers, as per the directions in LabStartup.ps1. The controllers are also started via the VM power-up settings on ESXi. It is okay that both mechanisms are in place, as the script checks the power state before trying to power the controllers on.

 

 

Users & Roles

NSX Manager Admin Login

NSX Controller Login:

WebClient Account with NSX Admin Access  (IMPORTANT--YOU CANNOT USE THE HTML5 Client to work with NSX configuration)

 

 

 

Configurations Completed & Use Suggestions

The below configurations were completed.

At this point it is ready to deploy logical switches and edges.

The vPod Router is configured to work with both BGP and OSPF.  See the vPod Router section of this manual for configuration details.

One note for your testing: once you have configured your startup script, verify that all the clusters show green under Networking & Security > Installation > Host Preparation.  In the past, we have seen timing issues that create a "Not Ready" state.  This should not happen since we have added a cluster resolve function to the startup script; however, it is important to look at your NSX ready state during your smoke test phase.

 

 

Startup Script Details / Removal of NSX from this vPod

The startup script does the following things for NSX:

If you want to uninstall NSX from this vPod, complete the following in order:

  1. Delete and remove the Transport Zone by going to the NSX plugin and performing the following:  Installation>Logical Network Preparation>Transport Zones>Actions>Disconnect Clusters.  Once the clusters are disconnected, you can delete the transport zone.
  2. Delete the Segment IDs by going to the NSX plugin and performing the following:  Installation>Logical Network Preparation>Segment IDs>Edit and then delete the IDs.
  3. Unconfigure the VXLAN Configuration by going to the NSX plugin and performing the following:  Installation>Logical Network Preparation>VXLAN Transport> Click on the "Unconfigure" for each cluster.
  4. Shutdown and delete the three controllers on the Edge cluster
  5. Uninstall NSX from the hosts from the Web Client NSX Interface.  NSX > Installation > Host Preparation > Click Uninstall on the two clusters.  Note, this will require host reboots.
  6. Delete the Layer 1 NSX Manager VM from the vApp config.

Within the startup script file, the following must be removed or edited:

  1. Edit the C:\hol\LabStartup.ps1 script
  2. Remove the three controllers from the $VMs variable
  3. Remove the controller IPs from the $Pings variable
  4. Remove the NSX Noted IP Section

 

 

 

What Not to Do

Key don'ts for NSX in this vPod

 

 

vRealize Operations

 

Content Owner: John Dias

Host Name: vrops-01a.corp.local / 192.168.110.70

Users and Roles:

Base configurations completed / content installed and configured:

Management Packs installed:

Suggestions for use:

Start-up Script details:

Checks for the https://vrops-01a.corp.local/ui/login.action URL

What NOT to do / what is NOT there:

Can/Should this be removed if necessary, if so, HOW? (don't forget to remove anything that was added to the Startup script):

 

 

vRealize Operations - LDAP Import Sources

 

The LDAP import source is set to point to controlcenter.corp.local using the corp\administrator account over SSL.  Do not disable the SSL connection for authentication.

 

 

vRealize Operations - User Groups

 

The 'Operations' group in Active Directory is set to have the Administrator role on All Objects.  Include any example user accounts in this group to show Active Directory integration for role-based authentication.

 

 

vRealize Operations - Password Policy

 

The Password Policy and Account Lockout policy have been edited for the lifecycle of the 2017 HOL season.

 

 

vRealize Operations - vSphere

 

vCenter Connection details are configured as depicted above for the vCenter and Python actions adapter using administrator@vsphere.local.

 

 

vRealize Operations - Avoid Data Expiration

 

The default for "Time Series Data" is "6 months" as shown.  Click the pencil and change the value to never expire, or to at least 2 years.  If you have alerts or other content based on specific data, they will stop working once vR Ops removes the old data.

 

 

vRealize Operations - Capacity Calculations for HOL

 

Change the following settings in the capacity.properties file:

Location:  /usr/lib/vmware-vcops/user/conf/analytics/

capacityComputationStartTime=-5

capacityComputationPeriod=5

capacityPrecomputationDelay=1

capacityPrecomputationPeriod=5

precomputationRange=1
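Since the vR Ops appliance is Linux, these edits can be applied as a batch rather than by hand. The sketch below edits a local copy with placeholder contents (the starting values shown are illustrative, not the appliance defaults); on the appliance you would target the real file under /usr/lib/vmware-vcops/user/conf/analytics/ and typically restart the vR Ops services for the change to take effect.

```shell
# Sketch: rewrite the capacity.properties keys to the HOL-recommended values.
# A local copy with placeholder values stands in for the appliance's file.
F=capacity.properties
cat > "$F" <<'EOF'
capacityComputationStartTime=-30
capacityComputationPeriod=60
capacityPrecomputationDelay=10
capacityPrecomputationPeriod=60
precomputationRange=30
EOF
# Rewrite each key in place (GNU sed):
for kv in capacityComputationStartTime=-5 \
          capacityComputationPeriod=5 \
          capacityPrecomputationDelay=1 \
          capacityPrecomputationPeriod=5 \
          precomputationRange=1; do
    key=${kv%%=*}
    sed -i "s/^${key}=.*/${kv}/" "$F"
done
cat "$F"
```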

 

 

vRealize Operations - vCenter Server Adapter Collection Interval

 

Change the Collection Interval from 5 minutes down to 2 (and in some cases even 1 if the environment is VERY SMALL)

 

 

vRealize Operations - Edit the Default Policies for Primary Objects

 

Edit the Data Range in the Default Policy for vCenter Server, Datacenter, Cluster Compute Resource, Host System and Virtual Machine object classes.  Please see next step.

 

 

 

vRealize Operations - Change Data Range to 180 days

 

Change the Data Range from Last 30 days to 180 days in the Default Policies for primary objects.

 

 

vRealize Operations Management Pack - vRealize Log Insight

 

vRealize Log Insight Management Pack configuration is not required; it is automatically connected and configured during the Log Insight / vRealize Operations integration.

 

 

vRealize Operations Management Pack - Service Discovery

 

The Service Discovery Management Pack is installed and configured for the vcsa-01a vCenter.  Deep Discovery and Dynamic Application Group are enabled.

 

 

vRealize Operations Management Pack - Service Discovery (Credentials)

 

Credentials all use the same password - VMware1!

The Guest User Mapping CSV Password is VMware1!

 

 

vRealize Log Insight

 

A vRealize Log Insight v3.4 appliance is available as a tier 1 VM (Resized down to 2 vCPU and 4GB RAM to save resources)

Content Owner = John Dias

Host Name = vrli-01a.corp.local

IP Address = 192.168.110.24

Users and Roles

u: admin (uses Local authentication source) p: VMware1! (Super Admin)

u: cloudadmin@corp.local (uses Corp authentication source) p: VMware1! (Super Admin)

 

 

Spurious License Warning

 

A license warning appears in the UI that cannot be suppressed.  This is expected and due to a temporary license being installed.  Log Insight considers any license that is not permanent to be an evaluation license.  Be sure to note this in your lab introduction so that users are not concerned.

 

 

Log Insight - Content Packs

 

Installed Content Packs

Microsoft - Active Directory (Configure to collect on the Corp.local domain)

VMware - NSX-vSphere (Configured to collect from nsxmgr-01a)

Microsoft Windows

VMware vSphere - (Configured to collect from vcsa-01a and the hosts esx-01a, esx-02a, esx-03a and esx-04a)  **NOTE** Because the liagent is not supported on the VCSA, you will not see enhanced dashboard information for the vCenter-specific dashboards.  It is NOT RECOMMENDED to install the liagent in the VCSA!

VMware vRA7 - (Configured to collect from vra-01a and iaas-01a)

VMware vRO - Configured to collect from vra-01a

VMware vR Ops 6.x - (Configured to collect from vrops-01a)

If additional or updated content packs are required you can connect the vPod to the internet and download them via the Marketplace.

 

 

Log Insight - Registered Agents

 

Registered Agents

    ControlCenter.corp.local (Main Console Server, Active Directory Server, DNS Server)

    iaas-01a.corp.local (vRA Web Server)

    vra-01a.corp.local (vRA Appliance)

    vcsa-01a.corp.local (vCenter Server appliance)

    vrops-01a.corp.local (vR Ops Appliance)

Agent Configuration

    Additional agent configuration has been added to configure the vRA and vR Ops appliances to send their logs to Log Insight.

    Note: Even though all agents receive this configuration, it is ignored where not applicable.

 

 

Log Insight - Agent Configuration

 

Agents have been placed into Agent Configuration groups where applicable.  

Microsoft - Active Directory 2012 ControlCenter.corp.local (Main Console)

Microsoft - Windows ControlCenter.corp.local (Main Console)

vRealize Automation 7 - Linux vra-01a.corp.local

vRealize Automation 7 - Windows iaas-01a.corp.local

vRops 6.1.x - Sample vrops-01a

vSphere 6.x - vCenter (Linux) Essential vcsa-01a.corp.local

 

 

 

Log Insight - Start-up Script Details

 

The lab startup script checks to see if the Web-UI of Log Insight is up.

Essentially, the URL 'https://log-01a.corp.local/home' should return the header text 'vRealize Log Insight - Login'

 

 

Log Insight - What is NOT configured

Email configuration for alerts is NOT configured.

NTP time server synchronization: although it is configured to use the vPod NTP server, time synchronization does not work.

 

 

Log Insight - How to Remove

Removing the Log Insight appliance will have very little effect on the vPod.

Some caveats to be aware of:

1: Removing the appliance means there will be no syslog collector in the vPod. All of the objects that are configured to send their logs to Log Insight will no longer be sending them to an active log collector.

2: You should remove the Log Insight test from the Lab Startup script (detailed above)

 

 

vRealize Automation / vRealize Orchestrator

Content Owners: Jon Schulman, Kim Delgado

Hosts:

Dependencies (do NOT remove these if you are using vRA or vRO):

We recommend using the Chrome browser for vRA.

Useful stuff on Main Console:

 

 

vRA Tenant Config

Tenant: vsphere.local (default tenant) has only

NOTE: Do NOT create additional tenants unless there is a strong / valid business use case to do so. Best practice we must demonstrate for customers is to use the default tenant whenever possible.

Fabric Groups:

Endpoints

Business Groups:

Services (active, with icons):

Developers Business Group has

Notifications

 

 

vRA Users / Roles

The following domain users have been configured in AD and vRA (all use the standard VMware1! password) and have @rainpole.com email addresses assigned in AD:

Other users (not yet configured in vRA, available to use to walk through basic setup steps)

Default Administrator (for tenant management, etc)

 

 

vRA - Lab Startup Script

 

The Lab Startup Script is configured to validate that the vRA environment is up, functional, and ready for use.  The lab status is set to "Not Implemented" by default, but the following code in the actual LabStartup.ps1 script is pre-configured for you:

If you make changes to your vPod which break vRA, the test tool will catch a lot of this, and your vPod will have a failed status on restart. Details of failures can be found in the folders C:\hol\Tools\vRPT\test-output and C:\hol\Tools\vRPT\html-reports.

 

 

vRA /vRO Removal

If you need to remove the vRA / vRO components, you will need to complete the following:

 

 

Utility Tier-2 VM

Content Owner: Burke Azbill

Suggestion For Use: The Utility VM is primarily in the pod as an e-mail server as well as a web-based e-mail client. Further details on that follow. The VM provides additional features as well, including:

VM History: The Utility VM was initially developed for use by the Global Center of Excellence in its LiVefire vPod. The Confluence page for the Utility VM is: https://confluence.eng.vmware.com/display/gcoe/Utility+VM

Start-up Script details:

NOTE: The LabStartup.ps1 script must be enabled to allow for the starting of the Util VM. This is done by commenting out the Set-Content -Value "NOT Implemented" -Path $statusFile and Exit lines found on Lines 222 and 223 of the script.  You must also update your Lab SKU in C:\DesktopInfo\desktopinfo.ini.

The LabStartup.ps1 script has the following entries:

 

 

 

VM and Service URLs and Credentials

Although these credentials are provided in the SDDC Base pod's README.txt found on the desktop, they are added here for completeness:

E-Mail: (SMTP/POP3/IMAP)

mail.rainpole.com (192.168.120.91)

Webmail administration: https://mail.rainpole.com/iredadmin postmaster@rainpole.com / VMware1!

NOTE: Login to the Webmail Administration URL with the credential above to see all the accounts pre-created for you.

Webmail Client: https://mail.rainpole.com/mail ldev@rainpole.com / VMware1! rpadmin@rainpole.com / VMware1!

iTop (open source CMDB / Help Desk / IPAM; see http://www.combodo.com/itop for info):

http://itop.rainpole.com/itop  admin / VMware1!

GitLab:

http://gitlab.rainpole.com:82

Standard/Admin login: root / VMware1!

LDAP Login: rpadmin@rainpole.com / VMware1!

phpMyAdmin: (mySQL Database management)

https://mail.rainpole.com/phpmyadmin root / VMware1!

Webmin: (Linux OS Management web UI)

https://mail.rainpole.com:10000 root / VMware1!

Jenkins:

http://jenkins.rainpole.com:18080/jenkins administrator / VMware1!

 

 

 

E-mail Management Bookmarks

 

The Util VM provides e-mail services for your pod. Bookmarks have been added for "Manage Mail Domains & Accounts" as well as "Access Webmail".

The e-mail services provided by the VM are NOT integrated with Active Directory, but the VM provides an easy-to-use web interface for managing domains and the user accounts in each domain.

 

 

E-Mail Management Login

 

You may access the Mail Domain and Account management interface from the bookmark. Use the credentials: postmaster@rainpole.com / VMware1! to login.

 

 

Access Domains and Accounts

 

  1. Once logged in, click the Domains and Accounts tab
  2. Under the Users Column, click the blue number to access the user list for that domain
  3. If you need to add a domain, you may click the +Add domain tab

Since @rainpole.com has historically been the most used fictional domain for labs, many of the user accounts in Active Directory have had accounts created in this domain. Accounts may be safely created in multiple domains and additional domains may also be created if needed.

 

 

Users under Domain: rainpole.com

 

Once you've clicked that user count from the previous step, you are provided a list of all the accounts that have already been created for the selected domain.

You may edit an account by clicking the Gear/Pencil icon in the Mail Address column next to the account you wish to modify.

If you need to add a new user account, click the +User tab.

 

 

Adding a new E-mail Account

 

Adding accounts is quite simple using the web interface. Once you have clicked the +User tab, you are presented with the form shown above. For each of the accounts already created, the fields were filled out as shown above. Leaving the Mailbox Quota blank sets no quota. The password for ALL accounts should be the standard: VMware1!

After clicking the "Add" button, the screen will indicate "User created. Add one more?" and an additional (OPTIONAL) form will be presented. It is NOT necessary to make any further changes or to click the Save Changes button.

 

 

RoundCube E-mail Client

 

You may access webmail using the bookmark found in the HOL Admin folder as shown in the screen above.

Since the mail server and client support multiple domains, you must specify the full e-mail address as the username when logging in.

 

 

Webmail Interface

 

The RoundCube e-mail client is an easy-to-use, web-based mail client commonly used by website hosting providers, as you can see in the screenshot above.

 

 

Util VM Removal

Some apps in the SDDC Pod may be configured to use the Util VM's e-mail server (or any of its other provided services). In particular, vRealize Automation IS set up to use it. If you choose to remove the Util VM from the pod, be sure to re-configure vRealize Automation (and any other products) to NOT use e-mail.

Once you have confirmed no services in the pod are configured to use the Util VM for mail, you may safely delete the util-01a VM using vCenter within the pod as it is a Tier 2 (nested) VM.

Remove the util-01a entries from LabStartup.ps1 as per the notes in this section (the $VMs and $URLs arrays).

 

Flagging ESXi SCSI LUNs for HOL


There are times in the lab when it is necessary to showcase a feature that requires SSD storage. In a nested environment, there are two ways to handle "spoofing" the host to have it detect virtual SCSI devices as SSD (or not):

  1. SATP claim rules
  2. VMX edits (ExtraConfig options)

Of these two options, the first is the simplest to manage within a vPod and allows lab teams to manage the configuration on their own. In a vCloud Director environment, where access to the underlying infrastructure is restricted, editing the VMX files directly is not possible, so the second option is very difficult to implement. This article discusses the first option, but addresses the second in a little more detail for reference purposes.
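For reference, a SATP claim rule that tags a device as SSD boils down to a couple of esxcli commands on the host; the PowerCLI helpers described later in this article wrap this kind of logic. The sketch below only generates the commands, since they must be run in an ESXi shell (or via Get-EsxCli); the device name is an example:

```shell
# Dry-run sketch: the esxcli commands behind marking a device as SSD via a
# SATP claim rule. DEV is an example device name; run the generated commands
# on the vESXi host itself, not here.
# (Use --option=disable_ssd to explicitly mark a device as non-SSD.)
DEV=mpx.vmhba2:C0:T0:L0
{
  echo "esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=$DEV --option=enable_ssd"
  echo "esxcli storage core claiming reclaim --device=$DEV"
} > mark-ssd.sh
cat mark-ssd.sh
```

The reclaim step makes the rule take effect without a reboot, provided the device is not in use.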


 

WARNING: Explicit marking of ALL devices is required!

 

Even if you're familiar with the process of creating claim rules to mark devices as SSD in ESXi, please read this section. The time you save may be your own.

In the Hands-on Labs, we run our pods on several different clouds. While these clouds have similar hardware configurations, there are some differences. This is particularly important when the storage backing the environment contains SSD devices. If the underlying storage identifies itself as "SSD" to a physical ESXi host, that information is passed into the virtualization layer so that the VMs also detect that they are running on SSD storage.

This is interesting because an ESXi host running as a VM on a pESXi host with an "SSD" datastore will present ALL VMDKs as SSDs. In most labs, this is not a problem. However, in order for a hybrid Virtual SAN solution to initialize, it must have non-SSD storage. Yes, the new Virtual SAN 6.x all-flash option will accept this configuration, but it requires a different license and some additional prep of the SSD devices (more on that later).

When you develop your lab in our HOL-DEV environment, which is based on traditional (non-SSD) SAN storage, all devices show up to your vESXi hosts as non-SSD by default. You apply the claim rules to flag specific devices as SSD and your VSAN configures properly. However, when we move your pod into one of the hosting clouds which has SSD-based storage (like EMC XtremIO), your pod falls apart because VSAN can't find any HDDs to use for capacity.

The solution is to explicitly mark all SCSI devices attached to your ESXi host with the appropriate "personality." This means that you must tag every device as either SSD or non-SSD, based on how you expect it to be used in your lab. Need to know how to do that? Read on!

 

 

NEW in vSphere 6!

 

In previous versions of vSphere, it was only possible to set the appropriate flags using the command line on each ESXi host. This functionality has been added to the vSphere 6 Web Client and greatly simplifies flagging of devices and even allows users of a lab to easily make adjustments on the fly.

This is configured per host and per device:

  1. Select a host from the inventory pane
  2. Open the Manage tab
  3. Select Storage
  4. Select Storage Devices
  5. Click on the device you need to flag
  6. Locate the toggle button -- this will display as "F" to indicate it will switch to Flash or "HDD" to indicate that it will toggle back to Hard Disk

Note that this button only sets and "unsets" the flag to mark a device as an SSD. It does not set the explicit flag to mark a device as non-SSD, so this only handles part of the required configuration for a Hands-on Labs vPod.

 

 

Let's Make this Easier

 

Wouldn't it be great if there was a way to report on the rules you have configured on each of your hosts? Wouldn't it also be great to be able to set the proper rules on all hosts as well?

We have created a pair of PowerCLI functions that do just that. For 2016 HOL base vPods, these functions are included in the C:\HOL\Tools directory in a module called HOL-SCSI.psm1. This module is not loaded by default, but can simply be loaded into a PowerCLI window by executing the following command:

Import-Module C:\hol\tools\hol-SCSI.psm1

Once this module has been loaded, and you have logged in to the appropriate vCenter with the Connect-VIserver cmdlet, you can execute a single line to report on all vESXi hosts managed by that vCenter:

Get-VMHost | Get-ScsiLun | Get-ScsiLunFlags

See the sample output in the picture. It will show a table with hostname, SCSI LUN Canonical Name, the rules configured for that LUN on that host, and the current SSD flag state for that LUN. Note that, if a device is in use, a reboot is required for the rule to take effect. Also note that any device that does not have any flags set will not show up in this list.

Ideally, all hosts in a cluster will have a consistent disk configuration so verifying the configuration would be easier, but that is not always possible in the labs. 

To limit the scope of the above report to a specific cluster, use

Get-Cluster "cluster name" | Get-VMHost | Get-ScsiLun | Get-ScsiLunFlags

 

 

Example: Preparing new hosts - Marking as SSD

 

In this example, we have 3 new hosts that we want to add to the cluster. They're in maintenance mode, so we filter Get-VMHost based on that to get our working set. (1)

Next, we look to see if there are any flags set on any LUNs attached to these hosts. (2) Because these are fresh builds, nothing should be there, but it is always good to check.

Next, we run the Set-ScsiLunFlags command without options (3) to flag the mpx.vmhba2:C0:T0:L0 device on all 3 hosts as an SSD. The default action of this function is to mark the specified device(s) with the SSD flag.

Finally, we report on that configuration to show that the rules were created (4).

 

 

Example: Preparing New Hosts - Marking as non-SSD

 

In this example, we have already marked the "cache" device, mpx.vmhba0:C0:T0:L0, as SSD, as you can see in the output (1)

Now, we need to mark an additional device as HDD in order to make this pod portable across clouds. To do this, we pass the -ExplicitHDD switch to the Set-ScsiLunFlags function:

$hosts2Prep | Get-ScsiLun -CanonicalName mpx.vmhba2:C0:T0:L0 | Set-ScsiLunFlags -ExplicitHDD

You can see the effect this has in the Get-ScsiLunFlags output (3)

We would repeat this process using the CanonicalName of each LUN that will be used for hybrid Virtual SAN capacity.

 

 

EEK! An Error??

 

In this example, I had a lot of SCSI LUNs that I had to mark as "non-SSD," so I figured I would mark every drive on the host as an explicit HDD and then go back and flag only the cache LUN as SSD. This would save me time since I would not need to specify the CanonicalName of each capacity LUN.

$hosts2Prep | Get-ScsiLun | where { $_.LunType -eq "disk" } | Set-ScsiLunFlags -ExplicitHDD

The command worked as intended: a rule was created for each "disk" object on each of the hosts to flag all of them as "non-SSD" .. but, why the errors?

In this case, I was trying to flag the ESXi host's boot disk. While the rule has been created, it is not possible to unclaim the boot disk because it is currently in use. This is not a problem since Virtual SAN cannot use the boot disk for either cache or capacity, and it does not matter to the operating system whether it is running on an HDD or an SSD. The function has no simple way of knowing if the LUN is in use until it tries to unclaim it, so you get an error. The nasty red messages are the price you pay for using the sledgehammer to mark ALL devices at once.

 

 

What About All-Flash VSAN?

 

Does it work for All-Flash VSAN?  Yes, it works for All-Flash VSAN!

With the release of vSphere 6, there is a new way to configure Virtual SAN. This configuration uses some SSD devices as cache and others as capacity. This configuration requires a little more work because the user must determine which ones will be used for which purpose.

This function can handle that configuration as well: for any devices that you would have flagged using the ExplicitHDD switch in a Hybrid configuration, use the FlashCapacity switch instead. This switch adds the flags that allow a properly-licensed (Enterprise) Virtual SAN to recognize those SSD devices as capacity rather than cache.

Starting at (1) with the cache SSD devices flagged, just change the switch (2) and execute the command:

$hosts2Prep | Get-ScsiLun -CanonicalName mpx.vmhba2:C0:T0:L0 | Set-ScsiLunFlags -FlashCapacity

Verify the output (3) and notice that the rules have been created but some of the newly-flagged devices are not currently showing up as SSD yet. This happens sometimes when the host is busy or, like here, you execute a bunch of flagging commands in a row and the host hasn't caught up yet. A reboot of the host will fix this by reapplying the rules, but this gives me a chance to demonstrate another feature of the Set-ScsiLunFlags function: ReclaimOnly. This will not change any rules, but will tell the host to unclaim the specified device(s) and attempt to reclaim using the current rule set. This is shown in (5) by specifying the ReclaimOnly switch:

$hosts2Prep | Get-ScsiLun -CanonicalName mpx.vmhba2:C0:T0:L0 | Set-ScsiLunFlags -ReclaimOnly

As you can see in (6), this was successful and the devices are now showing up properly.

 

 

Reference: VMX Edits

I mentioned that the same behavior can be accomplished by editing the VMX file (or changing a VM's Advanced Settings) if you have access to the vCenter or pESXi hosts that run the vESXi virtual machines. Typically, this is not the case in a hosted environment like OneCloud.

Still, it might be interesting to know how to do this, so it is included here for informational purposes.

The key you set is of the form, "scsiX:Y.virtualSSD" and the value is either "0" (not SSD) or "1" (SSD). As with the claim rules, each device must be specified in this manner in order to ensure that its expected personality is consistently presented in each cloud. The X and Y values represent the virtual SCSI bus (X) and SCSI ID (Y) where the device resides in the virtual machine configuration.
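As a quick illustration, the sketch below generates the VMX-style lines for a hypothetical three-device layout. Each entry is bus:id:flag, where flag 1 means SSD and 0 means non-SSD; the device layout is made up for the example.

```shell
# Sketch: emit scsiX:Y.virtualSSD lines for a set of virtual devices.
# Entries are bus:id:flag (flag 1 = SSD, 0 = non-SSD) -- illustrative values.
DEVICES="0:0:0 0:1:1 0:2:0"
: > vmx-lines.txt
for d in $DEVICES; do
    bus=${d%%:*}; rest=${d#*:}; id=${rest%%:*}; flag=${rest#*:}
    echo "scsi${bus}:${id}.virtualSSD = \"${flag}\"" >> vmx-lines.txt
done
cat vmx-lines.txt
```

The resulting lines would be added to the vESXi VM's .vmx file (or as Advanced Settings through the vSphere client).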

It is also possible to configure this in an OVF file that can be imported to vCenter or vCloud Director using the following syntax:

<vmw:ExtraConfig ovf:required="true" vmw:key="scsi0:0.virtualSSD" vmw:value="0"/>
<vmw:ExtraConfig ovf:required="true" vmw:key="scsi0:1.virtualSSD" vmw:value="1"/>
<vmw:ExtraConfig ovf:required="true" vmw:key="scsi0:2.virtualSSD" vmw:value="0"/>

Note, however, that a user of vCenter must accept the ExtraConfig options at OVF deployment time in order for them to be pulled in.

For vCloud Director, it becomes more complicated because most ExtraConfig options are flagged as unsafe by default and must be explicitly added to the "whitelist" in every instance of vCD that manages any cloud where this OVF must be imported. William Lam wrote up the following article about this after working with us in 2014. http://www.virtuallyghetto.com/2014/05/configuring-a-whitelist-for-vm-advanced-settings-in-vcloud-director.html

This is only part of the solution, however. Even with the whitelist in place, the GUI import and export capabilities of vCD cannot be used to manage templates containing ExtraConfig options. Rather, these templates must be managed using OVFTOOL and the --allowExtraConfig switch must be specified each time.
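As a sketch, an OVFTOOL upload to a vCD catalog would look something like the following. The user, host, org, and catalog names here are illustrative, not the actual HOL infrastructure:

```
ovftool --allowExtraConfig vpod-template.ovf "vcloud://user@vcd.example.com:443?org=MyOrg&catalog=MyCatalog&vappTemplate=vpod-template"
```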

 

Increasing Storage for Base Linux Machines


The core team has provided some base Linux machines (CentOS and VMware Photon) for use as Layer 1 and Layer 2 VMs in vPods. These machines have been created with a minimal disk configuration in order to maximize their flexibility and minimize overhead.

It is entirely possible that you may require additional storage attached to these machines in order for them to meet your needs. There are many ways to handle this kind of expansion, and the specific actions are largely dependent on your use case. Nevertheless, this article is intended to provide some guidance around this process and to outline a simple way to increase the usable storage for the CentOS base machines. This is a rather simplistic way to increase the storage for the root volume group. In a production environment, a separate volume group would likely be created to store applications and data. However, this is a lab and we like simple for labs.

Note that it is possible that VMware Photon may be expanded in a similar manner, but this process has been written based on the CentOS configuration in these base machines.


 

Deploy one of the Base CentOS machines

 

Add a new virtual machine to your vPod.

  1. Select vApp from the filter drop down
  2. Enter "base-" into the filter box
  3. Locate the template you want and select it
  4. Click the Add button, then follow the standard process to pull the Linux VM into your pod

 

 

Disable "Guest Customization"

 

If you have deployed a copy of the Linux images into vCloud Director as a Layer 1 virtual machine, it will default to having the "Enable guest customization" flag set, which will cause the root user's password to be reset to something other than VMware1!

To get started, right-click on the new Linux VM and select Properties

  1. In the Properties screen, click on the Guest OS Customization tab
  2. Make sure the Enable guest customization checkbox is NOT checked

 

 

Option 1 - Add a new Hard Disk device

This is Option #1 and is usually the "safest" approach

 

 

Add New Hard Disk Device to VM

 

  1. Switch to the Hardware tab
  2. Click the Add button in the Hard Disks section
  3. Change the disk size to the amount of space you need to add

Click the OK button

 

 

Power on VM

 

  1. Click on the new Linux VM to select it
  2. Right click on the VM and select Power On from the menu, or click the Power On icon on the toolbar

 

 

Open a console to the VM

 

Once the VM powers up, you can open a VMRC console to the VM by simply clicking on the blue screen in the vApp Diagram.

As another option, you can get its IP address on the Virtual Machines tab (1) on the vApp in vCD. If the VM template has been configured for DHCP, this will be on the 192.168.100.0/24 network. In the above example, the address (2) is a static address. This address can be used to get into the VM from ControlCenter using PuTTY.

Login to the machine as the root user with the VMware1! password. If that does not work, be sure you remembered to un-check the guest customization box prior to booting the machine. Otherwise, go to the Properties > Guest Customization tab now to discover the new root password.

 

 

Check current disk space

 

Enter the following command to check the file system usage

df -h

Note that the base templates have little free space for anything but the OS and a simple set of applications.

 

 

Identify the new device

 

Enter the command

fdisk -l | less

to list the available devices. In this case, we expect it to be /dev/sdb

Note that /dev/sda already has two partitions on it (/dev/sda1 and /dev/sda2), but /dev/sdb is empty.

 

 

Format the new device

 

Enter the commands needed to create a new partition which spans the whole device:

  1. Act on the /dev/sdb device
fdisk /dev/sdb
  2. Create a new partition
n
  3. Create it as a primary partition
p
  4. Start the partition at the beginning of the device (that's the number 1)
1
  5. Use the whole device (just press Enter)

 

 

Set the partition type so that LVM can use it

 

  1. Change the type of the partition
t
  2. Set it to "8e" to indicate Linux LVM
8e
  3. Write the changes
w

 

 

Enable LVM to use the new device

 

  1. Create the LVM physical volume on the new device
pvcreate /dev/sdb1
  2. Extend the volume group
vgextend VolGroup /dev/sdb1
  3. Extend the logical volume
lvextend /dev/mapper/VolGroup-lv_root /dev/sdb1

 

 

Grow the filesystem into the new volume space

 

  1. Now that the space has been allocated to the LVM volume group, we need to resize the file system to use the space:
resize2fs /dev/mapper/VolGroup-lv_root

 

 

Verify

 

  1. Execute the command to show the file system sizes and utilization
df -h
  2. Verify that we have 20 GB free

 

 

Option 2 - Grow the Existing Hard Disk

This option is not for the faint of heart, and can be a little trickier to pull off in a vCloud Director environment. Still, if you only need a little more space and don't want to deal with adding a new disk, you can do this one.

The first difference with this method is that you need to figure out how to grow the disk. Because we use Fast Provisioning in vCD for HOL, regular users are not able to grow a disk once it has been deployed. There are two main options here:

  1. Ask Bill or Doug to consolidate the drive for your Linux machine.
  2. Deploy the base Linux vApp from the HOL-Base-Templates catalog into your cloud and resize the disk during deployment

Yes, #2 is kind of a sneaky workaround, but it doesn't do the whole job: the guest OS still sees the original partition size, so the partition and filesystem must be grown as well.

 

 

Check the current space and disk partitions

 

Because this is a more advanced operation, we will skip the fanfare and jump right in. Assume I deployed this machine with a 10 GB disk instead of the default 2 GB.

Login as root and look at the current allocation:

df -h

List the partitions on disk /dev/sda using

fdisk -l /dev/sda

Notice that the operating system continues to use only the 2 GB capacity, although the disk shows up as 10.7 GB. This is because the partition is still the same size as it was on the 2 GB disk.

 

 

Launch fdisk

 

Open fdisk on the /dev/sda device. Note that the -c switch disables the legacy "DOS-compatibility mode" and the -u switch displays all sizes in sectors. You want both.

fdisk -cu /dev/sda

Note that /dev/sda2 has a type of 8e, which is Linux LVM. This is the partition we want to work on.

 

 

DANGER: Be careful!

 

In this step, we need to DELETE the existing (smaller) partition and create a new (bigger) one in its place. Done carefully, this is a safe process, but there is a possibility of data loss if you make a mistake here.

Please read through the entire process before jumping in.

Tell fdisk to delete a partition

d

Tell fdisk to delete partition 2

2

See how easy that was? Now, don't go ANYWHERE.

Tell fdisk to create a new partition

n

You want a Primary partition

p

You want it to be partition 2 (just like the one you deleted)

2

By default, fdisk will select the proper start and end sectors to build the partition in the current free space, so you can safely accept the defaults. Just press Enter twice.

Finally, you need to change the partition type from the default of Linux (83) to Linux LVM (8e):

Tell fdisk that you need to change the type

t

Indicate the partition number

2

Specify the new type

8e

At this point, your changes are theoretical. That is, they're in RAM, but have not been committed to disk. Use

w

to commit the changes, or

q

to bail out and try again.

Notice the warning that the kernel is still using the old partition table. This is very important: you need to reboot. Now. Do not pass "GO" and do not collect $200. Just type:

shutdown -r now

Anyone who told you that you could do this without downtime was thinking of something else... see Option #1

 

 

LVM Physical Volume Expansion

 

Once you have rebooted and logged in again as root, run

fdisk -l /dev/sda

and notice that the End of the second partition is further out and the number of Blocks is much larger -- you can compare the screenshot in this step to the screenshot 2 steps back for reference.

Now that the partition has been resized, we need to allocate all of these new blocks to LVM. To do that, we need to resize what LVM sees as the physical volume. To see LVM's view of the world, execute

pvs

Notice that the "PSize" is 1.51g, which is pretty small given that we just added 8 GB to our original 2 GB drive. The command to grow the physical volume is

pvresize /dev/sda2

Once that completes, run

pvs

and notice that the PSize is closer to what you would expect.

 

 

But wait, there's more!

 

Almost there!

Use the

lvs

command to show the Logical Volume size. This is represented by the "LSize" value, which is the current size of the logical volume. Just like the physical volume, we can grow this one:

lvresize /dev/mapper/VolGroup-lv_root /dev/sda2

This command tells LVM to grow the root logical volume using the free space on the LVM physical volume on /dev/sda2

Once that is complete, you could execute lvs again, but you have one final object to extend before you are finished.

With the space available, we need to extend the filesystem onto that space. This command takes a little longer to complete, and the time will depend on how much space you are adding. For 10 GB in a VM it usually takes about 20-30 seconds.

resize2fs /dev/mapper/VolGroup-lv_root

 

 

 

Verify

 

Once the filesystem resize completes, you are finished. Use the following command to see all of the space you now have available to you!

df -h

 

 

Option 3 - No LVM (like VMware Photon)

 

In the case of VMware Photon, the default installation uses a single drive that is partitioned into a boot and a Linux filesystem. There is no LVM, so the process outlined in CentOS Option 1 is not possible.

A similar process can be used for other Linux machines that have not been installed with LVM-managed disks.

 

 

Photon does not use LVM

 

This is very similar to the CentOS Option 2 described in the previous step, but is slightly simpler due to the lack of LVM. The underlying VMDK must be grown to the proper size before continuing. In my example, I have deployed a Photon VM with a 16 GB disk, but I have grown the VMDK to 20 GB.

Login as the root user and check out the current configuration:

df -h

Notice the 14 GB of current free space

fdisk -l /dev/sda

Notice the 16 GB /dev/sda2 partition defined as a Linux filesystem

 

 

Open fdisk and rebuild the partition

 

Similar to the steps taken in Option 2 for CentOS, remove partition 2 and then recreate it. The version of fdisk in Photon is slightly different, but the commands are the same:

fdisk /dev/sda
d
2
n
2

[press Enter twice to accept the default Start and End for the new partition]

Note the message in white (1) that a new partition of size 20 GB has been created -- this is the size I expected here, based on the new size of the VMDK.

Write the changes to disk!

w

Note that the red errors/warnings (2) and (3) are expected.

Reboot the machine for a clean start with the new partition table

reboot

 

 

 

Finish up

 

Once the machine has rebooted, login as root

Notice from the output of

df -h

that the filesystem is still the original size (14 GB), but

fdisk -l /dev/sda

now shows a larger partition (20 GB in this example)

We need to extend the filesystem with

resize2fs /dev/sda2

Again, use

df -h

and notice that the filesystem is now larger (17 GB in this example)

That's it!

 

Setting Static IP for Base CentOS Machines


In the VMware Hands-on Labs, we like to keep the configurations consistent, predictable and supportable. Part of that is setting static IP addresses for any machines used in the lab. If you need a Linux machine, but you are not familiar with the specific distribution we have available, it can be challenging to figure out which files need to be edited to make the change you need.

This article provides one method of doing that. We have provided the "system-config-network" tool in the CentOS installation to assist with getting all of the right pieces set up for you in a simple, almost-GUI interface.
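Under the covers, the tool writes the standard CentOS network-script files. For reference, a static configuration corresponds roughly to entries like these (values illustrative, matching the default 192.168.120.0/24 vVM network) in /etc/sysconfig/network-scripts/ifcfg-eth0:

```
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.120.51
NETMASK=255.255.255.0
GATEWAY=192.168.120.1
DNS1=192.168.110.10
```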


 

Network Configuration

 

Open a VMRC console to the Linux VM and login as the root user

Run

system-config-network

 

 

 

system-config-network

 

It doesn't look like much, but it is better than searching for the right text files and parameters...

Use the arrow keys on your keyboard for navigation and the Enter key to make selections.

To get started, press Enter to view and change the device configuration

 

 

Select a Device

 

Unless you have added more Ethernet adapters, you should only see one device here.

Press the Enter key to select this device

 

 

Device Configuration

 

If you are reconfiguring one of the base templates, you likely want to change the IP address (1) and the default gateway (2)

Be careful with the default gateway and ensure that it is the .1 address on the network that you're using for your new IP address. The default static IP is on the 192.168.120.0/24 network, which is used for Site A virtual machines (vVMs). If you are intending to use this VM as a layer 1 VM (first level in vCD), you need to change that to 192.168.110.1.

Use the Tab key to highlight the Ok "button" and press Enter

 

 

Save changes

 

Use the Tab key to select the Save "button" and press Enter

 

 

DNS Configuration

 

Changing the hostname of the Linux machine is accomplished in the DNS configuration section.

Press the down arrow to select that option and press Enter

 

 

Setting the Hostname

 

The cursor is in the Hostname blank by default. To wipe out the existing one, press the Delete key several times.

Enter the hostname you want to use for this machine -- ideally, you will put this hostname into DNS on ControlCenter along with the IP you have selected so that the name resolves elsewhere, too!

Use the Tab key to select the Ok "button" and press Enter

 

 

Don't forget to Save!

 

Back at the main screen, use the Tab key to select the Save&Quit "button" and press Enter

 

 

Hostname warning

 

Note the very odd warning that shows up if you change the hostname. Also note that the hostname has not actually changed (yet). The simplest way to effect that change is to reboot the machine. This ensures that everything comes up clean and you can carry on.

As a simple test, ensure that you can ping the management network's gateway (192.168.110.1) and controlcenter (192.168.110.10).

NOTE: If you are on a MacOS X machine and accessing the VMRC via vCD, the Control key is gobbled, so Ctrl+C cannot be used to stop a running command. DO NOT execute a ping on the console without specifying the -c 2 switch, which will send two pings and terminate. Failure to include this switch will leave a runaway "ping" running on the console until the machine is rebooted (or the process is killed from another session).

...and perform DNS lookups using the DNS on Control Center

ping -c 2 controlcenter.corp.local

and

ping -c 2 controlcenter

 

 

Adding a Virtual Appliance to a vPod


This article briefly covers the process used to add virtual appliances to your vPod as Layer 1 VMs. Even if you think you know how to do this, we feel this article is worth skimming because there are things that people forget every year. Even those of us who do it all the time fall victim to the idiosyncrasies of vCD.


 

My Lonely vPod

 

This is my example vPod. It was deployed using our Minimal base template, which means Main Console and the vpodrouter are very lonely. Let's add an appliance and give them some friends!

Get started by clicking on the Add VM button indicated in the diagram

 

 

New Virtual Machine Wizard

 

I have decided that I want to add the vRO appliance to my vPod. I know that this is available in the HOL-Dev-Resources catalog in the vApp called vRO-01a.

  1. Use the search options to locate the vApp and VM that you want to add to your pod. If you don't know the name, it is much easier to browse using the Catalog screens than this one, so close this window, get the name of the appliance from the catalog, then come back.
  2. Once you locate the appliance that you want, click on the VM's name in the upper pane
  3. Then click the Add button to mark that VM for addition to the pod -- if you want to import multiple appliances, you can do that, and they will be collected in the bottom pane
  4. Once you have marked the appliance for addition to the pod, click the Next button to move on.

 

 

Configure Virtual Machines

 

I have skipped the Accept License and Configure Resources sections of this wizard since they're pretty self-explanatory and not that interesting. Just click Next twice to get to this screen.

  1. Make sure the Computer Name is correct. We typically use the shortname of the VM here to keep things simple and unique. I usually like to use all lowercase for the names.
  2. Next, select the vApp network. This is where all VMs in a vPod attach. If you are building a new pod, it will be called vAppNet-Transit
  3. Select DHCP for networking. No matter how you are going to configure the IP, just tell vCD that it will be DHCP.
  4. Click the Next button to move on
  5. Note that you can click Next again and then Finish to begin copying the appliance into your pod.

 

 

STOP - Do not pass Go, Do not collect $200

 

Once the appliance is copied into your pod, which can take a while, you will be returned to the vApp Diagram screen. Eventually, the status will change from Updating to Stopped and you will be tempted to fire up your pod. Don't.

Before you do anything else, you MUST edit the properties of the appliance.

 

 

 

Edit Properties of the Appliance VM

 

  1. Click on the newly-imported appliance VM to select it
  2. Click on the Action (the picture of the gear) menu
  3. Select Properties

 

 

Disable Guest Customization

 

vCD really loves Guest Customization. So much so that it turns it on each time you copy a VM. Unfortunately, most appliances DO NOT love Guest Customization. In fact, it causes many appliances to become unusable.

So, click on the Guest Customization tab and uncheck the Enable guest customization checkbox.

This is the #1 issue we have with importing appliances into vPods. Notice that you must do this BEFORE you power the appliance up the first time. If you forgot or jumped the gun, shut the appliance down, delete it, pull it back into your pod and try again.

Before you close the Properties, it is worth clicking on the Guest Properties tab to see what's going on there.

 

 

Set Guest Properties

 

Not all appliances use this tab, but the ones that do will not be happy if you don't feed them the information they need.

Notice the angry red outlines on the Confirm password boxes. That's not me this time, that's vCD telling you that these are required. It isn't kidding, either. If you don't fill these blanks in, the appliance VM won't even start. In fact, leaving these empty will cause your entire pod to fail to start, and all you get is some vague error message in the vCD log.

All of these properties must be filled in accurately THE FIRST TIME.  You only get one chance in most cases.  Generally, once the appliance boots it will not look at these properties again and you will be stuck.  Delete and repeat.  The expression in the United States is "Experience is the best teacher but the tuition is very high."  Be warned.

So, do everyone a favor and make sure you fill these out.

 

 

Set Guest Properties - Finished

 

It is important that you fill out each of these blanks completely and correctly. This diagram shows the completed page for the vro-01a.corp.local appliance.

Note the IP address: 192.168.110.79. This IP address has been assigned to the DNS name vro-01a.corp.local in the DNS server that runs on ControlCenter. Note also that the reverse DNS lookup of the IP address returns the vro-01a.corp.local name. Many of the new appliances validate the name and IP that you provide against DNS and may fail to configure correctly if these do not match.

To be safe, please ensure that you have valid DNS resolution BEFORE you try to setup your appliance.

Click the OK button to save your changes and return to the vApp Diagram. You remembered to Disable Guest Customization, right?

 

 

Power it up!

 

Now that you have disabled Guest Customization and set the Guest Properties, you can power up your appliance VM.

I would recommend having both the vpodrouter and the Main Console online whenever you start the appliance for the first time. This ensures that things like NTP, DHCP, and DNS are all available for the appliance to use.

 

 

Disabling automatic file system check (fsck)

 

If you are importing a Linux appliance, a required optimization for Hands-on Labs is disabling the periodic on-boot checks of the filesystem. This check is implemented for production appliances in order to keep the file systems healthy. Periodic checks are implemented based on two factors: number of times the volume is mounted and number of days since the last check.

This check is unnecessary for our use case and actually results in longer boot times once the date or mount thresholds have been reached. Because our vPods are captured in a read-only state and then deployed and "woken up" at the current date, once the stored date has been reached, the check will happen at EVERY boot of the pod. This unnecessarily delays the startup of the pod while the file systems are analyzed. Analysis is unnecessary because the pods have been captured in a known good state.

While this may not seem like a big deal for a single appliance or vPod, think about hundreds of vApps being deployed and having to perform a full file system check on each appliance. This could easily cause a massive I/O issue and result in poor performance all around.

Disable these checks either by using the tune2fs utility or by editing the /etc/fstab file, as described in the following options.

 

 

(option 1) Skip automatic fsck by updating /etc/fstab file

 

The /etc/fstab file contains descriptive information about the various file systems. You will see two numbers at the end of the line for each partition. To skip the check, change the second number to 0 (zero digit). This will indicate to the system that it should mount the partition without running the check.

Note in this screenshot from the vRA appliance that the second (/dev/sda1) and last 3 devices are all mounted with this flag set to "1" and they will run the check periodically by default.

 

 

A brief explanation of the fields in /etc/fstab

Each line in the /etc/fstab file contains the following fields separated by spaces or tabs:

file_system    dir    type    options    dump    pass

file_system - the partition or storage device to be mounted

dir - the point where <file system> is mounted

type - the file system type of the partition or storage device to be mounted. Typically, one of the following: ext2, ext3, ext4, btrfs, reiserfs, xfs, jfs, smbfs, iso9660, vfat, ntfs, swap, or auto. The auto type lets the mount command guess what type of file system is used, which can be useful for optical media (CD/DVD)

options - mount options of the filesystem to be used. See the man page for the mount utility. Please note that some options are specific to certain types of filesystems.

dump - the dump utility uses this entry to decide if a file system should be backed up. Possible entries are 0 and 1. If 0, dump will ignore the file system; if 1, dump will make a backup. Most users will not have dump installed, so using 0 for the dump entry is recommended.

pass - Used by fsck to decide the order in which filesystems are to be checked. Possible values are 0, 1 and 2. The root file system should have the highest priority, 1. All other file systems you want to have checked should have a value of 2. File systems with a value of 0 will not be checked by the fsck utility.
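Putting the fields together, an entry with on-boot checking disabled (the device and mount point here are illustrative) would look like:

```
/dev/mapper/VolGroup-lv_root   /   ext4   defaults   0 0
```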

 

 

(option 2) Skip automatic fsck using tune2fs

 

You can check the currently configured automatic mount checks using the dumpe2fs utility:

# dumpe2fs /dev/sdb1 | grep -i check
# dumpe2fs /dev/sdb1 | grep -i mount

Note in the screenshot that the default for the vRA appliance is to check every 6 months and after a certain number of mounts (37 in this example). This is common for many appliances and should be changed for Hands-on Labs.

The tune2fs utility can be used to permanently disable both the interval (-i) and mount count (-c) triggers:

# tune2fs -i 0 -c 0 /dev/sdb1

Note that, like editing the /etc/fstab file, this must be performed for each filesystem that is mounted on the appliance. You can get this list by examining the /etc/fstab file, or by using either the df or mount commands. The change is persistent across reboots.
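To avoid missing a filesystem, the tune2fs commands can be generated from /etc/fstab. A minimal sketch (the helper function name is ours; it prints the commands rather than running them, so you can review before executing as root):

```shell
# print a "tune2fs -i 0 -c 0" command for every ext2/3/4 entry
# in an fstab-format file passed as the first argument
list_fsck_disable_cmds() {
  awk '$1 !~ /^#/ && $3 ~ /^ext[234]$/ {print "tune2fs -i 0 -c 0 " $1}' "$1"
}

# example: generate the commands for the system fstab, if present
[ -f /etc/fstab ] && list_fsck_disable_cmds /etc/fstab
```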

 

MicroCore Linux Template for HOL (base-linux-micro)


When selecting a VM (or vVM) type to use for Hands-on Labs demonstration workloads, understanding the requirements of the workload is important. In the lab, we try to use the fewest resources possible to adequately support a given product or feature. In some cases, a full Windows or Linux operating system is required, and in others the demonstration can be performed with empty "shell" VMs that do not contain any operating system at all.

A typical Windows or Linux load takes several gigabytes of space and hits both the CPU and RAM pretty hard during the boot process. An empty VM does not consume any significant amount of storage space, can be created on the fly, has negligible CPU and RAM footprint, and deploys quickly. Unfortunately, shell VMs also have zero functionality aside from being objects that can be manipulated in vSphere.

Somewhere in the middle of these two options lies a requirement for a lightweight VM that will boot quickly, can be configured with an IP address and allows a user to login to perform basic operations. Having VMware tools report the IP address back into vCenter is nice, too.

In the HOL-Base-Templates catalog, the core team provides various "base" loads of Windows flavors and both a GUI or CLI (minimal) load of CentOS. These are useful starting points for installing products, deploying agents, showcasing management capabilities and demonstrating customization options.

The "micro" Linux vVM was created to address a fairly specific, but very common use case for the Hands-on Labs. Its functionality is limited, but fulfills the following needs very well:

  1. Small footprint: 1 vCPU (0 MHz used at idle), 64 MB RAM (5 MB active at idle), 1 GB disk (~13 MB used)
  2. Fast boot: Power On to login prompt is typically 10-15 seconds.
  3. Boots from a VMDK: Not using an ISO makes it behave more like a "real" user VM
  4. Basic VMware Tools: DNS name and IP address are reported up to vCenter
  5. Network connectivity: pingable and login possible remotely using key-based SSH

The micro Linux VM is a simple appliance based on MicroCore Linux. It is mostly non-persistent and does not support guest customization, updating the VMware Tools, or installing software via the common mechanisms such as YUM, YaST, or APT. Any customization to this appliance, beyond the changes outlined here, should be carefully considered. It will likely be much simpler to use one of the linux-base templates, which are based on CentOS and use common Linux tools and processes.

If you need a simple VM to demonstrate any flavor of vMotion, and want to have the user remain logged in or executing a simple workload within the VM as it moves, the micro-linux template may be the answer for you.


 

Obtaining the VM

 

As with the other base vVM templates, the base-linux-micro template lives in the HOL-Base-Templates catalog in HOL-DEV. The template is an OVF, which is provided on an ISO image for easy attachment to your Main Console machine for import to vCenter.

 

 

Import the Template

 

Attach the ISO image to the Main Console and then log into vCenter using the Web Client, and import the linux-micro-01a.ovf template from the linux-micro-01a folder on the root of the ISO image.

Ensure that you attach it to a valid network and select Thin Provisioned for the storage type to keep the storage utilization as small as possible.

 

 

Power it on

Once the VM has been imported, it can be powered on and used as-is. It will receive an IP address via DHCP and be ready to use.

Note that the default hostname for this VM is "linux-micro-01a" and the root user's password is the standard VMware1!

The default configuration also includes a non-root user, holuser, although root is most commonly used. This user has the same password as the root user.

 

 

Setting a static IP address and new hostname

Any VM that needs to be available at a specific IP address or reachable by a specific DNS name should have a static IP address.

This section details the process required to set the static IP address assigned to the template VM (192.168.120.51/24). Note that this process has already been followed on the template, but should be understood in order to change the IP address, should that be required.

Begin with the VM powered up and logged in as the root user on the console.

 

 

Create a script to set the IP address and hostname

The simplest way to preserve an IP address and hostname is to create a script that runs when the VM boots.

Create the script file /opt/eth0.sh

# vi /opt/eth0.sh

Set the contents of this file to the following.

#!/bin/sh
# stop the DHCP client so it does not overwrite the static settings
pkill udhcpc
# assign the static address and bring the interface up
ifconfig eth0 192.168.120.51 netmask 255.255.255.0 broadcast 192.168.120.255 up
route add default gw 192.168.120.1
# point DNS at ControlCenter
echo nameserver 192.168.110.10 > /etc/resolv.conf
hostname NewHostname

Note that the "51" should be replaced with the final octet of the IP address that you want to assign. This will assign an IP address on the 192.168.120.0/24 subnet, which is intended for use by vVMs on "Site A" in the Hands-on Labs. On this network, IP addresses from 192.168.120.3 through 192.168.120.253 should be available for static assignment.

Likewise, "NewHostname" should be set to the value that you would like to use as the new hostname for this VM.

 

 

Make the new script executable

This important step is required to make sure that the script can actually run:

# sudo chmod 775 /opt/eth0.sh

Set the script to be automatically run at startup:

# echo /opt/eth0.sh >> /opt/bootlocal.sh

 

 

Preserve the changes

MicroCore Linux preserves a specified list of files and ignores the rest. You need to add your new script to the list of preserved files:

# echo opt/eth0.sh >> /opt/.filetool.lst

Note that the initial slash (before "opt") is unnecessary in the path to eth0.sh, but is necessary in the output path. Also note that the file extension "lst" begins with a lowercase "L" and not the number "1".

Finally, back up the configuration so that the changes persist across reboots:

# filetool.sh -b

 

 

Reboot and verify

 

Once the changes have been made, reboot the VM, log in, and ensure that everything comes up as expected.

# reboot

Log in as the root user

# hostname

Verify that the hostname is as you expect. You can also just look at the prompt: root@YOURHOSTNAME

# ifconfig eth0

Verify that the IP address is on the 192.168.120.0 network and not 192.168.100.0 (that's the DHCP range)

 

SSL: Issuing New Certificates


It is never good practice to use self-signed SSL certificates in production, and they look unprofessional in a lab environment that is presented to users. Because SSL is a security mechanism, the technology evolves to address threats that arise in the wild, and slight adjustments may be required to keep up-to-date web browsers from reporting warnings.

This article covers a few methods for issuing SSL certificates by leveraging PowerShell scripts that have been provided in the Hands-on Labs environment. The main consideration for new certificates is matching FQDNs and IP addresses in the generated certificates. In order for a certificate to be reported as valid, the FQDN in the SSL certificate must match the name used to access the remote system. For example, if your user will be accessing the URL http://vrops-01a.corp.local, the name in the certificate must be "vrops-01a.corp.local".

The scripts in the Hands-on Labs environment require a short name (the "vrops-01a" part) and an IP address. These scripts will automatically generate an FQDN with the "corp.local" domain and apply that to the new SSL certificate generated. When using these tools, new certificates will have the primary name set to the FQDN and will include Subject Alternative Names (SANs) of both the short name and the IP address. This allows the same SSL certificate to be used whether the system is accessed as, for example, any of the following:

  1. the FQDN: vrops-01a.corp.local
  2. the short name: vrops-01a
  3. the IP address: 192.168.110.70

The goal of the provided scripts is to make the certificate generation as simple as possible while also providing flexibility. Please take the time to read the introductory material in this article so that you have an idea of what is going on at a high level.

Note that the web interface for the Microsoft CA server has not been installed on the Main Console; the script-based process described here is simpler and less prone to errors.


 

Prerequisites

All of the scripts in the Hands-on Labs environment issue certificates from the Certificate Authority (CA) installed on the Main Console machine using the Windows CA role. A new "VMware Certificate" template has been created by following the guidelines in VMware KB - http://kb.vmware.com/kb/2062108 - to encapsulate the general settings required for certificates used in the environment.

A collection of PowerShell functions for creating SSL certificates for various purposes has been copied to the Main Console machine in the C:\HOL\SSL directory. This module, hol-ssl.psm1, must be loaded into any PowerShell session before the HOL SSL functions are available.

A "chain of trust" has been built by importing the root certificate, the identifier for the CA on the Main Console, into the Trusted Root Certification Authorities store on the Main Console and providing this certificate as a trusted root for all members of the CORP (corp.local) Windows domain. This allows the Main Console (controlcenter.corp.local) to trust all certificates issued by the CA and does not require trusting each individual certificate. The Firefox web browser maintains its own independent list of trusted root CAs, and the root certificate for the Main Console (controlcenter.corp.local) CA has been imported to this store as well.

NOTE: as of vSphere 6.0, all vCenter and ESXi host certificates in the labs are managed natively by the CA on the PSC, so it should not be required to generate new host certificates using this method. For completeness, the root certificate for the CA on the (embedded) PSC in the base vPods has been imported to both the Trusted Roots store in Windows and Firefox's trust store.

 

 

Generating a New Host Certificate and Private Key

With all of the prerequisites in place, the information that is required to create a simple certificate for a new host is pretty much its name and IP address. The current scripts support placing IPv4 addresses into the certificates. For additional special cases like multiple, non-standard Subject Alternative Names, certificate encodings and types, or IPv6 addresses, please do not hesitate to reach out to the core team for assistance.

 

 

The New-HostSslCertificate function

 

To begin, open a new PowerShell window and load the HOL SSL module:

PS C:\> cd C:\HOL\SSL
PS C:\hol\ssl> Import-Module hol-ssl.psm1

In the example screen, the Import-Module function was called with the -Verbose option so that it lists the functions imported from the module. This is not necessary, but it is a good way to see what is available. To create a new certificate, use the New-HostSslCertificate function, which requires two parameters (note that the FQDN is automatically generated in the "corp.local" domain).

PS C:\hol\ssl> New-HostSslCertificate -HOST_SHORTNAME vrops-01a -HOST_IPv4 192.168.110.70

The console will populate as the certificate request is generated, submitted to the CA, issued, and transformed into a few common formats. All of the files for this host are stored in the C:\hol\ssl\host\<hostname> directory, where <hostname> is "vrops-01a" in this case.

PS C:\hol\ssl\host\vrops-01a> dir
    Directory: C:\hol\ssl\host\vrops-01a
Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         1/12/2016   7:02 AM       1679 rui-orig.key
-a---         1/12/2016   7:02 AM       2226 rui.crt
-a---         1/12/2016   7:02 AM       1679 rui.key
-a---         1/12/2016   7:02 AM       5175 rui.pem
-a---         1/12/2016   7:02 AM       3228 rui.pfx
-a---         1/12/2016   7:02 AM       4326 rui.rsp
-a---         1/12/2016   7:02 AM        769 vrops-01a.cfg
-a---         1/12/2016   7:02 AM       1257 vrops-01a.csr 

It is important to understand the files in this directory. Most of them are by-products of the creation process and are not needed except for troubleshooting. The following files are the most important; which one(s) you use depends on the requirements of the application:

  rui.crt - the signed certificate
  rui.key - the private key
  rui.pem - the certificate, private key, and CA chain combined in PEM format
  rui.pfx - a PKCS#12 bundle of the certificate and key, protected with the password "testpassword"

NOTE: the use of "testpassword" here has to do with a legacy requirement in vSphere 5. If you need a different password, see the "How to change the password on a PFX file" section later in this article.

The remaining files (rui-orig.key, rui.rsp, vrops-01a.cfg, and vrops-01a.csr) are intermediate artifacts of the creation process and can be ignored.

For new systems that are not part of the CORP Windows domain, the root CA certificate may be needed in addition to the .CRT and .KEY files (it is already part of the .PEM and .PFX files). The CA certificate for the Main Console CA is located in the C:\hol\SSL\CA-Certificate.cer file.

Note that all of the files have been created in the ASCII format (with the exception of PFX, which is a binary format) so that the text of the files may be copied and pasted as needed. The files are built with UNIX-style line endings, so opening them in Notepad may look "weird." Use Notepad++ and you should be fine.

 

 

How to change the password on a PFX file

 

The scripts in the Hands-on Labs hol-ssl.psm1 module are hard-coded to generate a PFX with the password of testpassword.

Following this process is only required if you need to change the password for some reason.

This section outlines the process for removing or changing the password on a PFX file with a known password. The first step to changing the password is removing the existing password. Note that PFX files with no password are handled differently by different systems and applications. You have been warned.

All of the manipulation is accomplished using the openssl tool, which is installed as part of the ssl-certificate-updater-tool in the C:\hol\ssl directory on the Main Console machine. The easiest way to deal with this LONG path is to create an Alias object in PowerShell:

PS C:\> $OpenSSLExe = "c:\hol\ssl\ssl-certificate-updater-tool-1308332\tools\openssl\openssl.exe"
PS C:\> New-Alias -Name OpenSSL $OpenSSLExe

Next, switch to the directory containing the PFX file you want to modify

PS C:\> cd \hol\ssl\host\myhost

Strip off the password by converting to a "bare" PEM:

PS C:\hol\ssl\host\myhost> openssl pkcs12 -in rui.pfx -out tmpcert.pem -nodes

You will be prompted for the current password on the PFX file. Key in the password and press the Enter key.

If the password matches, the certificate will be exported from the rui.pfx to the tmpcert.pem file.

To create a new PFX with the desired password, repackage the PEM with a new password

PS C:\hol\ssl\host\myhost> openssl pkcs12 -export -out rui-new.pfx -in tmpcert.pem

You will be prompted for the new password, and then again to verify it. If all goes well, you will have a rui-new.pfx file that has been encrypted with your new password.

If you would like to test your new PFX file, openssl can help with that as well:

PS C:\hol\ssl\host\myhost> openssl pkcs12 -info -in rui-new.pfx

You will be prompted three times for your new password -- once for the certificate and twice for the private key. Ensure that it works before moving on.

Finally, remove the temporary file

PS C:\hol\ssl\host\myhost> del .\tmpcert.pem

There are more complex configurations, but this should cover the majority of use cases.
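If you would rather avoid the interactive prompts, the same two steps can be chained into a single pipeline using openssl's -passin and -passout options. This is a sketch under the assumption that the current password is the default "testpassword"; the demo setup lines simply fabricate a throwaway certificate and PFX so the pipeline has something to work on. Be aware that passwords passed on the command line may be visible to other users in the process list.

```shell
# Demo setup (illustrative only): create a throwaway key, certificate, and a
# PFX protected with the old password. In the lab you would already have rui.pfx.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -keyout rui.key -out rui.crt -subj "/CN=demo.corp.local" 2>/dev/null
openssl pkcs12 -export -inkey rui.key -in rui.crt \
    -out rui.pfx -passout pass:testpassword

# The actual password change: unpack with the old password, repackage with the
# new one. The intermediate PEM never touches the disk.
openssl pkcs12 -in rui.pfx -nodes -passin pass:testpassword |
  openssl pkcs12 -export -out rui-new.pfx -passout pass:MyNewPassword

# Confirm the new PFX opens with the new password.
openssl pkcs12 -in rui-new.pfx -passin pass:MyNewPassword -nodes -noout
```

The pipeline works because openssl pkcs12 reads its input from stdin when no -in file is given for the -export step.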

 

 

Generating a New Host Certificate from a CSR

Sometimes, an appliance or application manages the private key for you and will provide the Certificate Signing Request (CSR) with a specific configuration that the system requires. In this case, a different function can be used to generate the required certificate.

Please note that you should perform the required SSL configuration setup on the appliance prior to generating the CSR. This often includes setting the proper name (short and FQDN), IP address, and maybe even running a wizard of some sort that will generate the private key, request, and CSR.

Once you have generated the CSR, you must download it and transfer it to the Main Console machine. As there are various methods for doing this, each depending on the specific appliance and interface, this process is out of scope for this article. However, modern appliances typically have a web interface and a "Download CSR" link.

For the sake of this example, download the CSR to the C:\HOL\SSL directory on the Main Console machine and name the file myhost.csr
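If you would like to see what such a CSR contains before submitting it, openssl can generate and decode one. The commands below are purely illustrative (a real appliance generates and holds its own private key), and the hostname myhost.corp.local is a stand-in:

```shell
# Illustrative only: generate a key and CSR the way an appliance would internally.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout myhost.key -out myhost.csr \
    -subj "/CN=myhost.corp.local/O=HOL" 2>/dev/null

# Decode the CSR to confirm the subject name before submitting it to the CA.
openssl req -in myhost.csr -noout -subject -verify
```

Checking the subject before submission catches name mismatches early, since the CA will faithfully copy whatever name the appliance put into the request.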

 

 

The New-HostSslCertificateFromCsr function

 

With the myhost.csr file in hand, open up a PowerShell window, change to the C:\hol\ssl directory and load the hol-ssl.psm1 module:

PS C:\> cd C:\HOL\SSL
PS C:\hol\ssl> Import-Module hol-ssl.psm1

Then execute the New-HostSslCertificateFromCsr function, passing it the CSR:

PS C:\hol\ssl> New-HostSslCertificateFromCsr -CSR myhost.csr

The resulting certificate (.CRT) and response (.RSP) files will be placed into the same directory as the CSR.

That's it! The certificate can now be uploaded into the system that generated the CSR. The files you should see are as follows:

  myhost.crt - the issued certificate, in x.509 format
  myhost.rsp - the response file from the CA, useful only for troubleshooting

Note that the New-HostSslCertificateFromCsr function only generates the certificate in x.509 format and not the PEM and PFX files that the New-HostSslCertificate function generates.

When a system manages its own private keys, typically the only inputs it requires are the x.509 certificate for the host and the certificates in the chain back to the root CA. In this case, the chain is just the certificate for the root CA on the Main Console machine, which is located in C:\hol\SSL\CA-Certificate.cer and may be required prior to installing the new certificate generated here. Some systems require loading the root CA certificate first, others require it to be appended to the new certificate, and others may require the CA certificate to be pasted into another field in the "apply CA-signed certificate" form. The correct method should be obvious from the form and/or the documentation included with the application or appliance.

If you are creating a single certificate from a CSR, this is a quick way to do it. I usually create a hosts folder in C:\hol\ssl on the Main Console and create a subfolder with the short name of each host to contain everything specific to each host. This helps keep the SSL folder tidy and my certificates and keys organized.

 

 

ERROR!?!

 

Please note that, while the functions contained in the hol-ssl.psm1 module generally work within the HOL environment, there are times when odd things happen. These tools are meant to be "quick and dirty" job aids and do not have much in the way of error trapping. If something goes wrong, usually the certificate is either not issued or it is issued and not downloaded.

In most cases, the issue comes down to a missing parameter, mistyped function name, or a failure to load the hol-ssl.psm1 module prior to running one of the contained functions.

There have been a few instances in which the CA server on the Main Console has tried to initialize before Active Directory is fully online, causing the CA to be unusable. In this case, a reboot of the Main Console has usually corrected the issue.

In the event that you run into some kind of issue with creation of SSL certificates, please do not hesitate to contact Doug Baer on the core team. He's the one to blame for the "hacky" code in this module.

 

SSL: Accepting Existing Certificates


In the Hands-on Labs, we like to provide a clean and consistent user experience. One of the most common user experience issues has to do with invalid SSL certificates. Correcting this issue can range from nearly trivial to an arduous multi-day endeavor (I'm looking at YOU, vSphere 5!)

The new vSphere 6 certificate management infrastructure handles SSL certificates for ESXi hosts, vCenter Server, and its component services. Unfortunately, this does not handle certificates for most (any?) add-on products, so you are left with the task of ensuring that the SSL certificates in your lab pass muster and are valid for at least the lifespan of your lab - that's through December 31 of the year following a lab's release at VMworld.


 

Getting the Red out

 

In production, all of these warnings can be scary, and rightly so! An invalid certificate in the real world may mean that your connection has been compromised and now your credentials or credit card information is in the hands of the "bad guys." Unfortunately, many users have become so accustomed to ignoring these warnings that the whole process seems pointless. We want to help stop that behavior by not telling our users to simply "click Ignore" and move on.

Besides, these warnings give the environment an unprofessional look.

 

 

Chain of Trust

 

A public key infrastructure (PKI) is about trust... and heinous math... but mostly about trust. The math is a deterrent to prevent abuse.

Let's look at an example.

  1. I trust a certificate authority (CA)
  2. The CA trusts an entity, such as a host
  3. The CA vouches for the host by issuing a certificate and signing it with its own key (so, I know where to point the finger if things go sideways)
  4. I trust that host as well

This core concept is pretty much all you need to know to get started.

 

 

Self-Signing

A chain of trust needs to start somewhere and we call that a "root." At the root, there is no established trust, so we trust ourselves. If you can't trust yourself, you've got larger problems...

Each entity's certificate must be signed by another entity that can vouch for it. At the root there is nobody else, so, although it may seem strange, the root CA's certificate is signed by itself. We have to start somewhere. Once I accept this root certificate, I am signing up to automatically trust any certificate issued by that root. The convenience here is that I do not need to go through the process of explicitly trusting each certificate that is presented to me. As long as the certificate is deemed OK by my trusted root, it is OK with me.

The key here is that a self-signed certificate is effectively a root of trust. Trusting a self-signed certificate implies explicit trust in the entity that is providing that certificate: it is like me calling you up, telling you that I am a fantastic demolition expert and that I want a job... and my only reference is me.

There are other concepts like delegation, validity periods and revocation, but we just need to know about the basics here. Self-signed certificates are not bad, but they introduce overhead, do not scale, and are difficult to audit. (A CA keeps track of each certificate it has issued, including its expiration date).
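To make the "signed by itself" idea concrete, here is a quick sketch using openssl (the name is illustrative): a self-signed certificate is simply one in which the Issuer and the Subject are the same entity.

```shell
# Create a throwaway self-signed certificate (illustrative name only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout selfsigned.key -out selfsigned.crt \
    -subj "/CN=standalone.corp.local" 2>/dev/null

# Issuer and Subject are identical: the certificate vouches for itself.
openssl x509 -in selfsigned.crt -noout -issuer -subject
```

Compare this with a CA-issued certificate, where the Issuer field names the CA and the Subject names the host.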

 

 

Do I really have to replace ALL certificates in my vPod?

 

The answer to that question is complicated. Since this is a lab, the main goal is to make sure the user experience is clean, not necessarily to ensure security and manageability. To that end, if your appliance already has a properly configured self-signed certificate, you can simply accept that certificate by following the procedure later in this article.

So, what is properly configured?

There are a few critical areas to check, which can be done from Windows:

  1. Issued To - this must be the FQDN, short name, or IP address that users will use to access the environment
  2. Validity Period - the certificate must be valid until at least December 31 of the year following the lab's release
  3. Signature Hash Algorithm - this is shown on the "Details" tab of the certificate. This should be sha256 or sha512 or the web browsers will throw a warning.

Note that it is not possible to change any of the values in a certificate without replacing the certificate, and the name in the Issued To field MUST match the name used when accessing the system or the web browser will always report an error.

 

 

Verifying a certificate with openssl

 

In addition to viewing the certificate in Windows, which works for files with a .CRT extension, you can verify the certificate using openssl. Most Linux machines and appliances have openssl installed, so the verification can be performed natively as well.

On the Main Console, the openssl binary is available here:

C:\hol\SSL\ssl-certificate-updater-tool-1308332\tools\openssl\openssl.exe

The validation command on Windows or Linux is the same, assuming openssl is in the current path:

openssl x509 -in mycert.crt -text -noout

This will produce output similar to the example. Ensure that the Subject, Not After, and Signature Algorithm fields contain appropriate values.
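If you only care about the three fields called out earlier (Issued To, Validity Period, and Signature Hash Algorithm), openssl can extract each directly. The first command below simply fabricates a throwaway certificate so the checks have something to inspect; substitute your own .crt file in practice.

```shell
# Demo setup (illustrative): a throwaway certificate to inspect.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -sha256 \
    -keyout mycert.key -out mycert.crt -subj "/CN=vrops-01a.corp.local" 2>/dev/null

openssl x509 -in mycert.crt -noout -subject                            # Issued To
openssl x509 -in mycert.crt -noout -enddate                            # Not After
openssl x509 -in mycert.crt -noout -text | grep 'Signature Algorithm'  # hash algorithm
```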

 

 

Accepting Certificates - Windows

If you have issued certificates from the CA on the Main Console, your certificates will be automatically trusted by all members of the CORP domain. Otherwise, you will need to manually import the certificates into the trust stores on each machine that will be accessing the system.

On Windows, there is a specific set of steps that will ensure the certificate is imported to the correct location. These steps are covered in this article. A similar process can be performed on Linux machines, but is out of this article's scope. Please ask the core team if you require assistance.

 

 

Open the certificate - Before

 

To begin the process, download a copy of the certificate to the desktop of the Main Console.

Double-click the certificate file to open the certificate. Take a minute to verify the Valid from/to, Issued to, and Signature Algorithm, as described in the previous section of this article. Notice that the certificate icon in the upper left corner of the window has an (X) icon on it.

Click the Install Certificate button

 

 

Certificate Import Wizard - Select Store

 

Select Local Machine as the Store Location.

Click Next

 

 

Certificate Import Wizard - Certificate Store

 

Select the option to Place all certificates in the following store

Click Browse

Select the Trusted Root Certification Authorities store and click OK

Click Next

 

 

Certificate Import Wizard - Review

 

Click Finish

 

 

Successful Import

 

Click OK to acknowledge that the certificate has been imported to the local machine's Trusted Root Certification Authorities store.

Close any web browser windows that you have open and then re-launch them to try again.

 

 

Open the certificate - After

 

Once again double-click on the certificate file in Windows and verify that the certificate icon no longer has the (X) on it.

This is a good sign that Windows now trusts the certificate.

Internet Explorer and Chrome web browsers will now accept this certificate. Note that Firefox maintains its own trust store, so the certificate would need to be added to that store in order to use it within that browser. The way this is done is a little different for different versions of Firefox, but the following process should give a good idea.

 

 

Accepting Certificates - Firefox

Firefox maintains its own trust store, so any certificates that are added to Windows should also be added to Firefox to ensure that users do not receive errors if they choose Firefox rather than Internet Explorer or Chrome.

 

 

Firefox Certificate Store

 

  1. Open Firefox and enter about:preferences into the URL bar
  2. Click Advanced to open the advanced preferences
  3. Click on Certificates to switch to the certificates context
  4. Click on the View Certificates button

 

 

Firefox Certificate Manager

 

  1. Click on the Servers tab
  2. Click on the Import... button and navigate to the certificate file you want to import
  3. Click the OK button

Once this is finished, try browsing to the site in question and there should be no warnings.

 

A Simple 3-Tier App for Labs


There are times when it is instructive to show communication among a set of machines in a virtual infrastructure so that the communication can be monitored, filtered, or otherwise secured. Creating a multi-tier application that fits within the resource envelope of a Hands-on Labs vPod can be time consuming and less than simple. The core team has created a simple web application to serve this purpose.

This simple application provides an environment that uses 3 machines which communicate in order to provide access to data via a web page. It is nothing terribly fancy, but allows for demonstration of various communication and security options between distributed components of an application within a vPod.

The overarching goal was to create something that is simple and documented so that it is both functional as-is and modifiable to suit a variety of use cases. As such, full configuration/build details are available which contain all of the commands used to create the VMs that make up this application, using as many default settings as possible. This application is by no means secure, although the host-based firewall has been left enabled on each host and the required ports have been opened on them to ensure that the application will function and allow SSH access to each VM.

The base package consists of 3 base CentOS 6.6, 64-bit machines with SELinux disabled and VMware Tools installed. The default addresses are 192.168.120.10 (db-01a), 192.168.120.20 (app-01a), and 192.168.120.30 (web-01a), and the application will function right out of the box. The IPs may be changed as long as the appropriate files are updated on each machine to reflect the new addresses of the components. To demonstrate load sharing, the "web tier" may be scaled out and placed behind a load balancer, and a simple alteration to the script will display which node in the pool has serviced the request.


 

Web App Components - Default Configuration

 

 

 

Import the Layer 2 VMs

 

The VMs exist on the 2016-HOL_3-Tier-App ISO image in the HOL-Base-Templates catalog in the HOL-DEV environment. Mount this ISO to your Main Console and import them using the Web Client.

Just follow the steps in the Deploy OVF Template Wizard, making sure to check the box to Accept extra configuration options since this will ensure that the important uuid.action=keep and keyboard.typematicMinDelay=2000000 settings are brought in with the VM.

I usually Thin Provision them, but that is up to you and your use case. The VMs have been sized with enough free space to perform updates to the OS, packages, and VMware Tools, and to provide some room for customization as needed, but the drives are still pretty small (under 5 GB each).

Once all 3 VMs have been imported, power them up and proceed to the next step.

 

 

Import the SSL certificate into the trust store on your client

You will need to import the self-signed SSL certificate into your trusted root certificate store(s) in order for full validation to occur. For your convenience, the certificate file has been included on the root of the 3-Tier Web App ISO image. Instructions for performing the trust setup for Windows/IE/Chrome and Firefox are included in another section of this guide: Accepting Existing SSL Certificates

The short version is that, for Chrome and IE, you double-click the Certificate file on the root of the ISO then import it into the Trusted Root Certification Authorities store of your Local Machine. Close your web browser if it is open and then open and access the site. For Firefox, or for more detailed instructions, see the other article.

 

 

Simple Configuration

In the simplest configuration, the VMs can be used as-is.

 

 

Configure the DNS Records

 

While not absolutely required, creating appropriate DNS records for the client(s) to use makes the application cleaner and easier to access.

Each of the 3 base machines has its /etc/hosts file populated with the default names and IP addresses of all machines that make up the application. These machines use file-based resolution rather than DNS because it is not possible to guarantee that they will have access to DNS servers on the network(s) where they are instantiated.

Using names rather than IP addresses facilitates configuration and re-configuration of the application by allowing all changes to be made in one place on each machine once the IPs have been updated to the desired values.
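As a sketch, the relevant /etc/hosts entries on each machine look something like the following under the default addressing (the exact file on the shipped VMs may contain additional lines); re-addressing the application is a matter of updating these entries on all three machines:

```
# /etc/hosts entries for the 3-tier app (default addressing; sketch only)
192.168.120.10   db-01a.corp.local   db-01a
192.168.120.20   app-01a.corp.local  app-01a
192.168.120.30   web-01a.corp.local  web-01a
```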

Creating matching records in the DNS server on the Main Console (controlcenter.corp.local) can make managing the application easier -- you can then create PuTTY sessions pointing at root@web-01a, for example. The following PowerShell commands can be used to create DNS records (forward and reverse) on the Main Console (controlcenter.corp.local) server:

Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'db-01a' -IPv4Address '192.168.120.10' -CreatePtr
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'app-01a' -IPv4Address '192.168.120.20' -CreatePtr
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'web-01a' -IPv4Address '192.168.120.30' -CreatePtr

Creating a DNS alias record to point webapp.corp.local at web-01a.corp.local is the recommended method for accessing the application in a simple configuration (1 web, 1 app, 1 database). This name matches the name in the SSL certificate used by the website.

Add-DnsServerResourceRecordCNAME -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -HostNameAlias 'web-01a.corp.local.'

For scale out or load balancing demonstrations, pointing the webapp name at the load balancer IP address is recommended instead:

Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -IPv4Address 'YOUR_LB_IP_ADDRESS' -CreatePtr

Note that you may need to create the reverse DNS zone for the network that your load balancer's IP address lives on prior to executing the previous command. As an example, the following command will create a reverse lookup zone for the 172.16.10.x network:

Add-DnsServerPrimaryZone -Name '10.16.172.in-addr.arpa' -ComputerName 'controlcenter.corp.local' -ReplicationScope 'Forest' 

 

 

 

Access the application

 

Once all 3 of the base VMs are powered up on the same L2 network, the application can be tested by opening a web browser and pointing it to https://192.168.120.30/cgi-bin/hol.cgi. Note that the web browser will warn about the certificate and a "dangerous" or "untrusted" connection, but bypassing the warning should display the data shown in this step. This validates that all of the components have been imported correctly, have received the expected IP addresses, and can communicate with one another. This should be done prior to changing any IP addresses or configurations on these machines.

Once proper name resolution is in place and the VMs have been imported and powered up, the correct URL to use for accessing the web application is https://webapp.corp.local/cgi-bin/hol.cgi

NOTE: to eliminate the certificate warning, import the self-signed SSL certificate into your trusted root certificate store(s) as described earlier in this article in "Import the SSL certificate into the trust store on your client".

 

 

Finished!

 

If you just want a simple application that you can use to show micro-segmentation or other features that involve nodes being on the same L2 network, you are ready to go.

If you want to change the IP addresses to move the VMs onto different L3 networks, or implement a simple web server pool, read on.

 

 

More Complicated Configurations

 

This application was intended to be flexible enough to be split across networks to demonstrate some real-world scenarios. Each step beyond the basic setup requires additional setup and adds complexity. The following example uses these base machines to create

  1. A simple round-robin "pool" of 2 web servers on a Web network (using simple DNS resolution round-robin)
  2. A single application server on the Application network
  3. A single database server on the Database network

To show IP address reconfiguration and routing requirements, these 3 networks are different from the default 192.168.120.0/24 network. The 3 networks selected are not part of the networks currently configured on the vpodrouter, so new addresses must be added.

The process for changing the hostnames is covered by using the web-02a VM, which is a copy of the web-01a VM template. The IP addresses selected for this example are as follows:

web-01a => 172.16.30.31/24

web-02a => 172.16.30.32/24

app-01a => 172.16.20.20/24

db-01a => 172.16.10.10/24

The gateways on each network will be the .1 addresses: 172.16.10.1, 172.16.20.1, 172.16.30.1

 

 

Move each VM onto the appropriate port group

To most accurately represent a traditional 3-tier distributed application, each tier should be on a different subnet. Moving each of the VMs onto a different port group is a good way to show this. In the real world, each port group would be its own VLAN, but a limitation of the nested environment prevents us from handling tagged traffic properly. I only mention this because your vVMs in the lab will still be able to communicate with one another even after you move them to different port groups -- at least until you change their IP addresses.

If you have not already created the appropriate port groups using the solution you will be showcasing, you should do that. For this example, I just created 3 port groups on the VDS: pg-Web, pg-App, pg-DB and assigned the web-01a, app-01a, and db-01a vVMs to those port groups.

 

 

Enable routing your networks via the vpodrouter (only if necessary)

 

If you are using one of the IP ranges not already handled by the vpodrouter, and you are not implementing another routing technology, you will need to add new IP addresses to the vpodrouter so that it can handle routing to and from your new networks.

This is accomplished by adding the appropriate values to the /etc/network/interfaces file on the router.

Log in to the vpodrouter using the root user -- you can use PuTTY on the Main Console, or the vpodrouter's VMRC console -- and add lines similar to the ones in the picture (based on your use case) to the file. I usually put them after the 192.168.230.1 address, as in the picture.
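Since the exact lines are shown in the picture and syntax matters here, the following is a hypothetical sketch for this example. Verify it against the existing 192.168.230.1 entry in your router's file, as the stanza format may differ between vpodrouter versions:

```
# /etc/network/interfaces on the vpodrouter (hypothetical excerpt)
# ...existing addresses, ending with the 192.168.230.1 line...
up ip addr add 172.16.10.1/24 dev eth1
up ip addr add 172.16.20.1/24 dev eth1
up ip addr add 172.16.30.1/24 dev eth1
```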

Once you have saved your changes, rebooting the router will apply them properly -- just don't try to do it while connected to the Main Console using RDP.

Once the router comes online, verify that the interfaces have been added:

root@vPodRouter-60:~# ip addr show | grep 172
    inet 172.16.10.1/24 scope global eth1
    inet 172.16.20.1/24 scope global eth1
    inet 172.16.30.1/24 scope global eth1

NOTE: It is best to get the routes implemented before you change the IP addresses of the vVMs so that you can make changes via PuTTY instead of the VMRC consoles.

 

 

Configure the DNS records

Create the reverse lookup zones for the new networks using PowerShell. You can use the DNS MMC if you prefer.

Add-DnsServerPrimaryZone -Name '10.16.172.in-addr.arpa' -ComputerName 'controlcenter.corp.local' -ReplicationScope 'Forest' 
Add-DnsServerPrimaryZone -Name '20.16.172.in-addr.arpa' -ComputerName 'controlcenter.corp.local' -ReplicationScope 'Forest' 
Add-DnsServerPrimaryZone -Name '30.16.172.in-addr.arpa' -ComputerName 'controlcenter.corp.local' -ReplicationScope 'Forest' 

Create the DNS records for our 4 vVMs:

Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'db-01a' -IPv4Address '172.16.10.10' -CreatePtr
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'app-01a' -IPv4Address '172.16.20.20' -CreatePtr
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'web-01a' -IPv4Address '172.16.30.31' -CreatePtr
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'web-02a' -IPv4Address '172.16.30.32' -CreatePtr

In this example, we will use DNS to round robin for us by using the webapp.corp.local name with two IP addresses:

Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -IPv4Address '172.16.30.32' -CreatePtr
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -IPv4Address '172.16.30.31' -CreatePtr

NOTE: To make this simple load balancing work for a single client machine in the lab, the DNS cache on the client needs to be bypassed:

New-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters\ -Type DWORD -Name MaxCacheTtl -Value 2 -Force

The previous command will set the client's DNS cache to 2 seconds, effectively making it query the DNS server for each lookup. Each lookup is a chance for the server to provide a different answer and round-robin among our web servers in the pool.

 

 

Change the IP addresses (and, optionally, hostnames) of the VMs

 

Typically, leaving the hostnames as they are is fine. However, if you spawn multiple copies of the web server to place behind a load balancer, you may want to change the hostnames as well: web-02a, web-03a, web-04a, and so on.

In this example, I have created a copy of the web-01a vVM as web-02a. With the web-01a vVM offline, I boot the web-02a vVM. It will come up with the pre-configured static IP address 192.168.120.30.

Begin by logging in to the VMRC console of the web-02a vVM or opening an SSH connection. If you use PuTTY from ControlCenter, it can automatically log in using the SSH key. Just tell PuTTY to connect to "root@192.168.120.30" and accept the host's key when prompted.

Once logged in, run the following command:

system-config-network

This will get you to the screen shown on this step.

 

 

Select the Device

 

Press the Enter key to select the highlighted Device configuration option

Press the Enter key again to select the default eth0 device

 

 

Change the Configuration

 

Use the Tab key to move between fields

Change the IP address and don't forget to change the default gateway IP if you are moving to a new network, as we are here.

web-02a.corp.local => 172.16.30.32

Default Gateway => 172.16.30.1

Highlight the OK button, as in the picture, then press the Enter key to continue.

 

 

Save the changes

 

Use the Tab key to select the Save option and press the Enter key to save the changes.

 

 

Update DNS configuration

 

Back at this screen, use the Tab key to highlight the DNS configuration option and press the Enter key to select it.

 

 

Change the hostname

 

Use the Tab key to move between fields

Change the hostname to web-02a

Highlight the OK button, as in the picture, then press the Enter key to continue.

 

 

Save changes and Exit

 

Use the Tab key to highlight the Save&Quit option and press the Enter key to select it.

This will save your changes, although they are not yet active -- which is good because changing the IP address is going to kill your SSH connection.

 

 

REALLY set the hostname

 

Back at the command line, use the following command to update the host's name:

hostname web-02a

Then, open the /etc/hosts file so that you can update it with the proper values:

vi /etc/hosts

 

 

Clean up the hosts file

 

In this example, the line for web-02a has been added to the end of the file.

I took the opportunity to also update the values for the other hosts in the environment with the values I plan to use. This will save me time later on.

NOTE: The system-config-network utility likes to assign the system's hostname to the loopback addresses (the first two lines in this file). Remove it if you see the hostname after the "localhost" on those two lines. It can cause issues later.
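For reference, a cleaned-up /etc/hosts for this example might look like the following sketch (your file's existing entries and formatting may differ):

```
127.0.0.1       localhost localhost.localdomain
::1             localhost localhost.localdomain
172.16.10.10    db-01a.corp.local    db-01a
172.16.20.20    app-01a.corp.local   app-01a
172.16.30.31    web-01a.corp.local   web-01a
172.16.30.32    web-02a.corp.local   web-02a
```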

Save the file and then reboot to activate the changes

reboot

Remember that you will need to connect to the new IP address when it comes back online.

 

 

Repeat with the other hosts

Now that web-02a has its own IP address, the web-01a vVM can be brought online and modified, along with the other vVMs.

web-01a - change IP, update /etc/hosts, reboot

db-01a - change IP, update /etc/hosts, reboot

app-01a - change IP, update /etc/hosts, update script, reboot

 

 

Update the script on app-01a

 

The "application" that runs on the app-01a server is a simple Perl script that connects to the database server and renders the records as an HTML page. Nothing fancy. One feature it does have is the ability to report the IP address of the web server that made the request. This makes demonstrating load balancing a little easier since the user can see where the request came from.

To provide useful feedback, the application script must be updated with the IP addresses of the web servers that you will deploy so that it can report both the name and the IP of the connecting server.

The script lives in the /var/www/cgi-bin/hol.cgi file and the only part that you need to worry about is the %webservers table at the top. Simply update the IP addresses here and add lines as needed. Save the file and log out.
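The file itself is not reproduced here, but since %webservers is a simple Perl hash mapping IP addresses to names, an updated table for this example might look something like this sketch (the exact formatting in hol.cgi may differ):

```perl
# Hypothetical sketch of the %webservers table in /var/www/cgi-bin/hol.cgi
%webservers = (
    '172.16.30.31' => 'web-01a',
    '172.16.30.32' => 'web-02a',
);
```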

 

 

Test the application

 

Open your web browser and navigate to the application's URL - https://webapp.corp.local/cgi-bin/hol.cgi

Notice two things:

  1. The HTTPS connection reports secure -- you remembered to import the certificate on the client, right?
  2. The IP address and name of the web server are reported on the page. Refresh a few times and it should report the other web server, too.

 

 

Finished!

 

That one was a little more complicated, but should cover most of the modifications that might be needed for these simple VMs.

Your use cases may be more complicated than this simple walk-through, but we hope this application can serve as a starting point for a large portion of them. If you run into any issues, please let the core team know and we will be happy to assist you.

 

HOL Module Switcher


This document outlines the intended functionality of the Hands-on Labs Module Switcher script, explains its current behavior, and documents its configuration, implementation, and use.


 

Background

 

VMware Hands-on Labs are made up of a lab environment (a “vPod”) paired with a manual. Each vPod may be paired with multiple manuals, and each manual may contain multiple modules. When a vPod starts up, it can only enter one Ready state, and it is up to the user to get the vPod from the base Ready state to any other state expected by the manual or module being used.

For example, consider a user who wants to jump into a lab at Module 3. During the course of the lab work in Modules 1-2, a user may have been tasked with making changes to the environment as part of the lab exercises. Once Module 3 is reached, there may be an expectation that certain virtual machines have been powered up, powered down, or otherwise reconfigured from the base state. Even when there is no strict dependency between modules, the workflow of the earlier modules may leave the lab environment in a different state. At the very least, users may see differences between the console and the screenshots provided in the manual.

There are generally two options to address such issues: have the user perform the necessary reconfiguration manually, or provide some form of automation to facilitate the process. In the VMware Hands-on Labs, we try to minimize non-essential inputs as much as possible to provide a better user experience, so going the automation route is preferred.

 

 

Goals and History

Standalone DOS/Windows batch files and PowerShell scripts have been used to provide “fast-forward” or reconfiguration capability within some of our labs. In other labs, a binary HOL Optimizer was used to provide a button dashboard of sorts. It was mostly limited to powering vVMs on and off and reverting snapshots of Layer 2 VMs (vVMs), and it was not as open and extensible as we would have liked. The goal of the HOL Module Switcher solution is to combine the best parts of these options while providing an open and manageable framework. I think this is summed up nicely by Bill Call: “Simple. Generic. Better.”

The guiding principle for the HOL Module Switcher is that it is to be kept as simple as possible so that it can be maintained with a minimum effort. A pure PowerShell solution was preferred over DOS/Windows batch or compiled binary for several reasons: access to the Windows graphical toolbox, ease of management, no need to install a compiler, and minimal overhead.

In addition, because this solution is written entirely in PowerShell, most of the functions the Hands-on Labs Core Team have created and tested to support the LabStartup script may be used to provide functionality to the Module Switcher’s module scripts. These functions are contained within the C:\HOL\LabStartupFunctions.ps1 file.

Starting with the VMworld 2016 development cycle, we have introduced the concept of co-op vPods. The Module Switcher can be used to move a shared vPod from its base running state into the start state required for whichever manual the user has selected. The Module Switcher has no knowledge of which manual a user has selected, but it is possible to have the user provide that information by selecting the Module Switcher configuration that matches the selected manual. More on this later.

 

 

Specifications

A vPod boots into a known base state. For the sake of simplicity, we will call this state Module 0. Every other module should be able to be reached from Module 0 state by applying specific scripted actions to the environment. This provides the capability for a user to jump in at any module within the lab: click the Module X Start button and you’re good to go with Module X once the scripts have been run.

As users progress through a lab manual, they should be instructed to activate the START action for the module that they would like to take prior to beginning other tasks for that module. Once the START action has completed preparation of the lab environment for the specified module, the user can proceed with the lab. As an example, many modules contain a few pages of expository material at the beginning of the module. Prior to presenting this material, the user would be instructed to activate the associated START action to have the script begin reconfiguration. While the user reads the material in the manual, the script can be performing its work. This minimizes user wait time and provides a good lab experience. Of course, if a user chooses to watch the script execute, there should be adequate feedback provided to them in the console as well (see R. Grider, K. Luck WebEx presentation section regarding script output within labs).

To keep implementation of the START/STOP actions as simple as possible, the Module Switcher limits the ability to “roll back” the environment. When a user jumps ahead to a lab module, all previous modules’ activation buttons are disabled. Teams may elect to implement the ability to take any module in any order; however, this increases the number of test cases to manage and endangers lab sanity.

NOTE: Before you consider allowing “roll back,” please think about not only the impact on vVMs, but also the configuration changes to Layer 1 VMs that occur during the course of your lab. Some changes are difficult or impossible to roll back without causing significant trauma to the lab environment. We are not able to roll back snapshots to Layer 1 VMs from within the lab.

 

 

Specifications Summary

For those who just want the highlights:

 

 

Module Switcher Details

 

To support this effort, the Hands-on Labs Core Team is providing the Module Switcher and the functions within LabStartupFunctions.ps1. The Module Switcher utility’s core functions are to present a panel of buttons, one for each module, and facilitate the execution of lab team-provided START/STOP actions.

The Module Switcher is made up of the ModuleSwitcher.ps1 file and a directory containing the scripts used to manage the environment. The default directory is C:\HOL\ModuleSwitcher, but that can be changed on the command line by providing an alternate path with the -ModuleSwitchDirPath switch. The ModuleSwitcher.ps1 file contains the form display, management, and script launching functionality. Typically, it will not be necessary to modify this script.

The ModuleSwitchDirPath (by default, C:\HOL\ModuleSwitcher) directory must contain one script file for each module in your lab, even if the script does nothing more than report that it ran. Having one file per module allows the Module Switcher to automatically configure the number of buttons in the window to match the number of modules in the lab.

The naming convention used is important: these are the names the ModuleSwitcher.ps1 script uses to enumerate and launch the scripts contained in the ModuleSwitchDirPath. The ModuleSwitcher automatically creates a Module group and a Start button for each Module##.ps1 script it finds.

Note that two digits are used for the script numbering (“Module01.ps1” rather than “Module1.ps1”). This ensures that the scripts sort in the proper order when displayed in Explorer or are enumerated and sorted in PowerShell.
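A quick shell illustration of the sort-order difference (a demonstration only, not part of the Module Switcher itself):

```shell
# Without zero-padding, "Module10" sorts between "Module1" and "Module2"
printf '%s\n' Module1.ps1 Module10.ps1 Module2.ps1 | LC_ALL=C sort
# Zero-padded names sort in true module order
printf '%s\n' Module01.ps1 Module10.ps1 Module02.ps1 | LC_ALL=C sort
```

PowerShell's default string sorting behaves the same way, which is why the two-digit convention matters for the button ordering.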

 

 

BYO Scripts

That is most of what you need to know about the Module Switcher. It may be helpful to look at it as a simple GUI menu system. The core functionality, the switching, is provided by your team in the form of the Module scripts’ START and STOP actions.

Ideally, anything that you need to do to check or alter the lab environment state (URLs, services, etc.) in preparation for a given module can be called using this mechanism. If it is something that cannot be done directly with PowerShell, there are other options, such as calling out to a Linux machine to execute a command, or calling a Windows/DOS batch file from a PowerShell context. Please let the core team know if you need assistance, and I am certain the SDK/API team would be interested in hearing about anything that you might be trying to do but are unable to do.

 

 

Module##.ps1 Script Contents

 

Each Module##.ps1 script file in C:\HOL\ModuleSwitcher has a START action and a default (STOP) action. The simplest script looks something like the image in this step.

When ModuleSwitcher.ps1 calls the Module##.ps1 script for module ##, it will specify an action of START if the module is being started (the clicked button's text is "Start"). Any other call will trigger the STOP (default) action. At a high level, the START block should prepare the environment for the user to begin the specified module and the STOP block should undo those actions.
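As a point of reference, a minimal sketch of such a script, modeled on the Module03 example later in this article (the module number and messages here are illustrative):

```powershell
# Hypothetical minimal Module##.ps1 -- the shipped examples may differ slightly
if( $args[0] -eq 'START' ) {
  # START: prepare the environment for this module
  Write-Host "Module XX START action"
} else {
  # STOP (default): undo the START changes
  Write-Host "Module XX STOP action"
}
# Keep the window open until the user presses ENTER
PAUSE
```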

Please note the PAUSE at the end of the script. ModuleSwitcher.ps1 will call this script in its own PowerShell window so that its progress and actions can be reported to the user. The PAUSE will leave that information on the console until the user presses ENTER to continue. This is helpful in showing the user that the environment is ready for them to proceed, still working on it, or blocked for some reason.

 

 

Execution

Calling the Module Switcher is as simple as running the PowerShell script. It is a little cleaner for users if you wrap it in a batch file that calls PowerShell and provides the ModuleSwitcher.ps1 script as an argument. The following 2-line batch file will run the ModuleSwitcher.ps1 in a hidden PowerShell window so that only the main panel is visible. This has been provided as C:\HOL\ModuleSwitcher.bat

@echo off
C:\WINDOWS\system32\windowsPowerShell\v1.0\PowerShell.exe -windowstyle hidden "& 'C:\HOL\ModuleSwitcher.ps1'"

 

 

Initial Run

 

When the ModuleSwitcher.ps1 script first runs, it looks for files matching the “Module##.ps1” pattern in the C:\HOL\ModuleSwitcher directory. Based on the number of scripts found, a window is created containing the required number of buttons, in numerical order, one for each module. Continuing our example, there are 6 scripts in the directory.

NOTE: The default layout is 4 columns across, which should work well for most use cases. As of the time of this writing, the maximum supported layout is 4 columns by 5 rows, which will support 20 modules. If you require support for more than that, please let the core team know.

Each module is presented with a single Start button. Clicking the Start button for a module will first call the STOP action from the currently active module. Once that script has run and the user has pressed the ENTER key, the Module##.ps1 script for the selected module is called with the START action.

NOTE: As of version 1.19 of ModuleSwitcher, pods are assumed to enter the Module 0 state at boot time. Prior versions assumed Module 1, but this behavior did not necessarily make sense for shared pods.

 

 

Jump to Module 3

 

In this example, if a user wants to jump to Module 3 and clicks the Start button in the Module 3 section, the Module01.ps1 script will be called with the STOP action, followed by the Module03.ps1 script with the START action. Why? Because Module 1 is listed as the current active module (see the status bar at the bottom of the window in the previous step's image), the script knows which module needs to be cleaned up prior to beginning the selected one.

Once the Module 3 START action has been processed, and the user presses the ENTER key to continue, the ModuleSwitcher buttons look like the image in this step.

Note that the Module 1 and Module 2 buttons are now disabled (greyed out) because they have been either completed or bypassed. The Module 3 button changes from Start to Stop and the Active module has been updated to show module 3 in the status bar at the bottom of the window.

 

 

Jump to Module 5

 

Continuing the example, and for the sake of completeness, jumping to Module 5 by clicking on its Start button will run the Module 3 script with the STOP action (since it is the current active module) and then run the Module 5 script with the START action. The buttons for Modules 1 through 4 are disabled because they have been completed or bypassed. Note that you can see which ones have been bypassed because the disabled buttons still show the label “Start”.

 

 

OOPS! I closed the form/window/script...

Getting rid of the Module Switcher form is a matter of clicking the “X” in the upper right corner. The thought is that this form will remain open throughout the lab session, and the window can be minimized as needed. Things happen, though, and in the event that the form is accidentally closed, it can be reopened by re-running the batch file; state is preserved: the current active module is restored and the previous buttons remain disabled.

Note: to preserve state, the currently active module number is stored in a text file called currentModule.txt inside the ModuleSwitcher directory. In the event that you want to test your ModuleSwitcher from the beginning, you can exit the form, delete this file, and then reopen the form. The ModuleSwitcher script will automatically ignore a state file that was not created on the current day. This prevents stale files within captured vPods from causing issues for users.
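The staleness rule can be illustrated in shell terms (the real check is implemented in PowerShell inside ModuleSwitcher.ps1; the file name is from this article, the rest is illustrative):

```shell
# Illustration only: a currentModule.txt from a previous day is treated as stale
state=/tmp/currentModule.txt
echo 3 > "$state"
if [ "$(date -r "$state" +%F)" = "$(date +%F)" ]; then
  echo "restore active module $(cat "$state")"
else
  echo "stale state file - start fresh"
fi
```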

 

 

User Workflow

 

What does it look like when a user clicks the Start button? In this example, the user has clicked the Module 3 Start button from the initial lab state (active module = 1). In this image, the Module01 script has been run with the STOP action and the user has pressed ENTER to continue.

The Module03.ps1 script has been called with the START action in the new PowerShell window and is waiting for user confirmation.

In this case, the Module03 script needs to check and manage power state of a VM, so it loads PowerCLI and LabStartupFunctions.ps1, connects to the vCenter, and then checks the state of the app-01a VM. If that VM is powered off, this script will start it. Otherwise, it just reports that the VM is online, no warnings or errors, then activates the “Chaos Monkey” and waits for the user to press the ENTER key.

 

 

Known Limitations

This simple script has some known limitations which are called out in this section.

  1. A Module##.ps1 script must exist for every module in the lab, even if the script does nothing more than answer the START action and return.
  2. A maximum of 20 modules is supported (5 rows of 4 buttons) per instance; an error message is thrown on the console if more than 20 Module##.ps1 scripts are detected. Even at 20 buttons, the panel starts to consume a lot of screen real estate and looks a bit cluttered. If you really need more, please contact the HOL core team.
  3. The title text for the window and the large “Hands-on Labs Module Switcher” text is a single line with a fixed width (the static width of the window). At this time, the text’s length is not validated. Please keep the name of your ModuleSwitchDir folder under 20 characters to prevent display issues.
  4. The LabStartupFunctions code has been developed with some specifics for LabStartup execution, including automatic remediation when failures are detected. While this should not typically cause issues in a pod that has reached the Ready state, we are in the process of validating and updating the contained functions as required. While it may seem obvious, it is worth noting that using these functions requires formatting the inputs the way that they are in the LabStartup.ps1 script.
  5. Multiple simultaneous executions - the script does not handle being run multiple times simultaneously. Doing so will create multiple panels and may confuse the user.

 

 

Obtaining the ModuleSwitcher

The ModuleSwitcher code is not currently a part of the base vPod images. The script and its example files are available on an ISO image in the HOL-Dev-Resources catalog. Look for a media item called something like "ModuleSwitcher-v1.19" (note that the version number may be different if there have been changes).

The recommended location for the files is C:\HOL\ for the main ModuleSwitcher.ps1 script and C:\HOL\ModuleSwitcher for the Module##.ps1 scripts if only one panel is needed in the vPod. For multiple panels, as in a co-op vPod, the structure should look something like the following, where HOL-1700-1, HOL-1700-2 and "My Really Long Module Name" are different sets of modules/buttons corresponding to different manuals.

C:\HOL
│   ModuleSwitcher.ps1
│
└───ModuleSwitcher
    │   MS_for_HOL-1700-1.bat
    │   MS_for_HOL-1700-2.bat
    │
    ├───HOL-1700-1
    │       Module01.ps1
    │       Module02.ps1
    │       Module03.ps1
    │       moduleMessages.txt
    │
    ├───HOL-1700-2
    │       Module01.ps1
    │       Module02.ps1
    │       Module03.ps1
    │       Module04.ps1
    │       Module05.ps1
    │       Module06.ps1
    │       Module07.ps1
    │       Module08.ps1
    │       Module09.ps1
    │       Module10.ps1
    │       Module11.ps1
    │       Module12.ps1
    │       Module13.ps1
    │       Module14.ps1
    │       Module15.ps1
    │       Module16.ps1
    │       Module17.ps1
    │
    └───My Really Long Module Name
            Module01.ps1
            Module02.ps1

The MS_for_HOL-1700-1.bat and MS_for_HOL-1700-2.bat files at the root of C:\HOL\ModuleSwitcher are batch files that launch the respective panels.

As the following article indicates, this structure can be used to produce 3 different panels for 3 different sets of modules.

 

 

Advanced Configuration

 

There are three advanced options that are implemented as command line parameters to the ModuleSwitcher.ps1 script.

-Force : this command line switch ignores the data stored in the currentModule.txt file, enables all buttons, sets the current active module to 1, and overwrites the data in currentModule.txt. This option is a shortcut during development and should not be used in a released pod since it defeats the state preservation function.

-ModuleSwitchDirPath : this option takes a string specifying a custom path to the directory containing the Module##.ps1 scripts. The name of the directory is also used in the name displayed on the ModuleSwitcher window. This means that it is possible to have multiple sets of module scripts which can be used depending on the manual the user is following.

-PanelName : this option allows a specific name to be used for both the panel's window title and the large text in the window. Do not use a title that is too long or the text will be "squished" or overflow the bounds of the window.

Calling

C:\HOL\ModuleSwitcher.ps1 -ModuleSwitchDirPath C:\HOL\ModuleSwitcher\HOL-1700-1

Results in the panel in this step.

Note that both the title of the window and the name displayed within the window reflect the name of the specified directory. The script uses the leaf directory name, so the path may include other intermediate directories if it facilitates organization.

Note: If the default ModuleSwitcher name is used, the panel will show Hands-on Labs Module Switcher instead of the folder name.

 

 

Example - Module 3

This is the example code for Module03.ps1. To keep the code cleaner, the START and STOP actions have each been encapsulated within their own functions: ModuleStart and ModuleStop. The main logic is simplified to:

if( $args[0] -eq 'START' ) {
  ModuleStart
} else {
  ModuleStop
}
PAUSE

A LoadPowerCLI function has been written so that it may be reused in other module scripts. It also loads the C:\HOL\LabStartupFunctions.ps1 file to access some of the pre-written code for connecting to vCenter. Note that a bare Connect-VIServer could be called here, but this shows how to access LabStartupFunctions if needed:
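The actual function ships with the Module Switcher example files; a hypothetical sketch, assuming snap-in-based PowerCLI, might look like:

```powershell
# Hypothetical sketch of a LoadPowerCLI helper -- adjust to your pod's PowerCLI version
Function LoadPowerCLI {
  if( -not (Get-PSSnapin -Name 'VMware.VimAutomation.Core' -ErrorAction SilentlyContinue) ) {
    Add-PSSnapin 'VMware.VimAutomation.Core'
  }
  # Dot-source the shared HOL functions, including the vCenter connection helpers
  . 'C:\HOL\LabStartupFunctions.ps1'
}
```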

 

 

 

More Example Code - the EXAMPLE directory

The Module Switcher package includes an EXAMPLE directory which contains two separate module directories and two batch files which can be used to launch panels from those directories.

 

 

HOL-1700-1

 

This example contains 3 modules. Module01.ps1 is a skeleton that simply prints out that it is stopping. The START action for Module 1 is never called explicitly as it is assumed that LabStartup handles getting the environment to Module 1 state.

Module02.ps1 is an example that loads PowerCLI and LabStartupFunctions, then performs actions on some vVMs. Note that this will not function correctly if those vVMs are not present in your environment, but you can modify the script to reference vVMs in your pod. Finally, Module03.ps1 is more of a shell or starting point which contains minimal suggested functionality.

This is probably the code that you want to start with when building your new module scripts.

 

 

HOL-1700-2

 

This example has 17 modules. Each script is just a skeleton that responds to START/STOP action and reports which module’s script was called. In addition to showing what the panel looks like with a large number of buttons and how to load a different set of modules using the command line options, this can be used to test the user workflow and script-calling logic of the ModuleSwitcher.

 

 

My Really Long Module Name

 

Because the Module Switcher script will take the name of the ModuleSwitchDirPath directory as the window title and form header, you can place text that you want in those locations. Be aware, though, that the text box is a fixed size, so the amount of text you can put in there is limited. Too much text will overrun the box and/or cause it to look cramped.

By default, the text " Module Switcher" is appended to your text to create the form header text.

If you want to use something else for the header text, specify that string on the command line using the -PanelName option.

 

 

How To: Display Current Module on DesktopInfo

 

Some of the feedback we received on the initial version of the Module Switcher was that it would be nice to display the currently active module on the desktop in addition to inside the ModuleSwitcher window. This is accomplished by an edit to the C:\DesktopInfo\DesktopInfo.ini file to provide a spot for the data, a file containing one line of "message" data to display, and a couple of lines to reset its contents at Main Console shutdown:

1. C:\DesktopInfo\DesktopInfo.ini (add around line 25, in the [items] section after the COMMENT):

#ModuleSwitcher
FILE=active:1,interval:20,color:55CC77,style:b,type:text,text:Module,file:C:\HOL\ModuleSwitcher\currentMessage.txt

2. <ModuleSwitchDirPath>\moduleMessages.txt:

The loneliest number
Who does #2 work for?
Three's a crowd
I am Number Four
Number Five is Alive!
Six Ways to Sunday
The Magnificent Seven

This file is provided per panel by including it in the ModuleSwitchDirPath (with the Module##.ps1 files). Each line in the file corresponds to one module: line 1 is module 1, line 2 is module 2, and so on. Using the above file, ModuleSwitcher will populate the C:\HOL\ModuleSwitcher\currentMessage.txt file with the corresponding line. So, a few seconds after module 2 becomes the active module, the desktop will show as in the image associated with this step.

NOTE: There is limited space available in the DesktopInfo region, 20-25 characters, so please keep the text you wish to display as short as possible while still being useful.

3. C:\HOL\LabLogoff.ps1

Unless you want to remember to clean up the file each time you shut down your pod, one of the automatic scripts should be modified to reset the currentMessage.txt file. If this step is not performed and the pod was shut down while Module 4 was active, it will boot up showing Module 4 as the active module until the user clicks another module in the Module Switcher. To address this, add these two lines to the end of the LabLogoff script and let the system handle it for you:

$messageFile = 'C:\HOL\ModuleSwitcher\currentMessage.txt'
Set-Content -Value "None Selected" -Path $messageFile
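The line-to-module mapping described in step 2 is easy to sanity-check from a shell (illustration only; the Module Switcher does this internally in PowerShell):

```shell
# Line N of moduleMessages.txt becomes the desktop message for module N
printf '%s\n' "The loneliest number" "Who does #2 work for?" "Three's a crowd" > /tmp/moduleMessages.txt
module=2
sed -n "${module}p" /tmp/moduleMessages.txt
```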

 

SSL: Replacing the VAMI certificate


While many VMware products support a web-based interface for replacing the SSL certificate used by the product itself, or its primary web interface, many still do not include replacement of the VAMI certificate in the workflow.

For products that use the VAMI -- look for something like https://hostname:5480 -- the replacement is pretty standard and follows the process outlined in this article. While it is all command-line work, it is one of the simpler replacements as long as the files have been constructed properly. Fortunately, creating the certificate using the HOL-SSL.psm1 module and the process described in the article SSL: Issuing New Certificates takes care of most of the heavy lifting.


 

Getting started

 

While there are standards for how the certificates and keys are encoded, how they are assembled for certain use cases varies. It seems that every system has its own requirements that must be followed.

For the VAMI, PEM format is required and the private key and certificate must be in the same file. Furthermore, the private key cannot be encrypted. Lastly, the file should have UNIX-style line endings (LF) and not DOS-style (CR/LF).
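If you ever need to assemble such a file by hand, the recipe boils down to concatenating the unencrypted key and the certificate and stripping any carriage returns. A minimal sketch (make_vami_pem is a hypothetical helper; the HOL PowerShell functions already produce a correctly formatted rui.pem):

```shell
# Hypothetical helper: build a VAMI-style combined PEM from separate files.
make_vami_pem() {
  key="$1"    # unencrypted private key, PEM format
  cert="$2"   # certificate, PEM format
  out="$3"    # combined output file
  # Concatenate key then cert, converting DOS (CR/LF) line endings to UNIX (LF)
  cat "$key" "$cert" | tr -d '\r' > "$out"
}
```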

Generate the certificate on the Main Console machine using the PowerShell functions provided. Be sure you provide the proper DNS name and IP address so that the certificate can be created correctly.

NOTE: If you generated the certificate using the PowerShell functions in the HOL pod, the rui.pem file is already formatted correctly.

If you are unsure about the line-ending type, you can run the file through the following awk script on the appliance to make sure it is good.

# awk '{sub (/\r$/,"");print}' MYFILE.pem > MYNEWFILE.pem

 

 

Where is the file?

 

Most of our new appliances run the lighttpd (pronounced "lighty") web server for the VAMI and store its SSL certificate and private key in the file: /opt/vmware/etc/lighttpd/server.pem

 

 

Make a backup

 

To begin, access the appliance using SSH, log in as the root user, and back up your current certificate file.


# cp /opt/vmware/etc/lighttpd/server.pem /opt/vmware/etc/lighttpd/server.pem-bak

 

 

Upload and replace the certificate

Next, using WinSCP, copy the new certificate to the appliance and replace the content of the /opt/vmware/etc/lighttpd/server.pem file.

If you upload the rui.pem file to the root user's home directory, you can use the following command to replace the old one:

# cp /root/rui.pem /opt/vmware/etc/lighttpd/server.pem

NOTE: a PEM file is just a text file, so, if you like, you can open up the target file on the appliance inside the SSH session, open the new certificate in Notepad++ on Main Console, and simply copy/paste the content via SSH rather than transferring it. Your call.

If you want to make sure the certificate transferred OK, you can use openssl to verify it:

# openssl x509 -in /opt/vmware/etc/lighttpd/server.pem -text -noout
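To go a step further and confirm that the certificate and private key in the combined file actually belong together, you can compare their moduli. A sketch, assuming an RSA key like the ones HOL generates (pem_key_matches_cert is a hypothetical helper):

```shell
# Hypothetical check: do the cert and key in a combined PEM file match?
pem_key_matches_cert() {
  pemfile="$1"
  # Both commands print "Modulus=..."; a matching pair prints identical values
  cert_mod=$(openssl x509 -noout -modulus -in "$pemfile")
  key_mod=$(openssl rsa -noout -modulus -in "$pemfile")
  [ -n "$cert_mod" ] && [ "$cert_mod" = "$key_mod" ]
}
```

On the appliance, something like `pem_key_matches_cert /opt/vmware/etc/lighttpd/server.pem && echo "key and cert match"` would confirm the pair before you restart the web server.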

 

 

Restart the web server and test

 

Run the following command to restart the lighttpd server and load the new certificate

# service vami-lighttp restart

Finally, access the VAMI's URL and validate that the certificate has been replaced.

NOTE: You might need to restart your browser to see the new certificate

 

SSL: Replacing certificates for vRealize Orchestrator (vRO) Appliance


There are a few different ways to handle replacement of the SSL certificate used by vRealize Orchestrator (vRO), but this process is probably the easiest to understand and follows HOL standard certificate generation processes.

To begin, get the host name and the IP address of the vRO machine. The standard name for HOL vPods is vro-01a.corp.local with an IP address of 192.168.110.79. Unless you have a very good reason, you should use this combination.

Follow the instructions in the article SSL: Issuing New Certificates to generate the proper certificate and private key with the New-HostSslCertificate function.

From a PowerShell window on the Main Console, it will look something like this

PS> Import-Module C:\hol\SSl\hol-ssl.psm1
PS> New-HostSslCertificate -HOST_SHORTNAME vro-01a -HOST_IPv4 192.168.110.79

Once you have successfully created the files, proceed with the replacement steps in this article.


 

All the same but different

 

As far as certificate replacement goes, vRO might have the least standard process of any current VMware product. Brace yourself.

Before you do anything else, ensure that you have SSH access to the vRO appliance in your pod. While you're at it, make sure you can log in seamlessly using the key on the Main Console. It takes just a few minutes to set up, so go ahead and do it now. You can go here for reference - Key-based SSH from the Main Console

 

 

Copy the certificate files to the vRO appliance

 

The replacement work for vRO occurs at the command line of the appliance. There are two certificates that are our primary concern:

  1. VAMI certificate
  2. vRO certificate

Replacement of the VAMI certificate follows the standard process outlined in the article SSL: Replacing the VAMI certificate and is not covered here. Replacing the certificate in the vRO keystore covers the certificate used on both the vRO server (8281) and configurator (8283) interfaces, so, in a sense, you get two for one.

Connect to the vRO appliance from Main Console using WinSCP. Use the root account with the VMware1! password.

Upload two files to the root user's home directory: the rui.pfx bundle containing the new certificate and private key, and the CA-Certificate.cer file containing the Main Console CA certificate.

If you are also replacing the VAMI certificate, upload the rui.pem file as well.

Disconnect from the appliance and close WinSCP.

 

 

Stop the services

 

Connect to the vRO appliance from the Main Console using PuTTY as the root user.

It is a bad idea to try to replace certificates while they are in use, so stop the services that use the certificates you need to replace:

# /etc/init.d/vco-server stop
# /etc/init.d/vco-configurator stop

 

 

Change to the directory containing the keystore and make a backup

vRO makes use of a Java keystore to hold its certificate and private key. It isn't quite as simple as swapping out a pair of files and restarting the services, but it is not terrible once you know exactly how to do it. Nevertheless, making a backup is a good idea before beginning any changes.

# cd /var/lib/vco/app-server/conf/security
# cp jssecacerts jssecacerts_BAK

 

 

Follow the process

 

At any time, if you want to see the contents of this keystore, you can execute the following command:

# keytool -list -keystore /var/lib/vco/app-server/conf/security/jssecacerts -storepass dunesdunes

If you are starting with a "clean" vRO appliance, there will be one key in the store, with an alias of dunes, just like in the screen shot.

To replace this certificate, you need to:

  1. Remove the current "dunes" certificate
  2. Import the CA certificate from Main Console
  3. Import a new "dunes" certificate bundle, which is the certificate and private key stored in the PFX file

 

 

Remove the "dunes" certificate

 

To eliminate confusion, remove the old certificate prior to adding the new one:

# keytool -delete -alias dunes -keystore jssecacerts

Note that you can either type the password "dunesdunes" when prompted, or include it on the command line with the -storepass option

 

 

Import the Main Console CA Certificate

 

Building a chain of trust for certificates begins with the root. Because this vRO appliance has no idea whether it should trust the CA on Main Console, you need to tell it that everything will be OK by importing the CA certificate and explicitly establishing the trust chain.

# keytool -importcert -alias root -keystore jssecacerts -storepass "dunesdunes" -file ~/CA-Certificate.cer -trustcacerts

This command will display the CA certificate and ask if you want to trust this certificate. Type yes and Enter to confirm.

Listing the contents of the keystore after this step should show one certificate in the store. Its alias should be root and it should be flagged as trustedCertEntry, as in the screen shot.

 

 

Import the new certificate from the PFX keystore

 

The final manipulation is to import the protected certificate and key from the PFX file into the vRO keystore. This can be accomplished with a single step as long as all of the parameters are included and correct

# keytool -importkeystore -srckeystore ~/rui.pfx -srcstoretype pkcs12 -srcstorepass testpassword -alias rui  -destkeystore jssecacerts -deststoretype jks -deststorepass dunesdunes -destkeypass dunesdunes -destalias dunes

 

 

 

Restart the services

 

With the certificates in place, it is time to bring the services back online

# /etc/init.d/vco-server start
# /etc/init.d/vco-configurator start

Make sure both services show a status of RUNNING

 

 

Verify

 

It takes a few minutes for the services to come up fully, but once they do, the following URLs should show "Green" https text in Google Chrome and further analysis should show that the new certificate is being used.

https://vro-01a.corp.local:8283/vco-config/

https://vro-01a.corp.local:8281/vco/

 

How to use Test-URL to Find a Lookup String


Part of the LabStartup checking in the Hands-on Labs vPods involves querying a URL and looking for an expected text string in the result. Passing this check provides reasonable confidence that the service is not only responding on the proper port, but it is up enough to provide an expected result.

The obvious question that arises is, "How do I determine a proper expected result?"

Before you proceed, it is best to make sure the system you are checking is completely online and responding normally. You don't want to waste time trying to connect to and validate something that is not ready yet. That's never happened to me...


 

The Web Browser Method

 

Determining what to pass as an expected return string should be a simple matter of opening a web browser, entering the URL, and looking at the resulting page for a string to match.

If there isn't much useful text displayed -- as in this case where we get "User name:" and "Password:" -- sometimes viewing the page source can help. This is usually the same text that will be passed to the Test-URL function for validation in LabStartup.

How you show the source depends on the browser, but it is usually listed under Developer Tools or something similar. Notice in the screen shot that it would be much more useful to match on "Virtual Appliance Management" than the generic "Password:"

This is the simplest use case, and, if it works for you, use it! Unfortunately, the text shown here is not always the same that is provided to Test-URL. Due to some intelligence built in to web browser applications, they are able to follow multiple redirections and combine content from various sources, including dynamic content. Test-URL uses the .NET WebClient object, which just knows how to pull text provided at a given URL.

 

 

Using Test-URL

 

If LabStartup uses Test-URL to perform the validation, why not use that same function to look for the validation string? This process is a little more complicated than pointing a web browser at the URL, but using Test-URL will allow you to see exactly what the LabStartup script will see.

Start by opening a new PowerShell window and sourcing the LabStartupFunctions.ps1 file. This gives you access to the same functions used by LabStartup.ps1. NOTE: that is a dot "." and a space at the beginning of the command. This "dot-sources" the LabStartupFunctions.ps1 script, loading its function definitions into the current session.

. C:\HOL\LabStartupFunctions.ps1

The Test-URL function uses a reference to an external variable to provide status to the LabStartup script. The function will not work without the external variable defined, so you must create one to pass in and receive the status.  It is sufficient to set the value of the variable to the empty string, just so it has some value. If you have a favorite number, you can use that if you like.

$result = ''

Finally, call the Test-URL function, passing in the URL and something to look for in the text retrieved from that URL. The syntax of the parameter passed to the -result switch may seem crazy but that is required to cast the $result variable as a reference for consumption by the Test-URL function. The important parts are the values of the -url and the -lookup parameters.

Test-URL -url https://vcsa-01a.corp.local/ -lookup "SOMETHING" -result ([ref]($result))

The previous command will look for the string "SOMETHING" in the text returned from the HTTPS connection to the root of vcsa-01a.corp.local.
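Stripped of the PowerShell plumbing, the check is just "fetch the page text and search it for a substring." A rough shell equivalent for experimentation (find_lookup is a hypothetical stand-in, not part of LabStartup):

```shell
# Hypothetical stand-in for the lookup half of Test-URL.
# Reads page text on stdin and searches it for a fixed string.
find_lookup() {
  if grep -qF "$1"; then
    echo "found"
  else
    echo "not found"
  fi
}

# On a live pod you might feed it like this (requires the VCSA to be up):
#   curl -ks https://vcsa-01a.corp.local/ | find_lookup "VMware vSphere 6"
```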

 

 

Not found?

 

In the previous step, I looked for the text "SOMETHING" and it was not found. We know that would not be a good check to put into LabStartup since it will fail, even when the appliance is online. By adding the -Verbose switch to the Test-URL call, you can have it display the text it receives from the URL that you passed in addition to the result of its search.

Test-URL -url https://vcsa-01a.corp.local/ -lookup "SOMETHING" -result ([ref]($result)) -Verbose

So, if "SOMETHING" was not a good candidate, something else from the text in yellow should be good to use. Often, you will be able to find something useful in this text. I think something like "VMware vSphere 6" could be a good choice here:

Test-URL -url https://vcsa-01a.corp.local/ -lookup "VMware vSphere 6" -result ([ref]($result))

This should return "Successfully connected to https://vcsa-01a.corp.local" and would successfully pass validation in LabStartup.

If you are unable to find something useful to match on, it may be necessary to select a different URL. In the case of the VCSA, checking the vSphere Client URL is a little more useful than checking the root of the appliance, since the client is what the user really cares about being able to access.

 

 

Putting it into LabStartup.ps1

 

Once you have the URL and lookup string that works when the service is up, you can put it into the URLs table in LabStartup.ps1. Simply add a new line between the $URLs (line 166 in the screen shot) and the closing brace (line 173 in the screen shot) and populate it as two single-quoted strings separated by an equals sign:

'URL I want to check' = 'value I want to look for'

If you have any questions or require any assistance with this, please ask the core team. We are constantly trying to improve these checks while also attempting to keep them as simple as possible.

 

How to Upgrade VMware Tools on Linux Machines


Many people are familiar with the process used to upgrade VMware tools on Windows machines. For the most part, this can be handled from within vSphere, and the process will be automatic. For Linux, it can be a little more complicated, and in the Hands-on Labs, where the virtual machines we use are very scaled down, it can be even more of a challenge, especially for those not very comfortable at the command line.

This article covers two options for upgrading VMware Tools to the latest version on CentOS linux machines when there is minimal space available on the existing partitions.

Thanks to Henrik Moenster for sharing the process he used to upgrade SLES machines that had almost no free space.


 

Check the version of VMware Tools

Before starting, check the version of VMware Tools that is currently installed

# vmware-toolbox-cmd -v
9.10.0.43888 (build-2476743)

 

 

Choose your own Adventure

 

There are several options to address this issue. Each of the options has different requirements for access to the environment and levels of difficulty. The two options presented here fall into the following categories:

  1. There is not enough space to handle unpacking the tarball containing the VMware Tools installer from the ISO image.
  2. There is not enough space to handle the upgraded VMware Tools binaries, even if the existing version is removed first

The solution to the first problem is to add a temporary drive, unpack the installer to that space, and perform the installation, replacing the currently installed tools. Following a successful install, the temporary drive is removed and deleted. You want to use this option if you can.

The solution to the second problem is to provide additional capacity to the VM so that tools can be accommodated. The best option, if you have the correct access to the environment, is to extend the VMDK backing the VM and then grow the partition. This is covered in the article Increasing Storage for Base Linux Machines. If you perform the expansion described in that article, you can just run through the usual process to upgrade the tools.

In a more constrained environment, it is possible to attach a small permanent drive to the VM and mount it under the path used by the VMware Tools binaries. The resulting VM is a little more complicated, but this may be required in certain scenarios.

 

 

Option 1 - Using a Temporary Drive to hold the VMware Tools installer

The process looks something like this:

  1. Add a new VMDK to the VM (whether you need to do this via vCD or vCenter, it does not matter)
  2. Create a new partition on the drive
  3. Format and mount the new partition
  4. Mount the VMware Tools Installation ISO and extract the VMware Tools Installation package onto the temp drive
  5. Run the VMware Tools Installation
  6. Shut down the VM
  7. Remove the temporary drive

 

 

 

Add a new VMDK to the VM

With the VM powered off, follow the process in your environment to add a new drive to the VM. Something between 500 MB and 1 GB should be fine. Be sure that you can identify this one so that you can remove it later. If the current disk is 1 GB, maybe use a 500 MB temp drive so that it can be easily identified later.

 

 

Create a new partition on the drive

Power up the VM and log in as the root user.

The device you added should be the "next" device, but you can look for it a few different ways.

In a VMware virtual machine, this usually works:

# ls /dev/sd*

In the resulting list of devices, /dev/sdb does not have a partition identified by the OS:

/dev/sda /dev/sda1 /dev/sda2 /dev/sdb

You can check further by using fdisk -l /dev/sdb to list the partitions and confirm that it finds none on the disk, but if you only have one disk on your VM, plus the one that you added, this is a solid bet.

Use fdisk to create the new partition, passing the device you located to the command. We don't need to get fancy since this is just a temporary drive.

# fdisk -c -u /dev/sdb
Command (m for help): n
Command action
  e extended
  p primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-1023999, default 2048): [hit Return to accept default]
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-1023999, default 1023999):
Using default value 1023999
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table. 
Syncing disks. 

Verify that the new partition shows up

# ls /dev/sd*

Note that the resulting list of devices shows the partition you created:

/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdb1

 

 

 

Format and mount the new partition

Create a file system (again, nothing fancy) by passing the new partition's device to the mkfs.ext3 utility:

# mkfs.ext3 /dev/sdb1

After this command completes, create a temporary mount point and mount the drive there. I use /tmp/tools in this example.

# mkdir /tmp/tools
# mount /dev/sdb1 /tmp/tools

 

 

Mount the VMware Tools Installation ISO and extract the VMware Tools Installation package onto the temp drive

Begin the process of installing tools by selecting the appropriate option from vCenter or vCD. If prompted, use the "Interactive" installation method. If you don't have space, the automatic option will fail anyway.

In Interactive mode, the VMware Tools ISO is loaded into the virtual CD drive of the VM, but it is not mounted automatically on most command-line-only installations. That's where you come in:

# mount /dev/cdrom /media

Now, unpack the VMware Tools installer bundle to the temporary drive:

# tar xfzp /media/VMwareTools*.tar.gz -C /tmp/tools/

 

 

Run the VMware Tools Installation

Unless you have specific requirements, specify the -d option to the installer to accept the defaults and prevent all of the prompts:

# /tmp/tools/vmware-tools-distrib/vmware-install.pl -d

If you want to be thorough, check the version of the VMware tools that is now installed:

# vmware-toolbox-cmd -v

 

 

Shut down the VM

Although not strictly necessary, I like to clean up a little before I shut down:

# umount /media
# umount /dev/sdb1
# rmdir /tmp/tools
# shutdown -h now

 

 

Remove the temporary drive

Follow the required process to remove and delete the temporary drive that you added at the beginning of this workflow. Just make sure you remove the correct one.

Boot the VM and bask in the results of your efforts

 

 

Option 2 - Using a Permanent Drive to hold the VMware Tools binaries

Note that this is not a recommended solution, but it has been included for the sake of completeness. If you can grow the existing filesystem, that is a much cleaner solution. However, if you lack the access to expand the backing VMDKs, need to get this done quickly, and don't mind that the resulting VM's configuration is a bit complicated, this is an option.

The process looks a lot like the previous one, minus removing the drive at the end:

  1. Add a new VMDK to the VM (whether you need to do this via vCD or vCenter, it does not matter)
  2. Create a new partition on the drive
  3. Format and mount the new partition
  4. Mount the VMware Tools Installation ISO and extract the VMware Tools Installation package onto the new drive
  5. Run the VMware Tools Installation
  6. Update /etc/fstab
  7. Shut down the VM

 

 

Add a new VMDK to the VM

With the VM powered off, follow the process in your environment to add a new drive to the VM. Typically, 250 MB should be fine to hold the tools.

 

 

Create a new partition on the drive

Power up the VM and log in as the root user.

The device you added should be the "next" device, but you can look for it a few different ways.

In a VMware virtual machine, this usually works:

# ls /dev/sd*

In the resulting list of devices, /dev/sdb does not have a partition identified by the OS:

/dev/sda /dev/sda1 /dev/sda2 /dev/sdb

You can check further by using fdisk -l /dev/sdb to list the partitions and confirm that it finds none on the disk, but if you only have one disk on your VM, plus the one that you added, this is a solid bet.

Use fdisk to create the new partition, passing the device you located to the command. We don't need to get fancy here, either.

# fdisk -c -u /dev/sdb
Command (m for help): n
Command action
  e extended
  p primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-1023999, default 2048): [hit Return to accept default]
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-1023999, default 1023999):
Using default value 1023999
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table. 
Syncing disks. 

Verify that the new partition shows up

# ls /dev/sd*

Note that the resulting list of devices shows the partition you created:

/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdb1

 

 

 

Remove the old VMware Tools

This frees up some space on the drive to accommodate the installation media and empties out the directory that will be used as the mount point for the new drive:

# cd /usr/lib
# rm -Rf vmware-tools
# mkdir vmware-tools

 

 

Format and mount the new partition

Create a file system (again, nothing fancy) by passing the new partition's device to the mkfs.ext3 utility:

# mkfs.ext3 /dev/sdb1

After this command completes, remove the automatic file system checks

# tune2fs -c 0 -i 0 /dev/sdb1

Mount the new drive to the VMware Tools directory

# mount /dev/sdb1 /usr/lib/vmware-tools

Make a temporary directory to hold the installer files

# mkdir /tmp/tools

 

 

Mount the VMware Tools Installation ISO and extract the VMware Tools Installation package

Begin the process of installing tools by selecting the appropriate option from vCenter or vCD. If prompted, use the "Interactive" installation method. If you don't have space, the automatic option will fail anyway.

In Interactive mode, the VMware Tools ISO is loaded into the virtual CD drive of the VM, but it is not mounted automatically on most command-line-only installations. That's where you come in:

# mount /dev/cdrom /media

Now, unpack the VMware Tools installer bundle to the temporary drive:

# tar xfzp /media/VMwareTools*.tar.gz -C /tmp/tools/

 

 

Run the VMware Tools Installation

Unless you have specific requirements, specify the -d option to the installer to accept the defaults and prevent all of the prompts:

# /tmp/tools/vmware-tools-distrib/vmware-install.pl -d

If you want to be thorough, check the version of the VMware tools that is now installed:

# vmware-toolbox-cmd -v

 

 

Update /etc/fstab

In order to automatically mount the drive containing the tools at boot time, an entry must be created in the /etc/fstab file

Using your favorite editor, add the following line to the /etc/fstab file. (The last two characters are zeroes)

/dev/sdb1    /usr/lib/vmware-tools      ext3      defaults      0  0

Save the file.
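If you want to double-check the entry before rebooting, the six whitespace-separated fstab fields (device, mount point, filesystem type, options, dump, pass) can be validated with a one-liner. A sketch (has_fstab_entry is a hypothetical helper):

```shell
# Hypothetical helper: confirm an fstab file maps a device to a mount point.
has_fstab_entry() {
  fstab="$1"; dev="$2"; mnt="$3"
  # Field 1 is the device, field 2 the mount point; exit 0 only if both match
  awk -v dev="$dev" -v mnt="$mnt" \
    '$1==dev && $2==mnt {found=1} END {exit !found}' "$fstab"
}

# Example: has_fstab_entry /etc/fstab /dev/sdb1 /usr/lib/vmware-tools && echo "entry present"
```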

 

 

Reboot the VM and ensure it looks good

Although not strictly necessary, I like to clean up a little before I reboot:

# umount /media
# rmdir /tmp/tools
# shutdown -r now

Once the VM comes back online, ensure that the tools have loaded correctly

# vmware-toolbox-cmd -v

 

 

Finished!

At this point, you should have current tools, matching the host on which the VM was running when you followed this process.

No more warnings in vCenter!

 

NTP Configuration for Linux and Standalone Windows


Maintaining consistent time among the hosts in a vPod is important for many reasons, including consistent logging and authentication.

There is an NTP (Network Time Protocol) server running on the vPodRouter, answering on 192.168.100.1 with a DNS record of ntp.corp.local.


 

Linux Appliances

 

If the Linux-based appliance has a management UI, check there first to see if an NTP configuration option is included. It is better to manage this via the built-in and supported tools whenever possible.

The screen shot in this step is an example from the vRealize Orchestrator VAMI interface. The NTP configuration is managed from the Time Settings section on the Admin tab.

If the appliance does not have a friendly interface for managing the time source configuration (it really should), you may have to treat the appliance as a basic Linux machine and follow the process outlined in the next step instead.

 

 

Linux VMs

 

The following examples are for CentOS, but the commands are mostly standardized across platforms.

The NTP client is usually included as a core package, so it should not need to be installed, but it may be absent from Minimal installations. If the /etc/ntp.conf file is missing on your machine, you likely need to install ntp. For CentOS, if the vPod is wired up for Internet access and the VM has an IP address, use the following:

# yum install ntp

From there, edit the /etc/ntp.conf file. Depending on the current contents, a simple change may be all that is required. In general, adding the following line to the bottom of the file and restarting the NTP daemon is sufficient, but if there are other lines beginning with server, you will want to comment them out or remove them so that only the following server remains:

server ntp.corp.local iburst 
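If you prefer to script that edit, commenting out any existing server lines and appending ours takes two commands. A sketch (set_ntp_server is a hypothetical helper; point it at /etc/ntp.conf on the real machine):

```shell
# Hypothetical helper: make ntp.corp.local the only active server in an ntp.conf.
set_ntp_server() {
  conf="$1"
  # Comment out any existing server lines, keeping a .bak copy of the original
  sed -i.bak 's/^server /#server /' "$conf"
  echo 'server ntp.corp.local iburst' >> "$conf"
}
```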

With the configuration change made, start (or restart, if necessary) the service:

# service ntpd start

Don't forget to configure ntpd to start up each time the machine boots:

# chkconfig ntpd on

FIREWALL: If you have a host-based firewall enabled (we usually turn it off in HOL), you may need to open udp/123 in the firewall. The command to do so depends on what firewall you have running.

 

 

Checking the Configuration

Checking the configuration is done with the following command:

# ntpq -p

And a one-time synchronization can be performed with the following. As long as the time difference between the client and the server is not too great, the client will be brought to the same time as the server.

# ntpdate -u ntp.corp.local

For large jumps in time, the following command can be used to set the clock on the client. This is typically only needed once, prior to establishing NTP synchronization, to bring the host close enough to the NTP server's time that a sync can be established. Substitute the current date and time values in the double-quoted string.

# date -s "22 JUN 2016 18:00:00"

Then run the following to write the value to the system's "hardware clock":

# hwclock -w

 

 

 

Standalone Windows Hosts (not Domain Members)

 

 

Windows 2012

Run using PowerShell as admin:

PS> w32tm /config /manualpeerlist:ntp.corp.local /syncfromflags:MANUAL
PS> Stop-Service w32time
PS> Start-Service w32time

Then, you can check it as so:

PS> w32tm /query /status

 

Reference - http://www.sysadminlab.net/windows/configuring-ntp-on-windows-server-2012

 

 

Windows 2008

The command is slightly different for Windows 2008, but the idea is the same:

C:\> w32tm /config /manualpeerlist:ntp.corp.local,0x8 /syncfromflags:MANUAL
C:\> net stop w32time
C:\> net start w32time

Then, you can check it with the following:

PS> w32tm /query /status

Reference - http://www.sysadminlab.net/windows/configuring-ntp-on-windows-2008-r2

 

Horizon in the Hands On Labs - Introduction


Over the last several years, we have learned quite a bit about the HOL and VLP, which has allowed us to build more complex labs that remain active in the catalog for longer than 12 months.  Over that length of time, we have uncovered several oddities in Horizon View that need to be addressed before the vPod is checked in.

These include:

These topics are covered in the three lessons that follow.


Horizon Security Server - Request Signed Certificate


The Horizon Security Server(s) are not domain-joined, and the process for securing the connection with a CA-signed certificate is outlined below.

In this section, we'll request a new certificate from the Microsoft Certificate Authority for a non-domain-joined Windows Server.


 

Self Signed Certificate for Horizon Security Server

 

 

 

Launch the Microsoft Management Console (MMC)

 

*NOTE*: Information only.  No lab work to be done

From Security Server desktop

  1. Click on Start
  2. Enter MMC
  3. Choose the MMC application to launch

 

 

MMC - Add Snap-in

 

*NOTE*: Information only.  No lab work to be done

We need to add the Certificate Snap-in to the MMC Console

  1. Choose File
  2. Select Add/Remove Snap-in

 

 

Add Certificates Snap-in

 

*NOTE*: Information only.  No lab work to be done

To manage the local Certificates you need to install/enable the snap-in

  1. Click Certificates
  2. Click Add
  3. Click Computer account
  4. Click Next

 

 

Select Computer and Finish Snap-in Install

 

*NOTE*: Information only.  No lab work to be done

  1. Select Local computer
  2. Click Finish

When prompted click OK

 

 

Self-signed Cert

 

*NOTE*: Information only.  No lab work to be done

  1. Expand the Certificates (Local Computer)
  2. Expand the Personal folder
  3. Click on the Certificates folder

Note the existing self-signed certificate, which was created during the Horizon Connection server installation.

 

 

Modify Existing Certificate

 

*NOTE*: Information only.  No lab work to be done

  1. Right-click on the existing certificate (HVCS-W8-02.corp.local)
  2. Select Properties

 

 

Change the Friendly Name Value

 

*NOTE*: Information only.  No lab work to be done

Note the friendly name "vdm". This value is key to enabling the certificate used by the Horizon Connection Server.

  1. Change the friendly name value to self-signed
  2. Click OK

 

 

Request New SSL Certificate

 

*NOTE*: Information only.  No lab work to be done

To start the Certificate Enrollment

  1. Right click Certificates
  2. Select All Tasks
  3. Select Advanced Operations
  4. Select Create Custom Request...

 

 

Certificate Enrollment

 

*NOTE*: Information only.  No lab work to be done

Click Next to begin the enrollment

 

 

Certificate Enrollment Policy

 

*NOTE*: Information only.  No lab work to be done

 

 

Custom Request

 

*NOTE*: Information only.  No lab work to be done

  1. In the Template drop down choose (No template) Legacy Key
  2. For the Request format: Choose the PKCS #10 option

 

 

Certificate Information - Custom request

 

*NOTE*: Information only.  No lab work to be done

  1. Click the Details drop down
  2. Click on Properties to modify the request

 

 

Certificate Properties - General

 

*NOTE*: Information only.  No lab work to be done

  1. Select the General tab
  2. Add the name to the Friendly name field by entering vdm
  3. Click Apply but do NOT click OK yet.

 

 

Certificate Properties - Subject

 

*NOTE*: Information only.  No lab work to be done

On the Subject Tab of the properties of the certificate request

  1. Change the Subject name type to Common name from the drop-down list.
  2. In the Value field add the server name by entering: (YOUR SERVER NAME HERE) HVSS-W8-02.corp.local
  3. Click Add
  4. Change the Alternative name type to DNS from the drop-down list.
  5. In the Value field add the FQDN by entering: (YOUR FQDN NAME HERE) HVSS-W8-02.corp.local
  6. Click Add
  7. Click Apply but do NOT click OK yet.

 

 

Certificate Properties - Extensions

 

*NOTE*: Information only.  No lab work to be done

On the Extensions Tab of the properties of the certificate request

  1. Expand Key Usage option
  2. Select Decipher Only for the Key Usage
  3. Click Add
  4. Click Apply but do NOT click OK yet

 

 

Certificate Properties - Private Key

 

*NOTE*: Information only.  No lab work to be done

  1. Select the Private Key tab.
  2. Expand Key Options
  3. Check the box Make private key exportable
  4. Expand Key Options
  5. Select Exchange for the Key Type
  6. Click Apply
  7. Click OK

 

 

Create the request

 

*NOTE*: Information only.  No lab work to be done

Select Next to complete the custom request

 

 

Save the Offline request

 

*NOTE*: Information only.  No lab work to be done

  1. Choose a file name and location for the certificate request
  2. Choose Base 64 as the file format
  3. Click Next to complete the custom request

 

 

Web Enrollment for Custom Request

 

*NOTE*: Information only.  No lab work to be done

Open Internet Explorer and go to your CA Web Enrollment server

  1. http://controlcenter.corp.local/certsrv
  2. Authenticate to start the enrollment process
  3. Click OK

 

 

Request a Certificate Task

 

*NOTE*: Information only.  No lab work to be done

  1. Select Request a certificate

 

 

Submit an Advanced Certificate Request

 

*NOTE*: Information only.  No lab work to be done

  1. Select advanced certificate request

 

 

Submit using a base-64 PKCS #10

 

*NOTE*: Information only.  No lab work to be done

  1. Select Submit a certificate request by using a base-64-encoded CMC or PKCS #10 file or submit a renewal request by using a base-64-encoded PKCS #7 file

 

 

Open your Custom Request

 

*NOTE*: Information only.  No lab work to be done

  1. Navigate to your custom certificate request file
  2. Right-click and choose Open with... Notepad or WordPad

 

 

Copy your Certificate Request hash

 

*NOTE*: Information only.  No lab work to be done

Select all text from your custom request and copy or cut the information

 

 

Submit a Certificate Request hash

 

*NOTE*: Information only.  No lab work to be done

  1. Paste the custom request data into the Saved Request section of the form
  2. Change the Certificate Template type to Web Server
  3. Click Submit

 

 

Download Signed Certificate Chain

 

*NOTE*: Information only.  No lab work to be done

  1. Change the encoded type to Base 64 encoded
  2. Download certificate chain

 

 

Save Signed Certificate

 

*NOTE*: Information only.  No lab work to be done

  1. Name the signed certificate
  2. Choose PKCS #7 as the Save as type and save it to your Security Server
  3. Click Save

 

 

Import the Signed Certificate

 

*NOTE*: Information only.  No lab work to be done

From the MMC with the Certificate Snap-in installed on the Horizon Security Server

  1. Click Certificates
  2. Expand Personal folder
  3. Right Click on the Certificates folder
  4. Select All Tasks
  5. Select Import

 

 

Start Certificate Import Wizard

 

*NOTE*: Information only.  No lab work to be done

Click Next

 

 

Import the Signed Certificate Chain file

 

*NOTE*: Information only.  No lab work to be done

  1. Click on Browse
  2. Navigate to the PKCS #7 Signed Certificate file
  3. Select your certificate
  4. Click Open
  5. Click Next

 

 

Certificate Store

 

*NOTE*: Information only.  No lab work to be done

  1. Keep the default, the Personal certificate store
  2. Click Next

 

 

Finish the Import Wizard

 

*NOTE*: Information only.  No lab work to be done

Click Finish

Click OK when prompted

 

 

Move the Root Certificate

 

*NOTE*: Information only.  No lab work to be done

  1. Expand the Certificates (Local Computer)
  2. Expand the Personal folder
  3. Click on the Certificates folder
  4. Drag the Root-CA certificate to the Trusted Root Certification Authorities > Certificates folder

 

 

Review the new issued SSL Certificate

 

*NOTE*: Information only.  No lab work to be done

  1. Expand the Certificates (Local Computer)
  2. Expand the Personal folder
  3. Click on the Certificates folder

Notice the new certificate has been issued by the ControlCenter-CA

  1. Close MMC when finished reviewing and DO NOT disconnect your RDP connection

 

 

Restart the Horizon Security Server Services

 

*NOTE*: Information only.  No lab work to be done

On the Horizon Security Server

  1. Restart the VMware Horizon View Security Server service
  2. Restart the VMware Horizon View Blast Secure Gateway service (if using Blast)

 

 

Verify Signed Certificate is used.

 

*NOTE*: Information only.  No lab work to be done

Notice that the Security Server is now Green with No problem detected and the SSL Certificate is Valid

 

SSL Certificates in Horizon


With Horizon, all communication channels between the Horizon components are secured with SSL authentication mechanisms. Starting in Horizon 5.1, both upgrades and new installs enforce a higher security standard for SSL certificates than previous releases.

When you install the Horizon servers in your environment, each one includes a default self-signed certificate. Self-signed certificates are issued by the server itself, not by a Certificate Authority. The server identifies and validates itself, which results in an untrusted certificate. Self-signed certificates provide very low-level security because untrusted server certificates are at risk of having traffic intercepted between the clients and the servers. If an unauthorized server steps into the middle of a transaction and responds to the same IP address as the organization’s server, the administrator receives no additional warning beyond the original warning resulting from the self-signed certificate.

Self-signed certificates are acceptable only for a testing environment, and are not secure enough for a production environment. Horizon 6 discourages use of the default self-signed certificates by warning users and administrators when certificates are not signed by a Certificate Authority. To ensure a secure production environment, you need to install SSL certificates that are signed by a Certificate Authority (CA).

SSL certificates signed by a CA protect communications against tampering, eavesdropping, and “man-in-the-middle” (MITM) attacks. These certificates provide a secure channel between Horizon clients and Horizon servers for passing of private information, such as passwords and PINs. If you use the default self-signed certificates installed with Horizon servers, communication between Horizon servers and Horizon clients can be compromised.


 

Microsoft Certificate Authority

 

The Microsoft Certificate Authority service has already been installed and configured to issue certificates for the corp.local domain.

In your organization, you might use the Microsoft Certificate Authority or a third-party signing authority.

Active Directory Certificate Services Overview can be found here: http://technet.microsoft.com/en-us/library/hh831740.aspx

 

 

Confirm Self-Signed Certificate with the Horizon Administrator Console

In this section we'll confirm that the default, self-signed cert is being used for Horizon Connection Server so we can replace it with a custom signed cert.

 

 

Login to the Horizon Administrator Console

 

Launch a browser (I used Firefox)

  1. Enter your Horizon Admin URL.  Mine is https://horizon-01a.corp.local/admin
  2. Your connection will probably give a security error
  3. Click Advanced
  4. Click Add Exception...
  5. Confirm Security Exception

 

 

Log into the Horizon Administrator Console

 

To log in, enter the following:

  1. User name: Administrator
  2. Password: VMware1!
  3. Domain: CORP
  4. Click Log In

 

 

Untrusted Certificate

 

  1. Expand Connection Servers
  2. Click on your Connection Server(s).  For mine it is Horizon-01a
  3. You should see the Untrusted, self-signed certificate in use.
  4. Click OK to close the Connection Server Details pane.

 

 

Close Your Browser

 

We'll need to completely reload the window once we install a valid certificate.

 

 

Certificate Templates

 

Launch the Certificate Template Console on the Main Console

  1. Click on Start and Choose Run
  2. Type certtmpl.msc
  3. Click on certtmpl.msc

 

 

VMware Certificate

 

We need to modify the VMware Template.  This will not break the template for other uses in the HOL.

  1. Scroll to the bottom
  2. Right Click on the VMware Certificate
  3. Click Properties

 

 

General Tab

 

  1. Click on the General Tab
  2. Check the Publish certificate in Active Directory checkbox

 

 

Security Tab

 

  1. Click on the Security Tab
  2. Choose the Authenticated Users
  3. Ensure that you check Enroll and Autoenroll
  4. Click OK

 

 

Close the Template Console

 

Click the X to close the console

 

 

Validate your Certs

 

Start MMC to validate your Tunnel SSL certificates

  1. Right Click on the Start icon
  2. Choose Run
  3. Enter mmc
  4. Click Ok

 

 

MMC Add Snap-in

 

We need to add the Certificates Snap-in

  1. Click on File
  2. Click on Add/Remove Snap-in...

 

 

Add Certificates Snap-in

 

  1. Choose Certificates
  2. Choose Add >
  3. Choose Computer Account
  4. Click Next
  5. Choose Local computer
  6. Click Finish
  7. Confirm the Snap-in is selected
  8. Click OK

 

 

Self-signed Certificate

 

  1. Expand the Certificates (Local Computer)
  2. Expand the Personal folder
  3. Click on the Certificates folder
  4. Note the existing self-signed certificate, which was created during the Horizon Connection Server installation.  To identify a self-signed certificate, look at the Issued By field, which will usually have the same name as the local server, as you see in the picture.  Notice the Friendly Name of vdm (you may need to expand the window to see this information).  This friendly name is used by the Horizon Connection Server to identify which server certificate to use.

 

 

Modify Existing Certificate

 

  1. Right click on your existing certificate (horizon-01a.corp.local)
  2. Select Properties

 

 

Change the Friendly Name Value

 

Note the friendly name "vdm". This value is key to enabling the Horizon Connection Server to use the certificate.

  1. Click on the General tab
  2. Change the friendly name value to vdm_orig
  3. Click OK

 

 

Request New SSL Certificate

 

To start the Certificate Enrollment

  1. Right click on Certificates
  2. Choose All Tasks
  3. Select Request New Certificate...

 

 

Click "Next"

 

Click Next to begin the enrollment

 

 

Certificate Enrollment Policy

 

  1. Select the Active Directory Enrollment Policy
  2. Click Next

 

 

Request Certificates

 

  1. Choose the VMware Certificate.
  2. Click the link for More information is required to enroll this certificate

 

 

Certificate Properties - Subject

 

On the Subject Tab of the properties of the certificate request

  1. Click on the Subject tab
  2. Pick Common name from the drop-down list.
  3. In the Value field, add the server name by entering the FQDN: horizon-01a.corp.local
  4. Click Add
  5. Pick DNS from the Alternative name drop-down list.
  6. In the Value field enter the FQDN horizon-01a.corp.local
  7. Click Add
  8. Click Apply but do NOT click OK yet.

 

 

Certificate Properties - General

 

  1. Select the General tab
  2. In the Friendly name field enter vdm
  3. Click Apply but do NOT click OK yet.

 

 

Certificate Properties - Private Key

 

  1. Select the Private Key tab.
  2. Expand Key options
  3. Check the box Make private key exportable
  4. Click OK

 

 

Enroll Certificate

 

Click Enroll to have the certificate issued from the CA

 

 

Certificate Enrollment Success

 

  1. You should receive a "Succeeded" message like this.
  2. Click Finish

 

 

Review the new issued SSL Certificate

 

  1. Expand the Certificates  (Local Computer)
  2. Expand the Personal folder
  3. Click on the Certificates folder
  4. Notice the new certificate has been issued by the ControlCenter-CA
  5. Click the X to close the MMC when finished reviewing
  6. Select No when prompted to save console settings

 

 

Restart the Horizon Connection Server service

Now that we've completed the certificate request and have added the new certificate for the Horizon connection server, we'll need to restart a couple of services. This process will cause the connection server to make use of the new certificate.

 

 

Launch Services Control Panel

 

From your Connection Server

  1. Right click on Start
  2. Select Run
  3. Enter services.msc
  4. Click OK

 

 

Restart the Horizon View Connection Server Service

 

  1. Scroll to find the service to restart
  2. Click on VMware Horizon View Connection Server
  3. Click on Restart

 

 

Restart the Horizon View Blast Secure Gateway Service

 

  1. Scroll to find the service to restart
  2. Click on VMware Horizon View Blast Secure Gateway
  3. Click on Restart

Validate that all the required VMware Horizon Services are Running.  Only the VMware Horizon View Script Host service is not required.  Wait for the required services to have a status of Running before moving on.

  1. Click on the X to close the Services Console

 

 

Confirm CA Signed Certificate with the Horizon Administrator Console

In this section we'll confirm that the default, self-signed cert was replaced.

 

 

Login to the Horizon Administrator Console

 

Launch a browser (I used Chrome)

  1. Enter your Horizon Admin URL.  Mine is https://horizon-01a.corp.local/admin  Notice that your https is green.
  2. User name: Administrator
  3. Password: VMware1!
  4. Domain: CORP
  5. Click Log In

 

 

Valid Certificate

 

  1. Expand Connection Servers
  2. Click on your Connection Server(s).  For mine it is Horizon-01a
  3. You should see your new certificate in use and Valid
  4. Click OK to close the Connection Server Details pane.

 

 

Close Your Browser

 

Close your browser when you have finished verifying the new certificate.

 

 

Conclusion

You have now replaced your SSL certificates with HOL-friendly ones.

Note you will need to repeat these steps for every Horizon Connection Server you have deployed.

 

Horizon Tunnel/SSL certificates


Due to the awesomeness of the HOL, the vPod you are creating may live for longer than 12 months but remains in a "frozen state" until a pre-pop is deployed.

The VLP guys have a crazy algorithm that dynamically creates vPods or pre-populates your lab for use so no one gets a cold lab.  Since these labs are created from your gold master, all dates and times are set to when you shut it down cleanly.  When they are pre-populated, they power back up and grab the current date and time.

The issue is that if that date and time is later than your SSL certificate or password expiration dates, you will not have a successful pod.


 

What is the Tunnel/SSL used for and why?

 

A tunnel is created between all Horizon components and each Connection Server and Security Server is responsible for creating and securing that tunneled link so all messages are secure.

Messages between secure gateways and message routers are sent over TLS, with mutual authentication (that is, the secure gateway validates the message router certificate, while the message router validates the secure gateway certificate). Message routers also exchange messages over TLS, and here each message router validates the other’s certificate. Hence in all cases, both certificates must be valid.

 

 

Tunnel SSL validity

 

Message security certificates for the secure gateway have the name “tunnel/…”. You’ll find a secure gateway on every Connection Server and Security Server. By default, these certificates have a lifetime of 180 days and are regenerated every 90 days (or the first available opportunity after that). Message security certificates for the message router have the name “router/…”. You’ll find a message router on every Connection Server, but not on a Security Server. Again, these certificates have a default lifetime of 180 days and a turnover at 90 days.

With these default lifetimes, then, it is possible for a machine in a pod to be down for up to 180 days, but after that the trust is broken and cannot be recovered without reinstallation.

Note: If you need to put a machine on the shelf for longer than that, then you’ll need to extend the certificate lifetimes. There’s no maximum (other than the size of an integer), so use whatever makes sense to you.
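The shelf-life rule above reduces to simple arithmetic: trust survives only while the downtime stays within the certificate lifetime. A minimal sketch (the 180-day default and the 720-day override used later in this lesson come from the text; the function name is illustrative):

```python
def trust_intact(downtime_days, cert_lifetime_days=180):
    """True if a powered-off machine's message security certificates are
    still valid when it wakes up. Past the full lifetime, trust is broken
    and cannot be recovered without reinstallation."""
    return downtime_days <= cert_lifetime_days

print(trust_intact(170))                          # within the 180-day default
print(trust_intact(200))                          # past it: pod is broken
print(trust_intact(365, cert_lifetime_days=720))  # fine with the 720-day value
```

With the default lifetime, a pod frozen for a year fails; with the 720-day value configured below, it survives.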

 

 

Changes Required

You need to make changes in two places.

Message routers get their configuration from LDAP, whereas secure gateways get their configuration from a stack of property files.

 

 

Validate your Certs

 

Start MMC to validate your Tunnel SSL certificates

  1. Right Click on the Start icon
  2. Choose Run
  3. Enter mmc
  4. Click Ok

 

 

MMC Add Snap-in

 

We need to add the Certificates Snap-in

  1. Click on File
  2. Click on Add/Remove Snap-in...

 

 

Add Certificates Snap-in

 

  1. Choose Certificates
  2. Choose Add >
  3. Choose Computer Account
  4. Click Next
  5. Choose Local computer
  6. Click Finish
  7. Confirm the Snap-in is selected
  8. Click OK

 

 

Validate Expiration Date

 

  1. Expand Certificates (Local Computer)
  2. Expand the VMware Horizon View Certificates folder
  3. Click on the Certificates folder
  4. Review your tunnel/CONNECTION SERVER NAME certificates and the Expiration Date
  5. Click the minus to minimize

 

 

Modify Horizon ADAM

 

You need to launch ADSI Edit to modify the required attribute.

  1. Right Click on the Start icon
  2. Choose Run
  3. Enter adsiedit.msc
  4. Click OK

 

 

Connect to Horizon ADAM

 

  1. Right Click on the ADSI Edit icon
  2. Choose Connect to...
  3. On the pop-up, enter the Name: RootDSE
  4. Choose Select or type a Distinguished Name and enter: dc=vdi,dc=vmware,dc=int
  5. Choose Select or type a domain or server and enter: localhost

 

 

Modify CN=Common

 

Navigate to the CN=Common,OU=Global,OU=Properties,DC=vdi,DC=vmware,DC=int object to modify it.

  1. Expand RootDSE
  2. Expand DC=vdi,dc=vmware,dc=int folder
  3. Expand OU=Properties folder
  4. Expand OU=Global folder
  5. Find the CN=Common object
  6. Right Click and choose Properties

 

 

 

Modify the pae-NameValuePair attribute

 

 In the Attribute Editor find the attribute and edit the value

  1. Find the attribute pae-NameValuePair
  2. Click Edit
  3. Enter: mq-router-ssl-cert-validity=720
  4. Click Add
  5. Click OK
  6. Validate the value is present
  7. Click Apply
  8. Click OK

 

 

Close ADSI Edit

 

  1. Click X to close ADSI Edit

 

 

Create or Modify locked.properties

 

You will need to modify or create a locked.properties file

  1. Right Click on the Start icon
  2. Click on Command Prompt (Admin)

 

 

Navigate to the Horizon Server install directory

 

 Change directories to the location of your Horizon install.

Enter the following:

1. cd "c:\program files\vmware\vmware view\server\sslgateway"
2. cd conf
3. notepad locked.properties

 

 

 

Create or Edit locked.properties

 

  1. Click Yes to create a new file
  2. Enter: mq-sg-ssl-cert-validity=720
  3. Click the X to close Notepad
  4. Click Save

Note: You must create this file on every Connection Server and Security Server you have installed.

 

 

Restart the Horizon Connection Server service

 

 

Launch Services Control Panel

 

From your Connection Server

  1. Right click on Start
  2. Select Run
  3. Enter services.msc
  4. Click OK

 

 

Restart the Horizon View Connection Server Service

 

  1. Scroll to find the service to restart
  2. Click on VMware Horizon View Connection Server
  3. Click on Restart

 

 

Restart the Horizon View Blast Secure Gateway Service

 

  1. Scroll to find the service to restart
  2. Click on VMware Horizon View Blast Secure Gateway
  3. Click on Restart

Validate that all the required VMware Horizon Services are Running.  Only the VMware Horizon View Script Host service is not required.  Wait for the required services to have a status of Running before moving on.

  1. Click on the X to close the Services Console

 

 

Generate New Tunnel SSL Certs

 

 Change the directory to the Horizon tools folder

cd ..\..\tools\bin

 

 

Create Pending Connection Server Certificates

 

 Enter the following command to create a new pending connection server cert.

vdmutil --authAs administrator --authDomain corp.local --authPassword VMware1! --createPendingConnectionServerCertificates  --connectionServerName Horizon-01a

Note that the Connection Server name Horizon-01a should be replaced with the name of your Connection Server

 

 

Activate Pending Connection Server Certificates

 

Enter the following command to activate the pending certificate.

vdmutil --authAs administrator --authDomain corp.local --authPassword VMware1! --activatePendingConnectionServerCertificates  --connectionServerName Horizon-01a

Note that the Connection Server name Horizon-01a should be replaced with the name of your Connection Server

 

 

Refresh Security Server Certificate

 

 Enter the following command to refresh the security server certificate

vdmutil --authAs administrator --authDomain corp.local --authPassword VMware1! --refreshSecurityServerCertificate --securityServerName Horizon-01a

Note that the Security Server name Horizon-01a should be replaced with the name of your Security Server.  You must run this command for each Security Server you have installed.
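Because the refresh must be run once per Security Server, it can help to build the vdmutil command line for each server in a list. A sketch only (the flags mirror the command shown above; the server list and credentials are the lab defaults and should be replaced with your own):

```python
import subprocess  # used only if you uncomment the run() call below

def refresh_security_server_cmd(server, user="administrator",
                                domain="corp.local", password="VMware1!"):
    # Mirrors the vdmutil invocation shown in the step above.
    return ["vdmutil", "--authAs", user, "--authDomain", domain,
            "--authPassword", password,
            "--refreshSecurityServerCertificate",
            "--securityServerName", server]

security_servers = ["Horizon-01a"]  # replace with your Security Server names
for name in security_servers:
    cmd = refresh_security_server_cmd(name)
    print(" ".join(cmd))
    # On the Connection Server, uncomment to execute:
    # subprocess.run(cmd, check=True)
```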

 

 

Restart the Connection Server

 

  1. Enter shutdown /r to restart your Connection Server
  2. Notice the warning message and click Close

 

 

Validate your Certs Changed

 

Start MMC to validate your new Tunnel SSL certificates

  1. Right Click on the Start icon
  2. Choose Run
  3. Enter mmc
  4. Click Ok

 

 

MMC Add Snap-in

 

We need to add the Certificates Snap-in

  1. Click on File
  2. Click on Add/Remove Snap-in...

 

 

Add Certificates Snap-in

 

  1. Choose Certificates
  2. Choose Add >
  3. Choose Computer Account
  4. Click Next
  5. Choose Local computer
  6. Click Finish
  7. Confirm the Snap-in is selected
  8. Click OK

 

 

Validate Expiration Date

 

  1. Expand Certificates (Local Computer)
  2. Expand the VMware Horizon View Certificates folder
  3. Click on the Certificates folder
  4. Review your tunnel/CONNECTION SERVER NAME certificates and the Expiration Date
  5. Click the minus to minimize

 

 

Conclusion

You have now extended the Tunnel SSL certificate validity to 720 days (about two years).

 

Base vPod - Removing Extra Hosts


In past years, the HOL Base vPods have included a minimal number of hosts to get teams started building their labs. We then provided additional vESXi host templates and instructions for importing and configuring these within the vPod.

This year, we are trying something new. The base vPods will contain 5 ESXi hosts per site -- 5 for the Single Site Base and 10 for the Dual Site Base. These hosts have already been imported to vCenter and have been added to the vDS within the relevant site.

The intent is that teams will remove the excess hosts from the base vPods and keep only what is required. Hopefully, this saves time integrating additional hosts into the environments.

Unfortunately, the process for unhooking an ESXi host from a vDS is not as simple as merely removing the host from the vCenter inventory.

To simplify this process, the core team has provided a PowerShell script that will take care of the required tasks. Generally, this takes under a minute as long as the hosts in question have not been altered from their default states.

Consequently, it is a recommended practice to jettison the extraneous hosts as early in your team's development as possible.


 

Which hosts do you need to remove?

To use the script, you need to know two things about the hosts you want to remove:

  1. the site letter for the site containing the hosts you want to remove (e.g. the "a" part of the hostname esx-05a.corp.local)
  2. the numbers of the hosts you want to remove  (e.g. the "5" part of the hostname esx-05a.corp.local)

The Dual Site Base vPod consists of sites "a" and "b" while the Single Site vPod contains only site "a"

By default, each site contains hosts 1-5.
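The hostnames follow a fixed pattern, so you can predict exactly which names the removal script will generate from the site letter and host numbers. A sketch of the naming convention (not the script itself):

```python
def hol_esxi_hostnames(site_letter, host_numbers, domain="corp.local"):
    # e.g. site "a", host 5 -> "esx-05a.corp.local" (zero-padded host number)
    return [f"esx-{n:02d}{site_letter}.{domain}" for n in host_numbers]

# Remove hosts 4 and 5 from site "a":
print(hol_esxi_hostnames("a", [4, 5]))
# -> ['esx-04a.corp.local', 'esx-05a.corp.local']
```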

 

 

Running the script

Before starting, ensure that the vPod is ready to go. This means that the vCenter servers are online and the ESXi hosts are reporting "Connected" or "Maintenance" -- log in using the HTML5 client to make sure everything looks OK. This ensures a clean removal of the host(s).

The script can be found in the C:\HOL\Tools directory as Remove-HolEsxiBase.ps1

To get started, just open a PowerShell window and type the path:

C:\HOL\Tools\Remove-HolEsxiBase.ps1

 

 

 

Initial prompts

 

Once the script has started, it will prompt first for the letter of the site and then the numbers of the hosts you want to remove.

If you need to remove hosts from two sites, run the script once for each site.

Once you have provided the inputs, the script generates the hostnames for the ESXi hosts and presents the list for you to verify. If it looks good, confirm them and let the script run.

 

 

Progress

 

After confirming the hosts for removal, the script performs the required actions on each host to disconnect it from the vDS and remove it from vCenter.

Ensure success is reported for each host prior to moving on.

 

 

Clean up

Once the script has finished, the associated vESXi host(s) can be powered off using the Shut Down Guest OS option in vCD, and the VM(s) can be removed from your vPod.

 

TOOL: Load Generation Scripts


Sometimes you need to generate load within a vPod to showcase a specific technology. While we try to keep load to a minimum in our environments, and we would not like to have scripts such as these running constantly, here are some ideas that we have used in the Hands-on Labs.

These scripts stay away from a couple of the more expensive load types: storage and network. Especially in nested environments, those can be brutal.


 

Quick Note - Adding Run Permissions to a file in Linux

In Linux, adding the ability to execute a file is straightforward, but may not be something familiar to those who spend most of their time in Windows. For the file called "my_cool_script.sh"

# chmod u+x my_cool_script.sh

will allow the owning user to execute the file.

Note that you cannot run a file from the current directory by specifying the file name alone. You must provide a path of some sort. The easiest way to do this is to provide the relative path to the current directory, "./", as follows:

# ./my_cool_script.sh

 

 

Linux Shell Script for CPU Consumption

 

There are many ways to generate load, and you can choose your favorite, but a variation of this one has been used successfully in our environment before. This script can be used even with the minimal Photon installation. It requires bzip2, which is commonly included in most modern Linux distributions.

#!/bin/bash
travelTheTrail()
{
  dd if=/dev/urandom | bzip2 -9 >> /dev/null
};
loadTheWagon()
{
  partyMembers=0
  while [ ${partyMembers} -lt $1 ]; do
    travelTheTrail &
    let partyMembers=partyMembers+1
  done
};
now="$(date)"
printf "== Beginning CPU Load: %s ==\n" "$now"
loadTheWagon ${1:-1};
echo "CPU load running. Press Enter to stop."
read;
pkill -x dd
now="$(date)"
printf "== Stopped CPU Load: %s ==\n" "$now"

This script is called with a single parameter, which is the number of partyMembers that you have. This is roughly equivalent to the number of CPUs that you wish to consume. It will default to 1 if no parameter is specified.

The script will spike the CPU and run until the user presses the ENTER key.

In the example image, the script is called burnItPhoton.sh and it was run on 1 CPU.

NOTE: Some Linux distributions do not have a pkill command and use killall instead. In that case, the third line from the bottom of the script should be adjusted. Replace

pkill -x dd

with

killall dd

which will do the same thing: kill off the dd processes that are generating the load.

 

 

Python Script for CPU Consumption

 

I like the previous script due to its minimal requirements and ability to run on just about any minimal Linux distribution. Some people like Python, and it is installed in even the minimal Photon OS distribution, so, I figured having an example would be useful.

Plus, this one has a coolness factor: it uses the Python multiprocessing library to spawn its instances. Each instance will sit in a very tight infinite loop, squaring a number. Hardly useful work, but it eats CPU cycles rather prodigiously and there is not much that can beat it for simplicity.

from multiprocessing import Pool
def eatCPU(x):
  while True:
    x * x

num_cpu_to_eat = 1
p = Pool(processes=num_cpu_to_eat)
p.map(eatCPU, range(num_cpu_to_eat))

Be careful to call this one as a background process since it does not exit or clean up nicely when run in the foreground.

Modify the value of the variable num_cpu_to_eat with the number of CPUs that you need to consume.

If you have not used Python before, be sure to pay attention to the spacing in the script. It is important and ScreenSteps tends to mangle that during copy/paste. Just make sure the indents match the code block in this article.

Save the script as burnCPU.py and call it like this:

# python burnCPU.py &

then you can kill the process and all of its children with:

# pkill -x python
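Since lab load should never run unattended, a bounded variation of the script above is sometimes handier: it burns the requested number of CPUs for a fixed number of seconds and then exits on its own, with no pkill needed. This is a hypothetical variant, not part of the original tooling; the file name burnForSeconds.py is illustrative.

```python
import sys
import time
from multiprocessing import Pool

def eat_cpu_for(seconds):
    # Tight squaring loop until the deadline passes, then return.
    deadline = time.monotonic() + seconds
    x = 12345
    while time.monotonic() < deadline:
        x * x

def burn(num_cpus, seconds):
    # One worker per CPU; map() returns once every worker's deadline
    # expires, so the pool shuts down cleanly on its own.
    with Pool(processes=num_cpus) as pool:
        pool.map(eat_cpu_for, [seconds] * num_cpus)

if __name__ == "__main__" and len(sys.argv) == 3:
    # e.g.  python burnForSeconds.py 2 60
    burn(int(sys.argv[1]), float(sys.argv[2]))
```

The same indentation caveat applies here as with the previous script: make sure the indents survive copy/paste.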

 

 

Getting Started with ScreenSteps v4

What is ScreenSteps?


ScreenSteps is a software product published by Blue Mango Learning Systems.  It was originally created to help organizations improve their software documentation and customer support processes.  At VMware, we use ScreenSteps to help create manuals for the Hands-on Labs.

All the content is stored on a web back-end and can be accessed by way of a desktop client or web application.  One of the updates to ScreenSteps v4 is the elimination of the check-in/check-out process and the way the desktop client is used.  This means teams will have to communicate more when working on lessons.  It still has the ability to track changes a user has made and any revision can be made the current one.

The desktop client is only used at the lesson level.  Most of the work will be done via the web client and when a lesson needs to be modified, the desktop client will launch to make those edits.


How does my Storyboard relate to ScreenSteps?


The first step in creating a lab starts with the Storyboard.  The Storyboard is an outline of what will be shown in the lab and at high level what steps will be taken by the user taking the lab.

In this lesson, we will look at how the Storyboard matches to the different components in ScreenSteps.


 

Lab Module Details - Storyboard

 

The Storyboard for your lab starts with an overview, but then each Module is broken down to provide more details.  Let's talk about the different components in a Module.

  1. This is the Module title itself.
  2. This is the Lesson under the module and is comprised of the steps underneath it.
  3. These are the Steps that the user will perform.

 

 

How do they Translate in ScreenSteps?

 

The translation from the Storyboard to ScreenSteps is shown above.

 

 

Lab Module Details - ScreenSteps

 

Visually, this is how they map up in ScreenSteps.

  1. This is the Module name from the Storyboard.  In ScreenSteps, these are called Chapters.
  2. These are the Lessons from the Storyboard.  In ScreenSteps, these are called Articles.

 

 

Lab Module Details - ScreenSteps (cont.)

 

When you click a link to open a lesson, you will be shown what steps are included in that lesson.  The concept of what a step is has changed with ScreenSteps v4.  Since a step can include more than just a step title, an image or video, and step text, it must be constructed based on what you want to show in each step.

 

How to Download and Install the Desktop Client


This lesson will focus on how to download and install the ScreenSteps Desktop Client.  The client is available for both Windows and Macintosh computers.


 

Download the Client

 

You can download the latest Desktop Client from the ScreenSteps site:

http://www.screensteps.com/downloads

 

 

Installing the Windows Desktop Client

 

After downloading the file, run the installation executable.

Click Next to Continue.

 

 

Accept the License Agreement

 

 

 

Choose a location to Install

 

 

 

Create a Start Menu Folder

 

 

 

Create a Desktop Icon

 

 

 

Ready to Install

 

 

 

Setup Complete

 

Setup is complete.  You can click Finish to launch the ScreenSteps Client.  The next lesson will cover the different ways to login.

 

 

Installing the Macintosh Client

 

After downloading the Client, double click the DMG file.

Click Agree to accept the License Agreement and continue with the installation.

 

 

Drag the ScreenSteps Icon to the Applications Folder

 

 

 

Don't forget to eject!

 

Now that the Desktop Client is installed, we can move to the next lesson to see how it is configured.

 

How to Login to ScreenSteps


One of the major changes in ScreenSteps v4 is the way you access the manuals.  The majority of the work is done in the Web Client, and the Desktop Client is only used to edit individual lessons.  When you need to edit a lesson, you select the Desktop Client as the editing method.  The process to start working with manuals begins with logging in to the Web Client.


 

Logging into the Live Website with a Password

 

To login to the Live Website with a password, you will need to start by going to this site:

https://vmware.screenstepslive.com/login

Note: You may have to click the Login link to see this page.

 

 

Login with your email address and password

 

The default password is set to VMware@HOL1

 

 

Logging in using the Desktop Client and a Password

 

When you launch the Desktop Client for the first time, it will ask for an account.  Enter 'vmware' and click Next.

 

 

Authentication via Live Website

 

You will then be asked to log in.  Use the same credentials you would for the Live Website and click Log In.

 

 

Success!

 

You have now configured the desktop client.  As mentioned earlier, the majority of the work will be done in the Web Client.  You are offered the chance to open the Web Client now.

 

ScreenSteps Live Website Interface Overview


This lesson will walk through an overview of the ScreenSteps Live Website interface.


 

Sites

 

When you first log into the ScreenSteps Live Website, you will be presented with a list of Sites that you have access to.  For Hands-on Labs, the sites are broken up by Group.

Start by clicking on your Group.

 

 

Your Group

 

You can now see the Lab Manuals that are part of your Group.

To start working with your Manual, click the lab SKU that you have been assigned to.

 

 

Your Lab Manual

 

You are now viewing your lab manual.  You can click through the various Modules and Lessons to review the content.

This is a read-only view and what Proctors will use when reviewing your Manual.

 

 

 

To add or make changes to your Manual, you will need to click the Admin button in the upper left-hand corner.

 

 

Admin mode

 

A new tab in your browser will open and you will be brought back to all the Lab Manuals in your group.

Click on your lab SKU in the left-hand column to start working with it.

 

 

Creating Chapters

 

To create a Chapter, click the 'New +' button next to Chapter.

 

 

Name the Chapter (Module)

 

Just enter a name for the Chapter.  Remember to follow the appropriate naming convention we use in the Hands-on Labs for naming Chapters.

 

 

Creating Articles (Lesson)

 

To create an Article (Lesson), hover right under the Chapter (Module) you would like it under and when the blue line with a '+' appears, click it.

 

 

Naming the Article (Lesson)

 

Name your Article and click the Create article button.

 

 

Alternate Way of Creating Articles

 

You can also create Articles by clicking the 'Create New Article' Link.

This time, you will need to specify what Chapter (Module) you want the Article under.

 

 

Moving Articles

 

If you need to move Articles around, hover over the one you want to move, click the compass icon, and drag and drop the Article where you want it.  Articles can be moved to a different Chapter, but only within the same manual.

 

 

Moving Chapters

 

To rearrange Chapters, it's the same process, but in a different location.  Look under the Chapters section and hover over the Chapter you want to move.  Click and drag it to where it needs to be.

 

 

Duplicating Articles

 

If you need to duplicate an article, just click on the drop-down menu next to the article you want to copy and select Duplicate.

 

 

Where to place the Article

 

You will need to specify where the duplicated article will be copied, broken down by chapters (modules).  Note that you can only duplicate an article within the same manual.

 

 

Newly duplicated article

 

You should now see your new article at the bottom of the chapter you copied it to.  It will be unpublished and have "(copied)" at the end of the name.

 

 

Renaming Items

 

To rename a Chapter or Article, look for the pencil icon next to it and click on it.

This will open the name in an editable text field.  Once you have made your edit, press the Enter key to save your changes.

 

 

Return to Beginning Screen

 

To return to the normal view, click the 'X' button on the right-hand side.

 

 

Editing an Article

 

Start by clicking on the Article name you want to edit.

 

 

Article Properties

 

This brings up the Article's properties window.

Click on the 'Edit Contents' link.

 

 

Edit in Desktop

 

To edit the Article in the Desktop Client, click the Edit in Desktop link.

Note that the legacy web editor will be deprecated in April/May of 2017, so you may or may not see this option.

 

 

First time?

 

If this is the first time you have edited an Article in the Desktop Client, you may be asked if it is OK to open the web link in a local application.

Click the 'Always open links from ScreenSteps' box and then click 'Open in ScreenSteps'.

Continue on to the next lesson to see how to use the Desktop Client to edit an Article.

 

ScreenSteps Desktop Client Interface Overview


Unlike ScreenSteps v3, you will only use the Desktop Client to modify Articles.  You cannot launch the Desktop Client directly; it is launched from inside the Web Client when you choose to edit an Article.

There are some new features in ScreenSteps v4, but we will not be using all of them.  Some are applicable only to HTML publishing and are not a fit for the Hands-on Labs.  This lesson will cover the supported features in the new client.  Any features used outside those that are supported will need to be corrected prior to publishing a manual.


 

No Check-out/Check-in Process

A major change in ScreenSteps v4 is that the check-out / check-in process has been removed.  This means Captains will need to plan out and really own the content they are creating.  ScreenSteps will save each edit as a version, so changes will not be lost, but there is currently no way to merge two versions of an article.  This makes it that much more important to save your version as soon as you have finished your changes to an Article, so someone else can start their edits from your revision.

 

 

Main Screen

 

Here are the main features of the Desktop Client:

  1. When you have finished working with an Article, always click the Save & Publish button.  Never use Save as Draft.
  2. We will only be using the center and the right icons.  The center icon takes a screen capture, while the right icon opens the capture palette for screen captures.  Here you can select the time delay and which step the screen capture will be added to.
  3. This tells you which version of the Article you are editing.
  4. This is the Article name.  You can modify the name by clicking on it.
  5. If you have made a mistake and are unable to correct it, simply click the discard changes button and your changes and revision to the Article will not be saved.

 

 

Steps

 

In ScreenSteps v3, a step was composed of three elements: a step header (1), an image (2), and step text (3).  ScreenSteps v4 offers more flexibility with content placed in a manual, and each of these is now an individual piece.  In order to replicate what a typical step would look like, you need to insert three pieces separately.

 

 

Module Introduction Text

 

One other piece that is missing in ScreenSteps v4 is the Module Introduction text space.  This is where you introduce the Module or Lesson; if there was nothing in it, the dreaded "This page was initially left blank" message would appear.

If this text is desired, you will need to add a text block before adding the steps.  To do this, click the '+' sign and select Text.

 

 

Adding Step Structure - Heading

 

As mentioned previously, you will need to create the step structure in three pieces.  Start by clicking the '+' icon and selecting Heading.

 

 

Adding Step Structure - Heading

 

You can now give the step a title (1).

If you want to remove a Heading, you can click the trashcan icon to delete it.  This is true for all elements.

 

 

Adding Step Structure - Image

 

Next, add the image.  For this, you will need to click the screen capture icon and take a screen shot.  You can set a delay, if needed.

Make sure the Heading you want to add the image to is highlighted in the left-hand navigation window.

 

 

Screen Capture Options

 

Use your mouse to capture the area.  You can use the 'R' key to capture a previous area (helpful for Wizard screens) or toggle the cursor being displayed by using the 'C' key.

When you have the area captured, press the Enter key or click the camera icon to save the image and add it to ScreenSteps.

Please read the section on image capture best practices!!!

 

 

Adding Step Structure - Image

 

If you already have an image, you can use the '+' icon and select Image File to just add an image block with your picture.

 

 

Adding Step Structure - Step Text

 

Now that you have your step title and image, the only thing missing is the step text.

You can add this by once again selecting the '+' button and clicking Text.

 

 

Adding Step Structure - Step Text

 

To add your text in the right place, make sure the step right above is highlighted in the left-hand navigation pane.

 

 

Adding Step Structure - Step Text

 

You can also hover over the grey dot right above where you want to add your text block (or any other block of content).  It will turn into a blue '+', and by clicking on it you will have the same options to add content blocks, but the new block will be placed below.

 

 

Step Text - Formatting

 

You can then add the step text you need in the text block.  There are formatting options, such as bold and italic, bulleted and numbered lists, and subscript and superscript.  We'll cover some of the specifics next.

 

 

Step Text - Formatting Numbered Lists

 

If you have a numbered list and need to add text in between the numbered items, you will notice that the numbering starts from 1 each time.

To keep your numbering consistent, you have the ability to start the numbering from any number.

 

 

Step Text - Formatting Numbered Lists

 

Make sure you click on the list item you want to change the numbering sequence on.  In our case, this is the item labeled 1 that should be numbered 3.

From the Format menu, select List, then Start Numeric List At...

 

 

Step Text - Formatting Numbered Lists

 

Enter the number you would like to start at and click OK.

 

 

Step Text - Formatting Code Blocks

 

To make the manual more readable and to make it easier to just highlight and drag the code to the Main Console, you should use code blocks.

In our example, the second line is the actual code the user should input into the Main Console, but it looks just like the rest of the step text.

 

 

Step Text - Formatting Code Blocks

 

Start by highlighting the text you want to format as code and then click the '{}' button at the top.

 

 

Step Text - Formatting Code Blocks

 

The text is now formatted in a code block.  This will make it easier for users to distinguish what they should be entering into the Main Console.

 

 

 

You can add hyperlinks in your steps, either external links or links that will jump to another section in the same manual.  You cannot link to content in other manuals.

Highlight the text you want to hyperlink and then click the hyperlink icon.

 

 

 

You are given two choices, Knowledge Base Link or External.  A Knowledge Base Link is used to point to another article or step in the same manual, while External is used to point to an external link.

In this case, we will use the Knowledge Base Link.

 

 

Chose the article or step

 

  1. By default, the manual you are working in should be selected; if not, be sure to select it.
  2. Use the center windows to select either the article or step you would like to link the text to.

For Knowledge Base Links, do not click the "Open link in new window" box.  This should only be used for External links.  

Click OK when you have finished.

 

 

 

Here you can see the new link created, denoted with blue, underlined text.

 

 

New Features - Tables

 

At long last, ScreenSteps now offers the ability to add a table to the manual.

To do this, click the '+' icon and select Table.

 

 

New Features - Tables

 

You will now see a simple table added.

To modify it, click anywhere on the table.

 

 

New Features - Tables

 

You have the familiar options to modify text along the top bar.

If you click on Column or Value, you can change the name, and a pop-up menu offers additional items.

  1. This will toggle the first row as headers or normal cells.
  2. This lets you add additional rows to the table.
  3. This lets you add additional columns to the table.
  4. This lets you add dashed borders or shade alternate rows.

 

 

New Features - Tables

 

  1. This option will let you split or merge cells.
  2. This option lets you shade the cells a color (unknown what this will look like in VLP).
  3. You can vertically align the text in the cell with this option.
  4. You can horizontally align the text in the cell with this option.

 

 

New Features - Tables

 

When you are finished editing your table, click the save button to return to the Article.  You can also click the 'X' button to discard your changes or just exit.

 

 

Save and Publish an Article

 

As noted previously, there is no check-out / check-in process in ScreenSteps v4; there is just a way to save your Article.

When you have finished working on your Article, click the Save & Publish button.

NOTE:  Please DO NOT use the Save as Draft button.  Just like the previous version of ScreenSteps, when you click the Save as Draft button, the changes are kept on your local device and are not visible to other users.  If they start editing the same Article, it will not be based on your draft Article and these changes will need to be incorporated later.

 

 

Article Notes

 

Once you click the Save & Publish button, you are given the option to add notes.  This becomes more important now that the check-out process has been eliminated.  In the Notes section, include what you added/modified/deleted in the Article.  This will be helpful if someone else creates a version prior to you saving yours.

For the Article Owner, select your name so others in your group know you are taking ownership of the Article and that they should check with you prior to making any changes to it.

For Article Status, work with your Principals to see how best to use this field.   It can denote the different states the Article is in and each Principal has their own method for using this.

 

 

Confirmation

 

Once the Article has been published to the ScreenSteps server, you will receive a confirmation window with additional options.

Probably the most useful options are:

 

 

For More Information...

If you would like more information on using the Desktop Client, you can view the ScreenSteps help site:

http://help.screensteps.com/m/authoring

 

Lab Manual Best Practices

Introduction


In this Module, we will cover some of the Best Practices to use when creating your Manual in ScreenSteps.

This lesson will focus on a few quick Best Practices, while the rest of the module will go into deeper topics.


 

Manual Naming

 

It's important that the manual name be just the lab SKU.  Leave this as is!  If it is modified, we will not be able to export your manual.
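As a quick sanity check, this naming rule can be expressed as a pattern match. This is an illustrative sketch of ours (not part of any HOL tooling), and the pattern assumes SKUs look like HOL-1803-01-NSX, the format used elsewhere in this guide:

```python
import re

# Hypothetical sanity check, assuming lab SKUs follow the
# HOL-NNNN-NN-SUFFIX pattern (e.g. HOL-1803-01-NSX).
SKU_PATTERN = re.compile(r'^HOL-\d{4}-\d{2}-[A-Z]+$')

def is_bare_sku(manual_name):
    """True only when the manual name is the SKU and nothing else."""
    return bool(SKU_PATTERN.match(manual_name))

print(is_bare_sku('HOL-1803-01-NSX'))         # True
print(is_bare_sku('HOL-1803-01-NSX Manual'))  # False: extra text breaks export
```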

 

 

No Blank Titles or Descriptions

 

Never leave a Title or Description in ScreenSteps blank.  If you do, when the Manual gets published, a message will appear stating "This Page was initially left blank".

Ensure that these always have content in them.

 

Storyboard Guidance


One of the most important pieces to have in place prior to creating your manual in ScreenSteps is a Storyboard.  A Storyboard tells what will happen in the lab and how it will happen.  Having a solid Storyboard will make creating the Manual in ScreenSteps that much easier.  

This lesson will walk you through the Best Practices for ensuring you have a solid Storyboard in place.


 

Docs that Rock

A great resource provided by ScreenSteps is Docs that Rock.  It focuses on creating software documentation for customers, but the same ideas apply here.  The goal is to provide clear instructions for users to follow and keep them on course.

http://help.screensteps.com/m/docs-that-rock

 

 

Great Demo Approach

For those that have gone through the Great Demo training, you can apply that same methodology here: the idea of "Doing the last thing first".  Just like when you are delivering a live demo, in a lab you have about a minute to captivate your audience.  Don't try to build up to the good stuff, show it to them first!  Grab their attention with the coolest things the solution you're working with can do.

Now that you have their attention and have completely blown their mind with how cool the solution is, you can show them the "how".  How they can create the awesomeness you have just shown them, with just a few simple clicks!  This is what sells the "product": not only that it can do really cool things, but that even a simpleton like themselves is capable of creating it.

Keep in mind that what eventually gets written in the manual is not there to impress the audience; it's what's happening in the vPod that should impress them.  The manual is just the roadmap that gets them there!

For those of you that have not taken the Great Demo training, I would highly encourage you to do so!  In the meantime, you can watch this quick 3-minute video to help you understand the methodology a bit more.  For those of you using the PDF version, here is the link to the video:

http://youtu.be/iSOsSh3XJPY

 

 

Content Guidance - Lab Module Level

 

The first decision you need to make when planning the different Modules is whether each will be targeted at a Beginner or an Advanced user.  Do note that you can have both types of Modules in a single Lab.  This makes the Lab attractive to multiple consumer types.  As an example, there could be a Module or two at the beginner level that give a high-level overview of the Product or Solution, and then a few advanced Modules for those that want a deeper focus.

 

 

Content Guidance - Lab Module Content Type

 

The next decision you will need to make for your Storyboard is the Content Type you want to deliver, either a Feature Lab or Use Case Lab.  You can see the description of each above, but essentially, a Feature Lab is typically focused around a single product.  Maybe it's a new feature in the product you want to quickly showcase, without a scenario or any fluff around it, just straight content.

A Use Case Lab can be great for showcasing a simple solution using multiple VMware products together.  Depending on the Lab Module Duration (covered next), you can come up with some interesting scenarios.  These types of Modules typically include a backstory, where you would use a scenario to set up a problem a customer would face and then show how VMware Products can be used to solve it.

 

 

Content Guidance - Lab Module Duration

 

The final item that needs to be determined is the Lab Module Duration.  The two types are a Lightning Lab and a Full-Length Lab.  You can see the targeted times noted above.  A Lightning Lab is designed to give the user a quick overview of a Product or Solution, where a Full-Length Lab may go deeper or cover integration of more than one Product or Solution.  One great combination of Level, Type and Duration is a Beginner - Feature - Lightning Lab.  These Modules would offer a quick introduction to a Product or Solution, or maybe to a new feature of a Product.  A Lightning Lab could also be combined with a Use Case to offer a quick scenario-based lab that walks a customer through a problem they may face and demonstrates a VMware Solution in "Four Easy Steps" to solve it.

As a side note... it is definitely a "Lightning Lab" and not a "Lightening Lab". Lightning refers to speed or quickness while Lightening refers to decreasing weight or color. For more information on this, you can review this SocialCast post:

https://vmware-com.socialcast.com/messages/16115182?mini=true

 

 

Content Guidance - Duration and Mapping

 

When we put all three together, it creates a matrix of the different options we have for creating Modules.  Again, keep in mind a Lab can mix and match all of these different types of Modules.  By including different types of Modules, a lab can attract a larger audience.  In addition to the "techies" that have always enjoyed the HOL experience, we are starting to see Business Leaders take an interest in them.  Because of this, the more content you can include in your lab that addresses both audience types, the more useful your lab will be overall.

 

 

Module Independence

While creating your Modules, keep in mind that each Module should be independent and not rely on any other Module being completed before it is started.  This may mean using robust, well-documented scripts or the "cooking show" method: creating a duplicate environment that is already configured and in the desired state to start a specific Module.

This will allow the user to start at any Module and work with the content that is important to them.  Also, with Labs having multiple Modules and lasting anywhere from 2-8 hours, it's important to not have a user walk through 60 minutes of content, just to get to what they might really want to view.  In most cases, they will run out of time before they get to the end.

That being said, it is also important not to repeat content in each Module.  It's not practical or a good use of time to have the user complete the same actions at the beginning of each Module as a way around Module independence.

One other note: it is important to coordinate with all the Captains in the Lab to ensure that by completing your Module(s), you aren't doing anything to break things in other Captains' Modules.

 

 

Lab Manual Composition

 

Now that we have discussed the different ways a Module can be used, let's focus on how they are used in a lab.  The idea is to have fewer overall Lab Topics and more Modules.  This way, a user can focus on a Use Case or Solution and see all the surrounding content, thus narrowing down the content catalog and making it easier to find things.

You can see at the top we have the Lab Topic, or just the lab itself.  Under the lab are Modules, and we can have multiple Modules under each lab.  Ideally, each Module would be 15-60 minutes in length, with no more than 8 Modules per lab topic and, in total, 2-8 hours of content in a Lab.

Under each Module are Lessons.  Lessons are simply a collection of steps used to convey a task or concept.

Each Lesson is made up of Steps.  Steps are the individual actions the user will take to complete the Lesson.

A Step can be broken down into a Sub-Step if it is needed to break the Table of Contents down another layer or provide additional separation in a Lesson.
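The sizing rules above (15-60 minute Modules, at most 8 per Lab Topic, 2-8 hours of total content) can be sketched as a quick check.  The function and its name are ours, not part of any HOL tooling:

```python
def check_lab_composition(module_minutes):
    """module_minutes: one duration (in minutes) per Module in the lab.

    Hypothetical helper encoding the guidance above: each Module is
    15-60 minutes, at most 8 Modules, and 2-8 hours of content overall.
    """
    total = sum(module_minutes)
    return (
        len(module_minutes) <= 8                        # no more than 8 Modules
        and all(15 <= m <= 60 for m in module_minutes)  # each 15-60 minutes
        and 120 <= total <= 480                         # 2-8 hours in total
    )

print(check_lab_composition([30, 30, 60, 45]))  # True: 4 Modules, 165 minutes
print(check_lab_composition([90, 90]))          # False: Modules too long
```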

 

ScreenSteps and Manual Guidance


In the first Chapter you were given an overview of the ScreenSteps Live and Desktop client interface.  In this Lesson, we'll cover a bit more and offer some guidance on a few specific areas.


 

Lab Overview

 

The first Module to be included in every Lab is the Lab Guidance Module.  This Module/Chapter appears before any other Module and is used to give an introduction to your lab, including guidance on how a user can best complete the lab.  This is more important than in years past, especially at live events like VMworld, because labs now contain more content than can be completed in a single 90-minute seating.

  1. The Lab Manual name should only include the Lab SKU.  In the example above, this would be HOL-1803-01-NSX.  This is critical to ensure the Manuals get exported into VLP correctly.
  2. The first Chapter in the Manual should be in the following format and include the Lab Guidance lesson:

 

 

Lab Guidance - Example - Part 1

 

Here is an example of a Lab Overview.  If you have started drafting your manual with the template provided in your lab group, the template will guide you through filling this section out.

  1. First, there is a note on the time allotted vs the amount of content in the lab.  This lets users know that they shouldn't expect to complete the entire lab in one sitting, but can come back and pick up where they left off.
  2. Next, an overview is given about the lab in general.  What is the lab about?  What can the user expect to learn by taking the lab?  This is also a great place to offer suggestions on how best to consume the Modules.  If they are already familiar with the basics, what Modules should they take?  If they are new to the topic, are there any specific Modules they should start with to gain a better understanding?
  3. Finally, list out the Modules included in the lab.  Each Module listed includes the name, the time needed to complete it and what level it is.  There is also a brief description of what will be covered.  Also note that the name of the Module is hyperlinked so the user will be taken directly to the start of the Module by clicking the link.  We will cover how to do this a bit further down in this lesson.

NOTE: If you are using pre-release code, you must include the pre-release / tech preview template included in the sample manual.  This template has been approved by VMware Legal and is needed to ensure we maintain compliance, especially around revenue recognition.

 

 

Lab Guidance - Example - Part 2

 

Continuing on, after the Module listing:

  1. You worked hard creating this content, so make sure you give yourself and your co-Captain(s) credit!
  2. One of the most frequently asked questions we get is how to obtain a copy of the manual.  Please make sure this link is included to let users know where they can go to obtain a copy in PDF or HTML format.
  3. We spend a lot of time, money and effort localizing manuals each year and want to make sure everyone is aware that manuals may be available in their native language.  This link will show them how to change their default language and point them to the latest blog post where localized manuals are listed.

 

 

Lab Guidance - Other Content

A lot of work was spent prior to the 2016 VMworld development cycle to create templates and best practices to include in the manual.  This was compiled with feedback from users over the past year.  These templates have been included in your sample manual and must be used.  This will save you a lot of time at VMworld, as it answers the majority of the questions we receive from attendees, letting you focus on answering product and solution based questions.  These templates are also key when the labs are published online.  With no Proctors available to answer questions, the more guidance you can put in the manual, the better the experience will be for the users.

The templates that have been created as individual steps are:

 

 

Module Headings

 

Each Chapter/Module title should be in the following format:

    Module X - Module Title (YY Min)

Where X = the Module number and YY = the duration in minutes of the Module (either 15/30/45/60)
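The convention above can also be checked mechanically.  This is an illustrative sketch of ours, assuming the Chapter title is plain text in exactly the "Module X - Module Title (YY Min)" form:

```python
import re

# Hypothetical checker for the Chapter/Module title convention above.
# Allowed durations are the 15/30/45/60 values named in the guide.
TITLE_PATTERN = re.compile(r'^Module \d+ - .+ \((15|30|45|60) Min\)$')

def is_valid_module_title(title):
    return bool(TITLE_PATTERN.match(title))

print(is_valid_module_title('Module 1 - Introduction to NSX (30 Min)'))  # True
print(is_valid_module_title('Module 1: Introduction to NSX'))            # False
```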

 

 

Sending Text to the Control Center Desktop

There are a few ways to have a user copy text from the manual and paste it into the desktop.  We will cover those methods, from the easiest to the more involved.  This is useful for long strings of command text that a user might otherwise mistype, and it offers error-free input (as long as what you give them is correct!).

 

 

Click and Drag Method

This should be covered in your Lab Guidance section and it includes a video on how this is done.  This is the recommended method and you can watch the video here:

https://youtu.be/xS07n6GzGuo

Basically, a user drags the mouse and highlights the content they want to copy.  A hand will appear and they can click and drag that string of text to a dialog box, Putty window or any other item that accepts input.  The VMware Learning Platform will then type that text in for the user.  This is by far the easiest (and coolest) way to get text from point A to point B.

 

 

Send Text to Console

This method is a bit more involved, but is still useful.  It is much the same as Click and Drag, but there is an additional step to copy the text into a VLP window before it gets pasted into the Main Console.

 

 

Start by showing how to copy the text from the Manual

 

Highlight the text of the command you want to issue in the Control Center Desktop.

 

 

Bring up the application you want to send the copied text to.

 

In this example, we want to send it to a PuTTY session.  Make sure the PuTTY window is active (indicated by the light blue bar at the top).

 

 

Click on the 'send text' button

 

Now the user will need to click the 'send text' button in the VMware Learning Platform.

 

 

Paste the contents.

 

The user will need to right click and select paste or issue a Ctrl+V to paste the contents into the Send Text to Console window.

 

 

Click send and watch the magic!

 

After clicking send, the focus window (in our case PuTTY) will start typing what we have asked to send to the console.

If it is a command they are issuing, they will need to hit the Enter key to execute it.

 

 

Linking a Command in the Manual to Send Text

You can also use the HTML function to send text from the manual directly to the active window in the Control Center.  This allows a user to send the text to the Control Center just by clicking a link.  This can come in handy when you are trying to send text to the console of a second-layer VM through PuTTY.

This is more difficult for you to create and not the most elegant solution in the manual.

 

 

Change the Media Type

 

You will first need to change the media type of the Step (or Sub-Step) by choosing it from the Step Inspector Menu.

 

 

Choose HTML under Media Type

 

 

 

 

In the newly created HTML section, use the code block below to create a link in the manual that will automatically enter the text "Enter the very long text that is needed to send here".  You just need to replace the text between the tags.  The screen shot above reflects the code entered below.

The HTML code block will replace any image you want to use, but you will still have access to use the Step description field. When the user clicks the link, it will be entered in the active window, so make sure to inform them to have the correct application in focus before they click it.  The Step description field is a great place for this.

<span class="send-text">Enter the very long text that is needed to send here</span> 

 

 

Insert the HTML Code for a Button

 

You also have the option of creating a button to click to send text to the console.  You get the additional option of naming the button.  Modify what is in quotes to what you want to send to the console, and replace "What you want to name your button goes here!" with the text you want to appear on your button.  The screen shot above shows the result of the code below.

<button class="send-text" value="Enter the long string of text here">What you want to name your button goes here!</button> 
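For example, to give the user a one-click way to send the ping command used later in this guide, the button might look like the sketch below (the command string and button label here are illustrative only):

```html
<!-- Illustrative only: sends the ping command to the window in focus when clicked -->
<button class="send-text" value="ping -c 2 172.16.10.12">Send ping command to console</button>
```

As with the link version, remind the user to have the correct application in focus before they click the button.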

 

 

Lesson Guidance

 

If you remember, each Module is made up of one or more Lessons.  Each Lesson should start with an introduction that describes what will be covered in the Lesson.

You will want to provide more guidance in the first Lesson of each Module.  This will not only introduce the Lesson, but the overall Module, and act as a starting point for the Module.  Otherwise, as the user clicks through the Manual, they may have no idea they have completed a previous Module and are starting a new one.  It should read almost like the Lab Overview, but at the Module level.  It should include what will be covered in the Module and the approximate time it will take to complete.

 

 

Other Lesson Headers

 

We've talked about the first header that needs to be created for a start of a Module.  Next we'll talk about any subsequent Lesson headers that follow in the Module.  These Lesson headers just need to describe what will be happening in the Lesson that follows and what knowledge the user should walk away with after completing this Lesson.

 

 

When to use Substeps

 

ScreenSteps 4 includes the ability to break a Step into Substeps.  Substeps can be useful if you need to break things down another level in the Table of Contents.  This is only for your benefit as the content creator.  It has no effect on how the manual gets presented in VLP.

You create a Substep by clicking on the Substep box in the Step.

 

 

Substep Table of Contents View

 

As mentioned previously, this will add another layer to the Table of Contents.  You can see:

  1. This is a Step.
  2. These are Substeps.

In this Step, they were used to define a specific set of actions.

 

 

Ending a Module

 

Just as you provided a defined starting point for each Module, you will want to have a clearly defined ending point for each of your Modules.  This will help the participant taking your lab know that specific Module has ended and they can either move forward to the next Module or start any other Module.

In this example, it concludes the lab by thanking them and reminding them to take the survey when they have finished.  If you do not have a Summary Step that goes over what you just covered, then it should be included here as well.

It then shows them the other Modules in the Lab, with the time needed to complete each one, and walks them through how to open the Table of Contents so they can jump right into another Module.  You can also link the Modules to other places in the Manual.  This feature will be covered later in this lesson.

 

 

Module Cleanup

 

You will also want to include instructions at the end of the Module to help clean up or prepare for any of the other Modules a user may take.

This could include shutting down worker VMs, leaving certain console windows open or running a script.

 

 

Call to Action

 

At the end of the Lab, it's also important to have a Call to Action.  What should the next steps be?  Is there another lab they can take to go deeper into the technology you just covered? Can you point them to any websites for further reading?  A tool being used more recently is QR codes.

In any case, now that the participant has learned all this cool new stuff from you, what should they do with it?

 

Hands-on Lab Style Guide


In order to maintain a consistent look and feel for the users of the Hands-on Labs, we have created a Style Guide to assist you in authoring your manual.

In this lesson, we offer some guidance on working with screen captures, formatting step text and writing tips.  Please use these as you craft your manual so users will see a commonality not only between manuals, but inside your own!


 

Avoid using screen references of ABOVE or BELOW

 

Referring to the screen capture as "above" or "below" is confusing in the lab environment because the manual moves left and right.  Although the screen capture is always above the text, try to refer to it generically, such as "in the screen capture shown".

 

 

Use Number list tool for multiple commands per screen capture

 

  1. After typing your steps, you can highlight your steps and use the Number list tool.

 

 

Use the Numbered List tool to provide matching instruction steps

 

Use Numbered steps in the instructions.

Even if you only have one step on a screen capture, place a number bubble in the capture and give one numbered instruction.

 

 

The Number Tool Highlights the items

 

The Number Tool highlights the steps by placing them on a gray background box and changes the font and point size.  If you manually number the steps, they do not get this same treatment.

 

 

Add comments to numbered list and restart numbering.

 

You may wish to give numbered steps and then add commentary.  This interrupts the flow of the numbering tool: when you continue adding steps after the commentary, the numbering tool starts over at #1.

 

 

Use of Format Tool to continue on number list

 

  1. Place your cursor on the line you wish to renumber.
  2. Click Format on the ScreenSteps toolbar.  Choose List > Start Numeric List At...
  3. Enter the desired number.

 

 

See the list below after using Format tool.

  1. Step One.
  2. Step Two.

Commentary

  1. Step Three
  2. Step Four

 

 

Use of Bold for user input

 

The combined use of numbers in the screen captures and numbered steps is very clear.  Use bold on terms that match what the user will see in the Main Console.

 

 

Use Text Code Formatting Tool for CLI commands

 

The "Code Format" tool is intended for formatting ASCII code listings.  YOU WILL NOT USE IT FOR THAT PURPOSE.  But it does automatically set the text apart in the manual.

 

 

Setting CLI commands apart for easy click-and-drag into the active lab console window

 

Using the "Code Format" tool sets your CLI commands apart in the manual window.  This makes it very convenient to highlight the text and click-and-drag the command into the active lab console window for execution.  Below is an example of how a CLI command looks when formatted this way.

ping -c 2 172.16.10.12

 

 

Make sure the workflow is complete and easy to follow

 

Every UI transition and step in the lab must be documented in the manual.

Make sure that no steps are skipped, even minor ones. No step is too small to include in the manual! The manual should always match what users are doing in the lab at that time. If there are skipped steps, users will get stuck and cannot complete your lab.

It's okay to "hand hold", because some people need it. Some users aren't familiar with VMware products at all.

Make sure there are no skipped steps by walking through your lab to test its continuity.

 

 

Check your module completion times

 

During the storyboarding phase, we estimated each module's length (e.g. 30 minutes). This estimate has most likely changed now that you've written the lab, so make sure the completion time is accurate!

Users' time is valuable and they need to plan their time.

To make sure the module completion time is accurate, ask a colleague to take the lab and time themselves.

Generally, we would rather overestimate completion time. It's better that users end the module happy that they finished early than upset because they couldn't finish in the allotted time.

 

 

Inform users when they will have to wait

 

Many operations in the lab can take time to complete, like running a script or rebooting vCenter Server.

Have you ever hit a button or run a command and it looks like nothing is happening? When an operation takes more than 5 seconds to finish, inform the user "It might take a few seconds [or minutes] for the operation to complete".

 

 

Writing tips

 

Odds are that you will write much more as a Lab Captain than you ever expected! We write a lot, and we write quickly.

Check to make sure:

 

 

Word usage

 

Keep an eye out for correct word usage. Incorrect word usage will not be caught by spell check.

For instance,

 

 

Define acronyms

 

Spell out an abbreviation on first reference unless it's a common term.

 

 

Use standard Hands-on Lab terminology

 

Terms that are okay for the manual:

Avoid:

 

 

Use standard product names

 

Can I refer to VROPS in the manual? VRA? VCA?

Check http://productnames.eng.vmware.com/.

You can find any product's full name, short name, and any outdated names. Do not use acronyms as shorthand for VMware product names.

 

 

Not using the Main Console?

 

Most modules require the user to interact with the Main Console.

If your module does not involve user interaction, make this clear to the user. For instance, "This module is informational only and does not require interaction with the Hands-on Lab Console."

If your module has many pages of background information at the beginning, some users may want to skip it and go directly to interacting with the lab. Consider informing the user: "The following module is informational in nature.  If you would like to jump directly to the lab work, please advance to step X."

If you are using an offline demo, explain what it is and when it will be used. For instance, "This module will be completed in an offline demo. An offline demo gives you the look and feel of interacting with the product without being truly 'live'. "

 

 

Users see only the current step

 

Remember that only the current step is visible to the user. If there is a warning or something you need the user to know about the current step, it needs to be included in that step, not the next step.

 

Additional Best Practices and Tips

Article Owners and Launch Pad


With the removal of the check-in / check-out process, the Article Owners assignment becomes more critical.  ScreenSteps now contains a tool to make things easier for Principals to make those assignments and Captains to see what Articles are assigned to them.  In this lesson we'll cover Article assignments and Launch Pad.


 

Principals - Article Assignments

 

Principals will need to assign, at the Article level, the content each Captain will be responsible for.  The good news is that through the Web Client, Principals now have the option to bulk edit manuals to make those assignments.

Once you are in the Web Client, you can make assignments either at the site level or for an individual manual.

  1. Click the Site Contents link to make assignments to all Articles across all the manuals, or pick a single manual and make assignments to Articles just in that one manual.
  2. Click the Bulk Edit Article Properties link to start making assignments.

 

 

Selecting Article Assignments

 

From this screen, you can use the Status drop-down box for the Article's status, like Needs Content, Update, Review or Approved.  Using the Owner drop-down, you can assign the Captain who will be responsible for updating the Article.

Next to each Article is a check-box that can be ticked to make the assignment specified from the Status and Owner drop-down boxes.  Note that you can also just use the Select All link to choose all Articles in a manual.

Finally, click the 'Do it' button to make those assignments.

 

 

Captains - Viewing Article Assignments

 

As a Captain, you can look through the ScreenSteps Web Client to see which Articles you will be responsible for updating, but this isn't very efficient and some Articles may get overlooked.

It's best to use the Launch Pad to see what Articles have been assigned to you.

 

 

Using Launch Pad

 

You may have noticed that when you double-click the ScreenSteps Desktop Client, nothing appears to happen.  It does, however, load an icon in the system tray on Windows or an icon in the menu bar on OS X.

Right-click on the icon in Windows or use the Window menu option in OS X and select Launch Pad.

 

 

Viewing Article Assignments

 

In the Launch Pad you can see all the Articles that have been assigned to you.  There are also some useful operations you can perform here.

  1. Articles Being Edited - Clicking this button will show you all the Articles you currently have open in the Desktop Client and are in the process of editing.
  2. Recent Articles - Clicking this button will show you the most recent Articles you have edited.
  3. + Create Article - You can click this button to quickly create an Article in any manual.
  4. Article Launcher - You can click the pencil icon next to an Article name to open that Article in the Desktop Client for editing, or click the home icon to open the Article in the Web Client and view its properties.

 

 

Opening the Launch Pad by Default

 

At the very bottom of the Launch Pad is a box labeled 'Open when all article editor windows are closed'.  This does just that, but it also has an added benefit.  Once you check this box, the next time you launch the ScreenSteps Desktop Client, the Launch Pad will open, allowing you to start your session there without having to go through the Web Client first.  This should provide a quick and easy way to start editing Articles!

 

Adding Links to your Manual


This is covered in the template for the Lab Introduction, but is worth calling out here.  You can use hyperlinks in the manual to point to external sites or to another part of the same manual.  Note that you cannot link to another manual in your group or to another lab.

This can be useful for a few reasons.  Maybe you want to allow a user to skip past the introduction content if they are already familiar with it.  It is also used extensively in the Challenge labs to provide hints to help solve the various challenges.  It can also be used to refer back to previous steps you would like the user to perform or as a reference.

This lesson will walk through the process of creating links in manuals.


 

 

Creating an external link is a fairly straightforward process and is very similar to creating links in other applications.

Start by highlighting the text you would like to hyperlink, click the hyperlink button, then select External Link.

 

 

Enter the URL

 

Just enter the URL and tick the 'Open link in new window' option.  Click OK when you have finished.

 

 

That's it!

 

Now the text you highlighted will link to the site you specified.

Note that if you enter a URL directly in the manual, it will automatically create the hyperlink for you once you press the Enter key.

 

 

 

To create a link that points to another section in the manual, the process starts off the same.  Highlight the text you want the user to click, click the hyperlink button, then select Knowledge Base Link.

Keep in mind, you can only link to content within the same manual.

 

 

Select the Chapter/Article/Step

 

A new window will pop up for you to select the link.

  1. Make sure the manual you are working on is selected.  Again, links across manuals are not supported!
  2. You can either use the search box or navigation tree to select the chapter, article or step you want to link to.

Once you click the link, if it is a step, you may be asked to specify an anchor for the link.  Keep reading for details.

 

 

Adding an anchor for a step

 

You can keep the suggestion or rename it.  Keep in mind you cannot have spaces in anchor names.

Click OK to create the link.

 

 

Finish up!

 

Verify the link is correct; if not, just click the 'X' and select the correct one.  Otherwise, click OK.

 

 

Done!

 

The highlighted section will now link to the content you have specified.

 

Adding Video to your Manual


In some instances, you may want to use a video rather than having something performed in a lab.  It could be a video of an installation (or something else that might cause a performance issue) or marketecture.  In this lesson, we will walk through the steps to include a video in the manual.


 

Should I use a video?

 

Not always!  Use an Interactive Simulation (offline demo) if at all possible.  If you cannot show something live in a lab, the next best way is using an iSIM.  This way, the user will still be interacting with the lab environment and have a much better experience than just watching a video.

Talk to your content lead first to see if there's an existing video which can fit your need.

Consider where the video is hosted. Could a third party (such as a different BU) remove the content and break your lab? Is the video accessible from all geos?

The best practice is to let the Hands-on Labs host the video.

If you create a video, make sure the voiceover is audible, that there is no background noise, and that the screen is easily readable. Use a lower resolution screen to record the video.

 

 

Where do I upload my video?

 

So maybe you do need a video to show a high-level marketing message.  Just upload your video to YouTube or, better yet, contact the HOL Core team and we will be happy to host it for you.

 

 

Add an Embedded HTML Block

 

Since we need to create the Step in pieces, you will need a Header for the Step title, a Text block for the Step text and an Embedded HTML block.

Click the '+' button and select Embedded HTML to add this to your lesson.

 

 

Copy the Embedded <iframe>

 

Next, if the video you want to use is hosted on YouTube, go to that page, click on the Share tab, then the Embed tab.  Now copy the code that is highlighted.
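The embed code you copy will look similar to the snippet below.  As an illustration only, this sketch uses the video ID of the Click and Drag video referenced earlier in this guide; the width, height and other attributes will be whatever YouTube supplies:

```html
<!-- Example YouTube embed code; copy yours exactly as YouTube provides it -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/xS07n6GzGuo" frameborder="0" allowfullscreen></iframe>
```

As the next step explains, paste this code as-is and do not modify it.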

 

 

Do Not Modify the Code!

You may be thinking...  I need to add some code to make sure the video is in HD or is formatted correctly.  DON'T!  When the Manual is exported, VLP will handle all of this for you, so there is no need to modify the embedded code.

 

 

Paste into the Step

 

Finally, just paste the embedded code you copied from YouTube into the Step.

It's also a good idea to include the length of the video in the title of the Step so the user will know if this is something they have the time to watch.

One issue we have come across with headphones and the ThinClients used at the live events is that the headphones don't always get plugged in all the way, which can lead to very low or no volume coming through.  It may be worth reminding users who are wearing headphones to make sure they are plugged in all the way.  Little tips like this in the Manual will make their experience better and mean less running around for you answering the same question!

 

Screen Capture Best Practices and Optimization


To ensure users have a great experience, it is important to make sure the images you use in your Manual are optimized for the environment they will be used in.  If images are too large, they will slow down the Manual and take longer to load.  We also want to make sure they are sized right for display in the VMware Learning Platform interface.

In this Lesson, we'll cover a couple of issues we have run into and how to solve them.


 

One Image per Step

Try to limit each Step to one screenshot.  Including multiple images in a single step not only increases the overall size of the Step, but also makes it very hard to view when the user is taking the lab.  They can click on an image to increase its size a bit, but when you have 5 images stacked together it makes it difficult to perform that Step.  Usually it means going back and forth between opening the image and closing it to perform the step.

If you do need to use more than one image in a step, mainly to show a side-by-side difference, make sure the images are small and, combined, no larger than 1024x768 in total.

Otherwise, the best thing to do is break the images apart and give them each their own Step.

 

 

Avoid using full screen captures

 

The size of the manual is part of the overall size of our HOL labs.  In an effort to improve the performance of our labs, reduce the size of the screen captures.  This is an example of a full screen shot.  When screen captures are shown alongside the manual, capturing a large area results in a small graphic that requires the user to click the graphic and then search to find the point of focus.

The scenario for this step might be to enter a support search. 

 

 

Use only the part that needs to be shown

 

This example will show up large enough in the manual to not require the extra steps of clicking on the graphic to enlarge it and then close it.  You are reducing lab size and saving time during the lab.

 

 

Avoid showing a specific browser in screen captures

 

Because browser compatibility issues may surface during testing and cause us to switch browsers, avoid capturing the browser in screen captures.  This is an example of Chrome.

 

 

Capture without showing a specific browser

 

This will allow us to change browsers and avoid having to re-capture all the screens.

 

 

External Images

If you are using images captured outside of ScreenSteps, make sure they are optimized before they are imported.  Typically, saving (or exporting) an image in JPEG format at High image quality can provide significant space savings while still maintaining the quality of the image.

Once the image is in ScreenSteps, it is converted to PNG format and you will need to use other tools to scale the image size back down.

 

 

Limiting the Size of the Image

For the manuals to load quickly, it's ideal to keep each image under 256kb.  We covered External Images and how to reduce their size, but what if ScreenSteps was used to capture the image and it is still very large?

A list will be sent out indicating the size of each large image and where it is located in the Manual.  You can use this as a starting point for deciding which images need to be reduced.

 

 

Determining the Size of an Image

 

In ScreenSteps, you will need to export the image to see its size.  You can do this by right-clicking on the image and selecting 'Export Image Step' from the menu.

Once you save the image, you can view its size.

 

 

Using TinyPNG to Shrink Images

 

If the image is over 256kb, you will need to try to reduce its size.  One tool found effective at reducing PNG files is TinyPNG.  It's a web-based tool that allows you to drag and drop your images, either one or multiple at a time.  It quickly reduces them and provides a link to download them.  As a bonus, you make the Panda happy!

 

 

Replacing the Image

 

Now that your image is the right size, how do you replace it?  

Back in ScreenSteps, right-click on the image you want to replace and select 'Replace Image With --> Image File...'.  Just point it to the reduced image file and remember to check your Article in!

 

 

Reducing Image Clutter

 

While not tied to the overall size of the image, making sure an image is easily digestible by the user also falls under the category of optimization.  Try not to overload an image with arrows, numbers and boxes.  If you find yourself with too many of any or all of these, it is best to break the single step into multiple ones.

 

Content Review for Proctors and Captains


There will be a review process during the lab development cycle where Proctors will review the Captain's Manuals for spelling and grammatical errors.  Proctors will use ScreenSteps to review the Manual and make comments.

In this lesson we will walk through logging in, making comments and subscribing to email alerts.


 

Proctor - Making Comments

 

Log into ScreenSteps Live using your credentials.

vmware.screenstepslive.com

 

 

Logged In

 

You should now be logged in and see something similar to the screen above.

 

 

Click your Group's Name

 

To get started, click on your Group's Name.

 

 

Find the right Lab

 

Find the Lab that is related to the Module you will be working with and select the link.

 

 

Manual Main Page

 

You should now be at the main Manual page.

Select the Module you have been assigned to review.

 

 

Selecting the Module

 

To make it easier, you may want to click the header, in this case Module 3 - Introduction to vSOM Storage.  It will narrow down the list to just the Modules you want to see.

 

 

Select the Lesson to Review

 

Under each Module are Lessons.  Lessons consist of a series of steps related to the overall objective of the lesson.

We'll start with the first lesson, 'What is Virtualization?'.

 

 

Navigating the Lesson

 

You are now viewing the first lesson.

NOTE: At the top of the lesson, you can navigate to the previous or next lesson using the links.

 

 

Making Comments

 

As you read through the lesson, you can make a note of the errors you've found in the Comments section.  If you want to be notified when the Captain responds or makes a comment based on your initial comment, you will need to check the 'Subscribe' check box.

 

 

Comment Added

 

Once you click the Submit Comment, you should see your new comment added.

 

 

Making Useful Comments

 

While the Comment above may have been hilarious to some... for a Captain who is surviving on little to no sleep, it wasn't very useful.

In order to make everyone's life easier, try to use a standard format.  Here's one that I've found very helpful, especially in long Lessons.

Include "Step", followed by the step name, in the Comment field.  This will let the Captain quickly find where the change needs to be made.  Next, try to put some context around where the correction needs to go.  Include what it is and what it should be.

By putting "Step" first, it becomes easier to break things down when you include all the corrections in a single comment.

 

 

That's it!

That's all there is to it.  This should be easier than passing around a Word document and all the fun that goes with it.  ScreenSteps Live will always be up to date, there will be no issues trying to figure out which page of the Word document you're looking at versus what I'm looking at, and your changes will be documented for each Captain to review.  If they have questions, they can either reach out to you directly or add a comment.  Again, if you checked the Subscribe box, you will be notified through ScreenSteps Live when they respond to your comment.

 

 

Captains - Subscribe to Lesson Comments

There are two ways to subscribe and be notified when someone comments on the Lessons in your Modules.

First, if you created the content and you have 'Auto Subscribe' checked in your Profile, there is nothing additional you need to do, but it may be worth verifying that it is set up correctly.

If you did not create the content or do not have 'Auto Subscribe' checked in your profile, you will need to follow some additional steps in order to be notified when someone comments on your Lessons.

 

 

Verifying Auto Subscribe

Let's walk through the steps to verify that Auto Subscribe is enabled.  We'll also check a Lesson to make sure we are subscribed.

 

 

Login to ScreenSteps Live

 

Login to ScreenSteps Live.

 

 

Select My Profile

 

Click on My Profile.

 

 

Scroll to Preferences

 

Scroll down to Preferences and verify that 'Auto Subscribe' is checked.  If you needed to make changes, click Update; otherwise, click Cancel.

 

 

Select your Group

 

Let's verify that you are subscribed to comments.  Click on your Group.

 

 

Select your Lab

 

Select your lab.  For this example, we'll pick HOL-SDC-1301.

 

 

Pick your Module

 

Pick your Module, the one we want to verify we are subscribed to. We'll select Module 5 - VMware Log Insight.

 

 

Pick the Lesson

 

This will expand out the Module and reveal the individual Lessons.

Click on the first lesson in the Module, in our case, that would be Introduction to VMware vCenter Log Insight.

 

 

Scroll Down a Bit...

 

Scroll down a bit until you see the Comments section.  If it says 'Unsubscribe from comments', you are all set and there is nothing else to do.  If it says 'Subscribe to comments', click it to subscribe, and then verify that each Lesson in the Modules you authored is set correctly.

 

 

Auto Subscribe not Set

If you do not have Auto Subscribe set, you will need to follow these directions to ensure you receive notification when someone makes a comment on your Lessons.

 

 

Login to ScreenSteps Live

 

You can only subscribe to comments from ScreenSteps Live, not the Desktop version. If you have not already, login to ScreenSteps Live.

 

 

Select your Lab Group

 

Select your Lab Group.

 

 

Select your Lab

 

Select your lab.  For this example, we'll pick HOL-SDC-1301.

 

 

Pick your Module

 

Pick your Module, the one we want to subscribe to. We'll select Module 5 - VMware Log Insight.

 

 

Pick the Lesson

 

This will expand out the Module and reveal the individual Lessons.

Click on the first lesson in the Module, in our case, that would be Introduction to VMware vCenter Log Insight.

 

 

Scroll Down a Bit...

 

Scroll down a bit until you see the Comments section.   Click the 'Subscribe to comments' to subscribe.

 

 

Repeat....

 

That's it!

You will need to repeat this for every Lesson you want to subscribe to.

Above is an example of the email you will receive once a comment has been made.  The first link will take you to the Content tab for the lesson, while the second link will take you to the User Comments tab for the lesson.

 

Conclusion

Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.

Lab SKU: HOL-Lab-Development-Guide

Version: 20190606-174305