Lab Overview - HOL-1806-02-SLN - Automate IT: Making Private Cloud Easy
Note: It will take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time. The modules are independent of each other so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.
The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.
Lab Module List:
This lab manual can be downloaded from the Hands-on Labs Document site found here:
This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:
During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.
You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.
You can also use the Online International Keyboard found in the Main Console.
In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.
Notice the @ sign entered in the active console window.
When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs take advantage of this, allowing us to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Without full access to the Internet, this automated process fails and you see this watermark.
This cosmetic issue has no effect on your lab.
Please check to see that your lab has finished all of its startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", consider restarting the lab.
Module 1 - Better Together: vRealize Automation and NSX App Centric Networking & Security (45 Minutes)
In this module users will see how we can build out network services on a vRealize Automation blueprint to deliver a holistic multi-tier application architecture. We will cover building multiple networks and security groups on the same blueprint.
Delivering Application-Centric Network and Security Services
vRealize Automation provisions, updates and decommissions network and security services in lockstep with your virtualized applications. Network and security services are deployed as part of the automated delivery of the application, consistent with its connectivity, security, and performance requirements. Automation creates a standardized, repeatable process that helps accelerate delivery, reducing the time needed to perform the task. At the same time, automation improves the consistency and reliability of the final configuration by eliminating manual errors. Finally, automation reduces operational costs by eliminating many manual tasks, and improves development productivity by delivering application environments to engineers faster. vRealize Automation, used in conjunction with NSX, automates an application’s network connectivity, security, performance, and availability.
Connectivity: Proper network connectivity is fundamental to any business service. Various groups and different applications can have unique requirements. vRealize Automation’s resource reservations, service blueprints and network profiles assure that each application receives the right level of network connectivity, with the appropriate service level. For example, each business group can be provided with reserved network connectivity between the virtual and physical world, or specific mission critical applications can be configured with dedicated virtual switches and routers depending on their performance and reliability needs. In addition, virtual machines can be moved with a tool like VMware vSphere® vMotion® live migration without changes to the virtual machine networking configuration. This allows for the optimal placement of workloads on the compute infrastructure, ultimately leading to reduced capital expenditure.
Security: Ensuring appropriate security policies are applied is one of the most critical steps to delivering and managing your applications and data. Now with vRealize Automation and NSX, applications can be deployed on demand with network security at the application level or between application tiers to ensure that firewall rules are placed as close to the virtual machine as possible. This leads to a true defense-in-depth solution that cannot be achieved by other solutions. The IT administrator can define vRealize Automation application blueprints that specify NSX security policies which contain firewall rules, intrusion detection integration, and agentless anti-virus scanning at each application tier to allow application and per-tier security. When the application is provisioned, dynamic security groups are configured with the defined policies to safeguard the service from day one. These services can also be tagged with a security label (for example, DB servers, PCI, or HIPAA) so that policies are enforced dynamically based on the tags (e.g. type of application) throughout their lifecycle. Finally, application isolation for these business services can also be defined to fence the service from the rest of the network entirely or to deny all traffic to the service except for what is defined in the applied security policies. This granular level of isolation keeps traffic to specific group environments (e.g., development, test, production) or even isolated at the individual application or application tier level.
Performance: vRealize Automation‘s governance policies and automated delivery can be used to meet the specific network performance needs of each application being deployed. vRealize Automation can also configure NSX to minimize traffic through the oversubscribed core. Traffic between virtual machines on the same host will remain within the host while still receiving the distributed routing, switching, load-balancing, firewalling, and security services that are required by modern applications.
Availability: vRealize Automation improves application availability through the dynamic configuration of network load balancers in the context of deploying or updating application configurations. NSX load balancer can be used in all phases of the application lifecycle (development, staging, production) without requiring expensive physical hardware or manual configuration of legacy load-balancing components. Depending on vRealize Automation’s application blueprints and network profiles, applications can be added to an existing load balancer pool or configured with their own dedicated load balancers. This integration provides organizations with application centric availability management.
The Converged Blueprint Designer (CBP) redefines how applications and services are authored, incorporating the full IT services stack using a unified drag-and-drop canvas. NSX has become a first-class citizen of vRealize Automation to provide application-centric networking and security through deep integration between the two products.
There you have it: a fully automated multi-tier application which includes network, application, and availability services. This is the converged blueprint designer in all of its glory. As you can see, we have a web tier consisting of 3 nodes, with an on-demand load balancer providing application availability, and a database tier in the backend supporting the application. For security, we have security groups configured so that every time this blueprint is requested, we enforce our network security policies for both the web and database services.
vRealize Automation 7 features a new Blueprint format. A converged blueprint can have one to many components including:
Combining these allows for the design of complex applications.
For example, using the vRealize Automation design canvas you can put together a multiple-instance machine to deploy cluster nodes, with XaaS components to create shared disks and software components to install and configure the operating system and application layers. In the following lessons, we will explore various key NSX components which can be used in the vRA design canvas while building out a converged blueprint.
Network security, for a long time, has worked off of the old Russian maxim, "trust, but verify". Trust a user, but verify it's them. However, today's network landscape -- where the Internet of Things, the cloud, and more are introducing new vulnerabilities -- makes the "verify" part of "trust but verify" difficult and inefficient. We need a simpler security model. That model: Zero Trust.
Forrester Research coined the term "Zero Trust" to describe a model that prevents common and advanced persistent threats from traversing laterally inside a network. This can be done through a strict, micro-granular security model that ties security to individual workloads and automatically provisions policies. It's a network that doesn't trust any data packets. Everything is untrusted. Hence: Zero Trust.
In this module, we will show you how to automate a zero trust network with vRealize Automation. In a blueprint, we can drag and drop an on-demand security group (ODSG). This creates a new security group and binds one or more existing security policies to it. The desired security policies are added to the ODSG during blueprint configuration, ensuring your deployed virtual machines get the policies they require.
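The zero trust principle described above — nothing passes unless an explicit policy allows it — can be sketched as a small rule evaluator. This is an illustrative model only; the rule fields and group names are invented for the example and are not NSX data structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """One allow rule in a security policy (fields are illustrative)."""
    src: str   # source security group ("any" acts as a wildcard here)
    dst: str   # destination security group
    port: int

def is_allowed(rules, src, dst, port):
    """A packet passes only if an explicit allow rule matches;
    anything unmatched falls through to the implicit default deny."""
    return any(r.src == src and r.dst == dst and r.port == port for r in rules)

# Hypothetical policy: clients may reach the web tier over HTTPS,
# and web servers may reach the database.
policy = [
    Rule("any", "web-tier", 443),
    Rule("web-tier", "db-tier", 3306),
]

def check(src, dst, port):
    # Try an exact source match first, then the "any" wildcard.
    return is_allowed(policy, src, dst, port) or is_allowed(policy, "any", dst, port)
```

Note that there is no "deny" rule anywhere: in a zero trust model, deny is the default and only the allows are enumerated.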
This is one of the value adds of the blueprint designer. We can copy existing blueprints and reuse them, and we can even base new blueprints on existing ones with additional components added. One of the clear themes of software development over the last ten years has been "reuse". It should be no different when you are managing and developing private cloud automation.
A transport zone controls which hosts a logical switch can reach and can span one or more vSphere clusters. Transport zones dictate which clusters VMs can participate in the use of a particular network.
A security policy is a set of Guest Introspection, firewall, and network introspection services that can be applied to a security group. A security group is a way to manage multiple policies in one context. Security groups are very powerful because they allow you to group a collection of objects in your vSphere inventory. That sounds simple, but the collection can be statically defined as objects in inventory (for example, a virtual machine, cluster, or datacenter) or dynamically defined (for example, by a security tag on a virtual machine, guest OS type, or VM name). Groups can also combine static and dynamic definitions, and you can start to see just how powerful a security group is.
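The static-versus-dynamic membership idea above can be sketched in a few lines. This is a simplified model for illustration — the criteria names and matching behavior are assumptions, not NSX's actual evaluation engine.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    """A vSphere inventory object, reduced to the fields we match on."""
    name: str
    guest_os: str
    tags: set = field(default_factory=set)

def members(inventory, static_names=(), name_prefix=None, required_tag=None):
    """Resolve security group membership: statically listed VMs plus
    anything matching a dynamic criterion (name prefix or security tag)."""
    out = []
    for vm in inventory:
        if vm.name in static_names:                              # static: listed explicitly
            out.append(vm.name)
        elif name_prefix and vm.name.startswith(name_prefix):    # dynamic: VM name
            out.append(vm.name)
        elif required_tag and required_tag in vm.tags:           # dynamic: security tag
            out.append(vm.name)
    return out

inventory = [
    VM("web-01", "linux", {"PCI"}),
    VM("web-02", "linux"),
    VM("db-01", "linux", {"PCI", "DB servers"}),
]
```

The payoff of the dynamic criteria is that membership tracks the inventory: a new VM named `web-03`, or one tagged `PCI`, joins the right group with no policy edit.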
You have created a blueprint with an on demand security group, and when this VM gets provisioned, it will consume the security policy we assigned to it.
On-Demand Routed Network (ODRTD) - Adds a dedicated interface on the upstream distributed logical router, creates a new logical switch for the router interface and the machine vNIC, and applies the IP policy defined in the corresponding network profile. In this module we will build on the last task by adding an application tier to the existing blueprint.
Any corporate network that is larger than a very small business is likely going to have a routed network already. Segmenting networks improves performance and, more importantly, is used for security purposes. Many compliance regulations such as PCI-DSS state that machines need to be segmented from each other unless there is a specific reason for them to be on the same network. For instance, your corporate file server doesn’t need to communicate directly with your CRM database full of credit card numbers. The quickest way to fix this is to put these systems on different networks, but this can be difficult to manage in a highly automated environment. Developers might need to spin up new applications, which may need to be on different network segments from the rest of the environment. It's not very feasible to spin up, test, and delete hundreds of machines each day while requiring the network team to manually create new network segments and tear them down each day. That wouldn’t be a nice thing to do to your network team.
Luckily, NSX has the ability to create routed networks, and vRealize Automation can leverage this to automatically set up a new network when we deploy blueprints. The initial setup requires an NSX edge and a transit network. This is done manually to get the environment prepared for the automation piece. In the module below, we will add an on-demand routed network to the blueprint designer.
This is the blueprint we configured in the previous module.
What is a routed network profile? A routed network profile is used when end-to-end routable access with unique IP addresses is needed.
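The subnet handling behind a routed network profile can be illustrated with Python's standard ipaddress module: each new deployment gets its own routable subnet carved out of the profile's base network. The function and the carve-by-prefix behavior are assumptions for illustration, not vRA's actual allocation algorithm.

```python
import ipaddress

def allocate_subnets(base_cidr, subnet_prefix, count):
    """Carve the first `count` per-deployment subnets of size /subnet_prefix
    out of a network profile's base network (illustrative sketch)."""
    base = ipaddress.ip_network(base_cidr)
    return [str(s) for s in list(base.subnets(new_prefix=subnet_prefix))[:count]]
```

Because every subnet is unique within the base range, each deployment's machines get unique, end-to-end routable IP addresses — which is exactly when a routed network profile is the right choice.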
How long does this take in the physical world? For most organizations, there is a massive length of time it takes to get to this point. With NSX and vRealize Automation we can offer this in minutes to hours.
As you can see we have implemented a routed network so end users can access the web application that runs on this CentOS server. This has taken us minutes to create and provision, leading to lower opex and faster time to market for your lines of business.
Load balancing is another network service available within NSX that can be natively enabled on the NSX Edge device. The two main drivers for deploying a load balancer are scaling out an application (through distribution of workload across multiple servers) and improving its high-availability characteristics.
The NSX load balancing service is specially designed for cloud with the following characteristics:
The load balancing services natively offered by the NSX Edge satisfy the needs of the majority of application deployments, because the NSX Edge provides a large set of functionalities:
We can use the blueprint designer to drag and drop an on-demand load balancer (ODLB). This deploys a dedicated NSX Edge Services Gateway (ESG) and logical switch, then automatically configures the appropriate load balancing policy (one-arm and inline load balancing policies are supported). This will shorten the time to deploy HA-based network services, ensuring your web servers or other services are highly available at all times. In the following module we will add an on-demand load balancer to the blueprint designer.
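The distribution behavior the ODLB provides can be sketched as a minimal round-robin pool. This models the observable behavior only — it is not NSX Edge code, and the member names are made up for the example.

```python
from itertools import cycle

class RoundRobinPool:
    """Minimal load balancer pool: hands out healthy members in order,
    so each request lands on the next server (round-robin sketch)."""
    def __init__(self, members):
        self.members = list(members)
        self._it = cycle(self.members)

    def next_member(self):
        """Return the pool member that should serve the next request."""
        return next(self._it)

# Hypothetical three-node web tier, like the one in this lab's blueprint.
pool = RoundRobinPool(["web-01", "web-02", "web-03"])
```

With three members, every fourth request wraps back to the first server; losing a node simply means rebuilding the pool with the surviving members, which is the availability property the blueprint automates.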
Remove Existing Load Balancer
1. Click on On-Demand_Load Balancer
2. Click on X
1. Click Yes
How long does it take your operations teams to get an application added to a physical load balancer? Weeks? Months? With the power of NSX and vRA, you can see here we are able to do this in minutes. From design to consumption, we are driving down the overall operating expense of managing and using load balancers.
On the General tab for the On-Demand Load Balancer configuration
On the General tab for the CentOS VM
Now we have designed a complete software defined application stack using the converged blueprint designer. We started with automating our security policies with the on-demand security groups. We then moved on to adding a segmented vxlan for the application tier traffic. And finally we added an on-demand load balancer to ensure our application continues to function in the event of an issue.
In these modules we covered how to deploy a fully automated network stack with our application stack. In IT today, it's not just how fast you can deploy a virtual machine, but also how fast you can provide the network, security and availability services.
In the next module we are going to cover day two actions with NSX.
Module 2 - vRealize Automation Day 2 Actions for NSX (30 Minutes)
In this module we will show how to add and remove an existing security group to a deployed blueprint. We will also show how to scale out an existing deployment to add another node to a load balancer. This will show how we can manage network services from a day two perspective.
A day two operation is simply an action that you can take on an existing resource, such as a running virtual machine. For a VM, these typically include power on, power off, reconfigure, destroy, etc. All of the basic and essential day two operations for standard objects, such as VMs and multi-machine deployments, come out of the box with vRealize Automation. As VMware continues to invest in vRealize Automation, we will continuously add more day two operations. In the following lessons we will show the student how to perform day two operations on the security and availability of your virtual machines.
It is no longer acceptable to utilize the traditional approach to data-center network security built around a very strong perimeter defense but virtually no protection inside the perimeter. This model offers very little protection against the most common and costly attacks occurring against organizations today, which include attack vectors originating within the perimeter. These attacks infiltrate your perimeter, learn your internal infrastructure, and laterally spread through your data center. In order to address these concerns customers are adopting micro-segmentation.
Micro-segmentation decreases the level of risk and increases the security posture of the modern data center. So what exactly defines micro-segmentation? To provide micro-segmentation, a solution requires a combination of the following capabilities, enabling the outcomes noted below.
Distributed stateful firewalling for topology agnostic segmentation – Reducing the attack surface within the data center perimeter through distributed stateful firewalling and ALGs (Application Level Gateway) on a per-workload granularity regardless of the underlying L2 network topology (i.e. possible on either logical network overlays or underlying VLANs).
Centralized ubiquitous policy control of distributed services – Enabling the ability to programmatically create and provision security policy through a RESTful API or integrated cloud management platform (CMP). In this module we will cover how cloud consumers can change NSX security groups on an active vRA deployment. If the security team creates new policies, this allows cloud consumers to add or remove a security policy on a running virtual machine from within the vRA portal. This lets end users apply micro-segmentation to their existing deployments. It also helps with testing: being able to turn these policies off and on via day two actions accelerates validation of new rules.
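Since the passage above highlights programmatic policy creation over a RESTful API, here is a sketch of building such a request body. The field names and endpoint shape are assumptions for illustration — they are not the real NSX or vRA API schema, and no request is actually sent.

```python
import json

def security_group_payload(name, policy_ids):
    """Build an illustrative JSON body for creating a security group and
    binding existing security policies to it (field names are invented)."""
    return json.dumps({"name": name, "boundPolicies": list(policy_ids)})

def security_group_url(base_url, tenant):
    """Assemble an illustrative endpoint URL (path is an assumption)."""
    return f"{base_url}/tenants/{tenant}/security-groups"

# In a real integration this body would be POSTed with an HTTP client;
# here we only construct it, which is the part a CMP automates per request.
body = security_group_payload("pci-sg", ["block-lateral", "allow-https"])
```

The point of the sketch is the workflow, not the schema: the CMP rebuilds and submits this payload on every deployment, so policy is provisioned identically each time instead of being hand-entered.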
Load balancing is a critical component for most enterprise applications to provide both availability and scalability to the system. Over the last decade we have moved from bare metal servers to virtual servers and from manual deployment of operating systems to using tools like vRealize Automation or other custom workflows. In addition to the movement towards virtualization and the API being the new CLI, we are also seeing a movement to Network Functions Virtualization (NFV), where Virtualized Network Functions (VNF) such as routing, VPN, firewalls, and load balancing are moving to software. The value of automation, SDN, and NFV has been proven in the largest networks today, and this migration to software has proven to have tremendous ROI. Many companies want to leverage the same cost-effective models. From a day zero perspective we have been able to use vRA to automate on-demand load balancers for a few releases. What about day two network actions? In vRA 7.3 we provide the ability to reconfigure which services the load balancer is providing high availability to.
In this module we will show you how to reconfigure a load balancer in a deployment. The reconfigure action allows a cloud consumer to add, edit, or delete a virtual server in a deployed NSX load balancer.
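The add/edit/delete operations on a deployed load balancer's virtual servers can be sketched as plain configuration edits. The dict layout stands in for the NSX Edge LB configuration and is an assumption for illustration, not the real schema.

```python
def add_virtual_server(lb_config, name, ip, port, pool):
    """Day-2 reconfigure: append a virtual server to the LB configuration."""
    vs = {"name": name, "ip": ip, "port": port, "pool": pool}
    lb_config.setdefault("virtual_servers", []).append(vs)
    return lb_config

def delete_virtual_server(lb_config, name):
    """Day-2 reconfigure: remove a virtual server by name."""
    lb_config["virtual_servers"] = [
        v for v in lb_config.get("virtual_servers", []) if v["name"] != name
    ]
    return lb_config

# Hypothetical day-2 action: expose the web pool on an HTTPS virtual server.
cfg = add_virtual_server({}, "https-vip", "10.0.0.10", 443, "web-pool")
```

What vRA 7.3 adds is exactly this kind of edit performed against a *running* deployment: the consumer changes the virtual server list from the portal, and the platform pushes the updated configuration to the deployed NSX Edge.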
Click on the Items tab
Click the arrow to the left of CentOS
Congrats! You have just completed Module 2 - NSX Day 2 Operations. We showed you how to reduce the overall security risk of your data center by implementing security groups on deployed machines. We also showed you how to provide day 2 operations for providing high availability for data center services.
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-1806-02-SLN