HOL-1851-10: NVIDIA Graphics Processing Unit (GPU) Manager VIB Installation

This is an interactive demo

Drive it with your mouse, your finger, or just use the arrow keys.

Use Learn mode to learn the demo. The orange boxes show where to click.

Use Present mode to hide the orange boxes and notes.

Use Autoplay mode to make it play like a movie. Hit the Esc key to stop.

Click a Shortcut to jump to a specific part of the demo.

Copyright © 2017 VMware, Inc. All rights reserved.

This part of the lab is presented as a Hands-on Labs Interactive Simulation. It allows you to experience steps that are too time-consuming or resource-intensive to perform live in the lab environment. In this simulation, you can use the software interface as if you were interacting with a live environment.

The orange boxes show where to click, and the left and right arrow keys can also be used to move through the simulation in either direction.

Prepare to Install NVIDIA GPU Manager vSphere Installation Bundle (VIB)

In this lab we will be using NVIDIA GRID Tesla M60 Graphics Processing Unit (GPU) cards. The NVIDIA vSphere Installation Bundle (VIB) contains the drivers required for the host to recognize the GPU; it is also referred to as the vGPU Manager. Before you begin the installation process, you should confirm that:

  • The ESXi host's BIOS power and performance settings are set to the High Performance policy
  • The latest supported versions of vCenter Server and ESXi have been installed
  • The ESXi host(s) have been added to vCenter Server
  • The host(s) have been configured for NTP and DNS
  • The vGPU Manager VIB that corresponds to the version of ESXi you are running has been downloaded (see the version check below)
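If you need to confirm which ESXi version and build the host is running before downloading the VIB, you can do so from an SSH session. This is a general ESXi check and not a step recorded in this simulation:

  esxcli system version get
  vmware -vl

Both commands report the ESXi version and build number, which you can compare against the supported versions listed on the NVIDIA vGPU Manager download page.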

The vGPU Manager VIB is loaded into the hypervisor much like a driver. The vGPU Manager allows up to eight users to share each physical GPU; with the M60, that is up to 32 users per card. The VIB can be downloaded from http://www.nvidia.com.

Note: NVIDIA does not support an in-place upgrade of the vGPU Manager VIB.
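Because an in-place upgrade is not supported, any existing vGPU Manager VIB must be removed before a newer one is installed. A minimal sketch of that removal, assuming the VIB carries the name shown in the listing later in this module (no upgrade is performed in this simulation):

  esxcli software vib list | grep -i nvidia
  esxcli software vib remove -n NVIDIA-VMware_ESXi_6.5_Host_Driver

As with an installation, the host must be in maintenance mode, and a reboot is required after the VIB is removed.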

 

Maintenance Mode

 

To prepare for the installation of the VIB, the host must be placed in maintenance mode. This has already been done for you.
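If you are performing these steps in your own environment, the host can also be placed in (and later taken out of) maintenance mode from the ESXi shell. A minimal sketch, assuming any running VMs have already been evacuated or powered off:

  esxcli system maintenanceMode set --enable true
  esxcli system maintenanceMode get

The second command simply confirms whether maintenance mode is currently enabled.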

Install NVIDIA vGPU Manager VIB

Now you are ready to install the VIB:

  1. Click on PuTTY, pinned to your taskbar
  2. Click on the session w3-eucvra-r730-006
  3. Click on Open
    • Enter the credentials:
      • Username: holadmin
      • Password: VMware1!
  4. Press Enter on your keyboard
  5. Type cd /tmp
  6. Press Enter
  7. Type ls, then press Enter
  8. Confirm that NVIDIAVMware.vib is listed
  9. Type esxcli software vib install --no-sig-check -v /tmp/NVIDIAVMware.vib
  10. Press Enter

Note: The VIB installation command requires the full path to the VIB in order to succeed.

  1. Once the process is complete, confirm that the status shows Message: Operation finished successfully

The output states Reboot Required: false, but this is not correct. A reboot is necessary so that the Xorg service can load the NVIDIA card's configuration and the driver can start.
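For reference, the output of the install command generally resembles the following. This is an illustration based on the VIB name and version shown later in this module, not output captured from the lab, and the exact strings will vary with the VIB you install:

  Installation Result
     Message: Operation finished successfully.
     Reboot Required: false
     VIBs Installed: NVIDIA-VMware_ESXi_6.5_Host_Driver_367.64-1OEM.650.0.0.4240417
     VIBs Removed:
     VIBs Skipped: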

  1. Type reboot and press Enter to restart your host
  2. Click OK to acknowledge the PuTTY error
  3. Click on the X in the PuTTY window to close your session

Confirm VIB Installed Correctly

You will now confirm that the VIB installed correctly:

  1. Right-click on w3-eucvra-r730-006
  2. Click Maintenance Mode
  3. Click on Exit Maintenance Mode (a command-line alternative is shown after this list)
  4. Click on PuTTY, located on the taskbar
  5. Click on w3-eucvra-r730-006
  6. Click on Open
  7. Log in with the username holadmin and the password VMware1!
  8. Type esxcli software vib list | grep -i nvidia
  9. Press Enter
    • The output should be similar to:
    • NVIDIA-VMware_ESXi_6.5_Host_Driver   367.64-1OEM.650.0.0.4240417   NVIDIA   VMwareAccepted   2017-05-05
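As an alternative to steps 1 through 3 above, the host can be taken out of maintenance mode from the ESXi shell, and the installed VIB can be inspected in more detail. A minimal sketch:

  esxcli system maintenanceMode set --enable false
  esxcli software vib get -n NVIDIA-VMware_ESXi_6.5_Host_Driver

The second command prints the full metadata for the VIB, including its acceptance level and install date.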

Confirm the GPU Has Been Detected by the Host

Now that the VIB has been loaded correctly, the final step is to verify that the GPU is being detected by the host.

  1. From the same PuTTY session, type nvidia-smi
  2. Press Enter
    • Note: For continued monitoring you can run nvidia-smi -l, which refreshes the output continuously (see the example after this list).
    • The output should include the GPU numbers, types, and memory. If no output appears, it may be a sign that the VIB did not install correctly; consult the card manufacturer's documentation. Guidance on troubleshooting NVIDIA GPUs can be found in the NVIDIA GRID GPU Deployment Guide.
  3. Type exit to close the PuTTY session
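If you want the monitoring output to refresh on its own, nvidia-smi accepts a loop interval in seconds. A small example, where the 5-second interval is only an illustration:

  nvidia-smi -l 5

Press Ctrl+C to stop the loop and return to the shell prompt.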

Set Graphics Device Option

In this lab we are using an ESXi 6.5 host. With ESXi 6.5, you will need to modify the Graphics Devices option in order for the vGPU device to be available to a VM.
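The host graphics configuration can also be inspected from the ESXi shell using the esxcli graphics namespace (the lab steps below use the vSphere Web Client instead). A minimal sketch:

  esxcli graphics host get
  esxcli graphics device list

The first command reports the default graphics type and the shared passthrough assignment policy; the second lists the graphics devices the host has detected.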

  1. Confirm the host w3-eucvra-r730-006 is highlighted in the Web Client
  2. Click on the Configure tab
  3. Under the Hardware section, click on Graphics
  4. Confirm the Host Graphics tab is selected
  5. Under Host Graphics Settings, confirm:
    • Default graphics type: Shared
    • Shared passthrough GPU assignment policy: Spread VMs across GPUs
  6. Click on the Graphics Devices tab
    • Four NVIDIA devices are listed, one for each physical GPU the host has detected.
  7. Click the first NVIDIA Tesla M60 device listed
  8. Hold the Shift key and click the last device
  9. Click on the pencil (Edit) icon
  10. Change the option to Shared Direct
  11. Click on OK
    • Notice that the Configured Type is now listed as Shared Direct, but the Active Type is still listed as Shared. You must reboot the host for the change to take effect.
  12. Right-click on host w3-eucvra-r730-006
  13. Click Power
  14. Click on Reboot
  15. Enter the reason: Graphics Type
  16. Click on OK
  17. After the reboot, the Graphics Devices should show Shared Direct as both the Configured Type and the Active Type
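For reference, the host default graphics type can also be changed from the ESXi shell rather than the Web Client. A minimal sketch, noting that this sets the host-wide default rather than editing individual devices as the steps above do:

  esxcli graphics host set --default-type SharedPassthru
  esxcli graphics host get

A host reboot is still required before the new type becomes active.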

You have now completed the initial installation and configuration process on your host. The vGPU device is now ready to be used by the VMs in your environment.

To return to the lab, click the link in the top right corner or close this browser tab.