HOL-1947-01: Using GPUs in Pass-Through Mode on vSphere

This part of the lab is presented as a Hands-on Labs Interactive Simulation, which allows you to experience steps that are too time-consuming or resource-intensive to perform live in the lab environment. In this simulation, you can use the software interface as if you were interacting with a live environment.

The orange boxes show where to click, and the left and right arrow keys can also be used to move through the simulation in either direction.

This lab involves a lot of typing in a PuTTY window. Each command that needs to be typed is highlighted in blue.

Configure Passthrough for a GPU PCI device on a host

  1. Click on GPU_Cluster
  2. Click on host sc2esx01.vslab.local
  3. Click on Configure
  4. Click on the scroll bar to scroll down to the Hardware section
  5. Click on PCI Devices
  6. Click on EDIT
  7. Click on the scroll bar to find the NVIDIA PCI device
  8. Click on the NVIDIA adapter 0000:82.... This is the 0000:82:00.0 | GP100GL [Tesla P100 PCIe 16GB] NVIDIA controller installed in the ESXi host
  9. Click on OK
  10. Click on Reboot This Host. The NVIDIA device will not be available to VMs until the host reboots.
  11. For the reboot reason, type Configure DirectPath I/O requires reboot and press Enter
  12. Click OK
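
The simulation drives the vSphere Client, but the same device can also be inspected from the ESXi shell. A minimal sketch, assuming SSH access to the host (the esxcli pcipassthru commands exist only on ESXi 7.0 and later; this lab's environment uses the Client steps above):

  # Confirm the GPU's PCI address on the host
  lspci | grep -i nvidia

  # ESXi 7.0+ only: list passthrough-capable devices, then enable passthrough
  esxcli hardware pci pcipassthru list
  esxcli hardware pci pcipassthru set -d 0000:82:00.0 -e true

A host reboot is still required before the device can be assigned to a VM.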

Assign the available GPU PCI device to the VM

  1. Click on BitFusion-GPU-VMs resource pool
  2. Click on bf-gpuvm-01
  3. Click on ACTIONS
  4. Click on Edit Settings
  5. Click on ADD NEW DEVICE
  6. Click on PCI Device
  7. Click on the scroll bar to find New PCI device
  8. Click on the drop-down menu for New PCI device
  9. Click on 0000:82:00.0 | GP100GL [Tesla P100 PCIe 16GB] NVIDIA Corporation
  10. Click on OK
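
Behind the scenes, these steps record the passthrough device in the VM's .vmx configuration file. A rough sketch of the resulting entries (key names and the exact address format vary across vSphere releases, so treat this as illustrative rather than exact):

  pciPassthru0.present = "TRUE"
  pciPassthru0.id = "0000:82:00.0"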

Add Advanced Configuration Parameters to enable a large-BAR GPU device

  1. Click on ACTIONS
  2. Click on Edit Settings
  3. Click on VM Options
  4. Click on Advanced
  5. Click on the scroll bar to find Configuration Parameters
  6. Click on EDIT CONFIGURATION...
  7. Click on ADD CONFIGURATION PARAMS
  8. Click on Name
  9. Type pciPassthru.use64bitMMIO for the Parameter Name
  10. Click on Value
  11. Type TRUE for the Parameter Value
  12. Click on ADD CONFIGURATION PARAMS
  13. Click on Name
  14. Type pciPassthru.64bitMMIOSizeGB for the Parameter Name
  15. Click on Value
  16. Type 32 for the Parameter Value
  17. Click on OK
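
These two parameters are what let a large-BAR device such as the Tesla P100 work in passthrough: the card exposes a 16 GB memory region (BAR) that cannot be mapped below the 4 GB boundary, so pciPassthru.use64bitMMIO allows the VM to map it in 64-bit MMIO space, and pciPassthru.64bitMMIOSizeGB sizes that space. The usual guidance is to round the combined memory of all passthrough GPUs up to the next power of two, hence 32 for a single 16 GB card. In the VM's .vmx file the result is simply:

  pciPassthru.use64bitMMIO = "TRUE"
  pciPassthru.64bitMMIOSizeGB = "32"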

Power on the VM and check the GPU in the VM

  1. Click on ACTIONS
  2. Click on Power
  3. Click on Power On
  4. Take note of this VM's IP address, 172.16.31.181
  5. Click on the PuTTY icon in the taskbar at the bottom of the screen
  6. Type the VM's IP address, 172.16.31.181, in the Host Name box
  7. Click on Open
  8. Log on to the VM with:
    • Username = root
    • Password = none (just press Enter)

Note: if the typing stops, press the Tab key for command auto-completion.

Type lspci | grep -i nvidia and press Enter
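
The output should contain a line like the sketch below. The guest PCI address is assigned by the VM's virtual hardware, so the illustrative 0b:00.0 here will not match the host's 0000:82:00.0:

  0b:00.0 3D controller: NVIDIA Corporation GP100GL [Tesla P100 PCIe 16GB]

Once the NVIDIA driver is installed in the guest, nvidia-smi provides a fuller check of the device.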

The NVIDIA controller is now ready to be used for ML workloads.

  1. Click on X to close the PuTTY window
  2. Click OK to end your PuTTY session

To return to the lab, click the link in the top right corner or close this browser tab.

Copyright © 2018 VMware, Inc. All rights reserved.