On a desktop, the GPU only connects to the CPU and the monitors, and doesn't care about other components.
But on laptops things are different, and they even differ between laptops.
If you spent a few hundred dollars on a low-to-mid-range gaming laptop, the connection may look like:
The difference is that, instead of connecting directly to the monitor, the dGPU transfers the rendered image to the iGPU, which in turn sends it to the monitor.
This is called the MUXless scheme of NVIDIA Optimus.
If you spent a bit more than a thousand on a mid-to-higher-range laptop, you may get:
Compared to the last scheme, there is a switch on the motherboard, and the HDMI port and the internal monitor can be allocated to either GPU on demand.
This is the other scheme of NVIDIA Optimus, called the MUXed scheme.
If you spent thousands on a top-end laptop, you may get this:
You may be asking: where did the iGPU go? Why would a multi-thousand-dollar gaming laptop not need one?
Under this scheme the manufacturer cuts power to the iGPU entirely, so the whole power budget can be allocated to the CPU and dGPU for better performance. This is essentially the same as a desktop computer.
How to determine the actual scheme:
Run lspci
on the Linux OS, and look for entries about Intel HD Graphics or NVIDIA:
- If the NVIDIA GPU shows up as a 3D Controller, you have the first Optimus scheme (iGPU connected to the monitor).
- If the NVIDIA GPU shows up as a VGA Controller, and there is an HD Graphics GPU, you have the second Optimus scheme (switching between two GPUs).
- If the NVIDIA GPU shows up as a VGA Controller, and there is no HD Graphics GPU, you have the last scheme without an iGPU.
When writing this article I'm using this laptop and OS:
And here are my goals:
Before starting you need to prepare:
Important tips:
The NVIDIA driver on the host OS will keep control of the dGPU and prevent the VM from using it. Therefore you need to replace the driver with vfio-pci
, a driver built solely for PCIe passthrough.
Even if you don't plan to pass through the dGPU, you need to switch the graphics output of the host OS to the iGPU, or Virt-Manager will crash later. You may disable the NVIDIA drivers with the steps below, or use software such as optimus-manager
for management.
Here are the steps for disabling the NVIDIA driver and passing control to PCIe passthrough module:
Run lspci -nn | grep NVIDIA
and obtain an output similar to:
01:00.0 3D controller [0302]: NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile] [10de:1c8d] (rev a1)
Here [10de:1c8d]
is the vendor ID and device ID of the dGPU, where 10de
means this device is manufactured by NVIDIA, and 1c8d
means this is a GTX 1050.
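The ID pair can also be extracted from the lspci output with a quick shell pipeline. A sketch (the sample line below is my GTX 1050; substitute your own lspci -nn output):

```shell
# Sample lspci -nn line; replace with the line for your dGPU.
line='01:00.0 3D controller [0302]: NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile] [10de:1c8d] (rev a1)'
# The last [xxxx:xxxx] bracket pair is the vendor:device ID
# (the earlier [0302] is a class code and has no colon, so it is skipped).
ids=$(printf '%s' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tail -n1 | tr -d '[]')
echo "$ids"                        # 10de:1c8d
echo "options vfio-pci ids=$ids"   # the line to put in /etc/modprobe.d
```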
Create /etc/modprobe.d/lantian.conf
with the following content:
options vfio-pci ids=10de:1c8d
This configures vfio-pci
, the kernel module responsible for PCIe passthrough, to manage the dGPU. ids
is the vendor ID and device ID of the device to be passed through.
Modify /etc/mkinitcpio.conf
, add the following contents to MODULES
:
MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd)
And remove anything related to NVIDIA drivers (such as nvidia
)
Now the PCIe passthrough module will take control of the dGPU early in the boot process, preventing the NVIDIA drivers from grabbing it.
Run mkinitcpio -P
to update initramfs.
Reboot.
Remember the multi-thousand-dollar NVIDIA GRID GPUs? If you get hold of one of these, the GPU driver itself supports creating multiple virtual GPUs for different VMs, similar to CPU virtualization.
But unlike NVIDIA, Intel CPUs from the 5th generation onwards support this (GVT-g) out of the box, so you don't need to pay the ransom for an expensive GPU. Although the iGPU is weak, it at least allows for smooth web browsing in the VM compared to QXL and the like.
Passing through this virtual Intel GPU is relatively easy, and may serve as good practice.
Modify your kernel parameters (Usually located at /boot/loader/entries/arch.conf
if you use Systemd-boot), and add:
i915.enable_gvt=1 kvm.ignore_msrs=1 intel_iommu=on
Modify /etc/modules-load.d/lantian.conf
and add the following 3 lines:
kvmgt
vfio-iommu-type1
vfio-mdev
These 3 lines correspond to required kernel modules.
Reboot.
Run lspci | grep "HD Graphics"
to find the PCIe address of the iGPU. For example, I get this output:
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 630 (rev 04)
In this case iGPU is located at 00:02.0
on PCIe bus.
Run the following command to create the virtual GPU:
# Must run as root
sudo su
echo "af5972fb-5530-41a7-0000-fd836204445b" > "/sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create"
Pay attention to the iGPU's PCIe bus address in the path. You may optionally replace the UUID with one of your own.
Run virsh edit Win10
, where Win10
is the name of your VM. Insert the following contents above </devices>
:
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='off'>
<source>
<address uuid='af5972fb-5530-41a7-0000-fd836204445b'/>
</source>
</hostdev>
Replace the UUID to match the last step. Also, display
being set to off
here is intentional for now.
Do not remove the QXL GPU yet.
Start the VM and open Device Manager. You should see a Microsoft Basic Display Adapter
.
Connect the VM to the Internet and wait. Windows will automatically install the iGPU drivers, and you will see the Intel Control Panel in Start Menu.
After the driver is installed, the VM can use the Intel GPU. But since the monitor currently shows images from the QXL GPU, and the Intel GPU is not the primary GPU, Windows hasn't assigned any program to run on the Intel GPU yet.
In the <hostdev>
added above, change display='off'
to display='on'
.
Remove everything in <graphics>...</graphics>
and <video>...</video>
, and replace with:
<graphics type='spice'>
<listen type='none'/>
<image compression='off'/>
<gl enable='yes'/>
</graphics>
<video>
<model type='none'/>
</video>
Add these lines before </domain>
:
<qemu:commandline>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.ramfb=on'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.driver=vfio-pci-nohotplug'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.x-igd-opregion=on'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.xres=1920'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.yres=1080'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.romfile=/vbios_gvt_uefi.rom'/>
<qemu:env name='MESA_LOADER_DRIVER_OVERRIDE' value='i965'/>
</qemu:commandline>
The vbios_gvt_uefi.rom
can be downloaded from http://120.25.59.132:3000/vbios_gvt_uefi.rom, or from this site, and should be placed in the root directory (/). If you put it elsewhere, you need to modify the romfile
parameter accordingly.
Change the first line of the configuration file, <domain type='kvm'>
, to <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
.
In the previous steps, the official NVIDIA driver on the host OS was disabled, and the dGPU is now managed by vfio-pci
for PCIe passthrough.
Passing through the dGPU itself is simple, but NVIDIA added a lot of driver limitations for money:
So we have to hack through all these pitfalls.
First reboot the physical machine into Windows, open Device Manager, and record the hardware ID of the dGPU, which looks like:
PCI\VEN_10DE&DEV_1C8D&SUBSYS_39D117AA&REV_A1
Then reboot back to Linux. If you haven't exported the GPU vBIOS before, you may use the VBiosFinder
software to extract it from your computer's BIOS update.
# Download VBiosFinder
git clone https://github.com/coderobe/VBiosFinder.git
# Download BIOS update from your computer's manufacturer site, usually an EXE file.
# My BIOS update is named as BIOS-4KCN45WW.exe, replace accordingly
mv BIOS-4KCN45WW.exe VBiosFinder/
# Install dependencies
pikaur -S ruby ruby-bundler innoextract p7zip upx
# Install rom-parser
git clone https://github.com/awilliam/rom-parser.git
cd rom-parser
make
mv rom-parser ../VBiosFinder/3rdparty
cd ..
# Install UEFIExtract
git clone https://github.com/LongSoft/UEFITool.git -b new_engine
cd UEFITool
./unixbuild.sh
mv UEFIExtract/UEFIExtract ../VBiosFinder/3rdparty
cd ..
# Extract vBIOS
cd VBiosFinder
bundle update --bundler
bundle install --path=vendor/bundle
./vbiosfinder extract BIOS-4KCN45WW.exe
ls output
# There will be a few files in the output folder:
# - vbios_10de_1c8c.rom
# - vbios_10de_1c8d.rom
# - vbios_10de_1c8e.rom
# - ...
# Find the one corresponding to the vendor ID and device ID, which is your vBIOS.
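Since the output filenames encode the vendor and device IDs, picking the right ROM can be scripted. A sketch assuming the vbios_<vendor>_<device>.rom naming shown above:

```shell
# vendor:device pair recorded earlier (10de:1c8d for my GTX 1050).
ids='10de:1c8d'
# VBiosFinder names its output vbios_<vendor>_<device>.rom:
want="vbios_$(printf '%s' "$ids" | tr ':' '_').rom"
echo "$want"   # vbios_10de_1c8d.rom
# ls "output/$want" to confirm it was extracted.
```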
Then add the vBIOS to VM's UEFI firmware (or OVMF).
On an Optimus laptop, NVIDIA drivers will search for the vBIOS from system's ACPI table, and load it to the GPU. The ACPI table is managed by the UEFI firmware, so it needs to be modified to add the vBIOS.
# Based on reports on GitHub, UEFI firmware shouldn't be moved once built
# So find somewhere to permanently store the files
cd /opt
git clone https://github.com/tianocore/edk2.git
# Install dependencies
pikaur -S git python2 iasl nasm subversion perl-libwww vim dos2unix gcc5
# Assuming your vBIOS is at /vbios.rom
cd edk2/OvmfPkg/AcpiPlatformDxe
xxd -i /vbios.rom vrom.h
# Modify vrom.h, and rename the unsigned char array to VROM_BIN
# and modify the length variable at the end to VROM_BIN_LEN, and record the number, 167936 in my case
wget https://github.com/jscinoz/optimus-vfio-docs/files/1842788/ssdt.txt -O ssdt.asl
# Modify ssdt.asl, change line 37 to match VROM_BIN_LEN
# Run the following commands. Errors may pop up, but they're fine as long as Ssdt.aml is generated
iasl -f ssdt.asl
xxd -c1 Ssdt.aml | tail -n +37 | cut -f2 -d' ' | paste -sd' ' | sed 's/ //g' | xxd -r -p > vrom_table.aml
xxd -i vrom_table.aml | sed 's/vrom_table_aml/vrom_table/g' > vrom_table.h
# Switch back to edk2's folder and apply a patch
cd ../..
wget https://gist.github.com/jscinoz/c43a81882929ceaf7ec90afd820cd470/raw/139799c87fc806a966250e5686e15a28676fc84e/nvidia-hack.diff
patch -p1 < nvidia-hack.diff
# Compile OVMF
make -C BaseTools
. ./edksetup.sh BaseTools
# Modify these variables in Conf/target.txt:
# - ACTIVE_PLATFORM = OvmfPkg/OvmfPkgX64.dsc
# - TARGET_ARCH = X64
# - TOOL_CHAIN_TAG = GCC5
build
# Wait until compilation is complete, and verify file's existence in Build/OvmfX64/DEBUG_GCC5/FV:
# - OVMF_CODE.fd
# - OVMF_VARS.fd
# Replace UEFI variables of your VM, remember to change VM names
cp Build/OvmfX64/DEBUG_GCC5/FV/OVMF_VARS.fd /var/lib/libvirt/qemu/nvram/Win10_VARS.fd
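The manual vrom.h edit in the middle of the steps above can also be done with sed. A sketch demonstrated on a sample file (vrom_sample.h stands in for the real vrom.h generated by xxd, whose symbol names depend on the input path; run the same sed commands on your actual file):

```shell
# Stand-in for the xxd-generated header (the real array is much longer).
printf 'unsigned char _vbios_rom[] = {\n  0x55, 0xaa\n};\nunsigned int _vbios_rom_len = 167936;\n' > vrom_sample.h
# Rename the symbols to VROM_BIN / VROM_BIN_LEN, as the OVMF patch expects:
sed -i -e 's/unsigned char [A-Za-z0-9_]*\[\]/unsigned char VROM_BIN[]/' \
       -e 's/unsigned int [A-Za-z0-9_]*_len/unsigned int VROM_BIN_LEN/' vrom_sample.h
# Note the length value - it goes into line 37 of ssdt.asl:
grep 'VROM_BIN_LEN' vrom_sample.h
```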
Modify your VM configuration, virsh edit Win10
, and do the following changes:
<!-- Modify the os section, remember to match the path to OVMF_CODE.fd -->
<os>
<type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
<loader readonly='yes' type='pflash'>/opt/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/Win10_VARS.fd</nvram>
</os>
<!-- Modify the features section, so QEMU will hide the fact that this is a VM -->
<features>
<acpi/>
<apic/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vendor_id state='on' value='GenuineIntel'/>
</hyperv>
<kvm>
<hidden state='on'/>
</kvm>
<vmport state='off'/>
</features>
<!-- Add the PCIe passthrough device, must be below the hostdev for iGPU -->
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</source>
<rom bar='off'/>
<!-- The PCIe bus address here MUST BE EXACTLY 01:00.0 -->
<!-- If there is a PCIe bus address conflict when saving config changes, -->
<!-- Remove <address> of all other devices -->
<!-- And Libvirt will reallocate PCIe bus addresses -->
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<!-- Add these parameters before </qemu:commandline> -->
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev1.x-pci-vendor-id=0x10de'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev1.x-pci-device-id=0x1c8d'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev1.x-pci-sub-vendor-id=0x17aa'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev1.x-pci-sub-device-id=0x39d1'/>
<qemu:arg value='-acpitable'/>
<qemu:arg value='file=/ssdt1.dat'/>
The IDs here should match the hardware ID from Device Manager, PCI\VEN_10DE&DEV_1C8D&SUBSYS_39D117AA&REV_A1
. Replace accordingly.
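Note how the hardware ID maps to the four x-pci-* overrides: VEN_ and DEV_ map directly, while the eight-digit SUBSYS_ field is the sub-device ID followed by the sub-vendor ID. A shell sketch of the split:

```shell
hwid='PCI\VEN_10DE&DEV_1C8D&SUBSYS_39D117AA&REV_A1'
ven=$(printf '%s' "$hwid" | sed -E 's/.*VEN_([0-9A-F]{4}).*/\1/')
dev=$(printf '%s' "$hwid" | sed -E 's/.*DEV_([0-9A-F]{4}).*/\1/')
subsys=$(printf '%s' "$hwid" | sed -E 's/.*SUBSYS_([0-9A-F]{8}).*/\1/')
# SUBSYS_39D117AA = sub-device 39D1 + sub-vendor 17AA
echo "x-pci-vendor-id=0x$ven x-pci-device-id=0x$dev"               # 0x10DE 0x1C8D
echo "x-pci-sub-vendor-id=0x$(printf '%s' "$subsys" | cut -c5-8)"  # 0x17AA
echo "x-pci-sub-device-id=0x$(printf '%s' "$subsys" | cut -c1-4)"  # 0x39D1
```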
The ssdt1.dat file corresponds to the Base64 below; it can be converted to a binary file with any Base64 decoding tool and placed in the root directory (/), or downloaded from this site. If you put it elsewhere, modify the file parameter accordingly. This is another ACPI table, one that emulates a fully-charged battery; but instead of being merged into OVMF, it is simply loaded through a QEMU argument.
U1NEVKEAAAAB9EJPQ0hTAEJYUENTU0RUAQAAAElOVEwYEBkgoA8AFVwuX1NCX1BDSTAGABBMBi5f
U0JfUENJMFuCTwVCQVQwCF9ISUQMQdAMCghfVUlEABQJX1NUQQCkCh8UK19CSUYApBIjDQELcBcL
cBcBC9A5C1gCCywBCjwKPA0ADQANTElPTgANABQSX0JTVACkEgoEAAALcBcL0Dk=
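If you prefer the command line over a decoding website, the table can be produced with base64 -d (decoding into the current directory first; moving it to / needs root):

```shell
# Decode the Base64 above into the ACPI battery table.
base64 -d > ssdt1.dat <<'EOF'
U1NEVKEAAAAB9EJPQ0hTAEJYUENTU0RUAQAAAElOVEwYEBkgoA8AFVwuX1NCX1BDSTAGABBMBi5f
U0JfUENJMFuCTwVCQVQwCF9ISUQMQdAMCghfVUlEABQJX1NUQQCkCh8UK19CSUYApBIjDQELcBcL
cBcBC9A5C1gCCywBCjwKPA0ADQANTElPTgANABQSX0JTVACkEgoEAAALcBcL0Dk=
EOF
# Sanity check: an ACPI table starts with its 4-byte signature, "SSDT" here.
head -c 4 ssdt1.dat   # SSDT
# sudo mv ssdt1.dat /ssdt1.dat
```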
Do not miss any steps, or you will be welcomed by Code 43 (Driver load failure).
Start the VM and wait a while. Windows will automatically install NVIDIA drivers.
In Device Manager, switch the view to Devices by Connection, and verify that the dGPU is at Bus 1, Slot 0, Function 0. The parent PCIe port of the dGPU should be at Bus 0, Slot 1, Function 0.
Even if you've done every step above, and got both the iGPU and dGPU working in the VM, this is still not very helpful for gaming:
Therefore, currently, Optimus GPU passthrough is more for tinkerers than for actual gamers. If you are experienced in driver development, you may research the following directions:
Huge thanks to the previous explorers of GPU passthrough. Without their efforts, this article wouldn't have existed in the first place.
Here are the sources I referenced when I did my configuration:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>Win10</name>
<uuid>6f0e09e1-a7d4-4d33-b4f8-0dc69eaaed9b</uuid>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://microsoft.com/win/10"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>8</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
<loader readonly='yes' type='pflash'>/opt/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/Win10_VARS.fd</nvram>
</os>
<features>
<acpi/>
<apic/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vendor_id state='on' value='GenuineIntel'/>
</hyperv>
<kvm>
<hidden state='on'/>
</kvm>
<vmport state='off'/>
</features>
<cpu mode='host-model' check='partial'>
<topology sockets='1' dies='1' cores='4' threads='2'/>
</cpu>
<clock offset='localtime'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/Win10.img'/>
<target dev='vda' bus='virtio'/>
<boot order='1'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/mnt/files/LegacyOS/Common/virtio-win-0.1.141.iso'/>
<target dev='sda' bus='sata'/>
<readonly/>
<boot order='2'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x15'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0x8'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='8' port='0x9'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
</controller>
<controller type='pci' index='9' model='pcie-to-pci-bridge'>
<model name='pcie-pci-bridge'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='10' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='10' port='0xa'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='11' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='11' port='0xb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</controller>
<controller type='scsi' index='0' model='virtio-scsi'>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:b0:65:5a'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice'>
<listen type='none'/>
<image compression='off'/>
<gl enable='yes'/>
</graphics>
<sound model='ich9'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
</sound>
<video>
<model type='none'/>
</video>
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
<source>
<address uuid='af5972fb-5530-41a7-0000-fd836204445b'/>
</source>
<address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</source>
<rom bar='off'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='2'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='3'/>
</redirdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
</memballoon>
</devices>
<qemu:commandline>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.ramfb=on'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.driver=vfio-pci-nohotplug'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.x-igd-opregion=on'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.xres=1920'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.yres=1080'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev0.romfile=/vbios_gvt_uefi.rom'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev1.x-pci-vendor-id=0x10de'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev1.x-pci-device-id=0x1c8d'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev1.x-pci-sub-vendor-id=0x17aa'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.hostdev1.x-pci-sub-device-id=0x39d1'/>
<qemu:arg value='-acpitable'/>
<qemu:arg value='file=/ssdt1.dat'/>
<qemu:env name='MESA_LOADER_DRIVER_OVERRIDE' value='i965'/>
</qemu:commandline>
</domain>