iommu with sr-iov: gpu accelerated windows vm on a linux laptop that has only igpu
screenshot of a windows 11 vm guest with gpu hardware acceleration
the first thing that comes to mind with gpu passthrough is that it usually requires a secondary dedicated graphics card that's unused by the host. if not that, then a paravirtualization solution such as virtualbox/vmware/spice guest tools, or virgl3d.
but then we have sr-iov, which on supported intel processors lets us create a virtual function (VF) of the Intel UHD Graphics iGPU. this lets us do gpu passthrough to our VM without needing to leave the gpu unused by the host. in this blog, i will talk about how i set this up.
note: you need a kernel version somewhere between 6.12.x ~ 6.19.x
disclaimer: sr-iov is still considered experimental. things might break if you're unlucky, but they usually shouldn't.
setting up
you will need to ensure that the intel processor you are using is Gen 9.5 or newer for the best possible experience. ensure that both VT-x and VT-d are enabled in the bios settings so you can use IOMMU for hardware passthrough, otherwise you will still be limited to paravirtualization.
the operating system i use at the time of this writing is arch linux, with the default arch kernel. since sr-iov support hasn't been mainlined yet, we install the i915-sriov dkms module via aur:
yay -S i915-sriov-dkms
to keep things compatible across kernels, i use the dkms variant since it's easier to maintain and lets me switch kernels back and forth just in case.
for your convenience, please add yourself to the kvm group: usermod -aG kvm you
kernel
continuing, we begin by explicitly telling the kernel to enable intel iommu by adding this to the kernel boot params. 1 VF is generally enough. pick which driver you want to use between the two:
i915:
intel_iommu=on i915.enable_guc=3 i915.max_vfs=1 module_blacklist=xe
xe (the new experimental driver. you will need it if you have, for example, Intel Arc/Iris):
intel_iommu=on xe.max_vfs=1 xe.force_probe=device-id module_blacklist=i915
You can also try to use xe on iGPU, however given how new this driver is, it might be unstable
To get the device-id, execute lspci -nn | grep -i vga and grab the 16-bit hexadecimal id of the iGPU:
[yonle@yonle ~]$ lspci -nn | grep -i vga
0000:00:02.0 VGA compatible controller [0300]: Intel Corporation Alder Lake-UP3 GT1 [UHD Graphics] [8086:46b3] (rev 0c)
as you can see here, the device id of my iGPU is 46b3, which you will use for the boot param above and later on.
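to actually apply these params, add them through your bootloader. a sketch assuming grub (systemd-boot users append them to the options line of their boot entry instead):

```shell
# append the params to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_guc=3 i915.max_vfs=1 module_blacklist=xe"
# then regenerate the grub config:
doas grub-mkconfig -o /boot/grub/grub.cfg
```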
now reboot, then check dmesg to see whether SR-IOV is actually loaded properly:
[yonle@yonle ~]$ doas dmesg | grep -i sriov
[ 5.169765] i915: You are using the i915-sriov-dkms module, a ported version of the i915/xe module with SR-IOV support.
[ 5.169767] i915: Please file any bug report at https://github.com/strongtz/i915-sriov-dkms/issues/new.
[ 5.169768] i915: Module Homepage: https://github.com/strongtz/i915-sriov-dkms
[ 5.289502] intel_sriov_compat: loaded
replace “doas” with “sudo” if you use sudo.
if you saw intel_sriov_compat: loaded, you're good to go.
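you can also sanity-check that the igpu actually exposes vfs with a quick look into sysfs (assuming the usual 0000:00:02.0 address):

```shell
# how many VFs the hardware supports, and how many are currently enabled
cat /sys/devices/pci0000:00/0000:00:02.0/sriov_totalvfs
cat /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs
```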
making the virtual function
technically, you can create the vf manually via the command line, but for some reason that can give you more work than necessary to get things running.
so, assuming you're on arch linux, make a systemd-tmpfiles config that creates just 1 vf. edit /etc/tmpfiles.d/i915-set-sriov-numvfs.conf:
#Path Mode UID GID Age Argument
#Uncomment the next line and change the argument to the number of VFs you want
w /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs - - - - 1
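if you want to test this without rebooting, you can ask systemd-tmpfiles to apply the config right away:

```shell
doas systemd-tmpfiles --create /etc/tmpfiles.d/i915-set-sriov-numvfs.conf
# the vf should show up immediately:
lspci | grep -i vga
```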
and then, make a udev rule that stops the vf (but not the host's own gpu) from being picked up by the host (eg, your main de/wm), binding it to vfio-pci instead. edit /etc/udev/rules.d/99-i915-vf-vfio.rules:
ACTION=="add", SUBSYSTEM=="pci", KERNEL=="0000:00:02.1", ATTR{vendor}=="0x8086", ATTR{device}=="0x46b3", DRIVER!="vfio-pci", RUN+="/bin/sh -c 'echo \$kernel > /sys/bus/pci/devices/\$kernel/driver/unbind; echo vfio-pci > /sys/bus/pci/devices/\$kernel/driver_override; modprobe vfio-pci; echo \$kernel > /sys/bus/pci/drivers/vfio-pci/bind'"
note: replace 46b3 with your gpu device id that you obtained above.
then, reboot.
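for reference, this is roughly what the udev rule does, done by hand (using my vf address; substitute yours):

```shell
# unbind the vf from its current driver (if any) and hand it to vfio-pci
echo 0000:00:02.1 | doas tee /sys/bus/pci/devices/0000:00:02.1/driver/unbind
echo vfio-pci    | doas tee /sys/bus/pci/devices/0000:00:02.1/driver_override
doas modprobe vfio-pci
echo 0000:00:02.1 | doas tee /sys/bus/pci/drivers/vfio-pci/bind
```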
you should now see 2 iGPUs when running lspci:
[yonle@yonle module]$ lspci | grep -i vga
0000:00:02.0 VGA compatible controller: Intel Corporation Alder Lake-UP3 GT1 [UHD Graphics] (rev 0c)
0000:00:02.1 VGA compatible controller: Intel Corporation Alder Lake-UP3 GT1 [UHD Graphics] (rev 0c)
remember: your guest must only use the vf, which in this case is 0000:00:02.1
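you can also confirm the vf actually ended up on vfio-pci (and wasn't grabbed by i915/xe), which is exactly what the udev rule is for:

```shell
lspci -k -s 0000:00:02.1
# "Kernel driver in use: vfio-pci" is what you want to see here
```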
kvmfr
as we're also going to use looking glass, let's prepare kvmfr for the shared memory.
first, ensure that your kernel headers are installed properly before installing the dkms module.
installing manually
obtain the source code tarball from here, extract the module folder, and then:
cd module
doas dkms install .
caution: you must rebuild the DKMS module on each kernel update / when you switch to a different kernel.
installing via AUR
yay -S looking-glass-module-dkms
👍
configuring kvmfr
you should be able to load kvmfr now:
doas modprobe kvmfr static_size_mb=32
the looking glass docs have a fantastic explanation of how to determine the shared memory size for your setup, which you should read.
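as a sketch of what that calculation looks like for a single 1920x1080 display (the formula from the looking glass docs: width × height × 4 bytes × 2 frames, plus roughly 10 MiB of overhead, rounded up to the next power of two):

```shell
# estimate the kvmfr size in MiB for a 1920x1080 display
W=1920 H=1080
MB=$(( (W * H * 4 * 2) / 1024 / 1024 + 10 ))   # frame buffers + ~10 MiB overhead
SIZE=1
while [ "$SIZE" -lt "$MB" ]; do SIZE=$((SIZE * 2)); done   # round up to a power of two
echo "$SIZE"   # -> 32, which is where static_size_mb=32 comes from
```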
now, let's make this module load automatically on boot. first we set the default kvmfr load params by editing /etc/modprobe.d/kvmfr.conf and putting this:
options kvmfr static_size_mb=32
then edit /etc/modules-load.d/kvmfr.conf and add this:
kvmfr
now, we make a udev rule to ensure that the device gets proper permissions. edit /etc/udev/rules.d/99-kvmfr.rules:
SUBSYSTEM=="kvmfr", OWNER="user", GROUP="kvm", MODE="0660"
replace user with your username.
to apply the udev permission immediately, do
doas chown you:kvm /dev/kvmfr0
doas chmod 660 /dev/kvmfr0
now edit /etc/libvirt/qemu.conf, and uncomment cgroup_device_acl and add /dev/kvmfr0 in it:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/userfaultfd",
"/dev/kvmfr0"
]
then restart libvirtd daemon.
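assuming systemd (the default on arch), that's:

```shell
doas systemctl restart libvirtd
# if your setup uses libvirt's modular daemons instead of the monolithic
# libvirtd, restart virtqemud instead
```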
make a vm
we will use libvirt with virt-manager as the client.
the vm that we will create will be a Microsoft Windows 11 VM.
before you begin your installation, do a customization first. on [Overview]'s XML, replace the following top line:
<domain type="kvm">
with this:
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
and then, add the following inside the <domain> field:
<qemu:commandline>
<qemu:arg value="-device"/>
<qemu:arg value="{'driver':'ivshmem-plain','id':'lg','memdev':'looking-glass'}"/>
<qemu:arg value="-object"/>
<qemu:arg value="{'qom-type':'memory-backend-file','id':'looking-glass','mem-path':'/dev/kvmfr0','size':33554432,'share':true}"/>
</qemu:commandline>
replace 33554432 with your calculated shared memory size in bytes (here 32 MiB = 32 × 1024 × 1024 = 33554432).
then, [Add Hardware] –> [PCI Host Device], and look for your VF iGPU (from the previous lspci, it should be 0000:00:02.1)
remove the existing keyboard and tablet inputs, and make new inputs for both with the virtio bus. additionally, if there's an “EvTouch” input or anything outside of “ps2” and “virtio” inputs, you may want to attach a virtio version of it too.
and then proceed with the installation using vga as usual, until you've finished installing the OS with the Intel Graphics Driver and the Windows virtio drivers installed.
since i'm on a 12th gen alder lake, i installed version 32.0.101.7085 as of the time of this writing (or, “Intel® 11th – 14th Gen Processor Graphics – Windows*“).
you should only need to install the Intel Graphics Driver, and that's all it takes to work. after installation, reboot to ensure that the driver actually loads properly, then check via device manager (right click on the start button and go from there). it should look like this:
note: i will guide you on setting up Virtual Display Driver from here
do not turn VGA to None until looking glass and VDD have been configured by following the steps below
Virtual Display Driver (VDD)
usually, you would use a dummy HDMI or DP adapter to make the GPU start drawing a screen. since we can't do that with a VF, we basically make a screen of our own here.
open your terminal, and execute
winget install --id=VirtualDrivers.Virtual-Display-Driver -e --source winget
once it's successfully installed, open a new terminal tab, and type
& 'VDD Control.exe'
It will launch a new window looking like this:

Press [Install Driver] and proceed with the driver installation. If it succeeds, the OS will play an animation as if a secondary monitor got plugged in.
Looking Glass
Now, Install looking glass on your host machine:
yay -S looking-glass
and then on your VM, install the Looking Glass Host, which you can obtain from here.
after getting the looking glass host started in the VM, try connecting to it from the host machine by just typing looking-glass.
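if the client doesn't pick up the kvmfr device on its own, you can point it there explicitly through looking glass's client config (a sketch; the same thing can be done with the -f flag on the command line):

```shell
# write a minimal looking glass client config pointing at the kvmfr device
mkdir -p ~/.config/looking-glass
cat > ~/.config/looking-glass/client.ini <<'EOF'
[app]
shmFile=/dev/kvmfr0
EOF
```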
if you see the virtual display monitor from here, then congrats. your setup works properly.

now, power off your VM, set VGA to None, and start the VM again. the console display won't be visible, so launch Looking Glass on your host machine again.
after you finished configuring the display here, it should look like this now:

audio enhancement
the default ich9 sound device suffers from latency issues when bombarded with a lot of things at once, especially in rhythm games that require low latency.
we can use scream audio driver here. but do the following first:
– ensure that the network card is virtio, if not, switch to it
– remove ich9 audio card
for arch linux, you can get scream receiver via aur:
yay -S scream-git
and then just launch the receiver in the background:
scream -u -o jack -i virbr0
note: this assumes that your setup is using pipewire, and it's recommended that you use jack as the output. ensure you have pipewire-jack installed on your host.
note2: in case you need seriously low audio latency (eg, editing, rhythm gaming, etc) and jack doesn't cut it, use sndio; however, this requires that no other app on your system is playing audio first.
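if you'd rather not keep a terminal around for the receiver, a user service sketch works too (the unit name here is my own; adjust the output and interface flags to your setup):

```shell
# create a systemd user unit that keeps the scream receiver running
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/scream.service <<'EOF'
[Unit]
Description=scream audio receiver

[Service]
ExecStart=/usr/bin/scream -u -o jack -i virbr0
Restart=on-failure

[Install]
WantedBy=default.target
EOF
systemctl --user enable --now scream.service
```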
while it's chilling in the background, let's set up the windows driver for it.
first, turn on test mode. open a terminal as administrator (right click start –> Terminal (Admin)), then run:
bcdedit /set testsigning on
then restart the vm. you should see [Test Mode] in the bottom-right corner of your wallpaper.
download the non-source zip (usually named ScreamX.X.zip) from here, extract it, navigate to <scream folder>/Install/driver/x64/, open a terminal as admin there, then run:
pnputil /add-driver .\Scream.inf /install
then we can finally disable test mode again:
bcdedit /set testsigning off
configure the scream audio driver via the registry editor. in this case, it can be done via the command lines below:
REG ADD HKLM\SYSTEM\CurrentControlSet\Services\Scream\Options /v UnicastIPv4 /t REG_SZ /d "192.168.122.1" /f
REG ADD HKLM\SYSTEM\CurrentControlSet\Services\Scream\Options /v UnicastPort /t REG_DWORD /d 4010 /f
Replace 192.168.122.1 with the local address of your host as assigned by your virtual bridge, eg virbr0.
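to find that address, check the bridge on the host:

```shell
# the host side of libvirt's default NAT network
ip -4 addr show virbr0
```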
and then reboot.
caveats
- if you only load i915 with xe blacklisted, any activity on the host (outside the guest) that involves vulkan may start slowly / temporarily stall the entire drm. this can be worked around by using the xe driver instead, which handles both OpenGL and Vulkan properly, but xe might stall the entire drm once the vm finishes booting
- loading both drivers simultaneously does not currently resolve these problems.
if these issues become annoying, consider adding a separate bootloader entry that disables SR-IOV and IOMMU entirely. this lets you boot into a normal configuration when you don't plan to run a VM.
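a sketch of what such an entry could look like with systemd-boot (it would live in /boot/loader/entries/; the title and paths here are examples, and the options line is your normal one minus intel_iommu / max_vfs / module_blacklist):

```
title   arch linux (no sr-iov)
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=UUID=... rw
```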
extra

if necessary, you can try to debloat your windows VM by using these tools:
– raphire/Win11Debloat to remove most of the bloat
– es3n1n/defendnot to disable windows defender by pretending there's another antivirus, in order to reduce load
cpu usage:

ram usage:

extra note: if at some point you plan to do gaming here, including low latency gaming, consider attaching your input devices (keyboard, mouse) to your VM via passthrough, as the existing input grabber is meant for normal workstation use.
extra note2: given that you have installed virtio drivers above, switch your VM network card to virtio for best performance.
extra note3: if you're still overwhelmed by visual and audio latency even with the solutions above, consider passing a dGPU and an external soundcard into the VM instead.
if you need to see how the config looks like on the system, you can check on my dotfiles to see what has been configured here and there.
troubleshooting
looking glass suddenly stuck on [The host application seems to not be running]
the VF of your iGPU could have gotten into a bad state, probably due to repeated reuse (eg, constant reboots, suspending your laptop while the VF is still in effect, etc). to fix this, try rebooting your host. if that still doesn't fix the problem, try reinstalling the intel graphics driver inside the VM.
mouse sensitivity is way too high on looking glass
navigating the display without capturing your mouse will behave like this by default. enable capture mode by pressing looking glass's escape key (the default is scrolllock; you can change this by setting -m KEY_<KEY> when launching looking glass via the command line).
aand that's it.
for references:
– Looking Glass B7 Installation Documentation
– i915-sriov-dkms docs
– Libvirt: Domain XML Format documentation
– Github: Scream audio driver README
honorable mention: my best friend RionWijaya, for informing me about his experiments and helping me out a bit
happy VM-ing once again.
p.s.: you can try to apply the same logic if you have a spare dGPU by skipping the intel sr-iov step (you'll still need iommu for the passthrough itself)