- What to expect?
- Prerequisites
- Bumblebee setup guide
- dGPU passthrough guide
- FAQ
- Known issue
- Reference
Just like most of you, and partly because of the lack of success reports, I once thought this was mission impossible on a laptop. But here we are. Words can hardly express my excitement, and I am now sharing my success and this tutorial to help you achieve the goal you might have been striving toward for so long.
Depending on your hardware, you can have a laptop that:
- Physically runs a Linux distribution as the host machine,
- Can power the Nvidia dGPU on/off and use it on demand with Bumblebee,
- Can pass the Nvidia dGPU to your VM when you don't need it on the host,
- Gets the dGPU back when the VM shuts down,
- Can use the dGPU with Bumblebee again without any problem,
- Needs no reboot during this dGPU binding/unbinding process,
- Needs no external display (depending on your hardware and the Windows version your VM runs),
- Can connect an external display directly to your VM (only some machines with a specific setup).
With all that said, this tutorial does not mean any laptop with an Optimus setup will be able to pass through its dGPU. Generally, a fairly high-end laptop is still required, and you are very likely to succeed if your laptop uses a swappable MXM form factor graphics card.
- A CPU that supports hardware virtualization (Intel VT-x) and IOMMU (Intel VT-d).
  - Check here for a full list of qualified CPUs.
  - To achieve the "no external display" goal, we will use RemoteFX. With proper configuration, gaming at 1600x900 with FPS higher than 60 should not be a problem for most passthrough-qualified laptops (the RemoteFX codec is CPU intensive). For those who want to game at 1920x1080 with 60+ FPS through RemoteFX, an overclockable CPU like the 6820HK or 7820HK, or the 6920HQ and 7920HQ (which can actually be overclocked by +600 MHz using XTU), is recommended.
- A motherboard that supports IOMMU with a decent IOMMU layout, i.e. your dGPU is in its own IOMMU group apart from other devices.
  - Because there is no ACS support on laptops so far (maybe some barebones have it), a decent IOMMU layout is crucial, since the ACS override patch is not applicable.
- Verification:
  - Boot with the `intel_iommu=on` kernel parameter and use `dmesg | grep -i iommu` to verify your IOMMU support; this will also print your IOMMU layout.
  - Example:

    ```
    # 00:01.0 PCI bridge: Intel Corporation Sky Lake PCIe Controller (x16) (rev 05)
    # 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1bb6 (rev a1)
    [    0.000000] DMAR: IOMMU enabled
    [    0.086383] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
    [    1.271222] iommu: Adding device 0000:00:00.0 to group 0
    [    1.271236] iommu: Adding device 0000:00:01.0 to group 1
    [    1.271244] iommu: Adding device 0000:00:04.0 to group 2
    [    1.271257] iommu: Adding device 0000:00:14.0 to group 3
    [    1.271264] iommu: Adding device 0000:00:14.2 to group 3
    [    1.271277] iommu: Adding device 0000:00:15.0 to group 4
    [    1.271284] iommu: Adding device 0000:00:15.1 to group 4
    [    1.271293] iommu: Adding device 0000:00:16.0 to group 5
    [    1.271301] iommu: Adding device 0000:00:17.0 to group 6
    [    1.271313] iommu: Adding device 0000:00:1c.0 to group 7
    [    1.271325] iommu: Adding device 0000:00:1c.2 to group 8
    [    1.271339] iommu: Adding device 0000:00:1c.4 to group 9
    [    1.271360] iommu: Adding device 0000:00:1f.0 to group 10
    [    1.271367] iommu: Adding device 0000:00:1f.2 to group 10
    [    1.271375] iommu: Adding device 0000:00:1f.3 to group 10
    [    1.271382] iommu: Adding device 0000:00:1f.4 to group 10
    [    1.271390] iommu: Adding device 0000:00:1f.6 to group 10
    [    1.271395] iommu: Adding device 0000:01:00.0 to group 1
    [    1.271407] iommu: Adding device 0000:02:00.0 to group 11
    [    1.271418] iommu: Adding device 0000:03:00.0 to group 12
    ```

  - Here the GPU (01:00.0) and its root port (00:01.0) are in the same group, and there is no other device in this group, which makes it a decent IOMMU layout.
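If you'd rather see the final grouping than grep boot logs, a small script can walk sysfs directly (a sketch; it works on any distro and prints a hint instead of failing when IOMMU is not enabled):

```shell
#!/bin/sh
# List every IOMMU group and the devices it contains.
list_iommu_groups() {
    if [ ! -d /sys/kernel/iommu_groups ] || \
       [ -z "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "No IOMMU groups found -- check that intel_iommu=on is in effect."
        return 0
    fi
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            # Fall back to the raw PCI address if lspci is unavailable
            lspci -nns "${d##*/}" 2>/dev/null || echo "  ${d##*/}"
        done
    done
}

list_iommu_groups
```

You want your dGPU to share a group with nothing but its own root port, as in the example above.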
Note: If your laptop uses a mobile server-grade motherboard, e.g. a chipset model that starts with the letter "C", you are good to go. Most high-end laptops that use MXM graphics cards come with a chipset like that, for example the MSI GT72/GT73 and the Dell Precision 7000 series mobile workstations.
- Host:
- I'm currently running Ubuntu 16.04 (with the 4.10 kernel), but it should also work on other distributions.
- System should be installed in UEFI mode, and boot via UEFI.
- Guest:
- A Windows version that supports RemoteFX (if you don't want an external display); the latest Windows 10 Pro, for example.
- QEMU:
- Currently running an Intel GVT-g build of QEMU (2.9.0, for testing iGPU virtualization); other mainstream QEMU builds should also work.
- RDP Client:
- Freerdp 2.0 or above for RDP 8 with RemoteFX connection.
Note: Keep your dual-boot Windows if you want to use software like XTU.
Note: You might need to disable secure boot before following this guide.
Note: This bumblebee setup is based on this guide. Thank you whizzzkid.
We will first go through my Bumblebee setup process. I installed Bumblebee first and set up passthrough second, but it should also work the other way around.
- Install the Intel graphics firmware patches
  - Refer to https://01.org/linuxgraphics/downloads/firmware, select your platform, and download the corresponding GuC, DMC and HuC firmware. Installation instructions are self-contained within those tar files.
- Solve the known interference between TLP and Bumblebee
  - TLP is a must-have for a Linux laptop, since it provides extra policies to save your battery. Install it with `sudo apt install tlp` if you haven't already.
  - Add the output of `lspci | grep "NVIDIA" | cut -b -8` to `RUNTIME_PM_BLACKLIST` in `/etc/default/tlp`, uncommenting the line if necessary. This will solve the interference.
- Install the Nvidia proprietary driver through Ubuntu system settings (or whatever install method you prefer).
- Solve the library linking problem in the Nvidia driver:

  ```sh
  # Replace 'xxx' with the version of the nvidia driver you installed.
  # You might need to repeat this every time you upgrade your nvidia driver.
  sudo mv /usr/lib/nvidia-xxx/libEGL.so.1 /usr/lib/nvidia-xxx/libEGL.so.1.org
  sudo mv /usr/lib32/nvidia-xxx/libEGL.so.1 /usr/lib32/nvidia-xxx/libEGL.so.1.org
  sudo ln -s /usr/lib/nvidia-xxx/libEGL.so.375.66 /usr/lib/nvidia-xxx/libEGL.so.1
  sudo ln -s /usr/lib32/nvidia-xxx/libEGL.so.375.66 /usr/lib32/nvidia-xxx/libEGL.so.1
  ```
- If everything works correctly, `sudo prime-select nvidia` followed by a logout will give you a login loop. Executing `sudo prime-select intel` in another tty (Ctrl+Alt+F1) will solve the login loop problem.
- It is recommended to switch back and forth once if you run into problems after an Nvidia driver update.
- Block nouveau:
  - Add the content below to `/etc/modprobe.d/blacklist-nouveau.conf`:

    ```
    blacklist nouveau
    options nouveau modeset=0
    ```

  - Run `sudo update-initramfs -u` when finished.
  - Reboot.
- (Optional) Install CUDA. Since the CUDA installation process is well guided by Nvidia, I will skip this part.
  - That said, I personally recommend the runfile installation; it is far easier to maintain compared to other install methods. Just make sure neither the display driver (self-contained in the runfile) nor the OpenGL libraries are checked during the runfile installation process. ONLY install the CUDA Toolkit, and don't run `nvidia-xconfig`.
- Solve some ACPI problems before the Bumblebee installation:
  - Add `nogpumanager acpi_osi=! acpi_osi=Linux acpi_osi=\"Windows 2015\" pcie_port_pm=off` to `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub`.
    - `nogpumanager` is actually part of the CUDA installation guide.
    - You might need `acpi_osi=\"Windows 2009\"` instead, if `2015` disables your trackpad.
    - For further information about these parameters, check:
  - Run `sudo update-grub` when finished.
  - Reboot.
- Install Bumblebee:

  ```sh
  sudo add-apt-repository ppa:bumblebee/testing
  sudo apt update
  sudo apt install bumblebee bumblebee-nvidia
  ```
  - Edit `/etc/bumblebee/bumblebee.conf`:
    - Change `Driver=` to `Driver=nvidia`
    - Change all occurrences of `nvidia-current` to `nvidia-xxx` (`xxx` is your Nvidia driver version), e.g. `KernelDriver=nvidia-xxx`
  - Save the file and run `sudo service bumblebeed restart`
- Kernel module loading modification:
  - Make sure the corresponding section in `/etc/modprobe.d/bumblebee.conf` looks like below:

    ```
    # Again, xxx is your nvidia driver version.
    blacklist nvidia-xxx
    blacklist nvidia-xxx-drm
    blacklist nvidia-xxx-updates
    blacklist nvidia-experimental-xxx
    ```

  - Add the content below to `/etc/modules-load.d/modules.conf`:

    ```
    i915
    bbswitch
    ```

  - Run `sudo update-initramfs -u` when finished.
  - Reboot.
- Create a group for Bumblebee so that you won't need `sudo` every time: `sudo groupadd bumblebee && sudo gpasswd -a $(whoami) bumblebee`
- Verification:
  - `cat /proc/acpi/bbswitch` should output something like `0000:01:00.0 OFF`
  - `optirun cat /proc/acpi/bbswitch` should output something like `0000:01:00.0 ON`
    - You will very likely run into the `[ERROR][XORG] (EE) Failed to load module "mouse" (module does not exist, 0)` problem; append the content below to `/etc/bumblebee/xorg.conf.nvidia` to solve it:

      ```
      Section "Screen"
          Identifier "Default Screen"
          Device "DiscreteNvidia"
      EndSection
      ```

    - Check here for more information about this problem.
  - `optirun nvidia-smi` should give you something like:

    ```
    Wed Nov 15 00:36:53 2017
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Quadro P5000        Off  | 00000000:01:00.0 Off |                  N/A |
    | N/A   44C    P0    30W /  N/A |      9MiB / 16273MiB |      3%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0      7934    G   /usr/lib/xorg/Xorg                               9MiB |
    +-----------------------------------------------------------------------------+
    ```
Congratulations! Stay and enjoy this moment a little before running into the next part.
- Set up QEMU:
  - QEMU from the Ubuntu official PPA should work; just run `sudo apt install qemu-kvm qemu-utils qemu-efi ovmf`.
  - For those who want to use the GVT-g QEMU as I did:

    ```sh
    sudo apt-get install git libfdt-dev libpixman-1-dev libssl-dev vim socat \
        libsdl1.2-dev libspice-server-dev autoconf libtool xtightvncviewer \
        tightvncserver x11vnc uuid-runtime uuid uml-utilities bridge-utils \
        python-dev liblzma-dev libc6-dev libusb-1.0-0-dev ovmf checkinstall
    git clone https://github.com/01org/igvtg-qemu
    cd igvtg-qemu
    git checkout stable-2.9.0
    # QEMU 2.9 does not support python3
    ./configure --prefix=/usr \
        --enable-kvm \
        --disable-xen \
        --enable-debug-info \
        --enable-debug \
        --enable-sdl \
        --enable-libusb \
        --enable-vhost-net \
        --enable-spice \
        --disable-debug-tcg \
        --target-list=x86_64-softmmu \
        --python=/usr/bin/python2
    make -j8
    # QEMU does not provide 'make uninstall'.
    # Use checkinstall so that you can remove it later with 'dpkg -r'.
    # When checkinstall asks, a version number starting with a digit is mandatory.
    sudo checkinstall
    ```
- Set up kernel modules and parameters:
  - Add `intel_iommu=on,igfx_off kvm.ignore_msrs=1` to `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub`, then run `sudo update-grub`.
    - From here: some Windows guest third-party applications/tools (like GPU-Z or PassMark 9.0) trigger MSR reads/writes directly, and the guest will soon BSOD if they access an unhandled MSR register, so we add `kvm.ignore_msrs=1` to GRUB as a workaround.
  - Add the content below to `/etc/initramfs-tools/modules` (order matters!):

    ```
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
    vhost-net
    ```

  - Run `sudo update-initramfs -u` when finished.
  - Reboot.
  - Run `lsmod` for verification.
- (Optional) Set up hugepages:
  - Check `cat /proc/cpuinfo` to see if it has the `pse` flag (for 2 MB pages) or the `pdpe1gb` flag (for 1 GB pages).
  - For `pdpe1gb`:
    - Add `default_hugepagesz=1G hugepagesz=1G hugepages=8 transparent_hugepage=never` to `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub`; this will reserve 8 GB of huge pages.
  - For `pse`:
    - Add `default_hugepagesz=2M hugepagesz=2M hugepages=4096 transparent_hugepage=never` to `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub`; this does the same thing as above.
  - Run `sudo update-grub` when finished.
  - Reboot.
  - `ls /dev | grep hugepages` for verification.
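After rebooting, `/proc/meminfo` is another quick way to confirm the reservation took effect (these counters exist on any hugepage-capable kernel):

```shell
# HugePages_Total should match the hugepages= value you set in GRUB,
# and Hugepagesize should be 1048576 kB (1G) or 2048 kB (2M).
grep -E 'HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo
```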
- Get your Subsystem ID (SSID) and Subsystem Vendor ID (SVID):
  - Run `optirun lspci -nnk -s 01:00.0`; you will get output like this:

    ```
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1bb6] (rev a1)
            Subsystem: Dell Device [1028:07b1]
            Kernel driver in use: nvidia
            Kernel modules: nvidiafb, nouveau, nvidia_384_drm, nvidia_384
    ```

  - Here, `1028` is the SVID and `07b1` is the SSID. We will use them later.
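The decimal conversion needed later for the vfio-pci options can be done with `printf`; `1028`/`07b1` are the example IDs from above:

```shell
# Convert the hex SVID/SSID reported by lspci into the decimal form
# that QEMU's x-pci-sub-vendor-id / x-pci-sub-device-id options expect.
printf '%d\n' 0x1028   # SVID -> 4136
printf '%d\n' 0x07b1   # SSID -> 1969
```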
- Set up the VM:
  - Note: the commands here just serve as a reference; check the QEMU documentation for more detail.
  - Note: I personally don't prefer libvirt, as editing XML is annoying to me. Use libvirt if you like it, and use `virsh domxml-from-native qemu-argv xxx.sh` to convert a QEMU launch script to libvirt XML. Refer here for more information.
  - Create a disk for your VM: `qemu-img create -f raw WindowsVM.img 75G`
  - Install `iptables` and `tunctl` if you don't have them.
  - Create two scripts for tap networking:
    - tap_ifup (check the file below in this gist)
    - tap_ifdown (check the file below in this gist)
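The two files are attached further down in the gist. In case you are reading this text on its own, a minimal sketch of the pair could look like the following (assumptions: the `192.168.99.0/24` subnet used by the later examples, with the host NATing the VM's traffic out):

```shell
# Writes sketch versions of tap_ifup/tap_ifdown to the current directory.
# QEMU invokes each script with the tap device name as $1.
cat > tap_ifup <<'EOF'
#!/bin/sh
# Bring the tap device up and give the host an address on the VM subnet
ip addr add 192.168.99.1/24 dev "$1"
ip link set "$1" up
# Let the host forward and NAT the VM's traffic to the outside world
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.99.0/24 -j MASQUERADE
EOF

cat > tap_ifdown <<'EOF'
#!/bin/sh
# Remove the NAT rule and take the tap device down again
iptables -t nat -D POSTROUTING -s 192.168.99.0/24 -j MASQUERADE
ip link set "$1" down
EOF

chmod +x tap_ifup tap_ifdown
```

Prefer the files shipped with the gist if you have them; this is only a fallback.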
  - Use `dpkg -L ovmf` to locate your `OVMF_VARS.fd` file, copy it to the directory where you store your VM image, then rename it to `WIN_VARS.fd` (or another name you like).
  - Create a QEMU launch script:
    - Recall that our GPU has SVID `1028` and SSID `07b1`. Convert these two hexadecimal values to decimal: `4136` for the SVID and `1969` for the SSID. Use these two values to set the corresponding vfio-pci options (see the script below).
      - This solves the "SSID/SVID all zero" problem inside the VM.

    ```sh
    #!/bin/bash
    # Use the command below to generate a MAC address:
    # printf '52:54:BE:EF:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256))
    qemu-system-x86_64 \
        -name "Windows10-QEMU" \
        -machine type=q35,accel=kvm \
        -global ICH9-LPC.disable_s3=1 \
        -global ICH9-LPC.disable_s4=1 \
        -cpu host,kvm=off,hv_vapic,hv_relaxed,hv_spinlocks=0x1fff,hv_time,hv_vendor_id=12alphanum \
        -smp 8,sockets=1,cores=4,threads=2 \
        -m 8G \
        -mem-path /dev/hugepages \
        -mem-prealloc \
        -balloon none \
        -rtc clock=host,base=localtime \
        -vnc 127.0.0.1:1 \
        -device qxl-vga,bus=pcie.0,addr=1c.2 \
        -vga none \
        -nographic \
        -serial none \
        -parallel none \
        -k en-us \
        -usb -usbdevice tablet \
        -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
        -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,x-pci-sub-device-id=1969,x-pci-sub-vendor-id=4136,multifunction=on,romfile=MyGPU.rom \
        -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
        -drive if=pflash,format=raw,file=WIN_VARS.fd \
        -boot menu=on \
        -boot order=c \
        -drive id=disk0,if=virtio,cache=none,format=raw,file=WindowsVM.img \
        -drive file=windows10.iso,index=1,media=cdrom \
        -drive file=virtio-win-0.1.141.iso,index=2,media=cdrom \
        -netdev type=tap,id=net0,ifname=tap0,script=tap_ifup,downscript=tap_ifdown,vhost=on \
        -device virtio-net-pci,netdev=net0,addr=19.0,mac=<address you generate>
    ```
- Bind your dGPU to the vfio-pci driver: `echo "10de 1bb6" > "/sys/bus/pci/drivers/vfio-pci/new_id"`
- Run the QEMU launching script
- Install your Windows system through the host-side VNC (`127.0.0.1:5901`).
- Add `192.168.99.0/24` to your Windows VM firewall exceptions:
  - In `Control Panel\System and Security\Windows Defender Firewall`, click `Advanced settings` in the right panel, then `Inbound Rules` -> `New Rule`.
  - Make sure you can `ping` your VM from the host.
- Enable remote desktop in the Windows VM:
  - Right-click `This PC`, then click `Remote settings` in the right panel.
- Verify that your GPU has the correct hardware ID: `Device Manager` -> double-click your dGPU -> `Details` tab -> `Hardware Ids`.
  - For me, it's `PCI\VEN_10DE&DEV_1BB6&SUBSYS_07B11028`.
  - In some cases, you will find your dGPU as a `Video controller (VGA compatible)` under `Unknown Device` before you install the Nvidia driver.
- Install the official Nvidia driver.
  - If everything goes smoothly, you will now be able to see your GPU in the `Performance` tab of `Task Manager`.
- Post-VM-shutdown operations:
  - Unbind your dGPU from the vfio-pci driver: `echo "0000:01:00.0" > "/sys/bus/pci/drivers/vfio-pci/0000:01:00.0/driver/unbind"`
  - Power off your dGPU: `echo "OFF" >> /proc/acpi/bbswitch`
  - Run `optirun nvidia-smi` for verification.
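The bind/unbind steps above can be collected into one small helper. This is a sketch: `0000:01:00.0` and `10de 1bb6` are the example address and IDs from this guide, and the echoes need root.

```shell
#!/bin/sh
# Sketch of the dGPU hand-over cycle described above.
# Substitute your own PCI address and vendor/device IDs.
GPU=0000:01:00.0
GPU_IDS="10de 1bb6"

bind_vfio() {
    # Hand the dGPU to vfio-pci before launching the VM
    echo "$GPU_IDS" > /sys/bus/pci/drivers/vfio-pci/new_id
}

unbind_vfio() {
    # After VM shutdown: release the card, then power it off again
    echo "$GPU" > "/sys/bus/pci/drivers/vfio-pci/$GPU/driver/unbind"
    echo OFF >> /proc/acpi/bbswitch
}

case "$1" in
    bind)   bind_vfio ;;
    unbind) unbind_vfio ;;
    *)      echo "usage: $0 bind|unbind" ;;
esac
```

Call it as `sudo ./vfio-handover.sh bind` before starting the VM and `sudo ./vfio-handover.sh unbind` after it shuts down (the filename is, of course, up to you).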
Configure RemoteFX
- Run `gpedit.msc` through `Win+R`.
- Navigate to `Computer Configuration` -> `Administrative Templates` -> `Windows Components` -> `Remote Desktop Services` -> `Remote Desktop Session Host` -> `Remote Session Environment`:
  - Enable `Use advanced RemoteFX graphics for RemoteApp`
  - (Optional) Enable `Configure image quality for RemoteFX Adaptive Graphics`, and set it to `High`
  - Enable `Enable RemoteFX encoding for RemoteFX clients designed for Windows Server 2008 R2 SP1`
  - Enable `Configure compression for RemoteFX data`, and set it to `Do not use an RDP compression algorithm`
    - Connection compression adds extra encoding/decoding latency; we don't want that.
- Navigate to `Computer Configuration` -> `Administrative Templates` -> `Windows Components` -> `Remote Desktop Services` -> `Remote Desktop Session Host` -> `Remote Session Environment` -> `RemoteFX for Windows Server 2008 R2`:
  - Enable `Configure RemoteFX`
  - (Optional) Enable `Optimize visual experience when using RemoteFX`, and set both options to `Highest`.
FreeRDP client configuration:
- Make sure you have FreeRDP 2.0 (do NOT use Remmina from the Ubuntu official PPA).
  - Compile it yourself or get a nightly build from here.
- Get your Windows VM's IP address (or assign a static one); here we use `192.168.99.2` as an example.
- Connect with:

  ```sh
  xfreerdp /v:192.168.99.2:3389 /w:1600 /h:900 /bpp:32 +clipboard +fonts \
      /gdi:hw /rfx /rfx-mode:video /sound:sys:pulse +menu-anims +window-drag
  ```

- Refer here for more detail.
Lifting 30-ish fps restriction:
- Start Registry Editor.
- Locate and then click the following registry subkey: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations`
- On the Edit menu, click New, and then click DWORD(32-bit) Value.
- Type DWMFRAMEINTERVAL, and then press Enter.
- Right-click DWMFRAMEINTERVAL, click Modify.
- Click Decimal, type 15 in the Value data box, and then click OK. This sets the maximum frame rate to 60 frames per second (FPS).
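The same tweak can be applied without clicking through the editor by importing a .reg fragment that mirrors the steps above (a sketch; double-check the path against your system before importing):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations]
"DWMFRAMEINTERVAL"=dword:0000000f
```

Here `0x0f` is 15 in decimal, matching the value entered manually above.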
Verifying codec usage and fine-tuning your frame rate:
- Bring up your task manager. If a simple Start menu pop-up animation (Windows 10) consumes 40+ Mbps, then you are NOT using the RemoteFX codec but just vanilla RDP. At 1600x900, the Start menu pop-up animation should consume less than 25 Mbps of bandwidth, while a 1600x900 Heaven benchmark consumes less than 170 Mbps at peak.
- Fire up a benchmark like Unigine Heaven in the VM and check whether your dGPU can stably maintain higher than 90~95% utilization. If not, turn down your resolution and try again. You will find a sweet spot that suits your hardware.
- For those who don't care much about image quality, try adding the `/gfx-h264:AVC444` option to your FreeRDP command. This uses RDP 8.1 with the H.264 4:4:4 codec, which consumes only 20~30-ish Mbps of bandwidth even when running a full-window Heaven benchmark, but the artifacts this codec brings are more than noticeable.
For gaming:
- A 1600x900 or lower resolution RFX connection is recommended for most Core i7 laptops.
- A 1080p connection with the game running in 1600x900 windowed mode has the same performance as above.
For other tasks:
- Tasks that are more GPU-compute intensive (and do their work asynchronously from display updates) will not be bottlenecked by the CPU, so you can choose a higher resolution like 1080p.
An external display requires a BIOS setting that is rarely seen on Optimus laptops.
- For some Dell laptops (such as mine), there is a `Display port direct output mode` option in `Video` -> `Switchable Graphics`; enabling it assigns all display ports (mDP, HDMI, Thunderbolt, etc.) directly to the dGPU. Check whether your BIOS offers similar options.
  - To also get audio output from the display port, follow this guide and this reference script. Thanks to Verequies from Reddit.
- However, you will lose the ability to extend your host machine's display, as there is then no display output port connected to the iGPU, i.e. your host.
- While RemoteFX compresses the image in exchange for performance (not good if you require extreme image quality for professional use), this problem doesn't exist for the external display setup, as it hooks up the dGPU directly.
Well, except for laptops that use MXM graphics cards, the vBIOS of an onboard graphics card is actually part of the system BIOS.
- For the record, I did succeed without the `romfile` option, but there is no guarantee for this approach.
- For MXM graphics cards, try using nvflash instead of GPU-Z. (In Windows) Disable your dGPU in Device Manager and run `nvflash -6 xxx.rom` with privileges; this will extract your vBIOS as xxx.rom (this is the way I did it). Try a different version of nvflash if you fail.
- For onboard GPUs:
  - Put the AFUDOS.EXE provided by Intel on a DOS-bootable USB device, then use it to extract your entire BIOS (a .rom file).
  - Then boot into Windows and use PhoenixTool (or other similar tools) to extract the modules contained in that BIOS.
    - Note that those extracted modules will have weird names, so you can't be sure which one belongs to your onboard graphics card.
  - Finally, use a vBIOS tweaker (MaxwellBiosTweaker, Mobile Pascal Tweaker, or an equivalent) to find out which module is your vBIOS.
    - Simply drag the module ROMs into the tweaker. Module ROMs that are not a vBIOS will be displayed as an unsupported device, while a vBIOS (typically around 50~300 KB in size) will be read successfully and show its information, like device ID and vendor ID.
    - Manufacturers tend to include several vBIOSes for generic purposes. Be sure to find the correct vBIOS with the same device ID as the one shown in Device Manager.
  - Disclaimer: I just know that you could use this method to extract the vBIOS of onboard graphics in the old days. Laptop BIOSes may vary, however, and I am not sure whether the extraction process will go smoothly, or whether the extracted and identified vBIOS ROM can be used in QEMU without any problem.
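As a hedged alternative to the BIOS-module route, Linux can sometimes dump the ROM straight from sysfs. This frequently fails or returns garbage on Optimus machines (which is exactly why the extraction methods above exist); `0000:01:00.0` is the example address from this guide, and the function must run as root with the card powered on and unclaimed:

```shell
#!/bin/sh
# Sketch: try dumping the vBIOS through the PCI device's sysfs rom file.
dump_vbios() {
    dev="/sys/bus/pci/devices/${1:-0000:01:00.0}"
    echo 1 > "$dev/rom"            # enable ROM reads
    cat "$dev/rom" > vbios.rom     # copy the ROM out
    echo 0 > "$dev/rom"            # disable ROM reads again
}
```

Invoke it as `dump_vbios 0000:01:00.0`, then verify the result with one of the tweakers above before feeding it to QEMU's `romfile=` option.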
I have never owned a laptop with an AMD CPU myself, but it is worth trying. Don't forget to share your experience.
I know nothing about dGPUs from the red team.
As of now, no. GVT-g can run on a Q35 or PC machine, but only with SeaBIOS; it won't boot with OVMF.
Though passing a dGPU to a GVT-g VM is possible, the dGPU will report Code 12 with "not enough resources" inside the VM. I don't know why.
Barebone laptops with desktop CPUs already have their iGPU disabled in a way you cannot revert (as far as I know), and can only use their dGPU to render the display. Thus there will be no display if you pass it to your VM.
For those barebone laptops that have two dGPUs, passing one to your VM sounds quite possible. Though, be sure to take extra care if you have two identical dGPUs. Check here for more detail.
Try Nvidia GameStream with the Moonlight client, or Parsec. Or just pick whatever works for you.
For a RemoteFX connection, only windowed games work; full screen triggers the d3d11 0x087A0001 "cannot set resolution" problem. Media players are not affected by this.
XPS-15 9560 Getting Nvidia To Work on KDE Neon
Hexadecimal to Decimal Converter
PCI passthrough via OVMF - Arch Wiki
Frame rate is limited to 30 FPS in Windows 8 and Windows Server 2012 remote sessions

