  • Proxmox Guide: Adding a Windows 11 VM with GPU Pass-through

    Getting Started

    Ensure you have the Windows 11 ISO and the latest VirtIO drivers ISO available in Proxmox. The easiest way is to open a storage pool on your node in the Proxmox GUI and select the “Download from URL” option, then paste each file’s URL so Proxmox downloads it directly for later use.
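
    If you prefer the shell, the same step looks roughly like this (a minimal sketch, assuming the default “local” storage, which keeps ISOs in /var/lib/vz/template/iso; the VirtIO URL below is the upstream stable link, and the Windows filename is just an example):

    cd /var/lib/vz/template/iso
    # VirtIO drivers ISO (stable channel)
    wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
    # Windows 11 ISO: download from Microsoft's site, then place it here, e.g. as Win11.iso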

    Create the VM

    Select “Create VM” in the Proxmox GUI.

    On the “General” tab, give the VM a memorable name.

    On the “OS” tab, locate and select the Windows 11 ISO downloaded previously. Under “Guest OS,” change “Type” to “Microsoft Windows” and “Version” to “11/2022/2025.” You’ll need to select “Add additional drive for VirtIO drivers,” then locate and select the VirtIO drivers ISO downloaded previously.

    On the “System” tab, ensure “Machine” type “q35” is selected. I recommend changing the “SCSI Controller” option to “VirtIO SCSI.” Check the box for “Qemu Agent” if you want Proxmox to be able to query the VM from the GUI (you’ll still need to install the QEMU guest agent inside Windows later). Ensure both “Add TPM” and “Add EFI Disk” are selected, and choose a storage location for those disks. The EFI disk stores the VM’s UEFI boot variables, and the TPM state disk stores the virtual TPM data that Windows 11 requires.

    On the “Disks” tab, choose an appropriate amount of disk space for your use case. While you can increase this later, it is easier to start with a larger drive as increasing storage on an existing Windows VM requires additional steps within Windows itself.

    For cache, I generally choose “Default (no cache),” based on guidance from Proxmox: https://pve.proxmox.com/wiki/Performance_Tweaks

    On the “CPU” tab, select an appropriate socket and core count for your use case. Generally, 1 socket and 8-10 cores is sufficient for most purposes. I generally set “Type” to “host” so Windows sees the true processor brand and model instead of a generic virtual processor.

    On the “Memory” tab, choose an appropriate amount of RAM for your use case. Enter the amount in MiB, i.e. 1 GB = 1024 MiB; 8 GB = 8192 MiB.

    On the “Network” tab, the default settings should be sufficient for most use cases. Add a VLAN tag if needed for your use case.

    On the “Confirm” tab, double-check your machine’s details and click “Finish.” I usually leave “Start after created” unchecked, since Windows Setup has to be kicked off manually by pressing a key in the VM console when prompted.
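
    For reference, the whole GUI flow above can also be done from the node’s shell with a single command. A sketch, not a drop-in: it assumes VM ID 100, “local-lvm” for disk storage, “local” for ISOs, and example ISO filenames; adjust all of these to your environment:

    qm create 100 --name win11 --ostype win11 \
      --machine q35 --bios ovmf \
      --cpu host --sockets 1 --cores 8 --memory 8192 \
      --scsihw virtio-scsi-pci --scsi0 local-lvm:64 \
      --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
      --tpmstate0 local-lvm:1,version=v2.0 \
      --net0 virtio,bridge=vmbr0 \
      --ide2 local:iso/Win11.iso,media=cdrom \
      --ide0 local:iso/virtio-win.iso,media=cdrom \
      --agent 1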

  • Proxmox Guide: GPU Passthrough

    1. Configure GRUB

    a. Open the GRUB configuration file:

    nano /etc/default/grub

    b. Look for this line:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet"

    c. Update it based on the CPU type:

    For Intel CPUs:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

    For AMD CPUs:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

    Additional parameters may be necessary depending on your hardware. For example, iommu=pt restricts IOMMU translation to pass-through devices, pcie_acs_override=downstream,multifunction can split devices into separate IOMMU groups on boards that lump them together, and nofb nomodeset video=vesafb:off,efifb:off keep the host from claiming the GPU’s framebuffer at boot:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"

    After saving, update GRUB. (If your host boots with systemd-boot instead, as on ZFS-on-root installs, add the parameters to /etc/kernel/cmdline and run proxmox-boot-tool refresh.)

    update-grub
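
    Once the host has been rebooted at the end of this guide, you can confirm that IOMMU is active. A quick check (the exact wording varies by platform):

    dmesg | grep -e DMAR -e IOMMU

    Look for a line such as “DMAR: IOMMU enabled” on Intel or “AMD-Vi: Found IOMMU” on AMD.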

    2. Update VFIO Modules

    Edit the modules file:

    nano /etc/modules

    Add the following to the file:

    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd

    Save and exit. (On newer kernels, such as the 6.x kernels shipped with Proxmox VE 8, vfio_virqfd has been merged into the core vfio module and can be omitted.)

    3. IOMMU Interrupt Remapping

    Run the following commands. The first lets VFIO operate on platforms without proper interrupt remapping support; the second tells KVM to ignore guest reads of unimplemented model-specific registers, which Windows and some GPU drivers trip over:

    echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
    echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

    4. Blacklist Drivers

    Blacklisting the drivers ensures that the Proxmox host doesn’t try to load the drivers and initialize the device.

    echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
    echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
    echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf

    5. Add GPU to VFIO

    Run this command:

    lspci -v

    The shell will output the details of each PCI device available on the host. Find the PCI address that corresponds to the GPU you want to pass through. It should look something like this:

    01:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1) (prog-if 00 [VGA controller])
    01:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)
    01:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1) (prog-if 30 [XHCI])
    01:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

    (Note, the GPU used in this example has additional device IDs for the USB controller and ports. Some may only have IDs for the video and audio devices.)

    Make note of the first set of numbers (e.g. 01:00.0 and 01:00.1). We’ll need them for the next step.

    Run the command below. Replace 01:00 with whatever number was next to your GPU when you ran the previous command:

    lspci -n -s 01:00

    Doing this should output your GPU’s vendor:device ID pairs, usually one for the video function and one for the audio function. It will look like this:

    01:00.0 0300: 10de:1e04 (rev a1)
    01:00.1 0403: 10de:10f7 (rev a1)
    01:00.2 0c03: 10de:1ad6 (rev a1)
    01:00.3 0c80: 10de:1ad7 (rev a1)

    Next, assign these IDs to the vfio-pci driver:

    echo "options vfio-pci ids=10de:1e04,10de:10f7,10de:1ad6,10de:1ad7 disable_vga=1"> /etc/modprobe.d/vfio.conf

    (Note, if you’re only passing through one GPU, and the system isn’t using any other VGA-compatible device, disable_vga=1 may not be necessary. In most homelab setups where you leave an iGPU for the host and pass a dGPU to a VM, using disable_vga=1 is a good idea.)

    Next, run this command:

    update-initramfs -u

    Finally, reboot the host.

    reboot
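
    After the reboot, you can confirm the card is bound to vfio-pci (assuming the 01:00 address from the earlier example):

    lspci -nnk -s 01:00

    Each function should report “Kernel driver in use: vfio-pci”.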

    The system is now ready to pass through the dGPU to a Windows or Linux-based VM.

    Follow these guides to add your GPU to a Windows or Ubuntu VM in Proxmox.
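
    As a preview, attaching the card from the shell looks something like this (a sketch, assuming VM ID 100 and the 01:00 address above; pcie=1 requires the q35 machine type, and x-vga=1 is used when the GPU should be the guest’s primary display):

    qm set 100 --hostpci0 01:00,pcie=1,x-vga=1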

    Sources:

    https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/