
@thimslugga
Forked from Impact123/Proxmox VE Tips.md
Created September 4, 2025 12:20

Revisions

  1. @Impact123 revised this gist Sep 4, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Proxmox VE Tips.md
    Original file line number Diff line number Diff line change
    @@ -1207,5 +1207,5 @@ localectl
    File based disks such as `.qcow2`, `.raw` and so on can have some issues.
    PVE does not enable the [`Content Type`s](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_directory) of the `local` storage to store such files by default.

    - [They are slow and inefficient](https://bugzilla.proxmox.com/show_bug.cgi?id=6140).
    - [They can be slow and inefficient](https://bugzilla.proxmox.com/show_bug.cgi?id=6140).
    - CTs only support `.raw` files [which provide no snapshot ability](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_storage_types).
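
     The FAQ entry above notes that `local` doesn't have the content types for guest disk images enabled by default. For illustration, this is roughly what a directory storage with the `images` content type looks like in `/etc/pve/storage.cfg` (a sketch of the default layout, not something the gist prescribes; prefer editing via `Datacenter > Storage` in the GUI):

     ```
     # /etc/pve/storage.cfg — directory storage with the "images" content type added
     dir: local
             path /var/lib/vz
             content iso,vztmpl,backup,images
     ```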
  2. @Impact123 revised this gist Sep 3, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions Proxmox VE Tips.md
    @@ -87,7 +87,7 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [PVE 9 / Debian 13](#pve-9--debian-13)
    - [Fix locales](#fix-locales)
    - [FAQ](#faq)
    - [Why not use `local` for guest disks](#why-not-use-local-for-guest-disks)
    - [Why not use `local` for guest disks?](#why-not-use-local-for-guest-disks)

    ## Discard

    @@ -1203,7 +1203,7 @@ localectl


    ### FAQ
    #### Why not use `local` for guest disks
    #### Why not use `local` for guest disks?
    File based disks such as `.qcow2`, `.raw` and so on can have some issues.
    PVE does not enable the [`Content Type`s](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_directory) of the `local` storage to store such files by default.

  3. @Impact123 revised this gist Sep 3, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Proxmox VE Tips.md
    @@ -1205,7 +1205,7 @@ localectl
    ### FAQ
    #### Why not use `local` for guest disks
    File based disks such as `.qcow2`, `.raw` and so on can have some issues.
    PVE does not enable the []`Content Type`s](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_directory) of the `local` storage to store such files by default.
    PVE does not enable the [`Content Type`s](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_directory) of the `local` storage to store such files by default.

    - [They are slow and inefficient](https://bugzilla.proxmox.com/show_bug.cgi?id=6140).
    - CTs only support `.raw` files [which provide no snapshot ability](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_storage_types).
  4. @Impact123 revised this gist Sep 3, 2025. 1 changed file with 12 additions and 4 deletions.
    16 changes: 12 additions & 4 deletions Proxmox VE Tips.md
    @@ -86,6 +86,8 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [PVE 8 / Debian 12](#pve-8--debian-12)
    - [PVE 9 / Debian 13](#pve-9--debian-13)
    - [Fix locales](#fix-locales)
    - [FAQ](#faq)
    - [Why not use `local` for guest disks](#why-not-use-local-for-guest-disks)

    ## Discard

    @@ -114,7 +116,6 @@ Alternatively you can select `discard` (`8.3.x`+) as mount option so this happen
    **You do not need to enable this for `pct fstrim` to work**.
    Use the mount option when you want it to be immediate/continuous and the `pct fstrim` cronjob to trigger it on a schedule like it usually works for VMs. I prefer the latter.

    This setting can be changed by editing the disk(s) in the CT's `Resources` tab.
    ![image](https://gist.github.com/user-attachments/assets/1de6263c-28d2-4ab3-92c6-324d3c5f310d)

    ### VM
    @@ -128,8 +129,6 @@ To edit it to happen more frequently run `systemctl edit fstrim.timer`.
    If you would like a drive to be presented to the guest as a solid-state drive rather than a rotational hard disk, you can set the SSD emulation option on that drive. There is no requirement that the underlying storage actually be backed by SSDs; this feature can be used with physical media of any type.

    For above to work the disk(s) should have the `Discard` flag set.

    This setting can be changed by editing the disk(s) in the VM's `Hardware` tab.
    ![image](https://gist.github.com/user-attachments/assets/6a7fd22f-b848-49ec-b535-bf0e7713b8a4)

    If you use the Guest Agent (which you really should) I'd also recommend enabling this under `Options > QEMU Guest Agent`.
    @@ -1090,7 +1089,7 @@ watch -cd -n1 "zpool iostat -yr 1 1"


    ### Set up no-subscription apt repositories
    With PVE 9 / Debian 13 the file suffix can now also be `.sources` rather than just `.list` so don't get confused by that.
    With PVE 9 / Debian 13 the file suffix can now also be `.sources` so don't get confused by that.

    Also see official docs:
    - <https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_no_subscription_repo>
    @@ -1201,3 +1200,12 @@ Verify with these
    locale
    localectl
    ```


    ### FAQ
    #### Why not use `local` for guest disks
    File based disks such as `.qcow2`, `.raw` and so on can have some issues.
    PVE does not enable the []`Content Type`s](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_directory) of the `local` storage to store such files by default.

    - [They are slow and inefficient](https://bugzilla.proxmox.com/show_bug.cgi?id=6140).
    - CTs only support `.raw` files [which provide no snapshot ability](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_storage_types).
  5. @Impact123 revised this gist Sep 3, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Proxmox VE Tips.md
    @@ -1090,7 +1090,7 @@ watch -cd -n1 "zpool iostat -yr 1 1"


    ### Set up no-subscription apt repositories
    With PVE 9 / Debian 13 the file suffix can now also be `.sources` so don't get confused by that.
    With PVE 9 / Debian 13 the file suffix can now also be `.sources` rather than just `.list` so don't get confused by that.

    Also see official docs:
    - <https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_no_subscription_repo>
  6. @Impact123 revised this gist Sep 2, 2025. 1 changed file with 3 additions and 0 deletions.
    3 changes: 3 additions & 0 deletions Proxmox VE Tips.md
    @@ -114,6 +114,7 @@ Alternatively you can select `discard` (`8.3.x`+) as mount option so this happen
    **You do not need to enable this for `pct fstrim` to work**.
    Use the mount option when you want it to be immediate/continuous and the `pct fstrim` cronjob to trigger it on a schedule like it usually works for VMs. I prefer the latter.

    This setting can be changed by editing the disk(s) in the CT's `Resources` tab.
    ![image](https://gist.github.com/user-attachments/assets/1de6263c-28d2-4ab3-92c6-324d3c5f310d)

    ### VM
    @@ -127,6 +128,8 @@ To edit it to happen more frequently run `systemctl edit fstrim.timer`.
    If you would like a drive to be presented to the guest as a solid-state drive rather than a rotational hard disk, you can set the SSD emulation option on that drive. There is no requirement that the underlying storage actually be backed by SSDs; this feature can be used with physical media of any type.

    For above to work the disk(s) should have the `Discard` flag set.

    This setting can be changed by editing the disk(s) in the VM's `Hardware` tab.
    ![image](https://gist.github.com/user-attachments/assets/6a7fd22f-b848-49ec-b535-bf0e7713b8a4)

    If you use the Guest Agent (which you really should) I'd also recommend enabling this under `Options > QEMU Guest Agent`.
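
     The `systemctl edit fstrim.timer` step mentioned in the hunk above writes a drop-in roughly like this (a sketch; `daily` is an example cadence, the unit's default is weekly — the empty `OnCalendar=` clears the default before setting the new one):

     ```
     # /etc/systemd/system/fstrim.timer.d/override.conf (created by `systemctl edit`)
     [Timer]
     OnCalendar=
     OnCalendar=daily
     ```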
  7. @Impact123 revised this gist Aug 30, 2025. 1 changed file with 48 additions and 47 deletions.
    95 changes: 48 additions & 47 deletions Proxmox VE Tips.md
    @@ -40,19 +40,19 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [Add devices](#add-devices)
    - [Install intel drivers/modules](#install-intel-driversmodules)
    - [CT](#ct-2)
    - [Install nvidia drivers/modules via .run file](#install-nvidia-driversmodules-via-run-file)
    - [Links and release notes](#links-and-release-notes)
    - [Download and install the .run file](#download-and-install-the-run-file)
    - [CT](#ct-3)
    - [VM](#vm-1)
    - [Node](#node)
    - [Install nvidia drivers/modules via apt](#install-nvidia-driversmodules-via-apt)
    - [Prerequisites](#prerequisites)
    - [Node / VM](#node--vm)
    - [CT](#ct-4)
    - [CT](#ct-3)
    - [Verify installation](#verify-installation)
    - [Post install](#post-install)
    - [Enable Persistence Daemon](#enable-persistence-daemon)
    - [Install nvidia drivers/modules via .run file](#install-nvidia-driversmodules-via-run-file)
    - [Links and release notes](#links-and-release-notes)
    - [Download and install the .run file](#download-and-install-the-run-file)
    - [CT](#ct-4)
    - [VM](#vm-1)
    - [Node](#node)
    - [Install and configure Docker Container Toolkit](#install-and-configure-docker-container-toolkit)
    - [ZFS tips](#zfs-tips)
    - [Check space usage and ratios](#check-space-usage-and-ratios)
    @@ -571,46 +571,6 @@ apt install -y va-driver-all ocl-icd-libopencl1 intel-opencl-icd vainfo intel-gp
    Validate with `vainfo`, `intel_gpu_top` and `nvtop`.


    ## Install nvidia drivers/modules via .run file
    This installation method gives you more control over the version but you have to update yourself.
    These commands should work for both the nodes, VMs and CTs as long as they are based on debian/ubuntu.
    This assumes you use the `root` user. **These command are to be run on the node/VM/CT. Copy & paste.**

    ### Links and release notes
    For datacenter (Some links are broken but you can google for the version)
    - <https://developer.nvidia.com/datacenter-driver-archive>
    - <https://docs.nvidia.com/datacenter/tesla/index.html>

    For linux/unix
    - <https://www.nvidia.com/en-us/drivers/unix/linux-amd64-display-archive/>
    - <https://www.nvidia.com/en-us/drivers/unix/>

    ### Download and install the .run file
    `<TAB>` here means pressing the `TAB` key to auto complete the file name.

    #### CT
    ```bash
    wget LINKFROMABOVEHERE
    chmod +x NVIDIA*.run
    ./NVIDIA<TAB> --no-kernel-modules
    ```

    #### VM
    ```bash
    wget LINKFROMABOVEHERE
    apt install -y linux-headers-generic gcc make dkms
    chmod +x NVIDIA*.run
    ./NVIDIA<TAB> --dkms
    ```

    #### Node
    ```bash
    wget LINKFROMABOVEHERE
    apt install -y pve-headers gcc make dkms
    chmod +x NVIDIA*.run
    ./NVIDIA<TAB> --dkms
    ```

    ## Install nvidia drivers/modules via apt
    This is my current recommendation for **PVE 9 / Debian 13**. [If you have to use PVE 8 or Debian 12 see older version of this guide](https://gist.github.com/Impact123/3dbd7e0ddaf47c5539708a9cbcaab9e3/79c02ab9654ae368a60d9ff23fec147dc59d82c8#install-nvidia-driversmodules-via-apt).
    It's a simpler method as it uses packages straight from the debian repos. They might be a bit older but this should be fine and it makes installation simpler.
    @@ -688,6 +648,47 @@ You can see the status in `nvidia-smi`.
    ![image](https://gist.github.com/user-attachments/assets/e92eb823-470b-43f4-8e02-d962e749b27c)


    ## Install nvidia drivers/modules via .run file
    This alternative to the apt installation method gives you more control over the version but you have to update yourself.
    These commands should work for both the nodes, VMs and CTs as long as they are based on debian/ubuntu.
    This assumes you use the `root` user. **These command are to be run on the node/VM/CT. Copy & paste.**

    ### Links and release notes
    For datacenter (Some links are broken but you can google for the version)
    - <https://developer.nvidia.com/datacenter-driver-archive>
    - <https://docs.nvidia.com/datacenter/tesla/index.html>

    For linux/unix
    - <https://www.nvidia.com/en-us/drivers/unix/linux-amd64-display-archive/>
    - <https://www.nvidia.com/en-us/drivers/unix/>

    ### Download and install the .run file
    `<TAB>` here means pressing the `TAB` key to auto complete the file name.

    #### CT
    ```bash
    wget LINKFROMABOVEHERE
    chmod +x NVIDIA*.run
    ./NVIDIA<TAB> --no-kernel-modules
    ```

    #### VM
    ```bash
    wget LINKFROMABOVEHERE
    apt install -y linux-headers-generic gcc make dkms
    chmod +x NVIDIA*.run
    ./NVIDIA<TAB> --dkms
    ```

    #### Node
    ```bash
    wget LINKFROMABOVEHERE
    apt install -y pve-headers gcc make dkms
    chmod +x NVIDIA*.run
    ./NVIDIA<TAB> --dkms
    ```


    ### Install and configure Docker Container Toolkit
    **These command are to be run inside a CT. Copy & paste.**
    In case you run docker containers inside a CT. Adapted from [the official guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
  8. @Impact123 revised this gist Aug 30, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Proxmox VE Tips.md
    @@ -679,7 +679,7 @@ Now see if `nvidia-smi` works. A reboot might be necessary for the node or a VM.
    ### Post install
    #### Enable Persistence Daemon
    This can help save power and decrease access delays. [See docs](https://download.nvidia.com/XFree86/Linux-x86_64/396.51/README/nvidia-persistenced.html).
    **These command are to be run on the node or VM. Copy & paste.**
    **These commands are to be run on the node or VM. Copy & paste.**
    Enable and start it with
    ```bash
    systemctl enable --now nvidia-persistenced.service
  9. @Impact123 revised this gist Aug 30, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Proxmox VE Tips.md
    @@ -206,7 +206,7 @@ Also see here: <https://pve.proxmox.com/pve-docs/pve-admin-guide.html#nomodeset_

    When passing through devices it can sometimes happen that your device shares an IOMMU group with something else that's important.
    It's also possible that groups shift if you exchange a device. All of this can cause a system to become unbootable.
    If editing the boot arguments doesn't help, the simplest fix is to go into the UEFI/BIOS and disable every virtualization related thing. VT-x/VT-d/SVM/ACS/IOMMU or whatever it's called for you.
    If [editing the boot arguments](#temporary-kernel-arguments) doesn't help, the simplest fix is to go into the UEFI/BIOS and disable every virtualization related thing. VT-x/VT-d/SVM/ACS/IOMMU or whatever it's called for you.

    ## Passthrough tips
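
     To see whether a device shares an IOMMU group as described in the hunk above, a small helper like this can be run on the node (a sketch; the function name and the optional root argument are mine, added so it can be tried against a fixture directory — it prints nothing if IOMMU is disabled):

     ```shell
     # List every IOMMU group and the PCI devices in it, so you can spot a
     # device that shares a group with something important.
     list_iommu_groups() {
         base="${1:-/sys/kernel/iommu_groups}"
         for dev in "$base"/*/devices/*; do
             [ -e "$dev" ] || continue       # no groups -> print nothing
             group="${dev#"$base"/}"
             group="${group%%/*}"
             printf 'IOMMU group %s: %s\n' "$group" "$(basename "$dev")"
         done
     }

     list_iommu_groups
     ```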

  10. @Impact123 revised this gist Aug 25, 2025. 1 changed file with 4 additions and 0 deletions.
    4 changes: 4 additions & 0 deletions Proxmox VE Tips.md
    @@ -1088,6 +1088,10 @@ watch -cd -n1 "zpool iostat -yr 1 1"
    ### Set up no-subscription apt repositories
    With PVE 9 / Debian 13 the file suffix can now also be `.sources` so don't get confused by that.

    Also see official docs:
    - <https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_no_subscription_repo>
    - <https://pve.proxmox.com/pve-docs/pve-admin-guide.html#repos_secure_apt>

    #### GUI
    Go to `node > Updates > Repositories` and add the `no-subscription` repo.
    ![image](https://gist.github.com/user-attachments/assets/f6386651-f457-4f48-a22f-4bcd245ba899)
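
     For reference, on PVE 9 / Debian 13 such a `.sources` file uses the deb822 format; a sketch of a no-subscription entry (the file name is an example — verify the suite name and keyring path against the official docs linked above):

     ```
     # /etc/apt/sources.list.d/proxmox.sources
     Types: deb
     URIs: http://download.proxmox.com/debian/pve
     Suites: trixie
     Components: pve-no-subscription
     Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
     ```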
  11. @Impact123 revised this gist Aug 24, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions Proxmox VE Tips.md
    @@ -1171,15 +1171,15 @@ Now check with `apt update` for errors.


    ### Fix locales
    Do you have strance characters in your CLI tools rather than unicode? The default `C` locale might be the cause.
    Do you have strange characters in your CLI tools rather than unicode symbols? The default `C` locale might be the cause.
    This is mostly useful for CTs. For VMs you generally set this up during install.

    To interactive change it you can use
    ```bash
    dpkg-reconfigure locales
    ```

    Non-interactively you can use something like this
    To non-interactively change it you can use something like this
    ```bash
    echo "en_US.UTF-8 UTF-8" > /etc/locale.gen
    echo 'LANG=en_US.UTF-8' > /etc/locale.conf
  12. @Impact123 revised this gist Aug 24, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions Proxmox VE Tips.md
    @@ -85,7 +85,7 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [CLI](#cli)
    - [PVE 8 / Debian 12](#pve-8--debian-12)
    - [PVE 9 / Debian 13](#pve-9--debian-13)
    - [Fix locales](#fix-locales)
    - [Fix locales](#fix-locales)

    ## Discard

    @@ -1170,7 +1170,7 @@ sed -i '/^#/!s/^/#/' /etc/apt/sources.list.d/pve-enterprise.sources
    Now check with `apt update` for errors.


    #### Fix locales
    ### Fix locales
    Do you have strance characters in your CLI tools rather than unicode? The default `C` locale might be the cause.
    This is mostly useful for CTs. For VMs you generally set this up during install.

  13. @Impact123 revised this gist Aug 24, 2025. 1 changed file with 25 additions and 0 deletions.
    25 changes: 25 additions & 0 deletions Proxmox VE Tips.md
    @@ -85,6 +85,7 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [CLI](#cli)
    - [PVE 8 / Debian 12](#pve-8--debian-12)
    - [PVE 9 / Debian 13](#pve-9--debian-13)
    - [Fix locales](#fix-locales)

    ## Discard

    @@ -1168,3 +1169,27 @@ sed -i '/^#/!s/^/#/' /etc/apt/sources.list.d/pve-enterprise.sources

    Now check with `apt update` for errors.


    #### Fix locales
    Do you have strance characters in your CLI tools rather than unicode? The default `C` locale might be the cause.
    This is mostly useful for CTs. For VMs you generally set this up during install.

    To interactive change it you can use
    ```bash
    dpkg-reconfigure locales
    ```

    Non-interactively you can use something like this
    ```bash
    echo "en_US.UTF-8 UTF-8" > /etc/locale.gen
    echo 'LANG=en_US.UTF-8' > /etc/locale.conf
    ln -sf /etc/locale.conf /etc/default/locale
    source /etc/locale.conf
    locale-gen
    ```

    Verify with these
    ```bash
    locale
    localectl
    ```
  14. @Impact123 revised this gist Aug 24, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions Proxmox VE Tips.md
    @@ -751,7 +751,7 @@ NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCH
    nvmezfs/subvol-120-disk-0 7.02G 23.0G 160K 23.0G 0B 0B 28.3G 30G
    ```

    Take note of `USED` and then simply set the `refquota` to what you want
    Take note of `USED` and then simply set the `refquota` to what you want. Don't set the quota too low or lower than `USED`.
    ```bash
    zfs set refquota=29G nvmezfs/subvol-120-disk-0
    ```
    @@ -763,7 +763,7 @@ rescan volumes...
    CT 120: updated volume size of 'nvmezfs:subvol-120-disk-0' in config.
    ```

    This works for growing it too, of course, but the GUI already provides that option.
    This works for growing it too, but the GUI already provides that option.

    ### Update ZFS ARC size

  15. @Impact123 revised this gist Aug 24, 2025. 1 changed file with 25 additions and 0 deletions.
    25 changes: 25 additions & 0 deletions Proxmox VE Tips.md
    @@ -57,6 +57,7 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [ZFS tips](#zfs-tips)
    - [Check space usage and ratios](#check-space-usage-and-ratios)
    - [Find old ZFS snapshots](#find-old-zfs-snapshots)
    - [Shrink a CT's disk](#shrink-a-cts-disk)
    - [Update ZFS ARC size](#update-zfs-arc-size)
    - [Validate](#validate)
    - [Adapt config](#adapt-config)
    @@ -740,6 +741,30 @@ It might make sense to add a ` | less` at the end if you have lots of snapshots.
    zfs list -ospace,logicalused,compression,compressratio,creation -rs creation -t snap
    ```

    ### Shrink a CT's disk
    Since CTs use datasets this is very trivial and should be reasonably safe but make sure to take backups.
    First grab some information about the CT (ID 120 in this example) you want to modify

    ```bash
    # zfs list -ospace,logicalused,refquota | grep -E "NAME|120"
    NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD LUSED REFQUOTA
    nvmezfs/subvol-120-disk-0 7.02G 23.0G 160K 23.0G 0B 0B 28.3G 30G
    ```

    Take note of `USED` and then simply set the `refquota` to what you want
    ```bash
    zfs set refquota=29G nvmezfs/subvol-120-disk-0
    ```

    Lastly run a `pct rescan`
    ```bash
    # pct rescan
    rescan volumes...
    CT 120: updated volume size of 'nvmezfs:subvol-120-disk-0' in config.
    ```

    This works for growing it too, of course, but the GUI already provides that option.

    ### Update ZFS ARC size

    Adapted from [the official documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_limit_memory_usage)
  16. @Impact123 revised this gist Aug 24, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Proxmox VE Tips.md
    @@ -855,7 +855,7 @@ Then delete if needed
    # qm set 500 --delete unused0
    ```

    Here's a little script to do all of this for you. It's not cluster aware at the moment. It will list unused disks for all guests but you cannot necessarily remove them via the command without being on the node they originate from.
    Here's a little script to do all of this for you. It only tells you the commands, not run them.
    ```bash
    find /etc/pve/ -name '[0-9]*.conf' | while read -r config; do
    [[ "$config" == *"/lxc/"* ]] && CMD="pct" || CMD="qm"
  17. @Impact123 revised this gist Aug 23, 2025. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions Proxmox VE Tips.md
    @@ -835,7 +835,7 @@ pct rescan

    Now find unused disks in the configs
    ```bash
    # grep -sR "^unused.*: " /etc/pve/
    # grep -sR "^unused[0-9]+: " /etc/pve/
    /etc/pve/nodes/pve/qemu-server/500.conf:unused0: nvmezfs:vm-500-disk-1
    ```

    @@ -847,15 +847,15 @@ Investigate their source

    Show all of their paths
    ```bash
    grep -sR "^unused.*: " /etc/pve/ | awk -F': ' '{print $2}' | xargs -I{} pvesm path {}
    grep -sR "^unused[0-9]+: " /etc/pve/ | awk -F': ' '{print $2}' | xargs -I{} pvesm path {}
    ```

    Then delete if needed
    ```bash
    # qm set 500 --delete unused0
    ```

    Here's a little script to do this for you. It's not cluster aware at the moment. It will list unused disks for all guests but you cannot remove them via the command without being on the node they originate from.
    Here's a little script to do all of this for you. It's not cluster aware at the moment. It will list unused disks for all guests but you cannot necessarily remove them via the command without being on the node they originate from.
    ```bash
    find /etc/pve/ -name '[0-9]*.conf' | while read -r config; do
    [[ "$config" == *"/lxc/"* ]] && CMD="pct" || CMD="qm"
  18. @Impact123 revised this gist Aug 23, 2025. 1 changed file with 4 additions and 2 deletions.
    6 changes: 4 additions & 2 deletions Proxmox VE Tips.md
    @@ -858,13 +858,15 @@ Then delete if needed
    Here's a little script to do this for you. It's not cluster aware at the moment. It will list unused disks for all guests but you cannot remove them via the command without being on the node they originate from.
    ```bash
    find /etc/pve/ -name '[0-9]*.conf' | while read -r config; do
    guest=$(basename $config .conf)
    [[ "$config" == *"/lxc/"* ]] && CMD="pct" || CMD="qm"

    guest=$(basename "$config" .conf)
    unused_lines=$(grep -E '^unused[0-9]+: ' "$config") || continue

    echo "$unused_lines" | while read -r line; do
    echo "# $line"
    disk=$(echo "$line" | awk -F':' '{print $1}')
    echo -e "qm set $guest --delete $disk\n"
    echo -e "$CMD set $guest --delete $disk\n"
    done
    done
    ```
  19. @Impact123 revised this gist Aug 23, 2025. 1 changed file with 13 additions and 6 deletions.
    19 changes: 13 additions & 6 deletions Proxmox VE Tips.md
    @@ -835,7 +835,7 @@ pct rescan

    Now find unused disks in the configs
    ```bash
    # grep -sR "unused" /etc/pve/
    # grep -sR "^unused.*: " /etc/pve/
    /etc/pve/nodes/pve/qemu-server/500.conf:unused0: nvmezfs:vm-500-disk-1
    ```

    @@ -852,14 +852,21 @@ grep -sR "^unused.*: " /etc/pve/ | awk -F': ' '{print $2}' | xargs -I{} pvesm pa

    Then delete if needed
    ```bash
    # pvesm free nvmezfs:vm-500-disk-1
    Removed volume 'nvmezfs:vm-500-disk-1'
    # qm set 500 --delete unused0
    ```

    Delete all of them
    **Extremely dangerous!** List them first as shown above
    Here's a little script to do this for you. It's not cluster aware at the moment. It will list unused disks for all guests but you cannot remove them via the command without being on the node they originate from.
    ```bash
    grep -sR "^unused.*: " /etc/pve/ | awk -F': ' '{print $2}' | xargs -I{} pvesm free {}
    find /etc/pve/ -name '[0-9]*.conf' | while read -r config; do
    guest=$(basename $config .conf)
    unused_lines=$(grep -E '^unused[0-9]+: ' "$config") || continue

    echo "$unused_lines" | while read -r line; do
    echo "# $line"
    disk=$(echo "$line" | awk -F':' '{print $1}')
    echo -e "qm set $guest --delete $disk\n"
    done
    done
    ```

    ### Restore guest configs
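
     The grep/awk step the unused-disk script above relies on can be tried in isolation; a minimal, self-contained sketch (the guest ID and disk names are made up, and it runs against a throwaway fixture rather than `/etc/pve`):

     ```shell
     # Extract the "unusedN" disk keys from a guest .conf the same way the
     # script does, so they can be passed to `qm set <id> --delete <key>`.
     list_unused_keys() {
         grep -E '^unused[0-9]+: ' "$1" | awk -F':' '{print $1}'
     }

     # demo against a temp fixture (hypothetical storage/disk names)
     tmp=$(mktemp)
     printf 'scsi0: nvmezfs:vm-500-disk-0,size=32G\nunused0: nvmezfs:vm-500-disk-1\n' > "$tmp"
     list_unused_keys "$tmp"    # prints: unused0
     rm -f "$tmp"
     ```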
  20. @Impact123 revised this gist Aug 23, 2025. 1 changed file with 8 additions and 0 deletions.
    8 changes: 8 additions & 0 deletions Proxmox VE Tips.md
    @@ -56,6 +56,7 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [Install and configure Docker Container Toolkit](#install-and-configure-docker-container-toolkit)
    - [ZFS tips](#zfs-tips)
    - [Check space usage and ratios](#check-space-usage-and-ratios)
    - [Find old ZFS snapshots](#find-old-zfs-snapshots)
    - [Update ZFS ARC size](#update-zfs-arc-size)
    - [Validate](#validate)
    - [Adapt config](#adapt-config)
    @@ -732,6 +733,13 @@ This sorts by used size
    zfs list -ospace,logicalused,compression,compressratio -rS used
    ```

    ### Find old ZFS snapshots
    If above shows `USEDSNAP` being very high and you already deleted snapshots or have none it might be from a old/broken migration task.
    It might make sense to add a ` | less` at the end if you have lots of snapshots.
    ```bash
    zfs list -ospace,logicalused,compression,compressratio,creation -rs creation -t snap
    ```

    ### Update ZFS ARC size

    Adapted from [the official documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_limit_memory_usage)
  21. @Impact123 revised this gist Aug 23, 2025. 1 changed file with 11 additions and 0 deletions.
    11 changes: 11 additions & 0 deletions Proxmox VE Tips.md
    @@ -837,12 +837,23 @@ Investigate their source
    /dev/zvol/nvmezfs/vm-500-disk-1
    ```

    Show all of their paths
    ```bash
    grep -sR "^unused.*: " /etc/pve/ | awk -F': ' '{print $2}' | xargs -I{} pvesm path {}
    ```

    Then delete if needed
    ```bash
    # pvesm free nvmezfs:vm-500-disk-1
    Removed volume 'nvmezfs:vm-500-disk-1'
    ```

    Delete all of them
    **Extremely dangerous!** List them first as shown above
    ```bash
    grep -sR "^unused.*: " /etc/pve/ | awk -F': ' '{print $2}' | xargs -I{} pvesm free {}
    ```

    ### Restore guest configs

    A script that can extract the `.conf` file out of [`pmxcfs`](<https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)>)'s `config.db`.
  22. @Impact123 revised this gist Aug 23, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Proxmox VE Tips.md
    @@ -817,7 +817,7 @@ There is no reboot necessary.
    Just some miscellaneous small tips and scripts which don't have a good place yet or are better to be linked from above to keep things structured and organized.

    ### Find unused disks/volumes
    If goes without saying that yoi should be careful here. I trust you have backups.
    If goes without saying that you should be careful here. I trust you have backups.

    First rescan
    ```bash
  23. @Impact123 revised this gist Aug 23, 2025. 1 changed file with 29 additions and 2 deletions.
    31 changes: 29 additions & 2 deletions Proxmox VE Tips.md
    @@ -61,6 +61,7 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [Adapt config](#adapt-config)
    - [Final steps](#final-steps)
    - [Misc tips and scripts](#misc-tips-and-scripts)
    - [Find unused disks/volumes](#find-unused-disksvolumes)
    - [Restore guest configs](#restore-guest-configs)
    - [Monitor disk SMART information](#monitor-disk-smart-information)
    - [Credentials](#credentials)
    @@ -82,7 +83,6 @@ Some of the steps will not work if you run part of them in a script and copy pas
    - [CLI](#cli)
    - [PVE 8 / Debian 12](#pve-8--debian-12)
    - [PVE 9 / Debian 13](#pve-9--debian-13)
    - [Archive](#archive)

    ## Discard

    @@ -814,7 +814,34 @@ update-initramfs -u -k all
    There is no reboot necessary.

    ## Misc tips and scripts
    Just some miscellaneous tips and scripts which don't have a good place yet or are better to be linked from above to keep things structured and organized.
    Just some miscellaneous small tips and scripts which don't have a good place yet or are better to be linked from above to keep things structured and organized.

    ### Find unused disks/volumes
    If goes without saying that yoi should be careful here. I trust you have backups.

    First rescan
    ```bash
    qm rescan
    pct rescan
    ```

    Now find unused disks in the configs
    ```bash
    # grep -sR "unused" /etc/pve/
    /etc/pve/nodes/pve/qemu-server/500.conf:unused0: nvmezfs:vm-500-disk-1
    ```

    Investigate their source
    ```bash
    # pvesm path nvmezfs:vm-500-disk-1
    /dev/zvol/nvmezfs/vm-500-disk-1
    ```

    Then delete if needed
    ```bash
    # pvesm free nvmezfs:vm-500-disk-1
    Removed volume 'nvmezfs:vm-500-disk-1'
    ```
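
The steps above can be combined into a small sketch that prints each unique unused volume ID. `CONF_DIR` is an assumption added here so the listing can be rehearsed outside of `/etc/pve`:

```shell
# Sketch: list unique unused volume IDs referenced by guest configs.
# CONF_DIR is overridable so this can be dry-run outside a PVE node.
CONF_DIR="${CONF_DIR:-/etc/pve}"
grep -sRho '^unused[0-9]*: .*' "$CONF_DIR" | awk '{print $2}' | sort -u
# Inspect each result with `pvesm path <volid>` and only then `pvesm free <volid>`.
```

Note that `-h` omits file names and `-s` silences unreadable files, so the output is ready for `awk`.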

    ### Restore guest configs

  24. @Impact123 Impact123 revised this gist Aug 23, 2025. 1 changed file with 6 additions and 1 deletion.
    7 changes: 6 additions & 1 deletion Proxmox VE Tips.md
    @@ -721,12 +721,17 @@ docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

    ### Check space usage and ratios

    This shows and sorts by compression ratio
    This sorts by compression ratio

    ```bash
    zfs list -ospace,logicalused,compression,compressratio -rS compressratio
    ```

    This sorts by used size
    ```bash
    zfs list -ospace,logicalused,compression,compressratio -rS used
    ```
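
For a single dataset, the same ratio can be approximated from raw byte counts. This helper is a hypothetical sketch (not a ZFS built-in), and `ZFS_OUT` exists only so it can be exercised without a pool:

```shell
# Hypothetical helper: logicalused / used ratio for the dataset in "$1".
# ZFS_OUT can be pre-captured, e.g.: zfs get -Hp logicalused,used tank/data
ZFS_OUT="${ZFS_OUT:-$(zfs get -Hp logicalused,used "$1" 2>/dev/null)}"
echo "$ZFS_OUT" | awk '$2 == "logicalused" {l = $3} $2 == "used" {u = $3} END {if (u > 0) printf "%.2fx\n", l / u}'
```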

    ### Update ZFS ARC size

    Adapted from [the official documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_limit_memory_usage)
  25. @Impact123 Impact123 revised this gist Aug 19, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions Proxmox VE Tips.md
    @@ -955,9 +955,9 @@ https://docs.kernel.org/accounting/delay-accounting.html#usage

    Run this and check the column (select it via arrow keys) you're interested in.
    ```bash
    iotop-c -cPo
    iotop-c -cP
    ```
    Also try to press `a` for cumulative mode and let it run for a while.
    Also try `iotop-c -cPa` or press `a` to toggle cumulative mode and let it run for a while.
    ![image](https://gist.github.com/user-attachments/assets/d2995763-1bdb-4cfb-98ca-9b87ae279b8d)

    ##### iostat
  26. @Impact123 Impact123 revised this gist Aug 19, 2025. 1 changed file with 1 addition and 3 deletions.
    4 changes: 1 addition & 3 deletions Proxmox VE Tips.md
    @@ -324,9 +324,7 @@ See here for more:
    - https://www.kernel.org/doc/html/v4.14/admin-guide/kernel-parameters.html

    ### Step 3
    Note that a reboot is only needed if the VM was running before you added the serial port.

    Reboot the VM via the PVE button or power it off and on again.
    Reboot the VM via the PVE button or power it off and on again to apply the Hardware and bootloader config change.
    This is so the VM is cold booted. A normal `reboot` command from within the VM will not do the same.
    You can see if a `Hardware` change was applied by the color. If it's orange it's still to be applied.

  27. @Impact123 Impact123 revised this gist Aug 19, 2025. 1 changed file with 1 addition and 4 deletions.
    5 changes: 1 addition & 4 deletions Proxmox VE Tips.md
    @@ -306,10 +306,7 @@ To enable the TTY edit `/etc/default/grub` via
    nano /etc/default/grub
    ```

    and add this to the command line (replace `ttyS0` with yours from above)
    ```bash
    console=ttyS0 console=tty0
    ```
    Find the line starting with `GRUB_CMDLINE_LINUX_DEFAULT` and add `console=ttyS0 console=tty0` at the end (replace `ttyS0` with yours from above).

    It can look like this for example
    ```bash
  28. @Impact123 Impact123 revised this gist Aug 18, 2025. 1 changed file with 22 additions and 4 deletions.
    26 changes: 22 additions & 4 deletions Proxmox VE Tips.md
    @@ -301,12 +301,30 @@ For example it's `ttyS0` for me
    Aug 18 02:17:16 nodename kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
    ```

    To enable the TTY you can run this. Replace `ttyS0` with yours.
    [See here for more](https://0pointer.de/blog/projects/serial-console.html).
    To enable the TTY edit `/etc/default/grub` via
    ```bash
    systemctl enable --now serial-getty@ttyS0
    nano /etc/default/grub
    ```
    `--now` makes it start right away.

    and add this to the command line (replace `ttyS0` with yours from above)
    ```bash
    console=ttyS0 console=tty0
    ```

    It can look like this for example
    ```bash
    GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0 console=tty0"
    ```

    Save via `CTRL+X` and exit. Afterwards run
    ```bash
    update-grub
    ```
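
The manual edit above can also be scripted. This is a sketch under the assumption that `GRUB_CMDLINE_LINUX_DEFAULT` is a single quoted line; `GRUB_FILE` is added only so the command can be rehearsed on a copy first:

```shell
# Append serial console parameters to GRUB_CMDLINE_LINUX_DEFAULT.
# GRUB_FILE defaults to the real file; point it at a copy to test first.
GRUB_FILE="${GRUB_FILE:-/etc/default/grub}"
if [ -f "$GRUB_FILE" ]; then
    sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 console=ttyS0 console=tty0"/' "$GRUB_FILE"
fi
# Still run `update-grub` afterwards, as above.
```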

    See here for more:
    - https://0pointer.de/blog/projects/serial-console.html
    - https://docs.kernel.org/admin-guide/serial-console.html
    - https://www.kernel.org/doc/html/v4.14/admin-guide/kernel-parameters.html

    ### Step 3
    Note that a reboot is only needed if the VM was running before you added the serial port.
  29. @Impact123 Impact123 revised this gist Aug 18, 2025. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion Proxmox VE Tips.md
    @@ -339,7 +339,8 @@ I recommend to change the following settings. At least for linux guests.

    [A NIC's (Network Interface Controller/Card) name is hardware dependent](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#systemd_network_interface_names) and can change when you add or remove PCI(e) devices. Sometimes major kernel upgrades can also cause this.
    Since the `/etc/network/interfaces` file which handles networking uses these names to configure your network, changes to the name will break it.
    To prevent those changes you can [use a systemd `.link` file to permanently override the name](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names).
    To prevent those changes you can [use a systemd `.link` file to permanently override the name](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names).
    [PVE 9 comes with the `pve-network-interface-pinning` pinning tool](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_using_the_pve_network_interface_pinning_tool).
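
The override described above can be sketched as a small `.link` file; the file name, MAC address, and interface name below are placeholders, not values from this document:

```ini
# /etc/systemd/network/10-pve-nic0.link  (example name/path)
[Match]
# Pin by the NIC's MAC address (placeholder value; read yours from `ip link`)
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
# Stable name for /etc/network/interfaces to reference
Name=nic0
```

The linked PVE documentation also notes rebuilding the initramfs (`update-initramfs -u -k all`) and rebooting so the override takes effect early during boot.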

    ### Network testing

  30. @Impact123 Impact123 revised this gist Aug 18, 2025. 1 changed file with 14 additions and 0 deletions.
    14 changes: 14 additions & 0 deletions Proxmox VE Tips.md
    @@ -622,6 +622,20 @@ sed -i 's/^Components: .*/Components: main contrib non-free non-free-firmware/'
    apt update
    ```

    **If your node/VM uses Secure Boot** (check with `mokutil --sb-state`) follow this section.
    **Make sure to monitor the next boot process via noVNC**. You will be asked for the password when importing the key.
    ```bash
    apt install dkms && dkms generate_mok

    apt install linux-headers-generic

    # Set a simple password (a-z keys)
    mokutil --import /var/lib/dkms/mok.pub

    # If you followed this section after you already installed the driver run this and reboot
    # dpkg-reconfigure nvidia-kernel-dkms
    ```

    ### Node / VM
    ```bash
    apt install nvidia-detect