

@scyto
Last active August 22, 2025 16:12

Revisions

  1. scyto revised this gist Apr 30, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions routed-vm-mesh-access.md
    @@ -276,13 +276,13 @@ allow-hotplug eth1
    iface eth1 inet static
    address 10.0.81.41
    netmask 255.255.255.0
    -gateway 10.0.81.1
    +# gateway 10.0.81.1 - not strictly needed, causes issues on ifreload based systems
    up ip route add 10.0.0.80/28 via 10.0.81.1 dev eth1 || true

    iface eth1 inet6 static
    address fc00:81::41
    netmask 64
    -gateway fc00:81::1
    +# gateway fc00:81::1 - not strictly needed, causes issues on ifreload based systems
    up ip -6 route add fc00::80/124 via fc00:81::1 dev eth1 || true
    ```
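
    > **Note:** the change above swaps the second `gateway` for an `up ip route add ... || true` command, so ifreload-based systems never end up with two default routes. A quick sanity check from inside the VM (a sketch using the addresses from this revision, not part of the original gist):

    ```bash
    # Expect exactly one default route (via the LAN-facing NIC),
    # with the mesh prefixes reached explicitly over eth1.
    ip route show default        # should list a single default route
    ip route get 10.0.0.81       # expect: via 10.0.81.1 dev eth1
    ip -6 route get fc00::81     # expect: via fc00:81::1 dev eth1
    ```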

  2. scyto revised this gist Apr 30, 2025. 1 changed file with 5 additions and 3 deletions.
    8 changes: 5 additions & 3 deletions routed-vm-mesh-access.md
    @@ -248,7 +248,8 @@ i had to install ifupown2 in my debian swarm vms as an upgrade from from 11 to 1

    <details>
    <summary>Click me</summary>
    -```

    +```bash
    auto eth0
    allow-hotplug eth0
    iface eth0 inet static
    @@ -264,8 +265,8 @@ iface eth0 inet6 static
    address 2001:db8:1000:1::41
    netmask 64
    gateway 2001:db8:1000:1::1
    -dns-domain alexbal.com
    -dns-search alexbal.com
    +dns-domain mydomain.com
    +dns-search mydomain.com
    dns-nameservers 2001:db8:1000:1::5 2001:db8:1000:1::6


    @@ -284,4 +285,5 @@ iface eth1 inet6 static
    gateway fc00:81::1
    up ip -6 route add fc00::80/124 via fc00:81::1 dev eth1 || true
    ```

    </details>
  3. scyto revised this gist Apr 30, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions routed-vm-mesh-access.md
    @@ -248,7 +248,7 @@ i had to install ifupown2 in my debian swarm vms as an upgrade from from 11 to 1

    <details>
    <summary>Click me</summary>

    ```
    auto eth0
    allow-hotplug eth0
    iface eth0 inet static
    @@ -283,5 +283,5 @@ iface eth1 inet6 static
    netmask 64
    gateway fc00:81::1
    up ip -6 route add fc00::80/124 via fc00:81::1 dev eth1 || true

    ```
    </details>
  4. scyto revised this gist Apr 30, 2025. 1 changed file with 45 additions and 0 deletions.
    45 changes: 45 additions & 0 deletions routed-vm-mesh-access.md
    @@ -240,3 +240,48 @@ router openfabric 1
    exit
    ```
    </details>

    ---
    ### Example interfaces file from a VM on my pve1 node after this gist.
    note this is for VMs running ifupdown2 instead of networking.service
    I had to install ifupdown2 in my debian swarm VMs, as the upgrade from 11 to 12 didn't automatically make this switch!

    <details>
    <summary>Click me</summary>

    auto eth0
    allow-hotplug eth0
    iface eth0 inet static
    address 192.168.1.41
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-domain mydomain.com
    dns-search mydomain.com
    dns-nameservers 192.168.1.5 192.168.1.6

    iface eth0 inet6 static
    accept_ra = 2
    address 2001:db8:1000:1::41
    netmask 64
    gateway 2001:db8:1000:1::1
    dns-domain alexbal.com
    dns-search alexbal.com
    dns-nameservers 2001:db8:1000:1::5 2001:db8:1000:1::6


    # This is a manually configured interface for the ceph mesh
    auto eth1
    allow-hotplug eth1
    iface eth1 inet static
    address 10.0.81.41
    netmask 255.255.255.0
    gateway 10.0.81.1
    up ip route add 10.0.0.80/28 via 10.0.81.1 dev eth1 || true

    iface eth1 inet6 static
    address fc00:81::41
    netmask 64
    gateway fc00:81::1
    up ip -6 route add fc00::80/124 via fc00:81::1 dev eth1 || true

    </details>
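
    > **Note:** the example above relies on ifupdown2 semantics (the `up` commands re-run safely thanks to `|| true`), so it is worth confirming the VM really is on ifupdown2 and that the file applies cleanly. A hedged sketch, not from the original gist:

    ```bash
    dpkg -l ifupdown2        # confirm the package is actually installed
    ifreload -a              # ifupdown2: re-apply /etc/network/interfaces in place
    ifquery --check eth1     # compare the running state of eth1 against its stanza
    ```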
  5. scyto revised this gist Apr 29, 2025. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions routed-vm-mesh-access.md
    @@ -24,6 +24,8 @@ Enable VMs hosted on proxmox to be able to access ceph mesh - my usecase is for
    - VMs are routed using vmbr100 on each node
    - you have a true dual stack setup on your mesh (if you only have IPv4, including for ceph, drop the IPv6 sections)

    +REMEMBER ceph clients want to access the MONs / OSDs / MGRs and MDSs on the `lo` interface loopback addresses - that's the goal!


    ---
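
    > **Note:** that REMEMBER line is the crux of the exercise: the VM's routed path must reach the MON/OSD/MGR/MDS addresses that live on each node's `lo` interface. Purely for illustration (the gist's ceph client walkthrough is still "coming soon"), a hypothetical VM-side `/etc/ceph/ceph.conf` pointing at those loopback addresses might look like:

    ```
    # hypothetical sketch - the fsid is a placeholder; the mon addresses
    # follow the fc00::8x loopback scheme from the earlier gists
    [global]
        fsid = <your-cluster-fsid>
        mon_host = fc00::81, fc00::82, fc00::83
        ms_bind_ipv6 = true
        ms_bind_ipv4 = false
    ```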

  6. scyto revised this gist Apr 29, 2025. 1 changed file with 3 additions and 0 deletions.
    3 changes: 3 additions & 0 deletions routed-vm-mesh-access.md
    @@ -168,10 +168,13 @@ iface ens18 inet6 static

    # The primary network interface
    auto ens19
    iface ens19 inet auto

    iface ens19 inet6 auto
    +accept_ra 1
    +autoconf 1
    +dhcp 1

    ```

    > **Notes:**
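
    > **Note:** `accept_ra 1`, `autoconf 1` and `dhcp 1` under the `inet6 auto` stanza tell ifupdown2 to accept router advertisements, take a SLAAC address, and also try DHCPv6 on the LAN-facing NIC. A quick way to verify it stuck (a sketch, interface name from the example):

    ```bash
    sysctl net.ipv6.conf.ens19.accept_ra   # expect 1 (or 2 if forwarding is enabled)
    ip -6 addr show dev ens19              # expect a SLAAC address plus any DHCPv6 lease
    ```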
  7. scyto revised this gist Apr 29, 2025. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion routed-vm-mesh-access.md
    @@ -22,6 +22,7 @@ Enable VMs hosted on proxmox to be able to access ceph mesh - my usecase is for
    - No bridging of `en05` or `en06` is done - if these are bridged all mesh networking breaks, so never put them in a bridge!
    - The openfabric mesh remains as-is for ceph traffic
    - VMs are routed using vmbr100 on each node
    +- you have a true dual stack setup on your mesh (if you only have IPv4, including for ceph, drop the IPv6 sections)


    ---
    @@ -180,7 +181,7 @@ iface ens19 inet6 auto
    > - while it may work without these I found some error situations where connectivity failed due to there being two default routes - maybe someone can suggest a more elegant fix
    > - the IPv4 and IPv6 addresses need to be from the host's vmbr100 /24 and /64 ranges.
    -You should find you can ping just about any of the non-vmbr100 adressed on any node - full routing!
    +You can now test pinging from the VM to various node and ceph addresses.

    Now you need to setup ceph client in the vm - coming soon.

  8. scyto revised this gist Apr 29, 2025. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions routed-vm-mesh-access.md
    @@ -177,6 +177,7 @@ iface ens19 inet6 auto
    > - uses `vmbr100` on the host to access the mesh
    > - uses `vmbr0` on the host to access the internet
    > - static routes defined via `fc00:83::1` and `10.0.83.1` in the VM (using up command) to avoid using the default route on vmbr0
    +> - while it may work without these I found some error situations where connectivity failed due to there being two default routes - maybe someone can suggest a more elegant fix
    > - the IPv4 and IPv6 addresses need to be from the host's vmbr100 /24 and /64 ranges.
    You should find you can ping just about any of the non-vmbr100 addresses on any node - full routing!
  9. scyto revised this gist Apr 29, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion routed-vm-mesh-access.md
    @@ -1,5 +1,5 @@
    # Give VMs Access to Ceph Mesh (routed not bridged access)
    -## Version 0.9 2025.04.29
    +## Version 0.9 (2025.04.29)


    Routed is needed, you can't just simply bridge en05 and en06 and have VMs work, bridging seems to not work on thunderbolt interfaces, at least I could never get the interfaces working when bridged and it broke the ceph mesh completely.
  10. scyto revised this gist Apr 29, 2025. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions routed-vm-mesh-access.md
    @@ -174,9 +174,9 @@ iface ens19 inet6 auto
    ```

    > **Notes:**
    -> - VMs use `vmbr100` as their "uplink"
    -> - They route via `fc00:83::1` and `10.0.83.1`\
    -> - the `up ip route` commands are needed on the mesh interface to ensure these packets never try and go via the default gateway
    +> - uses `vmbr100` on the host to access the mesh
    +> - uses `vmbr0` on the host to access the internet
    +> - static routes defined via `fc00:83::1` and `10.0.83.1` in the VM (using up command) to avoid using the default route on vmbr0
    > - the IPv4 and IPv6 addresses need to be from the host's vmbr100 /24 and /64 ranges.
    You should find you can ping just about any of the non-vmbr100 addresses on any node - full routing!
  11. scyto revised this gist Apr 29, 2025. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions routed-vm-mesh-access.md
    @@ -1,4 +1,6 @@
    # Give VMs Access to Ceph Mesh (routed not bridged access)
    +## Version 0.9 2025.04.29


    Routed is needed, you can't just simply bridge en05 and en06 and have VMs work, bridging seems to not work on thunderbolt interfaces, at least I could never get the interfaces working when bridged and it broke the ceph mesh completely.

  12. scyto revised this gist Apr 29, 2025. 1 changed file with 4 additions and 2 deletions.
    6 changes: 4 additions & 2 deletions routed-vm-mesh-access.md
    @@ -1,14 +1,16 @@
    # Give VMs Access to Ceph Mesh (routed not bridged access)

    -Routed is needed, you can just simply bridge en05 and en06 and have VMs work, bridging seems to not work on thunderbolt interfaces (at least I could never get the interfaces working when bridged and it broke the ceph mesh completely) tl;dr can't bridge thunderbolt interfaces.
    +Routed is needed, you can't just simply bridge en05 and en06 and have VMs work, bridging seems to not work on thunderbolt interfaces, at least I could never get the interfaces working when bridged and it broke the ceph mesh completely.

    +tl;dr can't bridge thunderbolt interfaces

    ### Goal

    Enable VMs hosted on proxmox to be able to access ceph mesh - my use case is for my docker swarm VMs to be able to store their bind mounts on cephFS

    > ### Imperatives
    >
    -> you y **MUST** change your ceph public and private network in ceph.conf from `fc00::/64` to `fc00::80/124`; if you do not, ceph might get super funky as `fc00::/64` is actually treated as a /8 by ceph!? - this change should allow you to have ceph mons `fc00::81` through `fc00::8e`. Make sure to change, then reboot just one node and ensure all logs are clean before you move on
    +> you **MUST** change your ceph public and private network in ceph.conf from `fc00::/64` to `fc00::80/124`; if you do not, ceph might get super funky as `fc00::/64` is actually treated as a /8 by ceph!? - this change should allow you to have ceph mons `fc00::81` through `fc00::8e`. Make sure to change, then reboot just one node and ensure all logs are clean before you move on
    ### Assumptions
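
    > **Note:** the imperative above boils down to a one-line change per key in `/etc/ceph/ceph.conf` on each node; roughly this, assuming the standard key names (a sketch, adjust to your layout):

    ```
    [global]
        public_network  = fc00::80/124
        cluster_network = fc00::80/124
    ```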

  13. scyto revised this gist Apr 29, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions routed-vm-mesh-access.md
    @@ -1,10 +1,10 @@
    # Give VMs Access to Ceph Mesh (routed not bridged access)

    -> Note: I have both implemented, I need to verify it's possible to do just phase 1 - please wait until I have done that before using this.
    +Routed is needed, you can just simply bridge en05 and en06 and have VMs work, bridging seems to not work on thunderbolt interfaces (at least I could never get the interfaces working when bridged and it broke the ceph mesh completely) tl;dr can't bridge thunderbolt interfaces.

    ### Goal

    -Eenable VMs hosted on proxmox to be able to access ceph mesh - my use case is for my docker swarm VMs to be able to store their bind mounts on cephFS
    +Enable VMs hosted on proxmox to be able to access ceph mesh - my use case is for my docker swarm VMs to be able to store their bind mounts on cephFS

    > ### Imperatives
    >
  14. scyto revised this gist Apr 29, 2025. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions routed-vm-mesh-access.md
    @@ -173,6 +173,7 @@ iface ens19 inet6 auto
    > - VMs use `vmbr100` as their "uplink"
    > - They route via `fc00:83::1` and `10.0.83.1`\
    > - the `up ip route` commands are needed on the mesh interface to ensure these packets never try and go via the default gateway
    +> - the IPv4 and IPv6 addresses need to be from the host's vmbr100 /24 and /64 ranges.
    You should find you can ping just about any of the non-vmbr100 addresses on any node - full routing!
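
    > **Note:** concretely, each VM picks free addresses inside its own host's vmbr100 subnets, which is why these configs don't migrate between nodes. The addresses used elsewhere in this gist line up with that:

    ```
    # VM on pve1:  10.0.81.0/24 -> 10.0.81.41    fc00:81::/64 -> fc00:81::41
    # VM on pve3:  10.0.83.0/24 -> 10.0.83.105   fc00:83::/64 -> fc00:83::105
    ```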

  15. scyto revised this gist Apr 29, 2025. 1 changed file with 19 additions and 13 deletions.
    32 changes: 19 additions & 13 deletions routed-vm-mesh-access.md
    @@ -135,28 +135,33 @@ IS-IS paths to level-2 routers with hop-by-hop metric
    > - vm virtual nic attached to vmbr0 must be set in VM config **with MTU the same as vmbr0**
    > - vm virtual nic attached to vmbr100 must be set in VM config **with MTU same as vmbr100**
    -Inside the routed VM:
    +Inside the routed VM (this is a VM on pve3):

    ```bash
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    source /etc/network/interfaces.d/*

    # The loopback network interface
    auto lo
    iface lo inet loopback

    -# interface bound to hosts vmbr100 (access routed mesh)

    +# This is a manually configured interface for the ceph mesh
    allow-hotplug ens18
    -iface ens18 inet6 static
    -address fc00:81::100
    -netmask 64
    -gateway fc00:81::1
    +iface ens18 inet static
    +address 10.0.83.105
    +netmask 255.255.255.0
    +gateway 10.0.83.1
    +up ip route add 10.0.0.80/28 via 10.0.83.1 dev ens18

    allow-hotplug ens18
    iface ens18 inet6 static
    -address 10.0.81.100
    -netmask 64
    -gateway 10.0.81.1
    +address fc00:83::105
    +netmask 64
    +gateway fc00:83::1
    +up ip -6 route add fc00::80/124 via fc00:83::1 dev ens18

    -# interface bound to vmbr0 (access internet and lan)
    +# The primary network interface
    auto ens19
    iface ens19 inet6 auto
    accept_ra 1
    @@ -166,7 +171,8 @@ iface ens19 inet6 auto

    > **Notes:**
    > - VMs use `vmbr100` as their "uplink"
    -> - They route via `fc00:81::1` and `10.0.81.1`
    +> - They route via `fc00:83::1` and `10.0.83.1`\
    +> - the `up ip route` commands are needed on the mesh interface to ensure these packets never try and go via the default gateway
    You should find you can ping just about any of the non-vmbr100 addresses on any node - full routing!
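
    > **Note:** with the pve3 addressing this revision introduces, a spot-check from inside the VM looks something like this (a sketch; the loopback addresses come from the earlier mesh gists):

    ```bash
    ping -c 3 10.0.0.81     # pve1 IPv4 loopback, should go via 10.0.83.1 on ens18
    ping -c 3 fc00::82      # pve2 IPv6 loopback across the routed mesh
    ip route get 10.0.0.81  # confirm the next hop is 10.0.83.1, not the default gateway
    ```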

  16. scyto revised this gist Apr 29, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions routed-vm-mesh-access.md
    @@ -42,7 +42,7 @@ This bridge is what a VM will bind to that allows it to reach the ceph network,
    #### create a new file called `/etc/network/interfaces.d/vmbridge` for Node 1 (`pve1`). Repeat on pve2 and pve3, changing addresses as per the table above.

    ```bash

    +# VM routed Bridge IPv4
    auto vmbr100
    iface vmbr100 inet static
    address 10.0.81.1/24
    @@ -51,14 +51,14 @@ iface vmbr100 inet static
    bridge-stp off
    bridge-fd 0

    +# VM routed Bridge IPv6
    iface vmbr100 inet6 static
    address fc00:81::1/64
    mtu 65520
    bridge-ports none
    bridge-stp off
    bridge-fd 0


    ```
    > **Notes:**
    > - the MTU is set the same as thunderbolt interface MTUs - this is critical
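
    > **Note:** because the bridge MTU must match the thunderbolt MTU end to end, it is worth probing the path rather than trusting the configs (a sketch; 65492 = 65520 minus 20 bytes IPv4 header and 8 bytes ICMP header):

    ```bash
    ip link show vmbr100 | grep -o 'mtu [0-9]*'   # on the node: expect mtu 65520
    ip link show ens18   | grep -o 'mtu [0-9]*'   # in the VM: must match the bridge
    ping -c 3 -M do -s 65492 10.0.81.1            # DF set: fails loudly on any MTU mismatch
    ```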
  17. scyto revised this gist Apr 29, 2025. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion routed-vm-mesh-access.md
    @@ -172,7 +172,8 @@ You should find you can ping just about any of the non-vmbr100 adressed on any n

    Now you need to setup ceph client in the vm - coming soon.

    -#### this is an example frr.conf from my live running pve1 node as an example after completing these steps
    +---
    +### Example frr.conf from my pve1 node after this gist.

    <details>
    <summary>Click me</summary>
  18. scyto revised this gist Apr 29, 2025. 2 changed files with 50 additions and 49 deletions.
    50 changes: 50 additions & 0 deletions routed-vm-mesh-access.md
    @@ -171,3 +171,53 @@ iface ens19 inet6 auto
    You should find you can ping just about any of the non-vmbr100 addresses on any node - full routing!

    Now you need to setup ceph client in the vm - coming soon.

    #### this is an example frr.conf from my live running pve1 node as an example after completing these steps

    <details>
    <summary>Click me</summary>

    ```
    root@pve1 13:19:03 ~ # cat /etc/frr/frr.conf
    frr version 8.5.2
    frr defaults datacenter
    hostname pve1
    log syslog informational
    service integrated-vtysh-config
    interface en05
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric hello-interval 1
    openfabric hello-multiplier 3
    openfabric csnp-interval 5
    openfabric psnp-interval 2
    exit
    interface en06
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric hello-interval 1
    openfabric hello-multiplier 3
    openfabric csnp-interval 5
    openfabric psnp-interval 2
    exit
    interface lo
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric passive
    exit
    interface vmbr100
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric passive
    exit
    router openfabric 1
    net 49.0000.0000.0081.00
    lsp-gen-interval 5
    exit
    ```
    </details>
    49 changes: 0 additions & 49 deletions z-example-frr-config.md
    @@ -1,49 +0,0 @@
    ## this is an example frr.conf from my live running pve1 node as an example after completing these steps

    <details>
    <summary>Click me</summary>

    ```
    root@pve1 13:19:03 ~ # cat /etc/frr/frr.conf
    frr version 8.5.2
    frr defaults datacenter
    hostname pve1
    log syslog informational
    service integrated-vtysh-config
    interface en05
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric hello-interval 1
    openfabric hello-multiplier 3
    openfabric csnp-interval 5
    openfabric psnp-interval 2
    exit
    interface en06
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric hello-interval 1
    openfabric hello-multiplier 3
    openfabric csnp-interval 5
    openfabric psnp-interval 2
    exit
    interface lo
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric passive
    exit
    interface vmbr100
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric passive
    exit
    router openfabric 1
    net 49.0000.0000.0081.00
    lsp-gen-interval 5
    exit
    ```
    </details>
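
    > **Note:** after loading an frr.conf like the one above, a couple of vtysh one-liners confirm that vmbr100 actually joined the fabric as a passive interface (a sketch; exact output varies by FRR version):

    ```bash
    vtysh -c "show openfabric interface"   # vmbr100 should be listed, flagged passive
    vtysh -c "show openfabric topology"    # neighbours plus the 10.0.8x / fc00:8x prefixes
    ```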
  19. scyto revised this gist Apr 29, 2025. 1 changed file with 5 additions and 1 deletion.
    6 changes: 5 additions & 1 deletion z-example-frr-config.md
    @@ -1,5 +1,8 @@
    ## this is an example frr.conf from my live running pve1 node as an example after completing these steps

    +<details>
    +<summary>Click me</summary>

    ```
    root@pve1 13:19:03 ~ # cat /etc/frr/frr.conf
    frr version 8.5.2
    @@ -42,4 +45,5 @@ router openfabric 1
    net 49.0000.0000.0081.00
    lsp-gen-interval 5
    exit
    ```
    ```
    +</details>
  20. scyto renamed this gist Apr 29, 2025. 1 changed file with 0 additions and 0 deletions.
    File renamed without changes.
  21. scyto renamed this gist Apr 29, 2025. 1 changed file with 0 additions and 0 deletions.
    File renamed without changes.
  22. scyto revised this gist Apr 29, 2025. 1 changed file with 45 additions and 0 deletions.
    45 changes: 45 additions & 0 deletions example-frr-config.md
    @@ -0,0 +1,45 @@
    ## this is an example frr.conf from my live running pve1 node as an example after completing these steps

    ```
    root@pve1 13:19:03 ~ # cat /etc/frr/frr.conf
    frr version 8.5.2
    frr defaults datacenter
    hostname pve1
    log syslog informational
    service integrated-vtysh-config
    interface en05
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric hello-interval 1
    openfabric hello-multiplier 3
    openfabric csnp-interval 5
    openfabric psnp-interval 2
    exit
    interface en06
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric hello-interval 1
    openfabric hello-multiplier 3
    openfabric csnp-interval 5
    openfabric psnp-interval 2
    exit
    interface lo
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric passive
    exit
    interface vmbr100
    ip router openfabric 1
    ipv6 router openfabric 1
    openfabric passive
    exit
    router openfabric 1
    net 49.0000.0000.0081.00
    lsp-gen-interval 5
    exit
    ```
  23. scyto revised this gist Apr 29, 2025. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion routed-vm-mesh-access.md
    @@ -53,10 +53,11 @@ iface vmbr100 inet static

    iface vmbr100 inet6 static
    address fc00:81::1/64
    -mtu 65520
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    +mtu 65520


    ```
    > **Notes:**
  24. scyto revised this gist Apr 29, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion routed-vm-mesh-access.md
    @@ -60,7 +60,7 @@ iface vmbr100 inet6 static

    ```
    > **Notes:**
    -> - ???
    +> - the MTU is set the same as thunderbolt interface MTUs - this is critical
    ---
    ## FRR Configuration addition repeat on node 2 & 3 with changes from table
  25. scyto revised this gist Apr 29, 2025. 1 changed file with 5 additions and 5 deletions.
    10 changes: 5 additions & 5 deletions routed-vm-mesh-access.md
    @@ -128,11 +128,11 @@ IS-IS paths to level-2 routers with hop-by-hop metric

    ## How to configure VM - Example for VM on node pve1

    -- the vm has two interfaces, one bound to vmbr0 and one bound to vmbr100
    -- this configuration is not intended to be migrated to other nodes (the guest addressing is node specific)
    -- this could be mitigated through some use of an IPAM solution - unclear how yet
    -- interface attached to vmbr0 must be set in VM config with MTU the same as vmbr0
    -- interface attached to vmbr100 must be set in VM config with MTU same as vmbr100
    +> - the vm has two interfaces, one bound to vmbr0 and one bound to vmbr100
    +> - this configuration is not intended to be migrated to other nodes (the guest addressing is node specific)
    +> - this could be mitigated through some use of an IPAM solution - unclear how yet
    +> - vm virtual nic attached to vmbr0 must be set in VM config **with MTU the same as vmbr0**
    +> - vm virtual nic attached to vmbr100 must be set in VM config **with MTU same as vmbr100**
    Inside the routed VM:

  26. scyto revised this gist Apr 29, 2025. 1 changed file with 3 additions and 9 deletions.
    12 changes: 3 additions & 9 deletions routed-vm-mesh-access.md
    @@ -8,23 +8,17 @@ Eenable VMs hosted on proxmox to be able to access ceph mesh - my usecase is for

    > ### Imperatives
    >
    -> you absolutely **MUST** change your ceph public and private network in ceph.conf from `fc00::/64` to `fc00::80/124`; if you do not, ceph might get super funky as `fc00::/64` is actually treated as a /8 by ceph!? - this change should allow you to have ceph mons `fc00::81` through `fc00::8e`. If you need more than 14 mons you are on your own, lol.
    ->
    -> once you have changed them, reboot one node, be sure it is working before rebooting the other nodes and ESPECIALLY before doing anything below. You have been warned. And don't get cute and decide to change your mon subnet - it can be done... but it's very hard.... ask me how I know :-).
    +> you y **MUST** change your ceph public and private network in ceph.conf from `fc00::/64` to `fc00::80/124`; if you do not, ceph might get super funky as `fc00::/64` is actually treated as a /8 by ceph!? - this change should allow you to have ceph mons `fc00::81` through `fc00::8e`. Make sure to change, then reboot just one node and ensure all logs are clean before you move on
    ### Assumptions

    - You already implemented [thunderbolt networking](/67fdc9a517faefa68f730f82d7fa3570) and [frr setup](/58b5cd9a18e1f5846048aabd4b152564) as per those gists. Steps from them will not be re-documented here.
    - Three Proxmox nodes: `pve1`, `pve2`, `pve3`
    -- Thunderbolt mesh links: `en05` and `en06`
    -- LAN interface: `vmbr0`
    +- Thunderbolt mesh links are: `en05` and `en06`
    - No bridging of `en05` or `en06` is done - if these are bridged all mesh networking breaks, so never put them in a bridge!
    - The openfabric mesh remains as-is for ceph traffic
    - VMs are routed using vmbr100 on each node
    -- BGP is used to advertise routes for both LAN based clients and VM based clients
    -- BGP P2P links have to be used as other BGP mechanisms didn't seem to work
    -- BGP routes are explicitly defined for advertisement, no broadcast is used
    -- Assumes a Ubiquiti unifi OS router with BGP feature (EFG and maybe others)


    ---

  27. scyto revised this gist Apr 29, 2025. 1 changed file with 3 additions and 9 deletions.
    12 changes: 3 additions & 9 deletions routed-vm-mesh-access.md
    @@ -2,15 +2,9 @@

    > Note: I have both implemented, I need to verify it's possible to do just phase 1 - please wait until I have done that before using this.
    -### Goals

    -- enable VMs hosted on proxmox to be able to access ceph mesh - my use case is for my docker swarm VMs to be able to store their bind mounts on cephFS
    -- enable any machine on my LAN to be able to access ceph as a ceph client
    -- I assume you have IPv6 ceph, if you don't be very careful how you interpret this guide
    -- use BGP because that's supported by the UI in UnifiOS - I risk unifi blowing away openfabric on firmware upgrade - but I bet that would work too....
    -- Non-goals:
    -- this isn't a how to, if you have come this far you already know how to do this
    -- it uses my previous gist conventions, if you diverted from that be super careful how you implement this
    +### Goal

    +Eenable VMs hosted on proxmox to be able to access ceph mesh - my use case is for my docker swarm VMs to be able to store their bind mounts on cephFS

    > ### Imperatives
    >
  28. scyto revised this gist Apr 29, 2025. 1 changed file with 1 addition and 227 deletions.
    228 changes: 1 addition & 227 deletions routed-vm-mesh-access.md
    @@ -181,230 +181,4 @@ iface ens19 inet6 auto
    You should find you can ping just about any of the non-vmbr100 addresses on any node - full routing!


    ---

    # Phase 2 - Extending the routable mesh to your LAN
    only needed if you want LAN clients to access the ceph mesh

    ## Phase 2 - FRR BGP Settings
    | Field | pve1 | pve2 | pve3 |
    |:-----------------:|:-----------------------:|:-----------------------:|:-----------------------:|
    | BGP Router-ID | `192.0.2.81` | `192.0.2.82` | `192.0.2.83` |
    | IPv6 Neighbors | `dead:beef:dead:1::82`<br> `dead:beef:dead:1::83`<br>`dead:beef:dead:1::1` |`dead:beef:dead:1::81`<br> `dead:beef:dead:1::83`<br>`dead:beef:dead:1::1` | `dead:beef:dead:1::81`<br> `dead:beef:dead:1::82`<br>`dead:beef:dead:1::1` |
    | IPv4 Neighbors | `192.168.1.82`<br>`192.168.1.83`<br>`192.168.1.1`|`192.168.1.81`<br>`192.168.1.83`<br> `192.168.1.1`|`192.168.1.81`<br>`192.168.1.82`<br>`192.168.1.1`|
    | IPv6 Routes<br>(network) |`fc00::81/128`<br>`fc00:81::/64`|`fc00::82/128`<br>`fc00:82::/64`|`fc00::83/128`<br>`fc00:83::/64` |
    | IPv4 Routes<br>(network)| `10.0.0.81/32`<br>`10.0.81.0/24`| `10.0.0.82/32`<br>`10.0.82.0/24`| `10.0.0.83/32`<br>`10.0.83.0/24` |

    > **Note:** instead of explicitly defining the networks you could use `redistribute connected` in place of the bgp `network` lines; this will advertise all learnt routes (but not kernel / static routes). I think this is a better way of doing it, but given the unpredictability of this in different people's environments I elected to explicitly define the routes for predictability.
    interface file changes

    > **Notes:**
    > - `vmbr0` MTU settings must be the same as:
    > - the settings on your router's br0
    > - must be set the same on your vmbr0 members (in my case enp86s0)
    > - if not BGP routing will fail
    > - `vmbr100` is for routed VMs hosted on the proxmox nodes
    > - the IPv6 /64 subnet `dead:beef:dead:1::/64` is to hide my real IPv6 addresses


    ## Advertise to the LAN with BGP

    1. add this to `frr.conf` and restart frr

    ```
    router bgp 65001
    bgp router-id 192.0.2.81
    no bgp ebgp-requires-policy
    neighbor dead:beef:dead:1::82 remote-as 65001
    neighbor dead:beef:dead:1::83 remote-as 65001
    neighbor dead:beef:dead:1::1 remote-as 65001
    neighbor 192.168.1.82 remote-as 65001
    neighbor 192.168.1.83 remote-as 65001
    neighbor 192.168.1.1 remote-as 65001
    address-family ipv6 unicast
    network fc00::81/128
    network fc00:81::/64
    neighbor dead:beef:dead:1::82 activate
    neighbor dead:beef:dead:1::83 activate
    neighbor dead:beef:dead:1::1 activate
    exit-address-family
    address-family ipv4 unicast
    network 10.0.0.81/32
    network 10.0.81.0/24
    neighbor 192.168.1.82 activate
    neighbor 192.168.1.83 activate
    neighbor 192.168.1.1 activate
    exit-address-family
    ```




    ## Ubiquiti UnifiOS Router Integration

    This has only been tested on an EFG running network app 9.1.92

    1. create a file called bgp.conf
    2. populate it with the contents below
    3. in unifi network application go to `settings > routing > bgp`
    4. name = ceph-mesh
    5. device = name of router (should be on the drop down)
    6. click upload and upload file
    7. do NOT select the check box `override wan monitors` (uncheck it if it is checked)
    8. you can shell into the router and use `xxxxx` and `yyyyyy` to check the routes formed correctly

    ```bash
    router bgp 65001
    bgp router-id 192.168.1.1
    no bgp ebgp-requires-policy

    neighbor 192.168.1.81 remote-as 65001
    neighbor 192.168.1.81 update-source 192.168.1.1
    neighbor 192.168.1.81 next-hop-self

    neighbor 192.168.1.82 remote-as 65001
    neighbor 192.168.1.82 update-source 192.168.1.1
    neighbor 192.168.1.82 next-hop-self

    neighbor 192.168.1.83 remote-as 65001
    neighbor 192.168.1.83 update-source 192.168.1.1
    neighbor 192.168.1.83 next-hop-self

    neighbor 2600:a801:830:1::81 remote-as 65001
    neighbor 2600:a801:830:1::81 update-source 2600:a801:830:1::1

    neighbor 2600:a801:830:1::82 remote-as 65001
    neighbor 2600:a801:830:1::82 update-source 2600:a801:830:1::1

    neighbor 2600:a801:830:1::83 remote-as 65001
    neighbor 2600:a801:830:1::83 update-source 2600:a801:830:1::1

    address-family ipv6 unicast
    neighbor 2600:a801:830:1::81 activate
    neighbor 2600:a801:830:1::81 next-hop-self
    neighbor 2600:a801:830:1::82 activate
    neighbor 2600:a801:830:1::82 next-hop-self
    neighbor 2600:a801:830:1::83 activate
    neighbor 2600:a801:830:1::83 next-hop-self
    exit-address-family

    address-family ipv4 unicast
    neighbor 192.168.1.81 activate
    neighbor 192.168.1.81 next-hop-self
    neighbor 192.168.1.82 activate
    neighbor 192.168.1.82 next-hop-self
    neighbor 192.168.1.83 activate
    neighbor 192.168.1.83 next-hop-self
    exit-address-family
    ```

    > **Notes:**
    > - The router learns `fc00::8x/128`, `fc00:8x::/64`, `10.0.0.8x/32`, and `10.0.8x.0/24` routes from your nodes.
    > - The LAN can access VMs without NAT.
    > - I found sometimes frr.service can crash when you upload via the UI; if it does just restart it
    > - the format above is ordered for easy reading; `vtysh -c "show running-config"` will show a different layout
    > - the settings above won't be written to `/etc/frr/frr.conf` - so don't worry if that's empty
    > - you can use the following command to check everything looks good:
    > - `vtysh -c "show bgp ipv4 unicast bestpath"`
    > - `vtysh -c "show bgp ipv6 unicast bestpath"`
    it will look something like this:

    > note the `i` in front of the IPv6 addresses is just a known display bug in frr 8.1 that my router is running
    ```
    root@EFG:/etc/frr# vtysh -c "show bgp ipv6 unicast"
    BGP table version is 6, local router ID is 192.168.1.1, vrf id 0
    Default local pref 100, local AS 65001
    Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
    i internal, r RIB-failure, S Stale, R Removed
    Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
    Origin codes: i - IGP, e - EGP, ? - incomplete
    RPKI validation codes: V valid, I invalid, N Not found
    Network Next Hop Metric LocPrf Weight Path
    *>ifc00::81/128 fe80::4a21:bff:fe58:9c45
    0 100 0 i
    *>ifc00::82/128 fe80::4a21:bff:fe56:a5d8
    0 100 0 i
    *>ifc00::83/128 fe80::4a21:bff:fe56:a650
    0 100 0 i
    *>ifc00:81::/64 fe80::4a21:bff:fe58:9c45
    0 100 0 i
    *>ifc00:82::/64 fe80::4a21:bff:fe56:a5d8
    0 100 0 i
    *>ifc00:83::/64 fe80::4a21:bff:fe56:a650
    0 100 0 i
    Displayed 6 routes and 6 total paths
    root@EFG:/etc/frr# vtysh -c "show bgp ipv4 unicast"
    BGP table version is 6, local router ID is 192.168.1.1, vrf id 0
    Default local pref 100, local AS 65001
    Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
    i internal, r RIB-failure, S Stale, R Removed
    Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
    Origin codes: i - IGP, e - EGP, ? - incomplete
    RPKI validation codes: V valid, I invalid, N Not found
    Network Next Hop Metric LocPrf Weight Path
    *>i10.0.0.81/32 192.168.1.81 0 100 0 i
    * i 192.168.1.81 0 100 0 i
    *>i10.0.0.82/32 192.168.1.82 0 100 0 i
    * i 192.168.1.82 0 100 0 i
    *>i10.0.0.83/32 192.168.1.83 0 100 0 i
    * i 192.168.1.83 0 100 0 i
    *>i10.0.81.0/24 192.168.1.81 0 100 0 i
    * i 192.168.1.81 0 100 0 i
    *>i10.0.82.0/24 192.168.1.82 0 100 0 i
    * i 192.168.1.82 0 100 0 i
    *>i10.0.83.0/24 192.168.1.83 0 100 0 i
    * i 192.168.1.83 0 100 0 i
    Displayed 6 routes and 12 total paths
    ```
    ---
    # Quick Checklist
    - [x] Configure `/etc/network/interfaces` correctly on each node.
    - [x] Assign unique IPv6 and IPv4 loopbacks.
    - [x] Setup `openfabric` on Thunderbolt interfaces.
    - [x] Advertise loopback and VM networks into BGP.
    - [x] Peering between Proxmox nodes and LAN router.
    - [x] Configure VMs to route through `vmbr100`.
    ---
    # Bonus Tip: Monitor Convergence
    You can monitor bgp mesh convergence on any node or the router using:
    ```bash
    vtysh -c "show bgp ipv6 unicast summary"
    vtysh -c "show bgp ipv4 unicast summary"
    ```

    You can monitor openfabric mesh convergence on any node (but not router) using:

    ```bash
    vtysh -c "show openfabfric toplology"
    ```

    Super tip: you can shorten words like above; "sh open topo" will work instead

    ---



    ---

    # Conclusion

    thanks for listening (there was some chipper and annoying chatgpt output here before)
    I will be posting how to join ceph from the docker client over in my swarm gist later - I have instructions written up that just need to be tested....

    Now you need to setup ceph client in the vm - coming soon.
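
    > **Note:** the deleted Phase 2 text mentions `redistribute connected` as a terser alternative to enumerating `network` statements. For reference, the variant it alludes to would look roughly like this (a sketch of that alternative, not the configuration the gist recommends):

    ```
    router bgp 65001
     bgp router-id 192.0.2.81
     no bgp ebgp-requires-policy
     neighbor 192.168.1.1 remote-as 65001
     address-family ipv4 unicast
      redistribute connected
      neighbor 192.168.1.1 activate
     exit-address-family
    ```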
  29. scyto revised this gist Apr 29, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions routed-vm-mesh-access.md
    @@ -75,7 +75,7 @@ iface vmbr100 inet6 static
    > - ???
    ---
    -### FRR Configuration addition repeat on node 2 & 3 with changes from table
    +## FRR Configuration addition repeat on node 2 & 3 with changes from table

    Key things to note compared to the normal non-routed setup:
    - addition of vmbr100 to openfabric to allow VM connectivity
    @@ -138,7 +138,7 @@ IS-IS paths to level-2 routers with hop-by-hop metric
    ---

    -## How to cinfigure VM - Example for VM on node pve1
    +## How to configure VM - Example for VM on node pve1

    - the vm has two interfaces, one bound to vmbr0 and one bound to vmbr100
    - this configuration is not intended to be migrated to other nodes (the guest addressing is node specific)
  30. scyto revised this gist Apr 29, 2025. 1 changed file with 3 additions and 2 deletions.
    5 changes: 3 additions & 2 deletions routed-vm-mesh-access.md
    @@ -43,8 +43,8 @@
    | **pve2** | `vmbr100` | VM bridge | `fc00:82::1/64` | `10.0.82.1/24` | 65520 |
    | **pve3** | `vmbr100` | VM bridge | `fc00:83::1/64` | `10.0.83.1/24` | 65520 |

    -## VM Bridge Setup
    +---
    ## VM Bridge Setup

    This builds on the work from the normal mesh gist and adds some additional bridges to enable routing.

    @@ -75,7 +75,6 @@ iface vmbr100 inet6 static
    > - ???
    ---

    ### FRR Configuration addition repeat on node 2 & 3 with changes from table

    Key things to note compared to the normal non-routed setup:
    @@ -137,6 +136,8 @@ IS-IS paths to level-2 routers with hop-by-hop metric
    > - This enabled openfabric routing on the vmbr100 you created earlier
    > - you won't see the IP address you added to vmbr100 - just the subnet
    +---

    ## How to cinfigure VM - Example for VM on node pve1

    - the vm has two interfaces, one bound to vmbr0 and one bound to vmbr100