@wrkode · forked from dgiebert/README.md · created September 28, 2024
Rancher Turtles + KubeVirt CAPI Provider with Harvester

RKE2 + KubeVirt CAPI Provider with Harvester

Installing

  1. Install Harvester link
  2. Prepare Harvester
    • Install the CDI Plugin into Harvester link
    • Create /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
      version = 2
      [plugins."io.containerd.internal.v1.opt"]
        path = "/var/lib/rancher/rke2/agent/containerd"
      [plugins."io.containerd.grpc.v1.cri"]
        stream_server_address = "127.0.0.1"
        stream_server_port = "10010"
        enable_selinux = false
        enable_unprivileged_ports = true
        enable_unprivileged_icmp = true
        sandbox_image = "index.docker.io/rancher/mirrored-pause:3.6"
        device_ownership_from_security_context = true
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        disable_snapshot_annotations = true
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".registry]
        config_path = "/var/lib/rancher/rke2/agent/etc/containerd/certs.d"
    • Restart rke2 systemctl restart rke2-server
    • Create a VirtualMachineImage so later VM deployments can clone a pre-downloaded base image instead of fetching it on first boot
      apiVersion: harvesterhci.io/v1beta1
      kind: VirtualMachineImage
      metadata:
        labels:
          harvesterhci.io/image-type: raw_qcow2
          harvesterhci.io/imageDisplayName: opensuse-leap-micro-6.0.qcow2
          harvesterhci.io/os-type: openSUSE
        namespace: harvester-public
        name: opensuse-leap-micro-6.0
      spec:
        displayName: opensuse-leap-micro-6.0.qcow2
        retry: 3
        sourceType: download
        storageClassParameters:
          migratable: 'true'
          numberOfReplicas: '3'
          staleReplicaTimeout: '30'
        url: >-
          https://download.opensuse.org/distribution/leap-micro/6.0/appliances/openSUSE-Leap-Micro.x86_64-Base-qcow.qcow2
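The image import can be applied and monitored against the Harvester cluster. A minimal sketch, assuming the manifest above is saved as `opensuse-leap-micro-6.0.yaml` (a hypothetical filename) and that the VirtualMachineImage CRD reports download progress in `.status.progress`:

```shell
# Assumptions: manifest saved locally; resource name fully qualified to avoid
# ambiguity with KubeVirt's own resources.
kubectl apply -f opensuse-leap-micro-6.0.yaml
kubectl -n harvester-public get virtualmachineimages.harvesterhci.io \
  opensuse-leap-micro-6.0 -o jsonpath='{.status.progress}'
```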
  3. Rancher Management Server
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      namespace: kube-system
      name: cert-manager
    spec:
      targetNamespace: cert-manager
      createNamespace: true
      version: 1.15.2
      chart: cert-manager
      repo: https://charts.jetstack.io
      valuesContent: |-
        installCRDs: true
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      namespace: kube-system
      name: rancher
    spec:
      targetNamespace: cattle-system
      createNamespace: true
      version: 2.9.1
      chart: rancher
      repo: https://releases.rancher.com/server-charts/latest
      valuesContent: |-
        hostname:  # set to the FQDN Rancher will be served at
        ingress:
          tls:
            source: letsEncrypt
        letsEncrypt:
          email:  # set to a valid email for Let's Encrypt registration
        replicas: 1
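Before continuing, it is worth waiting for both charts to finish rolling out. A sketch, assuming the deployment names follow the charts' defaults:

```shell
# Wait for cert-manager and Rancher to become ready (chart-default deployment names).
kubectl -n cert-manager rollout status deploy/cert-manager --timeout=300s
kubectl -n cattle-system rollout status deploy/rancher --timeout=600s
```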
  4. Rancher Turtles
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      namespace: kube-system
      name: rancher-turtles
    spec:
      repo: https://rancher.github.io/turtles
      targetNamespace: rancher-turtles-system
      createNamespace: true
      version: 0.11.0
      chart: rancher-turtles
  5. KubeVirt CAPI Provider
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: capk-system
    ---
    apiVersion: turtles-capi.cattle.io/v1alpha1
    kind: CAPIProvider
    metadata:
      name: kubevirt
      namespace: capk-system
    spec:
      name: kubevirt
      type: infrastructure
      version: v0.1.9
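A quick sanity check that Turtles fetched the provider and its controller started; the fully-qualified plural resource name for the Turtles CRD is an assumption here:

```shell
# Assumption: CAPIProvider resources are queryable under turtles-capi.cattle.io.
kubectl -n capk-system get capiproviders.turtles-capi.cattle.io kubevirt
kubectl -n capk-system get pods  # the provider's controller pod should be Running
```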
  6. Manually create this LoadBalancer within Harvester and extract the IP
    apiVersion: loadbalancer.harvesterhci.io/v1beta1
    kind: LoadBalancer
    metadata:
      name: cluster-1-cp
      namespace: cluster-1
    spec:
      backendServerSelector:
        cluster.x-k8s.io/cluster-name:
        - cluster-1
        cluster.x-k8s.io/role:
        - control-plane
      ipam: dhcp
      listeners:
        - backendPort: 6443
          port: 6443
          protocol: TCP
      workloadType: vm
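With the LoadBalancer created (using the Harvester cluster's kubeconfig), the assigned address can be exported for step 9. That the DHCP address lands in `.status.address` is an assumption worth verifying with `kubectl get -o yaml`:

```shell
# Assumption: the Harvester LoadBalancer publishes its address in .status.address.
export LB_IP=$(kubectl -n cluster-1 get loadbalancers.loadbalancer.harvesterhci.io \
  cluster-1-cp -o jsonpath='{.status.address}')
export LB_PORT=6443   # matches the listener defined above
echo "control-plane endpoint: ${LB_IP}:${LB_PORT}"
```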
  7. Use generate_addon.sh to extract and store the kubeconfig link
  8. Export the needed variables
    #!/bin/sh
    export CLUSTER_NAME=cluster-1
    export NAMESPACE=cluster-1
    export RKE2_VERSION=v1.30.4+rke2r1
    export HARVESTER_KUBECONFIG_B64=$(cat kubeconfig | envsubst | base64 -w0)
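A mangled `HARVESTER_KUBECONFIG_B64` is a common cause of CSI/CCM failures later on, so it is worth checking that the encoding round-trips. A minimal sketch (the stub kubeconfig is only a fallback so the check can be dry-run without a real file; `base64 -w0` assumes GNU coreutils):

```shell
#!/bin/bash
# Fallback stub so this check is runnable without a real kubeconfig (assumption).
[ -f kubeconfig ] || printf 'apiVersion: v1\nkind: Config\n' > kubeconfig

# Encode as in step 8 (envsubst omitted here), then verify a byte-for-byte round-trip.
HARVESTER_KUBECONFIG_B64=$(base64 -w0 < kubeconfig)
if echo "$HARVESTER_KUBECONFIG_B64" | base64 -d | cmp -s - kubeconfig; then
  echo "kubeconfig encodes cleanly"
else
  echo "encoding is corrupted" >&2
  exit 1
fi
```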
  9. Assemble the cluster config
    ---
    # Namespace to host the CAPI Cluster
    apiVersion: v1
    kind: Namespace
    metadata:
      name: "${NAMESPACE}"
      labels:
        cluster-api.cattle.io/rancher-auto-import: "true"
    ---
    # Connection Details for the Harvester Cluster
    apiVersion: v1
    kind: Secret
    metadata:
      name: "${CLUSTER_NAME}-kubeconfig-harvester"
      namespace: "kube-system"
    data: 
      kubeconfig: ${HARVESTER_KUBECONFIG_B64}
    ---
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: KubevirtCluster
    metadata:
      name: "${CLUSTER_NAME}"
      namespace: "${NAMESPACE}"
    spec:
      # Extract the IP from the previously created LoadBalancer
      controlPlaneEndpoint:
        host: "${LB_IP}"
        port: ${LB_PORT}  # must render as an integer after substitution, e.g. 6443
      infraClusterSecretRef:
        apiVersion: v1
        kind: Secret
        name: "${CLUSTER_NAME}-kubeconfig-harvester"
        namespace: "kube-system"
    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: "${CLUSTER_NAME}"
      namespace: "${NAMESPACE}"
      # Used to map external CCM and CSI for Harvester
      labels:
        ccm: external
        csi: external
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
            - "${POD_CIDR:-10.42.0.0/16}"
        services:
          cidrBlocks:
            - "${SERVICE_CIDR:-10.43.0.0/16}"
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
        kind: KubevirtCluster
        name: "${CLUSTER_NAME}"
        namespace: "${NAMESPACE}"
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
        kind: RKE2ControlPlane
        name: "${CLUSTER_NAME}-control-plane"
        namespace: "${NAMESPACE}"
    
    ---
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: KubevirtMachineTemplate
    metadata:
      name: "${CLUSTER_NAME}-control-plane"
      namespace: "${NAMESPACE}"
    spec:
      template:
        spec:
          providerName: ${PROVIDER_NAME:-harvester}
          virtualMachineTemplate:
            metadata:
              namespace: ${NAMESPACE}
            spec:
              # https://kubevirt.io/user-guide/user_workloads/templates/#using-datavolumes
              dataVolumeTemplates:
              - metadata:
                  name: k8s-disk
                spec:
                  storage:
                    volumeMode: Block
                    resources:
                      requests:
                        storage: ${CONTROL_PLANE_DISK_SIZE:-40Gi}
                    storageClassName: ${DISK_IMAGE:-longhorn-opensuse-leap-micro-6.0}
                    accessModes:
                      - ReadWriteMany
                  source:
                    blank: {}
                  # source:
                  #   http:
                  #     url: https://download.opensuse.org/distribution/leap-micro/6.0/appliances/openSUSE-Leap-Micro.x86_64-Base-qcow.qcow2
              runStrategy: RerunOnFailure
              template:
                spec:
                  domain:
                    cpu:
                      cores: ${CONTROL_PLANE_CORES:-4}
                      sockets: 1
                      threads: 1
                    resources:
                      limits:
                        memory: ${CONTROL_PLANE_MEMORY:-4Gi}
                        cpu: ${CONTROL_PLANE_CORES:-4}
                    features:
                      acpi:
                        enabled: true
                    devices:
                      disks:
                        - bootOrder: 1
                          disk:
                            bus: virtio
                          name: disk-0
                      inputs:
                        - bus: usb
                          name: tablet
                          type: tablet
                      interfaces:
                        - bridge: {}
                          model: virtio
                          name: default
                  evictionStrategy: LiveMigrateIfPossible
                  networks:
                    - multus:
                        networkName: ${NETWORK:-harvester-public/vlan-2}
                      name: default
                  volumes:
                  - dataVolume:
                      name: k8s-disk
                    name: disk-0
    ---
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: RKE2ControlPlane
    metadata:
      name: "${CLUSTER_NAME}-control-plane"
      namespace: "${NAMESPACE}"
    spec:
      replicas: ${CONTROL_PLANE_MACHINE_COUNT:-3}
      agentConfig:
        version: "${RKE2_VERSION}"
        additionalUserData:
          config: |
            # Add additional users for connection
            users:
              - name: dgiebert
                sudo: ALL=(ALL) NOPASSWD:ALL
                ssh-authorized-keys:
                  - 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOY5nEt0qssNTouZzN4LPg8M3OyDAwGDDvreTUMA6hQ5'
            # Add additional packages for longhorn
            packages:
            - bash-completion
            - open-iscsi
            - nfs-client
      files:
        # Export kubectl settings for rke2
        - path: "/etc/profile.d/rke2.sh"
          owner: "root:root"
          permissions: "0640"
          content: |
            PATH=/var/lib/rancher/rke2/bin:$PATH
            export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
            alias k=kubectl
            complete -o default -F __start_kubectl k
            source <(kubectl completion bash)
        # Write needed file for the CSI and CCM
        - path: "/var/lib/rancher/rke2/etc/config-files/cloud-provider-config"
          owner: "root:root"
          permissions: "0640"
          content: ${HARVESTER_KUBECONFIG_B64}
          encoding: base64
      serverConfig:
        cni: cilium
        # Disable the built in CCM
        cloudProviderName: external
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
        kind: KubevirtMachineTemplate
        name: "${CLUSTER_NAME}-control-plane"
        namespace: "${NAMESPACE}"
    ---
    apiVersion: bootstrap.cluster.x-k8s.io/v1alpha1
    kind: RKE2ConfigTemplate
    metadata:
      name: "${CLUSTER_NAME}-worker"
      namespace: "${NAMESPACE}"
    spec:
      template:
        spec:
          agentConfig:
            version: ${RKE2_VERSION}
    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      name: "${CLUSTER_NAME}-workers"
      namespace: ${NAMESPACE}
    spec:
      clusterName: "${CLUSTER_NAME}"
      replicas: ${WORKER_MACHINE_COUNT:-0}
      selector:
        matchLabels:
      template:
        spec:
          clusterName: "${CLUSTER_NAME}"
          bootstrap:
            configRef:
              apiVersion: bootstrap.cluster.x-k8s.io/v1alpha1
              kind: RKE2ConfigTemplate
              name: "${CLUSTER_NAME}-worker"
              namespace: ${NAMESPACE}
          infrastructureRef:
            name: "${CLUSTER_NAME}-control-plane"  # reuses the control-plane VM template; define a separate KubevirtMachineTemplate for differently sized workers
            namespace: ${NAMESPACE}
            apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
            kind: KubevirtMachineTemplate
    ---
    apiVersion: addons.cluster.x-k8s.io/v1beta1
    kind: ClusterResourceSet
    metadata:
      name: "${CLUSTER_NAME}-harvester-csi"
      namespace: ${NAMESPACE}
    spec:
      clusterSelector:
        matchLabels:
          csi: external
      resources:
      - kind: ConfigMap
        name: "${CLUSTER_NAME}-harvester-csi-addon"
      strategy: Reconcile
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: "${CLUSTER_NAME}-harvester-csi-addon"
      namespace: ${NAMESPACE}
    data:
      harvester-csi-deployment.yaml: |
        apiVersion: helm.cattle.io/v1
        kind: HelmChart
        metadata:
          name: harvester-csi-driver
          namespace: kube-system
        spec:
          targetNamespace: kube-system
          repo: https://charts.harvesterhci.io/
          chart: harvester-csi-driver
          version: 0.1.18
    ---
    apiVersion: addons.cluster.x-k8s.io/v1beta1
    kind: ClusterResourceSet
    metadata:
      name: "${CLUSTER_NAME}-harvester-ccm"
      namespace: ${NAMESPACE}
    spec:
      clusterSelector:
        matchLabels:
          ccm: external
      resources:
      - kind: ConfigMap
        name: "${CLUSTER_NAME}-ccm-addon"
      strategy: Reconcile
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: "${CLUSTER_NAME}-ccm-addon"
      namespace: "${NAMESPACE}"
    data:
      harvester-cloud-provider-deploy.yaml: |
        apiVersion: helm.cattle.io/v1
        kind: HelmChart
        metadata:
          name: harvester-cloud-provider
          namespace: kube-system
        spec:
          targetNamespace: kube-system
          bootstrap: true
          repo: https://charts.harvesterhci.io/
          chart: harvester-cloud-provider
          version: 0.2.2
          valuesContent: |-
            cloudConfigPath: "/var/lib/rancher/rke2/etc/config-files/cloud-provider-config"
  10. Save the assembled manifests from step 9 as harvester-kubevirt.yaml, then get it started: clusterctl generate yaml --from harvester-kubevirt.yaml | kubectl apply -f -
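Once applied, reconciliation can be followed from the management cluster; machines appear as Harvester boots the KubeVirt VMs, and the namespace label from step 9 auto-imports the resulting cluster into Rancher:

```shell
# Watch CAPI bring the cluster up (management-cluster kubeconfig).
kubectl -n cluster-1 get cluster,machines,kubevirtmachines
clusterctl describe cluster cluster-1 -n cluster-1
```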