Run Cluster API demo
bastion.yaml:

bastion:
  enabled: true
  spec:
    flavor: m3.medium
    image:
      filter:
        name: Featured-Ubuntu22
    sshKeyName: your-key-name
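The bastion is an optional jump host attached under the OpenStackCluster spec; the demo script below copies this block into the generated manifest. Before using it, one might confirm the referenced flavor and key pair exist (a hedged check; replace your-key-name with a real key pair name):

openstack flavor show m3.medium
openstack keypair show your-key-name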
cinder-csi-values.yaml:

secret:
  enabled: true
  create: false
  name: cloud-config
clusterID: "your-workload-cluster-name"
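clusterID must match the name of the workload cluster so the Cinder CSI driver can tag and later clean up the volumes it creates. Since the demo installs yq, a sketch of setting it non-interactively (assumes mikefarah yq v4, as installed by snap):

yq -i '.clusterID = "wrk7s"' cinder-csi-values.yaml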
cloud.conf:

[Global]
auth-url=https://js2.jetstream-cloud.org:5000/v3/
application-credential-id=redacted-redacted-redacted
application-credential-secret=redacted-redacted-redacted-redacted-redacted-redacted
region=IU
domain-name=access
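cloud.conf is the INI-format twin of clouds.yaml; it becomes the cloud-config secret that the in-cluster OpenStack components read later in the demo. A hedged cross-check that the two files agree (again using yq):

yq '.clouds.openstack.auth.auth_url' clouds.yaml   # should match auth-url above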
clouds.yaml:

clouds:
  openstack:
    auth:
      auth_url: https://js2.jetstream-cloud.org:5000/v3/
      application_credential_id: "redacted-redacted-redacted"
      application_credential_secret: "redacted-redacted-redacted-redacted-redacted-redacted"
    region_name: "IU"
    interface: "public"
    identity_api_version: 3
    auth_type: "v3applicationcredential"
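It is worth confirming that the application credential actually authenticates before starting the demo (assumes the OpenStack CLI is installed and can find this clouds.yaml, e.g. in the current directory):

export OS_CLOUD=openstack
openstack token issue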
jupyterhub.values.yaml:

singleuser:
  storage:
    dynamic:
      storageClass: csi-cinder-sc-delete
hub:
  db:
    pvc:
      storageClassName: csi-cinder-sc-delete
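Both storage class references must name the class that the Cinder CSI chart creates. Once that chart is installed on the workload cluster, this check should succeed:

kubectl get storageclass csi-cinder-sc-delete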
openrc.sh:

#!/usr/bin/env bash
export OS_AUTH_TYPE=v3applicationcredential
export OS_AUTH_URL=https://js2.jetstream-cloud.org:5000/v3/
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME="IU"
export OS_INTERFACE=public
export OS_APPLICATION_CREDENTIAL_ID=redacted-redacted-redacted
export OS_APPLICATION_CREDENTIAL_SECRET=redacted-redacted-redacted-redacted-redacted-redacted
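openrc.sh carries the same application credential as clouds.yaml, but as OS_* environment variables for the OpenStack CLI. The demo script sources it before running openstack commands, e.g.:

source openrc.sh
openstack server list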
openstack-extra-env-vars.sh:

# The list of nameservers for the OpenStack Subnet being created.
# Set this value when you need to create a new network/subnet and access via DNS is required.
export OPENSTACK_DNS_NAMESERVERS="129.79.1.1"
# FailureDomain is the failure domain the machine will be created in.
export OPENSTACK_FAILURE_DOMAIN=nova
# The flavor reference for your control plane instances.
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR="m3.medium"
# The flavor reference for your worker node instances.
export OPENSTACK_NODE_MACHINE_FLAVOR="m3.medium"
# The name of the image to use for your server instances. If RootVolume is specified, this is ignored and rootVolume is used directly.
export OPENSTACK_IMAGE_NAME="ubuntu-jammy-kube-v1.30.0-240430-1242"
# The SSH key pair name
export OPENSTACK_SSH_KEY_NAME="your-key-name"
# The external network
export OPENSTACK_EXTERNAL_NETWORK_ID="3fe22c05-6206-4db2-9a13-44f04b6796e6"
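These variables feed clusterctl generate cluster. A hedged pre-flight check that the referenced image and external network actually exist:

source openrc.sh
source openstack-extra-env-vars.sh
openstack image show "$OPENSTACK_IMAGE_NAME" -c name -c status
openstack network show "$OPENSTACK_EXTERNAL_NETWORK_ID" -c name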
run-capi-demo.sh:
#!/usr/bin/env bash
#################################
# This demo is based on the Cluster API Quick Start: https://cluster-api.sigs.k8s.io/user/quick-start
#
# Save all the .example files in ~/staging on your Ubuntu 22 instance,
# rename them to remove .example, and customize them for your own purposes.
#
# Replace all occurrences of 'wrk7s' with the desired name of your workload cluster.
#
# This script uses demo-magic to run: https://github.com/paxtonhare/demo-magic
# Make sure demo-magic.sh is in the same directory as this script, then run ./run-capi-demo.sh
#
# To record an asciinema cast file, run it like this:
#
#   clear;reset;asciinema rec -i 2.5 -t "Cluster API demo" -c "./run-capi-demo.sh" capi-demo.cast
#
# Via: https://stackoverflow.com/questions/53969566/how-can-i-use-scripting-automation-to-record-asciinema-recordings
#################################
# To disable simulated typing:
#. ./demo-magic.sh -d
# To enable simulated typing:
. ./demo-magic.sh
TYPE_SPEED=20
#SHOW_CMD_NUMS=true
#NO_WAIT=true
clear
## SECTION: Hand-waving
# We are not going to cover:
#   installing software,
#   downloading your OpenStack credentials,
#   making an OpenStack image for the nodes, or
#   uploading an SSH key to OpenStack.
## SECTION: Required Software
# kubectl is the command line client for the Kubernetes API.
# kind stands for 'Kubernetes in Docker' and is a single-node Kubernetes cluster for testing and development.
# Helm is a Kubernetes 'package manager'.
# Lastly, and very importantly, clusterctl is the command line client for the Cluster API.
# You can use other ways to talk to the Cluster API, but this is where most people begin.
#
# P.S. yq, for parsing and querying YAML files.
# Install software
# kubectl:
#   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
#   sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
#
# kind:
#   curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
#   sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
#
# helm:
#   curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
#
# clusterctl:
#   curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.8.4/clusterctl-linux-amd64 -o clusterctl
#   sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
#
#   rm kubectl kind clusterctl
#
# sudo snap install yq
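#
# Optional sanity check after installing (hedged; version output formats vary):
#   kubectl version --client
#   kind version
#   helm version
#   clusterctl version
#   yq --version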
## SECTION: Housekeeping
# Create a working directory for the cluster
pe "mkdir -p ~/wrk7s"
pe "cd ~/wrk7s"
mkdir -p ~/backup
rsync -a ~/wrk7s ~/backup/wrk7s-$(date +%Y-%m-%dT%H%M)
rm -f ~/wrk7s/*
# Note: Copy some files into the working directory for setting up the cluster
cp ~/staging/* ~/wrk7s/
pe "ls -l"
## SECTION: Create Bootstrap K8s Cluster
p "# Create bootstrap K8s cluster using Kind (Kubernetes in Docker)"
pe "kind create cluster --kubeconfig ~/.kube/config-kind-mgmt --name mgmt"
pe "kind get clusters"
pe "export KUBECONFIG=~/.kube/config-kind-mgmt"
pe "kubectl config current-context"
pe "kubectl cluster-info"
pe "kubectl get nodes -A"
pe "kubectl get pods -A"
watch -n 5 'kubectl get pods -A'
## SECTION: Initialize the management cluster
p "# Initialize the management cluster"
p "## Add an extension that will help tidy up OpenStack resources"
pe "helm repo add cluster-api-janitor-openstack https://azimuth-cloud.github.io/cluster-api-janitor-openstack"
pe "helm repo update"
pe "helm upgrade cluster-api-janitor-openstack cluster-api-janitor-openstack/cluster-api-janitor-openstack --install --version \">=0.6.0\""
p "## Add the Cluster API services and custom resources to the bootstrap cluster, turning it into a CAPI management cluster"
pe "clusterctl init --infrastructure openstack"
p "## Check that the CAPI services are running"
p "kubectl get pods -A"
watch -n 5 'kubectl get pods -A'
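# A hedged way to see what clusterctl init installed: the core CAPI, kubeadm
# bootstrap/control-plane, and OpenStack (CAPO) provider deployments should each
# be running in their own namespace:
#   kubectl get deploy -A | grep -E 'capi-|capo-'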
## SECTION: Generate a Workload Cluster YAML
p "# Generate a workload cluster YAML manifest file"
p "## Check which environment variables we'll need to set to create a cluster on OpenStack infrastructure"
pe "clusterctl generate cluster --infrastructure openstack --list-variables wrk7s"
p "## Populate the necessary OpenStack environment variables"
pe "cat clouds.yaml | grep -v application_credential"
pe "env | grep ^OPENSTACK_ | cut -d \"=\" -f1"
p "## Download a convenience script to load environment variables from the clouds.yaml file"
pe "wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O ~/wrk7s/env.rc"
pe "source env.rc clouds.yaml openstack"
pe "env | grep ^OPENSTACK_ | cut -d \"=\" -f1"
pe "cat openstack-extra-env-vars.sh"
pe "source openstack-extra-env-vars.sh"
pe "env | grep ^OPENSTACK_ | cut -d \"=\" -f1"
#pe "clusterctl generate cluster wrk7s --kubernetes-version v1.30.0 --control-plane-machine-count=1 --worker-machine-count=1 --flavor without-lb > wrk7s.yaml"
p "## Create the cluster manifest now that the necessary OpenStack environment variables are set"
pe "clusterctl generate cluster wrk7s --kubernetes-version v1.30.0 --control-plane-machine-count=3 --worker-machine-count=1 > wrk7s.yaml"
#p "## Customize the cluster definition to add a bastion host (optional)"
#pe "vim wrk7s.yaml bastion.yaml"
# Add bastion host:
#   Copy the content of bastion.yaml into the spec: section of the manifest document whose kind is OpenStackCluster.
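# A scripted alternative to hand-editing (hedged sketch; assumes mikefarah yq v4,
# whose select() updates only the matching document in a multi-doc file):
#   yq -i 'select(.kind == "OpenStackCluster").spec += load("bastion.yaml")' wrk7s.yaml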
# Add provider for apiServerLoadBalancer (only necessary on Rescloud):
#   provider: amphora
## SECTION: Provision workload cluster
p "# Provision the workload cluster"
source openrc.sh
p "## Get OpenStack servers - should be none"
p "openstack server list"
openstack server list --name 'wrk7s-*' --fit-width
p "## Apply the workload cluster manifest file to create the cluster"
pe "kubectl apply -f wrk7s.yaml"
p "## This can take a while..."
p "## Get all the workload clusters, and use clusterctl to describe our new workload cluster"
p "kubectl get clusters && clusterctl describe cluster wrk7s"
watch -n 5 'kubectl get clusters; clusterctl describe cluster wrk7s'
# pe "kubectl get cluster"
# pe "clusterctl describe cluster wrk7s"
# Noisy way to describe the cluster
p "## Describe the workload cluster"
pe "kubectl describe cluster wrk7s"
# kubectl logs --follow -n capo-system capo-controller-manager-
# Get OpenStack servers
p "## Get OpenStack servers"
p "openstack server list"
openstack server list --name 'wrk7s-*' --fit-width
p "kubectl get kubeadmcontrolplane"
watch -n 5 'kubectl get kubeadmcontrolplane'
## SECTION: Use Workload Cluster
#
p "# Use the workload cluster (and finish configuring it from the inside)"
p "## Retrieve the workload cluster config"
pe "clusterctl get kubeconfig wrk7s > wrk7s.kubeconfig"
# Set permissions on the kubeconfig to avoid Helm complaining
chmod 600 wrk7s.kubeconfig
p "## Set KUBECONFIG to use the new workload cluster"
pe "export KUBECONFIG=~/wrk7s/wrk7s.kubeconfig"
pe "kubectl config current-context"
p "## Install the janitor extension to the workload cluster as well"
pe "helm upgrade cluster-api-janitor-openstack cluster-api-janitor-openstack/cluster-api-janitor-openstack --install --version \">=0.6.0\""
## SECTION: Install cloud controller manager
p "# Install the cloud controller manager in the workload cluster"
# The cloud controller manager connects Kubernetes to the OpenStack APIs: it
# initializes nodes with provider metadata and implements LoadBalancer Services.
p "## Create a secret in the workload cluster for the cloud controller manager to use"
pe "cat cloud.conf | grep -v application-credential"
pe "kubectl -n kube-system create secret generic cloud-config --from-file=cloud.conf"
| p "## Apply the cloud controler manager manifests" | |
| pe "kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml" | |
| pe "kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml" | |
| pe "kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml" | |
## SECTION: Install a CNI provider
p "# Install a CNI provider in the workload cluster"
pe "kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml"
p "echo This can take a while..."
p "openstack server list"
watch -n 5 "openstack server list --name 'wrk7s-*' --fit-width"
p "kubectl get nodes -A"
watch -n 5 'kubectl get nodes -A'
p "kubectl get pods -A"
watch -n 5 'kubectl get pods -A'
## SECTION: Install CSI Provider
p "# Install the CSI provider in the workload cluster"
# Next install the Cinder storage provider:
p "## Check whether this workload cluster currently has any storage classes"
pe "kubectl get storageclass"
pe "helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack"
pe "cat cinder-csi-values.yaml"
pe "helm install --namespace=kube-system cinder-csi cpo/openstack-cinder-csi --values cinder-csi-values.yaml"
pe "kubectl get storageclass"
# Back on the management cluster
p "echo Back on management cluster..."
pe "export KUBECONFIG=~/.kube/config-kind-mgmt"
pe "kubectl config current-context"
pe "kubectl get cluster"
watch -n 5 "kubectl get cluster"
pe "clusterctl describe cluster wrk7s"
watch -n 5 "clusterctl describe cluster wrk7s"
# Noisy way to describe the cluster
pe "kubectl describe cluster wrk7s"
## SECTION: Scale cluster
p "# Scale the workload cluster"
p "echo Back on management cluster..."
pe "export KUBECONFIG=~/.kube/config-kind-mgmt"
pe "kubectl config current-context"
p "## Get OpenStack servers - should be 1 worker machine now"
p "openstack server list"
openstack server list --name 'wrk7s-*' --fit-width
p "## Check the number of nodes"
pe "kubectl get nodes -A"
p "## Modify replicas for machine deployments from 1 to 3"
pe "vim wrk7s.yaml"
| pe "kubectl apply -f wrk7s.yaml" | |
| p "## Get OpenStack servers - should be 3, with 2 starting up" | |
| p "openstack server list" | |
| openstack server list --name 'wrk7s-*' --fit-width | |
| p "## Check the number of nodes" | |
| pe "kubectl get nodes -A" | |
| p "clusterctl describe cluster wrk7s" | |
| watch -n 5 "clusterctl describe cluster wrk7s" | |
| p "## Get OpenStack servers - should be 3 worker machines now" | |
| p "openstack server list" | |
| openstack server list --name 'wrk7s-*' --fit-width | |
| p "## Check the number of nodes" | |
| pe "kubectl get nodes -A" | |
| ## SECTION: Install JupyterHub | |
| p "# Install JupyterHub on workload cluster" | |
| p "echo Back on workload cluster..." | |
| pe "export KUBECONFIG=~/wrk7s/wrk7s.kubeconfig" | |
| pe "kubectl config current-context" | |
| p "## Check what OpenStack volumes exist" | |
| p "openstack volume list" | |
| openstack volume list --fit-width | grep wrk7s | |
| # Install JupyterHub using helm | |
| pe "helm repo add jupyterhub https://hub.jupyter.org/helm-chart/" | |
| #echo '"jupyterhub" has been added to your repositories' | |
| pe "vim jupyterhub.values.yaml" | |
| p "helm upgrade --cleanup-on-fail \\ | |
| --install jhub jupyterhub/jupyterhub \\ | |
| --namespace jh \\ | |
| --create-namespace \\ | |
| --version=3.3.8 \\ | |
| --values jupyterhub.values.yaml" | |
| helm upgrade --cleanup-on-fail \ | |
| --install jhub jupyterhub/jupyterhub \ | |
| --namespace jh \ | |
| --create-namespace \ | |
| --version=3.3.8 \ | |
| --values jupyterhub.values.yaml | |
| # pe "kubectl create deployment synergychat-web --image=bootdotdev/synergychat-web:latest" | |
| pe "kubectl get pods -n jh" | |
| watch -n 5 'kubectl get pods -n jh' | |
| p "## Check what PVCs & PVs exist. There should be at least one for the JupyterHub database." | |
| pe "kubectl get persistentvolumeclaims -A" | |
| pe "kubectl get persistentvolumes -A" | |
| p "## Check what OpenStack volumes exist now." | |
| p "openstack volume list" | |
| openstack volume list --fit-width | grep wrk7s | |
| # Forward proxy | |
| pe "kubectl --namespace=jh port-forward service/proxy-public 8000:http" | |
| p "## Open browser to 127.0.0.1:8000" | |
# kubectl port-forward synergychat-web-6b7889c476-7c67t 9090:8080
# cmd
p "## Check which pods exist in the jh namespace. There should be one for each JupyterHub user."
pe "kubectl get pods -n jh"
p "## Check what PVCs & PVs exist now. There should be one for each JupyterHub user, as well as the JupyterHub database."
pe "kubectl get persistentvolumeclaims -A"
pe "kubectl get persistentvolumes -A"
p "## Check that there are now new volumes for the JupyterHub users"
p "openstack volume list"
openstack volume list --fit-width | grep wrk7s
## SECTION: Cleanup
p "# Cleanup"
mkdir -p ~/backup
rsync -a ~/wrk7s ~/backup/wrk7s-$(date +%Y-%m-%dT%H%M)
p "## First delete JupyterHub"
pe "helm list -A"
pe "helm delete -n jh jhub"
pe "kubectl delete namespace jh"
pe "helm repo remove jupyterhub"
# Then delete the workload cluster
p "echo Back on management cluster..."
pe "export KUBECONFIG=~/.kube/config-kind-mgmt"
pe "kubectl config current-context"
# Option 1: Not really deleting the workload cluster. Do it out of band instead.
# p "kubectl delete cluster wrk7s"
# Option 2: Really delete the workload cluster
pe "kubectl delete cluster wrk7s"
# Then delete the management cluster
p "# IMPORTANT: Don't delete the management cluster until the workload cluster is truly gone. Otherwise you're in for manual cleanup of OpenStack resources..."
# Option 1: Not really deleting the management cluster. Do it out of band instead.
# p "kind delete cluster --name mgmt"
# echo 'Deleting cluster "mgmt" ...'
# echo 'Deleted nodes: ["mgmt-control-plane"]'
# p "rm ~/.kube/config-kind-mgmt"
# Option 2: Really delete the management cluster. But be sure the workload cluster is truly gone!
pe "kind delete cluster --name mgmt"
pe "rm ~/.kube/config-kind-mgmt"
pe "echo THE END"
p ""