Let's look at some basic kubectl output options.
Our intention is to list nodes (with their AWS InstanceId) and Pods (sorted by node).
We can start with:
kubectl get no
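A sketch of the two listings described above. The InstanceId is embedded in `.spec.providerID` on AWS (e.g. `aws:///us-east-1a/i-0123456789abcdef0`); older clusters may expose it under `.spec.externalID` instead, so adjust the JSONPath to what your nodes actually report:

```shell
# Nodes with their AWS instance id (taken from .spec.providerID)
kubectl get nodes -o custom-columns=NAME:.metadata.name,INSTANCE-ID:.spec.providerID

# Pods across all namespaces, sorted by the node they run on
kubectl get pods --all-namespaces -o wide --sort-by=.spec.nodeName
```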
// Updated: Aug. 15, 2024
// Run: node testRegex.js testText.txt
// Used in https://jina.ai/tokenizer
const fs = require('fs');
const util = require('util');

// Define variables for magic numbers
const MAX_HEADING_LENGTH = 7;
const MAX_HEADING_CONTENT_LENGTH = 200;
const MAX_HEADING_UNDERLINE_LENGTH = 200;
/**
 * Will generate versionCode from versionName that follows Semantic Versioning
 */
ext {
    /**
     * The application version is stored in the version variable
     * and should follow this policy:
     * X1.X2.X3-type-flavor, where each X is any digits and type is an optional alphabetical suffix.
     * X1 - major version
     * X2 - minor version
onServerStartup () {
  const { serverId, ip } = getServerInfo() // serverId does not change across restarts
  this.serverId = serverId

  // We don't have any routers or producers (yet). Clear any values in the DB related to our serverId
  clearSharedDB(serverId, 'routers')
  clearSharedDB(serverId, 'producers')

  // Update the DB with our serverId and ip so that others will know how to reach us
  registerServerInDB(serverId, ip)
}
The official guide for setting up Kubernetes using kubeadm works well for single-architecture clusters. The main problem that crops up is that the kube-proxy image defaults to the architecture of the master node (where kubeadm was run in the first place).
This causes issues when arm nodes join the cluster: they will try to execute the amd64 version of kube-proxy and fail.
It turns out that the pod running kube-proxy is configured using a DaemonSet. With a small edit to the configuration, it's possible to create multiple DaemonSets—one for each architecture.
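The edit described above can be sketched as follows. The DaemonSet name `kube-proxy-arm`, the image tag, and the arch label are assumptions; older clusters label nodes with `beta.kubernetes.io/arch` rather than `kubernetes.io/arch`, so check `kubectl get nodes --show-labels` first:

```shell
# Export the existing kube-proxy DaemonSet as a starting point
kubectl -n kube-system get ds kube-proxy -o yaml > kube-proxy-arm.yaml

# Edit kube-proxy-arm.yaml by hand: rename the DaemonSet (e.g. kube-proxy-arm),
# point the image at an arm build of kube-proxy, and add a nodeSelector such as:
#   nodeSelector:
#     kubernetes.io/arch: arm
kubectl -n kube-system create -f kube-proxy-arm.yaml

# Constrain the original DaemonSet to amd64 nodes so the two don't overlap
kubectl -n kube-system patch ds kube-proxy --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"amd64"}}}}}'
```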
Follow the instructions at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ for setting up the master node. I've been using Weave Net as the network plugin; it see
Picking the right architecture = Picking the right battles + Managing trade-offs
Press minus + shift + s and return to chop/fold long lines!
git stash list              # show all stashes
git stash apply stash@{0}   # apply a stash, keeping it in the list
git stash pop stash@{3}     # apply a stash and remove it from the list
git stash show stash@{2}    # show a summary of the changes in a stash