
@mikejoh
Last active October 22, 2022 15:24

Revisions

  1. mikejoh revised this gist Sep 24, 2018. 1 changed file with 0 additions and 60 deletions.
    60 changes: 0 additions & 60 deletions cka-preparation.md
    @@ -509,66 +509,6 @@ kubectl --context=mike-admin get pods -n default
    Error from server (Forbidden): pods is forbidden: User "mikeadmin" cannot list pods in the namespace "default"
    ```

    # kubectl one-liners

    Enable `kubectl` completion (needs the `bash-completion` package):
    ```
    source <(kubectl completion bash)
    ```
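    To make completion survive new shells you can, for example, append the same line to your `.bashrc`:
    ```
    echo 'source <(kubectl completion bash)' >> ~/.bashrc
    ```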
    Dry run that outputs a Service (because of `--expose`) and a Deployment as YAML:
    ```
    kubectl run apache --image=apache \
    --port=80 \
    --replicas=3 \
    --restart='Always' \
    --expose \
    --requests='cpu=100m,memory=256Mi' \
    --limits='cpu=200m,memory=512Mi' \
    --labels=app=apache,version=1 \
    --dry-run=true \
    -o yaml
    ```
    In a running container run `date`:
    ```
    kubectl exec POD -- bash -c "date"
    kubectl exec POD -- date
    kubectl exec POD date
    ```
    Remove label `this` from a pod:
    ```
    kubectl label pod POD this-
    ```
    Add label `that=thing` to a pod:
    ```
    kubectl label pod POD that=thing
    ```
    Select pods based on selector across all namespaces:
    ```
    kubectl get pods --all-namespaces --selector this=label
    ```
    Create a single Pod, without a Deployment or a ReplicaSet (`--restart=Never`):
    ```
    kubectl run nginx --image=nginx --restart=Never
    ```
    Create a Deployment (and a ReplicaSet) running a single Pod replica:
    ```
    kubectl run nginx --image=nginx --replicas=1
    ```
    How the `--restart` flag behaves with `kubectl run`:
    ```
    --restart=Never     Creates a single Pod without a Deployment or a ReplicaSet. You can achieve this by creating a single Pod manifest and applying it (see the sketch after this block).
    --restart=OnFailure Creates a Pod and a Job.
    --restart=Always Default, creates a Deployment and a ReplicaSet object.
    ```
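    A minimal sketch of the single Pod manifest mentioned above (the `--restart=Never` equivalent); the name and image are placeholders:
    ```
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      restartPolicy: Never
    ```
    Apply it with `kubectl apply -f pod.yaml`.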
    Copy file to/from a Pod:
    ```
    kubectl cp POD:/path/to/file.txt ./file.txt
    kubectl cp $HOME/file.txt POD:/path/to/file.txt
    ```
    Patch a Deployment with a new image:
    ```
    kubectl patch deployment nginx -p '{"spec":{"template":{"spec":{"containers":[{ "name":"nginx", "image":"nginx:1.13.1"}]}}}}'
    ```
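    The same image change can also be done with `kubectl set image`:
    ```
    kubectl set image deployment/nginx nginx=nginx:1.13.1
    ```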
    # Manifests

    Almost all Kubernetes objects and their manifests look the same, at least in the first few lines:
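    A sketch of those first few common fields, using a Deployment as an example (name, namespace and labels are placeholders):
    ```
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example
      namespace: default
      labels:
        app: example
    ```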
  2. mikejoh revised this gist Aug 3, 2018. 1 changed file with 0 additions and 57 deletions.
    57 changes: 0 additions & 57 deletions cka-preparation.md
    @@ -620,63 +620,6 @@ spec:
    - "10"
    EOF
    ```
    # gcloud one-liners

    ## Fetch all Pod logs with severity error from StackDriver (parsing with `jq`)

    ```
    gcloud logging read "resource.labels.pod_id:backend AND severity:ERROR" --order asc --format json | jq '.[].textPayload'
    ```

    ## Stop all instances
    ```
    gcloud compute instances stop $(gcloud compute instances list | grep -v "NAME" | awk '{ print $1}')
    ```
    ## Start all instances
    ```
    gcloud compute instances start --async $(gcloud compute instances list | grep -v NAME | awk '{ print $1 }')
    ```
    ## Manually create a network (`--subnet-mode custom`)
    ```
    gcloud compute networks create k8s --subnet-mode custom
    ```
    ## Create a subnet within a network
    ```
    gcloud compute networks subnets create k8s-nodes --network k8s --range 10.0.0.0/24
    ```
    ## Change configuration settings, set project for gcloud
    ```
    gcloud config set core/project cka-exam-prep
    ```
    ## Add firewall allowing internal traffic between components and pod networks
    ```
    gcloud compute firewall-rules create k8s-cluster-fw --network k8s --allow tcp,udp,icmp --source-ranges 10.0.0.0/24,10.100.0.0/16
    ```
    ## Add firewall allowing external traffic to the network (port 6443 is used by the API server for TLS)
    ```
    gcloud compute firewall-rules create k8s-allow-external \
    --allow tcp:22,tcp:6443,icmp \
    --network k8s \
    --source-ranges 0.0.0.0/0
    ```
    ## List all firewall rules filtering on a specific network
    ```
    gcloud compute firewall-rules list --filter="network:k8s"
    ```
    ## Allocate an external IP address
    ```
    gcloud compute addresses create k8s-external --region $(gcloud config get-value compute/region)
    ```
    ## List allocated external IP addresses
    ```
    gcloud compute addresses list
    NAME REGION ADDRESS STATUS
    k8s-external europe-west1 1.2.3.4 RESERVED
    ```
    ## Query the metadata server from within a compute instance and fetch its IP address
    ```
    curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip
    ```

    # systemd

  3. mikejoh revised this gist Aug 3, 2018. 1 changed file with 6 additions and 0 deletions.
    6 changes: 6 additions & 0 deletions cka-preparation.md
    @@ -622,6 +622,12 @@ EOF
    ```
    # gcloud one-liners

    ## Fetch all Pod logs with severity error from StackDriver (parsing with `jq`)

    ```
    gcloud logging read "resource.labels.pod_id:backend AND severity:ERROR" --order asc --format json | jq '.[].textPayload'
    ```

    ## Stop all instances
    ```
    gcloud compute instances stop $(gcloud compute instances list | grep -v "NAME" | awk '{ print $1}')
  4. mikejoh revised this gist Jun 13, 2018. 1 changed file with 66 additions and 1 deletion.
    67 changes: 66 additions & 1 deletion cka-preparation.md
    @@ -49,14 +49,39 @@ Location | Component | Comment

    # kubeadm

    In this part I'll try to set up a Kubernetes cluster using `kubeadm` on a couple of instances in GCE (with an external etcd cluster).
    In this part I'll try to set up a Kubernetes cluster using `kubeadm` on a couple of instances in GCE (with an external etcd cluster). I'm using Ubuntu-based instances in my setups.

    The cluster I will be creating looks like this:
    * 1 etcd
    * 1 master
    * 2 worker
    * flannel

    Every node besides the etcd one will need the following components to be able to run in and join a Kubernetes cluster:
    * `kubectl`
    * `kubelet`
    * `kubeadm`
    * Docker

    To install the Kubernetes components needed you'll need to run the following (as root):
    ```
    apt-get update && apt-get install -y apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb http://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    apt-get update
    apt-get install -y kubelet kubeadm kubectl
    ```

    And for Docker you can run the following (as root):
    ```
    curl -fsSL get.docker.com -o get-docker.sh
    bash get-docker.sh
    ```

    You'll get the latest CE version of Docker installed (18.x+).
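    To verify what you ended up with, and that the daemon is running, something like this works:
    ```
    docker version --format '{{.Server.Version}}'
    systemctl is-active docker
    ```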

    ### 1. Generate compute instances in GCE

    TODO!
    @@ -232,6 +257,46 @@ Oh, and remember, you can add extra args and configuration to all of the compone
    ```
    kubeadm init --config=master_config.yaml --dry-run
    ```
    See the last section of this guide for more info on what `kubeadm` does behind the scenes.

    3. Now apply the manifest without the `--dry-run` flag. If everything went fine you'll get output that you'll use to join your nodes to the cluster. It looks similar to this:
    ```
    kubeadm join 10.0.0.11:6443 --token <string> --discovery-token-ca-cert-hash sha256:<string> <string>
    ```

    4. Run the following to make sure you can run `kubectl` against the API server on the master:
    ```
    sudo cp /etc/kubernetes/admin.conf $HOME/
    sudo chown $(id -u):$(id -g) $HOME/admin.conf
    export KUBECONFIG=$HOME/admin.conf
    ```

    5. Proceed with joining the nodes to the cluster.

    ### 3. Worker nodes

    1. Make sure you have Docker and the following components installed on all worker nodes:
    * `kubectl`
    * `kubelet`
    * `kubeadm`

    2. Use the output from the `kubeadm init` command you ran on the master to join each node to the cluster. If you've misplaced that output, see the sketch below for regenerating the join command.
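    A sketch for regenerating a join command on the master (works with reasonably recent `kubeadm` versions):
    ```
    kubeadm token create --print-join-command
    ```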

    ### 4. Using the cluster

    1. On the master run `kubectl get nodes`; none of the nodes will be ready yet (they'll be in a `NotReady` state). This is because you don't have an overlay network installed. For this initial cluster I'll be using `flannel`. On the master run:
    ```
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
    ```

    2. Now when you run `kubectl get nodes` you'll see (after a while) that the state has changed:
    ```
    root@master-0:~# kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    master-0 Ready master 40m v1.10.4
    worker-0 Ready <none> 30m v1.10.4
    worker-1 Ready <none> 3m v1.10.4
    ```

    #### Analysis of `--dry-run`

  5. mikejoh revised this gist Jun 13, 2018. 1 changed file with 14 additions and 3 deletions.
    17 changes: 14 additions & 3 deletions cka-preparation.md
    @@ -220,6 +220,8 @@ etcd:
      caFile: /etc/kubernetes/pki/etcd/ca.pem
      certFile: /etc/kubernetes/pki/etcd/client.pem
      keyFile: /etc/kubernetes/pki/etcd/client-key.pem
    networking:
      podCidr: 10.100.0.0/16
    apiServerCertSANs:
    - 10.0.0.11
    EOF
    @@ -282,10 +284,19 @@ OK, so the following happened:
    13. The RBAC `RoleBinding` that references the `Role` created earlier. `subjects` is a `User` with the name `system:anonymous`
    14. A `ServiceAccount` is created with the `name`: `kube-dns`
    15. A `Deployment` is created for `kube-dns`
    - There are a total of three containers running in this Pod: `kube-dns`, `dnsmasq` and `sidecar`.
    - `selector` is set to `matchLabels` with `k8s-app=kube-dns`
    - A `rollingUpdate` strategy are configured with `maxSurge=10%` and `maxUnavailable=0`. EXPLAIN MORE!

    **CONTINUE HERE and on the `kube-dns` component
    - A `rollingUpdate` strategy is configured with `maxSurge=10%` and `maxUnavailable=0`, which means that during a rolling update the Deployment may run 10% extra new Pods (i.e. up to 110% of the desired count) while keeping 0% of the Pods unavailable (see the sketch after this list).
    - The `spec` defines `affinity` (which will eventually replace `nodeSelector`). In this case it's a `nodeAffinity` with a 'hard' requirement of the type `requiredDuringSchedulingIgnoredDuringExecution`, requiring that the node architecture is `amd64`
    - There's a configured `livenessProbe` and a `readinessProbe`. The liveness probe restarts the container on failure; the readiness probe determines when the container is ready to accept traffic
    - Container ports configured are: 10053 TCP/UDP and 10055 for metrics
    - Resource limits and requests are set for memory and CPU
    16. A `Service` is created for handling DNS traffic
    - A clusterIP of 10.99.0.10 is added. By default you'll have a service network of 10.96.0.0/12; I changed this to 10.99.0.0/24 in my master_config.yaml manifest.
    17. A `ServiceAccount` for `kube-proxy` is created
    18. A `ConfigMap` containing the kube-proxy configuration
    19. A `DaemonSet` for the kube-proxies is created.
    20. Last but not least a `ClusterRoleBinding` is created for kube-proxy, referencing one of the system-provided ClusterRoles, `node-proxier`. The subject for this binding is the service account created earlier.
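    As a rough sketch (not the exact manifest that `kubeadm` renders), the rolling update strategy and the 'hard' node affinity described above look roughly like this inside a Deployment spec; the `beta.kubernetes.io/arch` label key is an assumption based on node labels of that era:
    ```
    spec:
      strategy:
        rollingUpdate:
          maxSurge: 10%
          maxUnavailable: 0
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                    - amd64
    ```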

    # minikube

  6. mikejoh revised this gist Jun 12, 2018. 1 changed file with 15 additions and 0 deletions.
    15 changes: 15 additions & 0 deletions cka-preparation.md
    @@ -271,6 +271,21 @@ OK, so the following happened:
    _Static Pods are managed directly by `kubelet` daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it._

    If you're running Kubernetes clustered and Static Pods on each node you probably want to create a `DaemonSet` instead.
    5. Wait for the API server `/healthz` endpoint to return `ok`
    6. Store and create the configuration used in the `ConfigMap` called `kubeadm-config` (within the `kube-system` namespace). This configuration is what we created earlier in the Master configuration manifest and passed to `kubeadm`; note that the version added as a `ConfigMap` holds all of the other default configuration (that we didn't touch).
    7. `master-0` will be marked as master by adding a `label` and a `taint`. The taint basically means that the master node should not have Pods scheduled on it.
    8. A `secret` will be created with the bootstrap token.
    9. A RBAC `ClusterRoleBinding` will be created referencing the `ClusterRole`: `system:node-bootstrapper` and the `subjects` would be a `Group` with the `name`: `system:bootstrappers:kubeadm:default-node-token`. This will allow Node Bootstrap tokens to post `CSR` in order for nodes to get long term certificate credentials.
    10. Another RBAC `ClusterRoleBinding` will be created to allow the `csrapprover` controller to automatically approve `CSR`s from a Node Bootstrap Token. The `ClusterRole` referenced is: `system:certificates.k8s.io:certificatesigningrequests:nodeclient`. Subjects are a `Group` with the `name`: `system:nodes`
    11. A `ConfigMap` called `cluster-info` will be created in the `kube-public` namespace. This ConfigMap will hold the cluster info which will be the API server URL and also the CA data (public certificate?)
    12. A RBAC `Role` will be created that allows `get` on the `cluster-info` ConfigMap.
    13. The RBAC `RoleBinding` that references the `Role` created earlier. `subjects` is a `User` with the name `system:anonymous`
    14. A `ServiceAccount` is created with the `name`: `kube-dns`
    15. A `Deployment` is created for `kube-dns`
    - `selector` is set to `matchLabels` with `k8s-app=kube-dns`
    - A `rollingUpdate` strategy are configured with `maxSurge=10%` and `maxUnavailable=0`. EXPLAIN MORE!

    **CONTINUE HERE and on the `kube-dns` component

    # minikube

  7. mikejoh revised this gist Jun 12, 2018. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions cka-preparation.md
    @@ -267,6 +267,7 @@ OK, so the following happened:
    - `hostNetwork` are set to `true`
    4. After these three components `kubeadm` will wait for the `kubelet` to boot up the control plane as `Static Pods`.
    - Quick note on Static Pods:

    _Static Pods are managed directly by `kubelet` daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it._

    If you're running Kubernetes clustered and Static Pods on each node you probably want to create a `DaemonSet` instead.
  8. mikejoh revised this gist Jun 12, 2018. 1 changed file with 3 additions and 5 deletions.
    8 changes: 3 additions & 5 deletions cka-preparation.md
    @@ -266,12 +266,10 @@ OK, so the following happened:
    - A CPU request limit are configured
    - `hostNetwork` are set to `true`
    4. After these three components `kubeadm` will wait for the `kubelet` to boot up the control plane as `Static Pods`.
    - Quick note on Static Pods:
    _Static Pods are managed directly by `kubelet` daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it._

    Quick note on Static Pods:

    _Static Pods are managed directly by `kubelet` daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it._

    If you're running Kubernetes clustered and Static Pods on each node you probably want to create a `DaemonSet` instead.
    If you're running Kubernetes clustered and Static Pods on each node you probably want to create a `DaemonSet` instead.

    # minikube

  9. mikejoh revised this gist Jun 12, 2018. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions cka-preparation.md
    @@ -267,11 +267,11 @@ OK, so the following happened:
    - `hostNetwork` are set to `true`
    4. After these three components `kubeadm` will wait for the `kubelet` to boot up the control plane as `Static Pods`.

    Quick note on Static Pods:
    Quick note on Static Pods:

    _Static Pods are managed directly by `kubelet` daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it._
    _Static Pods are managed directly by `kubelet` daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it._

    If you're running Kubernetes clustered and Static Pods on each node you probably want to create a `DaemonSet` instead.
    If you're running Kubernetes clustered and Static Pods on each node you probably want to create a `DaemonSet` instead.

    # minikube

  10. mikejoh revised this gist Jun 12, 2018. 1 changed file with 43 additions and 8 deletions.
    51 changes: 43 additions & 8 deletions cka-preparation.md
    @@ -224,19 +224,54 @@ apiServerCertSANs:
    - 10.0.0.11
    EOF
    ```
    Oh, and remember, you can add extra args and configuration to all of the components through this file.

    2. Apply the manifest, but this first time with the `--dry-run` flag to see what the h3ll is going on (there's a lot happening in the background):
    ```
    kubeadm init --config=master_config.yaml --dry-run
    ```

    #### Analyze

    * I got a warning about the version of Docker i had installed which was the latest CE (18+), recommended version are 17.03, i think i will revisit this for one reason or the other later on. I'll keep this note here for reference.

    When running the `kubeadm init` command with `--dry-run` the following will happen:
    1. The `kube-apiserver` Pod will be created. Alot of configuration flags will be sent as `command` to the Pod. A couple of them are the ones we added to the Master configuration manifest regarding etcd.
    - asdf

    #### Analysis of `--dry-run`

    Notes:
    * I got a warning about the version of Docker I had installed, which was the latest CE (18+); the recommended version is 17.03. I think I will revisit this for one reason or another later on. I'll keep this note here for reference
    * All manifest files will be written to the following directory on the Master node: `/etc/kubernetes/manifests/`
    * The namespace `kube-system` will be the home for all components
    * `kubeadm` creates all the certificates you'll need to secure your cluster and cluster components
    * The namespace `kube-system` will be the home for all componens
    * `kubeconfig` files are created and used by e.g. components

    OK, so the following happened:
    1. The `kube-apiserver` Pod will be created:
    - The API Server services REST operations and provides the frontend to the cluster’s shared state through which all other components interact.
    - A lot of configuration flags will be sent as `command` to the Pod. A couple of them are the ones we added to the Master configuration manifest regarding etcd
    - A `livenessProbe` is configured, probing the `/healthz` path on port 6443
    - Two `labels` are configured: `component=kube-apiserver` and `tier=control-plane`
    - A CPU request is configured
    - Two read-only `volumeMounts` are configured, source directories are: `/etc/ssl/certs` and `/etc/kubernetes/pki`
    2. The `kube-controller-manager` Pod will be created:
    - The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes.
    - The `--controllers` flag is set to `*,bootstrapsigner,tokencleaner`, which basically means that all available controllers are enabled
    - The `--leader-elect` flag is set to `true`, which means that a leader election will be started; this is relevant when running replicated components for high availability
    - A `livenessProbe` is configured, probing the `/healthz` path on port 10252 (localhost)
    - Two `labels` are configured: `component=kube-controller-manager` and `tier=control-plane`
    - A CPU request is configured
    - Besides mounting the same source directories as the `kube-apiserver` Pod, this one also mounts `/etc/kubernetes/controller-manager.conf` and `/usr/libexec/kubernetes/kubelet-plugins/volume/exec`. The latter is for adding volume plugins on the fly to `kubelet`.
    - `hostNetwork` is set to `true`, which means that the controller-manager Pod shares the `master-0` instance's network stack
    3. The `kube-scheduler` Pod will be created:
    - The Kubernetes scheduler is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on.
    - The only `volumeMount` is the one for mounting the `scheduler.conf` file
    - Two `labels` are configured: `component=kube-scheduler` and `tier=control-plane`
    - A `livenessProbe` is configured, probing the `/healthz` path on port 10251 (localhost)
    - A CPU request is configured
    - `hostNetwork` is set to `true`
    4. After these three components `kubeadm` will wait for the `kubelet` to boot up the control plane as `Static Pods`.

    Quick note on Static Pods:

    _Static Pods are managed directly by `kubelet` daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it._

    If you're running Kubernetes clustered and Static Pods on each node you probably want to create a `DaemonSet` instead.
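    As a minimal sketch (name and image are placeholders), a static Pod is just an ordinary Pod manifest dropped into the kubelet's manifest directory, here assuming the `/etc/kubernetes/manifests/` path mentioned above:
    ```
    cat > /etc/kubernetes/manifests/static-web.yaml <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
    EOF
    ```
    The kubelet picks the file up on its own; deleting the file removes the Pod.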

    # minikube

  11. mikejoh revised this gist Jun 12, 2018. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cka-preparation.md
    @@ -235,7 +235,7 @@ kubeadm init --config=master_config.yaml --dry-run

    When running the `kubeadm init` command with `--dry-run` the following will happen:
    1. The `kube-apiserver` Pod will be created. Alot of configuration flags will be sent as `command` to the Pod. A couple of them are the ones we added to the Master configuration manifest regarding etcd.
    - asdf
    - asdf


    # minikube
  12. mikejoh revised this gist Jun 12, 2018. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cka-preparation.md
    @@ -235,7 +235,7 @@ kubeadm init --config=master_config.yaml --dry-run

    When running the `kubeadm init` command with `--dry-run` the following will happen:
    1. The `kube-apiserver` Pod will be created. Alot of configuration flags will be sent as `command` to the Pod. A couple of them are the ones we added to the Master configuration manifest regarding etcd.
    1. asdf
    - asdf


    # minikube
  13. mikejoh revised this gist Jun 12, 2018. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cka-preparation.md
    @@ -235,7 +235,7 @@ kubeadm init --config=master_config.yaml --dry-run

    When running the `kubeadm init` command with `--dry-run` the following will happen:
    1. The `kube-apiserver` Pod will be created. Alot of configuration flags will be sent as `command` to the Pod. A couple of them are the ones we added to the Master configuration manifest regarding etcd.
    * asdf
    1. asdf


    # minikube
  14. mikejoh revised this gist Jun 12, 2018. 1 changed file with 35 additions and 2 deletions.
    37 changes: 35 additions & 2 deletions cka-preparation.md
    @@ -61,7 +61,7 @@ The cluster i will be creating will look like this:

    TODO!

    ### 2. Run etcd
    ### 2. One-node etcd cluster

    1. Login to etcd-0
    2. Install cfssl with `apt-get install golang-cfssl`
    @@ -203,7 +203,40 @@ EOF
    * `client-key.pem`
    Place them in the following `master-0` directory: `/etc/kubernetes/pki/etcd`. The client certificate and key will be used by the API server when connecting to etcd, this information will be passed to `kubeadm` through a Master configuration manifest, see the next step.

    9. Create the Master configuration manifest file
    ### 2. Master node
    * After `kubeadm` is done, the master node will run the needed Kubernetes components (all but etcd) in Docker containers (or Pods) within Kubernetes.

    1. Create the Master configuration manifest file. My configuration file looks like this; yours will look somewhat different with regard to the IP addresses:
    ```
    cat > master_config.yaml <<EOF
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: 10.0.0.11
      controlPlaneEndpoint: 10.0.0.11
    etcd:
      endpoints:
      - https://10.0.0.10:2379
      caFile: /etc/kubernetes/pki/etcd/ca.pem
      certFile: /etc/kubernetes/pki/etcd/client.pem
      keyFile: /etc/kubernetes/pki/etcd/client-key.pem
    apiServerCertSANs:
    - 10.0.0.11
    EOF
    ```
    2. Apply the manifest, but this first time with the `--dry-run` flag to see what the h3ll is going on (there's a lot happening in the background):
    ```
    kubeadm init --config=master_config.yaml --dry-run
    ```

    #### Analyze

    * I got a warning about the version of Docker i had installed which was the latest CE (18+), recommended version are 17.03, i think i will revisit this for one reason or the other later on. I'll keep this note here for reference.

    When running the `kubeadm init` command with `--dry-run` the following will happen:
    1. The `kube-apiserver` Pod will be created. Alot of configuration flags will be sent as `command` to the Pod. A couple of them are the ones we added to the Master configuration manifest regarding etcd.
    * asdf


    # minikube

  15. mikejoh revised this gist Jun 12, 2018. 1 changed file with 11 additions and 3 deletions.
    14 changes: 11 additions & 3 deletions cka-preparation.md
    @@ -160,8 +160,10 @@ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=serv
    ```
    Note that we're using another profile to create the server certificate, the profile was created along with the CA certificate.

    6. Now create the systemd unit file needed:
    6. Now create the systemd unit file needed, remember to pass the instance private IP address to the `--listen-client-urls` flag:
    ```
    export PRIVATE_IP=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
    cat > /etc/systemd/system/etcd.service <<EOF
    [Unit]
    Description=etcd
    @@ -171,7 +173,7 @@ Documentation=https://github.com/coreos/etcd
    ExecStart=/usr/local/bin/etcd \
    --name=etcd0 \
    --data-dir=/var/lib/etcd \
    --listen-client-urls=https://localhost:2379 \
    --listen-client-urls=https://$PRIVATE_IP:2379,https://localhost:2379 \
    --advertise-client-urls=https://localhost:2379 \
    --cert-file=/etc/kubernetes/pki/etcd/server.pem \
    --key-file=/etc/kubernetes/pki/etcd/server-key.pem \
    @@ -195,7 +197,13 @@ EOF
    systemctl start etcd
    }
    ```
    8. TODO! Continue here: https://kubernetes.io/docs/setup/independent/high-availability/#acquire-etcd-certs and add the correct MasterConfiguration manifest.
    8. Now copy the following certificate files from `etcd-0` to the `master-0` instance:
    * `ca.pem`, the Certificate Authority certificate, used for signing all the other certificates; everyone will trust this certificate
    * `client.pem`
    * `client-key.pem`
    Place them in the following `master-0` directory: `/etc/kubernetes/pki/etcd`. The client certificate and key will be used by the API server when connecting to etcd, and this information will be passed to `kubeadm` through a Master configuration manifest, see the next step. A `gcloud compute scp` sketch follows below.
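    A sketch of one way to do the copy with `gcloud compute scp`, run from a workstation that can reach both instances (add `--zone` if needed):
    ```
    gcloud compute scp etcd-0:/etc/kubernetes/pki/etcd/ca.pem etcd-0:/etc/kubernetes/pki/etcd/client.pem etcd-0:/etc/kubernetes/pki/etcd/client-key.pem .
    gcloud compute scp ca.pem client.pem client-key.pem master-0:/tmp/
    ```
    Then, on `master-0`, create `/etc/kubernetes/pki/etcd/` and move the files there.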

    9. Create the Master configuration manifest file

    # minikube

  16. mikejoh revised this gist Jun 11, 2018. 1 changed file with 36 additions and 1 deletion.
    37 changes: 36 additions & 1 deletion cka-preparation.md
    @@ -159,8 +159,43 @@ sed -i 's/example\.net/'"$PEER_NAME"'/' config.json
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
    ```
    Note that we're using another profile to create the server certificate, the profile was created along with the CA certificate.
    6. adfs

    6. Now create the systemd unit file needed:
    ```
    cat > /etc/systemd/system/etcd.service <<EOF
    [Unit]
    Description=etcd
    Documentation=https://github.com/coreos/etcd
    [Service]
    ExecStart=/usr/local/bin/etcd \
    --name=etcd0 \
    --data-dir=/var/lib/etcd \
    --listen-client-urls=https://localhost:2379 \
    --advertise-client-urls=https://localhost:2379 \
    --cert-file=/etc/kubernetes/pki/etcd/server.pem \
    --key-file=/etc/kubernetes/pki/etcd/server-key.pem \
    --client-cert-auth=true \
    --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
    --initial-cluster-token=my-etcd-token \
    --initial-cluster-state=new
    Restart=on-failure
    RestartSec=5
    Type=notify
    [Install]
    WantedBy=multi-user.target
    EOF
    ```
    7. Run the following to start the etcd service
    ```
    {
    systemctl daemon-reload
    systemctl enable etcd
    systemctl start etcd
    }
    ```
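    To sanity-check that etcd is serving, a sketch using the v3 `etcdctl` client and the client certificate created earlier (assuming `etcdctl` was installed alongside the `etcd` binary):
    ```
    ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.pem \
    --cert=/etc/kubernetes/pki/etcd/client.pem \
    --key=/etc/kubernetes/pki/etcd/client-key.pem \
    endpoint health
    ```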
    8. TODO! Continue here: https://kubernetes.io/docs/setup/independent/high-availability/#acquire-etcd-certs and add the correct MasterConfiguration manifest.

    # minikube

  17. mikejoh revised this gist Jun 11, 2018. 1 changed file with 33 additions and 0 deletions.
    33 changes: 33 additions & 0 deletions cka-preparation.md
    @@ -128,6 +128,39 @@ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    }
    ```
    4. Create client certificates
    ```
    {
    cat > client.json <<EOF
    {
      "CN": "client",
      "key": {
        "algo": "ecdsa",
        "size": 256
      }
    }
    EOF
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
    }
    ```
    5. Generate the server cert. Remember, we're doing a one-node etcd cluster, so we don't need the peer certificates:
    ```
    export PEER_NAME=$(hostname -s)
    export PRIVATE_IP=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
    cfssl print-defaults csr > config.json
    sed -i '0,/CN/{s/example\.net/'"$PEER_NAME"'/}' config.json
    sed -i 's/www\.example\.net/'"$PRIVATE_IP"'/' config.json
    sed -i 's/example\.net/'"$PEER_NAME"'/' config.json
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
    ```
    Note that we're using another profile to create the server certificate, the profile was created along with the CA certificate.
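    To double-check which key usages ended up in the generated certificates you can, for example, inspect them with `openssl`:
    ```
    openssl x509 -in server.pem -noout -text | grep -A1 "Key Usage"
    openssl x509 -in client.pem -noout -text | grep -A1 "Key Usage"
    ```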
    6. adfs


    # minikube

  18. mikejoh revised this gist Jun 11, 2018. 1 changed file with 71 additions and 3 deletions.
    74 changes: 71 additions & 3 deletions cka-preparation.md
    @@ -57,9 +57,77 @@ The cluster i will be creating will look like this:
    * 2 worker
    * flannel

    1. Create instances in GCE
    2. Generate certificates on etcd-0
    3. Complete the etcd-0 setup
    ### 1. Generate compute instances in GCE

    TODO!

    ### 2. Run etcd

    1. Login to etcd-0
    2. Install cfssl with `apt-get install golang-cfssl`
    3. Create CA certificate
    * CA cert config
    * CSR
    * Run cfssl to create everything
    ```
    {
    mkdir -p /etc/kubernetes/pki/etcd
    cd /etc/kubernetes/pki/etcd
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "43800h"
        },
        "profiles": {
          "server": {
            "expiry": "43800h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          },
          "client": {
            "expiry": "43800h",
            "usages": [
              "signing",
              "key encipherment",
              "client auth"
            ]
          },
          "peer": {
            "expiry": "43800h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          }
        }
      }
    }
    EOF
    cat > ca-csr.json <<EOF
    {
      "CN": "etcd",
      "key": {
        "algo": "rsa",
        "size": 2048
      }
    }
    EOF
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    }
    ```

    # minikube

  19. mikejoh revised this gist Jun 11, 2018. 1 changed file with 17 additions and 3 deletions.
    20 changes: 17 additions & 3 deletions cka-preparation.md
    @@ -47,6 +47,20 @@ Location | Component | Comment
    /var/log/kubelet.log | Kubelet | Responsible for running containers on the node
    /var/log/kube-proxy.log | Kube Proxy | Responsible for service load balancing

    # kubeadm

    In this part I'll try to set up a Kubernetes cluster using `kubeadm` on a couple of instances in GCE (with an external etcd cluster).

    The cluster I will be creating looks like this:
    * 1 etcd
    * 1 master
    * 2 worker
    * flannel

    1. Create instances in GCE
    2. Generate certificates on etcd-0
    3. Complete the etcd-0 setup

    # minikube

    ## RBAC Role and RoleBinding
    @@ -356,7 +370,7 @@ k8s-external europe-west1 1.2.3.4 RESERVED
    curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip
    ```

    # Other nice to have one-liners
    # systemd

    ## systemd
    * Provides a tool to help configure time/date called `timedatectl`
    @@ -371,6 +385,6 @@ Command | Description
    `journalctl -u kubelet` | Look at logs for a specified systemd process
    `journalctl -u kubelet -f` | Look at logs for a specified systemd process and follow the output
    `journalctl -u kubelet -r` | Look at logs for a specified systemd process in reverse order, latest first
    `journalctl -u kubelet --since "10 min ago"` | Look at the logs from the last 10 minutes
    `timedatectl list-timezones` | List time zones
    `timedatectl set-timezone Europe/Stockholm` | Set the timezone
    `
    `timedatectl set-timezone Europe/Stockholm` | Set the timezone
  20. mikejoh revised this gist Jun 11, 2018. 1 changed file with 16 additions and 5 deletions.
    21 changes: 16 additions & 5 deletions cka-preparation.md
    @@ -358,8 +358,19 @@ curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMeta

    # Other nice to have one-liners

    ## journalctl
    * _Part of `systemd`_
    * _A centralized management solution for logging all kernel and userland processes_

    ###
    ## systemd
    * Provides a tool to help configure time/date called `timedatectl`

    ## journald
    * Part of `systemd`
    * A centralized management solution for logging all kernel and userland processes

    ### Cheat Sheet
    Command | Description
    --- | ---
    `journalctl -u kubelet` | Look at logs for a specified systemd process
    `journalctl -u kubelet -f` | Look at logs for a specified systemd process and follow the output
    `journalctl -u kubelet -r` | Look at logs for a specified systemd process in reverse order, latest first
    `timedatectl list-timezones` | List time zones
    `timedatectl set-timezone Europe/Stockholm` | Set the timezone
    `
  21. mikejoh revised this gist Jun 11, 2018. 1 changed file with 9 additions and 1 deletion.
    10 changes: 9 additions & 1 deletion cka-preparation.md
    Original file line number Diff line number Diff line change
    @@ -354,4 +354,12 @@ k8s-external europe-west1 1.2.3.4 RESERVED
    ## Query the metadata server from within a compute instance and fetch its IP address
    ```
    curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip
    ```
    ```

    # Other nice to have one-liners

    ## journalctl
    * _Part of `systemd`_
    * _A centralized management solution for logging all kernel and userland processes_

    ###
  22. mikejoh revised this gist Jun 11, 2018. 1 changed file with 4 additions and 0 deletions.
    4 changes: 4 additions & 0 deletions cka-preparation.md
    Original file line number Diff line number Diff line change
    @@ -310,6 +310,10 @@ EOF
    ```
    gcloud compute instances stop $(gcloud compute instances list | grep -v "NAME" | awk '{ print $1}')
    ```
    ## Start all instances
    ```
    gcloud compute instances start --async $(gcloud compute instances list | grep -v NAME | awk '{ print $1 }')
    ```
    ## Manually create a network (`--subnet-mode custom`)
    ```
    gcloud compute networks create k8s --subnet-mode custom
  23. mikejoh revised this gist Jun 6, 2018. No changes.
  24. mikejoh revised this gist Jun 6, 2018. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions cka-preparation.md
    Original file line number Diff line number Diff line change
    @@ -347,7 +347,7 @@ gcloud compute addresses list
    NAME REGION ADDRESS STATUS
    k8s-external europe-west1 1.2.3.4 RESERVED
    ```
    ## Create a simple cluster (as shown in the table in the first section of this document)
    ## Query the metadata server from within a compute instance and fetch its IP address
    ```
    See another repo.
    curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip
    ```
  25. mikejoh revised this gist Jun 6, 2018. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions cka-preparation.md
    Original file line number Diff line number Diff line change
    @@ -33,13 +33,15 @@ TCP | Inbound | 30000-32767 | NodePort Services
    ## Important logs and their locations

    Master node(s):

    Location | Component | Comment
    --- | --- | ---
    /var/log/kube-apiserver.log | API Server | Responsible for serving the API
    /var/log/kube-scheduler.log | Scheduler | Responsible for making scheduling decisions
    /var/log/kube-controller-manager.log | Controller | Manages replication controllers

    Worker node(s):

    Location | Component | Comment
    --- | --- | ---
    /var/log/kubelet.log | Kubelet | Responsible for running containers on the node
  26. mikejoh revised this gist Jun 6, 2018. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cka-preparation.md
    Original file line number Diff line number Diff line change
    @@ -28,7 +28,7 @@ Protocol | Direction | Port Range | Purpose
    --- | --- | --- | ---
    TCP | Inbound | 10250 | Kubelet API
    TCP | Inbound | 10255 | Read-only Kubelet API
    TCP | Inbound | 30000-32767 | NodePort Services**
    TCP | Inbound | 30000-32767 | NodePort Services

    ## Important logs and their locations

  27. mikejoh revised this gist Jun 6, 2018. No changes.
  28. mikejoh revised this gist Jun 6, 2018. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions cka-preparation.md
    Original file line number Diff line number Diff line change
    @@ -24,8 +24,8 @@ TCP | Inbound | 10255 | Read-only Kubelet API

    ## Ports on worker node(s)

    Protocol | Direction | Port | Range | Purpose
    --- | --- | --- | --- | ---
    Protocol | Direction | Port Range | Purpose
    --- | --- | --- | ---
    TCP | Inbound | 10250 | Kubelet API
    TCP | Inbound | 10255 | Read-only Kubelet API
    TCP | Inbound | 30000-32767 | NodePort Services**
  29. mikejoh revised this gist Jun 6, 2018. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions cka-preparation.md
    Original file line number Diff line number Diff line change
    @@ -13,8 +13,8 @@ ik8s | 1 etcd, 1 master, 1 base node | loopback | Missing worker node

    ## Ports on master node(s)

    Protocol | Direction | Port | Range | Purpose
    --- | --- | --- | --- | ---
    Protocol | Direction | Port Range | Purpose
    --- | --- | --- | ---
    TCP | Inbound | 6443* | Kubernetes API server
    TCP | Inbound | 2379-2380 | etcd server client API
    TCP | Inbound | 10250 | Kubelet API
  30. mikejoh revised this gist Jun 6, 2018. No changes.