
@je2ryw
Forked from johnandersen777/.gitignore
Created July 1, 2020 23:14
Revisions

  1. John Andersen revised this gist Jul 24, 2019. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion setup.sh
    Original file line number Diff line number Diff line change
    @@ -2,7 +2,7 @@
    set -xe

    if [ -f "${K3D_ENV}" ]; then
    source ${K3D_ENV}
    source "${K3D_ENV}"
    fi

    if [ "x${DOMAIN}" == "x" ]; then
  2. @pdxjohnny pdxjohnny revised this gist Jul 23, 2019. 2 changed files with 41 additions and 3 deletions.
    36 changes: 35 additions & 1 deletion README.md
    @@ -24,6 +24,33 @@ inclined to believe that's the most secure option at the moment.
    - [DigitalOcean VM Creation Form Pre-populated for CoreOS](https://cloud.digitalocean.com/droplets/new?image=coreos-stable)
    - [CoreOS Docs](https://coreos.com/os/docs/latest/booting-on-digitalocean.html)

    ## Add swap

    Our VM has 1GB of memory, we'll most assuredly need some swap.

    https://coreos.com/os/docs/latest/adding-swap.html

    ```console
    sudo mkdir -p /var/vm
    sudo fallocate -l 5G /var/vm/swapfile1
    sudo chmod 600 /var/vm/swapfile1
    sudo mkswap /var/vm/swapfile1
    sudo tee /etc/systemd/system/var-vm-swapfile1.swap > /dev/null <<EOF
    [Unit]
    Description=Turn on swap

    [Swap]
    What=/var/vm/swapfile1

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl enable --now var-vm-swapfile1.swap
    echo 'vm.swappiness=30' | sudo tee /etc/sysctl.d/80-swappiness.conf
    sudo systemctl restart systemd-sysctl
    sudo swapon
    ```

    ## Setup `PATH`

    Create a `~/.local/bin` directory and a `~/.profile` which will add that
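A minimal sketch of that `PATH` setup, assuming the conventional `~/.profile` approach (the full README's exact profile contents may differ):

```console
# Create a per-user bin directory and have ~/.profile prepend it to PATH.
mkdir -p "${HOME}/.local/bin"
if ! grep -qs '.local/bin' "${HOME}/.profile"; then
  printf '\nexport PATH="${HOME}/.local/bin:${PATH}"\n' >> "${HOME}/.profile"
fi
. "${HOME}/.profile"
```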
    @@ -189,9 +216,16 @@ helm init \

    ## Installing Istio

    *Important* `--set gateways.custom-gateway.type='ClusterIP'` needs to be
    **Work In Progress** When deploying Istio's pods on our cheap VM, you'll notice
    that Kubernetes leaves some of the Istio pods in the Pending state. This
    is due to our 1GB of RAM: yes, we gave the VM 5GB of swap, but Kubernetes
    doesn't count swap toward the memory available for scheduling.
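    To see which pods are stuck and why, a quick check (assuming the default
    `istio-system` namespace; the `Insufficient memory` text is typical scheduler
    event output, so this grep is a best-effort filter):

    ```console
    kubectl get pods --namespace istio-system
    kubectl describe pods --namespace istio-system | grep -B 2 Insufficient
    ```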

    **Important** `--set gateways.custom-gateway.type='ClusterIP'` needs to be
    `--set gateways.custom-gateway.type='NodePort'`.

    > TODO Enable `global.mtls.enabled` and `global.controlPlaneSecurityEnabled`
    https://knative.dev/docs/install/installing-istio/#installing-istio-with-sds-to-secure-the-ingress-gateway

    ```console
    8 changes: 6 additions & 2 deletions setup.sh
    @@ -22,7 +22,7 @@ fi

    k3d create --auto-restart --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6

    function kube_up() {
    kube_up() {
    k3d get-kubeconfig --name='k3s-default' 2>&1
    }

    @@ -130,9 +130,13 @@ helm template --namespace=istio-system \
    `# Enable SDS in the gateway to allow dynamically configuring TLS of gateway.` \
    --set gateways.istio-ingressgateway.sds.enabled=true \
    `# More pilot replicas for better scale` \
    --set pilot.autoscaleMin=2 \
    --set pilot.autoscaleMin=1 \
    `# Set pilot trace sampling to 100%` \
    --set pilot.traceSampling=100 \
    `# Tune down required resources for pilot.` \
    --set pilot.resources.requests.cpu=30m \
    `# Tune down required resources for telemetry.` \
    --set mixer.telemetry.resources.requests.cpu=30m \
    istio-?.?.?/install/kubernetes/helm/istio \
    > ./istio.yaml

  3. @pdxjohnny pdxjohnny revised this gist Jul 23, 2019. 3 changed files with 115 additions and 14 deletions.
    2 changes: 2 additions & 0 deletions .gitignore
    @@ -6,3 +6,5 @@
    *.backup
    istio-*/
    cert-manager-*/
    *.swp
    env
    25 changes: 16 additions & 9 deletions install.sh
    @@ -1,33 +1,40 @@
    #!/bin/sh
    set -xe

    curl -L -o k3d https://github.com/rancher/k3d/releases/download/v1.3.0-dev.0/k3d-linux-amd64
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    export K3D_VERSION=${K3D_VERSION:-"1.3.0-dev.0"}
    export KUBECTL_VERSION=${KUBECTL_VERSION:-"$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)"}
    export HELM_VERSION=${HELM_VERSION:-"2.14.2"}
    export TERRAFORM_VERSION=${TERRAFORM_VERSION:-"0.12.5"}
    export ISTIO_VERSION=${ISTIO_VERSION:-"1.1.7"}
    export KNATIVE_VERSION=${KNATIVE_VERSION:-"0.7.0"}
    export CERT_MANAGER_VERSION=${CERT_MANAGER_VERSION:-"0.6.1"}

    curl -L -o k3d https://github.com/rancher/k3d/releases/download/v${K3D_VERSION}/k3d-linux-amd64
    curl -LO https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl
    chmod 700 k3d kubectl
    mv k3d kubectl ~/.local/bin/

    curl -sSL https://get.helm.sh/helm-v2.14.2-linux-amd64.tar.gz | tar xvz
    curl -sSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz | tar xvz
    mv linux-amd64/{helm,tiller} ~/.local/bin/
    rm -rf linux-amd64/

    curl -LO https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip
    unzip terraform_0.12.5_linux_amd64.zip
    rm terraform_0.12.5_linux_amd64.zip
    curl -LO https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    mv terraform ~/.local/bin/

    # TODO version tag
    curl -LO https://github.com/jbussdieker/tiller-ssl-terraform/raw/master/tiller_certs.tf

    # Download and unpack Istio
    export ISTIO_VERSION=1.1.7
    # TODO version tag
    curl -L https://git.io/getLatestIstio | sh -
    cd istio-${ISTIO_VERSION}

    # Download Knative
    export KNATIVE_VERSION=0.7.0
    curl -LO https://github.com/knative/build/releases/download/v${KNATIVE_VERSION}/build.yaml
    curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/serving.yaml
    curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/monitoring.yaml
    curl -LO https://github.com/knative/eventing/releases/download/v${KNATIVE_VERSION}/release.yaml

    export CERT_MANAGER_VERSION=0.6.1
    curl -sSL https://github.com/jetstack/cert-manager/archive/v${CERT_MANAGER_VERSION}.tar.gz | tar xz
    102 changes: 97 additions & 5 deletions setup.sh
    @@ -1,14 +1,26 @@
    #!/bin/sh
    set -xe

    export DOMAIN=${DOMAIN:-"chadig.com"}
    if [ -f "${K3D_ENV}" ]; then
    source ${K3D_ENV}
    fi

    if [ "x${DOMAIN}" == "x" ]; then
    echo "[-] ERROR: DOMAIN (aka example.com) not set" >&2
    exit 1
    fi

    if [ "x${EMAIL}" == "x" ]; then
    echo "[-] ERROR: EMAIL (Your email for Let's Encrypt ACME) not set" >&2
    exit 1
    fi

    if [ "x${DO_PA_TOKEN}" == "x" ]; then
    echo "[-] ERROR: DO_PA_TOKEN (DigitalOcean Personal Access Token) not set" >&2
    exit 1
    fi

    k3d create --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6
    k3d create --auto-restart --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6

    function kube_up() {
    k3d get-kubeconfig --name='k3s-default' 2>&1
    @@ -168,14 +180,15 @@ kubectl apply \
    --filename release.yaml \
    --filename monitoring.yaml

    # TODO Auto TLS
    kubectl apply \
    --filename serving.yaml \
    --selector networking.knative.dev/certificate-provider!=cert-manager \
    --selector networking.knative.dev/certificate-provider=cert-manager \
    --filename build.yaml \
    --filename release.yaml \
    --filename monitoring.yaml

    sleep 2

    kubectl get pods --namespace knative-serving
    kubectl get pods --namespace knative-build
    kubectl get pods --namespace knative-eventing
    @@ -196,9 +209,13 @@ data:
    ${DOMAIN}: ""
    EOF

    sleep 2

    kubectl apply -f cert-manager-?.?.?/deploy/manifests/00-crds.yaml
    kubectl apply -f cert-manager-?.?.?/deploy/manifests/cert-manager.yaml

    sleep 2

    kubectl apply -f - <<EOF
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    @@ -210,7 +227,7 @@ spec:
    server: https://acme-v02.api.letsencrypt.org/directory
    # This will register an issuer with LetsEncrypt. Replace
    # with your admin email address.
    email: [email protected]
    email: ${EMAIL}
    privateKeySecretRef:
    # Set privateKeySecretRef to any unused secret name.
    name: letsencrypt-issuer
    @@ -223,4 +240,79 @@ spec:
    key: ${DO_PA_TOKEN}
    EOF

    sleep 2

    kubectl get clusterissuer --namespace cert-manager letsencrypt-issuer --output yaml

    kubectl apply -f - <<EOF
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: Certificate
    metadata:
    name: my-certificate
    # Istio certs secret lives in the istio-system namespace, and
    # a cert-manager Certificate is namespace-scoped.
    namespace: istio-system
    spec:
    # Reference to the Istio default cert secret.
    secretName: istio-ingressgateway-certs
    acme:
    config:
    # Each certificate could rely on different ACME challenge
    # solver. In this example we are using one provider for all
    # the domains.
    - dns01:
    provider: digitalocean
    domains:
    # Since certificate wildcards only allow one level, we will
    # need one for every namespace that Knative is used in.
    # We don't have to use wildcards here; fully-qualified domains
    # will work fine too.
    - "*.default.$DOMAIN"
    - "*.other-namespace.$DOMAIN"
    # The certificate common name, use one from your domains.
    commonName: "*.default.$DOMAIN"
    dnsNames:
    # Provide same list as `domains` section.
    - "*.default.$DOMAIN"
    - "*.other-namespace.$DOMAIN"
    # Reference to the ClusterIssuer we created in the previous step.
    issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-issuer
    EOF

    sleep 2

    kubectl get certificate --namespace istio-system my-certificate --output yaml

    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
    name: knative-ingress-gateway
    namespace: knative-serving
    spec:
    selector:
    istio: ingressgateway
    servers:
    - port:
    number: 80
    name: http
    protocol: HTTP
    hosts:
    - "*"
    tls:
    # Sends 301 redirect for all http requests.
    # Omit to allow http and https.
    httpsRedirect: true
    - port:
    number: 443
    name: https
    protocol: HTTPS
    hosts:
    - "*"
    tls:
    mode: SIMPLE
    privateKey: /etc/istio/ingressgateway-certs/tls.key
    serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
    EOF
  4. @pdxjohnny pdxjohnny revised this gist Jul 22, 2019. 4 changed files with 133 additions and 3 deletions.
    1 change: 1 addition & 0 deletions .gitignore
    @@ -5,3 +5,4 @@
    *.yaml
    *.backup
    istio-*/
    cert-manager-*/
    83 changes: 80 additions & 3 deletions README.md
    @@ -254,15 +254,92 @@ helm template --namespace=istio-system \
    kubectl apply -f istio-local-gateway.yaml
    ```

    ## Auto-TLS Dependencies
    ## Assign Domain Name

    https://knative.dev/docs/serving/using-auto-tls/
    https://knative.dev/docs/serving/using-a-custom-domain/

    ```console
    export DOMAIN=example.com
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
    name: config-domain
    namespace: knative-serving
    data:
    # Default value for domain, for routes that do not have an app=prod label.
    # Although it will match all routes, it is the least-specific rule so it
    # will only be used if no other domain matches.
    ${DOMAIN}: ""
    EOF
    ```

    ## Enabling Auto-TLS Via Let's Encrypt

    Auto-TLS means our Knative applications will get HTTPS certs from Let's Encrypt
    without us doing anything (other than setting it up)! Awesome!

    - https://knative.dev/docs/serving/using-auto-tls/

    ### Cert Manager

    First we need to install cert manager which is what talks to Let's Encrypt to
    get us certificates. We'll need to combine a few guides for this.

    https://knative.dev/docs/serving/installing-cert-manager/

    ```console
    export CERT_MANAGER_VERSION=0.6.1
    curl -sSL https://github.com/jetstack/cert-manager/archive/v${CERT_MANAGER_VERSION}.tar.gz | tar xz
    kubectl apply -f cert-manager-?.?.?/deploy/manifests/00-crds.yaml
    kubectl apply -f cert-manager-?.?.?/deploy/manifests/cert-manager.yaml
    ```

    Now that cert-manager is installed, we need to set up a way to answer the ACME
    DNS challenge. Since we're on DigitalOcean we'll use the cert-manager plugin for
    them.

    - https://knative.dev/docs/serving/using-cert-manager-on-gcp/#adding-your-service-account-to-cert-manager
    - https://docs.cert-manager.io/en/latest/tasks/issuers/setup-acme/dns01/digitalocean.html

    Create your DigitalOcean personal access token and export it as an environment
    variable, so that the DNS `TXT` records can be updated for the ACME challenge.

    ```console
    export DO_PA_TOKEN=9a978d78fe57a9f6760ea
    kubectl apply -f - <<EOF
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
    name: letsencrypt-issuer
    namespace: cert-manager
    spec:
    acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # This will register an issuer with LetsEncrypt. Replace
    # with your admin email address.
    email: [email protected]
    privateKeySecretRef:
    # Set privateKeySecretRef to any unused secret name.
    name: letsencrypt-issuer
    dns01:
    providers:
    - name: digitalocean
    digitalocean:
    tokenSecretRef:
    name: digitalocean-dns
    key: ${DO_PA_TOKEN}
    EOF
    kubectl get clusterissuer --namespace cert-manager letsencrypt-issuer --output yaml
    ```
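Note that `tokenSecretRef` above points at a secret named `digitalocean-dns`, which has to exist before issuance can succeed. A minimal sketch of creating it (the `access-token` key name is an assumption; it must match whatever `key` the issuer references):

```console
kubectl create secret generic digitalocean-dns \
  --namespace cert-manager \
  --from-literal=access-token="${DO_PA_TOKEN}"
```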

    https://knative.dev/docs/serving/using-cert-manager-on-gcp/#adding-your-service-account-to-cert-manager

    ## Installing Knative

    https://knative.dev/docs/install/knative-with-iks/#installing-knative

    There's an open issue about how there's a race condition on these apply
    There's an open issue about a race condition in these `apply`
    commands. I can't find it right now, but just wait a bit and re-run them
    if they complain.
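    A small retry helper is one way to paper over that race; this is a sketch
    (the attempt count and delay are arbitrary):

    ```console
    retry() {
      # Re-run the given command up to 5 times before giving up.
      for _ in 1 2 3 4 5; do
        "$@" && return 0
        sleep 1
      done
      return 1
    }

    # e.g. retry kubectl apply --filename serving.yaml --filename build.yaml
    ```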

    3 changes: 3 additions & 0 deletions install.sh
    @@ -28,3 +28,6 @@ curl -LO https://github.com/knative/build/releases/download/v${KNATIVE_VERSION}/
    curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/serving.yaml
    curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/monitoring.yaml
    curl -LO https://github.com/knative/eventing/releases/download/v${KNATIVE_VERSION}/release.yaml

    export CERT_MANAGER_VERSION=0.6.1
    curl -sSL https://github.com/jetstack/cert-manager/archive/v${CERT_MANAGER_VERSION}.tar.gz | tar xz
    49 changes: 49 additions & 0 deletions setup.sh
    @@ -1,6 +1,13 @@
    #!/bin/sh
    set -xe

    export DOMAIN=${DOMAIN:-"chadig.com"}

    if [ "x${DO_PA_TOKEN}" == "x" ]; then
    echo "[-] ERROR: DO_PA_TOKEN (DigitalOcean Personal Access Token) not set" >&2
    exit 1
    fi

    k3d create --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6

    function kube_up() {
    @@ -175,3 +182,45 @@ kubectl get pods --namespace knative-eventing
    kubectl get pods --namespace knative-monitoring

    kubectl get deploy -n knative-serving --label-columns=serving.knative.dev/release

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
    name: config-domain
    namespace: knative-serving
    data:
    # Default value for domain, for routes that do not have an app=prod label.
    # Although it will match all routes, it is the least-specific rule so it
    # will only be used if no other domain matches.
    ${DOMAIN}: ""
    EOF

    kubectl apply -f cert-manager-?.?.?/deploy/manifests/00-crds.yaml
    kubectl apply -f cert-manager-?.?.?/deploy/manifests/cert-manager.yaml

    kubectl apply -f - <<EOF
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
    name: letsencrypt-issuer
    namespace: cert-manager
    spec:
    acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # This will register an issuer with LetsEncrypt. Replace
    # with your admin email address.
    email: [email protected]
    privateKeySecretRef:
    # Set privateKeySecretRef to any unused secret name.
    name: letsencrypt-issuer
    dns01:
    providers:
    - name: digitalocean
    digitalocean:
    tokenSecretRef:
    name: digitalocean-dns
    key: ${DO_PA_TOKEN}
    EOF

    kubectl get clusterissuer --namespace cert-manager letsencrypt-issuer --output yaml
  5. @pdxjohnny pdxjohnny revised this gist Jul 22, 2019. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions README.md
    @@ -2,7 +2,7 @@

    The cheapest Kubernetes deployment either side of the Mississippi

    > BEWARE This guide is devoid of binary verification
    > **BEWARE** This guide is devoid of binary verification
    ## Generate Your SSH keys

    @@ -165,8 +165,8 @@ mv terraform ~/.local/bin/
    Now we download the terraform file and run it to create the certs. More info on
    the terraform file can be found here: https://github.com/jbussdieker/tiller-ssl-terraform

    *Warning* terraform will *not* regenerate the certs when you re-run the apply
    command if they exist on disk.
    **Warning** terraform will **not** regenerate the certs when you re-run the
    apply command if they exist on disk.
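    If you do want fresh certs, one way (a sketch for this throwaway setup) is to
    clear both the generated files and the local terraform state before re-running:

    ```console
    rm -f ./*.pem terraform.tfstate terraform.tfstate.backup
    terraform apply -auto-approve
    ```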

    ```console
    curl -LO https://github.com/jbussdieker/tiller-ssl-terraform/raw/master/tiller_certs.tf
  6. @pdxjohnny pdxjohnny revised this gist Jul 22, 2019. 1 changed file with 5 additions and 5 deletions.
    10 changes: 5 additions & 5 deletions install.sh
    @@ -22,9 +22,9 @@ export ISTIO_VERSION=1.1.7
    curl -L https://git.io/getLatestIstio | sh -
    cd istio-${ISTIO_VERSION}

    curl -LO https://github.com/knative/serving/releases/download/v0.7.0/serving.yaml
    curl -LO https://github.com/knative/build/releases/download/v0.7.0/build.yaml
    curl -LO https://github.com/knative/eventing/releases/download/v0.7.0/release.yaml
    curl -LO https://github.com/knative/serving/releases/download/v0.7.0/monitoring.yaml

    # Download Knative
    export KNATIVE_VERSION=0.7.0
    curl -LO https://github.com/knative/build/releases/download/v${KNATIVE_VERSION}/build.yaml
    curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/serving.yaml
    curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/monitoring.yaml
    curl -LO https://github.com/knative/eventing/releases/download/v${KNATIVE_VERSION}/release.yaml
  7. @pdxjohnny pdxjohnny revised this gist Jul 21, 2019. 1 changed file with 93 additions and 0 deletions.
    93 changes: 93 additions & 0 deletions README.md
    @@ -194,10 +194,103 @@ helm init \

    https://knative.dev/docs/install/installing-istio/#installing-istio-with-sds-to-secure-the-ingress-gateway

    ```console
    export ISTIO_VERSION=1.1.7
    curl -L https://git.io/getLatestIstio | sh -
    cd istio-${ISTIO_VERSION}

    for i in istio-?.?.?/install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
    name: istio-system
    labels:
    istio-injection: disabled
    EOF

    helm template --namespace=istio-system \
    --set sidecarInjectorWebhook.enabled=true \
    --set sidecarInjectorWebhook.enableNamespacesByDefault=true \
    --set global.proxy.autoInject=disabled \
    --set global.disablePolicyChecks=true \
    --set prometheus.enabled=false \
    `# Disable mixer prometheus adapter to remove istio default metrics.` \
    --set mixer.adapters.prometheus.enabled=false \
    `# Disable mixer policy check, since in our template we set no policy.` \
    --set global.disablePolicyChecks=true \
    `# Set gateway pods to 1 to sidestep eventual consistency / readiness problems.` \
    --set gateways.istio-ingressgateway.autoscaleMin=1 \
    --set gateways.istio-ingressgateway.autoscaleMax=1 \
    --set gateways.istio-ingressgateway.resources.requests.cpu=500m \
    --set gateways.istio-ingressgateway.resources.requests.memory=256Mi \
    `# Enable SDS in the gateway to allow dynamically configuring TLS of gateway.` \
    --set gateways.istio-ingressgateway.sds.enabled=true \
    `# More pilot replicas for better scale` \
    --set pilot.autoscaleMin=2 \
    `# Set pilot trace sampling to 100%` \
    --set pilot.traceSampling=100 \
    istio-?.?.?/install/kubernetes/helm/istio \
    > ./istio.yaml

    kubectl apply -f istio.yaml

    helm template --namespace=istio-system \
    --set gateways.custom-gateway.autoscaleMin=1 \
    --set gateways.custom-gateway.autoscaleMax=1 \
    --set gateways.custom-gateway.cpu.targetAverageUtilization=60 \
    --set gateways.custom-gateway.labels.app='cluster-local-gateway' \
    --set gateways.custom-gateway.labels.istio='cluster-local-gateway' \
    --set gateways.custom-gateway.type='NodePort' \
    --set gateways.istio-ingressgateway.enabled=false \
    --set gateways.istio-egressgateway.enabled=false \
    --set gateways.istio-ilbgateway.enabled=false \
    istio-?.?.?/install/kubernetes/helm/istio \
    -f istio-?.?.?/install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \
    | sed -e "s/custom-gateway/cluster-local-gateway/g" -e "s/customgateway/clusterlocalgateway/g" \
    > ./istio-local-gateway.yaml

    kubectl apply -f istio-local-gateway.yaml
    ```

    ## Auto-TLS Dependencies

    https://knative.dev/docs/serving/using-auto-tls/

    ## Installing Knative

    https://knative.dev/docs/install/knative-with-iks/#installing-knative

    There's an open issue about a race condition in these apply
    commands. I can't find it right now, but just wait a bit and re-run them
    if they complain.

    ```console
    curl -LO https://github.com/knative/serving/releases/download/v0.7.0/serving.yaml
    curl -LO https://github.com/knative/build/releases/download/v0.7.0/build.yaml
    curl -LO https://github.com/knative/eventing/releases/download/v0.7.0/release.yaml
    curl -LO https://github.com/knative/serving/releases/download/v0.7.0/monitoring.yaml

    kubectl apply \
    --selector knative.dev/crd-install=true \
    --filename serving.yaml \
    --filename build.yaml \
    --filename release.yaml \
    --filename monitoring.yaml

    # TODO Auto TLS
    kubectl apply \
    --filename serving.yaml \
    --selector networking.knative.dev/certificate-provider!=cert-manager \
    --filename build.yaml \
    --filename release.yaml \
    --filename monitoring.yaml

    kubectl get pods --namespace knative-serving
    kubectl get pods --namespace knative-build
    kubectl get pods --namespace knative-eventing
    kubectl get pods --namespace knative-monitoring

    kubectl get deploy -n knative-serving --label-columns=serving.knative.dev/release
    ```
  8. @pdxjohnny pdxjohnny revised this gist Jul 21, 2019. 4 changed files with 148 additions and 2 deletions.
    1 change: 1 addition & 0 deletions .gitignore
    @@ -4,3 +4,4 @@
    *.tfstate
    *.yaml
    *.backup
    istio-*/
    3 changes: 3 additions & 0 deletions README.md
    @@ -189,6 +189,9 @@ helm init \

    ## Installing Istio

    *Important* `--set gateways.custom-gateway.type='ClusterIP'` needs to be
    `--set gateways.custom-gateway.type='NodePort'`.

    https://knative.dev/docs/install/installing-istio/#installing-istio-with-sds-to-secure-the-ingress-gateway

    ## Auto-TLS Dependencies
    30 changes: 30 additions & 0 deletions install.sh
    @@ -0,0 +1,30 @@
    #!/bin/sh
    set -xe

    curl -L -o k3d https://github.com/rancher/k3d/releases/download/v1.3.0-dev.0/k3d-linux-amd64
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    chmod 700 k3d kubectl
    mv k3d kubectl ~/.local/bin/

    curl -sSL https://get.helm.sh/helm-v2.14.2-linux-amd64.tar.gz | tar xvz
    mv linux-amd64/{helm,tiller} ~/.local/bin/
    rm -rf linux-amd64/

    curl -LO https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip
    unzip terraform_0.12.5_linux_amd64.zip
    rm terraform_0.12.5_linux_amd64.zip
    mv terraform ~/.local/bin/

    curl -LO https://github.com/jbussdieker/tiller-ssl-terraform/raw/master/tiller_certs.tf

    # Download and unpack Istio
    export ISTIO_VERSION=1.1.7
    curl -L https://git.io/getLatestIstio | sh -
    cd istio-${ISTIO_VERSION}

    curl -LO https://github.com/knative/serving/releases/download/v0.7.0/serving.yaml
    curl -LO https://github.com/knative/build/releases/download/v0.7.0/build.yaml
    curl -LO https://github.com/knative/eventing/releases/download/v0.7.0/release.yaml
    curl -LO https://github.com/knative/serving/releases/download/v0.7.0/monitoring.yaml

    curl -LO https://github.com/knative/serving/releases/download/v${KNATIVE_VERSION}/serving.yaml
    116 changes: 114 additions & 2 deletions setup.sh
    @@ -2,9 +2,23 @@
    set -xe

    k3d create --workers 3 --publish 80:80 --publish 443:433 --image docker.io/rancher/k3s:v0.7.0-rc6
    # TODO Remove sleep
    sleep 2

    function kube_up() {
    k3d get-kubeconfig --name='k3s-default' 2>&1
    }

    set +e

    KUBE_UP="$(kube_up | grep -E 'does not exist|copy kubeconfig')"
    while [ "x${KUBE_UP}" != "x" ]; do
    sleep 0.25s
    KUBE_UP="$(kube_up | grep -E 'does not exist|copy kubeconfig')"
    done

    set -e

    export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

    kubectl cluster-info
    kubectl create namespace helm-world
    kubectl create namespace tiller-world
    @@ -54,6 +68,8 @@ terraform init
    rm -f *.pem
    terraform apply -auto-approve

    # Installing Helm - Helm init

    helm init \
    --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' \
    --tiller-tls \
    @@ -63,3 +79,99 @@ helm init \
    --tiller-tls-key ./tiller.key.pem \
    --tiller-namespace=tiller-world \
    --service-account=tiller-user \

    # Installing Istio

    for i in istio-?.?.?/install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
    name: istio-system
    labels:
    istio-injection: disabled
    EOF

    helm template --namespace=istio-system \
    --set sidecarInjectorWebhook.enabled=true \
    --set sidecarInjectorWebhook.enableNamespacesByDefault=true \
    --set global.proxy.autoInject=disabled \
    --set global.disablePolicyChecks=true \
    --set prometheus.enabled=false \
    `# Disable mixer prometheus adapter to remove istio default metrics.` \
    --set mixer.adapters.prometheus.enabled=false \
    `# Disable mixer policy check, since in our template we set no policy.` \
    --set global.disablePolicyChecks=true \
    `# Set gateway pods to 1 to sidestep eventual consistency / readiness problems.` \
    --set gateways.istio-ingressgateway.autoscaleMin=1 \
    --set gateways.istio-ingressgateway.autoscaleMax=1 \
    --set gateways.istio-ingressgateway.resources.requests.cpu=500m \
    --set gateways.istio-ingressgateway.resources.requests.memory=256Mi \
    `# Enable SDS in the gateway to allow dynamically configuring TLS of gateway.` \
    --set gateways.istio-ingressgateway.sds.enabled=true \
    `# More pilot replicas for better scale` \
    --set pilot.autoscaleMin=2 \
    `# Set pilot trace sampling to 100%` \
    --set pilot.traceSampling=100 \
    istio-?.?.?/install/kubernetes/helm/istio \
    > ./istio.yaml

    kubectl apply -f istio.yaml

    helm template --namespace=istio-system \
    --set gateways.custom-gateway.autoscaleMin=1 \
    --set gateways.custom-gateway.autoscaleMax=1 \
    --set gateways.custom-gateway.cpu.targetAverageUtilization=60 \
    --set gateways.custom-gateway.labels.app='cluster-local-gateway' \
    --set gateways.custom-gateway.labels.istio='cluster-local-gateway' \
    --set gateways.custom-gateway.type='NodePort' \
    --set gateways.istio-ingressgateway.enabled=false \
    --set gateways.istio-egressgateway.enabled=false \
    --set gateways.istio-ilbgateway.enabled=false \
    istio-?.?.?/install/kubernetes/helm/istio \
    -f istio-?.?.?/install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \
    | sed -e "s/custom-gateway/cluster-local-gateway/g" -e "s/customgateway/clusterlocalgateway/g" \
    > ./istio-local-gateway.yaml

    kubectl apply -f istio-local-gateway.yaml

    ISTIO_UP="$(kubectl get pods --namespace istio-system 2>&1)"
    while [ "x${ISTIO_UP}" == "xNo resources found." ]; do
    sleep 0.25s
    ISTIO_UP="$(kubectl get pods --namespace istio-system 2>&1)"
    done

    kubectl get pods --namespace istio-system

    set +e

    ISTIO_UP="$(kubectl get pods --namespace istio-system | grep -viE 'status|running|complete')"
    while [ "x${ISTIO_UP}" != "x" ]; do
    sleep 0.25s
    ISTIO_UP="$(kubectl get pods --namespace istio-system | grep -viE 'status|running|complete')"
    done

    kubectl get pods --namespace istio-system

    kubectl apply \
    --selector knative.dev/crd-install=true \
    --filename serving.yaml \
    --filename build.yaml \
    --filename release.yaml \
    --filename monitoring.yaml

    # TODO Auto TLS
    kubectl apply \
    --filename serving.yaml \
    --selector networking.knative.dev/certificate-provider!=cert-manager \
    --filename build.yaml \
    --filename release.yaml \
    --filename monitoring.yaml

    kubectl get pods --namespace knative-serving
    kubectl get pods --namespace knative-build
    kubectl get pods --namespace knative-eventing
    kubectl get pods --namespace knative-monitoring

    kubectl get deploy -n knative-serving --label-columns=serving.knative.dev/release
  9. John Andersen revised this gist Jul 21, 2019. 2 changed files with 71 additions and 0 deletions.
    6 changes: 6 additions & 0 deletions .gitignore
    @@ -0,0 +1,6 @@
    .terraform/
    *.pem
    *.tf
    *.tfstate
    *.yaml
    *.backup
    65 changes: 65 additions & 0 deletions setup.sh
    @@ -0,0 +1,65 @@
    #!/bin/sh
    set -xe

    k3d create --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6
    # TODO Remove sleep
    sleep 2
    export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
    kubectl cluster-info
    kubectl create namespace helm-world
    kubectl create namespace tiller-world
    kubectl create serviceaccount tiller --namespace tiller-world
    kubectl create -f - << EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
    name: helm
    namespace: helm-world
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
    name: tiller-user
    namespace: tiller-world
    rules:
    - apiGroups:
    - ""
    resources:
    - pods/portforward
    verbs:
    - create
    - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - list
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
    name: tiller-user-binding
    namespace: tiller-world
    roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: tiller-user
    subjects:
    - kind: ServiceAccount
    name: helm
    namespace: helm-world
    EOF

    terraform init
    rm -f *.pem
    terraform apply -auto-approve

    helm init \
    --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' \
    --tiller-tls \
    --tiller-tls-verify \
    --tls-ca-cert ca.cert.pem \
    --tiller-tls-cert ./tiller.cert.pem \
    --tiller-tls-key ./tiller.key.pem \
    --tiller-namespace=tiller-world \
    --service-account=tiller-user
  10. John Andersen revised this gist Jul 20, 2019. 1 changed file with 7 additions and 0 deletions.
    7 changes: 7 additions & 0 deletions README.md
    @@ -165,6 +165,9 @@ mv terraform ~/.local/bin/
    Now we download the terraform file and run it to create the certs. More info on
    the terraform file can be found here: https://github.com/jbussdieker/tiller-ssl-terraform

    *Warning* terraform will *not* regenerate the certs when you re-run the apply
    command if they exist on disk.

    ```console
    curl -LO https://github.com/jbussdieker/tiller-ssl-terraform/raw/master/tiller_certs.tf
    terraform init
    @@ -188,6 +191,10 @@ helm init \

    https://knative.dev/docs/install/installing-istio/#installing-istio-with-sds-to-secure-the-ingress-gateway

    ## Auto-TLS Dependencies

    https://knative.dev/docs/serving/using-auto-tls/

    ## Installing Knative

    https://knative.dev/docs/install/knative-with-iks/#installing-knative
  11. John Andersen revised this gist Jul 20, 2019. 1 changed file with 84 additions and 10 deletions.
    94 changes: 84 additions & 10 deletions README.md
    @@ -93,21 +93,95 @@ rm -rf linux-amd64/
    Now we need to configure Role Based Access Control (RBAC) and create a
    Certificate Authority (CA) which will secure our helm/tiller installation.

    You should read these guides, but I'll summarize the CLI commands.

    - https://helm.sh/docs/using_helm/#securing-your-helm-installation
    - https://helm.sh/docs/using_helm/#role-based-access-control
    - https://helm.sh/docs/using_helm/#generate-a-certificate-authority

    ### Configuring Role Based Access Control (RBAC)

    We're going to [Deploy Helm in a namespace, talking to Tiller in another namespace](https://helm.sh/docs/using_helm/#helm-and-role-based-access-control)

    ```console
    kubectl create namespace helm-world
    kubectl create namespace tiller-world
    kubectl create serviceaccount tiller --namespace tiller-world
    kubectl create -f - << EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
    name: helm
    namespace: helm-world
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
    name: tiller-user
    namespace: tiller-world
    rules:
    - apiGroups:
    - ""
    resources:
    - pods/portforward
    verbs:
    - create
    - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - list
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
    name: tiller-user-binding
    namespace: tiller-world
    roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: tiller-user
    subjects:
    - kind: ServiceAccount
    name: helm
    namespace: helm-world
    EOF
    ```

    ### Install Terraform

    We're going to use terraform to generate all the certificates.

    ```console
    curl -LO https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip
    unzip terraform_0.12.5_linux_amd64.zip
    rm terraform_0.12.5_linux_amd64.zip
    mv terraform ~/.local/bin/
    ```

    ### Generating Certificates

    Now we download the terraform file and run it to create the certs. More info on
    the terraform file can be found here: https://github.com/jbussdieker/tiller-ssl-terraform

    ```console
    curl -LO https://github.com/jbussdieker/tiller-ssl-terraform/raw/master/tiller_certs.tf
    terraform init
    terraform apply -auto-approve
    ```
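For readers who would rather not pull in terraform, a rough openssl equivalent of what the terraform config produces is sketched below: a CA, a tiller key, and a CA-signed tiller certificate. The file names mirror the ones `helm init` consumes below; the subject names and key sizes are illustrative, not what the terraform config actually uses.

```sh
# Rough openssl equivalent of the terraform-generated certs (illustrative
# subjects and key sizes; not the exact terraform output).
openssl genrsa -out ca.key.pem 2048 2>/dev/null
openssl req -key ca.key.pem -new -x509 -days 365 \
    -out ca.cert.pem -subj "/CN=demo-ca"
openssl genrsa -out tiller.key.pem 2048 2>/dev/null
openssl req -key tiller.key.pem -new \
    -out tiller.csr.pem -subj "/CN=tiller-server"
openssl x509 -req -in tiller.csr.pem -CA ca.cert.pem -CAkey ca.key.pem \
    -CAcreateserial -days 365 -out tiller.cert.pem 2>/dev/null
# Confirm the tiller cert chains back to the CA.
openssl verify -CAfile ca.cert.pem tiller.cert.pem
```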

    ### Helm init

    ```console
    openssl genrsa -out ./ca.key.pem 4096
    openssl req \
    -key ca.key.pem \
    -new \
    -x509 \
    -days 7300 \
    -sha256 \
    -out ca.cert.pem \
    -extensions v3_ca \
    -subj "/CN=chadig.com/[email protected]/C=US/ST=OR/L=Portland/O=chadig/OU=infra"
    helm init \
    --tiller-tls \
    --tiller-tls-cert ./tiller.cert.pem \
    --tiller-tls-key ./tiller.key.pem \
    --tiller-tls-verify \
    --tls-ca-cert ca.cert.pem \
    --tiller-namespace=tiller-world \
    --service-account=tiller-user
    ```

    ## Installing Istio
  12. John Andersen revised this gist Jul 20, 2019. 1 changed file with 27 additions and 2 deletions.
    29 changes: 27 additions & 2 deletions README.md
    @@ -81,9 +81,34 @@ kubectl cluster-info

    ## Installing Helm

    https://helm.sh/docs/using_helm/#securing-your-helm-installation
    Grab the latest release from https://github.com/helm/helm/releases and install
    it.

    https://helm.sh/docs/using_helm/#role-based-access-control
    ```console
    curl -sSL https://get.helm.sh/helm-v2.14.2-linux-amd64.tar.gz | tar xvz
    mv linux-amd64/{helm,tiller} ~/.local/bin/
    rm -rf linux-amd64/
    ```

    Now we need to configure Role Based Access Control (RBAC) and create a
    Certificate Authority (CA) which will secure our helm/tiller installation.

    - https://helm.sh/docs/using_helm/#securing-your-helm-installation
    - https://helm.sh/docs/using_helm/#role-based-access-control
    - https://helm.sh/docs/using_helm/#generate-a-certificate-authority

    ```console
    openssl genrsa -out ./ca.key.pem 4096
    openssl req \
    -key ca.key.pem \
    -new \
    -x509 \
    -days 7300 \
    -sha256 \
    -out ca.cert.pem \
    -extensions v3_ca \
    -subj "/CN=chadig.com/[email protected]/C=US/ST=OR/L=Portland/O=chadig/OU=infra"
    ```
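It's worth confirming that the `-extensions v3_ca` flag above actually marked the certificate as a CA. A self-contained sketch, using a throwaway 2048-bit key and a dummy subject to keep it quick:

```sh
# Self-contained check that -extensions v3_ca marks the cert as a CA.
# Throwaway key and dummy subject; substitute your real ca.cert.pem.
openssl genrsa -out demo-ca.key.pem 2048 2>/dev/null
openssl req -key demo-ca.key.pem -new -x509 -days 1 -sha256 \
    -out demo-ca.cert.pem -extensions v3_ca -subj "/CN=demo"
# A proper CA cert carries "CA:TRUE" in its Basic Constraints.
openssl x509 -in demo-ca.cert.pem -noout -text | grep 'CA:TRUE'
```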

    ## Installing Istio

  13. John Andersen revised this gist Jul 19, 2019. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion README.md
    @@ -87,7 +87,7 @@ https://helm.sh/docs/using_helm/#role-based-access-control

    ## Installing Istio

    https://knative.dev/docs/install/installing-istio/
    https://knative.dev/docs/install/installing-istio/#installing-istio-with-sds-to-secure-the-ingress-gateway

    ## Installing Knative

  14. John Andersen revised this gist Jul 19, 2019. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions README.md
    @@ -83,6 +83,8 @@ kubectl cluster-info

    https://helm.sh/docs/using_helm/#securing-your-helm-installation

    https://helm.sh/docs/using_helm/#role-based-access-control

    ## Installing Istio

    https://knative.dev/docs/install/installing-istio/
  15. John Andersen revised this gist Jul 19, 2019. 1 changed file with 10 additions and 0 deletions.
    10 changes: 10 additions & 0 deletions README.md
    @@ -79,4 +79,14 @@ export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
    kubectl cluster-info
    ```

    ## Installing Helm

    https://helm.sh/docs/using_helm/#securing-your-helm-installation

    ## Installing Istio

    https://knative.dev/docs/install/installing-istio/

    ## Installing Knative

    https://knative.dev/docs/install/knative-with-iks/#installing-knative
  16. John Andersen revised this gist Jul 19, 2019. 1 changed file with 8 additions and 3 deletions.
    11 changes: 8 additions & 3 deletions README.md
    @@ -52,18 +52,21 @@ To add `~/.local/bin` to your `PATH`, which is where we'll install the binaries.
    - `kubectl` is the binary we'll use to interact with our Kubernetes cluster

    ```console
    curl -L -o k3d https://github.com/rancher/k3d/releases/download/v1.2.2/k3d-linux-amd64
    curl -L -o k3d https://github.com/rancher/k3d/releases/download/v1.3.0-dev.0/k3d-linux-amd64
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    chmod 700 k3d kubectl
    mv k3d kubectl ~/.local/bin/
    ```

    ## Cluster Creation

    Create a cluster with 3 workers.
    Create a cluster with 3 workers, exposing ports 80 and 443 (HTTP and HTTPS).

    > At time of writing the Rancher devs have just recently fixed bugs related to
    > Knative deployment. As such we need to specify the k3s image that now works.
    ```console
    k3d c -w 3
    k3d create --workers 3 --publish 80:80 --publish 443:443 --image docker.io/rancher/k3s:v0.7.0-rc6
    ```

    ## Access Your Cluster
    @@ -75,3 +78,5 @@ re-running it a few times until it works.
    export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
    kubectl cluster-info
    ```

    ## Installing Knative
  17. John Andersen revised this gist Jul 19, 2019. No changes.
  18. John Andersen revised this gist Jul 19, 2019. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions README.md
    @@ -46,13 +46,13 @@ To add `~/.local/bin` to your `PATH`, which is where we'll install the binaries.

    ## Install Binaries

    - [k3d](https://github.com/rancher/k3s#manual-download)
    - [k3d](https://github.com/rancher/k3d/releases)
    - `k3d` is `k3s` in docker
    - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux)
    - `kubectl` is the binary we'll use to interact with our Kubernetes cluster

    ```console
    curl -LO
    curl -L -o k3d https://github.com/rancher/k3d/releases/download/v1.2.2/k3d-linux-amd64
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    chmod 700 k3d kubectl
    mv k3d kubectl ~/.local/bin/
  19. @pdxjohnny pdxjohnny revised this gist Jul 19, 2019. No changes.
  20. John Andersen revised this gist Jul 19, 2019. 1 changed file with 77 additions and 1 deletion.
    78 changes: 77 additions & 1 deletion README.md
    @@ -1 +1,77 @@
    # Setting up k3s
    # Setting Up k3s for Serverless (knative) on a $5 DigitalOcean Droplet Using k3d

    The cheapest Kubernetes deployment either side of the Mississippi

    > BEWARE This guide is devoid of binary verification

    ## Generate Your SSH keys

    Generate an ssh key with 4096 bits (if you don't already have one you want to
    use). I would recommend putting a password on it and using it only for this
    VM.

    ```console
    ssh-keygen -b 4096
    ```

    > TODO Add info on storing keys in TPM via [tpm2-pkcs11](https://github.com/tpm2-software/tpm2-pkcs11/blob/master/docs/SSH.md)

    ## Provision VM

    We'll be using a CoreOS Container Linux VM, only because I'm personally
    inclined to believe that's the most secure option at the moment.

    - [DigitalOcean VM Creation Form Pre-populated for CoreOS](https://cloud.digitalocean.com/droplets/new?image=coreos-stable)
    - [CoreOS Docs](https://coreos.com/os/docs/latest/booting-on-digitalocean.html)

    ## Setup `PATH`

    Create a `~/.local/bin` directory and a `~/.profile` which will add that
    directory to your `PATH` when you source it.

    ```console
    mkdir -p "${HOME}/.local/bin"
    cat >> ~/.profile <<'EOF'
    export PATH="${PATH}:${HOME}/.local/bin"
    EOF
    ```

    Whenever you ssh in (and now) you'll want to run

    ```console
    . .profile
    ```

    To add `~/.local/bin` to your `PATH`, which is where we'll install the binaries.

    ## Install Binaries

    - [k3d](https://github.com/rancher/k3s#manual-download)
    - `k3d` is `k3s` in docker
    - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux)
    - `kubectl` is the binary we'll use to interact with our Kubernetes cluster

    ```console
    curl -LO
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    chmod 700 k3d kubectl
    mv k3d kubectl ~/.local/bin/
    ```

    ## Cluster Creation

    Create a cluster with 3 workers.

    ```console
    k3d c -w 3
    ```

    ## Access Your Cluster

    The `k3d get-kubeconfig` command may take a second or two before it works; just
    try re-running it a few times until it succeeds.

    ```console
    export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
    kubectl cluster-info
    ```
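The "re-run it a few times" step can be automated. A sketch of a generic retry helper follows; the k3d call is left as commented hypothetical usage since it needs the cluster from the previous step:

```sh
# retry_output N CMD... reruns CMD up to N times, a second apart, until it
# emits non-empty output, then prints that output.
retry_output() {
  tries="$1"
  shift
  while [ "$tries" -gt 0 ]; do
    if out="$("$@" 2>/dev/null)" && [ -n "$out" ]; then
      printf '%s\n' "$out"
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# Hypothetical usage:
# export KUBECONFIG="$(retry_output 20 k3d get-kubeconfig --name=k3s-default)"
```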
  21. @pdxjohnny pdxjohnny created this gist Jul 19, 2019.
    1 change: 1 addition & 0 deletions README.md
    @@ -0,0 +1 @@
    # Setting up k3s