@satriashp
Created March 17, 2020 04:02

Setup Kubernetes Cluster on AWS

Installing kops

Prerequisite

kubectl is required; see the Install and Set Up kubectl section below.

Linux

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops

Release History

See the releases page for more information on changes between versions.


Configure AWS CLI

Prerequisite

Install python pip

sudo apt-get install python-pip

Install aws cli

sudo pip install awscli

Setup IAM user

In order to build clusters within AWS we'll create a dedicated IAM user for kops. This user requires API credentials in order to use kops. Create the user, and credentials, using the AWS console.

The kops user will require the following IAM permissions to function properly:

AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess

Record the SecretAccessKey and AccessKeyID of the IAM user you just created, then use them below:

# configure the aws client to use your new IAM user
aws configure           # Use your new access and secret key here
aws iam list-users      # you should see a list of all your IAM users here

# Because "aws configure" doesn't export these vars for kops to use, we export them now
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

Configure DNS

In order to build a Kubernetes cluster with kops, we need to prepare somewhere to build the required DNS records.
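For example, a public hosted zone can be created with the AWS CLI (the domain name below is a placeholder; this assumes the domain is, or will be, delegated to Route 53):

```shell
# Create a Route 53 hosted zone for the cluster's DNS records.
# --caller-reference must be unique per request.
aws route53 create-hosted-zone \
    --name example.com \
    --caller-reference "kops-$(date +%s)"

# List zones to confirm, and to find the zone name used in later steps:
aws route53 list-hosted-zones
```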

Cluster State storage

In order to store the state and representation of your cluster, we need to create a dedicated S3 bucket for kops to use. This bucket will become the source of truth for our cluster configuration. In this guide we'll call this bucket kops-state-satriashp, but you should choose a unique name of your own, as S3 bucket names must be globally unique.

aws s3api create-bucket \
    --bucket kops-state-satriashp \
    --region ap-southeast-1

Note: We STRONGLY recommend versioning your S3 bucket in case you ever need to revert or recover a previous state store.

aws s3api put-bucket-versioning --bucket kops-state-satriashp  --versioning-configuration Status=Enabled

Creating Cluster

Prepare local environment

We're ready to start creating our first cluster! Let's first set up a few environment variables to make this process easier.

export NAME=myfirstcluster.example.com
export KOPS_STATE_STORE=s3://kops-state-satriashp

Create cluster

kops create cluster \
    --zones ap-southeast-1a \
    --dns-zone=<route53-zone-name> \
    --kubernetes-version=v1.16.2 \
    --master-size=<ec2-instance-type> \
    --node-size=<ec2-instance-type> \
    --node-count=2 \
    ${NAME}

Customize Cluster Configuration

Now that we have a cluster configuration, we can review every aspect that defines our cluster by editing its description.

kops edit cluster ${NAME}

Build the Cluster

Now we take the final step of actually building the cluster. This'll take a while. Once it finishes you'll have to wait longer while the booted instances finish downloading Kubernetes components and reach a "ready" state.

kops update cluster ${NAME} --yes
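Once kops reports the update complete, readiness can be checked as follows (this assumes KOPS_STATE_STORE and your kubectl context are still set from the steps above):

```shell
# Re-run until the cluster validates cleanly; this can take several minutes.
kops validate cluster

# All nodes should eventually report STATUS "Ready".
kubectl get nodes
```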

Setup AWS External DNS

Create IAM Policy

This policy adds Route 53 permissions to the nodes' IAM role, enabling any pod on those nodes to use these AWS privileges. Open the AWS IAM console and attach the policy below to the node IAM role, nodes.<cluster-name>.

AWS JSON Policy:

{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Effect": "Allow",
     "Action": [
       "route53:ChangeResourceRecordSets"
     ],
     "Resource": [
       "arn:aws:route53:::hostedzone/*"
     ]
   },
   {
     "Effect": "Allow",
     "Action": [
       "route53:ListHostedZones",
       "route53:ListResourceRecordSets"
     ],
     "Resource": [
       "*"
     ]
   }
 ]
}
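As a sketch of how the attachment could be scripted instead of done in the console (the role name nodes.<cluster-name> follows the kops naming convention and is an assumption; the aws call needs valid credentials, so it is left commented out):

```shell
# Save the policy above to a file.
cat > external-dns-policy.json <<'EOF'
{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Effect": "Allow",
     "Action": ["route53:ChangeResourceRecordSets"],
     "Resource": ["arn:aws:route53:::hostedzone/*"]
   },
   {
     "Effect": "Allow",
     "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
     "Resource": ["*"]
   }
 ]
}
EOF

# Attach it inline to the nodes' role (requires AWS credentials):
# aws iam put-role-policy \
#   --role-name nodes.<cluster-name> \
#   --policy-name external-dns \
#   --policy-document file://external-dns-policy.json
```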

Deploy External DNS

Follow the external-dns deployment docs.

Dashboard

Installation

Deploy this .yaml file with this command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Basic Auth

kubectl config view

Login token

kubectl -n kube-system get secret | grep admin-user
kubectl -n kube-system describe secret admin-user-token-<id displayed by previous command>
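The commands above assume an admin-user ServiceAccount with cluster-admin rights already exists. If it doesn't, the standard dashboard setup creates one with a manifest along these lines (apply with kubectl apply -f):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
```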

Ingress NGINX

Installing ingress

Prerequisite

The following Mandatory Command is required for all deployments.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

AWS ELASTIC LOAD BALANCER - ELB

This setup requires choosing the layer (L4 or L7) at which to configure the ELB:

  • Layer 4: use TCP as the listener protocol for ports 80 and 443.
  • Layer 7: use HTTP as the listener protocol for port 80 and terminate TLS in the ELB.

Layer 4: Execute this command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l4.yaml

Layer 7:

Download these two files: the service and the configmap.

In provider/aws/service-l7.yaml, replace the dummy certificate ARN with a valid one: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"

Then execute:

kubectl apply -f service-l7.yaml
kubectl apply -f patch-configmap.yaml
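With the controller deployed, traffic can be routed to a backend service through an Ingress resource. A minimal example for this era of the controller (the host and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80
```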

Redis

Install using the Helm v3.0.0 package manager.

helm install redis \
--set metrics.enabled=true,nameOverride=redis,fullnameOverride=redis \
stable/redis

Redis can be accessed via port 6379 on the following DNS names from within your cluster:

redis-master.default.svc.cluster.local for read/write operations
redis-slave.default.svc.cluster.local for read-only operations

To get your password run:

export REDIS_PASSWORD=$(kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode)
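The one-liner above works because Kubernetes stores Secret data base64-encoded; the jsonpath pipeline pulls out the encoded value and decodes it. A quick local illustration of that round trip (hypothetical value, no cluster needed):

```shell
# Encode a value the way Kubernetes stores it, then decode it back.
encoded=$(echo -n 's3cr3t' | base64)
decoded=$(echo -n "$encoded" | base64 --decode)
echo "$decoded"   # prints: s3cr3t
```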

To connect to your Redis server:

  1. Run a Redis pod that you can use as a client:
kubectl run --namespace default redis-client --rm --tty -i --restart='Never' \
--env REDIS_PASSWORD=$REDIS_PASSWORD \
--image docker.io/bitnami/redis:5.0.7-debian-9-r0 -- bash
  2. Connect using the Redis CLI:
   redis-cli -h redis-master -a $REDIS_PASSWORD
   redis-cli -h redis-slave -a $REDIS_PASSWORD

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/redis-master 6379:6379 &
    redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD

Postgres

Install using the Helm v3.0.0 package manager.

helm install <release-name> \
--set postgresqlDatabase=<database-name>,postgresqlUsername=<username>,metrics.enabled=true \
stable/postgresql

PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:

postgres.default.svc.cluster.local - Read/Write connection

To get the password for "devops" run:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default postgres -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To connect to your database run the following command:

kubectl run postgres-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgres -U devops -d printerous -p 5432

To connect to your database from outside the cluster execute the following commands:

kubectl port-forward --namespace default svc/postgres 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U devops -d printerous -p 5432

To restore a database, first run a postgres client pod:

kubectl run postgres-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host <release-name>-postgresql -U <username> -d <database-name> -p 5432

cat backup.tar | kubectl exec -i postgres-client -- pg_restore -h postgres -U <username> -S <username> -d <database> -Ft -C --no-owner
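The pg_restore step above expects a tar-format archive. As a sketch, such a backup could be produced from the running chart with a command like the following (the pod name assumes the chart's default StatefulSet naming; placeholders as above):

```shell
# Dump the database in tar format (-Ft), matching what pg_restore expects.
kubectl exec -i <release-name>-postgresql-0 --namespace default -- \
  env PGPASSWORD="$POSTGRES_PASSWORD" \
  pg_dump -Ft -U <username> <database-name> > backup.tar
```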

Mysql

Install using the Helm v3.0.0 package manager.

helm install <release-name> \
--set mysqlUser=printerous,metrics.enabled=true \
stable/mysql

To get your root password run:

MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

  • Run an Ubuntu pod that you can use as a client:
kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
  • Install the mysql client:
$ apt-get update && apt-get install mysql-client -y
  • Connect using the mysql cli, then provide your password:
$ mysql -h mysql -p

To connect to your database directly from outside the K8s cluster:

MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306

# Execute the following command to route the connection:
kubectl port-forward svc/mysql 3306

mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

MongoDB

Install using the Helm v3.0.0 package manager.

helm install mongodb \
--set fullnameOverride=mongodb,volumePermissions.enabled=true,mongodbUsername=printerous,mongodbDatabase=printerous \
stable/mongodb

MongoDB can be accessed via port 27017 on the following DNS name from within your cluster:

mongodb.default.svc.cluster.local

To get the root password run:

export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

To connect to your database run the following command:

kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --image bitnami/mongodb --command -- mongo admin --host mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

To connect to your database from outside the cluster execute the following commands:

kubectl port-forward --namespace default svc/mongodb 27017:27017 &
mongo --host 127.0.0.1 -u root --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

Developer Access

Install and Set Up kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For a complete installation guide, see Install and Set Up kubectl.

Install kubectl on Linux

  • Download the latest release with the command:
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  • Make the kubectl binary executable.
chmod +x ./kubectl
  • Move the binary into your PATH.
sudo mv ./kubectl /usr/local/bin/kubectl
  • Test to ensure the version you installed is up-to-date:
kubectl version

Setup cluster access

Add a Cluster to kubectl

kubectl config set-cluster staging --server=https://api.k8s.printerous.com --insecure-skip-tls-verify=true

Add User (credentials) to kubectl

Ask your cluster admin for an access <token>.

kubectl config set-credentials developer --token=<TOKEN>
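On the admin side, one way to obtain such a token (an assumption; your cluster may manage credentials differently) is to read a ServiceAccount's token secret, e.g. for a hypothetical developer ServiceAccount in the staging namespace:

```shell
# Look up the ServiceAccount's token secret and decode its token.
SECRET_NAME=$(kubectl -n staging get serviceaccount developer \
  -o jsonpath='{.secrets[0].name}')
kubectl -n staging get secret "$SECRET_NAME" \
  -o jsonpath='{.data.token}' | base64 --decode
```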

Add a Context to kubectl

kubectl config set-context staging --cluster=staging --user=developer --namespace staging
kubectl config use-context staging

Test to ensure you have access to cluster:

kubectl get svc,pods,deployments

A useful kubectl command cheat sheet is available here.

Guide for updating environment

Prerequisite

Setup container

Check your docker and docker-compose installation.

$ docker version
$ docker-compose version

Copy the gpg key to the printerous/k8s root directory. From the printerous/k8s project root, run the command below to build the helm container.

$ docker-compose up -d

Access the helm container with this command.

$ docker exec -it helm bash

Editing an existing env key

Inside the helm container, go to the directory containing your secrets file (for example /app/helm/friday/helm_vars) and run this command to edit secrets.staging.yaml:

$ cd app/helm/friday/helm_vars
$ helm secrets edit secrets.staging.yaml

Edit, save, and commit the changes to GitHub.

Adding a new env key

For example, suppose we want to add the key below to our staging env.

ENV['SUPER_SECRETS_PASSWORD']

Steps:

  • Add the new key to secrets.staging.yaml; make sure you're in the same directory
$ cd app/helm/friday/helm_vars
$ helm secrets edit secrets.staging.yaml
  • Add the new key:
# ... other keys ...
SUPER_SECRETS_PASSWORD: SOME_SECRET_PASSWORD
  • Add the new key to the project's secret.yaml or configMap.yaml, for example helm/friday/templates/secret.yaml or helm/friday/templates/configMap.yaml.
    • secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: {{ include "friday.fullname" . }}
      labels:
        {{- include "friday.labels" . | nindent 4 }}
    data:
      SECRET_KEY_BASE: {{ .Values.SECRET_KEY_BASE | b64enc }}
      # ... other keys ...
      SUPER_SECRETS_PASSWORD: {{ .Values.SUPER_SECRETS_PASSWORD | b64enc }}
    
    • configMap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ include "friday.fullname" . }}
      labels:
        {{- include "friday.labels" . | nindent 4 }}
    data:
      PROMETHEUS_EXPORTER: {{ .Values.PROMETHEUS_EXPORTER | default "enabled" | quote }}
      # ... other keys ...
      SUPER_SECRETS_PASSWORD: {{ .Values.SUPER_SECRETS_PASSWORD | quote }}
    

Save and commit the changes to GitHub.

Adding Dockerhub credentials as a secret

kubectl create secret generic dockerhub --from-file=.dockerconfigjson=.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=staging
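For the secret to take effect, workloads must reference it via imagePullSecrets. A hypothetical pod spec fragment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: staging
spec:
  imagePullSecrets:
    - name: dockerhub
  containers:
    - name: app
      image: <your-private-image>
```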