kubectl is required; see the kubectl installation steps later in this guide.
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
See the releases for more information on changes between releases.
Install python-pip, then use pip to install the AWS CLI:
sudo apt-get install python-pip
sudo pip install awscli
In order to build clusters within AWS we'll create a dedicated IAM user for kops. This user requires API credentials in order to use kops. Create the user, and credentials, using the AWS console.
The kops user will require the following IAM permissions to function properly:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
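If you prefer the CLI to the console, the same user can be created with the aws tool; a minimal sketch, assuming a dedicated kops group holds the policies:
aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops   # prints the AccessKeyId and SecretAccessKey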
Record the SecretAccessKey and AccessKeyID of the IAM user you just created, then use them below:
# configure the aws client to use your new IAM user
aws configure # Use your new access and secret key here
aws iam list-users # you should see a list of all your IAM users here
# Because "aws configure" doesn't export these vars for kops to use, we export them now
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
In order to build a Kubernetes cluster with kops, we need somewhere to host the required DNS records: a Route 53 hosted zone.
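If your domain doesn't have a Route 53 hosted zone yet, one can be created from the CLI (example.com here is a placeholder for your domain):
aws route53 create-hosted-zone --name example.com --caller-reference $(date +%s)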
In order to store the state of your cluster, and the representation of your cluster, we need to create a dedicated S3 bucket for kops to use. This bucket will become the source of truth for our cluster configuration. In this guide we'll call this bucket kops-state-satriashp, but you should pick a name with your own prefix, as bucket names must be globally unique.
aws s3api create-bucket \
--bucket kops-state-satriashp \
--region ap-southeast-1 \
--create-bucket-configuration LocationConstraint=ap-southeast-1   # required for regions other than us-east-1
Note: We STRONGLY recommend versioning your S3 bucket in case you ever need to revert or recover a previous state store.
aws s3api put-bucket-versioning --bucket kops-state-satriashp --versioning-configuration Status=Enabled
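You can confirm versioning took effect before moving on:
aws s3api get-bucket-versioning --bucket kops-state-satriashp   # output should include "Status": "Enabled"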
We're ready to start creating our first cluster! Let's first set up a few environment variables to make this process easier.
export NAME=myfirstcluster.example.com
export KOPS_STATE_STORE=s3://kops-state-satriashp
kops create cluster \
--zones ap-southeast-1a \
--dns-zone=<route53-zone-name> \
--kubernetes-version=v1.16.2 \
--master-size=<ec2-instance-type> \
--node-size=<ec2-instance-type> \
--node-count=2 \
${NAME}
Now that we have a cluster configuration, we can review every aspect that defines our cluster by editing its description:
kops edit cluster ${NAME}
Now we take the final step of actually building the cluster. This'll take a while. Once it finishes you'll have to wait longer while the booted instances finish downloading Kubernetes components and reach a "ready" state.
kops update cluster ${NAME} --yes
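The update command returns before the cluster is usable; you can poll readiness with something like:
kops validate cluster                # reports the cluster as ready once masters and nodes are up
kubectl get nodes --show-labels      # all nodes should eventually report STATUS Ready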
This policy adds Route 53 permissions to the nodes' IAM role, enabling any pod running on the nodes to use these AWS privileges.
Open the AWS IAM console and attach this policy to the node IAM role nodes.<cluster-name>.
AWS JSON Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"route53:ChangeResourceRecordSets"
],
"Resource": [
"arn:aws:route53:::hostedzone/*"
]
},
{
"Effect": "Allow",
"Action": [
"route53:ListHostedZones",
"route53:ListResourceRecordSets"
],
"Resource": [
"*"
]
}
]
}
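If you'd rather not click through the console, the same policy can be attached as an inline policy from the CLI; a sketch assuming the JSON above is saved as route53-policy.json (the policy name k8s-route53 is arbitrary):
aws iam put-role-policy \
  --role-name nodes.<cluster-name> \
  --policy-name k8s-route53 \
  --policy-document file://route53-policy.json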
To install the Kubernetes dashboard, follow the official dashboard documentation. Deploy the manifest with this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl config view
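The token lookup below assumes an admin-user service account with cluster-admin rights already exists; if it doesn't, a minimal sketch following the dashboard docs:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF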
kubectl -n kube-system get secret | grep admin-user
kubectl -n kube-system describe secret admin-user-token-<id displayed by previous command>
The following Mandatory Command is required for all deployments.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
This setup requires choosing the layer (L4 or L7) at which to configure the ELB:
- Layer 4: use TCP as the listener protocol for ports 80 and 443.
- Layer 7: use HTTP as the listener protocol for port 80 and terminate TLS in the ELB.
Layer 4:
Execute this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l4.yaml
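Once the service is created, AWS provisions an ELB for it; its address can be looked up from the service the manifests create (namespace ingress-nginx, an assumption based on the standard static deploy):
kubectl get svc ingress-nginx -n ingress-nginx -o wide   # EXTERNAL-IP shows the ELB hostname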
Layer 7:
Download these two files: service-l7.yaml and patch-configmap-l7.yaml.
Edit provider/aws/service-l7.yaml, replacing the dummy certificate id with a valid ARN, "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX".
Then execute:
kubectl apply -f service-l7.yaml
kubectl apply -f patch-configmap-l7.yaml
Install Redis using the Helm v3.0.0 package manager.
helm install redis \
--set metrics.enabled=true,nameOverride=redis,fullnameOverride=redis \
stable/redis
Redis can be accessed via port 6379 on the following DNS names from within your cluster:
redis-master.default.svc.cluster.local for read/write operations
redis-slave.default.svc.cluster.local for read-only operations
To get your password run:
export REDIS_PASSWORD=$(kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode)
To connect to your Redis server:
- Run a Redis pod that you can use as a client:
kubectl run --namespace default redis-client --rm --tty -i --restart='Never' \
--env REDIS_PASSWORD=$REDIS_PASSWORD \
--image docker.io/bitnami/redis:5.0.7-debian-9-r0 -- bash
- Connect using the Redis CLI:
redis-cli -h redis-master -a $REDIS_PASSWORD
redis-cli -h redis-slave -a $REDIS_PASSWORD
To connect to your database from outside the cluster, execute the following commands:
kubectl port-forward --namespace default svc/redis-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
Install PostgreSQL using the Helm v3.0.0 package manager.
helm install <release-name> \
--set postgresqlDatabase=<database-name>,postgresqlUsername=<username>,metrics.enabled=true \
stable/postgresql
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
postgres.default.svc.cluster.local - Read/Write connection
To get the password for the "devops" user, run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace default postgres -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database, run the following command:
kubectl run postgres-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgres -U devops -d printerous -p 5432
To connect to your database from outside the cluster, execute the following commands:
kubectl port-forward --namespace default svc/postgres 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U devops -d printerous -p 5432
To restore a database, run a postgres client pod:
kubectl run postgres-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host <release-name>-postgresql -U <username> -d <database-name> -p 5432
cat backup.tar | kubectl exec -i postgres-client -- pg_restore -h postgres -U <username> -S <username> -d <database> -Ft -C --no-owner
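For reference, a tar-format archive like the backup.tar above can be produced the same way; a sketch assuming the postgres-client pod from the restore step is still running (it inherits PGPASSWORD from its pod spec):
kubectl exec -i postgres-client -- pg_dump -h postgres -U <username> -d <database> -Ft > backup.tar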
Install MySQL using the Helm v3.0.0 package manager.
helm install <release-name> \
--set mysqlUser=printerous,metrics.enabled=true \
stable/mysql
To get your root password, run:
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
- Run an Ubuntu pod that you can use as a client:
kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
- Install the mysql client:
$ apt-get update && apt-get install mysql-client -y
- Connect using the mysql cli, then provide your password:
$ mysql -h mysql -p
To connect to your database directly from outside the K8s cluster:
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
# Execute the following command to route the connection:
kubectl port-forward svc/mysql 3306
mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
Install MongoDB using the Helm v3.0.0 package manager.
helm install mongodb \
--set fullnameOverride=mongodb,volumePermissions.enabled=true,mongodbUsername=printerous,mongodbDatabase=printerous \
stable/mongodb
MongoDB can be accessed via port 27017 on the following DNS name from within your cluster:
mongodb.default.svc.cluster.local
To get the root password, run:
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database, run the following command:
kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --image bitnami/mongodb --command -- mongo admin --host mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster, execute the following commands:
kubectl port-forward --namespace default svc/mongodb 27017:27017 &
mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD
The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For a complete installation guide, see Install and Set Up kubectl.
- Download the latest release with the command:
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
- Make the kubectl binary executable:
chmod +x ./kubectl
- Move the binary into your PATH:
sudo mv ./kubectl /usr/local/bin/kubectl
- Test to ensure the version you installed is up-to-date:
kubectl version
kubectl config set-cluster staging --server=https://api.k8s.printerous.com --insecure-skip-tls-verify=true
Ask your cluster admin for an access <TOKEN>.
kubectl config set-credentials developer --token=<TOKEN>
kubectl config set-context staging --cluster=staging --user=developer --namespace staging
kubectl config use-context staging
Test to ensure you have access to cluster:
kubectl get svc,pods,deployments
A cheat sheet of useful kubectl commands is available here.
- docker
- docker-compose
- gpg key (ask your admin for keys): gpg-printerous.asc
- clone repo printerous/k8s
Check your docker and docker-compose installation.
$ docker version
$ docker-compose version
Copy the gpg key to the printerous/k8s root directory.
Inside the printerous/k8s project root, run the command below to build the helm container.
$ docker-compose up -d
Access the helm container with this command:
$ docker exec -it helm bash
Inside the helm container, go to the directory of your secrets file (for example /app/helm/friday/helm_vars) and run this command to edit secrets.staging.yaml:
$ cd app/helm/friday/helm_vars
$ helm secrets edit secrets.staging.yaml
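Besides edit, the helm-secrets plugin also supports read-only inspection (assuming the commonly used helm-secrets plugin is what's installed in the container):
$ helm secrets view secrets.staging.yaml   # print decrypted values without opening an editor
$ helm secrets dec secrets.staging.yaml    # write a decrypted secrets.staging.yaml.dec; clean it up afterwards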
Edit, save, and commit changes to GitHub.
For example, suppose we want to add the key below to our staging env.
ENV['SUPER_SECRETS_PASSWORD']
Steps:
- Add the new key to secrets.staging.yaml; make sure you're in the same directory:
$ cd app/helm/friday/helm_vars
$ helm secrets edit secrets.staging.yaml
- Add the new key below the existing keys:
# ... other keys ...
SUPER_SECRETS_PASSWORD: SOME_SECRET_PASSWORD
- Add the new key to the project's secret.yaml or configMap.yaml, for example helm/friday/templates/secret.yaml or helm/friday/templates/configMap.yaml.
secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "friday.fullname" . }}
  labels:
    {{- include "friday.labels" . | nindent 4 }}
data:
  SECRET_KEY_BASE: {{ .Values.SECRET_KEY_BASE | b64enc }}
  # ... other keys ...
  SUPER_SECRETS_PASSWORD: {{ .Values.SUPER_SECRETS_PASSWORD | b64enc }}
configMap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "friday.fullname" . }}
  labels:
    {{- include "friday.labels" . | nindent 4 }}
data:
  PROMETHEUS_EXPORTER: {{ .Values.PROMETHEUS_EXPORTER | default "enabled" | quote }}
  # ... other keys ...
  SUPER_SECRETS_PASSWORD: {{ .Values.SUPER_SECRETS_PASSWORD | quote }}
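How these values actually reach the pods depends on the chart's deployment template; a common pattern (an assumption here, not confirmed from the chart) is to pull both objects in with envFrom:
# hypothetical fragment of helm/friday/templates/deployment.yaml
containers:
  - name: {{ .Chart.Name }}
    envFrom:
      - configMapRef:
          name: {{ include "friday.fullname" . }}
      - secretRef:
          name: {{ include "friday.fullname" . }}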
Save and commit changes to GitHub.
Create a Docker Hub image pull secret in the staging namespace from your local Docker credentials:
kubectl create secret generic dockerhub --from-file=.dockerconfigjson=.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=staging
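Workloads in the staging namespace can then pull private images by referencing the secret; a hypothetical pod spec fragment:
spec:
  imagePullSecrets:
    - name: dockerhub
  containers:
    - name: app
      image: <your-private-image>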