# HTTP tunnel

* https://en.wikipedia.org/wiki/HTTP_tunnel

# On-prem k8s cluster set up with a bastion vm

1. Create a bastion vm in your data center, or in a cloud with connectivity (usually a VPN) set up to the on-prem data center.
2. Install tinyproxy on the bastion vm and pick a random port, as the default 8888 would be too easy a target for spam bots. Set it up as a systemd service according to https://nxnjz.net/2019/10/how-to-setup-a-simple-proxy-server-with-tinyproxy-debian-10-buster/. Make sure it works by validating with `curl --proxy http://127.0.0.1:<port> https://httpbin.org/ip`. I don't use any user authentication for the proxy, so I locked the firewall rules down to my laptop's IP/32.
3. Download the kubeconfig file for the k8s cluster to your laptop.
4. From your laptop, run:

```
HTTPS_PROXY=<bastion-ip>:<port> KUBECONFIG=my-kubeconfig kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-node-0   Ready    control-plane,master   32h   v1.20.4
k8s-node-1   Ready    <none>                 32h   v1.20.4
k8s-node-2   Ready    <none>                 32h   v1.20.4
k8s-node-3   Ready    <none>                 32h   v1.20.4
k8s-node-4   Ready    <none>                 32h   v1.20.4
k8s-node-5   Ready    <none>                 32h   v1.20.4
```

# Private GKE cluster with HTTP proxy solutions

According to [private GKE cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_cp), these are the only IP addresses that have access to the control plane:

* The primary range of my-subnet-0.
* The secondary range used for Pods.

Hence, we can use a bastion vm in the primary range or a pod from the secondary range.

* tinyproxy with a bastion vm
  * https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/safer_cluster_iap_bastion
  * https://medium.com/google-cloud/accessing-gke-private-clusters-through-iap-14fedad694f8
  * https://medium.com/google-cloud/gke-private-cluster-with-a-bastion-host-5480b44793a7
* privoxy in cluster
  * https://cloud.google.com/architecture/creating-kubernetes-engine-private-clusters-with-net-proxies

## My own hackish way

Given a private GKE cluster with public endpoint access disabled, here is one hack I did with Cloud IAP SSH forwarding via an internal bastion vm. This workaround uses no HTTP proxy and no external IP address in the user VPC. It works well for one cluster, but for more than one cluster I would aim for deploying tinyproxy instead, as it is a cleaner solution that avoids dealing with the TLS SAN.

## create a private GKE cluster

Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_cp for the latest info, e.g.

```
gcloud container clusters create "$CLUSTER_NAME" \
  --region ${REGION} \
  --network ${NETWORK} \
  --subnetwork ${SUBNET} \
  --machine-type "${GKE_NODE_TYPE}" \
  --num-nodes=1 \
  --enable-autoupgrade \
  --enable-autorepair \
  --preemptible \
  --enable-ip-alias \
  --cluster-secondary-range-name=pod-range \
  --services-secondary-range-name=service-range \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-master-authorized-networks \
  --master-ipv4-cidr=172.16.0.32/28

# Get the kubectl credentials for the GKE cluster.
KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials "$CLUSTER_NAME" --region "$REGION"
```

## create a private compute instance "bastion" with only an internal IP
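There is no single required shape for the bastion; as a minimal sketch (reusing the `${PROJECT}`, `${ZONE}`, and `${SUBNET}` variables from above, with an illustrative machine type), `--no-address` is what keeps the instance internal-only:

```
# create a bastion vm with no external IP; it is reachable only via Cloud IAP
gcloud compute instances create bastion \
  --project ${PROJECT} \
  --zone ${ZONE} \
  --machine-type e2-small \
  --subnet ${SUBNET} \
  --no-address
```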
## enable and set up Cloud IAP in the GCP console, granting users/groups access to the private instance from the last step

## on the laptop, start the SSH forwarding proxy at local port 8443 via a Cloud IAP tunnel

e.g. 172.16.0.66 is the private master endpoint. The SSH traffic is tunnelled via Cloud IAP in TLS, then port-forwarded to the k8s master API endpoint.

`gcloud beta compute --project ${PROJECT} ssh --zone ${ZONE} "bastion" --tunnel-through-iap --ssh-flag="-L 8443:172.16.0.66:443"`

## on the laptop, modify the .kube/dev

`kubernetes` and `kubernetes.default` are names allowed by the API server's TLS certificate (SAN), so point the server at the forwarded local port:

`server: https://kubernetes.default:8443`

## on the laptop, modify the /etc/hosts

Please append the following line:

`127.0.0.1 kubernetes kubernetes.default`

## on the laptop, happy kubectl from here

`KUBECONFIG=~/.kube/dev kubectl get po --all-namespaces`

# Private only EKS cluster

Very much like the GCP Cloud IAP approach, except it uses AWS SSM and a bastion to create the tunnel. This assumes the bastion's subnet is added to the inbound rules of the EKS control plane's cluster security group on tcp port 443.

```
# start a tunnel: traffic to local port 4443 will be forwarded to the private EKS endpoint
aws ssm start-session --target i-bastion --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters host=<eks-private-endpoint>,portNumber=443,localPortNumber=4443
```

## on the laptop's /etc/hosts

`127.0.0.1 localhost kubernetes kubernetes.default.svc kubernetes.default.svc.cluster.local`

## modify kubeconfig with the server pointing to the local port

```
server: https://kubernetes.default.svc.cluster.local:4443
```
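To fill in the `host=` parameter, one option is to look up the cluster's API endpoint; once the tunnel is up, kubectl should work end to end. A minimal sketch, assuming the cluster is named `my-cluster` and the modified kubeconfig is saved as `~/.kube/eks` (both placeholders):

```
# look up the cluster API endpoint; strip the leading https:// for the host= parameter
aws eks describe-cluster --name my-cluster --query 'cluster.endpoint' --output text

# with the SSM tunnel from above still running, verify through the local port
KUBECONFIG=~/.kube/eks kubectl get nodes
```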