@bprashanth
Last active February 19, 2018 18:54
Revisions

  1. bprashanth revised this gist Mar 8, 2016. 1 changed file with 2 additions and 1 deletion.
    You can also put an Ingress in front of it:

On older clusters Ingress.Spec.tls is not supported; you only get http.
If you have a newer master and are running on GCE, you can update the ingress controller by running `kubectl edit rc --namespace=kube-system l7-lb-controller-v0.5.2`, replacing `gcr.io/google_containers/glbc:0.5.2` with `gcr.io/google_containers/glbc:0.6.0`, and then killing the pod so the rc starts another one.
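A non-interactive sketch of that upgrade (same rc and image names as above; the pod name is whatever your cluster assigned, so fill it in):
```shell
# Swap the image in the rc spec and push it back:
kubectl --namespace=kube-system get rc l7-lb-controller-v0.5.2 -o yaml \
  | sed 's|glbc:0.5.2|glbc:0.6.0|' \
  | kubectl replace -f -
# Kill the running pod so the rc brings up a new one with the new image:
kubectl --namespace=kube-system get pods | grep l7-lb-controller
kubectl --namespace=kube-system delete pod <pod-name-from-above>
```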

If you're *not* on GCE, you can try either:
1. SSL termination directly with service-loadbalancer: https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/README.md#ssl-termination
2. Nginx ingress controller: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx-third-party#ssl
  2. bprashanth revised this gist Mar 8, 2016. 1 changed file with 6 additions and 4 deletions.

    You can also put an Ingress in front of it:

On older clusters Ingress.Spec.tls is not supported; you only get http.
If you have a newer master and are running on GCE, you can update the ingress controller by running `kubectl edit rc --namespace=kube-system l7-lb-controller-v0.5.2`, replacing `gcr.io/google_containers/glbc:0.5.2` with `gcr.io/google_containers/glbc:0.6.0`, and then killing the pod so the rc starts another one.
If you're *not* on GCE, you can try either:
1. SSL termination directly with service-loadbalancer: https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/README.md#ssl-termination
2. Nginx ingress controller: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx-third-party#ssl

  3. bprashanth revised this gist Mar 8, 2016. 1 changed file with 3 additions and 3 deletions.

If you're *not* on GCE, you can try either:
1. SSL termination directly with service-loadbalancer: https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/README.md#ssl-termination
2. Nginx ingress controller: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx-third-party#ssl

  4. bprashanth revised this gist Mar 8, 2016. No changes.
  5. bprashanth revised this gist Mar 8, 2016. 1 changed file with 5 additions and 0 deletions.

If you're *not* on GCE, you can try either:
1. SSL termination directly with service-loadbalancer: https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/README.md#ssl-termination
2. Nginx ingress controller: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx-third-party#ssl

  6. bprashanth revised this gist Mar 8, 2016. 1 changed file with 1 addition and 1 deletion.

    You can also put an Ingress in front of it:

On older clusters Ingress.Spec.tls is not supported; you only get http. If you want to upgrade your ingress image, on GCE/GKE run `kubectl edit rc --namespace=kube-system l7-lb-controller-v0.5.2` and replace `gcr.io/google_containers/glbc:0.5.2` with `gcr.io/google_containers/glbc:0.6.0`, then kill the pod so the rc starts another one.

  7. bprashanth revised this gist Mar 8, 2016. 1 changed file with 1 addition and 1 deletion.

    You can also put an Ingress in front of it:

__On older clusters Ingress.Spec.tls is not supported; you only get http. If you want to upgrade your ingress image, on GCE/GKE run `kubectl edit rc --namespace=kube-system l7-lb-controller-v0.5.2` and replace "gcr.io/google_containers/glbc:0.5.2" with "gcr.io/google_containers/glbc:0.6.0", then kill the pod so the rc starts another one.__

  8. bprashanth revised this gist Mar 8, 2016. No changes.
  9. bprashanth revised this gist Mar 8, 2016. 1 changed file with 23 additions and 3 deletions.

    You can also put an Ingress in front of it:

On older clusters Ingress.Spec.tls is not supported; you only get http. If you want to upgrade your ingress image, on GCE/GKE run `kubectl edit rc --namespace=kube-system l7-lb-controller-v0.5.2` and replace "gcr.io/google_containers/glbc:0.5.2" with "gcr.io/google_containers/glbc:0.6.0", then kill the pod so the rc starts another one.

    Create a secret (the next few steps are just hacks to get off the ground with secrets using a legacy example that needs fixing):
    ```
    kubernetes-root $ cd examples/https-nginx
kubernetes-root/examples/https-nginx $ make keys secret
...
    writing new private key to '/tmp/nginx.key'
    -----
    godep go run make_secret.go -crt /tmp/nginx.crt -key /tmp/nginx.key > /tmp/secret.json
    ```

You should have a json blob for a secret in /tmp/secret.json; now rename the nginx.crt/key fields to match https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2349 and create the secret.
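For the rename, a quick sed works. This is a sketch assuming the expected key names are `tls.crt`/`tls.key` (the standard TLS secret keys, `api.TLSCertKey`/`api.TLSPrivateKeyKey`); verify against the linked types.go line for your version:
```shell
# Rename the data keys in the generated secret json in place:
sed -i 's/nginx\.crt/tls.crt/g; s/nginx\.key/tls.key/g' /tmp/secret.json
```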
    ```
    $ kubectl create -f /tmp/secret.json
    secret "nginxsecret" created
    ```

Then create the Ingress:
    ```yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
...
```

```
$ kubectl get ing
NAME           RULE      BACKEND      ADDRESS              AGE
no-rules-map   -         haproxy:80   107.some.public.ip   8m

$ for i in 1 2 3 4 5; do curl https://107.some.public.ip/hostname -k --cookie "SERVERID=s1"; echo; done
    hostname-y8itc
    hostname-y8itc
    hostname-y8itc
    hostname-y8itc
    hostname-y8itc
    ```
    Note that this currently requires the following (most importantly the firewall rule): https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites
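The linked prerequisites boil down to a firewall rule letting GCE's loadbalancer/health-check range reach the service nodePorts. A sketch (the rule name is made up, network/tag flags are omitted, and the source range should be double-checked against the linked doc):
```shell
# Allow the GCE L7 proxies and health checks to reach the nodePort range:
gcloud compute firewall-rules create k8s-glbc-nodeports \
  --source-ranges 130.211.0.0/22 \
  --allow tcp:30000-32767
```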

If this doesn't work and you *don't* see both :80 and :443 open under the GCE console -> networking, try kubectl logs:
    ```
    $ kubectl logs l7-lb-controller-v0.6.0-kjas -c l7-lb-controller --follow
    ..
    I0308 03:47:54.712844 1 loadbalancers.go:330] Creating new sslCertificates default-no-rules-map for k8s-ssl-default-no-rules-map
    I0308 03:47:58.539590 1 loadbalancers.go:355] Creating new https proxy for urlmap k8s-um-default-no-rules-map
    I0308 03:48:02.553696 1 loadbalancers.go:397] Creating forwarding rule for proxy [k8s-tps-default-no-rules-map] and ip 107.178.255.11:443-443
    I0308 03:48:10.429696 1 controller.go:325] Updating loadbalancer default/no-rules-map with IP 107.178.255.11
    I0308 03:48:10.433449 1 event.go:211] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"no-rules-map", UID:"72599eb0-e4e0-11e5-9999-42010af00002", APIVersion:"extensions", ResourceVersion:"36676", FieldPath:""}): type: 'Normal' reason: 'CREATE' ip: 107.178.255.11
    ```
  10. bprashanth created this gist Mar 8, 2016.
    Create a backend service that simply serves the pod name, and a frontend haproxy instance that balances based on client cookies.
```yaml
# This is the backend service
apiVersion: v1
kind: Service
metadata:
  name: hostname
  annotations:
    # Enable sticky-ness on "SERVERID"
    serviceloadbalancer/lb.cookie-sticky-session: "true"
  labels:
    app: hostname
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostname
---
# This is the backend
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
      - name: hostname
        image: gcr.io/google_containers/serve_hostname:1.2
        ports:
        - containerPort: 9376
---
# This is the frontend that needs to stick requests to backends
apiVersion: v1
kind: Service
metadata:
  name: haproxy
  labels:
    app: service-loadbalancer
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: service-loadbalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-loadbalancer
  labels:
    app: service-loadbalancer
    version: v1
spec:
  replicas: 1
  selector:
    app: service-loadbalancer
    version: v1
  template:
    metadata:
      labels:
        app: service-loadbalancer
        version: v1
    spec:
      containers:
      - image: gcr.io/google_containers/servicelb:0.4
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        name: haproxy
        ports:
        # All http services
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        # haproxy stats
        - containerPort: 1936
          hostPort: 1936
          protocol: TCP
        args:
        - --default-return-code=200
```
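To bring the whole thing up (the filename here is illustrative; save the manifest above first):
```shell
kubectl create -f sticky.yaml
# Expect 5 hostname backends and 1 haproxy frontend:
kubectl get pods -l app=hostname
kubectl get pods -l app=service-loadbalancer
```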
    In the haproxy pod you should see something like:
```shell
$ kubectl exec service-loadbalancer-pw616 -- cat /etc/haproxy/haproxy.cfg
...
backend hostname
    balance roundrobin
    # TODO: Make the path used to access a service customizable.
    reqrep ^([^\ :]*)\ /hostname[/]?(.*) \1\ /\2

    # insert a cookie with name SERVERID to stick a client with a backend server
    # http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-cookie
    cookie SERVERID insert indirect nocache
    server 10.245.0.7:9376 10.245.0.7:9376 cookie s0 check port 9376 inter 5
    server 10.245.0.8:9376 10.245.0.8:9376 cookie s1 check port 9376 inter 5
    server 10.245.1.3:9376 10.245.1.3:9376 cookie s2 check port 9376 inter 5
    server 10.245.2.4:9376 10.245.2.4:9376 cookie s3 check port 9376 inter 5
    server 10.245.2.5:9376 10.245.2.5:9376 cookie s4 check port 9376 inter 5
```

The important bit is `cookie SERVERID insert indirect nocache`: haproxy inserts the cookie on the response, and subsequent requests carrying it are routed to the matching `server` line.
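You can watch the cookie being set on the first, cookie-less request (the output line is illustrative):
```shell
# No cookie sent, so haproxy picks a backend and tells the client to stick to it:
$ curl -si public-ip-of-node/hostname | grep -i set-cookie
Set-Cookie: SERVERID=s2
```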

You can test the stickiness:
    ```shell
    $ for i in 1 2 3 4 5; do curl public-ip-of-node/hostname; echo; done
    hostname-fiecu
    hostname-lc6tg
    hostname-wrzrk
    hostname-qotbq
    hostname-8smz0

    $ for i in 1 2 3 4 5; do curl public-ip-of-node/hostname --cookie "SERVERID=s1"; echo; done
    hostname-wrzrk
    hostname-wrzrk
    hostname-wrzrk
    hostname-wrzrk
    hostname-wrzrk
    ```

    You can also put an Ingress in front of it:

    Create a secret
    ```
    kubernetes-root $ cd examples/https-nginx
kubernetes-root/examples/https-nginx $ make keys secret
# The CN used here is specific to the service specified in nginx-app.yaml.
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc/O=nginxsvc"
    Generating a 2048 bit RSA private key
    ...............+++
    ................................+++
    writing new private key to '/tmp/nginx.key'
    -----
    godep go run make_secret.go -crt /tmp/nginx.crt -key /tmp/nginx.key > /tmp/secret.json
    $ kubectl create -f /tmp/secret.json
    secret "nginxsecret" created
    ```
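A quick sanity check on what was created (a sketch):
```shell
# Show the secret's data section; the key names are what the controller will look for:
kubectl get secret nginxsecret -o yaml | grep -A 4 '^data:'
```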

And create the Ingress (on older clusters Ingress.Spec.tls is not supported; you only get http):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  # A badly named secret
  - secretName: nginxsecret
  backend:
    serviceName: haproxy
    servicePort: 80
```
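Assuming the Ingress above is saved as `ingress.yaml` (an illustrative filename):
```shell
kubectl create -f ingress.yaml
# The events here show the controller creating the GCE resources:
kubectl describe ing no-rules-map
```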
    On GCE, you'll need to wait till the loadbalancer warms up (O(10m)):
    ```
    $ kubectl get ing
NAME           RULE      BACKEND      ADDRESS              AGE
no-rules-map   -         haproxy:80   107.some.public.ip   8m

    $ for i in 1 2 3 4 5; do curl 107.some.public.ip/hostname --cookie "SERVERID=s1"; echo; done
    ```
    Note that this currently requires the following (most importantly the firewall rule): https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites
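While the loadbalancer warms up, a simple poll (a sketch) shows when the ADDRESS column fills in:
```shell
# Re-list the ingress every 30s until the loadbalancer IP appears:
watch -n 30 kubectl get ing no-rules-map
```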