
@kenmoini
Forked from tmckayus/remote_crc.md
Created April 6, 2020 02:32

Revisions

  1. @tmckayus revised this gist Jan 21, 2020. 1 changed file with 3 additions and 11 deletions.
    14 changes: 3 additions & 11 deletions remote_crc.md
    @@ -1,5 +1,5 @@
    # Overview: running crc on a remote server
    This document shows how to deploy an OpenShift instance on a server using CodeReady Containers (crc) that can be accessed remotely from one or more client machines (sometimes called a "headless" instance). This provides a low-cost test and development platform that can be shared by developers and can persist for up to 30 days (see information below concerning certificates). Deploying this way also allows a user to create an instance that uses more cpu and memory resources than may be available on his or her laptop.
    This document shows how to deploy an OpenShift instance on a server using CodeReady Containers (crc) that can be accessed remotely from one or more client machines (sometimes called a "headless" instance). This provides a low-cost test and development platform that can be shared by developers. Deploying this way also allows a user to create an instance that uses more cpu and memory resources than may be available on his or her laptop.

    While there are benefits to this type of deployment, please note that the *primary* use case for crc is to deploy a local OpenShift instance on a workstation or laptop and access it directly from the same machine. The headless setup is configured completely outside of crc itself, and supporting a headless setup is beyond the mission of the crc development team. *Please do not ask for changes to crc to support this type of deployment, it will only cost the team time as they politely decline* :)

    @@ -170,14 +170,6 @@ $ oc login -u kubeadmin -p <kubeadmin password> https://api.crc.testing:6443

    The OpenShift console will be available at https://console-openshift-console.apps-crc.testing

    # Certificates Expire 30 days after Release
    # Renewal of expired certificates

    CodeReady Containers provides an opinionated deployment of OpenShift, and as such releases are only good for 30 days. After that you must download a new crc and create a new cluster. This ensures that everyone is running on a recent, supported release.

    You can get a quick idea of how old your release is by looking at the age of the OpenShift node:

    ```
    oc get nodes
    NAME STATUS ROLES AGE VERSION
    crc-vsqrt-master-0 Ready master,worker 19d v1.13.4+3bd346709
    ```
    Beginning in version 1.2.0, CodeReady Containers will renew embedded certificates when they expire (prior to 1.2.0 it was necessary to download and install a new version). When the certificates need to be renewed, this will be noted in the CRC log output and may take up to 5 minutes.
  2. @tmckayus revised this gist Oct 8, 2019. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions remote_crc.md
    @@ -65,6 +65,8 @@ $ sudo semanage port -a -t http_port_t -p tcp 6443

    # Configure haproxy on the server

    The steps below will create an haproxy config file with placeholders, update the *SERVER_IP* and *CRC_IP* using *sed*, and copy the new file to the correct location. If you would like to edit the file manually, feel free :)

    ```
    $ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
    $ tee haproxy.cfg &>/dev/null <<EOF
  3. @tmckayus revised this gist Oct 8, 2019. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion remote_crc.md
    @@ -1,4 +1,4 @@
    # Overview
    # Overview: running crc on a remote server
    This document shows how to deploy an OpenShift instance on a server using CodeReady Containers (crc) that can be accessed remotely from one or more client machines (sometimes called a "headless" instance). This provides a low-cost test and development platform that can be shared by developers and can persist for up to 30 days (see information below concerning certificates). Deploying this way also allows a user to create an instance that uses more cpu and memory resources than may be available on his or her laptop.

    While there are benefits to this type of deployment, please note that the *primary* use case for crc is to deploy a local OpenShift instance on a workstation or laptop and access it directly from the same machine. The headless setup is configured completely outside of crc itself, and supporting a headless setup is beyond the mission of the crc development team. *Please do not ask for changes to crc to support this type of deployment, it will only cost the team time as they politely decline* :)
  4. @tmckayus revised this gist Oct 8, 2019. 1 changed file with 1 addition and 90 deletions.
    91 changes: 1 addition & 90 deletions remote_crc.md
    @@ -178,93 +178,4 @@ You can get a quick idea of how old your release is by looking at the age of the
    oc get nodes
    NAME STATUS ROLES AGE VERSION
    crc-vsqrt-master-0 Ready master,worker 19d v1.13.4+3bd346709
    ```

    # Adding Insecure Registries (and other things that require a node reboot)

    ## Overview

    Since crc deploys the OpenShift instance in a single VM, certain configuration changes
    can't be applied using the normal OpenShift mechanism. In a full deployment with multiple
    nodes, OpenShift coordinates the update and reboot of each node in the cluster, evacuating
    and rescheduling pods as it goes so everything continues to run. With a single VM this isn't
    possible, so some configuration changes have to be made manually.

    ## Registry configuration changes

    A change to the registry configuration is one of the types of changes that must be
    done manually with crc. Here is an example of how to add *my.fav.vendor* as an insecure
    registry (other registry changes in the same file are done the same way, like
    blocking registries or adding registries to search)

    As the kubeadmin user, get the name of the OpenShift node and enter a debug shell:

    ```
    $ oc get nodes
    NAME STATUS ROLES AGE VERSION
    crc-vsqrt-master-0 Ready master,worker 19d v1.13.4+3bd346709
    $ oc debug node/crc-vsqrt-master-0
    Starting pod/crc-vsqrt-master-0-debug ...
    To use host binaries, run `chroot /host`
    If you don't see a command prompt, try pressing enter.
    sh-4.2# chroot /host
    sh-4.4#
    ```

    Pro-tip, at this point we are going to use *vi* to edit a file. It seems
    that you will have a much better experience editing files if you resize your terminal at
    this point. It doesn't matter *what size*, just resize it :)

    ```
    sh-4.4# more /etc/containers/registries.conf
    [registries.search]
    registries = ['registry.access.redhat.com', 'docker.io']
    [registries.insecure]
    registries = []
    [registries.block]
    registries = []
    sh-4.4# vi /etc/containers/registries.conf
    ```

    Change the insecure registries section to look like this:
    ```
    [registries.insecure]
    registries = ['my.fav.vendor']
    ```
    Now exit out of the debug shell, stop crc, and restart crc to make
    the change take effect:
    ```
    sh-4.4# exit
    exit
    sh-4.2# exit
    exit
    Removing debug pod ...
    $ crc stop
    Stopping CodeReady Containers instance... this may take a few minutes
    CodeReady Containers instance stopped
    $ crc start
    ```

    # Pro-tip on searching OpenShift logs

    To search quickly for content in the OpenShift logs across all pods and
    namespaces, enter a debug shell and grep in */var/log*. This can be
    a handy way to search for errors when you're not quite sure what's
    going on with your application, you've already examined pods and
    events to no avail, and you don't know where to turn. Be warned, though,
    the output may be voluminous, so it helps to have some clue what you're
    looking for.

    ```
    $ oc debug node/crc-vsqrt-master-0
    Starting pod/crc-vsqrt-master-0-debug ...
    To use host binaries, run `chroot /host`
    If you don't see a command prompt, try pressing enter.
    sh-4.2# chroot /host
    sh-4.4# cd /var/log
    sh-4.4# grep -r mypodname .
    ```
  5. @tmckayus created this gist Oct 8, 2019.
    270 changes: 270 additions & 0 deletions remote_crc.md
    @@ -0,0 +1,270 @@
    # Overview
    This document shows how to deploy an OpenShift instance on a server using CodeReady Containers (crc) that can be accessed remotely from one or more client machines (sometimes called a "headless" instance). This provides a low-cost test and development platform that can be shared by developers and can persist for up to 30 days (see information below concerning certificates). Deploying this way also allows a user to create an instance that uses more CPU and memory resources than may be available on his or her laptop.

    While there are benefits to this type of deployment, please note that the *primary* use case for crc is to deploy a local OpenShift instance on a workstation or laptop and access it directly from the same machine. The headless setup is configured completely outside of crc itself, and supporting a headless setup is beyond the mission of the crc development team. *Please do not ask for changes to crc to support this type of deployment; it will only cost the team time as they politely decline* :)

    The instructions here were tested with Fedora on both the server (F30) and a laptop (F29).

    # Thanks to

    Thanks to Marcel Wysocki from Red Hat for the haproxy solution, and to the entire CodeReady Containers team for crc!

    # Useful links
    [Red Hat blog article on CodeReady Containers](https://developers.redhat.com/blog/2019/09/05/red-hat-openshift-4-on-your-laptop-introducing-red-hat-codeready-containers/)

    [Download page on cloud.redhat.com](https://cloud.redhat.com/openshift/install/crc/installer-provisioned)

    [CRC documentation on github.io](https://code-ready.github.io/crc/)

    [Project sources on github](https://github.com/code-ready/crc)

    # Download and setup CRC on a server

    Go to the [download page](https://cloud.redhat.com/openshift/install/crc/installer-provisioned) and get crc for Linux. You’ll also need the pull secret listed there during the installation process. Make sure to copy the *crc* binary to */usr/local/bin* or somewhere on your path.
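
    For reference, unpacking the archive and putting *crc* on the path might look something like this (a minimal sketch; the archive and directory names vary by release, so adjust them to match what you actually downloaded):

    ```
    $ tar xvf crc-linux-amd64.tar.xz
    $ sudo cp crc-linux-*-amd64/crc /usr/local/bin/
    $ crc version
    ```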

    The initial setup command only needs to be run once, and it creates a *~/.crc* directory. Your user must have *sudo* privileges since crc will install dependencies for libvirt and modify the NetworkManager config:

    ```
    $ crc setup
    ```
    *Note: occasionally on some systems this may fail with “Failed to restart NetworkManager”. Just rerun crc setup a few times until it succeeds.*

    # Create an OpenShift Instance with CRC

    ```
    $ crc start
    ```
    You will be asked for the pull secret from the download page; paste it at the prompt.

    Optionally, use the -m and -c flags to increase the VM size, for example a VM with 32GiB of memory and 8 CPUs:

    ```
    $ crc start -m 32768 -c 8
    ```
    See the [documentation](https://code-ready.github.io/crc/) or *crc -h* for other things you can do.


    If you want to just use crc locally on this machine, you can stop here; you’re all set!

    # Make sure you have haproxy and a few other things

    ```
    sudo dnf -y install haproxy policycoreutils-python-utils jq
    ```

    # Modify the firewall on the server

    ```
    $ sudo systemctl start firewalld
    $ sudo firewall-cmd --add-port=80/tcp --permanent
    $ sudo firewall-cmd --add-port=6443/tcp --permanent
    $ sudo firewall-cmd --add-port=443/tcp --permanent
    $ sudo systemctl restart firewalld
    $ sudo semanage port -a -t http_port_t -p tcp 6443
    ```
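
    As an optional sanity check, *firewall-cmd* can list the ports that are now open; the list should include 80/tcp, 443/tcp and 6443/tcp:

    ```
    $ sudo firewall-cmd --list-ports
    ```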

    # Configure haproxy on the server

    ```
    $ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
    $ tee haproxy.cfg &>/dev/null <<EOF
    global
    debug
    defaults
    log global
    mode http
    timeout connect 5000
    timeout client 5000
    timeout server 5000
    frontend apps
    bind SERVER_IP:80
    bind SERVER_IP:443
    option tcplog
    mode tcp
    default_backend apps
    backend apps
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP check
    frontend api
    bind SERVER_IP:6443
    option tcplog
    mode tcp
    default_backend api
    backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check
    EOF
    $ export SERVER_IP=$(hostname --ip-address)
    $ export CRC_IP=$(crc ip)
    $ sed -i "s/SERVER_IP/$SERVER_IP/g" haproxy.cfg
    $ sed -i "s/CRC_IP/$CRC_IP/g" haproxy.cfg
    $ sudo cp haproxy.cfg /etc/haproxy/haproxy.cfg
    $ sudo systemctl start haproxy
    ```
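
    Optionally, enable haproxy so it comes back after a reboot, and confirm it is listening on the forwarded ports (*ss* is part of iproute and should already be present on a Fedora server):

    ```
    $ sudo systemctl enable haproxy
    $ sudo ss -tlnp | grep haproxy
    ```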

    # Set up NetworkManager on the client machine

    NetworkManager needs to be configured to use dnsmasq for DNS. Make sure you have dnsmasq installed:

    ```
    $ sudo dnf install dnsmasq
    ```

    Add a file to */etc/NetworkManager/conf.d* to enable use of dnsmasq. (Some systems may already have this setting in an existing file, depending on what's been done in the past. If that's the case, continue on without creating a new file.)

    ```
    $ sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf &>/dev/null <<EOF
    [main]
    dns=dnsmasq
    EOF
    ```

    Add dns entries for crc:

    ```
    $ tee external-crc.conf &>/dev/null <<EOF
    address=/apps-crc.testing/SERVER_IP
    address=/api.crc.testing/SERVER_IP
    EOF
    $ export SERVER_IP="your server's external IP address"
    $ sed -i "s/SERVER_IP/$SERVER_IP/g" external-crc.conf
    $ sudo cp external-crc.conf /etc/NetworkManager/dnsmasq.d/external-crc.conf
    $ sudo systemctl reload NetworkManager
    ```
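
    Once NetworkManager has reloaded, the crc hostnames should resolve to your server's IP from the client. A quick check with *dig* (from bind-utils, if you have it installed) might look like this; both queries should return the SERVER_IP value you set above:

    ```
    $ dig +short api.crc.testing
    $ dig +short console-openshift-console.apps-crc.testing
    ```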

    *Note: if you've previously run crc locally on the client machine, you likely have a /etc/NetworkManager/dnsmasq.d/crc.conf file that sets up dns for a local VM. Comment out those entries.*
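
    A one-liner along these lines would comment out the old entries (a sketch that assumes the file only contains *address=* lines for the local VM, so review it before running):

    ```
    $ sudo sed -i 's/^address=/#address=/' /etc/NetworkManager/dnsmasq.d/crc.conf
    $ sudo systemctl reload NetworkManager
    ```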

    # Get the oc binary on the client machine

    If you don't already have it, you can get the *oc* client here:

    https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/
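
    On a Linux client, fetching it might look roughly like this (the exact tarball name on the mirror can change between releases, so check the directory listing first):

    ```
    $ curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
    $ tar xzf openshift-client-linux.tar.gz oc
    $ sudo cp oc /usr/local/bin/
    ```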

    # Log in to the OpenShift instance from the client machine

    The password for the kubeadmin account is printed when crc starts, but if you don't have it
    handy, you can run this as the user running crc on the server:

    ```
    $ crc console --credentials
    ```

    Now just log in to OpenShift from your client machine using the standard crc URL:

    ```
    $ oc login -u kubeadmin -p <kubeadmin password> https://api.crc.testing:6443
    ```

    The OpenShift console will be available at https://console-openshift-console.apps-crc.testing
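
    A couple of ordinary *oc* commands make a quick smoke test of the remote connection:

    ```
    $ oc whoami          # should print kubeadmin
    $ oc get nodes       # should list the single crc node
    ```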

    # Certificates Expire 30 days after Release

    CodeReady Containers provides an opinionated deployment of OpenShift, and as such releases are only good for 30 days. After that you must download a new crc and create a new cluster. This ensures that everyone is running on a recent, supported release.

    You can get a quick idea of how old your release is by looking at the age of the OpenShift node:

    ```
    oc get nodes
    NAME                 STATUS   ROLES           AGE   VERSION
    crc-vsqrt-master-0   Ready    master,worker   19d   v1.13.4+3bd346709
    ```

    # Adding Insecure Registries (and other things that require a node reboot)

    ## Overview

    Since crc deploys the OpenShift instance in a single VM, certain configuration changes
    can't be applied using the normal OpenShift mechanism. In a full deployment with multiple
    nodes, OpenShift coordinates the update and reboot of each node in the cluster, evacuating
    and rescheduling pods as it goes so everything continues to run. With a single VM this isn't
    possible, so some configuration changes have to be made manually.

    ## Registry configuration changes

    A change to the registry configuration is one of the types of changes that must be
    done manually with crc. Here is an example of how to add *my.fav.vendor* as an insecure
    registry (other registry changes in the same file, such as blocking registries or
    adding registries to search, are done the same way).

    As the kubeadmin user, get the name of the OpenShift node and enter a debug shell:

    ```
    $ oc get nodes
    NAME                 STATUS   ROLES           AGE   VERSION
    crc-vsqrt-master-0   Ready    master,worker   19d   v1.13.4+3bd346709
    $ oc debug node/crc-vsqrt-master-0
    Starting pod/crc-vsqrt-master-0-debug ...
    To use host binaries, run `chroot /host`
    If you don't see a command prompt, try pressing enter.
    sh-4.2# chroot /host
    sh-4.4#
    ```

    Pro-tip: at this point we are going to use *vi* to edit a file. You will
    have a much better experience editing files if you resize your terminal
    first. It doesn't matter *what size*, just resize it :)

    ```
    sh-4.4# more /etc/containers/registries.conf
    [registries.search]
    registries = ['registry.access.redhat.com', 'docker.io']
    [registries.insecure]
    registries = []
    [registries.block]
    registries = []
    sh-4.4# vi /etc/containers/registries.conf
    ```

    Change the insecure registries section to look like this:
    ```
    [registries.insecure]
    registries = ['my.fav.vendor']
    ```
    Now exit out of the debug shell, stop crc, and restart crc to make
    the change take effect:
    ```
    sh-4.4# exit
    exit
    sh-4.2# exit
    exit
    Removing debug pod ...
    $ crc stop
    Stopping CodeReady Containers instance... this may take a few minutes
    CodeReady Containers instance stopped
    $ crc start
    ```

    # Pro-tip on searching OpenShift logs

    To search quickly for content in the OpenShift logs across all pods and
    namespaces, enter a debug shell and grep in */var/log*. This can be
    a handy way to search for errors when you're not quite sure what's
    going on with your application, you've already examined pods and
    events to no avail, and you don't know where to turn. Be warned, though:
    the output may be voluminous, so it helps to have some clue what you're
    looking for.

    ```
    $ oc debug node/crc-vsqrt-master-0
    Starting pod/crc-vsqrt-master-0-debug ...
    To use host binaries, run `chroot /host`
    If you don't see a command prompt, try pressing enter.
    sh-4.2# chroot /host
    sh-4.4# cd /var/log
    sh-4.4# grep -r mypodname .
    ```
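
    If you have a rough idea of the pod you're after, narrowing the search to */var/log/containers* (where the kubelet links per-container logs on a typical OpenShift 4 node) cuts down the noise considerably:

    ```
    sh-4.4# grep -rl mypodname /var/log/containers | head
    ```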