
@platu
Last active February 26, 2025 09:26

Revisions

  1. platu revised this gist Feb 20, 2024. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion lab15+ansible+incus.md
    @@ -602,7 +602,7 @@ This is why we need to extract network addresses from the YAML configuration fil
    ```bash=
    cat << 'EOF' > incus_fetch.yml
    ---
    - name: BUILD CONTAINERS DYNAMIC INVENTORY
    hosts: webserver
    tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
  2. platu revised this gist Feb 19, 2024. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion lab15+ansible+incus.md
    @@ -1,7 +1,7 @@
    ---
    tags: m1, Devnet, incus, linux, lab15
    ---
    # DevNet Lab 15 -- Using Ansible to automate the installation of Web servers on Incus system containers

    [toc]

  3. platu revised this gist Feb 19, 2024. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion lab15+ansible+incus.md
    @@ -7,7 +7,7 @@ tags: m1, Devnet, incus, linux, lab15

    ---

    ## Background / Scenario

    In this lab, you will first configure Ansible to communicate with a virtual machine hosting web servers in Incus system containers. You will create playbooks that automate the process of installing Incus on the Web server VM and build a dynamic inventory with a Python script. You will also create a custom playbook that installs Apache with specific instructions on each container.

  4. platu revised this gist Feb 19, 2024. 1 changed file with 171 additions and 249 deletions.
    420 changes: 171 additions & 249 deletions lab15+ansible+incus.md
    @@ -1,63 +1,6 @@
    ---
    tags: m1, Devnet, incus, linux, lab15
    ---
    # Devnet Lab 15 -- Using Ansible to automate the installation of Web servers on Incus system containers

    [toc]
    @@ -81,8 +24,8 @@ In this lab, you will first configure Ansible to communicate with a virtual mach
    ```

    ```bash=
    '/home/etudianttest/masters/debian-testing-amd64.qcow2' -> './debian-testing-amd64.qcow2'
    '/home/etudianttest/masters/debian-testing-amd64.qcow2_OVMF_VARS.fd' -> './debian-testing-amd64.qcow2_OVMF_VARS.fd'
    ```

    ```bash
    @@ -106,13 +49,13 @@ In this lab, you will first configure Ansible to communicate with a virtual mach
    ```

    ```bash=
    ~> Virtual machine filename : webserver-host.qcow2
    ~> RAM size : 4096MB
    ~> SPICE VDI port number : 59XX
    ~> telnet console port number : 23XX
    ~> MAC address : b8:ad:ca:fe:00:XX
    ~> Switch port interface : tapXX, access mode
    ~> IPv6 LL address : fe80::baad:caff:fefe:XX%vlanYYY
    ```

    3. Open an SSH connection to the Web server VM.
    @@ -133,12 +76,12 @@ The web server hosting VM is now ready for Ansible automation. First, we need to
    1. Create the `~/labs/lab15` directory, for example, and navigate to it

    ```bash
    mkdir -p ~/labs/lab15 && cd ~/labs/lab15
    ```

    2. Check that the **ansible** package is installed

    If the **ansible** package is not already installed on your Devnet VM, it's time to do so.
    ```bash
    apt show ansible | head -n 10
    @@ -150,23 +93,23 @@ The web server hosting VM is now ready for Ansible automation. First, we need to
    Priority: optional
    Section: universe/admin
    Origin: Ubuntu
    Maintainer: Ubuntu Developers <[email protected]>
    Original-Maintainer: Lee Garrett <[email protected]>
    Bugs: https://bugs.launchpad.net/ubuntu/+filebug
    Installed-Size: 263 MB
    Depends: ansible-core (>= 2.11.5-1~), python3:any, openssh-client | python3-paramiko (>= 2.6.0), python3-distutils, python3-dnspython, python3-httplib2, python3-jinja2, python3-netaddr, python3-yaml
    ```
    3. Create a new `ansible.cfg` file in the `lab15` directory from the shell prompt
    ```bash=
    cat << 'EOF' > ansible.cfg
    # config file for Lab 15 Web Servers management
    [defaults]
    # Use inventory/ folder files as source
    inventory=inventory/
    host_key_checking = False # Don't worry about RSA Fingerprints
    retry_files_enabled = False # Do not create them
    deprecation_warnings = False # Do not show warnings
    interpreter_python = /usr/bin/python3
    @@ -198,11 +141,11 @@ ssh etu@fe80::baad:caff:fefe:XXX%enp0s1
    ```

    ```bash=
    The authenticity of host 'fe80::baad:caff:fefe:XXX%enp0s1 (fe80::baad:caff:fefe:XXX%enp0s1)' can't be established.
    ED25519 key fingerprint is SHA256:yFLaZk+OfY7z7bHyHPXgjowRS4KMHjfoMQxracRdG9M.
    This key is not known by any other names.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
    Linux webserver-host 6.6.13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.13-1 (2024-01-20) x86_64
    ```
    @@ -232,16 +175,16 @@ webserver_user_pass: XXXXXXXXXX
    Create the inventory file `inventory/hosts.yml` with the IP address of your Web server VM.
    ```bash=
    cat << 'EOF' > inventory/hosts.yml
    ---
    vms:
    hosts:
    webserver:
    ansible_host: 'fe80::baad:caff:fefe:XXX%enp0s1'
    vars:
    ansible_ssh_user: etu
    ansible_ssh_pass: '{{ webserver_user_pass }}'
    ansible_become_pass: '{{ webserver_user_pass }}'
    all:
    children:
    @@ -260,9 +203,9 @@ Vault password:
    ```
    ```bash=
    webserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    ```
    @@ -278,7 +221,7 @@ In order to be able to launch system containers and configure Web services in th
    Create the `incus_init.yml` file and add the following information to it. Make sure you use proper YAML indentation. Every space and dash is significant. You may lose some formatting if you copy and paste.
    ```bash=
    cat << 'EOF' > incus_init.yml
    ---
    - name: INCUS INSTALLATION AND INITIALIZATION
    hosts: webserver
    @@ -292,7 +235,7 @@ cat << 'EOF' > incus_init.yml
    - name: ADD USER TO INCUS SYSTEM GROUPS
    ansible.builtin.user:
    name: '{{ ansible_ssh_user }}'
    groups:
    - incus
    - incus-admin
    @@ -306,18 +249,18 @@ cat << 'EOF' > incus_init.yml
    - name: INITIALIZE LXD
    ansible.builtin.shell: |
    set -o pipefail
    cat << EOT | incus admin init --preseed
    config:
    core.https_address: '[::]:8443'
    networks: []
    storage_pools:
    - config: {}
    description: ""
    name: default
    driver: dir
    profiles:
    - config: {}
    description: ""
    devices:
    eth0:
    name: eth0
    @@ -394,26 +337,26 @@ In part 2 step 2, we have created the `inventory/hosts.yml` file that defines al
    Now, we have to create a new inventory file named `inventory/lab.yml` which defines all the system container parameters. The purpose here is to be able to run Ansible playbooks within these containers.
    ```bash=
    cat << 'EOF' > inventory/lab.yml
    ---
    containers:
    hosts:
    web[01:04]:
    vars:
    ansible_ssh_user: webuser
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_become_pass: '{{ webuser_pass }}'
    EOF
    ```
    > Note: This inventory file is incomplete as it does not define the `ansible_host` variable for each container.
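
    As an aside, the `web[01:04]` range pattern in this inventory expands to the four hostnames web01 through web04. Ansible performs this expansion internally; the following Python sketch is purely illustrative of the same rule:

    ```python=
    # Illustrative only: Ansible expands the web[01:04] inventory pattern itself.
    hosts = [f"web{n:02d}" for n in range(1, 5)]
    print(hosts)  # ['web01', 'web02', 'web03', 'web04']
    ```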
    ## Step 2: Add a new entry in Ansible vault for container access
    In the previous step, we defined a user named `webuser` whose password is stored in the `webuser_pass` variable.
    We must add the corresponding entry in the Ansible vault file named `$HOME/lab15_passwd.yml`.
    ```bash
    ansible-vault edit $HOME/lab15_passwd.yml
    @@ -430,38 +373,38 @@ webuser_pass: XXXXXXXXXX
    ## Step 3: Create an Ansible playbook to launch and configure access to containers

    ```bash=
    cat << 'EOF' > incus_launch.yml
    ---
    - name: LAUNCH INCUS CONTAINERS, SETUP USER ACCOUNT AND SSH SERVICE
    hosts: webserver
    tasks:
    - name: LAUNCH INCUS CONTAINERS
    ansible.builtin.shell: |
    set -o pipefail
    if ! incus ls -c n | grep -q "{{ item }}"
    then
    incus launch images:debian/trixie "{{ item }}"
    touch $HOME/"{{ item }}_launched"
    fi
    args:
    chdir: $HOME
    creates: "{{ item }}_launched"
    with_inventory_hostnames:
    - all:!webserver
    - name: SETUP USER ACCOUNT AND SSH SERVICE
    ansible.builtin.shell: |
    set -o pipefail
    incus exec "{{ item }}" -- bash -c "if ! grep -q webuser /etc/passwd; then adduser --quiet --gecos \"\" --disabled-password webuser; fi"
    incus exec "{{ item }}" -- bash -c "chpasswd <<<\"webuser:{{ webuser_pass }}\""
    incus exec "{{ item }}" -- bash -c "if ! id webuser | grep -qo sudo; then adduser --quiet webuser sudo; fi"
    incus exec "{{ item }}" -- apt update
    incus exec "{{ item }}" -- apt install -y openssh-server python3 python3-apt
    incus exec "{{ item }}" -- apt clean
    touch $HOME/"{{ item }}_configured"
    args:
    chdir: $HOME
    creates: "{{ item }}_configured"
    with_inventory_hostnames:
    - all:!webserver
    EOF
    @@ -488,16 +431,16 @@ TASK [Gathering Facts] *********************************************************
    ok: [webserver]
    TASK [LAUNCH INCUS CONTAINERS] ***********************************************************
    ok: [webserver] => (item=web01)
    ok: [webserver] => (item=web02)
    ok: [webserver] => (item=web03)
    ok: [webserver] => (item=web04)
    TASK [SETUP USER ACCOUNT AND SSH SERVICE] ************************************************
    ok: [webserver] => (item=web01)
    ok: [webserver] => (item=web02)
    ok: [webserver] => (item=web03)
    ok: [webserver] => (item=web04)
    PLAY RECAP *******************************************************************************
    webserver : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    @@ -515,13 +458,13 @@ Here is a short new playbook named `incus_fetch.yml` which will retrieve configu


    ```bash=
    cat << 'EOF' > incus_fetch.yml
    ---
    - name: BUILD CONTAINERS DYNAMIC INVENTORY
    hosts: webserver
    tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
    ansible.builtin.shell: incus --format yaml ls > container_config.yml
    args:
    chdir: $HOME
    creates: container_config.yml
    @@ -580,15 +523,15 @@ This is why we need to extract network addresses from the YAML configuration fil
    import yaml
    with open('container_config.yml','r') as yaml_file:
    containers = yaml.safe_load(yaml_file)
    # look for the container 'name' key and then the network 'addresses' for each
    # container
    for container in containers:
    print(f"Container: {container['name']}")
    for addresses in container['state']['network']['eth0']['addresses']:
    print(f" Addresses: {addresses}")
    ```
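
    As a side note, the script above assumes that every container reports an `eth0` interface with addresses. A slightly more defensive variant, sketched here against the same `container_config.yml` layout, skips containers without network state instead of raising a `KeyError`:

    ```python=
    # Hedged sketch: same traversal as above, but tolerant of containers that
    # are stopped or expose no eth0 interface in the YAML dump.
    import yaml

    with open('container_config.yml', 'r') as yaml_file:
        containers = yaml.safe_load(yaml_file)

    for container in containers:
        print(f"Container: {container['name']}")
        network = (container.get('state') or {}).get('network') or {}
        for address in network.get('eth0', {}).get('addresses', []):
            print(f"  Addresses: {address}")
    ```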
    ```bash
    @@ -597,21 +540,21 @@ This is why we need to extract network addresses from the YAML configuration fil
    ```bash=
    Container: web01
    Addresses: {'family': 'inet', 'address': '198.18.28.122', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fea4:95b7', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fea4:95b7', 'netmask': '64', 'scope': 'link'}
    Container: web02
    Addresses: {'family': 'inet', 'address': '198.18.28.70', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe6e:7a91', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe6e:7a91', 'netmask': '64', 'scope': 'link'}
    Container: web03
    Addresses: {'family': 'inet', 'address': '198.18.28.69', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe40:705', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe40:705', 'netmask': '64', 'scope': 'link'}
    Container: web04
    Addresses: {'family': 'inet', 'address': '198.18.28.193', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe1b:b041', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe1b:b041', 'netmask': '64', 'scope': 'link'}
    ```
    2. Second attempt: format YAML output
    @@ -622,17 +565,17 @@ This is why we need to extract network addresses from the YAML configuration fil
    import yaml
    with open('container_config.yml','r') as yaml_file:
    containers = yaml.safe_load(yaml_file)
    print('containers:')
    print(' hosts:')
    for container in containers:
    print(f" {container['name']}:")
    for addresses in container['state']['network']['eth0']['addresses']:
    # print IPv6 local link address
    if addresses['family'] == 'inet6' and addresses['scope'] == 'link':
    print(f" ansible_host: '{addresses['address']}%enp0s1'")
    ```
    ```bash
    @@ -643,27 +586,27 @@ This is why we need to extract network addresses from the YAML configuration fil
    containers:
    hosts:
    web01:
    ansible_host: 'fe80::216:3eff:fea4:95b7%enp0s1'
    web02:
    ansible_host: 'fe80::216:3eff:fe6e:7a91%enp0s1'
    web03:
    ansible_host: 'fe80::216:3eff:fe40:705%enp0s1'
    web04:
    ansible_host: 'fe80::216:3eff:fe1b:b041%enp0s1'
    ```
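
    Instead of hand-printing the indentation, the script above could also build a dictionary and let PyYAML serialize it, which guarantees syntactically valid YAML. A sketch under the same assumptions as the original script:

    ```python=
    # Alternative sketch: emit the inventory with yaml.safe_dump() instead of print().
    import yaml

    with open('container_config.yml', 'r') as yaml_file:
        containers = yaml.safe_load(yaml_file)

    inventory = {'containers': {'hosts': {}}}
    for container in containers:
        for address in container['state']['network']['eth0']['addresses']:
            # keep the IPv6 link-local address, as in the original script
            if address['family'] == 'inet6' and address['scope'] == 'link':
                inventory['containers']['hosts'][container['name']] = {
                    'ansible_host': f"{address['address']}%enp0s1"}

    print(yaml.safe_dump(inventory, default_flow_style=False), end='')
    ```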
    3. Run the Python script from the `incus_fetch.yml` Ansible playbook
    Here's a new version of the playbook with an additional task that creates the `containers.yml` file in the `inventory/` directory.
    ```bash=
    cat << 'EOF' > incus_fetch.yml
    ---
    - name: BUILD CONTAINERS DYNAMIC INVENTORY
    hosts: webserver
    tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
    ansible.builtin.shell: incus --format yaml ls > container_config.yml
    args:
    chdir: $HOME
    creates: container_config.yml
    @@ -676,7 +619,7 @@ This is why we need to extract network addresses from the YAML configuration fil
    - name: ADD INCUS CONTAINERS ADDRESSES TO INVENTORY
    ansible.builtin.script:
    build_inventory.py > inventory/containers.yml
    delegate_to: localhost
    EOF
    ```
    @@ -701,7 +644,7 @@ This is why we need to extract network addresses from the YAML configuration fil
    ok: [webserver]
    TASK [ADD INCUS CONTAINERS ADDRESSES TO INVENTORY] ********************
    changed: [webserver -> localhost]
    PLAY RECAP ************************************************************
    webserver : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    @@ -729,32 +672,32 @@ all:
    containers:
    hosts:
    web01:
    ansible_become_pass: '{{ webuser_pass }}'
    ansible_host: fe80::216:3eff:fea4:95b7%enp0s1
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_ssh_user: webuser
    web02:
    ansible_become_pass: '{{ webuser_pass }}'
    ansible_host: fe80::216:3eff:fe6e:7a91%enp0s1
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_ssh_user: webuser
    web03:
    ansible_become_pass: '{{ webuser_pass }}'
    ansible_host: fe80::216:3eff:fe40:705%enp0s1
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_ssh_user: webuser
    web04:
    ansible_become_pass: '{{ webuser_pass }}'
    ansible_host: fe80::216:3eff:fe1b:b041%enp0s1
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_ssh_user: webuser
    ungrouped: {}
    vms:
    hosts:
    webserver:
    ansible_become_pass: '{{ webserver_user_pass }}'
    ansible_host: fe80::baad:caff:fefe:XXX%enp0s1
    ansible_ssh_pass: '{{ webserver_user_pass }}'
    ansible_ssh_user: etu
    ```

    @@ -768,39 +711,39 @@ Vault password:
    ```

    ```bash=
    web02 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    web04 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    web01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    web03 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    ```

    Another way to check SSH access to the containers is to use the **command** module instead of **ping**.

    ```bash
    ansible containers -m command -a "/bin/echo Hello, World!" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    web01 | CHANGED | rc=0 >>
    Hello, World!
    web04 | CHANGED | rc=0 >>
    Hello, World!
    web02 | CHANGED | rc=0 >>
    Hello, World!
    web03 | CHANGED | rc=0 >>
    Hello, World!
    ```

    @@ -811,7 +754,7 @@ In this Part, you will create and automate the installation of Apache webserver
    ## Step 1: Create the `install_apache.yml` playbook

    ```bash=
    cat << 'EOF' > install_apache.yml
    ---
    - name: INSTALL APACHE2, ENABLE MOD_REWRITE, SET LISTEN PORT 8081, AND CHECK HTTP STATUS CODE
    hosts: containers
    @@ -820,7 +763,7 @@ cat << 'EOF' > install_apache.yml
    - name: UPDATE AND UPGRADE APT PACKAGES
    ansible.builtin.apt:
    update_cache: true
    upgrade: 'full'
    - name: INSTALL APACHE2
    ansible.builtin.apt:
    @@ -913,15 +856,15 @@ Compared to playbooks in previous tasks, we can see that each task is run on eac
    We now want to verify that the `apache2` web server is active. Therefore, we add a task to the `install_apache.yml` playbook.

    ```bash=
    cat << 'EOF' > install_apache.yml
    ---
    - hosts: containers
    become: true
    tasks:
    - name: UPDATE AND UPGRADE APT PACKAGES
    ansible.builtin.apt:
    update_cache: true
    upgrade: 'full'
    - name: INSTALL APACHE2
    ansible.builtin.apt:
    @@ -967,17 +910,17 @@ ok: [web04]
    ok: [web03]
    TASK [debug] **********************************************************
    ok: [web01] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web02] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web03] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web04] => {
    "apache2_status.status.ActiveState": "active"
    }
    ```

    @@ -991,17 +934,17 @@ Here is a copy of the new tasks to add in the `install_apache.yml` file playbook
    - name: SET APACHE2 LISTEN ON PORT 8081
    ansible.builtin.lineinfile:
    dest: /etc/apache2/ports.conf
    regexp: '^Listen 80'
    line: 'Listen 8081'
    state: present
    notify:
    - RESTART APACHE2
    - name: SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081
    ansible.builtin.lineinfile:
    dest: /etc/apache2/sites-available/000-default.conf
    regexp: '^<VirtualHost \*:80>'
    line: '<VirtualHost *:8081>'
    state: present
    notify:
    - RESTART APACHE2
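
    Note the escaped asterisk in the second regexp: an unescaped `*` would act as a regex quantifier rather than match the literal character. A quick illustrative check with Python's `re` module:

    ```python=
    # Why the '\*' escape matters: a bare '*' quantifies the preceding character.
    import re

    line = '<VirtualHost *:80>'
    print(bool(re.match(r'^<VirtualHost \*:80>', line)))  # True: matches literal '*'
    print(bool(re.match(r'^<VirtualHost *:80>', line)))   # False: '*' repeats the space
    ```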
    @@ -1014,46 +957,46 @@ Once the playbook is run again, we can check the results through the `ansible` *

    1. Check the `/etc/apache2/ports.conf` file
    ```bash
    ansible containers -m command -a "grep ^Listen /etc/apache2/ports.conf" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    web01 | CHANGED | rc=0 >>
    Listen 8081
    web04 | CHANGED | rc=0 >>
    Listen 8081
    web02 | CHANGED | rc=0 >>
    Listen 8081
    web03 | CHANGED | rc=0 >>
    Listen 8081
    ```

    2. Check the `/etc/apache2/sites-available/000-default.conf` file
    ```bash
    ansible containers -m command -a "grep ^<VirtualHost /etc/apache2/sites-available/000-default.conf" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    web02 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web04 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web01 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web03 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    ```

    3. Finally, we can list the TCP sockets open in the listening state
    ```bash
    ansible containers -m command -a "ss -ltn" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    web02 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    @@ -1062,7 +1005,7 @@ Once the playbook is run again, we can check the results through the `ansible` *
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 4096 [::]:5355 [::]:*
    LISTEN 0 511 *:8081 *:*
    web04 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.54:53 0.0.0.0:*
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    @@ -1071,7 +1014,7 @@ Once the playbook is run again, we can check the results through the `ansible` *
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 511 *:8081 *:*
    LISTEN 0 4096 [::]:5355 [::]:*
    web01 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    @@ -1080,7 +1023,7 @@ Once the playbook is run again, we can check the results through the `ansible` *
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 511 *:8081 *:*
    LISTEN 0 4096 [::]:5355 [::]:*
    web03 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    @@ -1102,9 +1045,9 @@ In order to do so, we add a last task to the `install_apache.yml` playbook that
    ```yaml=
    - name: CHECK HTTP STATUS CODE
    ansible.builtin.uri:
    url: 'http://[{{ ansible_default_ipv6.address }}]:8081'
    status_code: 200
    when: "'containers' in group_names"
    delegate_to: localhost
    become: false
    ```
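
    For reference, the same check can be reproduced by hand from the Devnet VM with a few lines of Python. The address below is a placeholder taken from the sample output earlier in the lab, not a value to copy as-is:

    ```python=
    # Hedged sketch of the uri task: send one HTTP GET and verify the 200 status.
    from urllib.request import urlopen

    url = 'http://[2001:678:3fc:1c:216:3eff:fe6e:7a91]:8081'  # placeholder address
    with urlopen(url, timeout=5) as response:
        print(response.status)  # expect 200 when Apache listens on port 8081
    ```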
    @@ -1120,22 +1063,22 @@ If we run the playbook with success, we only get **ok** as results. Here is a sa

    ```bash=
    TASK [CHECK HTTP STATUS CODE] *******************************************
    ok: [web04 -> localhost]
    ok: [web01 -> localhost]
    ok: [web02 -> localhost]
    ok: [web03 -> localhost]
    ```

    If we run the same playbook with the very verbose option `-vvv` we get detailed results of each HTTP request. Here is a sample for one of the four containers tested:

    ```json=
    "last_modified": "Sun, 11 Feb 2024 10:07:56 GMT",
    "msg": "OK (10701 bytes)",
    "redirected": false,
    "server": "Apache/2.4.58 (Debian)",
    "status": 200,
    "url": "http://[2001:678:3fc:1c:216:3eff:fe6e:7a91]:8081",
    "vary": "Accept-Encoding"
    ```

    On line 5, the status value is actually 200.
    @@ -1144,25 +1087,4 @@ On line 5, the status value is actually 200.

    This lab is mainly a discovery of Ansible for students with no prior experience in the world of automation. It covers many Ansible modules with basic tasks for installing and configuring containers and services.

    If you've reached these lines, I hope you've enjoyed the trip :smiley:
  5. platu revised this gist Feb 19, 2024. 1 changed file with 250 additions and 172 deletions.
    422 changes: 250 additions & 172 deletions lab15+ansible+incus.md
    @@ -1,6 +1,63 @@
    ---
    tags: m1, Devnet, incus, linux, lab15
    ---
    # Devnet Lab 15 -- Using Ansible to automate the installation of Web servers on Incus system containers

    [toc]
    @@ -24,8 +81,8 @@ In this lab, you will first configure Ansible to communicate with a virtual mach
    ```

    ```bash=
    '/home/etudianttest/masters/debian-testing-amd64.qcow2' -> './debian-testing-amd64.qcow2'
    '/home/etudianttest/masters/debian-testing-amd64.qcow2_OVMF_VARS.fd' -> './debian-testing-amd64.qcow2_OVMF_VARS.fd'
    ```

    ```bash
    @@ -49,13 +106,13 @@ In this lab, you will first configure Ansible to communicate with a virtual mach
    ```

    ```bash=
    ~> Virtual machine filename : webserver-host.qcow2
    ~> RAM size : 4096MB
    ~> SPICE VDI port number : 59XX
    ~> telnet console port number : 23XX
    ~> MAC address : b8:ad:ca:fe:00:XX
    ~> Switch port interface : tapXX, access mode
    ~> IPv6 LL address : fe80::baad:caff:fefe:XX%vlanYYY
    ```

    3. Open an SSH connection to the Web server VM.
    @@ -76,12 +133,12 @@ The web server hosting VM is now ready for Ansible automation. First, we need to
    1. Create the `~/labs/lab15` directory, for example, and navigate to it

    ```bash
    mkdir -p ~/labs/lab15 && cd ~/labs/lab15
    ```

    2. Check that the **ansible** package is installed

    If the **ansible** package is not already installed on your Devnet VM, it's time to do so.

    ```bash
    apt show ansible | head -n 10
    @@ -93,23 +150,23 @@ The web server hosting VM is now ready for Ansible automation. First, we need to
    Priority: optional
    Section: universe/admin
    Origin: Ubuntu
    Maintainer: Ubuntu Developers <[email protected]>
    Original-Maintainer: Lee Garrett <[email protected]>
    Bugs: https://bugs.launchpad.net/ubuntu/+filebug
    Installed-Size: 263 MB
    Depends: ansible-core (>= 2.11.5-1~), python3:any, openssh-client | python3-paramiko (>= 2.6.0), python3-distutils, python3-dnspython, python3-httplib2, python3-jinja2, python3-netaddr, python3-yaml
    ```

    3. Create a new `ansible.cfg` file in the `lab15` directory from the shell prompt


    ```bash=
    cat << 'EOF' > ansible.cfg
    # config file for Lab 15 Web Servers management
    [defaults]
    # Use inventory/ folder files as source
    inventory=inventory/
    host_key_checking = False # Don't worry about RSA Fingerprints
    retry_files_enabled = False # Do not create them
    deprecation_warnings = False # Do not show warnings
    interpreter_python = /usr/bin/python3
    @@ -141,11 +198,11 @@ ssh etu@fe80::baad:caff:fefe:XXX%enp0s1
    ```

    ```bash=
    The authenticity of host 'fe80::baad:caff:fefe:XXX%enp0s1 (fe80::baad:caff:fefe:XXX%enp0s1)' can't be established.
    ED25519 key fingerprint is SHA256:yFLaZk+OfY7z7bHyHPXgjowRS4KMHjfoMQxracRdG9M.
    This key is not known by any other names.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
    Linux webserver-host 6.6.13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.13-1 (2024-01-20) x86_64
    ```

    @@ -175,16 +232,16 @@ webserver_user_pass: XXXXXXXXXX
    Create the inventory file `inventory/hosts.yml` with the IP address of your Web server VM.

    ```bash=
    cat << 'EOF' > inventory/hosts.yml
    ---
    vms:
    hosts:
    webserver:
    ansible_host: 'fe80::baad:caff:fefe:XXX%enp0s1'
    vars:
    ansible_ssh_user: etu
    ansible_ssh_pass: '{{ webserver_user_pass }}'
    ansible_become_pass: '{{ webserver_user_pass }}'
    all:
    children:
    @@ -203,9 +260,9 @@ Vault password:
    ```

    ```bash=
    webserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    ```

    @@ -221,7 +278,7 @@ In order to be able to launch system containers and configure Web services in th
    Create the `incus_init.yml` file and add the following information to it. Make sure you use proper YAML indentation. Every space and dash is significant. You may lose some formatting if you copy and paste.

    ```bash=
    cat << 'EOF' > incus_init.yml
    ---
    - name: INCUS INSTALLATION AND INITIALIZATION
    hosts: webserver
    @@ -235,7 +292,7 @@ cat << 'EOF' > incus_init.yml
    - name: ADD USER TO INCUS SYSTEM GROUPS
    ansible.builtin.user:
    name: '{{ ansible_ssh_user }}'
    groups:
    - incus
    - incus-admin
    @@ -249,18 +306,18 @@ cat << 'EOF' > incus_init.yml
    - name: INITIALIZE LXD
    ansible.builtin.shell: |
    set -o pipefail
    cat << EOT | incus admin init --preseed
    config:
    core.https_address: '[::]:8443'
    networks: []
    storage_pools:
    - config: {}
    description: ""
    name: default
    driver: dir
    profiles:
    - config: {}
    description: ""
    devices:
    eth0:
    name: eth0
    @@ -337,26 +394,26 @@ In part 2 step 2, we have created the `inventory/hosts.yml` file that defines al
    Now, we have to create a new inventory file named `inventory/lab.yml` which defines all the system container parameters. The purpose here is to be able to run Ansible playbooks within these containers.

    ```bash=
    cat << 'EOF' > inventory/lab.yml
    ---
    containers:
    hosts:
    web[01:04]:
    vars:
    ansible_ssh_user: webuser
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_become_pass: '{{ webuser_pass }}'
    EOF
    ```

    > Note: This inventory file is incomplete as it does not define the `ansible_host` variable for each container.

    ## Step 2: Add a new entry in Ansible vault for container access

    In the previous step, we defined a user named `webuser` whose password is stored in the `webuser_pass` variable.

    We must add the corresponding entry in the Ansible vault file named `$HOME/lab15_passwd.yml`.
    ```bash
    ansible-vault edit $HOME/lab15_passwd.yml
    @@ -373,38 +430,38 @@ webuser_pass: XXXXXXXXXX
    ## Step 3: Create an Ansible playbook to launch and configure access to containers
    ```bash=
    cat << 'EOF' > incus_launch.yml
    ---
    - name: LAUNCH INCUS CONTAINERS, SETUP USER ACCOUNT AND SSH SERVICE
    hosts: webserver
    tasks:
    - name: LAUNCH INCUS CONTAINERS
    ansible.builtin.shell: |
    set -o pipefail
    if ! incus ls -c n | grep -q "{{ item }}"
    then
    incus launch images:debian/trixie "{{ item }}"
    touch $HOME/"{{ item }}_launched"
    fi
    args:
    chdir: $HOME
    creates: "{{ item }}_launched"
    with_inventory_hostnames:
    - all:!webserver

    - name: SETUP USER ACCOUNT AND SSH SERVICE
    ansible.builtin.shell: |
    set -o pipefail
    incus exec "{{ item }}" -- bash -c "if ! grep -q webuser /etc/passwd; then adduser --quiet --gecos \"\" --disabled-password webuser; fi"
    incus exec "{{ item }}" -- bash -c "chpasswd <<<\"webuser:{{ webuser_pass }}\""
    incus exec "{{ item }}" -- bash -c "if ! id webuser | grep -qo sudo; then adduser --quiet webuser sudo; fi"
    incus exec "{{ item }}" -- apt update
    incus exec "{{ item }}" -- apt install -y openssh-server python3 python3-apt
    incus exec "{{ item }}" -- apt clean
    touch $HOME/"{{ item }}_configured"
    args:
    chdir: $HOME
    creates: "{{ item }}_configured"
    with_inventory_hostnames:
    - all:!webserver
    EOF
    @@ -431,16 +488,16 @@ TASK [Gathering Facts] *********************************************************
    ok: [webserver]
    TASK [LAUNCH INCUS CONTAINERS] ***********************************************************
    ok: [webserver] => (item=web01)
    ok: [webserver] => (item=web02)
    ok: [webserver] => (item=web03)
    ok: [webserver] => (item=web04)
    TASK [SETUP USER ACCOUNT AND SSH SERVICE] ************************************************
    ok: [webserver] => (item=web01)
    ok: [webserver] => (item=web02)
    ok: [webserver] => (item=web03)
    ok: [webserver] => (item=web04)
    PLAY RECAP *******************************************************************************
    webserver : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    @@ -458,13 +515,13 @@ Here is a short new playbook named `incus_fetch.yml` which will retrieve configu
    ```bash=
    cat << 'EOF' > incus_fetch.yml
    ---
    - name: BUILD CONTAINERS DYNAMIC INVENTORY
    hosts: webserver
    tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
    ansible.builtin.shell: incus --format yaml ls > container_config.yml
    args:
    chdir: $HOME
    creates: container_config.yml
    @@ -523,15 +580,15 @@ This is why we need to extract network addresses from the YAML configuration fil
    import yaml
    with open('container_config.yml','r') as yaml_file:
    containers = yaml.safe_load(yaml_file)
    # look for the container 'name' key and then the network 'addresses' for each
    # container
    for container in containers:
    print(f"Container: {container['name']}")
    for addresses in container['state']['network']['eth0']['addresses']:
    print(f" Addresses: {addresses}")
    ```
    ```bash
    @@ -540,21 +597,21 @@ This is why we need to extract network addresses from the YAML configuration fil
    ```bash=
    Container: web01
    Addresses: {'family': 'inet', 'address': '198.18.28.122', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fea4:95b7', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fea4:95b7', 'netmask': '64', 'scope': 'link'}
    Container: web02
    Addresses: {'family': 'inet', 'address': '198.18.28.70', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe6e:7a91', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe6e:7a91', 'netmask': '64', 'scope': 'link'}
    Container: web03
    Addresses: {'family': 'inet', 'address': '198.18.28.69', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe40:705', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe40:705', 'netmask': '64', 'scope': 'link'}
    Container: web04
    Addresses: {'family': 'inet', 'address': '198.18.28.193', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe1b:b041', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe1b:b041', 'netmask': '64', 'scope': 'link'}
    ```
    2. Second attempt: format YAML output
    @@ -565,17 +622,17 @@ This is why we need to extract network addresses from the YAML configuration fil
import yaml
with open('container_config.yml','r') as yaml_file:
    containers = yaml.safe_load(yaml_file)
print('containers:')
print('  hosts:')
for container in containers:
    print(f"    {container['name']}:")
    for addresses in container['state']['network']['eth0']['addresses']:
        # print IPv6 local link address
        if addresses['family'] == 'inet6' and addresses['scope'] == 'link':
            print(f"      ansible_host: '{addresses['address']}%enp0s1'")
    ```
    ```bash
    @@ -586,27 +643,27 @@ This is why we need to extract network addresses from the YAML configuration fil
containers:
  hosts:
    web01:
      ansible_host: 'fe80::216:3eff:fea4:95b7%enp0s1'
    web02:
      ansible_host: 'fe80::216:3eff:fe6e:7a91%enp0s1'
    web03:
      ansible_host: 'fe80::216:3eff:fe40:705%enp0s1'
    web04:
      ansible_host: 'fe80::216:3eff:fe1b:b041%enp0s1'
    ```
3. Run the Python script from the `incus_fetch.yml` Ansible playbook
Here's a new version of the playbook with an additional task that creates the `containers.yml` file in the `inventory/` directory.
```bash=
cat << 'EOF' > incus_fetch.yml
---
- name: BUILD CONTAINERS DYNAMIC INVENTORY
  hosts: webserver
  tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
      ansible.builtin.shell: incus --format yaml ls > container_config.yml
      args:
        chdir: $HOME
        creates: container_config.yml
    @@ -619,7 +676,7 @@ This is why we need to extract network addresses from the YAML configuration fil
    - name: ADD INCUS CONTAINERS ADDRESSES TO INVENTORY
      ansible.builtin.script:
        build_inventory.py > inventory/containers.yml
      delegate_to: localhost
EOF
    ```
    @@ -644,7 +701,7 @@ This is why we need to extract network addresses from the YAML configuration fil
    ok: [webserver]
    TASK [ADD INCUS CONTAINERS ADDRESSES TO INVENTORY] ********************
    changed: [webserver -> localhost]
    PLAY RECAP ************************************************************
    webserver : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    @@ -672,32 +729,32 @@ all:
    containers:
      hosts:
        web01:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fea4:95b7%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web02:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe6e:7a91%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web03:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe40:705%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web04:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe1b:b041%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
    ungrouped: {}
    vms:
      hosts:
        webserver:
          ansible_become_pass: '{{ webserver_user_pass }}'
          ansible_host: fe80::baad:caff:fefe:XXX%enp0s1
          ansible_ssh_pass: '{{ webserver_user_pass }}'
          ansible_ssh_user: etu
    ```
    @@ -711,39 +768,39 @@ Vault password:
    ```
    ```bash=
    web02 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    web04 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    web01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    web03 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    ```
    Another way to check SSH access to the containers is to use the **command** module instead of **ping**.
    ```bash
    ansible containers -m command -a "/bin/echo Hello, World!" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash=
    web01 | CHANGED | rc=0 >>
    Hello, World!
    web04 | CHANGED | rc=0 >>
    Hello, World!
    web02 | CHANGED | rc=0 >>
    Hello, World!
    web03 | CHANGED | rc=0 >>
    Hello, World!
    ```
    @@ -754,7 +811,7 @@ In this Part, you will create and automate the installation of Apache webserver
    ## Step 1: Create the `install_apache.yml` playbook
    ```bash=
    cat << 'EOF' > install_apache.yml
---
- name: INSTALL APACHE2, ENABLE MOD_REWRITE, SET LISTEN PORT 8081, AND CHECK HTTP STATUS CODE
  hosts: containers
    @@ -763,7 +820,7 @@ cat << 'EOF' > install_apache.yml
    - name: UPDATE AND UPGRADE APT PACKAGES
      ansible.builtin.apt:
        update_cache: true
        upgrade: 'full'
    - name: INSTALL APACHE2
      ansible.builtin.apt:
    @@ -856,15 +913,15 @@ Compared to playbooks in previous tasks, we can see that each task is run on eac
We now want to verify that the `apache2` web server is active. Therefore, we add a task to the `install_apache.yml` playbook.
    ```bash=
cat << 'EOF' > install_apache.yml
---
- hosts: containers
  become: true
  tasks:
    - name: UPDATE AND UPGRADE APT PACKAGES
      ansible.builtin.apt:
        update_cache: true
        upgrade: 'full'
    - name: INSTALL APACHE2
      ansible.builtin.apt:
    @@ -910,17 +967,17 @@ ok: [web04]
    ok: [web03]
    TASK [debug] **********************************************************
    ok: [web01] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web02] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web03] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web04] => {
    "apache2_status.status.ActiveState": "active"
    }
    ```
    @@ -934,17 +991,17 @@ Here is a copy of the new tasks to add in the `install_apache.yml` file playbook
    - name: SET APACHE2 LISTEN ON PORT 8081
      ansible.builtin.lineinfile:
        dest: /etc/apache2/ports.conf
        regexp: '^Listen 80'
        line: 'Listen 8081'
        state: present
      notify:
        - RESTART APACHE2
    - name: SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081
      ansible.builtin.lineinfile:
        dest: /etc/apache2/sites-available/000-default.conf
        regexp: '^<VirtualHost \*:80>'
        line: '<VirtualHost *:8081>'
        state: present
      notify:
        - RESTART APACHE2
    @@ -957,46 +1014,46 @@ Once the playbook is run again, we can check the results through the `ansible` *
    1. Check the `/etc/apache2/ports.conf` file
    ```bash
    ansible containers -m command -a "grep ^Listen /etc/apache2/ports.conf" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash=
    web01 | CHANGED | rc=0 >>
    Listen 8081
    web04 | CHANGED | rc=0 >>
    Listen 8081
    web02 | CHANGED | rc=0 >>
    Listen 8081
    web03 | CHANGED | rc=0 >>
    Listen 8081
    ```
    2. Check the `/etc/apache2/sites-available/000-default.conf` file
    ```bash
    ansible containers -m command -a "grep ^<VirtualHost /etc/apache2/sites-available/000-default.conf" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash=
    web02 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web04 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web01 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web03 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    ```
3. Finally, we can list the TCP sockets open in the listening state
    ```bash
    ansible containers -m command -a "ss -ltn" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash=
    web02 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    @@ -1005,7 +1062,7 @@ Once the playbook is run again, we can check the results through the `ansible` *
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 4096 [::]:5355 [::]:*
    LISTEN 0 511 *:8081 *:*
    web04 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.54:53 0.0.0.0:*
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    @@ -1014,7 +1071,7 @@ Once the playbook is run again, we can check the results through the `ansible` *
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 511 *:8081 *:*
    LISTEN 0 4096 [::]:5355 [::]:*
    web01 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    @@ -1023,7 +1080,7 @@ Once the playbook is run again, we can check the results through the `ansible` *
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 511 *:8081 *:*
    LISTEN 0 4096 [::]:5355 [::]:*
    web03 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    @@ -1045,9 +1102,9 @@ In order to do so, we add a last task to the `install_apache.yml` playbook that
    ```yaml=
- name: CHECK HTTP STATUS CODE
  ansible.builtin.uri:
    url: 'http://[{{ ansible_default_ipv6.address }}]:8081'
    status_code: 200
  when: "'containers' in group_names"
  delegate_to: localhost
  become: false
    ```
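For a quick manual check of the same condition from the Devnet VM shell, a plain `curl` call can print the HTTP status code alone. The global IPv6 address below is taken from the sample output earlier in this document; substitute one of your own container addresses.

```bash
# Expect 200 when the container's virtual host answers on port 8081
curl -s -o /dev/null -w '%{http_code}\n' 'http://[2001:678:3fc:1c:216:3eff:fe6e:7a91]:8081/'
```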
    @@ -1063,22 +1120,22 @@ If we run the playbook with success, we only get **ok** as results. Here is a sa
    ```bash=
    TASK [CHECK HTTP STATUS CODE] *******************************************
    ok: [web04 -> localhost]
    ok: [web01 -> localhost]
    ok: [web02 -> localhost]
    ok: [web03 -> localhost]
    ```
    If we run the same playbook with the very verbose option `-vvv` we get detailed results of each HTTP request. Here is a sample for one of the four containers tested:
    ```json=
    "last_modified": "Sun, 11 Feb 2024 10:07:56 GMT",
    "msg": "OK (10701 bytes)",
    "redirected": false,
    "server": "Apache/2.4.58 (Debian)",
    "status": 200,
    "url": "http://[2001:678:3fc:1c:216:3eff:fe6e:7a91]:8081",
    "vary": "Accept-Encoding"
    ```
    On line 5, the status value is actually 200.
    @@ -1087,4 +1144,25 @@ On line 5, the status value is actually 200.
    This lab is mainly a discovery of Ansible modules for students with no prior experience in the world of automation. It covers many Ansible modules with basic tasks for installing and configuring containers and services.
    If you've reached these lines, I hope you've enjoyed the trip :smiley:
  6. platu revised this gist Feb 17, 2024. 1 changed file with 63 additions and 60 deletions.
    123 changes: 63 additions & 60 deletions lab15+ansible+incus.md
    @@ -100,7 +100,7 @@ The web server hosting VM is now ready for Ansible automation. First, we need to
    Depends: ansible-core (>= 2.11.5-1~), python3:any, openssh-client | python3-paramiko (>= 2.6.0), python3-distutils, python3-dnspython, python3-httplib2, python3-jinja2, python3-netaddr, python3-yaml
    ```
    2. Create a new `ansible.cfg` file in the `lab15` directory from the shell prompt
    3. Create a new `ansible.cfg` file in the `lab15` directory from the shell prompt
    ```bash=
    @@ -123,7 +123,7 @@ The web server hosting VM is now ready for Ansible automation. First, we need to
    EOF
    ```

    3. Create the `inventory` directory
    4. Create the `inventory` directory

    ```bash
    mkdir ~/labs/lab15/inventory
    @@ -133,80 +133,83 @@ The web server hosting VM is now ready for Ansible automation. First, we need to

    We start with a shell test connection before to set the configuration for **ansible**.

    1. Shell SSH connection

    One more time, be sure to change tap interface number to match your resource allocation.
    One more time, be sure to change tap interface number to match your resource allocation.

    ```bash
    ssh etu@fe80::baad:caff:fefe:XXX%enp0s1
    ```
    ```bash
    ssh etu@fe80::baad:caff:fefe:XXX%enp0s1
    ```

    ```bash=
    The authenticity of host 'fe80::baad:caff:fefe:XXX%enp0s1 (fe80::baad:caff:fefe:XXX%enp0s1)' can't be established.
    ED25519 key fingerprint is SHA256:yFLaZk+OfY7z7bHyHPXgjowRS4KMHjfoMQxracRdG9M.
    This key is not known by any other names.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
    Linux webserver-host 6.6.13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.13-1 (2024-01-20) x86_64
    ```
    ```bash=
    The authenticity of host 'fe80::baad:caff:fefe:XXX%enp0s1 (fe80::baad:caff:fefe:XXX%enp0s1)' can't be established.
    ED25519 key fingerprint is SHA256:yFLaZk+OfY7z7bHyHPXgjowRS4KMHjfoMQxracRdG9M.
    This key is not known by any other names.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
    Linux webserver-host 6.6.13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.13-1 (2024-01-20) x86_64
    ```
    2. Create a new vault file called `lab15_passwd.yml` and enter the unique vault password which will be used for all users passwords to be stored.
    ## Step 3: Create a new vault file
    Create a new vault file called `lab15_passwd.yml` and enter the unique vault password which will be used for all users passwords to be stored.
    ```bash
    ansible-vault create $HOME/lab15_passwd.yml
    ```
    ```bash
    ansible-vault create $HOME/lab15_passwd.yml
    ```
    ```bash=
    New Vault password:
    Confirm New Vault password:
    ```
    ```bash=
    New Vault password:
    Confirm New Vault password:
    ```
    This will open the default editor which is defined by the `$EDITOR` environment variable.
    There we enter a variable name which will designate the password for Web server VM user account.
    This will open the default editor which is defined by the `$EDITOR` environment variable.
    There we enter a variable name which will designate the password for Web server VM user account.
    ```bash
    webserver_user_pass: XXXXXXXXXX
    ```
    ```bash
    webserver_user_pass: XXXXXXXXXX
    ```
    3. Create the inventory file `inventory/hosts.yml` with the IP address of your Web server VM.
    ## Step 4: Create a new inventory file
    ```bash=
    cat << 'EOF' > inventory/hosts.yml
    ---
    Create the inventory file `inventory/hosts.yml` with the IP address of your Web server VM.
    ```bash=
    cat << 'EOF' > inventory/hosts.yml
    ---
    vms:
    hosts:
    webserver:
    ansible_host: 'fe80::baad:caff:fefe:XXX%enp0s1'
    vars:
    ansible_ssh_user: etu
    ansible_ssh_pass: '{{ webserver_user_pass }}'
    ansible_become_pass: '{{ webserver_user_pass }}'
    all:
    children:
    vms:
    hosts:
    webserver:
    ansible_host: 'fe80::baad:caff:fefe:XXX%enp0s1'
    vars:
    ansible_ssh_user: etu
    ansible_ssh_pass: '{{ webserver_user_pass }}'
    ansible_become_pass: '{{ webserver_user_pass }}'
    all:
    children:
    vms:
    containers:
    EOF
    ```
    containers:
    EOF
    ```
    4. Verify Ansible communication with the Web server VM
    ## Step 5: Verify Ansible communication with the Web server VM
    Now, we are able to use the `ping` ansible module to commincate with the `webserver` entry defined in the inventory file.
    Now, we are able to use the `ping` ansible module to commincate with the `webserver` entry defined in the inventory file.
    ```bash
    ansible webserver -m ping --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash
    ansible webserver -m ping --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash=
    webserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    ```
    ```bash=
    webserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    ```
    As the ansible ping is successful, we can go on with container management setup within the Web server VM.
    As the ansible ping is successful, we can go on with container management setup within the Web server VM.
    # Part 3: Initialize Incus container management with Ansible
    @@ -773,7 +776,7 @@ cat << 'EOF' > install_apache.yml
    state: present
    notify: RESTART APACHE2
    - name: CLEAN UNWANTED OLDER APT STUFF
    - name: CLEAN UNWANTED APT OLDER STUFF
    ansible.builtin.apt:
    autoclean: true
    autoremove: true
  7. platu revised this gist Feb 14, 2024. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions lab15+ansible+incus.md
    Original file line number Diff line number Diff line change
    @@ -491,8 +491,8 @@ TASK [GET INCUS CONTAINERS CONFIGURATION] *****************************
    ok: [webserver]
    TASK [FETCH INCUS CONTAINERS CONFIGURATION] ***************************
    [WARNING]: sftp transfer mechanism failed on [fe80::baad:caff:fefe:1%enp0s1]. Use ANSIBLE_DEBUG=1 to see detailed information
    [WARNING]: scp transfer mechanism failed on [fe80::baad:caff:fefe:1%enp0s1]. Use ANSIBLE_DEBUG=1 to see detailed information
    [WARNING]: sftp transfer mechanism failed on [fe80::baad:caff:fefe:XXX%enp0s1]. Use ANSIBLE_DEBUG=1 to see detailed information
    [WARNING]: scp transfer mechanism failed on [fe80::baad:caff:fefe:XXX%enp0s1]. Use ANSIBLE_DEBUG=1 to see detailed information
    changed: [webserver]
    PLAY RECAP ************************************************************
    @@ -693,7 +693,7 @@ all:
    hosts:
    webserver:
    ansible_become_pass: '{{ webserver_user_pass }}'
    ansible_host: fe80::baad:caff:fefe:1%enp0s1
    ansible_host: fe80::baad:caff:fefe:XXX%enp0s1
    ansible_ssh_pass: '{{ webserver_user_pass }}'
    ansible_ssh_user: etu
    ```
  8. platu revised this gist Feb 14, 2024. 1 changed file with 74 additions and 66 deletions.
    140 changes: 74 additions & 66 deletions lab15+ansible+incus.md
    @@ -220,30 +220,32 @@ Create the `lxd_init.yml` file an add the following information to the file. Mak
    ```bash=
    cat << 'EOF' > incus_init.yml
    ---
    - hosts: webserver
    - name: INCUS INSTALLATION AND INITIALIZATION
    hosts: webserver
    tasks:
    - name: INSTALL INCUS PACKAGE
    apt:
    ansible.builtin.apt:
    name: incus
    state: latest
    update_cache: yes
    state: present
    update_cache: true
    become: true
    - name: ADD USER TO INCUS SYSTEM GROUPS
    user:
    ansible.builtin.user:
    name: '{{ ansible_ssh_user }}'
    groups:
    - incus
    - incus-admin
    append: yes
    append: true
    become: true
    - name: RESET SSH CONNECTION TO ALLOW USER CHANGES
    meta:
    ansible.builtin.meta:
    reset_connection
    - name: INITIALIZE LXD
    shell: |
    ansible.builtin.shell: |
    set -o pipefail
    cat << EOT | incus admin init --preseed
    config:
    core.https_address: '[::]:8443'
    @@ -370,10 +372,12 @@ webuser_pass: XXXXXXXXXX
    ```bash=
    cat << 'EOF' > incus_launch.yml
    ---
    - hosts: webserver
    - name: LAUNCH INCUS CONTAINERS, SETUP USER ACCOUNT AND SSH SERVICE
    hosts: webserver
    tasks:
    - name: LAUNCH INCUS CONTAINERS
    shell: |
    ansible.builtin.shell: |
    set -o pipefail
    if ! incus ls -c n | grep -q "{{ item }}"
    then
    incus launch images:debian/trixie "{{ item }}"
    @@ -385,8 +389,9 @@ cat << 'EOF' > incus_launch.yml
    with_inventory_hostnames:
    - all:!webserver
    - name: SETUP USER ACCOUNT, SSH SERVICE AND PACKAGE MANAGEMENT
    shell: |
    - name: SETUP USER ACCOUNT AND SSH SERVICE
    ansible.builtin.shell: |
    set -o pipefail
    incus exec "{{ item }}" -- bash -c "if ! grep -q webuser /etc/passwd; then adduser --quiet --gecos \"\" --disabled-password webuser; fi"
    incus exec "{{ item }}" -- bash -c "chpasswd <<<\"webuser:{{ webuser_pass }}\""
    incus exec "{{ item }}" -- bash -c "if ! id webuser | grep -qo sudo; then adduser --quiet webuser sudo; fi"
    @@ -452,16 +457,17 @@ Here is a short new playbook named `incus_fetch.yml` which will retrieve configu
    ```bash=
    cat << 'EOF' > incus_fetch.yml
    ---
    - hosts: webserver
    - name: BUILD CONTAINERS DYNAMIC INVENTORY
    hosts: webserver
    tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
    shell: incus --format yaml ls > container_config.yml
    ansible.builtin.shell: incus --format yaml ls > container_config.yml
    args:
    chdir: $HOME
    creates: container_config.yml
    - name: FETCH INCUS CONTAINERS CONFIGURATION
    fetch:
    ansible.builtin.fetch:
    src: $HOME/container_config.yml
    dest: container_config.yml
    flat: true
    @@ -593,25 +599,25 @@ This is why we need to extract network addresses from the YAML configuration fil
    ```bash=
    cat << 'EOF' > incus_fetch.yml
    ---
    - hosts: webserver
    tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
    shell: incus --format yaml ls > container_config.yml
    args:
    chdir: $HOME
    creates: container_config.yml
    - name: FETCH INCUS CONTAINERS CONFIGURATION
    fetch:
    src: $HOME/container_config.yml
    dest: container_config.yml
    flat: true
    - name: ADD INCUS CONTAINERS ADDRESSES TO INVENTORY
    script:
    build_inventory.py > inventory/containers.yml
    run_once: true
    delegate_to: localhost
    - name: BUILD CONTAINERS DYNAMIC INVENTORY
    hosts: webserver
    tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
    ansible.builtin.shell: incus --format yaml ls > container_config.yml
    args:
    chdir: $HOME
    creates: container_config.yml
    - name: FETCH INCUS CONTAINERS CONFIGURATION
    ansible.builtin.fetch:
    src: $HOME/container_config.yml
    dest: container_config.yml
    flat: true
    - name: ADD INCUS CONTAINERS ADDRESSES TO INVENTORY
    ansible.builtin.script:
    build_inventory.py > inventory/containers.yml
    delegate_to: localhost
    EOF
    ```
    @@ -747,33 +753,34 @@ In this Part, you will create and automate the installation of Apache webserver
    ```bash=
    cat << 'EOF' > install_apache.yml
    ---
    - hosts: containers
    - name: INSTALL APACHE2, ENABLE MOD_REWRITE, SET LISTEN PORT 8081, AND CHECK HTTP STATUS CODE
    hosts: containers
    become: true
    tasks:
    - name: UPDATE AND UPGRADE APT PACKAGES
    apt:
    update_cache: yes
    ansible.builtin.apt:
    update_cache: true
    upgrade: 'full'
    - name: INSTALL APACHE2
    apt:
    ansible.builtin.apt:
    name: apache2
    state: latest
    state: present
    - name: ENABLE APACHE2 MOD_REWRITE MODULE
    apache2_module:
    community.general.apache2_module:
    name: rewrite
    state: present
    notify: RESTART APACHE2
    - name: CLEAN UNWANTED OLDER APT STUFF
    apt:
    autoclean: yes
    autoremove: yes
    ansible.builtin.apt:
    autoclean: true
    autoremove: true
    handlers:
    - name: RESTART APACHE2
    service:
    ansible.builtin.service:
    name: apache2
    state: restarted
    EOF
    @@ -852,37 +859,38 @@ cat << 'EOF' > install_apache.yml
    become: true
    tasks:
    - name: UPDATE AND UPGRADE APT PACKAGES
    apt:
    update_cache: yes
    ansible.builtin.apt:
    update_cache: true
    upgrade: 'full'
    - name: INSTALL APACHE2
    apt:
    ansible.builtin.apt:
    name: apache2
    state: latest
    state: present
    - name: ENABLE APACHE2 MOD_REWRITE MODULE
    apache2_module:
    community.general.apache2_module:
    name: rewrite
    state: present
    notify: RESTART APACHE2
    - name: CLEAN UNWANTED OLDER STUFF
    apt:
    autoclean: yes
    autoremove: yes
    ansible.builtin.apt:
    autoclean: true
    autoremove: true
    - name: GET APACHE2 SERVICE STATUS
    systemd:
    ansible.builtin.systemd:
    name: apache2
    register: apache2_status
    - debug:
    - name: PRINT APACHE2 SERVICE STATUS
    ansible.builtin.debug:
    var: apache2_status.status.ActiveState
    handlers:
    - name: RESTART APACHE2
    service:
    ansible.builtin.service:
    name: apache2
    state: restarted
    ```
    @@ -921,7 +929,7 @@ Here is a copy of the new tasks to add in the `install_apache.yml` file playbook

    ```yaml=
    - name: SET APACHE2 LISTEN ON PORT 8081
    lineinfile:
    ansible.builtin.lineinfile:
    dest: /etc/apache2/ports.conf
    regexp: '^Listen 80'
    line: 'Listen 8081'
    @@ -930,13 +938,13 @@ Here is a copy of the new tasks to add in the `install_apache.yml` file playbook
    - RESTART APACHE2
    - name: SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081
    lineinfile:
    dest: /etc/apache2/sites-available/000-default.conf
    regexp: '^<VirtualHost \*:80>'
    line: '<VirtualHost *:8081>'
    state: present
    notify:
    - RESTART APACHE2
    ansible.builtin.lineinfile:
    dest: /etc/apache2/sites-available/000-default.conf
    regexp: '^<VirtualHost \*:80>'
    line: '<VirtualHost *:8081>'
    state: present
    notify:
    - RESTART APACHE2
    ```

    The **lineinfile** module is used to replace existing lines in the `/etc/apache2/ports.conf` and
    @@ -1033,10 +1041,10 @@ In order to do so, we add a last task to the `install_apache.yml` playbook that

    ```yaml=
    - name: CHECK HTTP STATUS CODE
    uri:
    ansible.builtin.uri:
    url: 'http://[{{ ansible_default_ipv6.address }}]:8081'
    status_code: 200
    when: "'containers' in group_names"
    when: "'containers' in group_names"
    delegate_to: localhost
    become: false
    ```
    @@ -1076,4 +1084,4 @@ On line 5, the status value is actually 200.

    This lab is mainly a discovery of Ansible modules for students with no prior experience in the world of automation. It covers many Ansible modules with basic tasks for installing and configuring containers and services.

    If you've reached these lines, I hope you've enjoyed the trip :smiley:
    If you've reached these lines, I hope you've enjoyed the trip :smiley:
  9. platu revised this gist Feb 12, 2024. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion lab15+ansible+incus.md
    @@ -1,5 +1,5 @@
    ---
    tags: m1, Devnet, cisco, linux, lab15
    tags: m1, Devnet, incus, linux, lab15
    ---
    # Devnet Lab 15 -- Using Ansible to automate the installation of Web servers on Incus system containers

  10. platu created this gist Feb 12, 2024.
    1,079 changes: 1,079 additions & 0 deletions lab15+ansible+incus.md
    @@ -0,0 +1,1079 @@
    ---
    tags: m1, Devnet, cisco, linux, lab15
    ---
    # Devnet Lab 15 -- Using Ansible to automate the installation of Web servers on Incus system containers

    [toc]

    ---

    # Background / Scenario

    In this lab, you will first configure Ansible to communicate with a virtual machine hosting web servers in Incus system containers. You will create playbooks that automate the process of installing Incus on the Web server VM and build a dynamic inventory with a Python script. You will also create a custom playbook that installs Apache with specific instructions on each container.

    ![Lab 2 VMS Topology](https://md.inetdoc.net/uploads/9aa8568b-0828-42e8-86fe-6c05c4fabec2.png)

    # Part 1: Launch the Web server VM

    1. Copy the master server image

At the Hypervisor console shell, we make a copy of the server image from the `$HOME/masters` directory and rename the files.

    ```bash
    cp $HOME/masters/debian-testing-amd64.{qcow2,qcow2_OVMF_VARS.fd} .
    ```

    ```bash=
    '/home/etudianttest/masters/debian-testing-amd64.qcow2' -> './debian-testing-amd64.qcow2'
    '/home/etudianttest/masters/debian-testing-amd64.qcow2_OVMF_VARS.fd' -> './debian-testing-amd64.qcow2_OVMF_VARS.fd'
    ```

    ```bash
    rename.ul debian-testing-amd64 webserver-host debian-testing-amd64.qcow2*
    ```

    ```bash
    ls webserver-host*
    ```

    ```bash=
    webserver-host.qcow2 webserver-host.qcow2_OVMF_VARS.fd
    ```

    2. Launch the Web server VM.

    Do not forget to change the `$tapnum` VM interface number.

    ```bash
    $HOME/masters/scripts/ovs-startup.sh webserver-host.qcow2 4096 $tapnum
    ```

    ```bash=
    ~> Virtual machine filename : webserver-host.qcow2
    ~> RAM size : 4096MB
    ~> SPICE VDI port number : 59XX
    ~> telnet console port number : 23XX
    ~> MAC address : b8:ad:ca:fe:00:XX
    ~> Switch port interface : tapXX, access mode
    ~> IPv6 LL address : fe80::baad:caff:fefe:XX%vlanYYY
    ```

3. Open an SSH connection to the Web server VM.

    Once again, do not forget to change the tap interface number at the right end of the link local IPv6 address.

    ```bash
    ssh etu@fe80::baad:caff:fefe:XX%vlanYYY
    ```


    # Part 2: Configure Ansible on the Devnet VM

    The web server hosting VM is now ready for Ansible automation. First, we need to configure Ansible and check that we have access to the web server VM from the Devnet VM via SSH.

    ## Step 1: Create the Ansible directory and configuration file

1. Make the `~/labs/lab15` directory, for example, and navigate to this folder

    ```bash
    mkdir -p ~/labs/lab15 && cd ~/labs/lab15
    ```

2. Check that the **ansible** package is installed

    If the **ansible** package is not already installed on your Devnet VM, it's time to do so.
    ```bash
    apt show ansible | head -n 10
    ```
    ```bash=
    Package: ansible
    Version: 7.7.0+dfsg-3
    Priority: optional
    Section: universe/admin
    Origin: Ubuntu
    Maintainer: Ubuntu Developers <[email protected]>
    Original-Maintainer: Lee Garrett <[email protected]>
    Bugs: https://bugs.launchpad.net/ubuntu/+filebug
    Installed-Size: 263 MB
    Depends: ansible-core (>= 2.11.5-1~), python3:any, openssh-client | python3-paramiko (>= 2.6.0), python3-distutils, python3-dnspython, python3-httplib2, python3-jinja2, python3-netaddr, python3-yaml
    ```
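If the package turns out to be missing, a plain APT installation should be all that is needed, assuming sudo rights on the Devnet VM:

```bash
sudo apt update && sudo apt install ansible
```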
3. Create a new `ansible.cfg` file in the `lab15` directory from the shell prompt
    ```bash=
    cat << 'EOF' > ansible.cfg
    # config file for Lab 15 Web Servers management
    [defaults]
    # Use inventory/ folder files as source
    inventory=inventory/
    host_key_checking = False # Don't worry about RSA Fingerprints
    retry_files_enabled = False # Do not create them
    deprecation_warnings = False # Do not show warnings
    interpreter_python = /usr/bin/python3
    [inventory]
    enable_plugins = auto, host_list, yaml, ini, toml, script
    [persistent_connection]
    command_timeout=100
    connect_timeout=100
    connect_retry_timeout=100
    ssh_type = libssh
    EOF
    ```
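To check that this local configuration file is the one Ansible actually picks up, you can ask for the version banner from inside the `lab15` directory. The `config file =` line should point to our new `ansible.cfg`:

```bash
ansible --version | head -n 3
```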

4. Create the `inventory` directory

    ```bash
    mkdir ~/labs/lab15/inventory
    ```

    ## Step 2: Check SSH access from Devnet VM to Web server VM

We start with a shell test connection before setting up the **ansible** configuration.

    1. Shell SSH connection

One more time, be sure to change the tap interface number to match your resource allocation.

    ```bash
    ssh etu@fe80::baad:caff:fefe:XXX%enp0s1
    ```

    ```bash=
    The authenticity of host 'fe80::baad:caff:fefe:XXX%enp0s1 (fe80::baad:caff:fefe:XXX%enp0s1)' can't be established.
    ED25519 key fingerprint is SHA256:yFLaZk+OfY7z7bHyHPXgjowRS4KMHjfoMQxracRdG9M.
    This key is not known by any other names.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
    Linux webserver-host 6.6.13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.13-1 (2024-01-20) x86_64
    ```
2. Create a new vault file called `lab15_passwd.yml` and enter the single vault password that will protect all the user passwords to be stored.
    ```bash
    ansible-vault create $HOME/lab15_passwd.yml
    ```
    ```bash=
    New Vault password:
    Confirm New Vault password:
    ```
This will open the default editor, which is defined by the `$EDITOR` environment variable.
There we enter the variable name that will hold the password of the Web server VM user account.
    ```bash
    webserver_user_pass: XXXXXXXXXX
    ```
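If you later want to re-read the encrypted file without opening the editor, the `view` subcommand decrypts it to standard output:

```bash
ansible-vault view $HOME/lab15_passwd.yml
```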
    3. Create the inventory file `inventory/hosts.yml` with the IP address of your Web server VM.
    ```bash=
    cat << 'EOF' > inventory/hosts.yml
---
vms:
  hosts:
    webserver:
      ansible_host: 'fe80::baad:caff:fefe:XXX%enp0s1'
  vars:
    ansible_ssh_user: etu
    ansible_ssh_pass: '{{ webserver_user_pass }}'
    ansible_become_pass: '{{ webserver_user_pass }}'
all:
  children:
    vms:
    containers:
    EOF
    ```
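Before testing the connection, we can ask Ansible to display the inventory tree built from the `inventory/` directory. The `webserver` host should appear under the `vms` group, while the `containers` group is declared but still empty at this stage:

```bash
ansible-inventory --graph
```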
    4. Verify Ansible communication with the Web server VM
Now, we are able to use the `ping` ansible module to communicate with the `webserver` entry defined in the inventory file.
    ```bash
    ansible webserver -m ping --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash=
    webserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    ```
    As the ansible ping is successful, we can go on with container management setup within the Web server VM.
    # Part 3: Initialize Incus container management with Ansible
    In order to be able to launch system containers and configure Web services in these containers, we first must initialize the Incus manager with an Ansible playbook.
    ## Step 1: Create the `incus_init.yml` playbook
Create the `incus_init.yml` file and add the following information to the file. Make sure you use the proper YAML indentation. Every space and dash is significant. You may lose some formatting if you copy and paste.
    ```bash=
    cat << 'EOF' > incus_init.yml
---
- hosts: webserver
  tasks:
    - name: INSTALL INCUS PACKAGE
      apt:
        name: incus
        state: latest
        update_cache: yes
      become: true
    - name: ADD USER TO INCUS SYSTEM GROUPS
      user:
        name: '{{ ansible_ssh_user }}'
        groups:
          - incus
          - incus-admin
        append: yes
      become: true
    - name: RESET SSH CONNECTION TO ALLOW USER CHANGES
      meta:
        reset_connection
    - name: INITIALIZE LXD
      shell: |
        cat << EOT | incus admin init --preseed
        config:
          core.https_address: '[::]:8443'
        networks: []
        storage_pools:
        - config: {}
          description: ""
          name: default
          driver: dir
        profiles:
        - config: {}
          description: ""
          devices:
            eth0:
              name: eth0
              nictype: macvlan
              parent: enp0s1
              type: nic
            root:
              path: /
              pool: default
              type: disk
          name: default
        projects: []
        cluster: null
        EOT
        touch $HOME/incus_init_done
      args:
        chdir: $HOME
        creates: incus_init_done
    EOF
    ```
    The `incus_init.yml` playbook contains four tasks:
1. Install the **incus** package if necessary
2. Add the normal user **etu** to the **incus** and **incus-admin** system groups.
   Therefore, we use the `{{ ansible_ssh_user }}` variable
3. Reset the SSH connection between the Devnet VM and the webserver VM to allow the new group assignments to take effect
4. Initialize the **incus** container manager from a preseeded setup YAML
The **incus** initialization instructions fall into two main categories:
1. Storage
   A default storage pool is defined based on a **btrfs** subvolume
2. Networking
   We choose to use **macvlan** to connect any number of containers.
   This means that containers will be connected to the exact same VLAN as the Web server VM.
   IPv4 and IPv6 addressing will follow the same rules as for the Web server VM. In our context, both the Devnet and Web server VMs are connected to the hypervisor default VLAN, with DHCP addressing for IPv4 and SLAAC for IPv6.
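Before running the playbook, a syntax check costs nothing and catches most indentation mistakes:

```bash
ansible-playbook incus_init.yml --syntax-check
```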
    ## Step 2: Run the `incus_init.yml` playbook
    ```bash
    ansible-playbook incus_init.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash=
    PLAY [webserver] **************************************************
    TASK [Gathering Facts] ********************************************
    ok: [webserver]
    TASK [INSTALL INCUS PACKAGE] **************************************
    ok: [webserver]
    TASK [ADD USER TO INCUS SYSTEM GROUPS] ****************************
    ok: [webserver]
    TASK [RESET SSH CONNECTION TO ALLOW USER CHANGES] *****************
    TASK [INITIALIZE LXD] *********************************************
    changed: [webserver]
    PLAY RECAP ********************************************************
    webserver : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    ```
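As an optional check, the resulting profile can be displayed with an ad-hoc command. It should show the **macvlan** `eth0` device and the `default` storage pool declared in the preseed:

```bash
ansible webserver -m command -a "incus profile show default" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
```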
    # Part 4: Instantiate containers with Ansible
    In this part, we start managing web services on demand with system container instantiation based on an Ansible playbook.
    ## Step 1: Create a lab inventory template
    In part 2 step 2, we have created the `inventory/hosts.yml` file that defines all the necessary parameters to run Ansible playbooks on the web servers hosting VM.
    Now, we have to create a new inventory file named `inventory/lab.yml` which defines all the system containers parameters. The purpose here is to be able to run Ansible playbooks within these containers.
    ```bash=
    cat << 'EOF' > inventory/lab.yml
    ---
containers:
  hosts:
    web[01:04]:
  vars:
    ansible_ssh_user: webuser
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_become_pass: '{{ webuser_pass }}'
    EOF
    ```
> Note: This inventory file is incomplete as it does not define the `ansible_host` variable for each container.
    ## Step 2: Add a new entry in Ansible vault for container access
In the previous step, we defined a user named `webuser` with its password stored in the `webuser_pass` variable.
We must add the corresponding entry to the Ansible vault file named `$HOME/lab15_passwd.yml`.
    ```bash
    ansible-vault edit $HOME/lab15_passwd.yml
Vault password:
    ```

    There we enter a variable name which will designate the password for each container user account.

    ```bash
    webuser_pass: XXXXXXXXXX
    ```

    ## Step 3: Create an Ansible playbook to launch and configure access to containers

    ```bash=
    cat << 'EOF' > incus_launch.yml
---
- hosts: webserver
  tasks:
    - name: LAUNCH INCUS CONTAINERS
      shell: |
        if ! incus ls -c n | grep -q "{{ item }}"
        then
          incus launch images:debian/trixie "{{ item }}"
          touch $HOME/"{{ item }}_launched"
        fi
      args:
        chdir: $HOME
        creates: "{{ item }}_launched"
      with_inventory_hostnames:
        - all:!webserver
    - name: SETUP USER ACCOUNT, SSH SERVICE AND PACKAGE MANAGEMENT
      shell: |
        incus exec "{{ item }}" -- bash -c "if ! grep -q webuser /etc/passwd; then adduser --quiet --gecos \"\" --disabled-password webuser; fi"
        incus exec "{{ item }}" -- bash -c "chpasswd <<<\"webuser:{{ webuser_pass }}\""
        incus exec "{{ item }}" -- bash -c "if ! id webuser | grep -qo sudo; then adduser --quiet webuser sudo; fi"
        incus exec "{{ item }}" -- apt update
        incus exec "{{ item }}" -- apt install -y openssh-server python3 python3-apt
        incus exec "{{ item }}" -- apt clean
        touch $HOME/"{{ item }}_configured"
      args:
        chdir: $HOME
        creates: "{{ item }}_configured"
      with_inventory_hostnames:
        - all:!webserver
    EOF
    ```

    This playbook has two different tasks.

    1. We first have to create and launch containers only if they are not already there and running. We are using the **shell** Ansible module to run commands on the Web server VM.
    2. Once the containers are running, we create a new user account and install the `openssh-server` package in each container.

For both of these tasks, we use the `creates` argument to ensure the commands are run only once. When all the shell commands have been executed, we finish with a `touch` command that creates an empty file, which is then used as a marker showing that the job has already been done.
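The host pattern `all:!webserver` given to `with_inventory_hostnames` selects every inventory hostname except the Web server VM itself. You can preview its expansion without connecting to anything; it should list the `web01` to `web04` names declared in `inventory/lab.yml`:

```bash
ansible 'all:!webserver' --list-hosts
```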

    ## Step 4: Run the `incus_launch.yml` playbook

    ```bash
    ansible-playbook incus_launch.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    PLAY [webserver] *************************************************************************
    TASK [Gathering Facts] *******************************************************************
    ok: [webserver]
    TASK [LAUNCH INCUS CONTAINERS] ***********************************************************
    ok: [webserver] => (item=web01)
    ok: [webserver] => (item=web02)
    ok: [webserver] => (item=web03)
    ok: [webserver] => (item=web04)
    TASK [SETUP USER ACCOUNT AND SSH SERVICE] ************************************************
    ok: [webserver] => (item=web01)
    ok: [webserver] => (item=web02)
    ok: [webserver] => (item=web03)
    ok: [webserver] => (item=web04)
    PLAY RECAP *******************************************************************************
    webserver : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    ```
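Thanks to the `creates` marker files, running the playbook a second time only reports **ok** for both tasks. To confirm that the containers are actually up, we can list them on the Web server VM with an ad-hoc command:

```bash
ansible webserver -m command -a "incus ls" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
```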

    # Part 5: Complete a dynamic inventory

    Now that the containers are launched, it is time to get their network addresses to build a new inventory file which will allow Ansible to run playbooks in each of these containers.

We switch here to Python development to build the new YAML inventory file based on information given by **Incus** on the Web server VM.

    ## Step 1: Fetch containers configuration

    Here is a short new playbook named `incus_fetch.yml` which will retrieve configuration from the Web server VM to the Devnet VM.


    ```bash=
    cat << 'EOF' > incus_fetch.yml
---
- hosts: webserver
  tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
      shell: incus --format yaml ls > container_config.yml
      args:
        chdir: $HOME
        creates: container_config.yml
    - name: FETCH INCUS CONTAINERS CONFIGURATION
      fetch:
        src: $HOME/container_config.yml
        dest: container_config.yml
        flat: true
    EOF
    ```

    When we run this playbook, we get a copy of the **Incus** containers configuration from the Web server VM.

    ```bash
    ansible-playbook incus_fetch.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    PLAY [webserver] ******************************************************
    TASK [Gathering Facts] ************************************************
    ok: [webserver]
    TASK [GET INCUS CONTAINERS CONFIGURATION] *****************************
    ok: [webserver]
    TASK [FETCH INCUS CONTAINERS CONFIGURATION] ***************************
    [WARNING]: sftp transfer mechanism failed on [fe80::baad:caff:fefe:1%enp0s1]. Use ANSIBLE_DEBUG=1 to see detailed information
    [WARNING]: scp transfer mechanism failed on [fe80::baad:caff:fefe:1%enp0s1]. Use ANSIBLE_DEBUG=1 to see detailed information
    changed: [webserver]
    PLAY RECAP ************************************************************
    webserver : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    ```

    From the Devnet VM shell, we can check the presence of the `container_config.yml` file.

    ```bash
    ls -lh container_config.yml
    -rw-rw-r-- 1 etu etu 18K févr. 11 09:23 container_config.yml
    ```
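Before writing the parsing script, a one-liner is enough to confirm that the file is valid YAML and that each list entry carries a `name` key, which is the structure the script below relies on:

```bash
python3 -c "import yaml; print([c['name'] for c in yaml.safe_load(open('container_config.yml'))])"
```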

    ## Step 2: Build a Python script for containers inventory

The main purpose here is to build a dynamic inventory with the containers' actual network addresses. With our **macvlan** network setup and random layer 2 MAC addresses, containers are addressed completely dynamically.

    This is why we need to extract network addresses from the YAML configuration file and build a new inventory file.

    1. First attempt: parse YAML file produced by `incus ls` command.

    ```python=
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import yaml
with open('container_config.yml','r') as yaml_file:
    containers = yaml.safe_load(yaml_file)
# look for the container 'name' key and then the network 'addresses' for each
# container
for container in containers:
    print(f"Container: {container['name']}")
    for addresses in container['state']['network']['eth0']['addresses']:
        print(f"  Addresses: {addresses}")
    ```
    ```bash
    /bin/python3 /home/etu/labs/lab15/build_inventory.py
    ```
    ```bash=
    Container: web01
    Addresses: {'family': 'inet', 'address': '198.18.28.122', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fea4:95b7', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fea4:95b7', 'netmask': '64', 'scope': 'link'}
    Container: web02
    Addresses: {'family': 'inet', 'address': '198.18.28.70', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe6e:7a91', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe6e:7a91', 'netmask': '64', 'scope': 'link'}
    Container: web03
    Addresses: {'family': 'inet', 'address': '198.18.28.69', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe40:705', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe40:705', 'netmask': '64', 'scope': 'link'}
    Container: web04
    Addresses: {'family': 'inet', 'address': '198.18.28.193', 'netmask': '23', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': '2001:678:3fc:1c:216:3eff:fe1b:b041', 'netmask': '64', 'scope': 'global'}
    Addresses: {'family': 'inet6', 'address': 'fe80::216:3eff:fe1b:b041', 'netmask': '64', 'scope': 'link'}
    ```
    2. Second attempt: format YAML output
    ```python=
    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
import yaml
with open('container_config.yml','r') as yaml_file:
    containers = yaml.safe_load(yaml_file)
print('containers:')
print('  hosts:')
for container in containers:
    print(f"    {container['name']}:")
    for addresses in container['state']['network']['eth0']['addresses']:
        # print IPv6 local link address
        if addresses['family'] == 'inet6' and addresses['scope'] == 'link':
            print(f"      ansible_host: '{addresses['address']}%enp0s1'")
    ```
    ```bash
    /bin/python3 /home/etu/labs/lab15/build_inventory.py
    ```
    ```yaml=
containers:
  hosts:
    web01:
      ansible_host: 'fe80::216:3eff:fea4:95b7%enp0s1'
    web02:
      ansible_host: 'fe80::216:3eff:fe6e:7a91%enp0s1'
    web03:
      ansible_host: 'fe80::216:3eff:fe40:705%enp0s1'
    web04:
      ansible_host: 'fe80::216:3eff:fe1b:b041%enp0s1'
    ```
3. Run the Python script from the `incus_fetch.yml` Ansible playbook
    Here's a new version of the playbook with an additional task that creates the `containers.yml` file in the `inventory/` directory.
    ```bash=
    cat << 'EOF' > incus_fetch.yml
---
- hosts: webserver
  tasks:
    - name: GET INCUS CONTAINERS CONFIGURATION
      shell: incus --format yaml ls > container_config.yml
      args:
        chdir: $HOME
        creates: container_config.yml
    - name: FETCH INCUS CONTAINERS CONFIGURATION
      fetch:
        src: $HOME/container_config.yml
        dest: container_config.yml
        flat: true
    - name: ADD INCUS CONTAINERS ADDRESSES TO INVENTORY
      script:
        build_inventory.py > inventory/containers.yml
      run_once: true
      delegate_to: localhost
    EOF
    ```
When we run this new version, the `containers.yml` file is added to the `inventory/` directory.
    ```bash
    ansible-playbook incus_fetch.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```
    ```bash=
    PLAY [webserver] ******************************************************
    TASK [Gathering Facts] ************************************************
    ok: [webserver]
    TASK [GET INCUS CONTAINERS CONFIGURATION] *****************************
    ok: [webserver]
    TASK [FETCH INCUS CONTAINERS CONFIGURATION] ***************************
    ok: [webserver]
    TASK [ADD INCUS CONTAINERS ADDRESSES TO INVENTORY] ********************
    changed: [webserver -> localhost]
    PLAY RECAP ************************************************************
    webserver : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    ```
    ```bash
    ls -lh inventory/
    total 12K
-rw-rw-r-- 1 etu etu 244 févr. 11 10:13 containers.yml
    -rw-rw-r-- 1 etu etu 265 févr. 9 17:05 hosts.yml
    -rw-rw-r-- 1 etu etu 173 févr. 10 08:32 lab.yml
    ```
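
As an aside, Ansible can also register hosts in memory with the **add_host** module instead of generating an inventory file. The sketch below is an illustration only, not part of the lab: it assumes the fetched `container_config.yml` keeps the structure shown earlier and that the host bridge interface is still named `enp0s1`.

```yaml=
- name: ADD CONTAINERS TO IN-MEMORY INVENTORY (sketch, not used in this lab)
  add_host:
    name: "{{ item.name }}"
    groups: containers
    # keep only the IPv6 link-local address, as build_inventory.py does
    ansible_host: >-
      {{ (item.state.network.eth0.addresses
          | selectattr('family', 'equalto', 'inet6')
          | selectattr('scope', 'equalto', 'link')
          | first).address }}%enp0s1
  loop: "{{ lookup('file', 'container_config.yml') | from_yaml }}"
```

Hosts added this way only live for the duration of the playbook run, whereas the generated `containers.yml` file persists between runs.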
    ## Step 3: Check Ansible inventory
    We are now able to run the `ansible-inventory` command to check that the Web server VM and its containers are properly addressed.
    ```bash
    ansible-inventory --yaml --list
    ```

```yaml=
all:
  children:
    containers:
      hosts:
        web01:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fea4:95b7%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web02:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe6e:7a91%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web03:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe40:705%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
        web04:
          ansible_become_pass: '{{ webuser_pass }}'
          ansible_host: fe80::216:3eff:fe1b:b041%enp0s1
          ansible_ssh_pass: '{{ webuser_pass }}'
          ansible_ssh_user: webuser
    ungrouped: {}
    vms:
      hosts:
        webserver:
          ansible_become_pass: '{{ webserver_user_pass }}'
          ansible_host: fe80::baad:caff:fefe:1%enp0s1
          ansible_ssh_pass: '{{ webserver_user_pass }}'
          ansible_ssh_user: etu
```
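
Note that the connection variables appear under every container: `ansible-inventory` merges group and host variables into each host entry. If they were not already factored at group level, they could be declared once for the whole `containers` group; a minimal sketch, assuming a group file such as `inventory/lab.yml`:

```yaml=
containers:
  vars:
    ansible_ssh_user: webuser
    ansible_ssh_pass: '{{ webuser_pass }}'
    ansible_become_pass: '{{ webuser_pass }}'
```

This way, the `build_inventory.py` script only has to provide the `ansible_host` entry for each container.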

    ## Step 4: Check Ansible SSH access to the containers

We are now also able to run the `ansible` command with its **ping** module to check access to all the containers. Note that this module does not send ICMP echo requests: it logs in over SSH and checks that a usable Python interpreter is present on the target, answering `pong` on success.

    ```bash
    ansible containers -m ping --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

```bash=
web02 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web04 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web03 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```

    Another way to check SSH access to the containers is to use the **command** module instead of **ping**.

    ```bash
    ansible containers -m command -a "/bin/echo Hello, World!" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    web01 | CHANGED | rc=0 >>
    Hello, World!
    web04 | CHANGED | rc=0 >>
    Hello, World!
    web02 | CHANGED | rc=0 >>
    Hello, World!
    web03 | CHANGED | rc=0 >>
    Hello, World!
    ```

    # Part 6: Create an Ansible playbook to automate Web service installation

In this Part, you will create a playbook that automates the installation of the Apache web server software on the containers.

    ## Step 1: Create the `install_apache.yml` playbook

```bash=
cat << 'EOF' > install_apache.yml
---
- hosts: containers
  become: true
  tasks:
    - name: UPDATE AND UPGRADE APT PACKAGES
      apt:
        update_cache: yes
        upgrade: 'full'
    - name: INSTALL APACHE2
      apt:
        name: apache2
        state: latest
    - name: ENABLE APACHE2 MOD_REWRITE MODULE
      apache2_module:
        name: rewrite
        state: present
      notify: RESTART APACHE2
    - name: CLEAN UNWANTED OLDER APT STUFF
      apt:
        autoclean: yes
        autoremove: yes
  handlers:
    - name: RESTART APACHE2
      service:
        name: apache2
        state: restarted
EOF
```

    Explanation of some of the significant lines in your playbook:

* hosts: `containers` - This references the `containers` group of devices in your inventory. The playbook will be run on all the devices in this group.
* become: true - The become keyword activates privilege escalation (`sudo`), which allows tasks such as installing packages.
* apt: - The apt module is used to manage packages and application installations on Debian-based Linux distributions.
* handlers: - Handlers are similar to tasks but are not run automatically; they run only when notified by a task. Notice that the task ENABLE APACHE2 MOD_REWRITE MODULE notifies the handler RESTART APACHE2, a mechanism illustrated in the sketch below.
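
A handler runs once at the end of the play, no matter how many tasks notified it. The following standalone sketch (illustration only, not part of the lab) shows the mechanism: both dummy tasks notify the same handler, yet Apache would be restarted a single time.

```yaml=
---
- hosts: containers
  become: true
  tasks:
    - name: DUMMY CHANGE 1 (illustration only)
      command: /bin/true
      changed_when: true    # force a 'changed' result so the notification fires
      notify: RESTART APACHE2
    - name: DUMMY CHANGE 2 (illustration only)
      command: /bin/true
      changed_when: true
      notify: RESTART APACHE2
  handlers:
    - name: RESTART APACHE2    # runs once, after all tasks have completed
      service:
        name: apache2
        state: restarted
```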

    ## Step 2: Run the `install_apache.yml` playbook

    ```bash
    ansible-playbook install_apache.yml --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    PLAY [containers] *****************************************************
    TASK [Gathering Facts] ************************************************
    ok: [web01]
    ok: [web03]
    ok: [web02]
    ok: [web04]
    TASK [UPDATE AND UPGRADE APT PACKAGES] ********************************
    changed: [web02]
    changed: [web04]
    changed: [web03]
    changed: [web01]
    TASK [INSTALL APACHE2] ************************************************
    changed: [web01]
    changed: [web03]
    changed: [web02]
    changed: [web04]
    TASK [ENABLE APACHE2 MOD_REWRITE MODULE] ******************************
    changed: [web02]
    changed: [web03]
    changed: [web01]
    changed: [web04]
    TASK [CLEAN UNWANTED OLDER APT STUFF] *************************************
    ok: [web02]
    ok: [web01]
    ok: [web04]
    ok: [web03]
    RUNNING HANDLER [RESTART APACHE2] *************************************
    changed: [web04]
    changed: [web03]
    changed: [web02]
    changed: [web01]
    PLAY RECAP ************************************************************
    web01 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    web02 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    web03 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    web04 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    ```

Compared with the playbooks of the previous parts, we can see that each task is run on each of the four **Incus** system containers we added to the inventory.
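
By default, Ansible runs each task on all hosts of the group in parallel, five forks at a time. If we wanted a rolling update instead, the `serial` play keyword would process the containers in batches; a minimal sketch under that assumption:

```yaml=
---
- hosts: containers
  become: true
  serial: 2   # complete the whole play on two containers before moving on
  tasks:
    - name: UPDATE APT CACHE
      apt:
        update_cache: yes
```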

    ## Step 3: Add a task to verify Apache2 service status

We now want to verify that the `apache2` web server is active. Therefore, we add a task to the `install_apache.yml` playbook.

```bash=
cat << 'EOF' > install_apache.yml
---
- hosts: containers
  become: true
  tasks:
    - name: UPDATE AND UPGRADE APT PACKAGES
      apt:
        update_cache: yes
        upgrade: 'full'
    - name: INSTALL APACHE2
      apt:
        name: apache2
        state: latest
    - name: ENABLE APACHE2 MOD_REWRITE MODULE
      apache2_module:
        name: rewrite
        state: present
      notify: RESTART APACHE2
    - name: CLEAN UNWANTED OLDER APT STUFF
      apt:
        autoclean: yes
        autoremove: yes
    - name: GET APACHE2 SERVICE STATUS
      systemd:
        name: apache2
      register: apache2_status
    - debug:
        var: apache2_status.status.ActiveState
  handlers:
    - name: RESTART APACHE2
      service:
        name: apache2
        state: restarted
EOF
```

Here we introduce the **systemd** module and the ability to **debug** within a playbook by displaying the value of a variable after **registering** the status of a **systemd** service.
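
Displaying the state is informative, but it does not make the play fail when the service is down. If we wanted the run to stop on a misbehaving container, an **assert** task could follow the **debug** task; this is a sketch, not part of the lab instructions:

```yaml=
- name: FAIL IF APACHE2 IS NOT ACTIVE (sketch)
  assert:
    that: apache2_status.status.ActiveState == 'active'
    fail_msg: "apache2 is not running on {{ inventory_hostname }}"
    success_msg: "apache2 is active"
```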

When running the playbook, the relevant part of the output gives:

    ```bash=
    TASK [GET APACHE2 SERVICE STATUS] *************************************
    ok: [web02]
    ok: [web01]
    ok: [web04]
    ok: [web03]
    TASK [debug] **********************************************************
    ok: [web01] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web02] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web03] => {
    "apache2_status.status.ActiveState": "active"
    }
    ok: [web04] => {
    "apache2_status.status.ActiveState": "active"
    }
    ```

    ## Step 4: Reconfigure Apache server to listen on port 8081

    In this step we add two tasks using the **lineinfile** Ansible module to edit configuration files.

Here is a copy of the new tasks to add to the `install_apache.yml` playbook.

```yaml=
- name: SET APACHE2 LISTEN ON PORT 8081
  lineinfile:
    dest: /etc/apache2/ports.conf
    regexp: '^Listen 80'
    line: 'Listen 8081'
    state: present
  notify:
    - RESTART APACHE2
- name: SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081
  lineinfile:
    dest: /etc/apache2/sites-available/000-default.conf
    regexp: '^<VirtualHost \*:80>'
    line: '<VirtualHost *:8081>'
    state: present
  notify:
    - RESTART APACHE2
```

    The **lineinfile** module is used to replace existing lines in the `/etc/apache2/ports.conf` and
    `/etc/apache2/sites-available/000-default.conf` files. You can search the Ansible documentation for more information on the lineinfile module.
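
Keep in mind that **lineinfile** changes at most one line per task: the last line matching `regexp` is replaced by `line`, and if nothing matches, `line` is appended to the file. When editing system files like these, it can be reassuring to keep a timestamped copy of the original; here is a variant of the first task with the `backup` option, shown as a sketch rather than a lab requirement:

```yaml=
- name: SET APACHE2 LISTEN ON PORT 8081 (sketch with backup)
  lineinfile:
    dest: /etc/apache2/ports.conf
    regexp: '^Listen 80'
    line: 'Listen 8081'
    state: present
    backup: true   # keep a timestamped copy of the original file
  notify:
    - RESTART APACHE2
```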

    Once the playbook is run again, we can check the results through the `ansible` **command** module.

    1. Check the `/etc/apache2/ports.conf` file
    ```bash
    ansible containers -m command -a "grep ^Listen /etc/apache2/ports.conf" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    web01 | CHANGED | rc=0 >>
    Listen 8081
    web04 | CHANGED | rc=0 >>
    Listen 8081
    web02 | CHANGED | rc=0 >>
    Listen 8081
    web03 | CHANGED | rc=0 >>
    Listen 8081
    ```

    2. Check the `/etc/apache2/sites-available/000-default.conf` file
    ```bash
    ansible containers -m command -a "grep ^<VirtualHost /etc/apache2/sites-available/000-default.conf" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    web02 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web04 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web01 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    web03 | CHANGED | rc=0 >>
    <VirtualHost *:8081>
    ```

3. Finally, we can list the TCP sockets in the listening state
    ```bash
    ansible containers -m command -a "ss -ltn" --ask-vault-pass --extra-vars @$HOME/lab15_passwd.yml
    Vault password:
    ```

    ```bash=
    web02 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:5355 0.0.0.0:*
    LISTEN 0 4096 127.0.0.54:53 0.0.0.0:*
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 4096 [::]:5355 [::]:*
    LISTEN 0 511 *:8081 *:*
    web04 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.54:53 0.0.0.0:*
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:5355 0.0.0.0:*
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 511 *:8081 *:*
    LISTEN 0 4096 [::]:5355 [::]:*
    web01 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 4096 127.0.0.54:53 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:5355 0.0.0.0:*
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 511 *:8081 *:*
    LISTEN 0 4096 [::]:5355 [::]:*
    web03 | CHANGED | rc=0 >>
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
    LISTEN 0 4096 127.0.0.54:53 0.0.0.0:*
    LISTEN 0 4096 0.0.0.0:5355 0.0.0.0:*
    LISTEN 0 128 [::]:22 [::]:*
    LISTEN 0 511 *:8081 *:*
    LISTEN 0 4096 [::]:5355 [::]:*
    ```

In the results above, lines 9, 17, 26 and 35 show that each container has a TCP socket listening on port 8081.

    ## Step 5: Add a task to verify access to the web services

    When deploying new services, it is important to check that they are actually reachable at the application layer.

In order to do so, we add a final task to the `install_apache.yml` playbook that sends an HTTP request from the Devnet VM, much like a local `curl` command, and verifies that the HTTP status code is `200 OK`.

```yaml=
- name: CHECK HTTP STATUS CODE
  uri:
    url: 'http://[{{ ansible_default_ipv6.address }}]:8081'
    status_code: 200
  when: "'containers' in group_names"
  delegate_to: localhost
  become: false
```

As our playbook starts with the **gather facts** phase, many Ansible variables are set before the tasks run.

In the example above, we use the IPv6 address of each container in the HTTP URL and expect the code **200** as a successful result.

* The **delegate_to: localhost** keyword instructs Ansible to run the task from the Devnet VM.
* The **become: false** keyword indicates that the task must run with normal user privileges.
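
If the web service needs a few seconds to come up after a restart, the check can be made more robust by retrying: **register** the result and loop with `until`. A minimal sketch under that assumption:

```yaml=
- name: CHECK HTTP STATUS CODE WITH RETRIES (sketch)
  uri:
    url: 'http://[{{ ansible_default_ipv6.address }}]:8081'
    status_code: 200
  register: http_result
  until: http_result.status == 200
  retries: 5   # try up to 5 times...
  delay: 2     # ...waiting 2 seconds between attempts
  when: "'containers' in group_names"
  delegate_to: localhost
  become: false
```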

    If we run the playbook with success, we only get **ok** as results. Here is a sample:

    ```bash=
    TASK [CHECK HTTP STATUS CODE] *******************************************
    ok: [web04 -> localhost]
    ok: [web01 -> localhost]
    ok: [web02 -> localhost]
    ok: [web03 -> localhost]
    ```

If we run the same playbook with the very verbose option `-vvv`, we get detailed results for each HTTP request. Here is a sample for one of the four containers tested:

```json=
    "last_modified": "Sun, 11 Feb 2024 10:07:56 GMT",
    "msg": "OK (10701 bytes)",
    "redirected": false,
    "server": "Apache/2.4.58 (Debian)",
    "status": 200,
    "url": "http://[2001:678:3fc:1c:216:3eff:fe6e:7a91]:8081",
    "vary": "Accept-Encoding"
```

    On line 5, the status value is actually 200.

    # Conclusion

This lab is primarily an introduction to Ansible for students with no prior experience in the world of automation. It covers many modules through basic tasks for installing and configuring containers and services.

    If you've reached these lines, I hope you've enjoyed the trip :smiley: