

@sushant-hiray
Last active April 20, 2017 02:49

Revisions

  1. sushant-hiray revised this gist Jan 26, 2015. 1 changed file with 3 additions and 0 deletions.
    3 changes: 3 additions & 0 deletions clearwater.md
    @@ -515,6 +515,9 @@ We need to make sure the Clearwater nodes can all talk to each other. To do this

For opening / closing ports on an Ubuntu server, use [UFW](https://help.ubuntu.com/10.04/serverguide/firewall.html).
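
    For example, a minimal UFW sketch that opens a few of the ports used elsewhere in this document to the private network. The `10.129.34.0/24` subnet is an assumption based on the node IPs used here; consult the Clearwater IP Port Usage page for the full list of ports.

    ```
    sudo ufw allow from 10.129.34.0/24 to any port 5060 proto tcp   # SIP towards bono
    sudo ufw allow from 10.129.34.0/24 to any port 5060 proto udp
    sudo ufw allow from 10.129.34.0/24 to any port 8888 proto tcp   # homestead API (hs_hostname)
    sudo ufw allow from 10.129.34.0/24 to any port 7253 proto tcp   # chronos
    sudo ufw enable
    sudo ufw status verbose
    ```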

    ##Bulk Provisioning
    To bulk-provision numbers, follow the process documented [here](https://github.com/Metaswitch/crest/blob/dev/docs/Bulk-Provisioning%20Numbers.md). (There is an alternative bulk-provisioning process documented [here](https://github.com/Metaswitch/crest/blob/dev/src/metaswitch/crest/tools/sstable_provisioning/README.md); while it is more complex to set up, it may be more suitable if you are provisioning very large sets of numbers.)

    ##Clustering
    For more information about clustering nodes, check the [official documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install#larger-scale-deployments)

  2. sushant-hiray revised this gist Jan 13, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion todo-server.md
    @@ -4,5 +4,5 @@ Following are the stuff to be done:
    - [x] Add new user
    - [x] scp the proxy script
    - [x] Run the proxy script
    - [ ] sudo apt-get update
    - [x] sudo apt-get update
    - [ ] scp the resolv.conf
  3. sushant-hiray revised this gist Jan 13, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion todo-server.md
    @@ -3,6 +3,6 @@ Following are the stuff to be done:
    - [x] Add RSA keys (ssh without pass)
    - [x] Add new user
    - [x] scp the proxy script
    - [ ] Run the proxy script
    - [x] Run the proxy script
    - [ ] sudo apt-get update
    - [ ] scp the resolv.conf
  4. sushant-hiray revised this gist Jan 13, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion todo-server.md
    @@ -2,7 +2,7 @@ Following are the stuff to be done:

    - [x] Add RSA keys (ssh without pass)
    - [x] Add new user
    - [ ] scp the proxy script
    - [x] scp the proxy script
    - [ ] Run the proxy script
    - [ ] sudo apt-get update
    - [ ] scp the resolv.conf
  5. sushant-hiray revised this gist Jan 13, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion todo-server.md
    @@ -1,7 +1,7 @@
    Following are the stuff to be done:

    - [x] Add RSA keys (ssh without pass)
    - [ ] Add new user
    - [x] Add new user
    - [ ] scp the proxy script
    - [ ] Run the proxy script
    - [ ] sudo apt-get update
  6. sushant-hiray revised this gist Jan 13, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion todo-server.md
    @@ -1,6 +1,6 @@
    Following are the stuff to be done:

    - [ ] Add RSA keys (ssh without pass)
    - [x] Add RSA keys (ssh without pass)
    - [ ] Add new user
    - [ ] scp the proxy script
    - [ ] Run the proxy script
  7. sushant-hiray revised this gist Jan 13, 2015. 1 changed file with 8 additions and 0 deletions.
    8 changes: 8 additions & 0 deletions todo-server.md
    @@ -0,0 +1,8 @@
    Following are the stuff to be done:

    - [ ] Add RSA keys (ssh without pass)
    - [ ] Add new user
    - [ ] scp the proxy script
    - [ ] Run the proxy script
    - [ ] sudo apt-get update
    - [ ] scp the resolv.conf
  8. sushant-hiray revised this gist Oct 31, 2014. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions clearwater.md
    @@ -527,6 +527,8 @@ For more information about clustering nodes, check the [official documentation](
    ###Checking DNS Propagation
    Once you've added the relevant lines to the zone file, DNS changes should propagate; this takes some time. To speed things up, install nscd with `sudo apt-get install nscd` on each node. Refreshing it flushes the local DNS cache, so new DNS entries are picked up.

    `sudo pkill -HUP dnsmasq` also works like a charm to clear the dnsmasq DNS cache
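
    A short sketch of the cache-refresh commands referred to above (`nscd -i hosts` is standard nscd usage, not Clearwater-specific):

    ```
    sudo apt-get install nscd   # local name-service caching daemon
    sudo nscd -i hosts          # invalidate nscd's cached host lookups
    sudo pkill -HUP dnsmasq     # SIGHUP makes dnsmasq drop its DNS cache
    ```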

    ##Credits
    * [Manual Install Documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install)
    * [Digital Ocean: Configuring DNS](https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-private-network-dns-server-on-ubuntu-14-04)
  9. sushant-hiray revised this gist Oct 14, 2014. 1 changed file with 4 additions and 0 deletions.
    4 changes: 4 additions & 0 deletions clearwater.md
    @@ -517,12 +517,16 @@ For opening / closing ports in Ubuntu server, user [UFW](https://help.ubuntu.com

    ##Clustering
    For more information about clustering nodes, check the [official documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install#larger-scale-deployments)

    ###Sprout Node

    * Create a new VM
    * Add its IP to the DNS and make sure you cluster the A records correctly.
    * Install the Sprout-specific packages as described in the sections above. If you hit a `server_names_hash_bucket_size` error from nginx, see [this post on increasing server_names_hash_bucket_size](http://charles.lescampeurs.org/2008/11/14/fix-nginx-increase-server_names_hash_bucket_size) for the fix (a sketch follows this list).
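
    A minimal sketch of that nginx fix, assuming the stock configuration layout at `/etc/nginx/nginx.conf`:

    ```
    # In /etc/nginx/nginx.conf, inside the http { } block, uncomment or add:
    #     server_names_hash_bucket_size 128;
    # then verify the configuration and restart nginx:
    sudo nginx -t
    sudo service nginx restart
    ```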

    ###Checking DNS Propagation
    Once you've added the relevant lines to the zone file, DNS changes should propagate; this takes some time. To speed things up, install nscd with `sudo apt-get install nscd` on each node. Refreshing it flushes the local DNS cache, so new DNS entries are picked up.

    ##Credits
    * [Manual Install Documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install)
    * [Digital Ocean: Configuring DNS](https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-private-network-dns-server-on-ubuntu-14-04)
  10. sushant-hiray revised this gist Oct 13, 2014. 1 changed file with 7 additions and 0 deletions.
    7 changes: 7 additions & 0 deletions clearwater.md
    @@ -514,7 +514,14 @@ Either way, you must
    We need to make sure the Clearwater nodes can all talk to each other. To do this, you will need to open up some ports in the firewalls in your network. The ports used by Clearwater are listed in [Clearwater IP Port Usage](https://github.com/Metaswitch/clearwater-docs/wiki/Clearwater%20IP%20Port%20Usage). Configure all of these ports to be open to the appropriate hosts before continuing to the next step. If you are running on a platform that has multiple physical or virtual interfaces and the option to apply different firewall rules on each, make sure that you open these ports on the correct interfaces.

    For opening / closing ports on an Ubuntu server, use [UFW](https://help.ubuntu.com/10.04/serverguide/firewall.html).

    ##Clustering
    For more information about clustering nodes, check the [official documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install#larger-scale-deployments)
    ###Sprout Node

    * Create a new VM
    * Add its IP to the DNS and make sure you cluster the A records correctly.
    * Install the Sprout-specific packages as described in the sections above. If you hit a `server_names_hash_bucket_size` error from nginx, see [this post on increasing server_names_hash_bucket_size](http://charles.lescampeurs.org/2008/11/14/fix-nginx-increase-server_names_hash_bucket_size) for the fix.

    ##Credits
    * [Manual Install Documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install)
  11. sushant-hiray revised this gist Oct 13, 2014. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions clearwater.md
    @@ -513,6 +513,7 @@ Either way, you must

    We need to make sure the Clearwater nodes can all talk to each other. To do this, you will need to open up some ports in the firewalls in your network. The ports used by Clearwater are listed in [Clearwater IP Port Usage](https://github.com/Metaswitch/clearwater-docs/wiki/Clearwater%20IP%20Port%20Usage). Configure all of these ports to be open to the appropriate hosts before continuing to the next step. If you are running on a platform that has multiple physical or virtual interfaces and the option to apply different firewall rules on each, make sure that you open these ports on the correct interfaces.

    For opening / closing ports on an Ubuntu server, use [UFW](https://help.ubuntu.com/10.04/serverguide/firewall.html)
    For more information about clustering nodes, check the [official documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install#larger-scale-deployments)

    ##Credits
  12. sushant-hiray revised this gist Sep 17, 2014. 1 changed file with 181 additions and 1 deletion.
    182 changes: 181 additions & 1 deletion clearwater.md
    @@ -2,6 +2,10 @@

    * At least 7 VMs: one for each node and one for the DNS server.
    * Keep note of the IP of each VM and assign it to a node.
    * A publicly accessible IP address of each of the above machines and a private IP address for each of them (these may be the same address depending on the machine environment). These will be referred to as `<publicIP>` and `<privateIP>` below.
    * The FQDN of the machine, which resolves to the machine's public IP address (if the machine has no FQDN, you should instead use the public IP). Referred to as `<hostname>` below.
    * A DNS root zone in which to install your deployment, and the ability to configure records within that zone. This root zone will be referred to as `<zone>` below; in the DNS setup in this document it is `ims.hom`.


    ## Setting up a DNS Server

    @@ -338,4 +342,180 @@ Restart BIND

    `sudo service bind9 restart`

    Your primary DNS server is now set up and ready to respond to DNS queries.


    ## Bootstrapping the Machines

    There are a few steps that are common to all the Clearwater node installs - configuring the APT software sources and supplying the IP addresses/DNS hostnames to use for communicating within the deployment.

    The following steps should be followed on all the machines that will make up the Clearwater deployment.

    ###Configuring the APT software sources

    The machines need to be configured so that APT can use the Clearwater repository server.

    ###Project Clearwater

    Under sudo, create `/etc/apt/sources.list.d/clearwater.list` with the following contents:

    `deb http://repo.cw-ngv.com/stable binary/`
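
    One way to create that file in a single command (a sketch):

    ```
    echo 'deb http://repo.cw-ngv.com/stable binary/' | sudo tee /etc/apt/sources.list.d/clearwater.list
    ```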


    Once this is created, install the signing key used by the Clearwater repository server with:

    `curl -L http://repo.cw-ngv.com/repo_key | sudo apt-key add -`

    You should check the key fingerprint with:

    `sudo apt-key finger`

    The output should contain the following - check the fingerprint carefully.
    ```
    pub 4096R/22B97904 2013-04-30
    Key fingerprint = 9213 4604 DE32 7DF7 FEB7 2026 111D BE47 22B9 7904
    uid Project Clearwater Maintainers <[email protected]>
    sub 4096R/46EC5B7F 2013-04-30
    ```

    ###Finishing up

    Once the above steps have been performed, run the following to re-index your package manager:

    `sudo apt-get update`


    ## Configuring the inter-node hostnames/IP addresses
    On each machine, create the file `/etc/clearwater/config` with the following contents:

    ```
    # Deployment definitions
    home_domain=<zone>
    sprout_hostname=sprout.<zone>
    chronos_hostname=<privateIP>:7253
    hs_hostname=hs.<zone>:8888
    hs_provisioning_hostname=hs.<zone>:8889
    ralf_hostname=ralf.<zone>:10888
    xdms_hostname=homer.<zone>:7888
    # Local IP configuration
    local_ip=<privateIP>
    public_ip=<publicIP>
    public_hostname=<hostname>
    # Email server configuration
    smtp_smarthost=<smtp server>
    smtp_username=<username>
    smtp_password=<password>
    [email protected]
    # Keys (you can change this secret to something else)
    signup_key=secret
    turn_workaround=secret
    ellis_api_key=secret
    ellis_cookie_key=secret
    ```
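
    As a concrete illustration (a sketch only), here is how the file might look on the sprout node of the deployment used throughout this document, with zone `ims.hom` and the private IPs from the zone files elsewhere in this gist. The public IP, public hostname and SMTP details are placeholders you must substitute with your own values.

    ```
    # Deployment definitions
    home_domain=ims.hom
    sprout_hostname=sprout.ims.hom
    chronos_hostname=10.129.34.61:7253
    hs_hostname=hs.ims.hom:8888
    hs_provisioning_hostname=hs.ims.hom:8889
    ralf_hostname=ralf.ims.hom:10888
    xdms_hostname=homer.ims.hom:7888
    # Local IP configuration
    local_ip=10.129.34.61
    public_ip=198.51.100.61        # placeholder - use this machine's real public IP
    public_hostname=sprout.ims.hom # placeholder - use the machine's FQDN if it has one
    # Email server configuration (placeholders; the obfuscated email line in the
    # template above is the recovery-mail sender address)
    smtp_smarthost=smtp.example.com
    smtp_username=clearwater
    smtp_password=changeme
    # Keys (change these secrets to something else)
    signup_key=secret
    turn_workaround=secret
    ellis_api_key=secret
    ellis_cookie_key=secret
    ```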


    ##Node specific installation instructions

    At this point, you should decide (if you haven't already) which of the six machines will take on which of the Clearwater roles.

    The six roles are:

    * ellis
    * bono - This role also hosts a restund STUN server
    * sprout
    * homer
    * homestead
    * ralf

    Once this is determined, ssh onto each box in turn and follow the appropriate instructions below:

    ###Ellis

    Install the Ellis package with:

    `sudo DEBIAN_FRONTEND=noninteractive apt-get install ellis --yes`

    Provision a pool of numbers in Ellis. The command given here will generate 1000 numbers starting at `sip:6505550000@<zone>`, meaning none of the generated numbers will be routable outside of the Clearwater deployment. For more details on creating numbers, see the create_numbers.py documentation.
    ```
    sudo bash -c "export PATH=/usr/share/clearwater/ellis/env/bin:$PATH ;
    cd /usr/share/clearwater/ellis/src/metaswitch/ellis/tools/ ;
    python create_numbers.py --start 6505550000 --count 1000"
    ```
    On success, you should see some output from python about importing eggs and then the following.

    Created 1000 numbers, 0 already present in database

    This command is idempotent, so it's safe to run it multiple times. If you've run it once before, you'll see the following instead.

    Created 0 numbers, 1000 already present in database


    ###Bono

    Install the Bono and Restund packages with:

    `sudo DEBIAN_FRONTEND=noninteractive apt-get install bono restund --yes`


    ###Sprout

    Install the Sprout package with:

    `sudo DEBIAN_FRONTEND=noninteractive apt-get install sprout --yes`

    If you want the Sprout nodes to include a Memento Application server, then install the memento packages with:
    ```
    sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-cassandra --yes
    sudo DEBIAN_FRONTEND=noninteractive apt-get install memento memento-nginx --yes
    ```

    ###Homer

    Install the Homer and Cassandra packages with:
    ```
    sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-cassandra --yes
    sudo DEBIAN_FRONTEND=noninteractive apt-get install homer --yes
    ```

    ###Homestead

    Install the Homestead and Cassandra packages with:
    ```
    sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-cassandra --yes
    sudo DEBIAN_FRONTEND=noninteractive apt-get install homestead homestead-prov --yes
    ```

    ###Ralf

    Install the Ralf package with:

    `sudo DEBIAN_FRONTEND=noninteractive apt-get install ralf --yes`
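
    Optionally, after installing each node you can sanity-check that its services came up. This is a sketch only; it assumes the Clearwater packages supervise their processes with monit, and that `nodetool` is on the path of the Cassandra-backed nodes (homer, homestead, memento).

    ```
    sudo monit summary   # Clearwater processes should be listed as Running
    nodetool status      # Cassandra ring status (homer/homestead/memento nodes only;
                         # use "nodetool ring" on older Cassandra releases)
    ```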


    ###Adding DNS Records (Client Configuration)
    Clearwater nodes need to know the identity of their DNS server. Ideally, this is achieved through DHCP. There are two main situations in which it might need to be configured manually.

    * When DNS configuration is not provided via DHCP.
    * When incorrect DNS configuration is provided via DHCP.

    Either way, you must

    * create an `/etc/dnsmasq.resolv.conf` file containing the desired DNS configuration (probably just the single line `nameserver <IP address>`)
    * add `RESOLV_CONF=/etc/dnsmasq.resolv.conf` to `/etc/default/dnsmasq`
    * run `service dnsmasq restart`
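
    As a concrete sketch of those three steps, assuming your DNS server is ns1 at 10.129.34.70 (the address used in the zone files elsewhere in this document):

    ```
    echo "nameserver 10.129.34.70" | sudo tee /etc/dnsmasq.resolv.conf
    echo "RESOLV_CONF=/etc/dnsmasq.resolv.conf" | sudo tee -a /etc/default/dnsmasq
    sudo service dnsmasq restart
    ```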

    (As background, dnsmasq is a DNS forwarder that runs on each Clearwater node to act as a cache. Local processes look in /etc/resolv.conf for DNS configuration, and this points them to localhost, where dnsmasq runs. In turn, dnsmasq takes its configuration from /etc/dnsmasq.resolv.conf. By default, dnsmasq would use /var/run/dnsmasq/resolv.conf, but this is controlled by DHCP.)

    ###Firewall configuration

    We need to make sure the Clearwater nodes can all talk to each other. To do this, you will need to open up some ports in the firewalls in your network. The ports used by Clearwater are listed in [Clearwater IP Port Usage](https://github.com/Metaswitch/clearwater-docs/wiki/Clearwater%20IP%20Port%20Usage). Configure all of these ports to be open to the appropriate hosts before continuing to the next step. If you are running on a platform that has multiple physical or virtual interfaces and the option to apply different firewall rules on each, make sure that you open these ports on the correct interfaces.

    For more information about clustering nodes, check the [official documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install#larger-scale-deployments)

    ##Credits
    * [Manual Install Documentation](https://github.com/Metaswitch/clearwater-docs/wiki/Manual-Install)
    * [Digital Ocean: Configuring DNS](https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-private-network-dns-server-on-ubuntu-14-04)
    * [Clearwater MailingList](http://lists.projectclearwater.org/mail-archive/msg00914.html)
  13. sushant-hiray revised this gist Sep 17, 2014. 1 changed file with 252 additions and 0 deletions.
    252 changes: 252 additions & 0 deletions clearwater.md
    @@ -87,3 +87,255 @@ zone "ims.hom" {

    Assuming that our private subnet is `10.128.0.0/16`, add the reverse zone with the following lines (note that our reverse zone name starts with "128.10", which is the octet reversal of "10.128"):
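
    The reverse-zone stanza itself is not shown here; based on the zone name and file path used later in this document (`128.10.in-addr.arpa` and `/etc/bind/zones/db.10.128`), it would look like the sketch below. Note that the node addresses used later are in 10.129.0.0/16; if that is your actual subnet, the zone name would be `129.10.in-addr.arpa` and the file conventionally `db.10.129`.

    ```
    zone "128.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10.128";  # reverse zone file for 10.128.0.0/16
    };
    ```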


    ###Create Forward Zone File

    The forward zone file is where we define DNS records for forward DNS lookups. That is, when the DNS server receives a name query, "bono.ims.hom" for example, it will look in the forward zone file to resolve bono's corresponding private IP address.

    Let's create the directory where our zone files will reside. According to our named.conf.local configuration, that location should be `/etc/bind/zones`:

    `sudo mkdir /etc/bind/zones`

    We will base our forward zone file on the sample db.local zone file. Copy it to the proper location with the following commands:

    ```
    cd /etc/bind/zones
    sudo cp ../db.local ./db.ims.hom
    ```

    Now let's edit our forward zone file:

    `sudo vi /etc/bind/zones/db.ims.hom`

    Initially, it will look something like the following:

    ```
    $TTL 604800
    @ IN SOA localhost. root.localhost. (
    2 ; Serial
    604800 ; Refresh
    86400 ; Retry
    2419200 ; Expire
    604800 ) ; Negative Cache TTL
    ;
    @ IN NS localhost. ; delete this line
    @ IN A 127.0.0.1 ; delete this line
    @ IN AAAA ::1 ; delete this line
    ```

    Finally the file should look something like this:

    ```
    ;
    ; BIND data file for local loopback interface
    ;
    $TTL 604800
    @ IN SOA ns1.ims.hom. admin.ims.hom. (
    7 ; Serial
    604800 ; Refresh
    86400 ; Retry
    2419200 ; Expire
    604800 ) ; Negative Cache TTL
    ;
    ; name servers - NS records
    @ IN NS ns1.ims.hom.
    ; name servers - A records
    ns1.ims.hom. IN A 10.129.34.70
    ; 10.129.0.0/16 - A records
    ; bono
    ; ====
    ;
    ; Per-node records - not required to have both IPv4 and IPv6 records
    bono.ims.hom. IN A 10.129.34.51
    bono-1 IN A 10.129.34.51
    ;
    ; Cluster A and AAAA records - UEs that don't support RFC 3263 will simply
    ; resolve the A or AAAA records and pick randomly from this set of addresses.
    bono IN A 10.129.34.51
    @ IN A 10.129.34.51
    ;
    ; NAPTR and SRV records - these indicate a preference for TCP and then resolve
    ; to port 5060 on the per-node records defined above.
    @ IN NAPTR 1 1 "S" "SIP+D2T" "" _sip._tcp
    @ IN NAPTR 2 1 "S" "SIP+D2U" "" _sip._udp
    _sip._tcp IN SRV 0 0 5060 bono-1
    _sip._udp IN SRV 0 0 5060 bono-1
    ; sprout
    ; ======
    ;
    ; Per-node records - not required to have both IPv4 and IPv6 records
    sprout.ims.hom. IN A 10.129.34.61
    sprout-1 IN A 10.129.34.61
    ;
    ; Cluster A and AAAA records - P-CSCFs that don't support RFC 3263 will simply
    ; resolve the A or AAAA records and pick randomly from this set of addresses.
    sprout IN A 10.129.34.61
    ;
    ; NAPTR and SRV records - these indicate TCP support only and then resolve
    ; to port 5054 on the per-node records defined above.
    sprout IN NAPTR 1 1 "S" "SIP+D2T" "" _sip._tcp.sprout
    _sip._tcp.sprout IN SRV 0 0 5054 sprout-1
    ;
    ; Per-node records for I-CSCF (if enabled) - not required to have both
    ; IPv4 and IPv6 records
    ;
    ; Cluster A and AAAA records - P-CSCFs that don't support RFC 3263 will simply
    ; resolve the A or AAAA records and pick randomly from this set of addresses.
    ;
    ; NAPTR and SRV records for I-CSCF (if enabled) - these indicate TCP
    ; support only and then resolve to port 5052 on the per-node records
    ; defined above.
    ; homestead
    ; =========
    ;
    ; Per-node records - not required to have both IPv4 and IPv6 records
    hs.ims.hom. IN A 10.129.34.49
    homestead-1 IN A 10.129.34.49
    ;
    ; Cluster A and AAAA records - sprout picks randomly from these.
    hs IN A 10.129.34.49
    ;
    ; (No need for NAPTR or SRV records as homestead doesn't handle SIP traffic.)
    ; homer
    ; =====
    ;
    ; Per-node records - not required to have both IPv4 and IPv6 records
    homer.ims.hom. IN A 10.129.34.72
    homer-1 IN A 10.129.34.72
    ;
    ; Cluster A and AAAA records - sprout picks randomly from these.
    homer IN A 10.129.34.72
    ;
    ; (No need for NAPTR or SRV records as homer doesn't handle SIP traffic.)
    ; ralf
    ; =====
    ;
    ; Per-node records - not required to have both IPv4 and IPv6 records
    ralf.ims.hom. IN A 10.129.34.68
    ralf-1 IN A 10.129.34.68
    ;
    ; Cluster A and AAAA records - sprout and bono pick randomly from these.
    ralf IN A 10.129.34.68
    ;
    ; (No need for NAPTR or SRV records as ralf doesn't handle SIP traffic.)
    ; ellis
    ; =====
    ;
    ; ellis is not clustered, so there's only ever one node.
    ;
    ; Per-node record - not required to have both IPv4 and IPv6 records
    ellis.ims.hom. IN A 10.129.34.41
    ellis-1 IN A 10.129.34.41
    ;
    ; "Cluster"/access A and AAAA record
    ellis IN A 10.129.34.41
    ```

    ###Create Reverse Zone File
    Reverse zone files are where we define DNS PTR records for reverse DNS lookups. That is, when the DNS server receives a query by IP address, "10.128.100.101" for example, it will look in the reverse zone file(s) to resolve the corresponding FQDN, "bono.ims.hom" in this case.

    On ns1, for each reverse zone specified in the named.conf.local file, create a reverse zone file. We will base our reverse zone file(s) on the sample db.127 zone file. Copy it to the proper location with the following commands (substituting the destination filename so it matches your reverse zone definition):
    ```
    cd /etc/bind/zones
    sudo cp ../db.127 ./db.10.128
    ```
    Edit the reverse zone file that corresponds to the reverse zone(s) defined in named.conf.local:

    `sudo vi /etc/bind/zones/db.10.128`

    Initially, it will look something like the following:

    ```
    $TTL 604800
    @ IN SOA localhost. root.localhost. (
    1 ; Serial
    604800 ; Refresh
    86400 ; Retry
    2419200 ; Expire
    604800 ) ; Negative Cache TTL
    ;
    @ IN NS localhost. ; delete this line
    1.0.0 IN PTR localhost. ; delete this line
    ```


    Finally, it should look something like this:
    ```
    ;
    ; BIND reverse data file for local loopback interface
    ;
    $TTL 604800
    @ IN SOA ims.hom. admin.ims.hom. (
    5 ; Serial
    604800 ; Refresh
    86400 ; Retry
    2419200 ; Expire
    604800 ) ; Negative Cache TTL
    ; name servers
    @ IN NS ns1.ims.hom.
    ; PTR Records
    70.34 IN PTR ns1.ims.hom. ; 10.129.34.70
    51.34 IN PTR bono.ims.hom. ; 10.129.34.51
    61.34 IN PTR sprout.ims.hom. ; 10.129.34.61
    72.34 IN PTR homer.ims.hom. ; 10.129.34.72
    49.34 IN PTR hs.ims.hom. ; 10.129.34.49
    68.34 IN PTR ralf.ims.hom. ; 10.129.34.68
    ```

    ###Check BIND Configuration Syntax

    Run the following command to check the syntax of the named.conf* files:

    `sudo named-checkconf`

    If your named configuration files have no syntax errors, you will return to your shell prompt and see no error messages.

    The named-checkzone command can be used to check the correctness of your zone files. Its first argument specifies a zone name, and the second argument specifies the corresponding zone file.

    `sudo named-checkzone ims.hom /etc/bind/zones/db.ims.hom`

    `sudo named-checkzone 128.10.in-addr.arpa /etc/bind/zones/db.10.128`

    When all of your configuration and zone files have no errors in them, you should be ready to restart the BIND service.

    ###Restart BIND:

    `sudo service bind9 restart`

    Your primary DNS server is now set up and ready to respond to DNS queries.
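
    To verify this from one of the "trusted" nodes, you can query the server directly with dig (a sketch; 10.129.34.70 is the ns1 address used in the zone files above, and the reverse lookup only resolves if your reverse zone actually covers that subnet):

    ```
    dig @10.129.34.70 bono.ims.hom A +short          # forward lookup
    dig @10.129.34.70 _sip._tcp.ims.hom SRV +short   # SRV record used by SIP clients
    dig @10.129.34.70 -x 10.129.34.51 +short         # reverse lookup (expect bono.ims.hom.)
    ```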
  14. sushant-hiray created this gist Sep 17, 2014.
    89 changes: 89 additions & 0 deletions clearwater.md
    @@ -0,0 +1,89 @@
    # Prereqs

    * At least 7 VMs: one for each node and one for the DNS server.
    * Keep note of the IP of each VM and assign it to a node.

    ## Setting up a DNS Server

    We will be using BIND to configure a private DNS server. First, refresh the package index:

    `sudo apt-get update`

    Now install BIND:

    `sudo apt-get install bind9 bind9utils bind9-doc`

    ###IPv4 Mode

    Before continuing, let's set BIND to IPv4 mode. On the DNS server, edit the bind9 service parameters file:

    `sudo vi /etc/default/bind9`

    Add "-4" to the OPTIONS variable. It should look like the following:

    `OPTIONS="-4 -u bind"`


    ###Configure Primary DNS Server

    BIND's configuration consists of multiple files, which are included from the main configuration file, `named.conf`. These filenames begin with "named" because that is the name of the process BIND runs. We will start by configuring the options file.

    ###Configure Options File

    On the dns VM, open the named.conf.options file for editing:

    `sudo vi /etc/bind/named.conf.options`

    Above the existing options block, create a new ACL block called "trusted". This is where we list the other VMs in the deployment; substitute your own private IPs for the placeholders below:

    ```
    acl "trusted" {
    10.128.10.11; # ns1 - DNS server
    10.128.100.101; # host1
    10.128.100.102; # host2
    10.128.100.103; # host3
    10.128.100.104; # host4
    10.128.100.105; # host5
    10.128.100.106; # host6
    };
    ```

    Save and exit.

    Now that BIND is installed, let's configure the primary DNS server.


    Below the `directory` directive, add the following configuration lines (substituting the proper ns1 IP address) so it looks something like this:
    ```
    options {
    directory "/var/cache/bind"; # This should be already present Add the others
    recursion yes; # enables recursive queries
    allow-recursion { trusted; }; # allows recursive queries from "trusted" clients
    listen-on { 10.128.10.11; }; # ns1 private IP address - listen on private network only
    allow-transfer { none; }; # disable zone transfers by default
    ...
    };
    ```

    Now save and exit `named.conf.options`. The above configuration specifies that only your own servers (the "trusted" ones) will be able to query your DNS server.

    Next, we will configure the local file, to specify our DNS zones.

    ###Configure Local File

    On ns1, open the `named.conf.local` file for editing:

    `sudo vi /etc/bind/named.conf.local`

    Aside from a few comments, the file should be empty. Here, we will specify our forward and reverse zones.

    Add the forward zone with the following lines (substitute the zone name with your own):
    ```
    zone "ims.hom" {
    type master;
    file "/etc/bind/zones/db.ims.hom"; # zone file path
    };
    ```

    Assuming that our private subnet is `10.128.0.0/16`, add the reverse zone with the following lines (note that our reverse zone name starts with "128.10", which is the octet reversal of "10.128"):