Manual Install Instructions¶
These instructions will take you through installing a minimal Clearwater system using the latest binary packages provided by the Clearwater project. For a high level look at the install process, and a discussion of alternative install methods, see Installation Instructions.
Prerequisites¶
Before starting this process you will need the following:
- Six machines running clean installs of Ubuntu 14.04 - 64bit server edition.
  - The software has been tested on Amazon EC2 t2.small instances (i.e. 1 vCPU, 2 GB RAM), so any machines at least as powerful as one of these will be sufficient.
  - Each machine will take on a separate role in the final deployment. The system requirements for each role are the same, so the allocation of roles to machines can be arbitrary.
  - The firewalls of these devices must be independently configurable. This may require some attention when commissioning the machines. For example, in Amazon’s EC2, they should all be created in separate security groups.
  - On Amazon EC2, we’ve tested both within a VPC and without. If using a VPC, we recommend using the “VPC with a Single Public Subnet” model (in the “VPC Wizard”) as this is simplest.
- A publicly accessible IP address and a private IP address for each of the above machines (these may be the same address depending on the machine environment). These will be referred to as <publicIP> and <privateIP> below. (If running on Amazon EC2 in a VPC, you must explicitly add the public IP address by ticking the “Automatically assign a public IP address” checkbox on creation.)
- The FQDN of each machine, which resolves to the machine’s public IP address (if the machine has no FQDN, you should instead use the public IP). Referred to as <hostname> below.
- SSH access to the above machines as a user authorised to use sudo. If your system does not come with such a user pre-configured, add a user with sudo adduser <username> and then authorize them to use sudo with sudo adduser <username> sudo (see the sketch after this list).
- A DNS zone in which to install your deployment and the ability to configure records within that zone. This zone will be referred to as <zone> below.
- If you are not using the Project Clearwater provided Debian repository, you will need to know the URL (and, if applicable, the public GPG key) of your repository.
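As a quick sketch of the sudo setup step mentioned above (the username cwadmin is purely an example):

# Create a new user and give it sudo rights - substitute your own username
sudo adduser cwadmin
sudo adduser cwadmin sudo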
Configure the APT software sources¶
Configure each machine so that APT can use the Clearwater repository server.
Under sudo, create /etc/apt/sources.list.d/clearwater.list with the following contents:
deb http://repo.cw-ngv.com/stable binary/
Note: If you are not installing from the provided Clearwater Debian repository, replace the URL in this file so that it points to your own Debian package repository.
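For example, one way to create the file is with a single command (this assumes the default Clearwater repository URL shown above):

# Write the APT source entry under sudo; point the URL at your own repository if applicable
echo 'deb http://repo.cw-ngv.com/stable binary/' | sudo tee /etc/apt/sources.list.d/clearwater.list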
Once this is created, install the signing key used by the Clearwater server with:
curl -L http://repo.cw-ngv.com/repo_key | sudo apt-key add -
You should check the key fingerprint with:
sudo apt-key finger
The output should contain the following - check the fingerprint carefully.
pub 4096R/22B97904 2013-04-30
Key fingerprint = 9213 4604 DE32 7DF7 FEB7 2026 111D BE47 22B9 7904
uid Project Clearwater Maintainers <maintainers@projectclearwater.org>
sub 4096R/46EC5B7F 2013-04-30
Once the above steps have been performed, run the following to re-index your package manager:
sudo apt-get update
Determine Machine Roles¶
At this point, you should decide (if you haven’t already) which of the six machines will take on which of the Clearwater roles.
The six roles are:
- ellis
- bono - This role also hosts a restund STUN server
- sprout
- homer
- homestead
- ralf
Firewall configuration¶
We need to make sure the Clearwater nodes can all talk to each other. To do this, you will need to open up some ports in the firewalls in your network. The ports used by Clearwater are listed in Clearwater IP Port Usage. Configure all of these ports to be open to the appropriate hosts before continuing to the next step. If you are running on a platform that has multiple physical or virtual interfaces and the option to apply different firewall rules on each, make sure that you open these ports on the correct interfaces.
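As a purely illustrative sketch (the port, protocol and source range below are hypothetical; take the real list from Clearwater IP Port Usage and use your own addressing), opening a single port with ufw on Ubuntu looks like this:

# Hypothetical example: allow TCP port 7253 from the deployment's private subnet
sudo ufw allow proto tcp from 10.0.0.0/24 to any port 7253
# Review the resulting rule set
sudo ufw status verbose

On EC2 you would typically achieve the same effect by editing the security group rules instead.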
Create the per-node configuration¶
On each machine create the file /etc/clearwater/local_config with the following contents:
local_ip=<privateIP>
public_ip=<publicIP>
public_hostname=<hostname>
etcd_cluster="<comma separated list of private IPs>"
Note that the etcd_cluster variable should be set to a comma-separated list containing the private IP addresses of the nodes you created above. For example, if the nodes had addresses 10.0.0.1 to 10.0.0.6, etcd_cluster should be set to:
"10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6"
If you are creating a geographically redundant deployment, then:
- etcd_cluster should contain the IP addresses of nodes in both sites
- you should set local_site_name and remote_site_name in /etc/clearwater/local_config. These names are arbitrary, but should reflect the node’s location (e.g. a node in site A should have local_site_name=siteA and remote_site_name=siteB, whereas a node in site B should have local_site_name=siteB and remote_site_name=siteA).
If this machine will be a Sprout or Ralf node, create the file /etc/chronos/chronos.conf with the following contents:
[http]
bind-address = <privateIP>
bind-port = 7253
threads = 50
[logging]
folder = /var/log/chronos
level = 2
[alarms]
enabled = true
[exceptions]
max_ttl = 600
Install Node-Specific Software¶
ssh onto each box in turn and follow the appropriate instructions below according to the role the node will take in the deployment:
Ellis¶
Install the Ellis package with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install ellis --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Bono¶
Install the Bono and Restund packages with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install bono restund --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Sprout¶
Install the Sprout package with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install sprout --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
If you want the Sprout nodes to include a Memento Application server, then install the Memento packages with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install memento-as memento-nginx --yes
Homer¶
Install the Homer packages with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install homer --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Homestead¶
Install the Homestead packages with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install homestead homestead-prov clearwater-prov-tools --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Ralf¶
Install the Ralf package with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install ralf --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
SNMP statistics¶
Sprout, Bono and Homestead nodes expose statistics over SNMP. This function is not installed by default. If you want to enable it, follow the instructions in our SNMP documentation.
Provision Telephone Numbers in Ellis¶
Log onto your Ellis node and provision a pool of numbers in Ellis. The command given here will generate 1000 numbers starting at sip:6505550000@<zone>, meaning none of the generated numbers will be routable outside of the Clearwater deployment. For more details on creating numbers, see the create_numbers.py documentation.
sudo bash -c "export PATH=/usr/share/clearwater/ellis/env/bin:$PATH ;
cd /usr/share/clearwater/ellis/src/metaswitch/ellis/tools/ ;
python create_numbers.py --start 6505550000 --count 1000"
On success, you should see some output from python about importing eggs and then the following.
Created 1000 numbers, 0 already present in database
This command is idempotent, so it’s safe to run it multiple times. If you’ve run it once before, you’ll see the following instead.
Created 0 numbers, 1000 already present in database
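The create_numbers.py documentation also describes a --pstn flag for provisioning PSTN-style numbers; we believe an invocation along the following lines would work, but check that documentation first (the number range here is hypothetical):

sudo bash -c "export PATH=/usr/share/clearwater/ellis/env/bin:$PATH ;
cd /usr/share/clearwater/ellis/src/metaswitch/ellis/tools/ ;
python create_numbers.py --start 2125550000 --count 100 --pstn"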
DNS Records¶
Clearwater uses DNS records to allow each node to find the others it needs to talk to to carry calls. At this point, you should create the DNS entries for your deployment before continuing to the next step. Clearwater DNS Usage describes the entries that are required before Clearwater will be able to carry service.
Although not required, we also suggest that you configure individual DNS records for each of the machines in your deployment to allow easy access to them if needed.
Be aware that DNS record creation can take time to propagate. You can check whether your newly configured records have propagated successfully by running dig <record> on each Clearwater machine and checking that the correct IP address is returned.
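For example, assuming your zone is example.com and you have created a record named sprout.example.com (the record name here is illustrative; Clearwater DNS Usage lists the records you actually need):

# Query the record and check that the answer matches the IP address you configured
dig sprout.example.com
# Or print just the resolved address
dig +short sprout.example.com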
Where next?¶
Once you’ve reached this point, your Clearwater deployment is ready to handle calls. See the following pages for instructions on making your first call and running the supplied regression test suite.
Larger-Scale Deployments¶
If you’re intending to spin up a larger-scale deployment containing more than one node of each type, it’s recommended that you use the automated install process, as this makes scaling up and down very straightforward. If for some reason you can’t, you can add nodes to the deployment using the Elastic Scaling Instructions.
Standalone Application Servers¶
Gemini and Memento can run integrated into the Sprout nodes, or they can be run as standalone application servers.
To install Gemini or Memento as a standalone server, follow the same process as installing a Sprout node, but don’t add them to the existing Sprout DNS cluster.
The sprout_hostname setting in /etc/clearwater/shared_config on standalone application servers should be set to the hostname of the standalone application server cluster, for example memento.cw-ngv.com.
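On the nodes of a standalone Memento cluster, the relevant line of /etc/clearwater/shared_config would then read (using the illustrative hostname above):

sprout_hostname=memento.cw-ngv.com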
I-CSCF configuration¶
The I-CSCF is responsible for sending requests to the correct S-CSCF. It queries the HSS, but if the HSS doesn’t have a configured S-CSCF for the subscriber then it needs to select an S-CSCF itself. The I-CSCF defaults to selecting the Clearwater S-CSCF (as configured in scscf_uri in /etc/clearwater/shared_config).
You can configure which S-CSCFs are available to the I-CSCF by editing the /etc/clearwater/s-cscf.json file.
This file stores the configuration of each S-CSCF, their capabilities, and their relative weighting and priorities. The format of the file is as follows:
{
"s-cscfs" : [
{ "server" : "<S-CSCF URI>",
"priority" : <priority>,
"weight" : <weight>,
"capabilities" : [<comma separated capabilities>]
}
]
}
The S-CSCF capabilities are integers, and their meaning is defined by the operator. Capabilities will have different meanings between networks.
As an example, say you have one S-CSCF that supports billing, and one that doesn’t. You can then say that capability 1 is the ability to provide billing, and your s-cscf.json file would look like:
{
"s-cscfs" : [
{ "server" : "sip:scscf1",
"priority" : 0,
"weight" : 100,
"capabilities" : [1]
},
{ "server" : "sip:scscf2",
"priority" : 0,
"weight" : 100,
"capabilities" : []
}
]
}
Then when you configure a subscriber in the HSS, you can set up what capabilities they require in an S-CSCF. These will also be integers, and you should make sure this matches with how you’ve set up the s-cscf.json file. In this example, if you wanted your subscriber to be billed, you would configure the user data in the HSS to make it mandatory for your subscriber to have an S-CSCF that supports capability 1.
To change the I-CSCF configuration, edit this file on any Sprout node, then upload it to the shared configuration database by running sudo /usr/share/clearwater/clearwater-config-manager/scripts/upload_scscf_json.
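For example, on a Sprout node the edit-and-upload sequence is simply:

# Edit the S-CSCF configuration (use whichever editor you prefer)
sudo vi /etc/clearwater/s-cscf.json
# Push the updated file to the shared configuration database
sudo /usr/share/clearwater/clearwater-config-manager/scripts/upload_scscf_json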