Build a small Docker Cluster


When starting with Docker, it usually runs on a single host for testing. As the number of Docker containers grows, you can spread them over more hosts and even create a Docker cluster. In this topic I will create a small cluster so that all Docker machines can be controlled with one command.


To make sure the Docker hosts share information I used Consul. Next to Consul we need to make the Docker daemon accessible over the network and set up a Docker Swarm, so that a single command controls all Docker containers.

For this example I have used 2 hosts:

  • Docker-Host-Master with IP
  • Docker-Host-02 with IP

When both hosts are up and running and have a network connection, we can install Consul on both of them.

Install and configure Consul

In this example we will start Consul inside a Docker container. On the master host, run the following command to start Consul and forward port 8500:

Docker-Host-Master# docker run -d -h master --name consul-master -p 8500:8500 progrium/consul -server -advertise -bootstrap-expect 1

On the second host, start Consul as well, but with the slave configuration. For this example we use 1 master and 1 slave; for production environments I would recommend at least 3 hosts in total.

Docker-Host-02# docker run -d -h node2 --name consul-node2 -p 8500:8500 progrium/consul -server -advertise -IP -join
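To check whether the second agent actually joined, you can query Consul from the master host. A quick sanity check (assuming the container name consul-master used above):

```shell
# List all Consul cluster members; both "master" and "node2" should
# show up with status "alive" once the join has succeeded.
docker exec consul-master consul members
```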

Configure Docker for remote access

To open the Docker daemon for remote access, we have to change the Docker configuration file. This example is for Ubuntu 16.x with Vim as the text editor; on the master server, execute the following:

Docker-Host-Master# sudo vim /etc/default/docker

Add line: DOCKER_OPTS="--cluster-store=consul:// --cluster-advertise= -H=tcp:// -H=unix:///var/run/docker.sock"

On the Docker-Host-02 (slave) server we need the same configuration, but we advertise a different IP address:

Docker-Host-02# sudo vim /etc/default/docker

Add line: DOCKER_OPTS="--cluster-store=consul:// --cluster-advertise= -H=tcp:// -H=unix:///var/run/docker.sock"

Now both hosts advertise the Docker daemon on TCP port 2375 and share the same cluster store (Consul).
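The daemon has to be restarted before the new options take effect. A quick way to restart and verify on each host (a sketch, assuming port 2375 as configured above):

```shell
# Restart the Docker daemon so the new DOCKER_OPTS are picked up.
sudo service docker restart

# The remote API should now answer on TCP port 2375.
curl http://localhost:2375/version
```

Note: on systemd-based Ubuntu releases the /etc/default/docker file is only read if the Docker systemd unit actually references it, so if the options do not take effect, check the unit file as well.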

Docker Swarm

Now we want to start the Swarm manager so we can run docker commands against the whole cluster. In this example the docker command connects on TCP port 3375; run the following command on the Docker master host:

Docker-Host-Master# docker run --restart=unless-stopped --name swarm-master -d -p 3375:2375 swarm manage consul://
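To confirm that the manager came up, check the container and its logs (assuming the container name swarm-master used above):

```shell
# The swarm-master container should be listed as "Up".
docker ps --filter name=swarm-master

# The manager logs show the nodes as they are discovered via Consul.
docker logs swarm-master
```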

Join the Swarm on both hosts:

Docker-Host-Master# docker run -d --name swarm-client-master swarm join --advertise= consul://
Docker-Host-02# docker run -d --name swarm-client-node2 swarm join --advertise= consul://

Check the Docker Swarm by setting the DOCKER_HOST environment variable and running a docker command:

docker info

or use the -H parameter:

docker -H info

In the output under “Nodes” you should see all the nodes that we just created and added in this example.
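Putting the two options together, it looks like this. The <master-ip> placeholder stands for the master's address, which is not shown above:

```shell
# Option 1: point every following docker command at the Swarm manager.
export DOCKER_HOST=tcp://<master-ip>:3375
docker info

# Option 2: pass the endpoint per command with -H.
docker -H tcp://<master-ip>:3375 info
```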

Docker overlay network

Next to running commands against our Docker cluster, we want to be able to create one network subnet for our containers; this can be used, for example, by Elasticsearch clusters. In this example the subnet is configured and the network name is “clusternetwork”; run the following command on the master server:

Docker-Host-Master# docker network create --driver overlay --internal --subnet= clusternetwork
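Because the network definition is kept in the shared cluster store (Consul), it should become visible on both hosts. You can verify this with (assuming the network name clusternetwork from above):

```shell
# The overlay network should be listed on the master and on node2.
docker network ls --filter name=clusternetwork

# Shows the driver, the subnet, and later the attached containers.
docker network inspect clusternetwork
```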

Use the --internal flag if no communication outside the network is required. To test the network, we will run a BusyBox container on both hosts:

docker run -it \
--name=busybox \
--net=clusternetwork \
busybox
When BusyBox has started you will see a shell from inside the container; use ‘ifconfig’ and ‘ping’ to verify the network connection between the 2 containers.

Now that we have created the cluster and the overlay network, we can expand it or create clustered containers, for example with Elasticsearch.

Have fun with Docker!
