Orchestrating a RabbitMQ cluster using Docker

Fran Álvarez de Diego
6 min read · Mar 24, 2018

After some time working with RabbitMQ on several projects, I have learned how cumbersome it can be to get a proof of concept up and running.

Because of that, I have always tried to automate and document the way RabbitMQ runs, using Docker as the platform for the infrastructure.

Here you will find the code needed to enable clustering in RabbitMQ, to make the Docker containers talk to each other, and to easily test a RabbitMQ infrastructure without even installing RabbitMQ:

Provisioning the cluster

In this example we are going to provision a master and two slaves, using both docker run and docker-compose to create the containers.

We need a master node, launched in daemon mode, which means it runs in the background. We give it a name and a host name, and we set two environment variables that RabbitMQ requires for clustering: RABBITMQ_ERLANG_COOKIE and RABBITMQ_NODENAME. All nodes must share the same Erlang cookie, and RabbitMQ uses the node name for clustering.

Then we mount two volumes: one for the RabbitMQ configuration, and one for the definitions file in JSON format, which contains the RabbitMQ setup: users, permissions for each user, queues, routing keys and exchanges. You can find the definitions file here:

RabbitMQ definitions

We mount these volumes and also publish all the ports that RabbitMQ needs. We are using version 3 of the RabbitMQ Docker image, rabbitmq:3-management, which comes with the management plug-in installed; you will see later why that is useful.

So let's run the master node with the following command:

Creating RabbitMQ master node
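The original command is shown as an image; here is a minimal sketch of what it likely looks like. The cookie value and the host paths of the mounted files are placeholders, not taken from the article:

```shell
# Sketch of the master node launch (cookie value and host paths are placeholders).
# -d runs the container in the background; the two -e flags set the variables
# RabbitMQ needs for clustering; the -v flags mount the config and definitions.
docker run -d \
  --name rabbit1 \
  --hostname rabbit1 \
  -e RABBITMQ_ERLANG_COOKIE='secret_cookie' \
  -e RABBITMQ_NODENAME=rabbit@rabbit1 \
  -v "$(pwd)/rabbitmq.config:/etc/rabbitmq/rabbitmq.config" \
  -v "$(pwd)/definitions.json:/etc/rabbitmq/definitions.json" \
  -p 4369:4369 -p 5672:5672 -p 15672:15672 -p 25672:25672 \
  rabbitmq:3-management
```

Ports 4369 (epmd), 5672 (AMQP), 15672 (management console) and 25672 (inter-node communication) are the RabbitMQ defaults.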

You can check the status of the master node using docker ps

docker ps

Now we need the slaves. The command is similar, except that we use a different name and we don't need to expose any ports. We also need to change the node name for clustering to work, but we use the same configuration and the same definitions, and we link the container with rabbit1, which means rabbit2 will be able to reach rabbit1 over the Docker network by its hostname.

Creating slave 1
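Again the command itself is an image; a sketch under the same assumptions as the master command above:

```shell
# Sketch of the first slave: same cookie, same mounted configuration and
# definitions, a different node name, no published ports, linked to rabbit1.
docker run -d \
  --name rabbit2 \
  --hostname rabbit2 \
  -e RABBITMQ_ERLANG_COOKIE='secret_cookie' \
  -e RABBITMQ_NODENAME=rabbit@rabbit2 \
  -v "$(pwd)/rabbitmq.config:/etc/rabbitmq/rabbitmq.config" \
  -v "$(pwd)/definitions.json:/etc/rabbitmq/definitions.json" \
  --link rabbit1:rabbit1 \
  rabbitmq:3-management
```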

For the second slave, use the same command as above, changing only the name, hostname and node name as explained. The only difference is that we also need to link this node with rabbit2, because every node must be able to reach every other node in the network.

Creating slave 2
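A sketch of the second slave, following the same pattern:

```shell
# Sketch of the second slave: linked to both rabbit1 and rabbit2 so it can
# reach every other node in the cluster.
docker run -d \
  --name rabbit3 \
  --hostname rabbit3 \
  -e RABBITMQ_ERLANG_COOKIE='secret_cookie' \
  -e RABBITMQ_NODENAME=rabbit@rabbit3 \
  -v "$(pwd)/rabbitmq.config:/etc/rabbitmq/rabbitmq.config" \
  -v "$(pwd)/definitions.json:/etc/rabbitmq/definitions.json" \
  --link rabbit1:rabbit1 \
  --link rabbit2:rabbit2 \
  rabbitmq:3-management
```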

We can check that all the containers are working using docker ps

docker ps cluster result

You can also check each node's log individually by running docker logs -f <rabbit#>:

The master actually knows about the two slaves, so everything is working up to this point. Now let's check the RabbitMQ console at http://localhost:15672

I've set up the credentials as username guest and password guest.

RabbitMQ Console

Here you can see that we have the three nodes running:

We can also look at the queues by clicking on the Queues tab. By configuration we have created only one queue, named q.user.created, which shows as rabbit1 +2, meaning it lives on rabbit1 and is replicated and synchronized on the two other nodes. The tab also shows the queue's features, such as high availability, durability and auto-delete.

q.user.created queue

Let's take a look at the Exchanges tab: by default we created only e.user.created, which has a binding to q.user.created, the replicated queue we just saw.

On the Overview tab you can find the ports defined by configuration, grouped by purpose: the amqp port (5672) used by your clients, the clustering port (25672), and the RabbitMQ console port (15672).

OK, now that we have everything under control, let's play with our cluster. Anything you configure in the interface (queues, exchanges and so on) can be exported into a new definitions file using the Export definitions option on the Overview tab. That new file can then replace the original one, so every new cluster you create with this configuration will automatically create all your definitions. You can also import a bundled definitions file via Import definitions.

Scaling the cluster

The way we have configured our cluster is fine for a single master and two slaves, but it becomes tedious to maintain if we want to expand the cluster by adding more slaves. As with (almost) everything in this world, there is a better way of doing things, so let's use docker-compose for it.

To wipe all the containers we have created and start from scratch, run docker rm -f $(docker ps -aq). (Please use this command with care, as it completely removes all your containers.)

docker compose configuration file
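The compose file itself appears as an image in the original; this is a sketch of what it plausibly contains, given the description that follows (service names, cookie value and file paths are assumptions, and rabbit1 publishes the ports as before):

```yaml
# Sketch of a docker-compose file matching the article's description:
# three services, depends_on instead of links, and a pre-created external network.
version: "2"
services:
  rabbit1:
    image: rabbitmq:3-management
    hostname: rabbit1
    environment:
      - RABBITMQ_ERLANG_COOKIE=secret_cookie
      - RABBITMQ_NODENAME=rabbit@rabbit1
    volumes:
      - ./rabbitmq.config:/etc/rabbitmq/rabbitmq.config
      - ./definitions.json:/etc/rabbitmq/definitions.json
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - rabbitmq-cluster
  rabbit2:
    image: rabbitmq:3-management
    hostname: rabbit2
    environment:
      - RABBITMQ_ERLANG_COOKIE=secret_cookie
      - RABBITMQ_NODENAME=rabbit@rabbit2
    volumes:
      - ./rabbitmq.config:/etc/rabbitmq/rabbitmq.config
      - ./definitions.json:/etc/rabbitmq/definitions.json
    depends_on:
      - rabbit1
    networks:
      - rabbitmq-cluster
  rabbit3:
    image: rabbitmq:3-management
    hostname: rabbit3
    environment:
      - RABBITMQ_ERLANG_COOKIE=secret_cookie
      - RABBITMQ_NODENAME=rabbit@rabbit3
    volumes:
      - ./rabbitmq.config:/etc/rabbitmq/rabbitmq.config
      - ./definitions.json:/etc/rabbitmq/definitions.json
    depends_on:
      - rabbit1
    networks:
      - rabbitmq-cluster
networks:
  rabbitmq-cluster:
    external: true
```

Marking the network as external is what makes the manual docker network create step below necessary.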

The configuration file defines three services without links (remember that we linked rabbit2 with rabbit1, and rabbit3 with rabbit1 and rabbit2). This is because we are using the depends_on property, which basically tells docker-compose: "please, don't start this service until the services it depends on are up".

You will also find a small piece of configuration that is new here: the networks section. We want all these containers to be able to see each other by hostname or by IP, as if they were on the same network. This network is not created automatically, so the first thing we have to do is create it by running:

docker network create rabbitmq-cluster

You can check the network status by docker network ls

docker network status

And then we are ready to bring the instances up in daemon mode, using docker-compose up -d

$ docker-compose up -d

The containers are prefixed so we can find them easily. In my case, the order in which the slave nodes were created was completely random, but the depends_on rule was respected, which only required node 1 to be created first.

If we run docker-compose logs -f we can inspect the container logs to verify that everything was created OK, and you can check that the master node is actually synced with the slaves by running docker-compose logs -f rabbit1

We can check the RabbitMQ console using the same URL as before, http://localhost:15672, and nothing changes from the previous configuration: we still have the same setup.

Playing in the sandbox

Now that we have spent some time learning how to create our infrastructure, it is time to get our hands dirty and try, for example, sending a message to our queue.

Go to the Exchanges tab and click Publish message to deliver an example message with the payload User created to the q.user.created queue. You will receive a confirmation, and if we go to Overview or Queues we see 1 message Ready and 1 Total. There are no consumers connected right now, but we can still fetch the message: let's get it and requeue it, so we neither consume it nor remove it from the queue.
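For reference, the same publish can also be done from the command line through the management HTTP API on port 15672. The routing key here is an assumption based on the queue name; %2f is the URL-encoded default vhost "/":

```shell
# Publish a test message to the e.user.created exchange via the management API.
curl -u guest:guest -H 'Content-Type: application/json' \
  -X POST http://localhost:15672/api/exchanges/%2f/e.user.created/publish \
  -d '{"properties":{},"routing_key":"user.created","payload":"User created","payload_encoding":"string"}'
```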

Getting the message

Because we requeued it, the message is still Ready. We can try fetching the message again without requeueing it, and everything goes back to 0.
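Fetching can also be sketched against the management HTTP API, where the ackmode field controls whether the message is requeued:

```shell
# Fetch one message and requeue it (ack_requeue_true keeps it in the queue);
# use ack_requeue_false instead to consume it and bring the counters back to 0.
curl -u guest:guest -H 'Content-Type: application/json' \
  -X POST http://localhost:15672/api/queues/%2f/q.user.created/get \
  -d '{"count":1,"ackmode":"ack_requeue_true","encoding":"auto"}'
```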

Happy coding and queueing!

I only write about programming and systems. If you follow me on Twitter I won’t waste your time. 👍
