Install Docker Swarm On A Server Cluster In 15 Minutes


Docker Swarm In A Nutshell


This simple tutorial shows how a running Docker Swarm cluster can be created in ~15 minutes.

I created a follow-up guide to show important services that should be used in every Docker Swarm. The guide is based on this tutorial.

4 Important Services Everyone Should Deploy In A Docker Swarm
Learn how to enhance your Docker Swarm with four important services that you will love.

1. Introduction

Whenever you read “Docker Swarm” in this article, it actually refers to “Docker Swarm mode”, not the deprecated standalone product Docker Swarm.

1.1 Why Docker Swarm?

Docker Swarm is the right tool to deploy your application stacks to production, in a distributed cluster, using the same files used by Docker Compose locally.

Additional advantages are:

  • Replicability (dev files can be used in production)
  • Robustness (fault-tolerant clusters)
  • Simplicity and speed for development and deployment

The main benefit of Docker Swarm is simplicity and development speed while still being able to set up a distributed cluster ready for production in 15 minutes.
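
The same Compose file can drive both workflows. As a rough sketch (the file name and stack name below are just placeholders):

# During development, on your local machine (Docker Compose):
docker compose -f docker-compose.yml up -d

# In production, on the Swarm manager, the same file is deployed as a stack:
docker stack deploy -c docker-compose.yml my_app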

1.2 Prerequisites

To follow this tutorial on Docker Swarm you need these prerequisites:

  • Basic familiarity with Linux and Docker
  • Access to at least two Ubuntu servers (I used four) from a cloud provider on which the Docker Swarm will run
  • A domain that points to one of your servers

2. Create the Docker Swarm

2.1 Installing dependencies

On a freshly installed server infrastructure, every dependency has to be installed before the swarm cluster can be created. This includes updating the operating system and installing the Docker Engine.

To update the operating system, run the following on every server:

sudo apt-get update && sudo apt-get upgrade

Subsequently, curl can be used to download get-docker.sh, a script provided by docker.com to easily install Docker on a system:

curl -fsSL https://get.docker.com -o get-docker.sh

This curl command fetches the script from https://get.docker.com and writes its content into the file get-docker.sh.

Finally, the file has to be executed (as the root user):

sh get-docker.sh

This script installs everything you need to use Docker on your server.
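
To confirm the installation succeeded, you can optionally print the installed version and start Docker's official hello-world test image on each server:

# Print the installed Docker version
sudo docker --version

# Optional: run the hello-world test image to verify the engine works
sudo docker run --rm hello-world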

2.2 Initialize the Docker Swarm

After the Docker installation is done choose your Manager node. The Manager node will be the server within your cluster that does all the deployment and managing. The Manager node should be the server within the environment whose IP is targeted by a domain name.

On your manager execute the following command:

docker swarm init --advertise-addr X.X.X.X

This command activates swarm mode on the server. The --advertise-addr has to be the internal IP of the server (normally, if you buy more than one server from the same hosting company, you can put all of them into an internal network and use their internal IPs).
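
If you are not sure which internal IP to use, you can list the addresses assigned to the server first; which one belongs to the private network depends on your hosting provider:

# Show all IPv4 addresses assigned to this server
ip -4 addr show

# Alternatively, print all assigned IPs on one line
hostname -I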

2.3 Finalising the Docker Swarm

The last step is joining the Docker Swarm from the other servers. The previous command prints a join command at the end of its output that should be executed on the remaining servers in your cluster. It will look like this:

docker swarm join --token SOME_CRYPTIC_HASH_TOKEN X.X.X.X:2377

Execute this command on every other server in your network, which will then connect each server as a Worker node to your Manager.
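
To verify that all servers joined successfully, you can list the nodes on the Manager; the hostnames and IDs will of course differ in your cluster:

# Run on the Manager node: lists every node with its status and availability
docker node ls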

3. Optional enhancement

Now that the swarm is ready to be used, it is possible to deploy services.

I would suggest deploying at least Portainer, a container management service that can be used for managing and deploying services in your swarm.

As a starting point, you can use this Compose file and save it as docker-compose.portainer.yml on the Manager node:

version: '3.2'

services:
  agent:
    image: portainer/agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]

  portainer:
    image: portainer/portainer-ce
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - portainer_data:/data
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true

volumes:
  portainer_data:

After saving this file on the Manager node, it can be deployed within the Docker Swarm:

docker stack deploy -c docker-compose.portainer.yml portainer

Shortly after executing the command (once the images are pulled), Portainer can be accessed at http://YOUR_DOMAIN:9000
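
If Portainer is not reachable yet, you can check whether the stack's services have started; the stack name portainer matches the one used in the deploy command above:

# List all services in the stack and their replica counts
docker stack services portainer

# Show the individual tasks and the nodes they run on
docker stack ps portainer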

4. Closing Notes

Keep in mind that the example Portainer instance will be deployed without SSL. To add SSL, you can use a load balancer like Traefik. As a starting point, you can read an article that explains how to set up a Traefik service within a plain Docker environment:

How to setup Traefik v2 with automatic Let’s Encrypt certificate resolver
Today it is really important to have SSL encrypted websites. This guide will show how easy it is to have an automatic SSL resolver built into your traefik load balancer.

Additionally, I am working on a follow-up article that covers the most important services you want to have in a Docker Swarm; Traefik will be part of it as the load balancer. Follow or subscribe to me to get informed when my “docker swarm services” article is published.

Feel free to connect with me on Medium, LinkedIn, and Twitter.


