Traefik, Docker and dnsmasq to simplify container networking

Great tech adventures start with a bit of frustration, a need or a requirement. This is the story of how I simplified the management and access of my local web applications using Traefik and dnsmasq. The reasoning applies equally well to a production server using Docker.

My dev environment consists of a growing number of web applications hosted on my laptop. Such applications include multiple websites, tools, editors, registries, … They use databases, REST APIs or more complex backends. Take the example of Supabase: the Docker Compose file includes the Studio, the Kong API gateway, the authentication service, the REST service, the realtime service, the storage service, the meta service and the PostgreSQL database.

The result is a growing number of containers started on my laptop, available on localhost at different ports. Some of them use default ports and cannot run in parallel without conflicts. For example, ports 3000 and 8000 are shared by many containers on my machine. To work around this problem, some containers use custom ports which I often forget.

The solution is to create local domain names that are easy to remember and use a web proxy to route the requests to the correct container. Traefik helps with routing and discovery of these services and dnsmasq provides a custom top-level domain (pseudo-TLD) to access them.

Another use case for Traefik is a production server hosting multiple Docker Compose files for different websites and web applications. The containers communicate inside an internal network and are exposed through a proxy service, in our case implemented with Caddy.

Problem description

Out of many, let's take 3 web applications running locally, all managed with Docker Compose:

  • Adaltas website: 1 container, a Gatsby-based static website
  • Alliage website: 10 containers, a Next.js frontend, a Node.js backend and Supabase
  • Penpot: 6 containers, the Penpot frontend, the backend services plus Inbucket for email testing (custom addon)

By default, these containers expose the following ports on localhost:

  • Adaltas
    • 8000 Gatsby server in developer mode
    • 9000 Gatsby server serving the built site
  • Alliage
    • 3000 Next.js website in both dev and build mode
    • 3001 Node.js custom API
    • 3000 Supabase Studio
    • 5555 Supabase Meta
    • 8000 Kong HTTP
    • 8443 Kong HTTPS
    • 5432 PostgreSQL
    • 2500 Inbucket SMTP server
    • 9000 Inbucket web interface
    • 1100 Inbucket POP3 Server
  • Penpot
    • 2500 Inbucket SMTP server
    • 9000 Inbucket web interface
    • 1100 Inbucket POP3 Server
    • 9001 Penpot front end

Please note that, depending on your environment and preferences, some ports may be restricted while others may be available.

As you can see, many ports collide with each other. It is not just the two instances of Inbucket that run in parallel. For example, port 8000 is used by both Gatsby and Kong. It is a common default port for several applications. The same applies to ports 3000, 8080 and 8443.

One solution is to assign a distinct port to each service. However, this approach does not scale. Soon enough, I forget which port each service is assigned to.

Expected behavior

A better solution is the use of a reverse proxy with hostnames that are easy to remember. Here's what we expect:

  • Adaltas
    • www.adaltas.local Gatsby server in developer mode
    • build.adaltas.local Gatsby server serving the built site
  • Alliage
    • www.alliage.local Next.js website in both dev and build mode
    • api.alliage.local Node.js custom API
    • studio.alliage.local Supabase Studio
    • meta.alliage.local Supabase Meta
    • kong.alliage.local Kong HTTP
    • kong.alliage.local Kong HTTPS
    • sql.alliage.local PostgreSQL
    • smtp.alliage.local Inbucket SMTP server
    • mail.alliage.local Inbucket web interface
    • pop3.alliage.local Inbucket POP3 Server
  • Penpot
    • www.penpot.local Penpot front end
    • smtp.penpot.local Inbucket SMTP server
    • mail.penpot.local Inbucket web interface
    • pop3.penpot.local Inbucket POP3 Server

In a traditional setup, the reverse proxy is configured with one or more configuration files containing all the routing information. However, such a central configuration is not so convenient. It is preferable that each service declares the hostname it responds to.
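For comparison, here is a minimal sketch of what such a central configuration could look like with Traefik's file provider; the adaltas-www router and service names and the target URL are illustrative assumptions, not part of the setup described below:

http:
  routers:
    adaltas-www:
      rule: "Host(`www.adaltas.local`)"
      service: adaltas-www
  services:
    adaltas-www:
      loadBalancer:
        servers:
          # Every new service requires editing this central file by hand
          - url: "http://adaltas-www:8000"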

Automatic routing registration

All my web services are managed with Docker Compose. Ideally, I would expect that information to be declared in the Docker Compose file. Traefik is cloud native in the sense that it configures itself automatically. The application declares a few instructions in its docker-compose.yml file and the containers are exposed automatically.

This is how Traefik works with Docker: it connects to the Docker socket, discovers new services and creates the routes for you.

Start Traefik

Starting Traefik inside Docker is easy (never say easy). The docker-compose.yml file is:

version: '3'
services:
  reverse-proxy:
    # The official v2 Traefik Docker image
    image: traefik:v2.9
    # Enable the web UI and tell Traefik to listen to Docker
    command: --api.insecure=true --providers.docker
    ports:
      # The HTTP port
      - "80:80"
      # The web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
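To bring the proxy up and make sure it answers, something like the following can be used; the URLs assume the default ports mapped above:

# Start the reverse proxy in the background
docker-compose up -d reverse-proxy
# Query Traefik's API to confirm it is running (returns the routing data as JSON)
curl http://localhost:8080/api/rawdata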

Register new services

Let's consider an additional service. The Adaltas website is a single container based on Gatsby. In development mode, it starts a web server on port 8000. I expect it to be available with the hostname www.adaltas.local on port 80.

Following Traefik's getting started with Docker, the integration is done with the traefik.http.routers.{router_name}.rule property present in the labels section of the Docker Compose service. It defines the hostname under which our website is accessible on port 80. It is set to www.adaltas.localhost because the .localhost TLD resolves locally by default. Since I prefer to use the .local domain, we will later set the domain to www.adaltas.local using dnsmasq. The traffic is then routed to the container's IP on port 8000. The container port is obtained by Traefik from Docker Compose's ports field.

version: '3'
services:
  www:
    container_name: adaltas-www
    ...
    labels:
    - "traefik.http.routers.adaltas-www.rule=Host(`www.adaltas.localhost`)"
    ports:
    - "8000:8000"

This works when both the Traefik and Adaltas services are defined in the same Docker Compose file. Run docker-compose up and you can:

  • http://localhost:8080: Open Traefik's web interface
  • http://localhost:8080/api/rawdata: Get access to Traefik's API raw data
  • http://www.adaltas.localhost: Access the Adaltas website in development mode
  • http://localhost:8000: Same as http://www.adaltas.localhost since the port is still exposed on the host

There are three constraints we have to deal with:

  • Internal network
    It only works because all services are declared in the same Docker Compose file. With separate Docker Compose files, an internal network must be used for communication between the Traefik container and the target containers.
  • Domain name
    I want to use a pseudo-top-level domain (TLD), for example www.adaltas.local instead of www.adaltas.localhost. The .local TLD does not resolve locally yet, so a local DNS server must be configured.
  • Port label
    The port for Adaltas is defined in the Docker Compose file. Thus, it is exposed on the host computer and it collides with other services. Port forwarding must be disabled and Traefik must be instructed about the port using a different mechanism than the ports field.

Internal network

When defined in separate files, the containers cannot communicate. Each Docker Compose file generates a dedicated network. The targeted service is visible in Traefik's user interface. However, the request cannot be routed.

The containers must share a common network to communicate. When the Traefik container is started, a traefik_default network is created, see docker network list. Instead of creating a new network, let's reuse it. Enrich the Docker Compose file of the targeted container, the Adaltas website in our case, with the networks field:

version: '3'
services:
  www:
    container_name: adaltas-www
    ...
networks:
  default:
    name: traefik_default

After starting the 2 Docker Compose setups with docker-compose up, the Traefik and website containers can communicate.
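To verify that both containers ended up on the same network, inspecting it is enough; the container names are the ones used in this article and may differ in your setup:

# List the networks, traefik_default should appear
docker network list
# Show the containers attached to the shared network
docker network inspect traefik_default --format '{{range .Containers}}{{.Name}} {{end}}'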

Domain name

It's time to tackle FQDNs for our services. The current TLD in use, .localhost, is perfectly fine. It works by default and it is officially reserved for this use. However, I want to use my own TLD (a pseudo-TLD); we will use .local in this example.

Disclaimer: the use of a pseudo-TLD is not recommended. The .local TLD is used by multicast DNS / zero-configuration networking. In practice, I have not encountered any problem. To reduce the risk of conflicts, RFC 2606 reserves the following TLD names: .test, .example, .invalid and .localhost.

A local DNS server is used to resolve the *.local addresses. I had little experience with Bind before. A simpler and easier alternative is dnsmasq. The instructions below cover the installation on macOS and Ubuntu Desktop. In both cases, dnsmasq is installed and configured not to interfere with the current DNS settings.

macOS instructions:

# Install dnsmasq with Homebrew
brew install dnsmasq
# Make sure the configuration directory exists
mkdir -pv $(brew --prefix)/etc/
# Resolve every *.local address to the local machine
echo 'address=/.local/127.0.0.1' >> $(brew --prefix)/etc/dnsmasq.conf
# Start dnsmasq as a service so it runs on boot
sudo brew services start dnsmasq
# Tell macOS to use 127.0.0.1 as the resolver for the .local domain
sudo mkdir -v /etc/resolver
sudo bash -c 'echo "nameserver 127.0.0.1" > /etc/resolver/local'
# Check that the new resolver is taken into account
scutil --dns

Linux instructions using NetworkManager (e.g. Ubuntu Desktop):


# Disable the systemd-resolved stub resolver so dnsmasq can take over
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved
sudo unlink /etc/resolv.conf

# Tell NetworkManager to manage its own dnsmasq instance
cat <<CONF | sudo tee /etc/NetworkManager/conf.d/00-use-dnsmasq.conf
[main]
dns=dnsmasq
CONF

# Forward regular DNS queries to a public DNS server
cat <<CONF | sudo tee /etc/NetworkManager/dnsmasq.d/00-dns-public.conf
server=8.8.8.8
CONF
# Resolve every *.local address to the local machine
cat <<CONF | sudo tee /etc/NetworkManager/dnsmasq.d/00-address-local.conf
address=/.local/127.0.0.1
CONF
# Apply the changes
sudo systemctl restart NetworkManager

Use dig to validate that all FQDNs using our pseudo-TLD resolve to the local machine:
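A minimal check, assuming dnsmasq answers on 127.0.0.1 and using one of the hostnames from this article (any name under .local behaves the same):

# Query dnsmasq directly, every FQDN under .local should resolve to the local machine
dig @127.0.0.1 www.adaltas.local +short
# Expected answer: 127.0.0.1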

Port label

With the introduction of a reverse proxy like Traefik, it is no longer necessary to expose the container port on the host, eliminating the risk of collision between the exposed port and those of other services.

A label already exists to define the hostname of the website service. Traefik comes with a lot of additional labels. The traefik.http.services.{service_name}.loadbalancer.server.port property tells Traefik to use a specific port to connect to the container.

The final Docker Compose file looks like this:

version: '3'
services:
  www:
    container_name: adaltas-www
    image: node:18
    volumes:
      - .:/app
    user: node
    working_dir: /app
    command: bash -c "yarn install && yarn run develop"
    labels:
    # Route requests for www.adaltas.local to this container
    - "traefik.http.routers.adaltas-www.rule=Host(`www.adaltas.local`)"
    # Traefik connects to the container on port 8000, no host port mapping needed
    - "traefik.http.services.adaltas-www.loadbalancer.server.port=8000"
networks:
  default:
    name: traefik_default
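Once dnsmasq is configured and both Compose stacks are running, the website should be reachable by name through Traefik. A quick check, assuming the hostname configured above:

# Start the Adaltas stack, Traefik is assumed to be already running
docker-compose up -d
# The request reaches Traefik on port 80 and is routed to the Gatsby container
curl -I http://www.adaltas.local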

Conclusion

With Traefik, I like the idea that my container services are automatically registered, following a cloud-native philosophy. It brings me comfort and simplicity. In addition, dnsmasq has proven to be well documented and quick to adapt to my various requirements.
