Update a deployed container with Watchtower

I added Watchtower to the Caddy Docker Compose setup that hosts one of my small side projects: acl-feed.madflex.de (blog post). The same Compose file is running a Caddy server (for this blog and other static pages) and my GoToSocial playground instance.

To make sure only acl-feed gets updated, I added the Watchtower enable label. The relevant part of the Compose file:

# (caddy + gotosocial parts omitted)
acl-feed:
  image: forgejo.tail07efb.ts.net/mfa/acl-feed:latest
  environment:
    PYTHONUNBUFFERED: 0
  restart: unless-stopped
  ports:
    - "127.0.0.1:8000:8000"
  labels:
    - "com.centurylinklabs.watchtower.enable=true"
watchtower:
  image: containrrr/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  command: --debug --http-api-update --label-enable
  environment:
    - WATCHTOWER_HTTP_API_TOKEN=some-token-here
  ports:
    - 8081:8080

The interesting parts are the enable label on acl-feed, which activates updates for only this one container (for now), and the command-line flags that enable the HTTP API and label-based filtering. I had to map the port to 8081 instead of 8080, because 8080 is already in use on the host. All ports except 80, 443, and 22 are firewalled anyway, so port 8081 is only accessible via Tailscale.
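
To verify which containers Watchtower actually watches, tailing its logs is enough (the --debug flag from the Compose file makes this nicely chatty):

docker compose logs -f watchtower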

This curl command triggers the update:

curl -H "Authorization: Bearer some-token-here" http://wachtel.tail07efb.ts.net:8081/v1/update

When there is nothing to update, the end of the log looks like this:

level=debug msg="Found a match"
level=debug msg="No pull needed. Skipping image."
level=debug msg="No new images found for /acl-feed"
level=info msg="Session done" Failed=0 Scanned=1 Updated=0 notify=no

And when there is a new image to pull, the log messages look like this:

level=info msg="Found new forgejo.tail07efb.ts.net/mfa/acl-feed:latest image (cf0e4574a984)"
level=info msg="Stopping /acl-feed (bf60581c76ef) with SIGTERM"
level=debug msg="Removing container bf60581c76ef"
level=info msg="Creating /acl-feed"
level=debug msg="Starting container /acl-feed (09aed1963e85)"
level=info msg="Session done" Failed=0 Scanned=1 Updated=1 notify=no

After successfully testing this, I added the curl call to the end of the container build in the Forgejo action. This automatically triggers a deployment via Watchtower whenever the container image was updated.

I described how I built the container for arm64 and amd64 (the latter using QEMU) in an older post.

The step I added to the Forgejo action for the curl call looks like this:

- name: update deployed version
  run: |
    curl -H "Authorization: Bearer ${{ secrets.WATCHTOWER_TOKEN }}" \
      http://wachtel.tail07efb.ts.net:8081/v1/update
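
One caveat: curl exits with 0 even when the server returns an HTTP error, so a wrong token would not fail the CI step. A variant with --fail (not what I currently run) would make that visible:

curl --fail -H "Authorization: Bearer ${{ secrets.WATCHTOWER_TOKEN }}" \
  http://wachtel.tail07efb.ts.net:8081/v1/update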

I set the WATCHTOWER_TOKEN secret to the token I chose in the Watchtower compose section (some-token-here in the example above).

Using Watchtower for my small side project feels good enough at the moment. And having automatic deployment on Git push without additional infrastructure (except, of course, Forgejo and a Forgejo Runner) is really nice.

Use Pi-hole with Tailscale

Now that Tailscale Services work (previous post) and I can easily give a Pi-hole service a name in my Tailscale network, I actually installed a Pi-hole. The initial setup was easy. The tricky part was IPv6.

Running Pi-hole

I am running Pi-hole together with all my other containers (immich, forgejo, homeassistant, ...) on an 8GB Raspberry Pi 4 compute module. The initial Docker Compose setup is based on the version in the Pi-hole documentation.

I want my Pi-hole to have IPv6, so I added a /etc/docker/daemon.json with this content:

{
  "ipv6": true
}
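
The Docker daemon needs a restart to pick up the changed daemon.json:

sudo systemctl restart docker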

And after a lot of experimentation with Docker networks, this is my working compose.yml:

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      # local port 80 is already used
      - "8314:80/tcp"
    environment:
      TZ: 'Europe/Berlin'
      FTLCONF_webserver_api_password: 'insert-a-secret-password-here'
      FTLCONF_dns_listeningMode: 'ALL'
    volumes:
      # persisting Pi-hole's databases and common configuration file
      - './etc-pihole:/etc/pihole'
    restart: unless-stopped
    # for IPv6
    cap_add:
      - NET_ADMIN
    networks:
      pihole-net:
        ipv6_address: fd42:cafe:dead:beef::53

networks:
  pihole-net:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd42:cafe:dead:beef::/64
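
To verify the container actually got the ULA address, docker inspect works as a quick sanity check:

docker inspect pihole --format '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{end}}'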

One difference from the default setup is that I changed the HTTP port to 8314 (8 + "pi"), because I already use port 80 for something else. Thanks to Tailscale Services the container gets an address: https://pihole.tail07efb.ts.net. I configured this only for port 443 and NOT for port 53, because Tailscale Services don't support UDP.
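
For reference, the serve command for the web interface should look like this, following the same pattern as the Immich example further down (assuming a svc:pihole Tailscale Service was created first):

sudo tailscale serve --service=svc:pihole --https=443 localhost:8314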

I could not reuse the IPv6 subnet of the default Docker bridge, so I used a new unique local address (ULA) subnet that is not used anywhere else on this host.

The IP addresses on the System Settings page of my Pi-hole look like this now:

img1

As far as I understand IPv6, the link-local fe80:: address will not work for routing IPv6 with Docker, so I ignored it. That it is shown here should be no issue.

ArchLinux setup

My workplace computer is running ArchLinux, and obviously I want to use Pi-hole there too.

I switched to using systemd-resolved as described in the Tailscale Knowledgebase. My workplace system is not using NetworkManager, but pure systemd-networkd instead. I still needed to restart it because of DHCP interference, the same way I would have for NetworkManager.

My /etc/systemd/resolved.conf.d/pi-hole.conf looks like this:

[Resolve]
DNS=192.168.0.10
# DNS=fd7a:115c:a1e0::3c01:3638   # pi4c.tail07efb.ts.net
Domains=~.

The first entry is the local IPv4 address of my container-hosting Raspberry Pi, and the commented-out DNS line is the Tailscale IPv6 address of the same host. I decided to use the local address of the Pi-hole host and not its Tailscale address, because I occasionally switch to the work Tailscale network. Looking at the generated /etc/resolv.conf, switching to another Tailscale network would remove the routing to the first entry (and therefore disable DNS filtering). However, since the system would use the next nameserver listed in the file (the one provided by my router), DNS resolution would still function properly.
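
After dropping in the config file, systemd-resolved needs a restart, and resolvectl can confirm the setup (doubleclick.net is just an example of a commonly blocklisted domain):

sudo systemctl restart systemd-resolved
resolvectl status                  # 192.168.0.10 should be listed as DNS server
resolvectl query doubleclick.net   # a blocklisted domain should resolve to 0.0.0.0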

On my travel notebook, which is also running ArchLinux, I used the Tailscale address of the container-hosting Raspberry Pi. Here I use NetworkManager, because it makes connecting to new Wi-Fi networks easier. The uplink at home shouldn't be a bottleneck here. I will monitor how annoying the latency to my home network is.

Raspberry Pi setup

My movie player is a Raspberry Pi 5 running Raspberry Pi OS. First I needed to install systemd-resolved:

sudo apt update && sudo apt install systemd-resolved

Then I edited /etc/systemd/resolved.conf and added my local DNS server:

[Resolve]
DNS=192.168.0.10
Domains=~.
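
As on the workplace machine, systemd-resolved needs a restart to pick up the change:

sudo systemctl restart systemd-resolved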

This PC will never move, so using the local IP is sensible.

Conclusion

Only a few percent of the queries are actually filtered, but I already use ad blockers in my browser, so that was expected. To check the IPv6 usage I looked at the query types after a few hours; about half the queries are IPv6 ones (AAAA):

img2

Of course I tested the Pi-hole with other DNS query types, like TXT, MX, and NS. Everything works, including IPv6. I would call this a success.
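
Such a test is easy to reproduce with dig pointed directly at the Pi-hole (example.com is just an arbitrary domain here):

dig @192.168.0.10 example.com AAAA +short
dig @192.168.0.10 example.com NS +short
dig @192.168.0.10 example.com TXT +short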

Use Tailscale Services to get more DNS entries

Having actual (Tailscale-based) DNS entries for local services is something I have been waiting for. In the past I added a sidecar Docker container for some services to get that (e.g. for Forgejo).

Now, with the release of the Tailscale Services beta, this should be solved. So I tried to give my local Immich container a DNS entry and some HTTPS.

I played around a bit until I figured out that I need to add the host system of the containers as a tagged server and not as a user-installed server. This is actually noted in the yellow box in the docs.

So the first step is: how do I turn the user-based machine into a tag-based one? First we need to add a tag in the Access Controls tab; I named mine tag:local-servers. Then I created an auth key via "add Linux server". The important part is the key. I didn't use the generated command, because the Raspberry Pi already has Tailscale installed. To change the machine from user-based to auth-key-based, I ran this on the command line of my Raspberry Pi:

sudo tailscale logout
sudo tailscale up --auth-key=tskey-auth-XXXXXX --reset --advertise-tags=tag:local-servers

The --reset was needed for me because I had exposed this machine as an exit node sometime in the past; it may not be needed for other machines. The server is now back in the tailnet, but with a tag below its name in the machines list. 🎉

Now for the actual service. First I created a tag named local-services for all the services I want to add. My current tags look like this:

img1

One is for the local servers and one for the local services. I don't need any grants, because the Tailscale network has only one user and everything is routed to everywhere.

With all prerequisites in place, I created an immich service with tcp:443 and the local-services tag attached. On the CLI of the Raspberry Pi that is running the Immich container, this command adds the host to the service:

sudo tailscale serve --service=svc:immich --https=443 localhost:2283
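
Afterwards, the serve configuration on the host can be checked with:

sudo tailscale serve status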

I didn't add an auto-approver, so I have to approve manually:

img2

The first access takes a bit of time, probably for the Let's Encrypt dance. But then it just works:

img3

I added a few other services, but I will probably keep the sidecar for Forgejo for now.

This Tailscale Services feature can do way more than I need. Thanks Tailscale! 🚀

Addendum

Tailscale Services needs --accept-routes on other clients to work.
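
On a client that was brought up without it, this can be toggled without re-running tailscale up:

sudo tailscale set --accept-routes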

And Home Assistant needed a few lines in its configuration.yaml for Services to work; otherwise you get a 400: Bad Request. This worked for me:

http:
  use_x_forwarded_for: true
  server_host: 0.0.0.0
  trusted_proxies:
    - 127.0.0.1