Keeping Docker containers updated is the kind of chore you automate once
and forget about — until a container silently runs a four-month-old image
with five CVEs because you forgot to `docker compose pull && up -d`.
Watchtower solves this. It watches your running containers, checks the registry for new images, and re-creates containers whose tag now points at a newer build — all on a schedule. But a naive “update everything” setup will break your database container and nuke your uptime.
This post covers a production-grade Watchtower deployment for a homelab: selective update rules, Telegram push notifications, manual rollback, and patterns for zero-downtime containers.
Architecture Overview
```
┌──────────────────────────────────────────────────────┐
│ Docker Host                                          │
│                                                      │
│  ┌─────────────┐        ┌──────────────────────┐     │
│  │ Watchtower  │───────▶│  Container Registry  │     │
│  │ poll every  │        │  (Docker Hub / GHCR) │     │
│  │ 6h          │        └──────────────────────┘     │
│  └──────┬──────┘                                     │
│         │ scans containers                           │
│         ▼                                            │
│  ┌───────────────────┐      ┌─────────────────┐      │
│  │ traefik (auto)    │      │ postgres (skip) │      │
│  │ frigate (auto)    │      │ valkey (skip)   │      │
│  │ monitoring (auto) │      └─────────────────┘      │
│  └───────────────────┘                               │
│                                                      │
│  Push: Telegram notification on each update          │
│  Rollback: docker compose pull <tag> && up -d        │
└──────────────────────────────────────────────────────┘
```
Why Watchtower (Not DIY Cron)
You could script it yourself with cron:
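Something like this (a sketch; stack paths are illustrative):

```bash
#!/usr/bin/env bash
# /usr/local/bin/update-stacks.sh
# run from cron, e.g.: 0 3 * * * root /usr/local/bin/update-stacks.sh
for dir in /opt/stacks/*/; do
  cd "$dir" || continue
  docker compose pull
  docker compose up -d --force-recreate   # blunt: recreates even unchanged containers
done
```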
But this has problems:
- Updates all services, including databases (risky)
- No notification when something changes
- No way to pin specific images
- No scheduling beyond basic cron
- Restarts every container even if no change
Watchtower gives you per-container control, scheduling, notifications, and only restarts containers whose images actually changed.
1. Deploy Watchtower
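A baseline compose file looks roughly like this (the timezone is a placeholder):

```yaml
services:
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TZ=Europe/Berlin                  # placeholder — set your own timezone
      - WATCHTOWER_POLL_INTERVAL=21600    # check every 6 hours
      - WATCHTOWER_CLEANUP=true           # remove superseded images after updating
```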
That’s the baseline. Deploy with:
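```bash
docker compose up -d
```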
By default, Watchtower polls every 24 hours. I set
`WATCHTOWER_POLL_INTERVAL=21600` (6 hours) for a balance between freshness
and Docker Hub rate limits.
2. Selective Update Control — Skip Databases
The most important config: tell Watchtower not to update stateful containers. A database restart on an image update is a database outage — you want to do that manually with proper migration checks.
Mark containers to skip with a label in their own compose files:
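```yaml
# e.g. in the database stack's docker-compose.yml (image tag illustrative)
services:
  postgres:
    image: postgres:16
    labels:
      - com.centurylinklabs.watchtower.enable=false
```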
Check which containers are excluded:
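```bash
docker ps --filter "label=com.centurylinklabs.watchtower.enable=false" \
  --format "table {{.Names}}\t{{.Image}}"
```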
Containers without the label are monitored by default, so only the ones you
explicitly mark `false` get skipped. (If you'd rather opt in per container,
Watchtower's `WATCHTOWER_LABEL_ENABLE` mode inverts this.)
3. Notifications — Telegram on Every Update
Watchtower supports a dozen notification backends. Telegram is the most practical for a homelab — push notifications to your phone with what changed.
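Watchtower uses shoutrrr URLs for notifications; roughly (bot token and chat ID are placeholders):

```yaml
environment:
  - WATCHTOWER_NOTIFICATIONS=shoutrrr
  - WATCHTOWER_NOTIFICATION_URL=telegram://${TELEGRAM_BOT_TOKEN}@telegram?chats=${TELEGRAM_CHAT_ID}
```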
Restart Watchtower and trigger a manual check to test:
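Something like this, reusing the placeholder credentials:

```bash
# pick up the new env vars
docker compose up -d watchtower

# optional one-shot check (note: a second watchtower instance may clean up
# the running one — re-run `docker compose up -d watchtower` afterwards)
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=shoutrrr \
  -e "WATCHTOWER_NOTIFICATION_URL=telegram://${TELEGRAM_BOT_TOKEN}@telegram?chats=${TELEGRAM_CHAT_ID}" \
  containrrr/watchtower --run-once
```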
Tail `docker logs -f watchtower` to confirm the check ran and the notification fired.
Expected Telegram message:
```
🐳 srv1 — Watchtower
Updated: traefik:latest (abcdef12345)
Updated: frigate:stable (12345abcde)
```
No news between updates means nothing changed — exactly what you want.
4. Schedule Instead of Poll Interval
If you prefer exact scheduling (e.g., daily at 3 AM), use cron syntax over the generic poll interval:
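```yaml
environment:
  - TZ=Europe/Berlin                  # placeholder; the schedule runs in this timezone
  - WATCHTOWER_SCHEDULE=0 0 3 * * *   # 6-field cron (leading seconds field)
```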
`WATCHTOWER_SCHEDULE=0 0 3 * * *` = daily at 3:00 AM local time
(requires `TZ` to be set correctly; note the leading seconds field). Remove
`WATCHTOWER_POLL_INTERVAL` if you use `WATCHTOWER_SCHEDULE` — they conflict,
and Watchtower refuses to start with both set.
Note: containers with `restart: always` or `unless-stopped` come back
cleanly after the 3 AM update. If a container needs more than the default
10 s to stop gracefully, raise the timeout:
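```yaml
environment:
  - WATCHTOWER_TIMEOUT=60s   # grace period before the old container is force-killed
```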
5. Rolling Back a Bad Update
Watchtower replaces the running container, but it keeps no reference to the previous image version, so there's no built-in rollback. This is the one gap you need to cover manually.
When a bad update happens:
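Roughly (the version number is illustrative):

```bash
# 1. Pin the last-known-good version in the service's compose file:
#      image: traefik:v3.2.1      # was: traefik:latest
# 2. Pull and redeploy just that service:
docker compose pull traefik
docker compose up -d traefik
```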
Prevent it: Pin major/minor versions in compose files so Watchtower only handles patch updates:
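```yaml
services:
  traefik:
    image: traefik:v3.2   # floating minor tag: follows v3.2.x patch releases
```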
Watchtower will update `traefik:v3.2` when a new v3.2.x patch is
published, but won’t jump you to v3.3 automatically. This is the
safest pattern for datastore-adjacent containers.
6. Full Example — Selective Updates with Notifications
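A reconstructed full sketch (timezone and Telegram credentials are placeholders):

```yaml
services:
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TZ=Europe/Berlin
      - WATCHTOWER_SCHEDULE=0 0 3 * * *   # daily at 03:00
      - WATCHTOWER_CLEANUP=true           # prune superseded images
      - WATCHTOWER_TIMEOUT=60s            # generous stop grace period
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      - WATCHTOWER_NOTIFICATION_URL=telegram://${TELEGRAM_BOT_TOKEN}@telegram?chats=${TELEGRAM_CHAT_ID}

# stateful services opt out in their own stacks, e.g.:
#   labels:
#     - com.centurylinklabs.watchtower.enable=false
```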
Deploy:
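```bash
docker compose up -d
```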
7. Excluding Containers — What to Skip
| Container | Auto-update? | Reason |
|---|---|---|
| Traefik | ✅ Yes | Proxy downtime = brief blip, stateless |
| Frigate | ✅ Yes | Pulls new model weights, fast restart |
| Grafana / Prometheus | ⚠️ Cautiously | Pin major version (`image: grafana/grafana:11.3`) |
| Loki | ⚠️ Cautiously | Schema version matters — pin major |
| Postgres / MySQL | ❌ No | Manual migration on version bumps |
| Valkey / Redis | ❌ No | Breaking config changes between majors |
| Watchtower itself | ✅ Yes | Self-updating is fine |
| PBS client / restic | ✅ Yes | Stateless, always pull latest |
The rule: Stateless containers auto-update. Stateful containers with persistent data — update manually after reviewing release notes.
8. Zero-Downtime Considerations
Watchtower pulls the new image, stops the container, and starts a fresh one from it. For a single-replica setup, there’s brief downtime (~5-15 seconds).
If you need true zero-downtime updates for a web service:
Option A: Docker Compose scale + health check
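A sketch (service name, image, and health endpoint are illustrative; the health check assumes `curl` is in the image):

```yaml
services:
  web:
    image: ghcr.io/example/web:latest   # illustrative
    deploy:
      replicas: 2                       # or: docker compose up -d --scale web=2
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:8080/health"]
      interval: 10s
      timeout: 3s
      retries: 3
```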
Add this to Watchtower to enable rolling restart:
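```yaml
environment:
  - WATCHTOWER_ROLLING_RESTART=true
```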
Watchtower will stop one replica, pull, start, wait for the health check to pass, then move to the next.
Option B: put a load balancer in front (Traefik with multiple backends). For a homelab, Option A (Watchtower’s rolling restart plus a health check) is the simpler solution.
9. Lifecycle Hooks — Run Commands Before/After Updates
Watchtower supports pre-update and post-update commands via lifecycle labels (hooks must be enabled on the Watchtower container):
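Roughly (script paths are illustrative; the hooks execute inside the target container):

```yaml
# on the watchtower service:
environment:
  - WATCHTOWER_LIFECYCLE_HOOKS=true

# on the target container:
labels:
  - com.centurylinklabs.watchtower.lifecycle.pre-update=/scripts/pre-update.sh
  - com.centurylinklabs.watchtower.lifecycle.post-update=/scripts/post-update.sh
```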
Useful for services that need graceful shutdown:
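For example, a hypothetical drain hook (the endpoint is illustrative; use whatever graceful-shutdown command your service provides):

```sh
#!/bin/sh
# /scripts/pre-update.sh — ask the app to drain connections before the stop
curl -fsS -X POST http://localhost:8080/admin/drain   # illustrative endpoint
sleep 5
```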
10. Monitoring Watchtower
Add a Prometheus blackbox probe or simply monitor the watchtower
container’s last-seen timestamp:
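```bash
# is the container running, and when did it last (re)start?
docker inspect -f '{{.State.Status}} (started {{.State.StartedAt}})' watchtower
```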
Better: expose Watchtower’s metrics endpoint. Publish a port and enable the HTTP API metrics:
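```yaml
ports:
  - "8080:8080"
environment:
  - WATCHTOWER_HTTP_API_METRICS=true
  - WATCHTOWER_HTTP_API_TOKEN=changeme   # placeholder; use a real secret
```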
Then add to Prometheus:
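```yaml
scrape_configs:
  - job_name: "watchtower"
    metrics_path: /v1/metrics
    authorization:
      credentials: changeme   # must match WATCHTOWER_HTTP_API_TOKEN
    static_configs:
      - targets: ["watchtower:8080"]   # assumes Prometheus can reach it by name
```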
11. Alternative: Diun (Notification-Only)
If you don’t want Watchtower touching your containers at all (just notifying you about available updates), use Diun instead:
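A minimal sketch (schedule, timezone, and Telegram credentials are placeholders):

```yaml
services:
  diun:
    image: crazymax/diun:latest
    restart: unless-stopped
    volumes:
      - ./data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TZ=Europe/Berlin
      - DIUN_WATCH_SCHEDULE=0 */6 * * *              # check every 6 hours
      - DIUN_PROVIDERS_DOCKER=true
      - DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true    # watch all containers, not just labeled ones
      - DIUN_NOTIF_TELEGRAM_TOKEN=${TELEGRAM_BOT_TOKEN}
      - DIUN_NOTIF_TELEGRAM_CHATIDS=${TELEGRAM_CHAT_ID}
```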
Diun sends a Telegram message when a new image is available but does nothing else. You decide when to pull the trigger. This is the conservative approach.
Summary
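The key settings from this post, at a glance:

| Concern | Setting |
|---|---|
| Update cadence | `WATCHTOWER_SCHEDULE=0 0 3 * * *` (or `WATCHTOWER_POLL_INTERVAL=21600`) |
| Skip stateful containers | label `com.centurylinklabs.watchtower.enable=false` |
| Notifications | `WATCHTOWER_NOTIFICATIONS=shoutrrr` + Telegram URL |
| Image cleanup | `WATCHTOWER_CLEANUP=true` |
| Version safety | pin floating minor tags like `traefik:v3.2` |
| Rollback | pin the last-good tag, then `docker compose pull && up -d` |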
Watchtower isn’t optional once you have more than five containers. A
3 AM daily check with Telegram push means you wake up to a notification
like “Updated traefik, frigate, prometheus” — or nothing at all. Manual
rollback is a `docker compose pull <version>` away. Skip your database
containers, pin major versions on stateful workloads, and let the rest
update themselves.