TL;DR. A Synology NAS can host Vercel-style per-PR preview deployments, but every layer has a sharp edge. Use Cloudflare Tunnel + a wildcard hostname [32] for ingress (sidesteps NAT, CGNAT and wildcard certs in one move), GHCR plus SSH-driven docker compose pull for delivery [38] (Container Manager’s UI can’t auth to GHCR), and a thin shell wrapper over Docker Compose as the orchestrator — Coolify’s installer doesn’t run on DSM [58], Dokploy breaks on DSM’s old engine [55]. If your previews must be reachable by external reviewers on a public repo, a $5/mo VPS or Cloudflare Pages free tier is the better call — reserve the NAS for storage and internal-only previews [72].
What “preview deployment” means
The pattern is uniform across PaaS vendors: a webhook fires on PR open or push, the platform builds the commit, and an ephemeral environment is published at a deterministic per-PR URL. Netlify popularised the term in 2016 [10] with deploy-preview-{n}--{site}.netlify.app [1]. Vercel [2], Cloudflare Pages [3], Render [4] and Fly.io [5] copy the same loop, with Render going further and cloning whole-stack databases per PR. Self-hosted clones — Coolify ⭐ 54k (Apr 2026) and Dokploy ⭐ 34k (Apr 2026) — reproduce the invariants: one env per branch, unique URL, automatic teardown, status comment on the PR [6] [7]. The point is shifting review left: a designer or PM clicks a URL, exercises the running build, and surfaces integration bugs while they are still cheap [8] [9].
DSM 7.2 building blocks
DSM 7.2 reshapes Synology around five primitives that, together, make this realistic.
| Primitive | Purpose for previews | Sharp edge |
|---|---|---|
| Container Manager (replaces Docker pkg, 2023) | Compose stacks per PR via the Project tab [11] [48] | Docker engine 24.0.2, unsupported by upstream since June 2024 [64]; container ports/volumes locked post-creation, duplicate-to-modify [12]; x86_64 only — ARMv8 needs the 007revad community port ⭐ 208 [13] [14] |
| Reverse Proxy (Login Portal → Advanced) | Maps host:port → container [15] with one-click WebSocket headers [16] | Subdomain-only, no path routing, no wildcard upstream [17] — wildcard fan-out needs hand-rolled nginx |
| Web Station | Per-portal Nginx/Apache, HTTP/2, HSTS [18] [19] | DSM 7.2 silently dropped Apache Options ExecCGI/FollowSymLinks from vhosts [20] |
| SSH + sudo | CI deploy hook | Default home perms (777) silently break key auth — needs 755 / .ssh 755 / authorized_keys 644 [21]; only administrators may SSH; sudo NOPASSWD is a separate explicit step [22] |
| Task Scheduler | Cron hook for cleanup/teardown [23] [24] | Runs as root by default — single source of truth for periodic image-prune |
| Btrfs snapshots | Cheap per-preview rollback [25] | Ext4-only J-series models lose this lever entirely [26] |
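The SSH sharp edge in the table comes down to three chmods. A minimal sketch, assuming a dedicated deploy user (the home path shown is hypothetical, not from the source):

```shell
#!/bin/sh
# DSM creates home directories with 777 permissions, which makes sshd
# silently reject public-key auth. These are the values the table above
# calls for: home 755, .ssh 755, authorized_keys 644 [21].
fix_ssh_perms() {
  home="$1"   # e.g. /var/services/homes/deploy  (path is an assumption)
  chmod 755 "$home"
  chmod 755 "$home/.ssh"
  chmod 644 "$home/.ssh/authorized_keys"
}

# Run on the NAS as the deploy user:
#   fix_ssh_perms "$HOME"
```

Remember the other half of the table row: the user must be in the administrators group to SSH at all, and sudo NOPASSWD is a separate, explicit step [22].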
DNS + TLS — the hard part
DSM’s bundled Let’s Encrypt client only speaks HTTP-01, so it cannot issue wildcard certs and requires port 80 open to the internet [27]. Three working escapes:
| Approach | Setup | Trade-off |
|---|---|---|
| acme.sh DNS-01 + DSM deploy hook | acme.sh --issue --dns dns_cf -d example.com -d "*.example.com", then --deploy-hook synology_dsm with SYNO_Username/Password/Scheme/Port/Certificate/Create env vars [28] [29] | First deploy needs --insecure (DSM has no trusted cert yet) [29]; CSRF must be enabled in Control Panel > Security or the hook 403s silently [35] |
| Cloudflare Tunnel | cloudflared ingress with hostname: "*.preview.example.com" routing every subdomain to one local origin [32] | TLS terminated at Cloudflare’s edge — no port-forward, no wildcard cert problem; works with proxied wildcard records on all CF plans [31] |
| Caddy + On-Demand TLS | Caddy container provisions per-host certs at first TLS handshake [33] | Must wire an ask endpoint to prevent abuse; Let’s Encrypt rate-limits 50 certs/registered domain/week [30] — fine for tens of open PRs, not hundreds |
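The acme.sh row of the table, expanded into a runnable sketch. Every credential below is a placeholder, and the DSM account name is hypothetical; only the flag and variable names come from the source [28] [29]:

```shell
#!/bin/sh
# acme.sh DNS-01 issue + DSM deploy hook. Placeholders throughout.
export CF_Token="REPLACE_ME"        # Cloudflare API token for dns_cf
export SYNO_Username="certadmin"    # a DSM admin account (hypothetical name)
export SYNO_Password="REPLACE_ME"
export SYNO_Scheme="https"
export SYNO_Port="5001"
export SYNO_Certificate=""          # empty string = replace the default cert
export SYNO_Create=1

issue_and_deploy() {
  d="$1"
  # DNS-01 can issue the wildcard that DSM's built-in client cannot [27].
  ./acme.sh --issue --dns dns_cf -d "$d" -d "*.$d"
  # --insecure is needed on the FIRST deploy only: DSM is still serving its
  # self-signed certificate, so TLS verification would otherwise fail [29].
  # Enable CSRF in Control Panel > Security first, or the hook 403s [35].
  ./acme.sh --deploy --deploy-hook synology_dsm --insecure -d "$d"
}

# issue_and_deploy example.com
```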
Cloudflare Tunnel is the cleanest path — it eliminates NAT/CGNAT issues, port-forwarding, and the wildcard-cert problem in one configuration file. The nschaper/homelab-domain-hosting ⭐ 0 reference repo documents the exact wiring for Synology behind consumer ISPs (Xfinity, etc.) [68]. For teams that prefer to avoid the shell client, a Python wrapper ⭐ 7 issues wildcard certs via DNS-01 and imports them through the DSM API as a scheduled task [34].
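What that one configuration file amounts to, sketched here as a script that writes it. The tunnel UUID, domain, and local port 8080 are placeholders; the single-origin fan-out (one nginx or Caddy switching on Host header) is the pattern the reference repo describes [68]:

```shell
#!/bin/sh
# Writes a minimal cloudflared config with one wildcard ingress rule.
# <TUNNEL-UUID>, preview.example.com, and port 8080 are placeholders.
write_tunnel_config() {
  dir="$1"
  mkdir -p "$dir"
  cat > "$dir/config.yml" <<'EOF'
tunnel: <TUNNEL-UUID>
credentials-file: /home/cloudflared/.cloudflared/<TUNNEL-UUID>.json
ingress:
  # Every pr-N.preview.example.com subdomain lands on ONE local origin;
  # a reverse proxy there fans out on the Host header.
  - hostname: "*.preview.example.com"
    service: http://localhost:8080
  # cloudflared requires a final catch-all rule.
  - service: http_status:404
EOF
}

# write_tunnel_config /volume1/docker/cloudflared
```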
CI/CD: GitHub Actions → Synology
Four delivery patterns, ordered by recommendation strength.
| Pattern | What CI does | Network needs | Verdict |
|---|---|---|---|
| GHCR push + SSH docker compose pull | docker build && push ghcr.io/org/img:pr-N; ssh to NAS; docker compose -p pr-N pull && up -d | SSH reachable (Tailscale node, CF Tunnel, or NAT) | ✓ Best fit. Container Manager’s UI cannot auth to GHCR (‘Registry returned bad result’), so the pull must run via the underlying Docker CLI [38] — but the daemon itself copes fine. Per-PR isolation is the compose project name. |
| rsync over SSH | appleboy/ssh-action ⭐ 6.1k (Apr 2026) for arbitrary commands [36]; easingthemes/ssh-deploy ⭐ 1.3k (Apr 2026) for rsync [37] | Same | ✓ Fine for static-build previews (no container build). ⚠ ssh-deploy private key must have no passphrase because rsync’s ssh cannot prompt [37]. |
| Self-hosted GitHub runner on DSM | Runner pulls jobs and builds in-place; symlinks /var/run/docker.sock into /volume1/docker [39]; a real-world write-up reports it works but only at org-runner level [63] | Outbound only | ⚠ Private repos only. GitHub explicitly states self-hosted runners “should almost never be used for public repositories” — any forker can run a malicious workflow that exfiltrates secrets and the GITHUB_TOKEN [40]. The poutine pr_runs_on_self_hosted rule flags this exact pattern [41]. |
| Webhook / agent (Portainer, Watchtower) | CI POSTs to /api/stacks/webhooks/<uuid> [45] with rollout-restart toggle [47], or pushes the image tag and lets Watchtower poll GHCR every 300 s [46] | Webhook URL reachable (CF Tunnel + Zero-Trust service tokens) [44] | ⚠ Portainer stack webhooks are Business-Edition-only [45] [50] [52] — a community sidecar ⭐ 3 fills the CE gap [51]. Watchtower’s 5-min poll loses the per-PR lifecycle signal (no PR-close → teardown). |
For NAT traversal, the official tailscale/github-action@v4 ⭐ 31k (Apr 2026) [69] brings up an ephemeral OAuth-tagged tailnet node inside the runner [42] and connects to DSM’s built-in OpenSSH (Tailscale SSH itself is unsupported on Synology) [43]. Cloudflare Tunnel + Zero-Trust service tokens is the alternative — Shape93’s 2024 walk-through has the curl -H "CF-Access-Client-Id: $X" -H "CF-Access-Client-Secret: $Y" plumbing [44].
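The recommended pattern, condensed to the commands CI actually runs. The image name, the `nas` SSH alias, and the compose-file path are assumptions; the `docker login` on the NAS side is there because Container Manager’s UI cannot auth to GHCR but the underlying CLI can [38]:

```shell
#!/bin/sh
set -eu
# Hypothetical names throughout: ghcr.io/org/app, SSH host alias "nas",
# compose file under /volume1/docker/previews/.
image_tag() { printf 'ghcr.io/org/app:pr-%s' "$1"; }

deploy_pr() {
  pr="$1"
  img="$(image_tag "$pr")"
  docker build -t "$img" .
  docker push "$img"
  # The compose project name (-p pr-N) IS the per-PR isolation boundary.
  ssh nas "echo \"\$GHCR_TOKEN\" | docker login ghcr.io -u ci --password-stdin && \
    IMAGE='$img' docker compose -p 'pr-$pr' -f /volume1/docker/previews/compose.yml pull && \
    IMAGE='$img' docker compose -p 'pr-$pr' -f /volume1/docker/previews/compose.yml up -d"
}

# In the workflow step: deploy_pr "$PR_NUMBER"
```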
Orchestration: which one fits
Stratified by how far you stray from DSM-native territory.
| Option | ⭐ Stars (Apr 2026) | Preview-deploy story | DSM compatibility |
|---|---|---|---|
| Container Manager + Compose | n/a (built-in) | Roll your own: docker compose -p pr-N per PR over SSH [48]. Thinnest, most reliable path. | ✓ Native. Third-party CLIs like syno-docker ⭐ 2 [49] paper over the GUI’s opinions. |
| Portainer CE | n/a (community-edition docs [53]) | Stacks API + webhooks are the right API surface for “redeploy this PR” [45] [47] | ✓ Installs as a container [53], but stack webhooks are BE-only [50] [52]; the aklinker1/portainer-stack-webhook ⭐ 3 sidecar [51] is required for free auto-update. |
| Dokploy | ⭐ 34k [54] | First-class GitHub-only PR previews with PR-comment URLs and label-based opt-in [7] | ✗ Swarm-centric design [56] collides with DSM’s older Docker daemon — its client speaks API version 1.53, too new for the 1.43 maximum a DS920+ offers, broken since v0.28 [55]. |
| Coolify v4 | ⭐ 54k [57] | Cleanest preview UX: . template + wildcard DNS [6] | ✗ Installer fails on DSM with grep: /etc/os-release: No such file or directory; the maintainer-blessed workaround is an Ubuntu VM under VMM [58], at which point you’ve left DSM. |
| k3s | ⭐ 33k [60] | Argo CD ApplicationSet PR generator, one Application per open PR [61] | ✗ Bare metal is heroic: DSM ships without the overlay snapshotter kernel module, so containerd needs the native snapshotter [59], and even after the kernel-module hacks you still hit cgroup failures on small units like the DS220+ [60]. The realistic homelab pattern is k3s in a VMM Linux VM with the Synology CSI plugin handling storage [62]. |
Pick: Container Manager + Compose + a bash deploy script. Portainer CE if you want a stacks UI and accept the webhook sidecar [51]. Everything else either does not run on DSM or runs only via “give up on DSM and use a Linux VM” — at which point the NAS is not where the work happens.
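The teardown half of that bash deploy script, sketched under the same assumptions as before (compose-file path and image name are hypothetical). CI calls it over SSH on the pull_request closed event, which is exactly the lifecycle signal the Watchtower pattern loses:

```shell
#!/bin/sh
# Teardown for one preview: stop the stack, drop its volumes,
# and delete the per-PR image so disk usage does not creep.
COMPOSE_FILE="/volume1/docker/previews/compose.yml"   # path is an assumption

teardown_pr() {
  pr="$1"
  docker compose -p "pr-$pr" -f "$COMPOSE_FILE" down --volumes --remove-orphans
  docker image rm "ghcr.io/org/app:pr-$pr" 2>/dev/null || true  # hypothetical image name
}

# GitHub Actions (on: pull_request, types: [closed]) runs:
#   ssh nas "teardown_pr $PR_NUMBER"
```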
Nobody has published an end-to-end recipe
A targeted search across Reddit (r/homelab, r/synology, r/selfhosted), Hacker News, dev.to, Medium, and GitHub turns up no end-to-end public write-up of a working per-PR preview pipeline on a Synology NAS. The Synology + GitHub-Actions corpus stops well short of the goal: the most-linked tutorial pushes a single container over Tailscale to a webhook receiver with no notion of PR number, dynamic subdomain, or teardown [73]; Damir’s Corner stops at runner registration [39]; the 2025 perspikapps “CI/CD powerhouse” piece covers builds and private deploys but never per-PR ephemeral environments [63]. The closest off-the-shelf GitHub Action — pullpreview/action ⭐ 196 — supports exactly two providers (Lightsail and Hetzner) and explicitly does not target a generic SSH host [74] [75] [78]. When developers ask publicly how to recreate Vercel-style PR previews on infrastructure they own, the answers default to vendor PaaS — Static Web Apps, deployment slots — not to a self-hosted recipe [76]. Even purpose-built PR-preview projects like neilhtennek/PreviewOps ⭐ 46 frame the gap as “every team wants this, nobody wants to build it” and ship as managed services [77].
→ Anyone building this on a Synology in 2026 is doing original integration work. Plan for it.
Real-world gotchas
The closest things to a partial Synology preview-deployment write-up are nschaper/homelab-domain-hosting ⭐ 0 (Cloudflare Tunnel + multiple subdomains, no PR lifecycle) [68] and the perspikapps self-hosted-runner post [63]. What turns up in spades is operational pain.
- Engine drift. Container Manager’s 2025 release ships Docker 24.0.2, unsupported by upstream since June 2024; users report permission and resource-monitoring bugs and recommend migrating to Portainer/Dockge/Dockhand instead [64].
- DSM updates wipe state. The DSM 7.3 → 7.3.1 update silently deletes downloaded kernel modules; telnetdoogie/synology-docker ⭐ 150 publishes the iptables-modules-reinstall recipe [65]. nginx.conf is regenerated from .mustache templates on every update — hand-edited vhosts are wiped unless they live in /usr/local/etc/nginx/sites-enabled (invisible to the UI, but survives) [67].
- BTRFS snapshots eat the volume. One operator hit 4.7 TB of accumulated snapshots in @docker on a 10.5 TB volume, and aggressive cleanup broke every container on the box [66]. Quota the snapshots, do not ignore them.
- Tailscale Funnel ≠ wildcard ingress. No wildcard support, one domain per device [69] — if you are not using Cloudflare Tunnel, reverse-proxy a single funnel hostname through DSM’s reverse proxy.
- Disk hygiene is not optional. A docker system prune script wired into Task Scheduler is canon; mariushosting publishes the standard recipe [70], but it does not quantify reclamation, so monitor.
- Plex co-tenancy got worse in 2025. Synology removed the i915 graphics driver from DSM, “effectively disabling hardware transcoding on the Intel Celeron J4125 CPU” — both H.264 and H.265 transcoding are blocked at the kernel-driver level, shifting load onto the CPU [71]. If Plex shares the box with previews, Plex now wins the CPU race during family movie night.
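A sketch of that Task Scheduler job, with the quantification the standard recipe skips. The 48 h filter is an assumption; tune it to your PR churn so layers for open previews stay warm:

```shell
#!/bin/sh
# Nightly Task Scheduler job (Task Scheduler runs as root by default).
# Reports disk usage before and after so reclamation is actually measured.
prune_previews() {
  echo "== before =="
  docker system df
  # -a removes all unreferenced images, not just dangling ones; the
  # until filter (an assumption: 48 h) spares recently pulled layers.
  docker system prune -af --filter "until=48h"
  docker volume prune -f
  echo "== after =="
  docker system df
}

# Task Scheduler entry: prune_previews
```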
When NOT to do this on a NAS
The honest answer: when previews must be reachable by external reviewers, on a public repo, with reliable uptime — don’t. A $5/mo VPS or Cloudflare Pages free tier is structurally better suited [72]. Reserve the Synology for storage, backup, and internal-only previews where the team is on the tailnet. The simplehomelab decision is blunt about the split: NAS for storage, mini PC for Docker, VPS for things that have to live on the public internet [72].