Recommended shape. Slack Events API → ack-and-queue worker →
repository_dispatch into a private GitHub repo → anthropics/claude-code-action@v1 job (Claude Code by Anthropic) authenticated by a GitHub App installation token → pull_request.opened triggers a deploy job that SSH’s into the Synology and runs docker compose -p pr-N pull && up -d against an image pulled from GHCR → Cloudflare Tunnel publishes pr-N.preview.sangu.be via a wildcard hostname. State is one row per Slack thread_ts in Redis or a Cloudflare Durable Object. No durable-execution engine, no Coolify, no k3s: every component pulled in must justify its weight against this minimum.
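A minimal sketch of the first hop, assuming the worker holds a repo-scoped token in GH_TOKEN; OWNER/REPO, the event_type, and the extra payload fields are placeholders, while thread_ts comes straight off the Slack event:

```bash
# Ack the Slack event first (Slack's 3-second deadline), then queue this dispatch.
payload=$(jq -n \
  --arg ts "$THREAD_TS" --arg prompt "$PROMPT" --arg ch "$CHANNEL_ID" \
  '{event_type: "slack-feature-request",
    client_payload: {thread_ts: $ts, prompt: $prompt, channel: $ch}}')
curl -s -X POST \
  -H "Authorization: Bearer ${GH_TOKEN}" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/dispatches" \
  -d "$payload"
```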
The same auth footgun fires twice
Both arms of the pipeline have a “default token quietly breaks the next stage” trap. On the Slack side, Claude Code’s MCP client requires OAuth 2.0 Dynamic Client Registration, which Slack’s first-party MCP server does not ship; admins must hand-roll an internal Slack app and pass --client-id/--client-secret (slack-claude-code-remote-control/, source [github.com/anthropics/claude-code/issues/30564]). On the GitHub side, the default GITHUB_TOKEN opens the PR fine, but events created with that token never trigger other workflows, so the preview-deploy workflow silently never fires (branch-and-pr-automation-from-a-remote-trigger/, source [peter-evans/create-pull-request docs]). Mint the token via actions/create-github-app-token for every push that should trigger anything downstream.
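Inside Actions, actions/create-github-app-token is the one-step fix; the worker outside Actions needs the same installation token minted by hand. A sketch of that exchange, assuming APP_ID, INSTALLATION_ID, and the App’s private key at app-key.pem:

```bash
# Build a short-lived app JWT (RS256), then trade it for an installation token.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
now=$(date +%s)
header=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
claims=$(printf '{"iat":%d,"exp":%d,"iss":"%s"}' \
  "$((now - 60))" "$((now + 540))" "$APP_ID" | b64url)
sig=$(printf '%s.%s' "$header" "$claims" \
  | openssl dgst -sha256 -sign app-key.pem | b64url)
# Installation tokens expire after one hour; mint one per run.
GH_TOKEN=$(curl -s -X POST \
  -H "Authorization: Bearer ${header}.${claims}.${sig}" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/app/installations/${INSTALLATION_ID}/access_tokens" \
  | jq -r .token)
```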
thread_ts is the only stable correlation key
Before a PR exists, nothing else identifies the work. Capture thread_ts once at the Slack ingress, pass it through every repository_dispatch payload, and stash slack:thread:{thread_ts} → {run_id, branch, pr_number, preview_url, status} in one small KV (orchestration-and-state/). The same key carries through to the preview subdomain (pr-N.preview.sangu.be once the PR exists, feature-{slug-of-thread} before that). This is also how the “go” message lands: the Slack button’s interaction handler reads the row and emits the second repository_dispatch to promote.
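A sketch of the row’s lifecycle with Redis standing in for the KV (a Durable Object would hold the same map); field names follow the schema above, and SLUG, RUN_ID, and the promote event_type are placeholders:

```bash
# Written at each stage transition; one hash per Slack thread.
redis-cli HSET "slack:thread:${THREAD_TS}" \
  run_id "$RUN_ID" branch "feature-${SLUG}" \
  pr_number "$PR_NUMBER" \
  preview_url "https://pr-${PR_NUMBER}.preview.sangu.be" \
  status "preview-up"

# The "go" button's interaction handler: read the row, fire the promote dispatch.
pr=$(redis-cli HGET "slack:thread:${THREAD_TS}" pr_number)
jq -n --arg ts "$THREAD_TS" --arg pr "$pr" \
  '{event_type: "promote", client_payload: {thread_ts: $ts, pr_number: $pr}}' |
curl -s -X POST -H "Authorization: Bearer ${GH_TOKEN}" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/dispatches" -d @-
```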
“Clean context between features” is a cwd contract
The user’s instinct to clear context per feature aligns with a Claude Code gotcha worth pinning down: headless transcripts are keyed on <encoded-cwd>, so a worker that resumes from a different directory than the original run silently starts a fresh session (slack-claude-code-remote-control/, source [code.claude.com agent-sdk/sessions]). One cwd per Slack thread (clone the repo to /tmp/<thread_ts>/) gives per-feature isolation and per-thread continuation for free; --fork-session makes the per-feature split explicit when the user wants to branch a thread.
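A sketch of the contract, assuming REPO_URL and the /tmp layout; the flags shown (-p, --continue, --resume, --fork-session) are Claude Code’s headless and session controls:

```bash
# One cwd per Slack thread: transcripts key on the (encoded) cwd, so
# returning to the same path resumes the same session.
dir="/tmp/${THREAD_TS}"
if [ -d "$dir" ]; then
  cd "$dir"
  claude -p "$PROMPT" --continue      # same thread → same session
else
  git clone "$REPO_URL" "$dir" && cd "$dir"
  claude -p "$PROMPT"                 # new thread → fresh session
fi
# Branching a thread into a second feature without disturbing the original:
#   claude -p "$PROMPT" --resume <session-id> --fork-session
```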
The Synology layer is where the integration risk lives
Three of the four child reports describe well-trodden ground. The fourth notes that no public end-to-end Synology preview-deployment write-up exists as of 2026 (synology-preview-deployments/): Container Manager ships Docker 24.0.2 (out of upstream support since June 2024, source [xda-developers]), Coolify’s installer fails on DSM ([coollabsio/coolify#3166]), and Dokploy breaks against DSM’s older daemon ([Dokploy#3888]). The minimum-viable stack is Cloudflare Tunnel + wildcard hostname + GHCR + SSH + a bash wrapper around docker compose -p pr-N, plus a Task Scheduler cron for docker system prune and a BTRFS snapshot quota so previews don’t eat the volume. PR-close → teardown is pull_request.closed → docker compose -p pr-N down -v. Self-hosted runners stay on private repos, since a fork PR on a public repo could run arbitrary code on the runner ([docs.github.com secure-use]).
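A sketch of the wrapper the deploy job would call over SSH; the compose path, image name, and prune window are assumptions:

```bash
#!/bin/sh
# deploy-preview.sh, invoked over SSH by the pull_request.opened job:
#   deploy-preview.sh <pr-number> <image-tag>
set -eu
PR="$1"; TAG="$2"
export IMAGE="ghcr.io/OWNER/REPO:${TAG}"   # compose.yml reads ${IMAGE}
COMPOSE="docker compose -p pr-${PR} -f /volume1/previews/compose.yml"
$COMPOSE pull
$COMPOSE up -d

# pull_request.closed → teardown:
#   docker compose -p "pr-${PR}" -f /volume1/previews/compose.yml down -v
# Task Scheduler cron (weekly) so previews don't accumulate:
#   docker system prune -af --filter "until=168h"
```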
The unresolved question worth answering before building
The Synology research is blunt: if previews must be reachable by external reviewers on a public repo with reliable uptime, don’t host them on a NAS; a $5/mo VPS or the Cloudflare Pages free tier is structurally better suited (synology-preview-deployments/, source [simplehomelab]). The user’s sangu.be design reads as internal-only previews, which is fine, but the audience question (who clicks the URL?) decides whether the NAS is the right home for this at all.