Deployment
Ship Aphex to production with Docker or Render. Env vars, migrations, signed assets, backups, health checks.
Aphex is a SvelteKit app (adapter-node), so it deploys like any Node web service. The base template ships two ready-to-go deploy paths:
- Dockerfile — single-package multi-stage build. Drop into Render, Fly, Railway, k8s, or plain docker run.
- Procfile (web: node build) — for buildpack platforms like canine.sh, Heroku, or any host that detects package.json and runs the Procfile.
Both paths produce the same build/index.js from @sveltejs/adapter-node. Pick whichever fits your host.
Build no longer requires .env. Earlier template versions crashed during SvelteKit's analyse pass if DATABASE_URL / AUTH_SECRET / RESEND_API_KEY weren't set at build time. The current template guards every server-module init with the building flag from $app/environment and falls back to placeholders, so pnpm build succeeds with zero env. Real values are required at runtime.
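The guard pattern reads roughly like this sketch (illustrative only: the real template imports building from $app/environment, while here it is a parameter so the helper stays self-contained):

```typescript
// Sketch of the build-time guard pattern described above. Illustrative:
// the template reads `building` from $app/environment; here it is a
// parameter so the helper stays self-contained.
export function resolveEnv(
  name: string,
  building: boolean,
  env: Record<string, string | undefined>
): string {
  // During SvelteKit's analyse pass no env is available, so return a placeholder.
  if (building) return `placeholder-${name}`;
  const value = env[name];
  if (!value) throw new Error(`${name} is required at runtime`);
  return value;
}
```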
Other platforms (Vercel, Cloudflare Pages) may work but aren't tested in production by the maintainers — adapter-node is portable but Sharp + Postgres driver want Node. If you ship on one, please open a PR with notes.
Production checklist
Before your first deploy:
Generate a strong BETTER_AUTH_SECRET. 32+ random bytes. Rotating it logs everyone out and invalidates every API key, so generate once and keep it stable.
openssl rand -base64 48
Provision Postgres. A managed instance (Neon, Supabase, Render Postgres, RDS) or your own container. Capture DATABASE_URL. Default pool is 10 — fine for a VPS; drop to 1–3 for serverless.
Decide on storage. Local filesystem works for single-node Docker deploys (mount a volume). For anything else, provision R2 / S3 — capture all four R2_* vars.
Pick an email provider. Resend in prod (RESEND_API_KEY). Optional in dev. Required for password reset, email verification, and invitations.
Run migrations. pnpm db:generate locally → review the SQL → commit → run pnpm db:migrate against prod as a release step (not at boot — concurrent deploys will race).
Sign up the first user immediately. The first sign-up gets super_admin. Do this as soon as the app is live, before anyone else can hit /login.
Environment variables
# --- Database ----------------------------------------------
DATABASE_URL=postgres://user:pass@host:5432/dbname?sslmode=require
# Or split: PG_HOST, PG_PORT, PG_USER, PG_PASSWORD, PG_DATABASE
# --- Auth --------------------------------------------------
BETTER_AUTH_SECRET=<48+ bytes of randomness — never commit>
BETTER_AUTH_URL=https://cms.your-app.com # public origin of the SvelteKit app
AUTH_TRUSTED_ORIGINS=https://cms.your-app.com,https://your-app.com
# AUTH_SECRET / AUTH_URL also work for backwards compat (and the
# bundled Dockerfile uses those names).
# --- Email (Resend in prod, Mailpit in dev) ----------------
RESEND_API_KEY=re_xxxxxxxxxxxxxxxxxxxxxxxx
RESEND_FROM=[email protected] # must be a verified sender
# --- Storage (any S3-compatible — optional, falls back to local) ---
R2_BUCKET=my-bucket
R2_ENDPOINT=https://<account>.r2.cloudflarestorage.com
R2_ACCESS_KEY_ID=...
R2_SECRET_ACCESS_KEY=...
R2_PUBLIC_URL=https://cdn.your-app.com # what end-users see in <img src=…>
BETTER_AUTH_URL and AUTH_TRUSTED_ORIGINS are not the same thing. The first is where the auth cookies are scoped — it must match the public origin exactly (protocol + host + port). The second is the CSRF allowlist — the comma-separated origins your frontend(s) call from. Get either wrong and login fails silently with no obvious error.
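A fail-fast check at server startup turns a missing runtime variable into a failed deploy instead of a silent login failure. A minimal sketch (hypothetical helper, not part of the template):

```typescript
// Hypothetical boot-time guard, not part of the template. Call once at
// startup so a missing variable fails the release, not the first login.
const REQUIRED_AT_RUNTIME = ['DATABASE_URL', 'BETTER_AUTH_SECRET', 'BETTER_AUTH_URL'];

export function assertRuntimeEnv(env: Record<string, string | undefined>): void {
  const missing = REQUIRED_AT_RUNTIME.filter((name) => !env[name]?.trim());
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
}
```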
Optional but recommended
# Asset signing — short-lived signed URLs for cross-org sharing
ASSET_SIGNING_SECRET=<32+ random chars>
# Public org id — pinned in your frontend integration (see /frontend)
PUBLIC_ORG_ID=<uuid of the org your public site reads from>
Docker
The base template ships a single-package multi-stage Dockerfile. It installs with pnpm via corepack, builds with ADAPTER=node so the output is build/index.js, and prunes devDependencies before copying into the runtime stage.
FROM node:20-alpine AS builder
RUN corepack enable
WORKDIR /app
COPY package.json pnpm-lock.yaml* ./
RUN pnpm install --frozen-lockfile
COPY . .
RUN ADAPTER=node pnpm build
RUN pnpm prune --prod
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production PORT=3000
COPY --from=builder /app/build ./build
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/drizzle ./drizzle
COPY --from=builder /app/drizzle.config.ts ./drizzle.config.ts
EXPOSE 3000
CMD ["node", "build"]
Build and run
# Build the image (no env needed at build time)
docker build -t my-aphex .
# Run with prod env vars. Use a stable AUTH_SECRET from your secret store —
# regenerating it on every run would invalidate all sessions and API keys.
docker run --rm -d -p 3000:3000 \
  -e DATABASE_URL=postgres://user:pass@db-host:5432/aphex \
  -e AUTH_SECRET="$AUTH_SECRET" \
  -e AUTH_URL=https://cms.your-app.com \
  -e RESEND_API_KEY=re_xxxx \
  -e AUTH_TRUSTED_ORIGINS=https://cms.your-app.com,https://your-app.com \
  --name aphex-studio my-aphex
# Migrate (one-shot against the running container)
docker exec aphex-studio pnpm db:migrate
Compose with Postgres + Cloudflare Tunnel
If you want everything in one box, write a docker-compose.yml next to the Dockerfile. (The template no longer ships prod.docker-compose.yml — different users want different stacks.)
services:
  postgres:
    image: postgres:16-alpine
    env_file: [.env.production]
    environment:
      POSTGRES_USER: ${PG_USER}
      POSTGRES_PASSWORD: ${PG_PASSWORD}
      POSTGRES_DB: ${PG_DATABASE}
    volumes: [postgres_data:/var/lib/postgresql/data]
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U $${PG_USER} -d $${PG_DATABASE}']
      interval: 10s
      retries: 5
  studio:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: [.env.production]
    depends_on:
      postgres:
        condition: service_healthy
    ports: ['3000:3000']
    volumes:
      - ./storage:/app/storage # local-storage persistence; remove if using R2/S3
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      TUNNEL_TOKEN: ${CLOUDFLARE_TUNNEL_TOKEN}
    profiles: [cloudflare]
volumes:
  postgres_data:
docker compose up -d --build
docker compose exec studio pnpm db:migrate
docker compose logs -f studio
Local storage in Docker
The ./storage:/app/storage mount keeps uploaded assets durable across container rebuilds. If you switch to R2 / S3, you can remove the volume — cms_assets rows still reference the old storageAdapter, so historical local files keep working through the mount.
Updating
git pull
docker compose up -d --build studio
docker compose exec studio pnpm db:migrate
Only the studio service rebuilds; Postgres data persists in the named volume.
Buildpack / PaaS (Procfile)
For platforms like canine.sh, Heroku, Fly.io with flyctl launch, or anything that detects package.json and runs your Procfile, the template ships:
web: node build
You typically need two pieces of platform configuration:
- Build command — set ADAPTER=node so SvelteKit emits build/index.js. On most platforms this means a heroku-postbuild script in package.json or a build env var. The simplest fix is to make node the default — change svelte.config.js:

import adapterNode from '@sveltejs/adapter-node';

export default { kit: { adapter: adapterNode() } };

Then any pnpm build (Docker, Procfile, CI, local) produces a runnable Node bundle without needing ADAPTER=node.

- Runtime env vars — set DATABASE_URL, AUTH_SECRET, AUTH_URL, RESEND_API_KEY, and AUTH_TRUSTED_ORIGINS in the platform's secret manager. These are required at runtime; the build doesn't need them.
Buildpack gotchas
A few things that bite people on buildpack-based platforms (canine.sh, Heroku, Fly's CNB launcher, anything Kubernetes-with-buildpacks):
- Don't set a custom Start command. Buildpacks rely on a launcher that sets PATH correctly so node/npm/pnpm resolve. Leaving the start command blank lets the launcher read your Procfile (web: node build). Setting it explicitly bypasses the launcher and you'll see exec: "node": executable file not found in $PATH.
- Set the container port to 3000. SvelteKit's adapter-node listens on PORT (defaulting to 3000 if unset). Some platforms inject PORT automatically; others don't — if yours doesn't, hard-code 3000 in the platform's container/port config to match the ENV PORT=3000 in the runtime stage.
- Lockfile presence matters. Buildpacks pick a package manager based on which lockfile is committed (pnpm-lock.yaml → pnpm, package-lock.json → npm). If you're publishing the template via subtree push or similar, make sure the standalone repo gets a generated lockfile — buildpacks won't fall back gracefully when a Node project has no lockfile.
- TLS on self-managed Kubernetes. If your PaaS runs on top of cert-manager + Let's Encrypt, HTTP-01 challenges fail when the ingress controller redirects /.well-known/acme-challenge/... to HTTPS. Switch the ClusterIssuer to DNS-01 (e.g. via a Cloudflare API token) — it validates against your DNS provider directly, without needing public HTTP reachability on ports 80/443.
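The DNS-01 switch is a small ClusterIssuer change. A sketch (issuer name, email, and secret names are placeholders, not something the template ships):

```yaml
# Illustrative cert-manager ClusterIssuer using DNS-01 via Cloudflare.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01            # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]            # placeholder
    privateKeySecretRef:
      name: letsencrypt-dns01-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token   # Secret holding your CF API token
              key: api-token
```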
Render.com (battle-tested)
Render runs the same Docker image, with managed Postgres alongside.
Create a managed Postgres in the Render dashboard. Pick a region close to where you'll host the web service. Capture the Internal Database URL for use inside the app.
Create a Web Service from your repo:
- Environment: Docker
- Dockerfile path:
Dockerfile(the scaffolded template ships one at the project root) - Docker Build Context Directory: project root (
.)
Render builds with Docker and runs the resulting container. No native Sharp gotchas because you control the base image.
Add environment variables (matching the matrix above). Use the internal Postgres URL for DATABASE_URL. Set BETTER_AUTH_URL to the public Render URL (e.g. https://aphex-studio.onrender.com) until you point a custom domain.
Add a Disk (Render's persistent volume) and mount it at /app/storage if you're using local storage. Skip this if you're using R2 / S3 — assets live in the bucket.
Migration Pre-Deploy command: set pnpm db:migrate so Render runs migrations before flipping traffic to the new release.
Custom domain — once added in Render's settings, update BETTER_AUTH_URL and AUTH_TRUSTED_ORIGINS to match. Without this, login redirects fail.
Free vs Starter plan caveats
Render's free plan spins down after 15 minutes of inactivity. The first request after spin-down hits a cold start that can take 30–60 seconds — fine for occasional editing, painful for production. Use the Starter plan for anything customer-facing.
Migrations on deploy
Never run db:push against production — it can drop columns silently. Always commit a generated migration file:
Locally edit your Drizzle schema, then pnpm db:generate. Review the SQL in drizzle/0NNN_*.sql. Commit both the schema change and the migration.
On deploy run pnpm db:migrate against production before serving traffic — Render's pre-deploy command, a Docker exec step, a CI job. Whatever your platform calls it.
On first request after the deploy, the CMS hook calls initializeRLS() to ensure RLS policies exist on cms_documents and cms_assets. This is idempotent.
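One concrete shape for the release step, sketched in GitHub Actions syntax (job layout and secret names are illustrative; adapt to whatever your CI calls a pre-deploy hook):

```yaml
# Illustrative CI release job: migrate first, deploy only if migration succeeds.
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: pnpm/action-setup@v4
    - run: pnpm install --frozen-lockfile
    - run: pnpm db:migrate           # runs against the prod DATABASE_URL
      env:
        DATABASE_URL: ${{ secrets.DATABASE_URL }}
    - run: echo "trigger your platform's deploy here"
```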
Signed asset URLs
If you serve assets to multiple downstream apps that don't share the CMS's session, generate short-lived signed URLs instead of exposing API keys:
createCMSConfig({
  security: {
    assetSigningSecret: env.ASSET_SIGNING_SECRET // 32+ chars
  }
});
The /media/{id}/(unknown)?sig=...&exp=... URL is HMAC-validated on every request. After expiry, it's a 403.
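The check is the standard HMAC-signed-URL pattern. A self-contained sketch of that general pattern (function names and the payload format are illustrative, not the CMS's internal code):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Illustrative HMAC-signed-URL pattern; not the CMS's actual implementation.
export function sign(path: string, expiresAt: number, secret: string): string {
  return createHmac('sha256', secret).update(`${path}:${expiresAt}`).digest('hex');
}

export function verify(path: string, expiresAt: number, sig: string, secret: string): boolean {
  if (Date.now() / 1000 > expiresAt) return false; // expired: the server answers 403
  const expected = Buffer.from(sign(path, expiresAt, secret), 'hex');
  const given = Buffer.from(sig, 'hex');
  // Constant-time compare; length check first because timingSafeEqual throws on mismatch.
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```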
Health checks
The CMS exposes adapter health via databaseAdapter.isHealthy() and storageAdapter.isHealthy(). Wire them into a /healthz route:
import { json } from '@sveltejs/kit';

export const GET = async ({ locals }) => {
  const { databaseAdapter, storageAdapter } = locals.aphexCMS;
  const [db, storage] = await Promise.all([
    databaseAdapter.isHealthy(),
    storageAdapter.isHealthy()
  ]);
  const ok = db && storage;
  return json({ ok, db, storage }, { status: ok ? 200 : 503 });
};
Point your platform's health probe at /healthz. Render watches for HTTP 200 by default.
Backups
Three things to back up:
| Asset | How |
|---|---|
| Postgres | Render's managed Postgres has automated backups. Self-hosted: pg_dump to S3. |
| Storage bucket | R2 / S3 versioning + lifecycle rules. Local: rsync the mounted volume. |
| Secrets (BETTER_AUTH_SECRET, signing keys) | Password manager. Losing them invalidates every session and signed URL. |
Documents are content + a hash; restoring Postgres restores everything including version history.
CDN / cache headers
The CMS's asset CDN sets Cache-Control: public, max-age=31536000, immutable because URLs are content-addressed (the id is unique per upload). For your public site's HTML, set sensible cache headers yourself — the CMS doesn't touch SvelteKit page responses.
For the published-data cache layer, see Configuration → cache. Pair with a CDN like Cloudflare in front of your origin and you've got two-tier caching for free.
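For example, a tiny helper your public site's handle hook could use to pick a header per path (hypothetical rules, nothing the CMS ships):

```typescript
// Hypothetical cache-header policy for your own SvelteKit handle hook.
export function cacheControlFor(pathname: string): string {
  if (pathname.startsWith('/admin') || pathname.startsWith('/api')) {
    return 'private, no-store'; // never let a shared cache keep authed responses
  }
  // Short browser TTL, longer shared/CDN TTL for public HTML.
  return 'public, max-age=60, s-maxage=300';
}
```

In hooks.server.ts you would call this after resolve(event) and set the result on response.headers.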
Other platforms
These should work — the studio is a standard adapter-node SvelteKit app — but aren't tested in production by the maintainers:
- Vercel / Netlify — adapter-vercel exists; cold starts will hit Sharp's native binaries. Test asset uploads carefully.
- Cloudflare Pages / Workers — partial fit. The admin uses Sharp + Better Auth crypto + Postgres driver, all of which want Node. The pragmatic split: studio on a Node platform, public site on Pages, both reading the same DB.
- Fly.io / Railway — Docker-friendly platforms; the bundled Dockerfile drops in directly.
- canine.sh / Heroku — Procfile + buildpacks (see above).
If you ship on one of these, file an issue (or a docs PR) so we can fold real-world notes into this page.