v0.2.34 — Split Convex into its own service
TL;DR
Convex now runs as its own Docker service (convex) instead of being embedded in the platform container. Platform becomes a thin Vite/TanStack Start client that pushes schema + env vars to Convex over HTTP. Existing deployments are migrated automatically on the next tale start / tale deploy after tale upgrade — there is no separate tale migrate command.
```shell
# Dev:
tale upgrade
tale start          # detects pending migrations, prompts to confirm

# Production:
tale upgrade
tale deploy         # detects pending migrations, prompts to confirm

# Or non-interactive (CI):
tale deploy --yes
```
Pre-v0.2.33 users will see two migrations applied in a single gated step: first namespace-volumes (rename tale_* → ${projectId}_*), then split-convex (this release). Both are idempotent and re-runnable.
Why
The old all-in-one layout tied the Vite frontend’s lifecycle to Convex. A slow convex push or a transient Rust crash would fail the platform healthcheck and pull the whole site out of the load balancer even when Vite itself was fine. Every Convex backend version bump forced a full platform rebuild, and blue-green deploys had to coordinate two sibling platform containers fighting over the same Convex data volume.
The new layout treats Convex as a database — an independent service with its own image, its own volume, and its own lifecycle. Code changes rebuild the platform image only; the Convex container keeps running and WebSocket clients stay connected.
What changed
Architecture
- New `convex` service runs convex-local-backend + the Convex Dashboard. It owns the `convex-data` volume (migrated from `platform-data`).
- New `tale-convex` Docker image (~485 MB compressed).
- Platform image slimmed from ~2.58 GB to ~320 MB compressed: it keeps the `generate_key` binary from the convex-backend base (needed to sign admin tokens for the remote push) but drops the backend daemon, the Dashboard, and the built-in seed assets.
- Caddy upstream paths (`/ws_api`, `/http_api`, `/api/storage`, `/convex-dashboard`, `/api/*/sync`, …) are re-routed from `platform:*` to `convex:*`.
- The `rag` and `crawler` services now read their shared provider config from `convex-data:/app/platform-config:ro` instead of `platform-data`.
Platform’s entrypoint now:
- Waits for `http://convex:3210/version` (the sibling `convex` service).
- Pushes 24 env vars via `bunx convex env set` (secrets, URLs, file-path semantics).
- Runs `bunx convex deploy --url http://convex:3210` with a 300s default timeout.
- Classifies deploy failures into `start_push` / `wait_for_schema` / `finish_push` stages with tailored diagnostics.
- Touches `/tmp/platform-ready`, the compose healthcheck gate that prevents traffic until the deploy succeeds.
Env vars are persisted in Convex’s own database, so a convex restart doesn’t require a repush.
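The gating sequence above can be sketched as a small shell helper. The `wait_for` name and retry shape are assumptions, not the CLI's actual code, and the real entrypoint commands appear only as comments:

```shell
#!/bin/sh
# Hypothetical sketch of the entrypoint's gating loop. Only wait_for is
# exercised here; the real deploy commands are shown as comments.

# wait_for N CMD...: retry CMD up to N times, pausing 1s between attempts.
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@" >/dev/null 2>&1; then return 0; fi
    i=$((i + 1))
    if [ "$i" -lt "$tries" ]; then sleep 1; fi
  done
  return 1
}

# In the real entrypoint this would look roughly like:
#   wait_for 300 curl -sf http://convex:3210/version   # sibling service up?
#   bunx convex env set SOME_SECRET "$SOME_SECRET"     # repeated per env var
#   bunx convex deploy --url http://convex:3210        # 300s default timeout
#   touch /tmp/platform-ready                          # open the healthcheck gate
wait_for 3 true && echo "ready"
```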
Migration framework
The CLI’s migration subsystem (introduced in v0.2.33) is now a generic, version-agnostic pipeline. Each migration is a registry entry with `detect`, `requiredStops`, and `apply` hooks; `tale start` and `tale deploy` compute the pending set at runtime and prompt before applying.
- No new user-facing flag in this release. The pipeline runs whatever’s pending, in registry order.
- `tale deploy --yes` accepts any pending migrations non-interactively (it replaces the deprecated `--migrate-volumes`, which still works as an alias for one release).
- Persisted state lives in `.tale/migrations.json` (an append-only `applied[]` array); the legacy `.tale/migration-pending` marker is auto-migrated on first run.
Dev workflow
- Default `bun run dev` still works (it spawns `bunx convex dev` locally on 127.0.0.1:3210).
- New `CONVEX_EXTERNAL=true bun run dev` mode for running Vite against a containerised convex (`docker compose up convex`).
- `vite.config.ts` reads `CONVEX_URL` / `CONVEX_SITE_PROXY_URL` for proxy targets.
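The target selection can be sketched as a tiny helper. The function name is hypothetical; the fallback mirrors the local `bunx convex dev` default port mentioned above:

```shell
# Hypothetical helper mirroring how vite.config.ts might resolve its proxy
# target: use CONVEX_URL when set, else the local `bunx convex dev` default.
resolve_convex_url() {
  # $1: the CONVEX_URL value (may be empty or unset)
  if [ -n "${1:-}" ]; then
    echo "$1"
  else
    echo "http://127.0.0.1:3210"
  fi
}

resolve_convex_url ""                     # default local-dev mode
resolve_convex_url "http://convex:3210"   # CONVEX_EXTERNAL=true mode
```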
Upgrade instructions
Important: the migration does not create an automatic backup of your data. It preserves the legacy `${projectId}_platform-data` volume in place after copying its contents into the new `${projectId}_convex-data`. If you want a hard backup before upgrading, snapshot the volume yourself first (e.g. `docker run --rm -v ${projectId}_platform-data:/src:ro -v $(pwd):/out alpine tar -C /src -czf /out/platform-data-backup.tgz .`).
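The backup pattern can be exercised locally without Docker; this sketch round-trips a temp directory through the same tar flags, with the directories standing in for the volume and the output path:

```shell
#!/bin/sh
# Round-trip the tar backup pattern using temp directories instead of
# Docker volumes, so it runs anywhere.
set -eu
data=$(mktemp -d)     # stands in for ${projectId}_platform-data
out=$(mktemp -d)      # stands in for the host output directory
restore=$(mktemp -d)  # where we unpack to verify the archive

echo "hello" > "$data/file.txt"

tar -C "$data" -czf "$out/platform-data-backup.tgz" .    # backup
tar -C "$restore" -xzf "$out/platform-data-backup.tgz"   # restore

cat "$restore/file.txt"
```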
There is no separate tale migrate command. Migrations are part of the normal start / deploy flow:
```shell
tale upgrade        # writes nothing destructive; logs which migrations are pending
tale start          # dev: detects pending migrations, prompts [y/N], then starts

# or, in production:
tale deploy         # detects pending migrations, prompts [y/N], then deploys

# Non-interactive (CI):
tale deploy --yes   # accept any pending migrations without prompting
```
The migration runner:
- Detects every pending migration in the registry (the current pending set: `split-convex`, plus `namespace-volumes` for users still on pre-v0.2.33).
- Prints the plan: which migrations run, which containers will be stopped briefly, and the estimated impact.
- Prompts `[y/N]` (default No). Declining exits cleanly with code 2; nothing is changed.
- On confirm: stops the affected containers, copies data with `cp -a --user 1001:1001`, verifies a strict file count (src + 1 sentinel == dst), and records the migration in `.tale/migrations.json`.
- Leaves the old volume untouched.
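The copy-and-verify step can be simulated with plain directories. The sentinel filename here is hypothetical; the real runner operates on Docker volumes and sets ownership via its own cp invocation:

```shell
#!/bin/sh
# Simulate the migration's copy + strict-count verification with temp dirs.
set -eu
src=$(mktemp -d); dst=$(mktemp -d)

# A fake source volume with two files.
echo "schema" > "$src/schema.json"
mkdir -p "$src/storage"
echo "blob" > "$src/storage/blob.bin"

cp -a "$src/." "$dst/"             # copy contents, preserving modes
touch "$dst/.migration-sentinel"   # hypothetical completion marker

src_count=$(find "$src" -type f | wc -l)
dst_count=$(find "$dst" -type f | wc -l)

# Strict check from the release notes: src + 1 sentinel == dst.
if [ "$dst_count" -eq $((src_count + 1)) ]; then
  echo "verified"
else
  echo "mismatch"
  exit 1
fi
```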
After verifying the new setup works, reclaim disk space:
```shell
docker volume rm <projectId>_platform-data
docker volume rm <projectId>-dev_platform-data   # if you ran dev mode
```
Breaking changes
- The `platform` container no longer mounts `/app/data` read-write. If you had custom bind mounts into `platform-data`, rewrite them to target `convex-data`.
- `platform` no longer exposes ports 3210, 3211, and 6791; those all live on `convex` now. Caddy’s routing handles the transition transparently; only custom test harnesses that hit the ports directly need an update.
- The `platform-data:/app/platform-config:ro` mounts on `rag` / `crawler` are rewritten to `convex-data:/app/platform-config:ro`. If you run those services standalone with hand-written compose files, update the mount source.
- The platform container’s Docker healthcheck now only probes Vite (`/api/health`); it no longer requires Convex to be healthy. Convex has its own independent healthcheck.
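Combining this with the `/tmp/platform-ready` gate described earlier, a platform healthcheck might take roughly this shape. The function name and port are assumptions; the Vite probe is shown only as a comment:

```shell
# Hypothetical sketch: a platform healthcheck that passes only when the deploy
# gate file exists. The Vite probe is commented out since the port depends on
# the image.
healthcheck() {
  ready_file=$1   # e.g. /tmp/platform-ready, touched after a successful deploy
  [ -f "$ready_file" ] || return 1
  # In the container this would also probe Vite, e.g.:
  #   curl -sf "http://127.0.0.1:${PORT}/api/health"
  return 0
}

gate=$(mktemp)    # stand-in for /tmp/platform-ready
healthcheck "$gate" && echo "healthy"
```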
Rollback
`tale rollback --version 0.2.x` works as long as `platform-data` has been preserved. The old image expects `platform-data:/app/data`, which is untouched by the migration. After rollback:
```shell
# Optional cleanup: remove the unused convex-data volume
docker volume rm <projectId>_convex-data
```
See Production deployment → Schema compatibility and rollback for guidance on handling schema changes across versions.
Known caveats
- Blue-green transient window (~10–30s): during cutover, the blue platform is still serving users while green has already deployed new Convex functions. If the new deploy removes or renames a function, blue clients may see 404s in that window. Use expand-contract for breaking changes.
- Forward-only schema: `tale rollback` reverts container images but does not revert Convex data. Required-field additions, renames, and type changes should follow the two-release expand-contract pattern documented in Schema compatibility and rollback.
- First deploy after migration takes longer than subsequent ones because the Convex env vars are all “new”; expect ~30–60s for env sync + deploy completion on first boot.
Last modified on April 19, 2026