
Cloud Deploy

Hosting the headless openhuman-core in the cloud: DigitalOcean App Platform or Docker Compose on any VPS.

OpenHuman is a desktop app, but its Rust core (openhuman-core) is a headless JSON-RPC server that can be hosted in the cloud. Deploying the core separately is useful for:

  • Multi-device access: point several desktop clients at the same hosted core

  • Internal testers without local Rust toolchains

  • Long-running cron jobs / webhooks that should outlive a laptop session

This guide covers three deploy paths, easiest first:

  1. DigitalOcean App Platform: one-click
  2. DigitalOcean App Platform: manual via doctl
  3. Any VPS via Docker Compose

What gets deployed in every path: a single container running openhuman-core serve on port 7788, behind the provider's TLS. The desktop app already knows how to talk to a remote core: set OPENHUMAN_CORE_RPC_URL=https://your-host/rpc and OPENHUMAN_CORE_TOKEN=... in app/.env.local and launch.


Single source of truth for the bearer token

Every /rpc call carries Authorization: Bearer <token>. The core has two ways to load that token at startup (src/core/auth.rs):

  1. OPENHUMAN_CORE_TOKEN environment variable — pre-seeded by the caller (Tauri shell, Docker, App Platform, systemd unit, …). The core uses this value as-is and never writes a file.

  2. {workspace}/core.token file — generated by the core on first boot only when OPENHUMAN_CORE_TOKEN is unset. Standalone openhuman core run uses this so CLI clients can cat the file.

Rule of thumb for any remote / dockerized deploy: always set OPENHUMAN_CORE_TOKEN. Do not rely on core.token in a container — ephemeral filesystems lose it on redeploy, and any client trying to read the file from outside the container will get a stale or empty value. The two paths are deliberately mutually exclusive at startup; mixing them is the most common reason behind "the dashboard gets 401 after I redeployed".

To check what the running core is using, run scripts/print-core-token.sh on the host (or inside the container with docker compose exec):

scripts/print-core-token.sh --where     # prints 'env' or 'file:/path'
scripts/print-core-token.sh --redact    # first 8 hex chars + '…' (safe for logs)
scripts/print-core-token.sh             # full value (pipe straight into a client)

The desktop app's first-run picker also exposes a Test connection button next to the Core RPC URL + token fields, which fires core.ping against the URL with the typed token and reports Connected ✓ / Auth failed / Unreachable inline before persisting the configuration.


What you need before you start

| Setting | Required | Notes |
| --- | --- | --- |
| OPENHUMAN_CORE_TOKEN | yes | Bearer token clients send to /rpc. Generate with openssl rand -hex 32. Anyone with this token can drive the core. |
| BACKEND_URL | yes | Tinyhumans backend the core talks to (https://api.tinyhumans.ai for prod). |
| OPENHUMAN_APP_ENV | no | production or staging. Defaults to production. |
| OPENHUMAN_CORE_HOST | no | Defaults to 0.0.0.0 in the container. |
| OPENHUMAN_CORE_PORT | no | Defaults to 7788. |
| RUST_LOG | no | info is fine; debug for triage. |
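The required settings can be staged in a shell before any of the deploy paths below. A minimal sketch (the values mirror the table above; swap BACKEND_URL for staging as needed):

```shell
# Generate a strong bearer token and stage the core's environment.
export OPENHUMAN_CORE_TOKEN="$(openssl rand -hex 32)"
export BACKEND_URL="https://api.tinyhumans.ai"
export OPENHUMAN_APP_ENV="production"   # optional; this is the default
export OPENHUMAN_CORE_PORT="7788"       # optional; this is the default

# A -hex 32 token is 64 hex characters long.
echo "${#OPENHUMAN_CORE_TOKEN}"         # prints 64
```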

Endpoints exposed by the running container:

  • GET /health: public liveness probe, used by every deploy path's healthcheck.

  • POST /rpc: bearer-protected JSON-RPC entrypoint.

  • GET /events and GET /ws/dictation: public streaming channels.

The OPENHUMAN_WORKSPACE directory (/home/openhuman/.openhuman inside the container) holds the core's config, sqlite databases, and skill state. Mount it on a persistent volume in every production deploy or you will lose data on restart.


1. DigitalOcean App Platform: one-click

Click the button below to create a new App Platform application from this repository's .do/app.yaml:

Deploy to DO

Then, in the App Platform UI, before the first deploy completes:

  1. Open the Settings → App-Level Environment Variables tab.

  2. Replace the placeholder OPENHUMAN_CORE_TOKEN value with a strong secret (openssl rand -hex 32). Mark it encrypted.

  3. If you are deploying staging, change OPENHUMAN_APP_ENV to staging and BACKEND_URL to https://staging-api.tinyhumans.ai.

  4. Hit Save. App Platform redeploys with the new secret.

App Platform handles TLS, restart-on-crash, log streaming, and rolling redeploys on git push (set deploy_on_push: true in .do/app.yaml to opt-in).

Persistence note: App Platform Basic does not provide block storage. The core's workspace lives in the container's ephemeral filesystem and is lost on redeploy. For durable storage, attach a managed database or upgrade to a tier that supports volumes. See the Compose path for a self-host alternative with persistent volumes out of the box.


2. DigitalOcean App Platform: manual via doctl

If you'd rather not click through the UI:
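A sketch of the doctl flow, assuming doctl is installed and authenticated against your account:

```shell
# Create a new App Platform app from the in-repo spec.
doctl apps create --spec .do/app.yaml
```

Remember to set OPENHUMAN_CORE_TOKEN to a real secret in the spec (or in the UI) before the first deploy completes, as in the one-click path.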

Update an existing app after editing the spec:
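A sketch, assuming you look up the app ID first:

```shell
# Find the app ID, then push the edited spec to it.
doctl apps list
doctl apps update <app-id> --spec .do/app.yaml
```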


3. Any VPS via Docker Compose

Works on any host with Docker Engine ≥ 24 and the Compose plugin: a DigitalOcean Droplet, Hetzner, Linode, EC2, or a home server.

Each production release publishes a multi-tagged image to GHCR:
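A typical multi-tag layout might look like the following (the image name appears in the Compose instructions in this section; the version tags are illustrative):

```
ghcr.io/tinyhumansai/openhuman-core:latest   # most recent production release
ghcr.io/tinyhumansai/openhuman-core:1.2.3    # pinned full version (illustrative)
ghcr.io/tinyhumansai/openhuman-core:1.2      # minor release track (illustrative)
```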

The image is linux/amd64 only. On arm64 hosts, use the standalone tarball attached to the same GitHub Release (openhuman-core-<version>-aarch64-unknown-linux-gnu.tar.gz) or build the image from source on an arm64 builder.

Quick run with a published image:
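A minimal sketch, using the published image name from the Compose note below in this section, the defaults from the settings table, and the container workspace path documented above:

```shell
docker run -d --name openhuman-core \
  -p 7788:7788 \
  -e OPENHUMAN_CORE_TOKEN="$(openssl rand -hex 32)" \
  -e BACKEND_URL="https://api.tinyhumans.ai" \
  -v openhuman-workspace:/home/openhuman/.openhuman \
  --restart unless-stopped \
  ghcr.io/tinyhumansai/openhuman-core:latest
```

Keep a copy of the token you generate here; clients need the same value.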

Or use the in-repo Compose file (still builds the image locally from Dockerfile; switch the image: field to ghcr.io/tinyhumansai/openhuman-core:latest in docker-compose.yml to consume the published image instead):
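A sketch of the Compose commands, assuming docker-compose.yml interpolates OPENHUMAN_CORE_TOKEN from the host environment:

```shell
# Build (or pull, if you switched the image: field) and start in the background.
OPENHUMAN_CORE_TOKEN="$(openssl rand -hex 32)" docker compose up -d --build

# Confirm the core is healthy.
curl -fsS http://localhost:7788/health
```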

Headless install without Docker

If you can't run Docker on the host, grab the standalone CLI tarball attached to the latest GitHub Release:
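A sketch for an aarch64 host (the tarball name comes from the arm64 note above; <version> is a placeholder and the repository URL is illustrative):

```shell
# Download and unpack the release tarball; substitute <version> and, on
# other architectures, the matching target triple.
curl -fsSLO "https://github.com/tinyhumansai/openhuman/releases/download/<version>/openhuman-core-<version>-aarch64-unknown-linux-gnu.tar.gz"
tar -xzf "openhuman-core-<version>-aarch64-unknown-linux-gnu.tar.gz"
sudo install -m 0755 openhuman-core /usr/local/bin/openhuman-core
```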

Then run openhuman-core serve under your service manager of choice (systemd, supervisord, …) with the same environment variables documented above.

Headless self-update contract

Headless deployments should treat openhuman.update_apply as the safe primitive: it downloads the release asset, writes it atomically next to the current binary, and returns. Nothing exits automatically.

openhuman.update_run follows config.update.restart_strategy:

  • self_replace (default): stage the binary, publish an in-process restart request, and let the running core respawn itself.

  • supervisor: stage the binary and return restart_requested=false. Your outer service manager must restart the process.

For long-running Linux services, set:
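For example, in the workspace's update.toml (the [update] table name is inferred from config.update.restart_strategy; the exact file location depends on your workspace):

```toml
# update.toml -- in the core's workspace directory
[update]
restart_strategy = "supervisor"
```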

or the equivalent env vars:
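The environment form of the same setting (the variable name comes from the operator flow below):

```shell
OPENHUMAN_AUTO_UPDATE_RESTART_STRATEGY=supervisor
```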

Recommended systemd stance:
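A sketch of a unit that matches the supervisor strategy (paths, user, and token value are illustrative placeholders):

```ini
# /etc/systemd/system/openhuman.service (illustrative)
[Unit]
Description=OpenHuman core (headless JSON-RPC server)
After=network-online.target

[Service]
User=openhuman
Environment=OPENHUMAN_CORE_TOKEN=changeme
Environment=BACKEND_URL=https://api.tinyhumans.ai
Environment=OPENHUMAN_AUTO_UPDATE_RESTART_STRATEGY=supervisor
ExecStart=/usr/local/bin/openhuman-core serve
Restart=always

[Install]
WantedBy=multi-user.target
```

With Restart=always, crashes restart the core automatically, while updates stay explicit: the core stages the binary and waits for your systemctl restart.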

Operator flow:

  1. Call openhuman.update_check to discover a release.

  2. Configure restart_strategy = "supervisor" in your update.toml (or set OPENHUMAN_AUTO_UPDATE_RESTART_STRATEGY=supervisor) so the core stages the new binary without trying to re-exec itself, then call openhuman.update_apply or openhuman.update_run. restart_strategy is a configuration setting, not an RPC parameter.

  3. Restart the unit explicitly: systemctl restart openhuman.
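The steps above can be sketched with curl; only the method names come from this section, and the JSON-RPC envelope shape is assumed:

```shell
RPC="https://your-host/rpc"
AUTH="Authorization: Bearer $OPENHUMAN_CORE_TOKEN"

# 1. Discover a release.
curl -fsS "$RPC" -H "$AUTH" -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"openhuman.update_check"}'

# 2. Stage the new binary (restart_strategy = "supervisor" keeps the core running).
curl -fsS "$RPC" -H "$AUTH" -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"openhuman.update_apply"}'

# 3. Restart under the supervisor.
systemctl restart openhuman
```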

If download or staging fails, the running binary is left in place and no restart is requested. If a staged binary proves bad after restart, roll back by restoring the previous binary from your package manager, image tag, or release artifact and restarting the supervisor again.

The Compose file (docker-compose.yml) maps the core on :7788, mounts a named volume openhuman-workspace for persistence, and sets restart: unless-stopped so the core comes back after host reboots.
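The relevant parts of that Compose file look roughly like this (a sketch from the description above, not a verbatim copy):

```yaml
services:
  openhuman-core:
    build: .   # or: image: ghcr.io/tinyhumansai/openhuman-core:latest
    ports:
      - "7788:7788"
    environment:
      OPENHUMAN_CORE_TOKEN: ${OPENHUMAN_CORE_TOKEN}
      BACKEND_URL: https://api.tinyhumans.ai
    volumes:
      - openhuman-workspace:/home/openhuman/.openhuman
    restart: unless-stopped

volumes:
  openhuman-workspace:
```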

Updating
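For the Compose path, a rollout is a pull plus recreate; the named volume keeps the workspace across the restart:

```shell
docker compose pull    # fetch the new image tag
docker compose up -d   # recreate the container; the volume keeps state
```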

For RPC-exposed production deployments, prefer leaving mutating update RPCs disabled (OPENHUMAN_AUTO_UPDATE_RPC_MUTATIONS_ENABLED=false) and perform rollouts through your existing image tag or package-management flow instead.

Logs
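For the Compose path (the service name is assumed to be openhuman-core):

```shell
docker compose logs -f openhuman-core

# Raise verbosity for triage by recreating with RUST_LOG=debug:
# RUST_LOG=debug docker compose up -d
```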

Rotating the bearer token

OPENHUMAN_CORE_TOKEN is the only thing standing between the public internet and full RPC access. Rotate it on a schedule and after any suspected leak:
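For the Compose path, rotation is: mint a new secret, update the environment, recreate. A sketch, assuming the token lives in a .env file that Compose interpolates:

```shell
NEW_TOKEN="$(openssl rand -hex 32)"
sed -i "s/^OPENHUMAN_CORE_TOKEN=.*/OPENHUMAN_CORE_TOKEN=$NEW_TOKEN/" .env
docker compose up -d   # recreate with the new token
# Then update OPENHUMAN_CORE_TOKEN in every client's app/.env.local.
```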

For App Platform, do the same in Settings → App-Level Environment Variables: edit the OPENHUMAN_CORE_TOKEN secret and let App Platform redeploy. There is no separate token file to delete; the env var is the only state.

Putting it behind TLS

Use Caddy, nginx, or Traefik as a reverse proxy in front of :7788. A minimal Caddyfile:
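A sketch (the hostname is a placeholder; Caddy provisions TLS for it automatically):

```
core.example.com {
    reverse_proxy 127.0.0.1:7788
}
```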


Pointing the desktop app at a hosted core

In the desktop app's environment file (app/.env.local):
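The two variables named earlier in this guide, with placeholder values:

```shell
# app/.env.local
OPENHUMAN_CORE_RPC_URL=https://your-host/rpc
OPENHUMAN_CORE_TOKEN=<the token you deployed with>
```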

Restart the desktop app. The provider chain in App.tsx will route all RPC calls to the remote core; nothing else changes.


Smoke test

The repo ships .github/workflows/deploy-smoke.yml, which runs on every PR that touches the deploy artifacts. It builds the Docker image, boots it, and polls /health, so a regression in the cloud deploy path fails CI before it lands on main.

To run the same check locally:
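A sketch mirroring the CI job (container and tag names are illustrative):

```shell
docker build -t openhuman-core:smoke .
docker run -d --name core-smoke -p 7788:7788 \
  -e OPENHUMAN_CORE_TOKEN="$(openssl rand -hex 32)" \
  -e BACKEND_URL="https://api.tinyhumans.ai" \
  openhuman-core:smoke
curl --retry 10 --retry-connrefused -fsS http://localhost:7788/health
docker rm -f core-smoke
```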
