# Percher

> AI-native hosting platform. Connect your AI assistant, say "publish my app", and it's live at `name.percher.run` in under 60 seconds with SSL, health checks, logs, and rollback.

## For AI agents with MCP (Claude Code, Cursor, Windsurf, VS Code)

Use the `percher_publish` MCP tool. It handles everything in one call:

- Authenticates (or returns a login URL if needed)
- Auto-detects framework and generates config
- Packages, uploads, and deploys
- Returns the live URL on success
- Returns actionable error diagnosis on failure

### Recommended agentic happy path

The happy path for AI agents is the async opt-in — control returns to you in a few seconds instead of blocking for several minutes inside publish, so you can keep talking to the user while the build runs:

1. **`percher_publish({ waitForLive: false })`** — uploads the bundle and queues the deploy. Returns immediately with `{ status: 'queued', deployment: { id }, recovery: { nextAction: 'wait_deploy', args: { app, deployId, timeoutSeconds } } }`.
2. **`percher_wait_for_deploy(recovery.args)`** — short-polls the queued deploy. Default 30s timeout, max 120s. Returns one of four statuses:
   - `live` → surface `result.url` to the user.
   - `failed` → follow `result.recovery.nextAction` literally. For ambiguous build failures, recovery will be `run_doctor` — call `percher_doctor(recovery.args)` and follow doctor's `recovery.nextAction` recursively. Other recoveries: `set_env_vars` / `inspect_build_log` / `retry` / `ask_user`.
   - `replaced` → this row was superseded; check `recovery.nextAction` (`none` with `result.url` = newer deploy is live; `wait_deploy` = newer deploy still in flight, call again with the new args; `run_doctor` / `ask_user` = resolution failed, call doctor or surface to the user).
   - `still_running` → call `percher_wait_for_deploy` again with the same args (`recovery.args` is pre-filled).
3. **`percher_doctor`** is the recovery hub — prefer it over manually choosing `percher_logs` / `percher_deploys_inspect` / `percher_reproduce`. It returns the same `recovery` shape as publish/wait, so you keep following `recovery.nextAction` until you hit `none` (success), `ask_user` (surface to user), or a concrete next tool. Doctor will point at the right low-level tool if one is genuinely needed.
4. **Whenever a tool returns `recovery`, follow `recovery.nextAction` exactly.** Don't regex-parse the suggestion text. The `recovery` contract is the machine-readable next step — the suggestion is for the user.

The standard agent path is:

```text
publish (waitForLive: false)
  -> wait_for_deploy
  -> if recovery.nextAction = run_doctor: doctor -> exact action
```

The synchronous form (`percher_publish` without `waitForLive`) still works and is fine for one-shot scripts; the async path exists for agents that want to stay responsive during the 30-90s build.

### Agent recovery contract

Every Percher MCP tool that can fail returns a `recovery` field with a machine-readable next step. The full set of actions:

```text
recovery.nextAction =
    "none"              // success or terminal — stop
  | "open_login"        // open recovery.url in browser, then retry
  | "wait_deploy"       // call percher_wait_for_deploy(recovery.args)
  | "run_doctor"        // call percher_doctor(recovery.args) — recovery hub
  | "set_env_vars"      // set recovery.args.keys via percher_env_set, then retry
  | "fix_problems"      // doctor reported source-level issues; fix the listed files/lines
  | "retry"             // transient — call the same tool again with same args
  | "fix_config"        // percher.toml is invalid or [env] contract is wrong
  | "ask_user"          // ask recovery.prompt verbatim and wait
  | "inspect_build_log" // advisory hand-off — doctor sometimes routes here
```

Rules every agent must follow:

1. Always read `recovery.nextAction`. Never regex-parse the suggestion text — that's for surfacing to the user.
2. Call EXACTLY the suggested tool with EXACTLY `recovery.args`. Don't strip `mode` or `deployId` — they're load-bearing.
3. If `recovery.nextAction` is `ask_user`, ask `recovery.prompt` verbatim and wait for the user.
4. `run_doctor` is the canonical entry to recovery. Doctor will route to the right low-level tool if one is genuinely needed.
5. Repeat until `recovery.nextAction` is `none` (success) or `ask_user` (needs the user).

Recovery error codes you may see (in `recovery.args.code` or in the error envelope): `DAILY_QUOTA_EXCEEDED`, `DEPLOY_RATE_LIMITED`, `REQUIRED_ENV_MISSING`, `ENV_KEY_UNDECLARED`, `RETRY_LIMIT_REACHED`, `already_in_progress`. The `recovery.nextAction` already encodes the right response — the codes are for surfacing context to the user.

### [env] contract in percher.toml

Going forward, percher.toml uses an explicit env contract. Source code that references env keys not in any of these lists blocks the deploy with `recovery.nextAction = "fix_config"`:

```toml
[env]
required = ["OPENAI_API_KEY"]  # must exist before deploy queues
optional = ["SENTRY_DSN"]      # may be referenced; not required
ignore   = ["NODE_ENV"]        # explicitly ignored by the scanner
```

Rules: a key may appear in only one of `required` / `optional` / `ignore`; all keys must be `UPPER_SNAKE_CASE`; empty arrays are allowed (meaning "explicitly empty"). The legacy `[env] KEY = "placeholder"` shape continues to parse for back-compat.

MCP config:

```json
{
  "mcpServers": {
    "percher": {
      "command": "npx",
      "args": ["-y", "@percher/mcp"]
    }
  }
}
```

Never put secrets in percher.toml. Use `percher_env_set` to set environment variables.

### AI-native tools (diagnosis, cost optimization, billing)

Beyond publish/logs/env/rollback, Percher exposes MCP tools that let the assistant actually help the user run their apps, not just deploy them:

- **`percher_diagnose_crash`** — when the user says "my app isn't working", fetches the latest crash report + AI-generated explanation + suggested fix. Read-only.
  Returns `{ state, app, crash { explanation, suggestion, severity, logTail }, summary }`. Stale reports (>24h old) on recovered apps are filtered out so the AI doesn't surface irrelevant fixes.
- **`percher_app_insights`** — cost-optimization and reliability suggestions per app: idle apps to archive, memory pressure needing an upgrade, over-provisioned plans that could downgrade, failing-deploy warnings. Each insight includes an `actionHint` naming the exact follow-up tool to call.
- **`percher_billing_upgrade(plan)`** — returns a Polar checkout URL for upgrading. The user confirms payment in their browser; Percher never changes subscriptions server-side.
- **`percher_billing_portal`** — returns a Polar customer-portal URL for managing the current subscription (downgrade, cancel, update card, invoices). Returns `url: null` for users who have never subscribed — suggest `percher_billing_upgrade` instead.
- **`percher_deploys` / `percher_deploys_inspect`** — when a deploy fails, list recent deploys (id, status, errorMessage) and inspect a specific one to see the full buildLog. Pass `latestFailed: true` to auto-pick the most recent failure on an app. The buildLog is the canonical source of truth for build-time errors. Use these whenever `percher_publish` or `percher_push` reports a failure with a deploy ID — they replace the old workaround of hitting the REST API directly to read /apps/:id/deploys.

Typical agentic flow:

1. User: "check my apps"
2. Agent calls `percher_app_insights` → sees `overprovisioned` with `actionHint: { tool: "percher_billing_portal" }`
3. Agent: "your maker plan looks overprovisioned. Want to downgrade?"
4. User: "yes"
5. Agent calls `percher_billing_portal` → gets URL
6. Agent: "open this to manage your subscription: https://polar.sh/..."

## For AI agents without MCP (ChatGPT, Claude web, Codex, any chat agent)

`publish` is the recommended deploy command.
It auto-detects the framework, generates `percher.toml` if missing, packages the project, deploys it, and returns a structured error with explanation + suggestion if anything fails. Before upload it prints a pre-flight summary (detected framework, Node version + its source — `engines.node`, `.nvmrc`, or the LTS default — build/health/port, bundle size). On failure the build-log tail, missing env vars, and any classified diagnosis are printed inline; pre-build infra failures get the `infra_unavailable` class so the agent knows to retry rather than fix code. Use `publish --dry-run` to verify the bundle without uploading.

Reach for the lower-level `login + init + push` flow only when you specifically need to isolate one phase — both flows hit the same server-side build, so the only thing you gain is where the error surface lives.

Guide the user through these copy-paste steps:

1. Install: no install needed, uses npx
2. Login: `npx percher login` (opens browser for account creation + auth)
   - Or: create a token at https://percher.app/settings and run `npx percher login --token TOKEN`
3. Deploy: `npx percher publish` (auto-detects framework, builds, deploys)

If the user's project doesn't have a `percher.toml`, generate one:

```toml
[app]
name = "app-name-here"
runtime = "node"

[web]
port = 3000
health = "/health"
# password = true   # uncomment to password-protect the site
```

Runtime options: `node`, `bun`, `python`, `static`, `docker`

After deploy, the app is live at `https://app-name-here.percher.run`.

To password-protect a site: set `password = true` in `[web]` and set the `SITE_PASSWORD` env var: `npx percher env set SITE_PASSWORD=mysecret`. Visitors will see a branded login page before accessing the app.
To set environment variables: `npx percher env set KEY=value`
To check status: `npx percher doctor` (also validates `percher.toml` — strict, unknown keys fail)
To see runtime logs: `npx percher logs`
To see the BUILD log of a failed deploy: `npx percher logs --build` (defaults to latest failed) or `npx percher logs --build dep_xyz`
To diagnose a crash: `npx percher doctor` + read the crash report from the dashboard
To see cost suggestions: read `/apps/:app/insights` from the API, or use the dashboard Suggestions card on the overview tab

## CI/CD & GitHub Actions

### Deploy from GitHub Actions (PERCHER_TOKEN)

`bunx percher publish` works fully non-interactively when the `PERCHER_TOKEN` env var is set — no browser login needed. Get a token at https://percher.app/settings → API tokens.

```yaml
# .github/workflows/deploy.yml
name: Deploy to Percher
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v2
      - run: bunx percher publish
        env:
          PERCHER_TOKEN: ${{ secrets.PERCHER_TOKEN }}
```

Store the token as a GitHub Actions secret: repo Settings → Secrets and variables → Actions → New repository secret → name it `PERCHER_TOKEN`.

**Preview deploys on pull requests** — the branch name is auto-detected from `GITHUB_HEAD_REF`:

```yaml
      - run: bunx percher publish --preview
        env:
          PERCHER_TOKEN: ${{ secrets.PERCHER_TOKEN }}
```

Also works with GitLab CI (`CI_COMMIT_REF_NAME`), Vercel (`VERCEL_GIT_COMMIT_REF`), and Netlify (`BRANCH`/`HEAD`) — the branch is auto-detected and used as the deploy note.

### GitHub webhook auto-deploy (no CI runner needed)

For public repos, Percher can deploy on every push without GitHub Actions runners or billing minutes.

**MCP:** `percher_github_connect` — clones the repo, queues an initial deploy, returns `{ deployId, branch, webhookInstructions: { payloadUrl, secret, settingsUrl } }`.
If webhook setup failed (`webhookSetupFailed: true`), call `percher_github_setup_webhook` to regenerate the secret without re-cloning.

**CLI:**

```bash
bunx percher github connect https://github.com/owner/repo --branch main
# If webhook setup failed:
bunx percher github setup-webhook --app
```

Then add the webhook in GitHub: repo Settings → Webhooks → Add webhook with the printed Payload URL, Content type `application/json`, and the secret. Pushes to the tracked branch auto-deploy from that point on.

Only public repos are supported; private repos use the PERCHER_TOKEN + GitHub Actions path instead.

## What it does

Percher takes a project directory, detects the runtime (Node.js, Bun, Python, static, or Docker), builds it with Nixpacks (or your Dockerfile for runtime=docker), and runs it in an isolated Docker container behind Caddy with automatic TLS. Each deploy is a git commit in an internal Forgejo repo — roll back to any version instantly.

## Key features

- **One-command deploy**: `npx percher publish` is the recommended path — handles login + config + deploy + structured error diagnosis. Add `--dry-run` to verify the bundle. The `login + init + push` flow exists for granular control but hits the same pipeline.
- **MCP server**: 38 tools for AI assistants (publish, wait_for_deploy, logs, env, rollback, domains, doctor, deploys/inspect, crash diagnosis, cost insights, billing, GitHub auto-deploy, account export/delete, ai-files install/update, etc.). Returns a discriminated `recovery` shape — agents read `recovery.nextAction` (e.g. `wait_deploy`, `set_env_vars`, `inspect_build_log`) instead of regex-parsing summaries.
- **Managed PocketBase**: Built-in database with `[data] mode = "pocketbase"` in percher.toml. Percher auto-creates a superuser and injects `POCKETBASE_URL`, `POCKETBASE_ADMIN_EMAIL`, and `POCKETBASE_ADMIN_PASSWORD` as env vars — no manual setup needed.
  Rotate the superuser later with `npx percher data reset-superuser` (default: rotates silently and re-injects the new credential as encrypted env). Add `--reveal` to also print the plaintext once if you need to log into the PB admin UI directly.
- **Graceful zero-downtime deploys**: drain window + canary monitor + auto-rollback if the new version fails its health check within 60s
- **Multi-instance + auto-scaling**: configure `[resources] instances = N` or `[resources.autoscale] min/max`, and Percher runs behind a Caddy load balancer with active health checks
- **Preview deploys**: `npx percher publish --preview` creates a temporary URL like `--p-.percher.run` for testing — it does not replace the live version. **Free plan: 1 active preview per app**, and running `--preview` again automatically rotates the previous one out (the CLI prints "replaced previous preview" — no manual `percher preview discard` needed). Paid plans: multiple parallel slots; you hit a 403 at the limit and discard explicitly.
- **Custom domains**: `npx percher domains add yourdomain.com` with automatic DNS verification
- **Crash diagnostics**: Classified build errors with AI-generated fix suggestions (surface via the `percher_diagnose_crash` MCP tool)
- **External uptime monitoring + per-route errors**: Percher pings paid apps from outside the cluster on a plan-tier cadence (Starter=5min, Maker=1min, Pro=30s; Free disabled) and surfaces the result in the dashboard's Health tab — live status, 24h/7d/30d uptime %, latency sparkline, 7-day outage log. Three consecutive failed probes flip an app to `down` and fire the `app.unhealthy` webhook + Discord; recovery fires `app.recovered`. The Analytics tab's "Top error paths" panel shows which routes returned 4xx/5xx, sorted by 5xx count and summed across the selected window, so a route that fails consistently rises to the top of the 30-day view.
  Distinct from `app.crashed`: that fires on container-level failures (process exit / OOM); uptime fires when the external HTTP path is broken (DNS / Caddy / TLS / 5xx).
- **Cost optimization**: Rule-based insights over request/crash/deploy history — idle apps, memory pressure, over-provisioned plans
- **Environment variables (build vs runtime)**: Encrypted at rest (AES-256-GCM), managed via CLI (`percher env set KEY=value`) or MCP (`percher_env_set`). **Two injection points**: build-time (baked into the static client bundle by Next.js / Vite / Astro / Expo) and runtime (available to the running container). Vars matching a public-prefix convention auto-forward to build with no TOML changes — `NEXT_PUBLIC_*`, `VITE_*`, `PUBLIC_*`, `REACT_APP_*`, `VUE_APP_*`, `EXPO_PUBLIC_*`. Server-secret build tokens that don't match a prefix (Sentry source-map upload, private npm registries) require explicit opt-in via `[build] pass_env = ["SENTRY_AUTH_TOKEN", "NPM_TOKEN"]` in percher.toml. Keys NOT matching a prefix AND NOT in `pass_env` stay runtime-only — the security boundary is preserved. Every deploy log starts with a `Build env: A, B (2 vars) / Runtime env: 8 vars` summary so you can verify what landed where without bundle archaeology. If a `*_PUBLIC_*` var compiles to `undefined` in the browser, check the build log first — most likely it's not set at all (`percher env list` confirms the var is in the env store, but not that it actually reached the build).
- **Outbound HTTPS to external APIs (exa, openrouter, OpenAI, Stripe, etc.)**: App containers run on an internal Docker network and reach the public internet through a forward HTTP proxy automatically injected as `HTTP_PROXY`/`HTTPS_PROXY`=`http://egress-proxy:8888` (plus `NODE_USE_ENV_PROXY=1`). **Bun and Node 24+** honor it automatically — `fetch("https://api.exa.ai/...")` just works. **Python (requests/httpx)** and **Go (http.DefaultTransport)** honor `HTTPS_PROXY` natively.
  **Node 22 and earlier**: wire up undici at startup: `setGlobalDispatcher(new ProxyAgent(process.env.HTTPS_PROXY))`. Direct outbound is blocked by design — `ENETUNREACH 1.1.1.1:443` is the symptom of the runtime not using the proxy, not of missing egress. The proxy is **permissive — no host allowlist**: any public hostname works once `HTTPS_PROXY` is honored, including standard web-platform services like web-push (`fcm.googleapis.com`, `updates.push.services.mozilla.com`, `web.push.apple.com`), email APIs (Resend, SendGrid, Postmark), payments (Stripe, Polar, Paddle), analytics (Sentry, PostHog), and any other third-party API.
- **Webhooks**: Signed (HMAC-SHA256) event deliveries for `deploy.failed`, `app.crashed`, `app.unhealthy`, `app.recovered`, `domain.expiring`. `app.crashed` fires on container-level failures (process exit / OOM); `app.unhealthy` / `app.recovered` fire when external uptime probes can't reach the app (Starter+). `app.unhealthy` is gated by a 15-minute cooldown so a flapping app doesn't pager-spam your receiver; `app.recovered` is not cooldown-gated but fires only when the matching `app.unhealthy` was actually delivered to your receiver (paired, per channel) — so every recovered event you see corresponds to an unhealthy event you saw.
- **GitHub auto-deploy**: `bunx percher github connect https://github.com/owner/repo` clones the repo, deploys it, and sets up a push webhook — every push to the tracked branch triggers a fresh deploy (public repos only). If webhook setup fails, recover with `bunx percher github setup-webhook`. For private repos or full CI/CD control, use `PERCHER_TOKEN` with `bunx percher publish` in GitHub Actions.
- **CI/CD token auth**: `PERCHER_TOKEN= bunx percher publish` works non-interactively — suitable for GitHub Actions, GitLab CI, Vercel, and Netlify. Get a token at https://percher.app/settings. The CLI auto-detects the CI environment and uses the branch name as the deploy note when running `--preview`.
- **AI assistant rules**: `npx percher ai-files install` installs CLAUDE.md / .cursorrules / .windsurfrules into the current project so the AI assistant knows how to deploy to Percher. `ai-files update` updates to the latest version; `ai-files status` shows what's installed.
- **GDPR data export**: `npx percher account export` downloads all account data as a JSON file (GDPR Art. 20). Add `--include-app-data` to also download PocketBase tarballs for every app.
- **App rename**: `npx percher rename new-name` changes the subdomain. Once per 30 days; the old name is reserved for 60 days.
- **Daily deploy quota (abuse mitigation)**: each account has a per-day cap on deploys, with separate counters for live vs preview so a review cycle doesn't eat the live budget. Defaults: **Free 50 live + 25 preview, Starter 100/50, Maker 200/100, Pro 1000/500**. Counters reset at 00:00 UTC. Hitting the cap returns 429 `DAILY_QUOTA_EXCEEDED` with structured fields `{ kind, used, limit, resetAt, retryAfterSec }` plus a `Retry-After` header. The MCP `percher_publish` and `percher_push` tools surface this as `recovery.nextAction = "ask_user"` with `args.code = "DAILY_QUOTA_EXCEEDED"` — agents must NOT retry until `resetAt`; surface the message to the user instead. Live and preview counters are independent: a 429 on live (`{ kind: "live" }`) leaves preview deploys still available, and vice versa. There's also a per-app burst rate limit (`DEPLOY_RATE_LIMITED`) that fires on a sliding 60s window — that one IS retryable after a few seconds.

## Configuration

Apps are configured via `percher.toml` in the project root:

```toml
[app]
name = "my-app"
runtime = "node"

[web]
port = 3000
health = "/health"

[data]
mode = "pocketbase"

# Optional — opt-in build-time env exposure for non-prefix server-secrets
# only (Sentry source-map upload, private npm registries, design-system
# API tokens). Vars matching the public-prefix conventions
# (NEXT_PUBLIC_*, VITE_*, PUBLIC_*, REACT_APP_*, VUE_APP_*, EXPO_PUBLIC_*)
# auto-forward to build without needing to be listed here. Values come
# from `npx percher env set`, never from this file.
[build]
pass_env = ["SENTRY_AUTH_TOKEN", "NPM_TOKEN"]

# Optional — static instance count
[resources]
instances = 2

# Optional — CPU-based autoscaling (mutually exclusive with `instances`)
[resources.autoscale]
min = 1
max = 4
```

Plan-gated limits: autoscale max and instances both clamp to the plan's instance cap (free=1, starter=1, maker=2, pro=4). Exceeding a limit is silently clamped, with a note in the build log.

## Deploy history and retention

Percher keeps deploy history per app. The number of deploys with full rollback capability depends on the plan. Older deploys keep their metadata (status, timestamp, error message), but Docker images and tarballs are cleaned up automatically.

| Plan    | Rollback deploys | Build logs | Runtime logs |
|---------|------------------|------------|--------------|
| Free    | 5 most recent    | 3 days     | 24 hours     |
| Starter | 15 most recent   | 14 days    | 7 days       |
| Maker   | 30 most recent   | 30 days    | 30 days      |
| Pro     | 60 most recent   | 90 days    | 90 days     |

## Pricing

- Free: 2 apps, auto-sleep after 10 min idle
- Starter (€3/mo): 5 apps, always-on, custom domains
- Maker (€12/mo): 10 apps, 1 GB memory, 1 CPU, up to 2 instances per app
- Pro (€29/mo): 25 apps, 2 GB memory, 2 CPUs, up to 4 instances per app

## Links

- Website: https://percher.app
- Docs: https://docs.percher.app (also reachable at https://percher.app/docs). Each section is its own URL — fetch the slug directly.
  Slugs: `quick-start`, `recommended-setup`, `cli-reference`, `env-vars`, `password-protection`, `retention`, `telemetry`, `versions`, `ai-agents`, `crash-diagnostics`, `monitoring`, `graceful-deploys`, `multi-instance`, `cost-insights`, `webhooks`, `billing`, `pocketbase`, `export-data`, `percher-toml`, `mcp-tools`, `migrate-supabase`, `migrate-convex`, `migrate-vercel`, `custom-domains`, `persistent-data`, `backup-and-restore`. Use `https://docs.percher.app/` (or `https://percher.app/docs/`) — both work. Common URL guesses (`/cli/publish`, `/runtimes/node`, `/publish`, etc.) alias to the closest real slug (`cli-reference`). Anything else on `docs.percher.app/` 302-redirects to the docs index — never to a dashboard 404.
- Connect your agent: https://percher.app/connect
- API spec: https://api.percher.run/openapi.json
- Webhook guide: https://docs.percher.app/webhooks
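Pulled together, the agent recovery contract described earlier reduces to a small loop on the agent side. A sketch, where `callTool(name, args)` is a hypothetical stand-in for whatever MCP client the agent runs on; the action-to-tool mapping comes from this document, and actions the loop can't dispatch itself (`retry`, `set_env_vars`, `fix_problems`, `open_login`, `fix_config`) are handed back to the agent:

```javascript
// Actions that map 1:1 to a follow-up Percher MCP tool.
const TOOL_FOR_ACTION = {
  wait_deploy: "percher_wait_for_deploy",
  run_doctor: "percher_doctor",
};

// Follow recovery.nextAction until a terminal state, calling EXACTLY
// the suggested tool with EXACTLY recovery.args, per the contract.
async function followRecovery(callTool, result, maxSteps = 10) {
  for (let i = 0; i < maxSteps; i++) {
    const rec = result.recovery;
    if (!rec || rec.nextAction === "none") return { done: true, result };
    // ask_user: surface rec.prompt verbatim and stop.
    if (rec.nextAction === "ask_user") return { done: false, prompt: rec.prompt };
    const tool = TOOL_FOR_ACTION[rec.nextAction];
    // Actions like retry / set_env_vars / fix_problems need the agent
    // itself to act, so hand them back unchanged.
    if (!tool) return { done: false, result };
    result = await callTool(tool, rec.args);
  }
  throw new Error(`recovery did not terminate within ${maxSteps} steps`);
}
```

The `maxSteps` bound is a defensive addition, not part of the documented contract: it turns a pathological recovery loop into an explicit error instead of an infinite tool-call chain.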