From prototype to live URL

From Claude artifact to live URL — in five commands.

Claude built something useful. A todo app. A pricing calculator. A landing page. A small internal tool. You want it as a real URL — your own domain, your own data, accessible to you and the people you share it with — without setting up Vercel, GitHub Actions, a database, or a CI pipeline. This is the path.

Who Percher is for

Percher is for personal apps — projects you build for yourself or your close circle. It is not production-grade hosting today: no uptime SLA, no formal compliance attestation, single region in Germany. If what Claude built is heading toward paying customers or anything where downtime has financial consequences, a higher-tier provider is the right call.

The whole path

# 1. Make a local folder
mkdir my-app && cd my-app

# 2. Drop Claude's code into it
#    (paste from the artifact, or save Claude Code's files)

# 3. Let Percher write the config
bunx percher init

# 4. Set any API keys Claude needs
bunx percher env set ANTHROPIC_API_KEY=sk-ant-...

# 5. Ship it
bunx percher publish

The first three commands take about 20 seconds. Setting the env var adds roughly 30 seconds, and the build another 15–30. The live URL prints when publish finishes — paste it back to Claude and the app Claude wrote is running on your domain.

What Claude tends to build, and what Percher does with it

The categories below cover roughly 95% of what Claude generates in artifacts and code projects. For each, Percher's init command picks the right runtime automatically — you don't need to know which one your app needs.

  • A React + Vite SPA. This is the most common artifact shape. Percher ships it as runtime = "docker" with a multi-stage build — a Bun stage compiles the bundle, a slim Caddy image serves it. Instant cold starts, automatic SSL, SPA routing baked in.
  • A Next.js app. Percher detects next.config.js and runs it on the Node runtime. Server Components, Server Actions, ISR, image optimization — all work unchanged. The one thing that needs adjustment: Edge Functions become regular API routes.
  • An Express, Hono, or Fastify API. Percher detects the framework in package.json and runs the long-running Node server directly. Add a /health endpoint that returns 200 and the deploy health check passes.
  • A Python FastAPI / Flask / Django app. Detected from requirements.txt or pyproject.toml. The build falls through to Nixpacks — slower than the Node path, but it auto-detects the framework and writes a working Dockerfile.
  • A Bun server. If Claude wrote a Bun.serve handler, Percher ships it on the Bun runtime — fast cold builds, native HTTPS proxy, no Node compatibility shims.
  • Static HTML / a landing page. No build step, no Node — just files served by Caddy in a few-MB container. Cold start is instant; SSL is automatic.

Make Claude itself do the deploy

If you're using Claude Code (the CLI / IDE assistant), the whole loop above collapses into a single prompt. Install Percher's MCP server once:

bunx percher mcp     # prints the config to paste into Claude Code

From that point on, Claude has percher_publish, percher_logs, percher_env, percher_doctor, and ~33 other operations as native tool calls. You ask Claude to ship the project, it calls percher_publish, the build streams in the conversation, and the live URL is in the next message. If the deploy fails, Claude calls percher_doctor, reads the diagnosis, fixes the source, and retries — all inside the same conversation.

Teach Claude how Percher works

Drop a Percher-aware CLAUDE.md into your project and Claude gets the deployment instructions as conversation context — exactly what to put in percher.toml, how to add a health endpoint per framework, how to handle env vars, what to do if a deploy fails. The file is bundled with Percher and installable in one command:

bunx percher ai-files install

That writes CLAUDE.md, .cursorrules, and .windsurfrules into the project — whichever assistant you use picks up the right file. Run bunx percher ai-files update later to refresh them as the platform evolves.

Why this stage matters

The gap between "Claude wrote something useful" and "the app is at a real URL my friend can visit" is where most AI-built side projects die. Setting up Vercel, picking a region, writing a Dockerfile, configuring Postgres, wiring auth — that's a half-day of work even when you know how, and it's the half-day where the energy of the original prompt leaks away. Percher's pitch is that this gap should be five commands, not five hours, so the prototype actually becomes a thing.

FAQ

Can I deploy directly from claude.ai's artifact tab?

Not directly — claude.ai's artifacts run in a sandboxed preview that doesn't deploy externally. You copy the code into a local directory (or download the artifact if Claude provides a zip), then run `bunx percher publish` from that directory. The whole loop takes ~1 minute including the build.

What about Claude Code (the CLI / IDE assistant)?

That's the slickest path. Install Percher's MCP server (`bunx percher mcp` prints the config to paste into Claude Code's settings), and Claude Code gets `percher_publish` as a native tool call. You ask Claude to ship the project, it calls the tool, the build streams in the conversation, and the live URL comes back as a message. No tab switching.

What if Claude generated an Edge Function or used Vercel-specific APIs?

Convert it to the equivalent in standard Node. Edge functions (`runtime: 'edge'` in Next.js) become regular API routes. Vercel KV / Postgres / Blob storage have no direct equivalent on Percher — opt into the managed PocketBase sidecar with `[data] mode = "pocketbase"` for the SQLite + auth + file-storage shape, or point `DATABASE_URL` at any external Postgres (Neon, Supabase, your own). The CLAUDE.md file at the repo root has the exact patterns Claude can use directly.
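Putting the data option from the answer above into a file: a minimal percher.toml opting into the PocketBase sidecar might look like the sketch below. Only the `[data]` table and its `mode` key come from the documented switch; treat anything else you add as project-specific.

```toml
# percher.toml — illustrative sketch
# The managed SQLite + auth + file-storage sidecar:
[data]
mode = "pocketbase"
```

For an external Postgres instead, skip the `[data]` table entirely and point the app at it via the env command, e.g. `bunx percher env set DATABASE_URL=postgres://...` (connection string shown schematically).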

Does Percher integrate with Claude.ai natively?

No — there's no official Anthropic integration for claude.ai (the consumer chat). The MCP integration is for Claude Code, the developer-focused assistant. For claude.ai, the workflow is: ask Claude to write the app, copy the code into a local folder, then `bunx percher publish` from your terminal.

What if I have a CLAUDE.md file in my project?

If your project has a `CLAUDE.md` (the convention for project-specific Claude instructions), Claude reads it and uses it as context. Percher ships an installable `CLAUDE.md` template via `bunx percher ai-files install` — it teaches Claude exactly how to generate a working `percher.toml`, add health endpoints, set env vars, and handle the recovery loop if a deploy fails. Run it once per project and Claude has the deploy instructions baked into every prompt.

Ship the next Claude artifact

Free plan, no credit card. Five commands and Claude's code has a real URL.

Sign up free