Run PI across every provider, without the credential plumbing

Run PI against any of the 14 model providers it supports without managing credentials yourself. PI is a minimal terminal coding harness, and Spinup projects the keys for each provider into a persistent, isolated environment so PI stays minimal and the runtime carries the rest.

What Makes PI Different

A minimal harness with room to grow

PI intentionally ships without MCP, sub-agents, permission popups, or background execution. It leaves those to extensions. That design makes the environment around PI load-bearing.

Credentials for every provider

PI supports Anthropic, OpenAI, Google, Azure, Bedrock, Mistral, Groq, Cerebras, xAI, Hugging Face, Kimi, MiniMax, OpenRouter, and Ollama. Spinup projects a credential for each provider so the harness can switch models without env-file juggling.

Print and RPC modes fit the API

PI's four operational modes (interactive, print/JSON, RPC, and SDK) all pair naturally with a stable HTTPS endpoint. Send a task to an agent, get back structured output your code can pipe downstream.

Extensions have somewhere to live

PI extensions are TypeScript modules that assume a filesystem, installed packages, and persistent state. Spinup keeps all three stable across runs, so extensions never rebuild from scratch.
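As a sketch of what "persistent state across runs" means in practice, here is an extension-style helper that keeps a counter in a JSON file on the environment's filesystem. The file path and state shape are illustrative assumptions, not part of PI's extension API; the point is that the file survives between runs because Spinup's filesystem does.

```typescript
// Sketch: persist extension state between runs via a JSON file on the
// environment's filesystem. Path and shape are illustrative, not PI API.
import * as fs from "fs";

function loadRunCount(path: string): number {
  try {
    // Missing or corrupt file means no prior runs.
    return JSON.parse(fs.readFileSync(path, "utf8")).runs ?? 0;
  } catch {
    return 0;
  }
}

function bumpRunCount(path: string): number {
  const runs = loadRunCount(path) + 1;
  fs.writeFileSync(path, JSON.stringify({ runs }));
  return runs;
}
```

On a laptop this file vanishes when the sandbox is recycled; in a persistent environment the second invocation reads the first invocation's state.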

Long sessions keep their context

PI's tree-structured session history and auto-summarization support long conversations. Spinup's persistent environment keeps the surrounding state (files, packages, generated artifacts) in sync across those turns.

The Spinup Angle

The harness stays lean. The runtime carries the rest.

PI's design philosophy is explicit: provide extension primitives, not opinionated features. That choice pushes responsibility for state, secrets, and lifecycle out of the harness and onto the environment around it.

Spinup is that environment. Each agent gets its own isolated filesystem, projected credentials per model provider, and snapshots that survive between runs. PI stays lean. The runtime handles the parts PI deliberately leaves unanswered.

If your team swaps from PI to another harness later, the agent, environment, and secrets stay the same. Only the execution engine changes.

FAQ

Common questions about running PI on Spinup

Why does a minimal harness like PI benefit from a cloud runtime?

PI is a minimal terminal coding harness. It intentionally leaves MCP, sub-agents, permission controls, and background execution to extensions. Those extensions need a stable environment to run inside. Spinup provides the persistent filesystem, projected secrets, and controlled network access that PI's minimal design relies on but does not ship itself.

Does Spinup manage API keys for every provider PI supports?

PI reaches Anthropic, OpenAI, Google, Azure, Bedrock, Mistral, Groq, Cerebras, xAI, Hugging Face, Kimi, MiniMax, OpenRouter, and Ollama. Spinup projects credentials for each provider into the environment under stable names such as `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, and `GROQ_API_KEY`. PI reads them the same way it would locally, without scattering env files across your machine.
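A small sketch of what "stable names" buys you: code can check which provider credentials are present without knowing where they came from. The variable names below follow common provider conventions (`OPENAI_API_KEY`, etc.); the exact set Spinup projects is an assumption here.

```typescript
// Sketch: detect which provider credentials are projected into the
// environment. Env var names follow common provider conventions and are
// assumptions, not a documented Spinup contract.
const PROVIDER_KEYS: Record<string, string> = {
  anthropic: "ANTHROPIC_API_KEY",
  openai: "OPENAI_API_KEY",
  groq: "GROQ_API_KEY",
  // ...the remaining providers follow the same pattern
};

function availableProviders(env: Record<string, string | undefined>): string[] {
  return Object.entries(PROVIDER_KEYS)
    .filter(([, key]) => Boolean(env[key]))
    .map(([provider]) => provider);
}
```

Calling `availableProviders(process.env)` inside the environment lists every provider PI can switch to, with no env files involved.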

Can I call PI's RPC or print/JSON modes through a Spinup agent endpoint?

Yes. Spinup exposes one stable HTTPS endpoint per agent. The runtime dispatches the request to PI inside the isolated environment and returns structured output. PI's print/JSON mode and RPC mode map cleanly to that shape, so application code can call an agent and pipe results downstream.
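To make the shape concrete, here is a sketch of application code building a request for an agent endpoint. The URL and JSON body are hypothetical, not Spinup's documented API; the structured response would come from PI's print/JSON mode on the other side.

```typescript
// Sketch: build a request for a Spinup agent endpoint. URL and request
// shape are hypothetical placeholders, not a documented API.
interface AgentRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildAgentRequest(endpoint: string, prompt: string): AgentRequest {
  return {
    url: endpoint,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    },
  };
}

// Usage (endpoint is hypothetical):
// const req = buildAgentRequest("https://agents.example.dev/my-agent", "fix the failing test");
// const res = await fetch(req.url, req.init);
// const output = await res.json(); // structured output to pipe downstream
```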

What happens if I compare PI against another harness later?

The agent keeps the same environment, secrets, and controls. Only the harness layer changes. Spinup is designed around swappable harnesses, so teams can compare PI against Claude Code, OpenClaw, or another harness without rebuilding the setup.

Related

The runtime stays bigger than any one harness

Early access

Keep PI minimal. Let the runtime carry the rest.

Join the early-access waitlist if this is the runtime shape your team has been missing.