
Last June, I wrote about Claude Code and why I switched from Cursor and Windsurf. At the time, Claude Code was the gold standard for me. It still is, fundamentally – but for the past few weeks, I've been primarily using pi by Mario Zechner. And the reasons go beyond just coding.
First, a quick look at what's happened in the past few months. Because: a hell of a lot. The big players are now pouring massive resources into their coding agents. Anthropic has continuously expanded Claude Code, OpenAI introduced Codex, Google is pushing Jules, and with Amp, Droid, and opencode, there are even more alternatives on the market. It's a full-blown arms race. And then there's pi. Built by Austrian hobby programmer* Mario Zechner.
pi is a terminal-based coding agent that's intentionally kept lean. It ships with exactly four tools: read, write, edit, and bash. That's it. No built-in plan mode, no sub-agents, no permission popups. The philosophy: if you don't need it, it won't be built. And if you do need it, you can build it yourself – or install it.
A key difference from Claude Code: pi is open source, under the MIT license. The entire codebase is on GitHub. Anyone can inspect it, contribute, or fork it. With Claude Code, you don't have that transparency. You simply don't know what's happening under the hood. But more on that later.
Installation is straightforward:
npm install -g @mariozechner/pi-coding-agent

Claude Code remains an excellent tool. But pi offers me something Claude Code doesn't: transparency and a learning effect. Over the past few weeks of using pi, I've learned more about how LLM agents work than I did in all the months before. That's down to several factors.
Probably the most important point: pi taught me how crucial proper context management is. There's now a term in the community called the "Dumb Zone", coined by Dex Horthy from HumanLayer. The idea: once you hit roughly 40 to 60 percent utilization of the context window, LLMs start performing significantly worse. They lose the thread, hallucinate interfaces, and forget the original goal. The more context, the dumber the model.
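The heuristic is simple enough to sketch in a few lines. This is my own illustration of the "Dumb Zone" rule of thumb, not pi's actual code – the window size, threshold, and function names are all assumptions for the example:

```typescript
// Sketch of the "Dumb Zone" heuristic: flag a session once context
// utilization crosses a threshold. Numbers and names are illustrative.

const CONTEXT_WINDOW = 200_000; // e.g. a 200k-token context window
const DUMB_ZONE_START = 0.4;    // ~40% utilization, the low end of the danger zone

function contextUtilization(usedTokens: number, windowSize = CONTEXT_WINDOW): number {
  return usedTokens / windowSize;
}

function shouldStartNewSession(usedTokens: number): boolean {
  return contextUtilization(usedTokens) >= DUMB_ZONE_START;
}

// 90k tokens used of a 200k window is 45% – time to hand off.
console.log(shouldStartNewSession(90_000)); // true
console.log(shouldStartNewSession(50_000)); // false (25%)
```

The point isn't the arithmetic, of course – it's that the number is visible at all, so you act on it before quality degrades.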
pi permanently shows me the current context utilization in the footer. I can see how many tokens are being consumed and when it's time to switch sessions. This makes an enormous difference in the quality of results. For those who want a bit more hand-holding here, there's already an extension for that.
Another point that fascinated me: the system prompt. Claude Code uses a system prompt of over 10,000 tokens. That's a massive block of instructions sent with every interaction, naturally eating into the context window. And this prompt changes with every release – affecting model behavior and potentially breaking workflows. Mario Zechner actually built a dedicated tool called cchistory that lets you track changes to the system prompt and tool definitions across different Claude Code versions.
pi takes a radically different approach. The entire system prompt fits in a few lines and essentially says: You're a coding assistant, you have four tools, work in the user's project. Less prompt overhead means more room for the actual context: the code and the task at hand.
What I find particularly exciting about pi is the extension system. Extensions are TypeScript modules that can add arbitrary functionality to pi: new tools, commands, keyboard shortcuts, event handlers, or UI components. The brilliant part: you can have your LLM write extensions for you that then benefit your own workflow. It's essentially a self-improving system. Or as Armin Ronacher (Flask creator and Sentry CTO) puts it in his excellent blog post about pi: you tell the agent to extend itself. And it does.
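To make that concrete: an extension is just a TypeScript module that hands the agent extra capabilities. The sketch below shows the rough shape such a module could take – the `ToolDefinition` interface and the `word_count` tool are entirely my own invention for illustration, not pi's real extension API:

```typescript
// Hypothetical sketch of a pi-style extension. The interface below is
// invented for illustration and does not match pi's actual API.

interface ToolDefinition {
  name: string;
  description: string;
  // Runs the tool; the returned string flows back into the context.
  execute: (args: Record<string, string>) => Promise<string>;
}

// The extension exports a tool the agent can call alongside
// read/write/edit/bash.
export const wordCountTool: ToolDefinition = {
  name: "word_count",
  description: "Count the words in a given string",
  execute: async (args) => {
    const words = (args.text ?? "").trim().split(/\s+/).filter(Boolean);
    return `${words.length} words`;
  },
};
```

Because extensions are this small and self-contained, having the agent write one for its own workflow is a short prompt away.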
Particularly noteworthy here is Nico Bailon, who seemingly publishes new extensions and skills on a daily basis. His pi-interactive-shell, an extension that lets pi control interactive CLIs in an observable overlay, has thousands of stars by now. His pi-messenger extension for multi-agent communication also shows what's possible with the system. The community around pi is growing rapidly. From Doom in the terminal to code review tools, everything is out there.
One feature I particularly appreciate: pi supports over 15 LLM providers. Anthropic, OpenAI, Google, xAI, Groq, Cerebras, OpenRouter, Ollama, and many more. And you can switch models mid-session (Ctrl+L). I use this constantly, switching between Anthropic's Opus & Sonnet and OpenAI's Codex 5.3 in every session. No other coding agent I know handles this switching as elegantly.
Interestingly, pi is also the technical foundation of the hyped project OpenClaw, the open-source AI agent by Peter Steinberger, about which I've already written an article for DER STANDARD. OpenClaw uses pi in SDK mode as its agent engine. Steinberger has since joined OpenAI, and the project continues as a foundation. This shows how powerful the underlying architecture truly is. And that pi goes far beyond being just a coding tool.
My workflow is similar to Claude Code but has become more intentional. I formulate my task, keep an eye on context utilization, and start a new session after complex tasks. I use the AGENTS.md file (the equivalent of Claude Code's CLAUDE.md) for project-specific instructions.
For context management, I use pi-amplike – a skill package that offers /handoff among other things. This lets me transfer the current context into a new, focused session when the current one gets too full. Also extremely useful: session-query, which lets me search through earlier sessions without polluting the current context.
As additional skills, I use the frontend-design skill by Vercel and pi-mcp-adapter for my Payload CMS projects. The MCP integration works really well here. Even though pi doesn't natively support MCP, it can be elegantly solved through skills and extensions.
I find the steering messages particularly productive: while pi is working, I can press Enter to queue a message that gets delivered after the current tool call. This lets me guide the agent in real-time without aborting it completely.
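The mechanic is easy to picture as a queue that gets drained between tool calls. This is a deliberately simplified model of my own, not pi's implementation – the class and method names are assumptions:

```typescript
// Simplified model of steering messages: input typed while the agent is
// working is queued, then delivered once the current tool call finishes.
// My own illustration, not pi's actual code.

type Message = { role: "user" | "tool"; content: string };

class SteeringQueue {
  private pending: Message[] = [];

  // Called when the user presses Enter mid-run.
  push(content: string): void {
    this.pending.push({ role: "user", content });
  }

  // Called by the agent loop after each tool call completes:
  // queued steering messages are appended to the transcript.
  drainInto(transcript: Message[]): void {
    transcript.push(...this.pending);
    this.pending = [];
  }
}

const transcript: Message[] = [{ role: "tool", content: "bash: tests passed" }];
const queue = new SteeringQueue();
queue.push("Skip the refactor, just fix the failing test");
queue.drainInto(transcript); // steering message lands after the tool result
```

The upshot: the agent keeps its momentum, and your correction arrives at the next natural seam instead of killing the run.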
Claude Code is and remains an excellent tool. Anyone looking for something that just works and requires little configuration is still well served there. But if you want to understand how LLM agents work under the hood, if you want to control your workflow down to the last detail, and if you're willing to get your hands a little dirty, you should give pi a chance.
Through using pi, I've learned more about context management, system prompts, and the limits of LLMs than through any other tool before. And that's exactly what makes it the most exciting coding agent on the market for me right now. Even though it calls itself a "Shitty Coding Agent."
What I learned building an opinionated and minimal coding agent
cchistory – Claude Code System Prompt Tracker
Awesome pi Agent – Community Extensions
Escaping the Dumbzone – Context Engineering
Pi: The Minimal Agent Within OpenClaw – Armin Ronacher
*that's a joke, by the way.