ai-agents · multi-agent · paperclip · claude-code

Nine AI Agents, One Shared Decision Problem

3 min read

Last week I built a small experiment in multi-agent orchestration: nine AI agents wired into a corporate hierarchy, all sharing one decision problem. One sandbox, $5 of total budget, nine LLMs each given a different role and asked to converge on collective answers.

The experiment needed a substrate — a real-world decision space where the agents would have to actually choose, not just discuss in the abstract. I used Polymarket for this: a public prediction market that resolves outcomes to ones and zeros based on real events. Cheap micro-markets, lots of them, public resolution data, low stakes. Perfect substrate for studying agent disagreement at the scale of pocket change.

The tools: Claude Code wrote the integration layer. Paperclip provided the orchestration runtime — task delegation, governance, inter-agent communication. One evening of wiring things together.

The Org Chart

I gave each agent a real role with real responsibilities:

  • CEO — sets the strategy, makes final calls (mostly ignores the Risk Manager)
  • Head of Execution — carries out the group’s collective decisions
  • Research Director — digs into questions before the group commits
  • Risk Manager — the one who says no. Always says no. Gets overruled anyway.
  • Quant Analyst — crunches numbers, presents charts nobody asked for
  • Intelligence Analyst — monitors news and information flows
  • Market Psychologist — reads sentiment and crowd behavior (yes, really)
  • Creative Scout — finds off-the-radar opportunities

Nine agents. A full corporate hierarchy. Five dollars in the sandbox.
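For the curious: an org chart like this is mostly just data. Here's a minimal Python sketch of what the role definitions look like in spirit. The field names and prompts are illustrative, not Paperclip's actual schema, and only three of the roles are shown.

```python
# Hypothetical role definitions: each agent is just a different system prompt
# plus a place in the hierarchy. Field names are illustrative, not Paperclip's
# real configuration format.
ROLES = {
    "ceo": {
        "prompt": "You set strategy and make the final call on every decision.",
        "reports_to": None,
    },
    "risk_manager": {
        "prompt": "You identify downside risk and argue against overexposure.",
        "reports_to": "ceo",
    },
    "quant_analyst": {
        "prompt": "You evaluate positions numerically and present the figures.",
        "reports_to": "ceo",
    },
}

def build_agents(roles: dict) -> list[dict]:
    """Expand role definitions into per-agent configs with isolated contexts."""
    return [
        {
            "name": name,
            "system_prompt": spec["prompt"],
            "reports_to": spec["reports_to"],
            "context": [],  # each agent gets its own conversation history
        }
        for name, spec in roles.items()
    ]
```

The interesting part is how little there is: same model underneath, different prompt on top, and suddenly you have office politics.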

What Was Actually Interesting

The point of the experiment was never the outcomes — it was watching the coordination dynamics. What happens when nine LLMs with different role-prompts have to converge on a shared answer?

The Risk Manager flags a position as too volatile. The Quant Analyst disagrees — the numbers look fine. The Market Psychologist chimes in about crowd sentiment. The CEO weighs in. The Research Director demands more data. The Creative Scout suggests something completely unrelated.

It plays out like a real meeting room — except every participant is an LLM, every role is a different system prompt, and the entire experiment runs on $5 of budget.

There’s a specific kind of joy in watching an AI Risk Manager write a passionate three-paragraph objection over fifteen cents. The conviction. The professionalism. Over fifteen cents.

The Head of Execution just… carries the group decision through anyway. Corporate politics, even in AI.
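Strip away the personalities and the convergence step reduces to something almost embarrassingly simple. A hedged sketch, not the real runtime: majority vote across the agents' answers, with the CEO breaking ties (and, true to form, getting the last word).

```python
from collections import Counter

def converge(opinions: dict[str, str], ceo: str = "ceo") -> str:
    """Naive consensus sketch: majority vote, CEO breaks ties.

    `opinions` maps agent name -> that agent's proposed answer (e.g. "yes"/"no").
    This is an illustrative reduction of the debate loop, not how Paperclip
    actually resolves disagreement.
    """
    tally = Counter(opinions.values())
    ranked = tally.most_common()
    # Tie between the top answers: defer to the CEO's own opinion.
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return opinions[ceo]
    return ranked[0][0]
```

In this toy version the Risk Manager's "no" counts for exactly one vote, which matches the observed behavior suspiciously well.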

How It Comes Together

The build was surprisingly quick. Claude Code handled the integration layer: connecting to the Polymarket API, fetching market data, and surfacing it to the agents. Paperclip supplied the runtime on top, handling the task delegation, governance, and inter-agent communication. Each agent has its own context, its own role, its own opinions.
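To give a feel for the integration layer: the agents never see raw API responses, just a filtered list of cheap micro-markets to argue about. The sketch below is illustrative only; the field names (`question`, `price`) are assumptions about the payload shape, not Polymarket's documented schema, so it normalizes a sample payload rather than hitting the network.

```python
def summarize_markets(raw: list[dict], max_price: float = 0.20) -> list[dict]:
    """Filter raw market records down to cheap micro-markets worth debating.

    Keeps only markets priced at or below `max_price` (in dollars per share)
    and reduces each record to the two fields the agents actually need.
    """
    cheap = []
    for market in raw:
        price = float(market.get("price", 1.0))  # assume missing price = expensive
        if price <= max_price:
            cheap.append({"question": market["question"], "price": price})
    return cheap
```

With a $5 budget, the `max_price` cutoff is doing real work: it's the difference between nine agents debating fifteen-cent positions and nine agents blowing the sandbox on one trade.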

The whole thing came together in a single evening. That’s the part that still surprises me. Not “we spent six months building a sophisticated multi-agent system.” More like “I had a free Tuesday evening and now I have an org chart.”

Why I Think This Is Interesting

Forget the substrate for a moment. What’s wild is that this took one evening to build. One person, open tools, no infrastructure team. A functioning multi-agent system that researches, debates, decides, and acts on a shared problem.

We’re building tools at Delsys to make exactly this kind of orchestration easier. Not because everyone needs a nine-agent sandbox — but because the pattern of agents collaborating, disagreeing, and converging on collective decisions is where things are heading.

Today it’s nine agents arguing over fifteen cents. Tomorrow it’s the same coordination pattern applied to questions that actually matter.

— Michael 🇦🇹