Most AI agent systems have a fatal flaw: the human is the bottleneck.

You build agents, give them tools, define their roles. Then you spend your entire day deciding what they should work on. You become a full-time manager of robots.

I run 18 AI agents across my business. Security, marketing, sales, content, infrastructure, health, even music. Each one has domain expertise. Each one has tools. But for weeks, I was the one coming up with ideas for what they should do. I was assigning tasks, reviewing work, prioritizing.

The agents were capable. I was the constraint.

The idea: let agents compete for attention

What if agents pitched me instead of the other way around?

Not vague "we should look into X" suggestions. Real, specific, actionable proposals. With deliverables. With effort estimates. With business impact.

And not just one agent talking. All of them. Every day. Competing for my limited time and attention.

The best ideas rise to the top. The weak ones get killed. Just like a real team.

How the pitching system works

Every morning, the system runs a three-stage pipeline:

Stage 1: Agents propose

Each agent generates 1-2 pitches. These come from two very different sources:

Pitching from learnings

Every morning, agents learn from the web. They read industry news, follow experts, scan RSS feeds and X/Twitter for threats and opportunities relevant to their domain. The security agent reads Krebs on Security. The developer follows Anthropic's changelog. The SEO agent tracks Google algorithm updates.

When an agent finds something relevant, it pitches an action based on what it learned. For example: "Google just updated its Core Web Vitals thresholds. I want to re-audit all our product sites against the new benchmarks."

These pitches are reactive. Something happened in the world, and the agent connects it to our business.

Pitching from role expertise

This is the more interesting one. Even when there is no external news, agents pitch based on their domain knowledge and awareness of the systems around them.

Every agent has access to a shared environment document that describes the full landscape: all 68 repos on the server, the tech stack, Google Drive folders, and internal tools like the QA platform, heartbeat monitors, CRM, and support system. On top of that, each agent gets a role-specific prompt with concrete suggestions for what it could proactively do.
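
To make that concrete, the shared context can be as simple as a structured object prepended to every agent's prompt. A hypothetical sketch; only the categories come from this post, and the values are placeholders, not the real stack:

```python
# Hypothetical shape of the shared environment document.
ENVIRONMENT = {
    "repos_on_server": 68,
    "internal_tools": ["qa_platform", "heartbeat_monitors", "crm", "support_system"],
    "drive_folders": ["..."],  # deliberately elided
}

# Role-specific suggestions layered on top of the shared doc.
ROLE_PROMPTS = {
    "seo": "Product sites exist that you can audit for web vitals.",
    "security": "Repos exist that you can scan for exposed secrets.",
}

def build_context(agent_name: str) -> str:
    """Shared environment + role prompt = what each agent reasons from."""
    return f"Environment: {ENVIRONMENT}\n\nYour role: {ROLE_PROMPTS[agent_name]}"
```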

The SEO agent knows there are product sites it can audit for web vitals. The developer knows there is a QA system with tests that might be broken. The health agent knows it has access to meal logs via FatSecret's API. The security agent knows there are repos it can scan for exposed secrets.

These pitches are proactive. Nothing happened externally. The agent just looks at the environment, thinks about its role, and proposes something useful.

The two sources complement each other. Learning pitches keep you current with the outside world. Role pitches keep your internal systems healthy. If an agent has learnings that day, it pitches from those. If not, it falls back to role-based pitching. Every agent gets a shot every day.
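
In code, that fallback is a single branch. A minimal sketch, assuming hypothetical agent methods (`todays_learnings`, `pitch_from_learning`, `pitch_from_role`) that wrap the actual LLM calls:

```python
def generate_pitches(agent, max_pitches: int = 2) -> list:
    """Learning pitches take priority; role pitches are the fallback."""
    learnings = agent.todays_learnings()  # items from this morning's RSS/X scan
    if learnings:
        return [agent.pitch_from_learning(item) for item in learnings[:max_pitches]]
    return agent.pitch_from_role(max_pitches)  # proactive, environment-driven
```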

The format is strict

## PITCH: [Title]
What I want to do: [Specific action]
Why it matters: [Business impact]
What I need: [Tools/access required]
Deliverable: [What you get]
Effort: quick/medium/deep

No fluff. No "it would be nice if." Just proposals.
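
One way to enforce the format is to parse each pitch into a typed record and bounce anything incomplete back to the agent. A sketch, not the production code:

```python
from dataclasses import dataclass

EFFORT_LEVELS = {"quick", "medium", "deep"}

@dataclass
class Pitch:
    agent: str
    title: str
    action: str       # "What I want to do"
    impact: str       # "Why it matters"
    needs: str        # "What I need"
    deliverable: str  # "What you get"
    effort: str       # one of EFFORT_LEVELS

    def problems(self) -> list[str]:
        """Empty list means well-formed; anything else goes back to the agent."""
        issues = [f"missing: {f}"
                  for f in ("title", "action", "impact", "needs", "deliverable")
                  if not getattr(self, f).strip()]
        if self.effort not in EFFORT_LEVELS:
            issues.append(f"effort must be one of {sorted(EFFORT_LEVELS)}")
        return issues
```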

Stage 2: Wayne challenges

Wayne is my CEO agent. He sees the broader picture: all 68 repos, all products, all priorities. His job is to poke holes in every pitch.

He pushes back:

  • "This pitch does not connect to revenue."
  • "You are scoping too narrowly. This affects five repos, not one."
  • "Who maintains this after you build it?"
  • "Why now? This can wait."

Each agent then refines their pitch based on Wayne's feedback. They incorporate the broader context they were missing. They tighten the scope or expand it. The pitch gets sharper.
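
Structurally, this stage is one critique-and-refine pass. A sketch in which `wayne.critique` and `agent.refine` stand in for LLM calls; only the control flow is claimed here:

```python
def challenge_round(wayne, agents: dict, pitches: list) -> list:
    """Wayne critiques every pitch; the originating agent revises it once."""
    refined = []
    for pitch in pitches:
        feedback = wayne.critique(pitch)  # e.g. "this affects five repos, not one"
        refined.append(agents[pitch.agent].refine(pitch, feedback))
    return refined
```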

Stage 3: Top 3 for Mike

Wayne ranks all refined pitches and selects the top 3. He sends them to me via Telegram in a format I can read in under 30 seconds:

1. Title (from Agent Name)
   Two sentences on what and why.
   Effort: quick | Impact: high

I reply with "approve 1, 3" or "reject all." That is the entire decision I need to make. The approved agents get dispatched to execute.
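
The selection and the reply handling are both a few lines. A sketch, assuming a hypothetical `wayne.score` method that returns a number per pitch:

```python
import re

def top_three(wayne, refined: list) -> list:
    """Wayne ranks the refined pitches and keeps the best three."""
    return sorted(refined, key=wayne.score, reverse=True)[:3]

def parse_reply(reply: str, top3: list) -> list:
    """'approve 1, 3' dispatches pitches 1 and 3; 'reject all' dispatches nothing."""
    if reply.strip().lower().startswith("reject"):
        return []
    picks = [int(n) for n in re.findall(r"\d+", reply)]
    return [top3[i - 1] for i in picks if 1 <= i <= len(top3)]
```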

What actually happens in practice

On a typical day, I get pitches like:

  • From the security agent (role pitch): "Scan all repos for exposed API keys and .env files committed to git. I will output a report of any secrets found with remediation steps."
  • From the developer (role pitch): "Review the heartbeat monitoring system for monitors that have not fired in 7+ days. Dead monitors mean blind spots."
  • From the health agent (role pitch): "Analyze your last 30 days of meal logs and flag nutritional gaps. I have access to your meal data and FatSecret's nutrition API."
  • From the SEO agent (learning pitch): "Google just deprecated FAQ rich results for most sites. I want to audit our structured data and remove FAQ schema where it no longer qualifies."

These are not things I would have thought to ask for. That is the point. The agents surface work I did not know needed doing.

Why competition matters

If you only have one agent, it tells you whatever it thinks is important. There is no pressure test. No second opinion.

With 18 agents pitching against each other, several things happen:

Quality goes up. Agents know their pitches will be challenged. Vague proposals get killed. Only specific, actionable ideas survive.

Coverage goes up. On my own, I only ever focused on the two or three things at the top of my mind. The agents collectively scan security, SEO, content, infrastructure, finance, health, and sales simultaneously. Every day.

My time goes down. Instead of spending an hour figuring out what to work on, I spend 30 seconds approving or rejecting pitches. The agents do the thinking. I just say yes or no.

The evaluation layer

We also built automated quality scoring on top of this. Wayne does not just pick pitches blindly. He evaluates each pitch on impact, feasibility, cost, and alignment with current priorities.

Red flags that get pitches auto-rejected:

  • Generic "improve X" without a specific plan
  • Requires my time beyond approving
  • Duplicates something pitched in the last 7 days
  • Not aligned with current focus

This means even Wayne's selection is principled, not arbitrary.
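
A sketch of those checks, where `judge` stands in for a yes/no LLM call by Wayne; only the checklist itself comes from the system:

```python
from typing import Callable

def red_flags(pitch, recent: list, judge: Callable[[str, object], bool]) -> list[str]:
    """Return every auto-reject flag a pitch trips; any flag kills it."""
    checks = {
        "generic 'improve X' with no specific plan":
            judge("Does this pitch lack a concrete plan?", pitch),
        "requires my time beyond approving":
            judge("Does this need human time beyond a yes/no?", pitch),
        "duplicates a recent pitch":
            any(p.title == pitch.title for p in recent),  # last 7 days of pitches
        "not aligned with current focus":
            judge("Is this outside the current priorities?", pitch),
    }
    return [flag for flag, tripped in checks.items() if tripped]
```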

The deeper lesson

The pitching system changed how I think about AI agents. Before, I thought of them as tools. You give them a task, they execute. Now I think of them as a team. They observe, think, propose, and compete.

The shift from "I tell agents what to do" to "agents tell me what they want to do" is subtle but transformative. It means my AI team gets smarter over time, not just faster at following instructions.

Building a team that thinks for itself is harder than building one that does what it is told. But it is the only version that scales.


This is part of a series on building FlatNine Ensemble, a team of AI agents that runs a business. Previous posts: AI self learning, Evaluating AI Work, AI REM, Lossless orchestration.