
Building Your AI Coordination Army

Keywords

war-on-disease, 1-percent-treaty, medical-research, public-health, peace-dividend, decentralized-trials, dfda, dih, victory-bonds, health-economics, cost-benefit-analysis, clinical-trials, drug-development, regulatory-reform, military-spending, peace-economics, decentralized-governance, wishocracy, blockchain-governance, impact-investing

How to build and deploy autonomous AI agents that coordinate millions of people fighting the War on Disease.


You’ve got mission control, a network of Decentralized Institutes of Health (DIH), and a targeting system (Wishocracy). Now you need an army of AI agents to do the work while humans are sleeping, arguing on Twitter, or attending conferences that accomplish nothing.

A conceptual diagram showing the relationship between Wishocracy (goal setting), Decentralized Institutes of Health (research focus), and AI agents (resource coordination) within a unified infrastructure.

Here’s the division of labor: Wishocracy sets the goals, the Decentralized Institutes of Health focus the research, and AI agents coordinate the resources.

Here’s how you build this coordination infrastructure.

Where Your Agents Get Their Missions (Not From a Management Consultant)

Your agents don’t randomly decide what to work on based on what sounds cool or what their creator happens to care about that day. They pull from Wishocracy’s Task Tree, the globally-prioritized breakdown of humanity’s highest-priority problems into tasks a computer (or sufficiently motivated intern) can actually execute.

Wishocracy creates the list. For example, “Cure Alzheimer’s” (vague, terrifying, impossible) becomes:

  • Map protein structures → Run AlphaFold on these sequences → Rent computing time → Find cheapest cloud provider that won’t mysteriously go down during your job
  • Test drug candidates → Recruit trial participants → Find people age 65+ with early symptoms → Contact the 73 researchers globally who study this specific thing

Your agents work from this same list. Whether you’re deploying agents from a patient advocacy nonprofit in Boston (running on donated servers), a university in Beijing (behind the Great Firewall), or a biotech in Switzerland (with an actual IT budget), all agents see the same prioritized tasks. This is how you coordinate millions of people without someone having to organize a conference call.
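The pull model above can be sketched in a few lines. Everything here — task IDs, priorities, the `next_task` helper — is an illustrative assumption, not Wishocracy’s actual API; the point is that every node runs the same priority query against the same tree:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    """One executable leaf of the Task Tree."""
    id: str
    description: str
    priority: float                     # globally assigned by Wishocracy
    claimed_by: Optional[str] = None

@dataclass
class TaskTree:
    tasks: list = field(default_factory=list)

    def next_task(self, node_id: str) -> Optional[Task]:
        """Every node runs the same query, so every node sees the same priorities."""
        open_tasks = [t for t in self.tasks if t.claimed_by is None]
        if not open_tasks:
            return None
        top = max(open_tasks, key=lambda t: t.priority)
        top.claimed_by = node_id        # claim it so other nodes skip it
        return top

tree = TaskTree([
    Task("alz-001", "Run AlphaFold on misfolding-protein sequences", 0.92),
    Task("alz-002", "Recruit trial participants age 65+ with early symptoms", 0.88),
])
print(tree.next_task("boston-nonprofit").id)   # alz-001: Boston claims the top task
print(tree.next_task("beijing-lab").id)        # alz-002: Beijing gets the next one
```

The Boston nonprofit and the Beijing lab never talk to each other; they coordinate anyway because claiming a task makes it invisible to everyone else.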

Breakdown of the Wishocracy Task Tree, showing how high-level goals translate into executable agent instructions.

The Secret to Success: Cross-Sector Coordination (Forces for Good)

Forces for Good studied the difference between successful and unsuccessful nonprofits. The finding: successful nonprofits coordinate across sectors; they work with other nonprofits, businesses, governments, and the public. Unsuccessful ones operate in silos, jealously guarding their email lists and treating collaboration as a zero-sum game where helping someone else means losing.

A conceptual diagram showing the transition from fragmented, siloed cross-sector communication to a unified architecture where agents from diverse sectors (biotech, government, academia) synchronize through a global Task Tree.

The problem: Cross-sector coordination is nearly impossible to do manually. How do you coordinate a cancer nonprofit in Boston (operating 9-5 EST) with a biotech in Switzerland (different timezone, language, and concept of urgency), a university in Beijing (behind a firewall), and individual researchers across 50 countries (half of whom don’t check email)?

Currently, this requires conference calls scheduled 6 months in advance where everyone spends 45 minutes on introductions and 5 minutes accomplishing nothing. Then someone sends a follow-up email that 80% of people won’t read.

The solution: All agents work from the same global Task Tree. Your nonprofit’s agent and the biotech’s agent and the university’s agent all see “Find researchers studying protein misfolding” as the next priority task. They automatically coordinate because they’re working from the same list, not because someone managed to find a time that works across 12 timezones.

You base your agent design on the six practices from Forces for Good:

  • Advocate and serve
  • Make markets work
  • Inspire evangelists
  • Nurture nonprofit networks
  • Master the art of adaptation
  • Share leadership

This ensures your agents enable the proven success pattern (cross-sector coordination) rather than just automating the process of sending emails no one reads and forming committees that accomplish nothing.

Why You Need This (Humans Are Hilariously Bad at Coordination)

The NIH has 27,000 employees. They can’t coordinate lunch orders, let alone millions of researchers. Seriously, they spent $1 trillion and eradicated zero diseases because their coordination strategy is “form a committee to discuss forming a committee.”

A comparison between human coordination bottlenecks, limited by Dunbar’s number and manual communication, and AI-driven infrastructure capable of managing global scale, time zones, and data synthesis simultaneously.

The FDA thinks humans can manually coordinate global trials. These are the same humans who took 17 years and $2.6 billion to approve one drug. Asking them to coordinate is like asking goldfish to organize a space program.

Meanwhile, military contractors somehow coordinate $2.72T in annual spending to build jets that don’t work. They’re idiots, but they’re coordinated idiots with database infrastructure.

Health advocates? We’re trying to coordinate billions in funding using email threads that descend into reply-all hell and Zoom calls where 18 people are on mute and 2 are talking over each other.

The problem isn’t that people don’t want to help cure disease. It’s that coordinating millions of people manually is impossible:

  • Want to recruit trial participants across 195 countries? Good luck finding someone in each timezone to make phone calls.
  • Need to match researchers with funding opportunities? Hope you enjoy reading 10,000 grant descriptions and manually building a spreadsheet.
  • Trying to mobilize treaty advocates globally? Enjoy scheduling a call where someone is always asleep.
  • Need real-time trial data synthesis? Better hire someone to email everyone weekly asking for updates (they won’t respond).

Humans evolved to coordinate hunting parties of 20 people. We’re now trying to coordinate millions. Our brains literally can’t do it; we max out at about 150 stable relationships (Dunbar’s number).

AI doesn’t have this limitation. It never sleeps, speaks every language, operates in every timezone simultaneously, and doesn’t get into passive-aggressive email wars. The way you win the War on Disease is by building coordination infrastructure that works at the scale humans can’t, while the NIH is still trying to schedule their next committee meeting.

The Architecture You’re Building (It’s Simpler Than It Looks)

A hierarchical agent coordination architecture showing the flow from Global Level network architects down to Mission Level and Node Level task agents.

Here’s the coordination architecture you need. Don’t let the diagram scare you; it just shows agents talking to other agents, which talk to more agents. Like a corporate org chart, except things actually happen:

A structural diagram of the Mission Level architecture showing the hierarchical containment of the Mission Network Layer and the Coordinating Agent within the network.

A system architecture diagram showing the coordination between a Cluster Agent and individual nodes, including internal components like Task Agents and Performance Monitors.

Diagram showing three layers: Global Level (Network Architect Agent recommends to Builder Agents, who implement Global Network Layer), Mission Level (Global Network contains Mission Network, which contains Coordinating Agent), and Node Level (Coordinating Agent coordinates Node A and Node B, Node A contains Task Agent 1 and Performance Monitor)
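The three layers compose by simple containment. The class names below are hypothetical stand-ins for the boxes in the diagram, not a real SDK:

```python
from dataclasses import dataclass, field

@dataclass
class TaskAgent:                 # does exactly one task
    name: str

@dataclass
class Node:                      # Node Level: one organization's coordination hub
    org: str
    agents: list = field(default_factory=list)

@dataclass
class MissionNetwork:            # Mission Level: every node working one mission
    mission: str
    nodes: list = field(default_factory=list)

@dataclass
class GlobalNetwork:             # Global Level: every mission network
    missions: list = field(default_factory=list)

net = GlobalNetwork([
    MissionNetwork("Cure Alzheimer's", [
        Node("boston-nonprofit", [TaskAgent("find-researchers")]),
        Node("beijing-lab", [TaskAgent("rent-compute")]),
    ]),
])
print(len(net.missions[0].nodes))   # 2
```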

Step 1: Deploy Your Coordination Hub (Your Organization’s Digital Slave Labor)

The way you join the network is by deploying a “node”, your organization’s AI coordination hub that never sleeps, never complains, never asks for a raise, and doesn’t require health insurance.

A visualization of the AI Coordination Hub acting as a central node that automates donor research, grant drafting, and cross-organization coordination to streamline global health funding.

Think of it as your digital headquarters where AI agents work 24/7 to coordinate your piece of the War on Disease, while your human staff can focus on things humans are actually good at (like empathy, creativity, and knowing when someone’s email is passive-aggressive).

Here’s what you use your node for:

To Fund the War (Phase 1: Before the Treaty Passes):

Right now, you still need money to keep the lights on and fund the $200M (95% CI: $140M-$260M) global referendum campaign. Your agents help with this:

They identify potential donors, foundations, and grant opportunities while you sleep. They read every foundation’s 47-page “priorities document” (written by consultants who don’t know what the foundation wants either), figure out which ones might fund your work, and draft grant applications that don’t sound like they were written by a robot having an existential crisis.

A human development director can research maybe 20 foundation fits per week, assuming they don’t get distracted by the 300 unread emails in their inbox. Your agents research 200 per day, draft the applications, and don’t need therapy afterward. They also don’t waste donor meetings asking questions that were answered on page 3 of the website.

The key: Coordination prevents waste. Instead of 500 cancer nonprofits all applying to the Gates Foundation for the same thing, your agents coordinate: “Org A applied for X, Org B apply for Y.” This is how you raise the $200M (95% CI: $140M-$260M) for the referendum while cutting fundraising costs 50%.
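That dedup logic is just a greedy one-org-per-grant assignment. This is a minimal sketch under made-up fit scores, not the network’s actual matching algorithm:

```python
def coordinate_grants(fit):
    """Greedy one-org-per-grant assignment by fit score.
    fit maps (org, grant) -> estimated match quality in [0, 1]."""
    assignments = {}
    taken = set()
    # Highest-fit pairs first, so the best match for each grant wins.
    for (org, grant), _score in sorted(fit.items(), key=lambda kv: -kv[1]):
        if org not in assignments and grant not in taken:
            assignments[org] = grant
            taken.add(grant)
    return assignments

fit = {
    ("OrgA", "gates-x"): 0.9, ("OrgA", "gates-y"): 0.6,
    ("OrgB", "gates-x"): 0.7, ("OrgB", "gates-y"): 0.8,
}
print(coordinate_grants(fit))   # {'OrgA': 'gates-x', 'OrgB': 'gates-y'}
```

Without the `taken` set, both orgs would chase `gates-x` — which is exactly what 500 uncoordinated cancer nonprofits do today.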

Once the treaty passes? Your agents shift to allocating the $27.2B from Wishocracy instead of begging foundations. But until then, we still live in the old world where money doesn’t magically appear.

To Mobilize Support

You launch advocacy agents that track health legislation (all 10,000 pages that get released at 2 AM), coordinate treaty signature campaigns across timezones, organize awareness events, and recruit volunteers without the usual process of begging people on Facebook and hoping they show up.

A comparison diagram showing the shift in workload where AI agents handle data-intensive logistics and tracking, while the human coordinator transitions from scheduling tasks to high-value relationship building.

Instead of one burned-out coordinator trying to schedule volunteers across 12 timezones while also answering emails, planning events, and maintaining their sanity, your agents handle all the logistics. The coordinator can focus on actual human relationships instead of being a human scheduling algorithm who’s slowly dying inside.

To Accelerate Research

You automate the soul-crushing coordination work that turns enthusiastic researchers into bitter husks. Matching researchers with funding opportunities (before treaty: foundation grants; after treaty: Wishocracy allocations). Recruiting trial participants who actually meet the criteria (not just people who saw a Facebook ad and think they qualify). Tracking outcomes when half the participants ghost you. Synthesizing literature when there are 50,000 new papers published monthly and you’re supposed to read them all.

A workflow diagram illustrating the transition from manual research coordination to an automated two-phase system involving grant matching and task execution.

What used to take 6 months of emails like “Just following up on my previous 12 emails…” now happens continuously without making you want to throw your laptop out a window. Phase 1: coordinate who applies for which grants. Phase 2: coordinate who executes which tasks from Wishocracy’s list using patient subsidies from the treaty.

To Build Awareness

You deploy content agents that create educational materials people might actually read, track what the public currently thinks about health issues, coordinate social media campaigns, and figure out which messages actually change minds versus which ones just get ratio’d.

A comparison showing the manual A/B testing process versus AI agents simultaneously testing hundreds of content variations to optimize engagement.

Instead of a marketing team manually A/B testing campaigns one painful experiment at a time, your agents test hundreds of variations simultaneously. They learn that posting about preventing Alzheimer’s at 2 PM on Tuesday gets 10x more engagement than posting at 9 AM on Monday, for reasons no human will ever understand.
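One way to run hundreds of variations at once is an epsilon-greedy bandit: mostly serve the winner, occasionally explore. The variant names and engagement numbers are invented for illustration:

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy: usually show the best-engaging variant, sometimes explore.
    stats maps variant name -> (engagements, impressions)."""
    if random.random() < epsilon:
        return random.choice(list(stats))   # explore a random variant
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

stats = {
    "alzheimers-prevention-tue-2pm": (120, 1000),   # 12% engagement
    "alzheimers-prevention-mon-9am": (12, 1000),    # 1.2% engagement
}
print(pick_variant(stats, epsilon=0.0))   # alzheimers-prevention-tue-2pm
```

The exploration fraction is what lets the agents discover that inexplicable Tuesday-at-2-PM effect in the first place.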

To Optimize Impact

You use “Agent Evaluators” to measure which strategies actually save lives instead of which ones win awards at nonprofit conferences or make board members feel good.

Architectural overview of an AI node, illustrating how specialized Task Agents utilize shared memory and context to coordinate activities while connecting to external research, government systems, and other nodes via APIs.

You get real-time data showing which fundraising approaches work (not just which ones the development director learned at their last nonprofit job), which advocacy tactics actually change minds (versus which ones just make activists feel productive), which research collaborations accelerate progress (versus which ones just produce papers no one reads).

You double down on what works. You kill what doesn’t. No politics, no “but we’ve always done it this way,” no protecting programs because someone’s spouse runs them.

Inside your node, you build Task Agents, hyper-focused AI workers who are the opposite of your coworker Dave who somehow has 17 responsibilities and does none of them well. Each agent has one specific job in the War on Disease. They share a knowledge base for long-term memory (so they don’t keep asking you the same questions), coordinate through a shared context system (so they don’t duplicate work like humans do), and interact with research databases, government systems, and other nodes through APIs (which is fancy talk for “they can talk to computers without needing IT support to set it up”).

Step 2: Define Each Agent’s Mission (From the Task Tree, Not From Your Feelings)

Your agents pull their missions from Wishocracy’s Task Tree, not from brainstorming sessions where someone says “wouldn’t it be cool if…” The way you prevent them from becoming useless generalists (like every nonprofit’s “Director of Strategy and Partnerships and Innovation and Impact”) is by mapping each agent to one specific task from that tree. Here’s how:

1. Pick a Task from the Tree:

Wishocracy’s Task Tree breaks “Cure Alzheimer’s” (impossible, vague, makes you want to give up) down into executable tasks like “Find researchers studying protein misfolding” or “Recruit trial participants age 65+ with early symptoms.”

An architecture diagram showing a hierarchical Task Tree where distributed agents in different global locations map to specific nodes, each operating on a lead/lag measure feedback loop.

You pick one task. Your agent does only that task. It doesn’t also try to do social media, write the newsletter, organize events, and somehow also fix the printer. This prevents digital ADHD and the slow death of competence through task overload.

2. Define the Lead Measure (the action):

The thing the agent does to complete its task. For a “find researchers” agent: “Number of qualified researchers contacted per day.” Simple, countable, and the agent controls it completely (unlike “build relationships” or other vague corporate nonsense that sounds good in meetings but means nothing).

3. Define the Lag Measure (the result):

The outcome you actually care about. For the same agent: “Number of researchers who sign up for a trial within your decentralized framework for drug assessment (dFDA).” This tells you if the action is working or if you’ve built an agent that’s really good at sending emails no one reads.

This structure (borrowed from “The 4 Disciplines of Execution”) turns tasks from Wishocracy’s global list into specific agent missions you can build, measure, and improve without needing to hire a consultant to interpret the results.
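A mission spec built this way is small enough to fit in a dataclass. The field names and numbers below are illustrative, not a real agent config:

```python
from dataclasses import dataclass

@dataclass
class AgentMission:
    """One Task Tree task turned into a measurable agent mission (4DX-style)."""
    task_id: str
    lead_measure: str    # the action the agent fully controls
    lag_measure: str     # the outcome you actually care about
    lead_count: int = 0
    lag_count: int = 0

    def conversion_rate(self):
        """Lag per lead: is the action actually producing the outcome?"""
        return self.lag_count / self.lead_count if self.lead_count else 0.0

mission = AgentMission(
    task_id="alz-find-researchers",
    lead_measure="qualified researchers contacted per day",
    lag_measure="researchers enrolled in a dFDA trial",
)
mission.lead_count, mission.lag_count = 200, 14
print(f"{mission.conversion_rate():.1%}")   # 7.0%
```

If that conversion rate stays near zero, you’ve built the email-no-one-reads agent and it’s time to change the lead measure.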

The key: Every agent across the entire network works from the same Task Tree. Your cancer research agent in Boston might be working on “Find protein researchers” while someone’s agent in Beijing works on “Find computing resources for AlphaFold.” Both tasks come from the same Alzheimer’s cure branch of the tree. This is how you coordinate globally without endless meetings where half the attendees are on mute and the other half forgot they were even invited.

Step 3: Enable Competition Between Agents (May the Best Robot Win)

Here’s how you make the system evolve instead of calcifying into bureaucracy: You let anyone design and launch new agents to compete with existing ones.

A flowchart depicting the Darwinian competition between two agents, showing a side-by-side comparison of performance metrics leading to resource allocation for the winner and retirement for the loser.

If someone thinks they have a better way to find cancer researchers, they build an agent for it. You run both agents side-by-side and measure which one delivers better results for less cost. The winner gets more resources. The loser gets improved or retired (but gently, because it’s code, not a person you’re firing via Zoom).

The network evolves through ruthless, Darwinian competition. The best strategies win. The worst die. No committees to protect underperforming approaches because “we’ve invested too much to give up now.” No politics. No protecting someone’s pet project because they’ve been here since 2003. Just results.
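The head-to-head comparison reduces to one number: results per dollar. A minimal sketch, with invented agent names and run data:

```python
from dataclasses import dataclass

@dataclass
class AgentRun:
    name: str
    results: int      # lag-measure outcomes delivered
    cost_usd: float   # compute + API spend

def compare(a, b):
    """Winner is whoever delivers more results per dollar; the loser gets retired."""
    score = lambda r: r.results / r.cost_usd if r.cost_usd else 0.0
    return a if score(a) >= score(b) else b

incumbent = AgentRun("researcher-finder-v1", results=40, cost_usd=200.0)
challenger = AgentRun("researcher-finder-v2", results=55, cost_usd=180.0)
print(compare(incumbent, challenger).name)   # researcher-finder-v2
```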

How You Keep Control (Preventing the Robot Uprising)

Current AI can’t run a global health movement alone, which is honestly for the best. You want humans making strategic decisions, not algorithms optimizing for metrics that accidentally prioritize the wrong things (like every social media platform ever).

A conceptual diagram showing the hierarchy of human strategic decision-making providing oversight to AI algorithmic execution to ensure alignment with movement goals.

Here’s the cooperation model you use:

AI Proposes

Your agents analyze what’s working across the network and propose new strategies (via GitHub issues, because developers already live there). Example: “Trial recruitment is 3x higher when you contact researchers on weekends” or “Funding requests sent on Tuesday get 40% more responses than those sent on Friday.”
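A proposal like that is just structured text. The sketch below formats one into a GitHub-issue payload (the `title`/`body`/`labels` fields match the GitHub REST API’s create-issue request; actually posting it, and the label names themselves, are left as assumptions):

```python
def proposal_to_issue(insight, evidence):
    """Format an agent's network finding as a GitHub-issue payload.
    evidence maps metric name -> observed relative change."""
    lines = [f"- {metric}: {value:+.0%}" for metric, value in evidence.items()]
    return {
        "title": f"[agent-proposal] {insight}",
        "body": "Observed across the network:\n" + "\n".join(lines),
        "labels": ["agent-proposal", "needs-human-review"],
    }

issue = proposal_to_issue(
    "Contact researchers on weekends",
    {"trial recruitment": 2.0},   # +200% vs weekday outreach
)
print(issue["title"])   # [agent-proposal] Contact researchers on weekends
```

The `needs-human-review` label is the whole point: nothing ships until the next step.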

A flowchart showing AI agents analyzing network data to generate actionable insights delivered as GitHub issues to human developers.

You Decide

Actual humans review and approve strategic changes. You set direction based on values and context the AI doesn’t understand (like “we don’t spam people” or “that approach is technically effective but ethically questionable”). AI suggests. Humans decide. This is the opposite of most organizations where humans suggest and bureaucracy decides nothing will change.

A comparison between the AI-human collaborative decision process and the traditional bureaucratic model.

AI Implements

Once approved, “Builder Agents” update the coordination infrastructure, writing code, updating agent instructions, submitting improvements as pull requests. They do the grunt work of implementation so humans don’t have to spend their weekends debugging Python.

A workflow diagram showing Builder Agents receiving approval and executing technical tasks like writing code, updating instructions, and submitting pull requests to automate implementation.

You Verify

You do final review to ensure changes align with the mission before they go live across the network. Think of it as peer review, except the peer is a robot and it actually happens instead of sitting in someone’s inbox for 6 months.

A conceptual diagram showing a human reviewer acting as a strategic checkpoint, filtering AI-driven coordination tasks before they are deployed to a global network.

This keeps you in strategic control while letting AI handle the coordination work humans can’t scale. You’re building infrastructure to coordinate millions of people fighting disease, not building Skynet (though the “kill all humans” approach would technically solve the disease problem by eliminating the hosts, so you specifically program against that).

How You Prevent Data Silos (The Tragedy of the Nonprofit Commons)

A conceptual diagram contrasting isolated data silos where information is hoarded with a shared coordination network where data flows freely between organizations.

A coordination network is only as smart as the information flowing through it. Here’s how you prevent organizations from hoarding data like dragons sitting on gold they’ll never use:

Rule 1 - Share to Play

To use the powerful shared coordination agents (funded by the collective), you agree to share your anonymized performance data back to the network. No sharing, no access to the network effects. It’s the opposite of most nonprofit collaborations where everyone wants the benefits but no one wants to contribute anything.

A conceptual diagram showing the feedback loop between individual nodes and the collective network, where data contributions are exchanged for AI agent access and financial compensation.

Rule 2 - Get Paid to Share:

For high-value data, the network directly compensates your node. You create an actual market for insights that benefits everyone, instead of the current system where valuable data sits unused in someone’s database because sharing it would require 6 months of legal review and 14 signatures.
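Both rules fit in one gate function. The 0.8 value threshold and the $100-per-point payout below are made-up placeholders, not the network’s actual economics:

```python
def network_access(node_shares_data, data_value_score):
    """Rule 1: no sharing, no shared agents. Rule 2: high-value data gets paid.
    data_value_score in [0, 1]; threshold and payout formula are placeholders."""
    if not node_shares_data:
        return {"shared_agents": False, "payout_usd": 0.0}
    payout = round(100.0 * data_value_score, 2) if data_value_score > 0.8 else 0.0
    return {"shared_agents": True, "payout_usd": payout}

print(network_access(True, 0.9))   # high-value sharer: agents plus compensation
print(network_access(False, 0.9))  # hoarder: locked out regardless of data quality
```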

This ensures every success and failure becomes a lesson for the entire network. The whole army gets smarter over time, not just individual organizations repeating the same mistakes in isolation while pretending they’re doing cutting-edge work.