
Ubiquitous Pragmatic Trial Impact Analysis: How to Prevent a Year of Death and Suffering for 84 Cents

Abstract

Of 9.5 million plausible drug-disease pairings, only 0.342% have been clinically tested. At the current discovery rate of 15 diseases/year, clearing this backlog would take ~443 years. A decentralized FDA integrating pragmatic clinical trials into standard healthcare at $929/patient (vs. $41K traditional) increases trial capacity 12.3x, reducing backlog clearance to 36 years. Combined with eliminating the 8.2-year post-safety efficacy delay through opt-in trial participation after Phase I, treatments arrive 212 years earlier on average. This timeline shift prevents 10.7 billion deaths, averts 565 billion DALYs, and eliminates 1.93 quadrillion hours of suffering (the YLD portion of those DALYs converted to hours) at $0.842/DALY, competitive with bed nets ($89/DALY) at vastly greater scale. Using standard health economic valuation ($150K/DALY, the US cost-effectiveness threshold; conservative relative to EPA/DOT Value of Statistical Life estimates), the full impact yields $84.8 quadrillion in cumulative value (565 billion cumulative DALYs over the 212-year timeline shift, not annual; 178k:1 ROI).

Keywords

war-on-disease, 1-percent-treaty, medical-research, public-health, peace-dividend, decentralized-trials, dfda, dih, victory-bonds, health-economics, cost-benefit-analysis, clinical-trials, drug-development, regulatory-reform, military-spending, peace-economics, decentralized-governance, wishocracy, blockchain-governance, impact-investing

Executive Summary

The Problem: 6,650 diseases have zero FDA-approved treatments. We have 9,500 proven-safe compounds (FDA-approved drugs + GRAS substances), yet only 0.342% of the 9.5 million plausible drug-disease combinations have been tested. At the current discovery rate of 15 diseases/year, clearing this backlog would take ~443 years. Of 2.4 billion people with chronic disease, only 1.9 million (0.06%) participate in trials each year.

The Solution: A decentralized FDA137,138, an open protocol (like HTTP/FHIR, not a competing platform) that existing DCT platforms, EHRs, and health apps adopt to integrate pragmatic clinical trials into standard healthcare, enabling:

  1. Subsidized Patient Participation: Patients receive subsidies to participate in trials, making participation accessible and incentivized
  2. Universal Trial Access: Any patient can join trials from home via their phone or computer - no travel to research centers required
  3. Real-World Data Aggregation: Outcomes from all participants are aggregated into a unified database
  4. Treatment Rankings: Standardized effectiveness rankings for every treatment-condition pair, updated continuously with real-world evidence
  5. Outcome Labels: “Nutrition facts for drugs” showing exactly what happened to real patients who tried each treatment

The Receipts

| Metric | Value | Context |
|---|---|---|
| Cost-Effectiveness | $0.842/DALY | Competitive with bed nets ($89/DALY) at vastly greater scale |
| Lives Saved | 10.7 billion deaths | One-time benefit from 212-year timeline shift |
| DALYs Averted | 565 billion DALYs | Captures both mortality and morbidity |
| Suffering Eliminated | 1.93 quadrillion hours | YLD portion of DALYs (39%) x 8,760 hrs/yr over timeline shift |
| Total Economic Value | $84.8 quadrillion | Cumulative DALYs x $150K/DALY (US ICER threshold) over 212-year shift |
| Efficacy Lag Eliminated | 8.2 years | Post-Phase I access via trial participation |
| ROI (R&D Savings) | 637:1 | 44.1x cheaper trials |
| Annual R&D Savings | $58.6B | From 97.7% cost reduction |
| Trial Capacity Increase | 12.3x | Enabling parallel therapeutic space exploration |

Why These Numbers Are Large

The economic value figure ($84.8 quadrillion) exceeds global GDP ($115T). This is expected, not an error. Three points of context:

1. Standard methodology, applied at scale. The $150K/DALY valuation is the US cost-effectiveness threshold (ICER). It is conservative relative to EPA and DOT Value of Statistical Life estimates, which imply higher per-DALY values when converted (~$300K-$600K/DALY). We did not invent this number. We multiplied it by the number of sick people.

2. GDP measures transactions, not the value of being alive. GDP does not count the value of not being dead, not being in pain, or not watching your children die of treatable diseases. Health economists have measured these values for decades. The global burden of disease (2.88 billion DALYs/year) valued at $150K/DALY produces $400T/year in health losses, roughly 3.5x global GDP. This is consistent with the established finding that the value of health substantially exceeds market output139,140.

3. The figure is cumulative over 212 years, not annual. This is the total value of permanently accelerating medical progress, the same methodology used to value smallpox eradication ($300M program -> millions of future lives saved) and climate infrastructure (multi-trillion dollar damage estimates that exceed annual GDP). The only debatable input is whether the timeline shift is really ~212 years; see The Discovery Capacity Model for that derivation.

Key Metric Derivations

Lives Saved:

\[ \begin{gathered} Lives_{max} \\ = Deaths_{disease,daily} \times T_{accel,max} \times 338 \\ = 150{,}000 \times 212 \times 338 \\ = 10.7B \end{gathered} \]
(The factor 338 is the number of effective days per year: 365 days x the 92.6% eventually-avoidable share ≈ 338.)
where:
\[ T_{accel,max} = T_{accel} + T_{lag} = 204 + 8.2 = 212 \]
where:
\[ \begin{gathered} T_{accel} \\ = T_{first,SQ} \times \left(1 - \frac{1}{k_{capacity}}\right) \\ = 222 \times \left(1 - \frac{1}{12.3}\right) \\ = 204 \end{gathered} \]
where:
\[ \begin{gathered} T_{first,SQ} \\ = T_{queue,SQ} \times 0.5 \\ = 443 \times 0.5 \\ = 222 \end{gathered} \]
where:
\[ \begin{gathered} T_{queue,SQ} \\ = \frac{N_{untreated}}{Treatments_{new,ann}} \\ = \frac{6{,}650}{15} \\ = 443 \end{gathered} \]
where:
\[ \begin{gathered} N_{untreated} \\ = N_{rare} \times 0.95 \\ = 7{,}000 \times 0.95 \\ = 6{,}650 \end{gathered} \]
where:
\[ \begin{gathered} k_{capacity} \\ = \frac{N_{fundable,dFDA}}{Slots_{curr}} \\ = \frac{23.4M}{1.9M} \\ = 12.3 \end{gathered} \]
where:
\[ \begin{gathered} N_{fundable,dFDA} \\ = \frac{Subsidies_{dFDA,ann}}{Cost_{pragmatic,pt}} \\ = \frac{\$21.8B}{\$929} \\ = 23.4M \end{gathered} \]
where:
\[ \begin{gathered} Subsidies_{dFDA,ann} \\ = Funding_{dFDA,ann} - OPEX_{dFDA} \\ = \$21.8B - \$40M \\ = \$21.8B \end{gathered} \]
where:
\[ \begin{gathered} OPEX_{dFDA} \\ = Cost_{platform} + Cost_{staff} + Cost_{infra} \\ + Cost_{regulatory} + Cost_{community} \\ = \$15M + \$10M + \$8M + \$5M + \$2M \\ = \$40M \end{gathered} \]
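
The chain of definitions above can be checked end-to-end in a few lines. This is a direct transcription of the paper's point estimates, not new data:

```python
# Lives Saved derivation, transcribed from the equations above.
N_RARE = 7_000                 # catalogued diseases (mostly rare)
PCT_UNTREATED = 0.95           # share with no approved treatment
TREATMENTS_PER_YEAR = 15       # current first-treatment discovery rate
FUNDING_ANN = 21.8e9           # annual dFDA trial funding ($)
OPEX = 40e6                    # annual protocol operating cost ($)
COST_PER_PATIENT = 929         # pragmatic-trial cost per patient ($)
CURRENT_SLOTS = 1.9e6          # current annual trial participants
T_LAG = 8.2                    # post-Phase I efficacy lag (years)
DEATHS_PER_DAY = 150_000       # global disease deaths per day
PCT_AVOIDABLE = 0.926          # eventually-avoidable share

n_untreated = N_RARE * PCT_UNTREATED                          # 6,650 diseases
t_queue_sq = n_untreated / TREATMENTS_PER_YEAR                # ~443 years
t_first_sq = t_queue_sq * 0.5                                 # ~222 years (expected wait)
subsidies = FUNDING_ANN - OPEX
k_capacity = (subsidies / COST_PER_PATIENT) / CURRENT_SLOTS   # ~12.3x
t_accel = t_first_sq * (1 - 1 / k_capacity)                   # ~204 years
t_accel_max = t_accel + T_LAG                                 # ~212 years

# 338 effective days/year = 365 x 92.6% eventually-avoidable share
lives_max = DEATHS_PER_DAY * t_accel_max * 365 * PCT_AVOIDABLE  # ~10.7B
```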

Suffering Hours Eliminated:

\[ \begin{gathered} Hours_{suffer,max} \\ = DALYs_{max} \times Pct_{YLD} \times 8760 \\ = 565B \times 0.39 \times 8760 \\ = 1930T \end{gathered} \]
where:
\[ \begin{gathered} DALYs_{max} \\ = DALYs_{global,ann} \times Pct_{avoid,DALY} \times T_{accel,max} \\ = 2.88B \times 92.6\% \times 212 \\ = 565B \end{gathered} \]
where \(T_{accel,max} = 212\), with the full chain of sub-definitions (\(T_{accel}\), \(T_{first,SQ}\), \(T_{queue,SQ}\), \(N_{untreated}\), \(k_{capacity}\), \(N_{fundable,dFDA}\), \(Subsidies_{dFDA,ann}\), \(OPEX_{dFDA}\)) identical to the Lives Saved derivation above.
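
The suffering-hours chain reduces to two multiplications, using the \(T_{accel,max} = 212\) years derived for Lives Saved:

```python
# Suffering Hours derivation, transcribed from the equations above.
DALYS_GLOBAL_ANN = 2.88e9      # annual global burden of disease (DALYs)
PCT_AVOID_DALY = 0.926         # eventually-avoidable share of DALYs
T_ACCEL_MAX = 212              # years (from the Lives Saved derivation)
PCT_YLD = 0.39                 # morbidity (YLD) share of total DALYs
HOURS_PER_YEAR = 8_760

dalys_max = DALYS_GLOBAL_ANN * PCT_AVOID_DALY * T_ACCEL_MAX   # ~565B DALYs
hours_suffer = dalys_max * PCT_YLD * HOURS_PER_YEAR           # ~1.93 quadrillion hours
```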

Cost per DALY:

\[ \begin{gathered} Cost_{direct,DALY} \\ = \frac{NPV_{direct}}{DALYs_{max}} \\ = \frac{\$476B}{565B} \\ = \$0.842 \end{gathered} \]
where:
\[ NPV_{direct} = Funding_{ann} \times \frac{1 - (1+r)^{-T}}{r} \]
with \(T = T_{queue,dFDA} = 36\) years; a discount rate of \(r = 3\%\) reproduces the \(\$476B\) figure.
where:
\[ \begin{gathered} T_{queue,dFDA} \\ = \frac{T_{queue,SQ}}{k_{capacity}} \\ = \frac{443}{12.3} \\ = 36 \end{gathered} \]
where \(T_{queue,SQ} = 443\), \(k_{capacity} = 12.3\), and \(DALYs_{max} = 565B\) are as derived in the Lives Saved and Suffering Hours sections above.
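
The cost-per-DALY figure can be reproduced with a standard annuity NPV. The text does not state the discount rate; the 3% used here is an assumption (a conventional health-economics rate) that recovers the $476B NPV:

```python
# Cost per DALY via annuity NPV of the annual funding stream.
FUNDING_ANN = 21.8e9
T = 36            # years: T_queue,dFDA = 443 / 12.3
R = 0.03          # ASSUMED discount rate (not stated in the text);
                  # 3% reproduces the $476B NPV figure

npv_direct = FUNDING_ANN * (1 - (1 + R) ** -T) / R   # ~$476B
dalys_max = 565e9
cost_per_daly = npv_direct / dalys_max               # ~$0.84/DALY
```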

Interpreting These Figures: Cumulative, Not Annual

These are cumulative benefits over the entire acceleration period, not annual figures. This is the same methodology used to value smallpox eradication (program cost -> total future lives saved) and climate infrastructure investments: the one-time benefit of permanently accelerating medical progress.

Interpreting the Timeline Figure: The 212 years figure represents a discovery capacity model of medical research. Think of the therapeutic search space as the set of all untested drug-disease combinations, with trial capacity determining how fast we can explore it.

The Discovery Capacity Model

| Parameter | Status Quo | Proposed | Impact |
|---|---|---|---|
| Untreated diseases | 6,650 | 6,650 | Same backlog |
| Discovery rate (first treatments/year) | 15 | 185 | 12.3x faster |
| Time to explore search space | 443 years | 36 years | Centuries saved |
| Expected time to first treatment | ~222 years (443/2) | ~18 years (36/2) | 204 years earlier |

Why treatments arriving sooner saves lives:

A disease that would receive its first effective treatment in year 200 under the status quo might receive it in year 16 with the framework. During those 184 years, people die from that disease who could have been saved. The 10.7 billion deaths figure captures the cumulative lives saved across all diseases during their acceleration periods.

The two components:

  1. Discovery acceleration (204 years): Higher discovery rate explores the therapeutic space faster, moving treatments forward
  2. Efficacy lag elimination (8.2 years): Once discovered, treatments reach patients immediately instead of waiting for Phase II/III

Total timeline shift: 212 years = 204 years + 8.2 years

How the 12.3x capacity increase works: With $21.8B/year in trial funding at $929/patient (based on the ADAPTABLE trial), the framework enables 23.4 million annual trial participants vs. the current 1.9 million, increasing the trial completion rate from 15 diseases/year to 185 diseases/year. This removes the primary bottleneck to medical progress: currently fewer than 0.06% of willing patients can access trials, and over 9,500 proven-safe compounds (FDA-approved drugs + GRAS substances) remain untested for most conditions they could improve.

Capabilities

Core Model: An Open Coordination Protocol

The protocol enables existing systems (DCT platforms, EHRs, health apps) to:

For Treatment Providers (via compliant platforms):

  • Register treatments through any protocol-compliant system
  • Access aggregated effectiveness data across all participating platforms
  • Receive automatic liability coverage through protocol governance
  • Benefit from standardized outcome reporting across the ecosystem

For Patients (via participating apps and platforms):

  • Search any condition, see treatments ranked by real-world effectiveness
  • Join trials through their preferred platform or health app
  • Receive subsidies to offset participation costs
  • Report outcomes through any protocol-compliant interface
  • Access “Outcome Labels” showing what happened to similar patients

The Result: A self-sustaining research ecosystem where participating platforms collect outcome data using standardized schemas, and the protocol aggregates this into continuously-updated treatment rankings available to all participants.

A map of how sick people might meet doctors, if everyone agreed to use the same computer system. They won’t.


Key Capabilities

  • Treatment Rankings: Every treatment for every condition ranked by real-world effectiveness, updated continuously as new data arrives
  • Outcome Labels: Standardized “nutrition facts for drugs” showing effectiveness rates, side effects, and outcomes from real patients
  • Universal Trial Access: Any patient can participate from anywhere via phone/computer
  • Real-Time Surveillance: Continuous data on efficacy, side effects, and drug interactions
  • Federated Data Architecture: Data stays in source systems (EHR platforms, consumer health apps) while queries run across all sources
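
The federated architecture can be illustrated with a minimal sketch: each source system answers a query locally and returns only aggregate counts, so patient-level records never leave their origin. The node interface (`count_outcomes`) and the `InMemoryNode` stand-in are hypothetical, invented for illustration; the protocol does not specify this API.

```python
# Sketch of a federated effectiveness query under an ASSUMED node interface.
from dataclasses import dataclass


@dataclass
class AggregateResult:
    n_patients: int   # patients at this node matching treatment + condition
    n_improved: int   # of those, how many improved


class InMemoryNode:
    """Stand-in for an EHR or health-app data source (illustrative only)."""

    def __init__(self, n_patients: int, n_improved: int):
        self._result = AggregateResult(n_patients, n_improved)

    def count_outcomes(self, treatment: str, condition: str) -> AggregateResult:
        # A real node would run the query against its own records here.
        return self._result


def federated_effectiveness(nodes, treatment: str, condition: str) -> float:
    """Pool per-node aggregates without moving patient-level data."""
    total_patients = total_improved = 0
    for node in nodes:
        r = node.count_outcomes(treatment=treatment, condition=condition)
        total_patients += r.n_patients
        total_improved += r.n_improved
    return total_improved / max(total_patients, 1)
```

A ranking service could call `federated_effectiveness` per treatment-condition pair and sort the results; only the counts cross system boundaries.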

Potential Impact on the Status Quo

  • Speed of Trials: Reduced overhead and automated data capture compresses timelines.
  • Cost of Trials: Using existing healthcare encounters, telemedicine, and EHR data to drastically cut per-patient costs (modeled on pragmatic trials like Oxford RECOVERY and the US-based ADAPTABLE trial).
  • Scale & Scope: Enables testing many more drugs, off-label indications, unpatentable treatments, nutraceuticals, and personalized medicine approaches.
  • Innovation Incentives: Lower R&D costs increase profitability and encourage more entrants/innovation in the life sciences.

Addressing Key Concerns

Historical Validation: Pre-1962 Physician-Led Efficacy Testing

The decentralized physician-led efficacy model is not theoretical. It operated successfully from 1883 to 1960, providing 77 years of empirical validation.

How the pre-1962 system worked:

From 1883 to 1960, 144,000 physicians across America tested drug efficacy on real patients in routine clinical practice. The Journal of the American Medical Association (JAMA) compiled observational reports, leading medical experts peer-reviewed the aggregated data, and effective treatments received endorsement. This decentralized approach successfully identified antibiotics, vaccines, and countless surgical techniques.

Cost comparison demonstrates dramatic efficiency:

| Era | Cost per Drug (2024 USD) | System |
|---|---|---|
| Pre-1962 | $24.7M | Decentralized physician-led efficacy testing |
| Post-1962 | $2.6B | Centralized pharmaceutical company trials |
| Cost Increase | 105x | Regulatory mandate, not drug complexity |

The cost explosion began exactly when efficacy testing was centralized within pharmaceutical companies. This wasn’t a natural evolution of drug development or increasing drug complexity. The same types of compounds (small molecules, biologics) that cost $24.7M to develop in 1960 now cost $2.6B.

The thalidomide success story: Thalidomide is often cited as justification for the 1962 amendments, but the US already blocked thalidomide under existing 1938 safety regulations. The FDA’s Frances Kelsey refused approval based on inadequate safety data, not efficacy requirements. The 1962 amendments added efficacy proof requirements, not additional safety testing.

Implications: This model returns to decentralized physician-led efficacy testing but with modern automation (electronic health records, AI-assisted analysis, real-time data aggregation), targeting the same 50-95% cost reductions that the pre-1962 system achieved.

Why “Eventually Avoidable” Matters

A critical assumption in this analysis is that 92.6% of disease deaths are “eventually avoidable” - meaning they could be prevented with sufficient biomedical research over time.

Why this assumption is conservative:

  1. Historical trend: In 1900, life expectancy was ~47 years. Today it’s ~79. Most of that gain came from preventing deaths that were once considered inevitable (infectious disease, childhood mortality, cardiovascular disease).

  2. Known mechanisms exist: For most major disease categories, we understand enough biology to know that interventions are theoretically possible. Cancer is caused by specific mutations. Heart disease has identifiable risk factors. The question is finding the right treatments, not whether treatments can exist.

  3. Already-discovered treatments prove the space: 30% of approved drugs gain new indications, demonstrating that effective treatments exist but haven’t been found yet.

What if this assumption is wrong?

Even if only 25% of deaths are eventually avoidable (roughly a quarter of our 92.6% estimate), the framework still generates a 637:1 ROI from R&D savings alone, independent of health benefits. The health impact figures scale linearly with the avoidability assumption, but the cost-saving case doesn't depend on it.

Trial Funding Scenario

This analysis models a scenario with $21.8B/year allocated to pragmatic clinical trials. At $929/patient, this funds approximately 23.4 million patient-years annually.

With proper funding, we could solve the backlog in 36 years instead of 443. Your great-great-great-great-great-great-great-grandchildren will be so pleased.


On the Funding Assumption

This analysis demonstrates what becomes possible when the funding constraint is removed. The $21.8B/year figure is achievable through multiple mechanisms:

  • Philanthropic mega-donors: A single Gates Foundation-scale commitment could fund the protocol infrastructure and initial years
  • Sovereign wealth funds: Norway’s $1.4T fund or similar could view this as humanity-scale infrastructure
  • WHO/multilateral coordination: Comparable to GAVI or the Global Fund
  • Military reallocation: Less than 1% of global military spending ($2.72T/year)
  • Industry consortium: Pharma collectively spends $60B/year on trials; even 10% redirection exceeds this threshold

The returns justify the funding, not vice versa. At 637:1 ROI, any rational capital allocator would fund this if they believed the analysis.

Trial Capacity Impact: See The Discovery Capacity Model for the full comparison of status quo vs. framework trial capacity metrics.

The Untested Treatment Backlog:

Approximately 6,650 diseases have zero FDA-approved treatments. At current trial capacity (15 diseases/year), systematically testing all 9.5 million plausible pairings would take ~443 years. With 12.3x capacity, this drops to ~36 years.

Framework Costs (ROM Estimates)

Protocol Infrastructure Costs

This is protocol infrastructure, open standards and APIs that existing clinical trial systems adopt. This is analogous to HTTP enabling any browser to access any website, or FHIR enabling health data interoperability.

  • Upfront protocol/API build: $15-25M
  • Annual protocol operations: $5-12M
  • Integration onboarding fund: $20-50M (one-time, to support EHR and DCT platform adoption)
  • Total initiative: ~$40-75M upfront, $5-12M annual

Ecosystem Integration: The protocol leverages existing infrastructure - DCT platforms, major EHR systems, academic medical centers, and consumer health apps. These organizations have invested billions in infrastructure that becomes more valuable through protocol interoperability.

How much it costs to make all the medical computer systems talk to each other, assuming they want to. They don’t want to.


This section provides a Rough Order of Magnitude (ROM) cost estimate for the protocol infrastructure that enables decentralized clinical trials.

Upfront Build Costs (30 Months)

  1. Core Engineering & Development Effort:

    • Basis: ~75 FTEs × 2.5 years × $200k/FTE/year
    • Activities: Detailed design, Core framework development (API, storage, mapping/validation, auth), reference frontend, initial plugin interfaces, testing, documentation, initial deployment.

    The engineering cost is calculated as:

    \[ C_{\text{engineering}} = N_{\text{FTEs}} \times T \times C_{\text{FTE}} = 75 \times 2.5 \times \$200\text{k} = \$37.5\text{M} \]

    Where \(N_{\text{FTEs}} = 75\) is the number of full-time equivalents, \(T = 2.5\) years is the development timeline, and \(C_{\text{FTE}} = \$200\text{k}\) per FTE per year.

    • Estimated ROM: $35 - $40M
  2. Infrastructure Setup & Initial Cloud Costs:

    • Activities: Establishing cloud accounts, VPCs, Kubernetes cluster (EKS) setup, database provisioning (RDS/TimescaleDB), S3 buckets, CI/CD pipeline setup, initial IaC development (Terraform).
    • Costs: Includes initial compute/storage during development/testing, potential small upfront reservations.
    • Estimated ROM: $1 - $3 Million
  3. Software Licenses & Tooling (Initial):

    • Examples: Potential costs for monitoring tools (Datadog), security scanners (Snyk), specialized libraries, collaboration tools if not already covered.
    • Estimated ROM: $0.5 - $1 Million
  4. Compliance, Legal & Security (Initial Setup):

    • Activities: Initial HIPAA/GDPR compliance assessment, policy development, security architecture review, legal consultation for data sharing frameworks.
    • Estimated ROM: $1 - $2 Million

The total upfront cost is the sum of all components:

\[ C_0 = C_{\text{engineering}} + C_{\text{infrastructure}} + C_{\text{software}} + C_{\text{compliance}} \]

Where:

  • \(C_{\text{engineering}} = \$35 - \$40\) million (Core Engineering & Development)
  • \(C_{\text{infrastructure}} = \$1 - \$3\) million (Infrastructure Setup & Initial Cloud Costs)
  • \(C_{\text{software}} = \$0.5 - \$1\) million (Software Licenses & Tooling)
  • \(C_{\text{compliance}} = \$1 - \$2\) million (Compliance, Legal & Security)

Total Estimated Upfront Cost (ROM): $37.5 - $46M
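
The total range follows directly from summing the per-component (low, high) bounds listed above:

```python
# Upfront ROM components in $M, (low, high), summed per the C_0 formula above.
upfront = {
    "engineering": (35, 40),      # Core Engineering & Development
    "infrastructure": (1, 3),     # Infrastructure Setup & Initial Cloud
    "software": (0.5, 1),         # Software Licenses & Tooling
    "compliance": (1, 2),         # Compliance, Legal & Security
}

low = sum(lo for lo, _ in upfront.values())    # $37.5M
high = sum(hi for _, hi in upfront.values())   # $46M
```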

Note: This ROM estimate focuses only on the Core framework build effort and associated setup. It represents the foundational first step. A full global implementation requires significant additional investment in broader initiatives to achieve goals of global integration, legal harmonization, and massive scale. These crucial, follow-on costs are estimated separately in the Scenario Based ROM Estimates for Broader Initiative Costs section below and include:

  • Global EHR/Data Source Integration Effort: Building/buying connectors for thousands of systems worldwide.
  • Large-Scale Plugin Development: Funding the ecosystem of data importers, analysis tools, and visualization plugins.
  • International Legal/Regulatory Harmonization: Major diplomatic and legal efforts to create a global standard.
  • Global Rollout & Adoption: Costs associated with driving adoption and providing training worldwide.
  • Massive-Scale Infrastructure: Scaling hardware and cloud resources beyond initial targets to support millions of users.

The following sections provide ROM estimates for both the ongoing operational costs of the Core framework and for these essential broader initiatives.

Ecosystem Participants

The protocol creates value by enabling interoperability among existing clinical trial infrastructure. These organizations represent the ecosystem that would adopt and benefit from the protocol:

| Participant | Current Investment | How They Integrate | What They Gain |
|---|---|---|---|
| DCT Platforms | $1B+ collectively in VC funding | Adopt outcome reporting standards | Interoperability, larger patient pools, regulatory credibility |
| Major EHR Systems | Billions in infrastructure | Enable federated queries | New revenue from research queries, competitive differentiation |
| Pharma Sponsors | $60B/year on trials | Submit trials via compliant systems | Lower costs, faster enrollment, real-world evidence |
| Academic Medical Centers | Research infrastructure | Contribute federated data nodes | Research funding, publication opportunities |
| Consumer Health Apps | Consumer health data | Report patient outcomes to protocol | User engagement, clinical validation |

Why This Matters

DCT platforms (collectively raising over $1B in venture funding) and major EHR systems have already built the infrastructure for patient recruitment, data collection, and trial management. The protocol doesn’t replicate this work - it makes their existing investments more valuable by enabling data to flow across systems.

Annual Operational Costs (5M MAU Target Scale)

  1. Cloud Infrastructure Costs (AWS):

    • Components: EKS cluster, RDS/TimescaleDB hosting, S3 storage & requests, SQS messaging, API Gateway usage, Data Transfer (egress), CloudWatch logging/monitoring.
    • Basis: Highly dependent on actual usage patterns, data retrieval frequency, processing intensity. Assumes optimized resource usage.
    • Estimated ROM: $5 - $15 Million / year (Very sensitive to scale and usage patterns)
  2. Ongoing Engineering, Maintenance & Operations:

    • Team Size: Assume ~20 FTEs (SREs, DevOps, Core Maintainers, Security).
    • Basis: 20 FTEs * $200k/FTE/year

    The ongoing engineering cost is calculated as:

    \[ C_{\text{engineering}}^{\text{ops}} = N_{\text{FTEs}}^{\text{ops}} \times C_{\text{FTE}} = 20 \times \$200\text{k} = \$4\text{M}/\text{year} \]

    Where \(N_{\text{FTEs}}^{\text{ops}} = 20\) is the number of FTEs for ongoing operations.

    • Estimated ROM: $4 - $6 Million / year
  3. Software Licenses & Tooling (Ongoing):

    • Examples: Monitoring (Datadog/New Relic), Error Tracking (Sentry), Security Tools, potential DB license/support costs at scale.
    • Estimated ROM: $0.5 - $1.5 Million / year
  4. Compliance & Auditing (Ongoing):

    • Activities: Regular security audits (penetration tests, compliance checks), maintaining certifications, legal reviews.
    • Estimated ROM: $0.5 - $1 Million / year
  5. Support (User & Developer):

    • Activities: Tier 1/2 support for protocol participants and third-party plugin developers.
    • Estimated ROM: $1 - $3 Million / year (Scales with user base)

The total annual operational cost is the sum of all components:

\[ C_{\text{op}} = C_{\text{cloud}} + C_{\text{engineering}} + C_{\text{software}} + C_{\text{compliance}} + C_{\text{support}} \]

Where:

  • \(C_{\text{cloud}} = \$5 - \$15\) million/year (Cloud Infrastructure Costs)
  • \(C_{\text{engineering}} = \$4 - \$6\) million/year (Ongoing Engineering, Maintenance & Operations)
  • \(C_{\text{software}} = \$0.5 - \$1.5\) million/year (Software Licenses & Tooling)
  • \(C_{\text{compliance}} = \$0.5 - \$1\) million/year (Compliance & Auditing)
  • \(C_{\text{support}} = \$1 - \$3\) million/year (Support)

Total Estimated Annual Operations (Platform Only, ROM): $11 - $26.5 Million / year
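
The annual total and the per-user figures in the next section both follow from summing the component ranges:

```python
# Annual operational ROM components in $M/year, (low, high), per C_op above.
annual = {
    "cloud": (5, 15),         # Cloud Infrastructure
    "engineering": (4, 6),    # Ongoing Engineering, Maintenance & Ops
    "software": (0.5, 1.5),   # Software Licenses & Tooling
    "compliance": (0.5, 1),   # Compliance & Auditing
    "support": (1, 3),        # User & Developer Support
}

low = sum(lo for lo, _ in annual.values())     # $11M/year
high = sum(hi for _, hi in annual.values())    # $26.5M/year

MAU = 5_000_000
per_user = (low * 1e6 / MAU, high * 1e6 / MAU)  # ~$2.20 to ~$5.30 per user/year
```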

Marginal Cost Analysis per User

The 5M MAU target is an illustrative milestone used for these initial ROM estimates, not the ultimate goal for the framework, which is to support hundreds of millions or billions of users. At this initial scale, you can analyze the cost on a per-user basis.

  • Average Cost Range Per User (at 5M MAU):
    • Based on the total annual operational cost range of $11M - $26.5M, the average cost per user is: \[ \frac{\$11,000,000 \text{ to } \$26,500,000}{5{,}000{,}000 \text{ users}} = \mathbf{\$2.20 \text{ to } \$5.30 \text{ per user per year}} \]
  • Marginal Cost Per Additional User:
    • As federated data infrastructure, the framework has high fixed costs (protocol development, core engineering) but very low variable costs. Therefore, the marginal cost of supporting one additional participant is expected to be a small fraction of the average cost, likely pennies per year. This cost will decrease further as the protocol achieves greater economies of scale, making it exceptionally efficient at supporting a global participant base.

(Note: The underlying cloud infrastructure cost ($5M-$15M/year) is a top-down ROM estimate. A more granular, bottom-up analysis based on projected per-user storage, data transfer, and compute would provide further support for these figures and is a key area for future refinement of this model.)

Note on Participant Financial Contributions:

This cost estimate covers building the protocol infrastructure, not paying patients for trial participation. Trial participation costs would be handled separately through funding mechanisms (government grants, foundation funding, or sponsor payments). The protocol coordinates information exchange but doesn’t move money around directly.

This estimate excludes costs for governance structure and plugin development.

Scenario Based ROM Estimates for Broader Initiative Costs

This table presents point estimates for each scenario, with the overall range of possibilities captured by comparing the Best, Medium, and Worst Case columns.

| Component | Best Case (Upfront / Annual) | Medium Case (Upfront / Annual) | Worst Case (Upfront / Annual) | Key Assumptions & Variables Driving Range |
|---|---|---|---|---|
| Global Data Integration | $2M / ~$0 | $125M / $10M | $1.5B / $150M | Success of AI/automation, standards adoption, # of systems, vendor cooperation |
| Bounty & Prize Program | $1M (Prizes) / ~$0 | $15M (Bounties) / $2M | $50M (Major Bounties) / $10M | Success of organic ecosystem growth vs. need to incentivize critical plugin/tool development via bounties |
| Legal/Regulatory Harmonization | $1.5M / ~$0 | $60M / $3M | $300M / $30M | Effectiveness of AI legal tools, political will, complexity of global law |
| Global Rollout & Adoption | ~$0 / ~$0 | $12M / $3M | $125M / $30M | Need for training/support beyond protocol adoption, user interface complexity |
| Governance Operations | ~$0 / ~$0 | ~$1M / $0.3M | ~$6M / $1M | Automation level, need for audits, grants, core support staff |
| TOTAL | ~$4.5M / ~$0 | ~$213M / ~$18.3M | ~$1.98B+ / ~$221M+ | Total initiative cost excluding Core framework build/ops |

Interpretation

Even when pursuing efficient strategies, the potential cost for the full initiative for a decentralized framework (beyond the Core framework) varies dramatically based on real-world execution challenges. The Medium Case suggests upfront costs in the low hundreds of millions and annual costs in the low tens of millions, while the Worst Case pushes towards multi-billion dollar upfront figures and annual costs in the hundreds of millions, dominated by integration, plugin funding, and legal costs if automation and community efforts fall short.

Summary

Based on the detailed technical specification, a ROM estimate suggests:

  • Initial Core framework Build (~2.5 years): ~$37.5 - $46M
  • Annual Core framework Operations (at ~5M MAU scale): ~$11 - $26.5 Million (These framework operational costs are distinct from the financial flows of patient contributions and the NIH Trial Participation Cost Discount Fund, and also exclude plugin ecosystem costs not covered by protocol bounties)

The core protocol infrastructure build costs tens of millions; the broader global initiative (integration, legal frameworks, rollout) accounts for the larger cost estimates detailed in the scenario table above.

Benefit Analysis - Quantifying the Savings

This section quantifies the potential societal benefits of a decentralized FDA, focusing primarily on R&D cost savings and health outcome improvements.

Market Size and Impact

The global pharmaceutical and medical device R&D market is vast. Annual global spending on clinical trials is approximately $60B. Much of this spending could be made dramatically more efficient through protocol standardization. If such a framework enabled even a fraction of these trials to use pragmatic designs, the economic impact would be substantial.

  • Current Average Costs: Estimates suggest $2.6B to bring a new drug from discovery through FDA approval, spread across ~10 years.
  • Clinical Trial Phase Breakdown:
    • Phase I: $2 - $5 million/trial (smaller scale).
    • Phase II: $10 - $50 million/trial (depending on disease area).
    • Phase III: $100M - $500M/trial (large patient populations).
  • Per-Patient Phase III Costs: Often $41K (site fees, overhead, staff, monitoring, data management).

Decentralized Trial Costs Modeled on Pragmatic Trials

  • Oxford RECOVERY: Achieved ~$500 per patient. Key strategies included:

    1. Embedding trial protocols within routine hospital care.
    2. Minimizing overhead by leveraging existing staff/resources and electronic data capture.
    3. Focused, pragmatic trial designs.
  • Systematic Review Evidence: A systematic review of 64 embedded pragmatic clinical trials found a median cost per patient of $9,778. This confirms that low-cost execution is a replicable property of pragmatic design, not an anomaly of any single trial.

  • ADAPTABLE Trial (PCORnet): The US-based ADAPTABLE trial ($14M / 15.1 thousand patients = $929/patient) provides a more representative benchmark for pragmatic trial costs in typical healthcare settings without emergency conditions.

  • Framework Cost Projection: Our projections use $929/patient based on ADAPTABLE. Confidence interval ($500-$3,000) captures range from RECOVERY-like efficiency to complex chronic disease trials.

Input: Pragmatic Trial Cost Distribution

Probability Distribution: dFDA Pragmatic Trial Cost per Patient

This chart shows the assumed probability distribution for this parameter. The shaded region represents the 95% confidence interval where we expect the true value to fall.

  • Extrapolation to New System:

    • A well-integrated global framework could achieve $929 in many cases, especially for pragmatic or observational designs.
    • A ~44.1x cost reduction follows from comparing pragmatic per-patient costs ($929) against traditional Phase III per-patient costs ($41K).

    The cost reduction factor:

    \[ \begin{gathered} k_{reduce} \\ = \frac{Cost_{P3,pt}}{Cost_{pragmatic,pt}} \\ = \frac{\$41K}{\$929} \\ = 44.1 \end{gathered} \]

    The percentage reduction:

    \[ \begin{gathered} Reduce_{pct} \\ = 1 - \frac{Cost_{pragmatic,pt}}{Cost_{P3,pt}} \\ = 1 - \frac{\$929}{\$41K} \\ = 97.7\% \end{gathered} \]
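The two ratios above can be checked in a few lines of Python; the $41K and $929 figures are the document's own estimates:

```python
# Per-patient costs from the analysis above (document estimates).
COST_TRADITIONAL = 41_000   # traditional Phase III, per patient
COST_PRAGMATIC = 929        # ADAPTABLE-style pragmatic trial, per patient

k_reduce = COST_TRADITIONAL / COST_PRAGMATIC          # cost reduction factor
reduce_pct = 1 - COST_PRAGMATIC / COST_TRADITIONAL    # percentage reduction

print(f"k_reduce   = {k_reduce:.1f}x")    # ~44.1x
print(f"reduce_pct = {reduce_pct:.1%}")   # ~97.7%
```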

Scope of Cost Reduction

This reduction applies to trials amenable to pragmatic design - approximately 70% of Phase III trial volume by patient count (chronic disease management, comparative effectiveness, dose optimization). First-in-human studies, novel mechanism trials, and high-risk interventions retain traditional controlled protocols. The confidence interval ($500-$3,000/patient) captures this heterogeneity: simple comparative studies approach RECOVERY-level efficiency while complex trials remain closer to traditional costs. The headline 97.7% figure represents the weighted average across the addressable trial market, not a claim that every trial achieves this reduction.

Overall Savings

  1. By Reducing Per-Patient Costs

    • If a trial with 5,000 participants costs $929/patient, the total is ~$4.6 million, versus $200 - $600 million under traditional models.
    • This magnitude of savings can drastically reduce the total cost of clinical development.

    For a trial with \(x\) participants, the total cost savings is:

    \[ S_{\text{trial}}(x) = (c_t - c_d) \cdot x \]

    Where:

    • \(c_t\) is the traditional cost per patient ($41K)
    • \(c_d\) is the decentralized cost per patient ($929)

    For a trial with \(x = 5,000\) participants, savings are approximately:

    \[(\text{Traditional} - \text{Pragmatic}) \times 5{,}000 = (\$41\text{K} - \$929) \times 5{,}000 \approx \$200\text{M per trial}\]

  2. Volume of Trials & Speed

    • Faster, cheaper trials allow more drug candidates, off-label uses, nutraceuticals, and personalized dosing strategies to be tested.
    • Shorter development cycles reduce carrying costs and risk, further increasing ROI for sponsors.
  3. Regulatory Savings

    • A unified protocol standard with automated data audits cuts bureaucratic duplication across multiple countries, drastically lowering compliance costs.
  4. Transparent, Standardized Environment

    • The transparent nature of the protocol creates a standardized operating environment. Outcome data is visible across the ecosystem, enabling sponsors to benchmark their trial designs and optimize operational costs, further driving down R&D expenditure beyond the technical efficiencies of pragmatic trials.
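The per-trial savings relation in item 1 above can be sketched as a one-line function (again using the document's $41K and $929 per-patient figures):

```python
def trial_savings(participants: int,
                  cost_traditional: float = 41_000,
                  cost_pragmatic: float = 929) -> float:
    """Savings S(x) = (c_t - c_d) * x for a trial with x participants."""
    return (cost_traditional - cost_pragmatic) * participants

# A 5,000-patient trial saves roughly $200M versus the traditional model.
print(f"${trial_savings(5_000) / 1e6:.0f}M")
```

Because the relation is linear in enrollment, the savings scale directly with trial size.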

Economic Value of Earlier Access to Treatments

  • Faster approvals and access to effective treatments can save lives and improve quality of life.
  • Value of a Statistical Life (VSL): U.S. agencies use ~$10M per life saved.
  • QALY Framework: Standard willingness-to-pay is $100K–$150K per QALY gained.
  • These benefits are additive to direct cost savings and can be substantial depending on the scale of acceleration.

When people don’t die, economists can count them twice: once for being alive and again for feeling good. Death is very expensive.

Gross R&D Savings from a Decentralized FDA

  • Parameter: Percentage reduction in addressable clinical trial costs due to a decentralized FDA.
  • Central Estimate: 97.7% (44.1x)
  • Source/Rationale:
    • Decentralized Clinical Trials (DCTs) demonstrate significant cost reductions through reduced site management, travel, and streamlined data collection.
    • Empirical evidence: ADAPTABLE trial achieved $929/patient in routine US settings. Harvard meta-analysis of 108 pragmatic trials found median cost of $97/patient.
    • Our estimate: $929/patient (vs. $41K traditional). This deliberately uses ADAPTABLE as a conservative baseline; actual costs may be lower.
    • Confidence interval captures uncertainty from complex chronic disease trials to highly efficient EHR-integrated designs.

The annual gross R&D savings can be calculated as:

\[ S_{\text{annual}} = \alpha \cdot R_d \]

Where:

  • \(\alpha \in [0,1]\) is the cost reduction percentage (as decimal)
  • \(R_d\) = $60B global clinical trial spending

Base Case Calculation:

Using 97.7% cost reduction (pragmatic trial costs of $929 vs traditional $41K):

\[ \begin{gathered} Benefit_{RD,ann} \\ = Spending_{trials} \times Reduce_{pct} \\ = \$60B \times 97.7\% \\ = \$58.6B \end{gathered} \]
where:
\[ \begin{gathered} Reduce_{pct} \\ = 1 - \frac{Cost_{pragmatic,pt}}{Cost_{P3,pt}} \\ = 1 - \frac{\$929}{\$41K} \\ = 97.7\% \end{gathered} \]

Uncertainty Analysis - R&D Savings:

Monte Carlo Distribution: Decentralized Framework for Drug Assessment Annual Benefit: R&D Savings (10,000 simulations)

Simulation Results Summary: Decentralized Framework for Drug Assessment Annual Benefit: R&D Savings

| Statistic | Value |
| --- | --- |
| Baseline (deterministic) | $58.6B |
| Mean (expected value) | $58.8B |
| Median (50th percentile) | $57.8B |
| Standard Deviation | $7.66B |
| 90% Range (5th-95th percentile) | [$49.2B, $73.1B] |

The histogram shows the distribution of Decentralized Framework for Drug Assessment Annual Benefit: R&D Savings across 10,000 Monte Carlo simulations. The CDF (right) shows the probability of the outcome exceeding any given value, which is useful for risk assessment.

Post-Safety Efficacy Lag Elimination

Relative Magnitude

Efficacy lag elimination (8.2 years) is the smaller of the two timeline-shift components. Discovery acceleration from 12.3x trial capacity contributes 204 years, roughly 25 times the efficacy-lag contribution. This section details the efficacy lag component; see The Discovery Capacity Model for the dominant component.

One of the two health benefits of a decentralized FDA comes from eliminating the “efficacy lag”: the 8.2-year Phase II/III delay between Phase I safety verification and final approval. Critical: this does NOT eliminate safety testing. Phase I safety testing (2.3 years) is preserved.

We keep the part where we check if drugs kill you immediately. We skip the part where we wait 8 years to see if they work.

The Efficacy Lag Problem

A comprehensive quantitative analysis of post-safety efficacy lag costs (1962-2024) found:

  • Total Deaths: 416 million eventually-avoidable deaths over the 8.2-year efficacy lag (1962-2024)
  • Total DALYs: 7.94 billion Disability-Adjusted Life Years lost
  • Total Timeline Shift: A one-time 8.2-year acceleration in disease eradication

The analysis shows that for every 1 unit of harm the FDA prevents through safety testing, it generates 3,070 units of harm through efficacy delay (Type II vs. Type I error ratio).

For every person saved by making you wait for treatment, 3,070 people die from waiting. It’s called being careful.

Input: Efficacy Lag Duration Distribution

Probability Distribution: Regulatory Delay for Efficacy Testing Post-Safety Verification

This chart shows the assumed probability distribution for this parameter. The shaded region represents the 95% confidence interval where we expect the true value to fall.

How a Decentralized Framework Eliminates the Efficacy Lag

Such a framework provides provisional access post-Phase I via trial participation:

  1. Phase I Safety Testing: Maintained at 2.3 years (no change)
  2. Post-Phase I Access: Patients can access drugs through trial participation immediately after safety verification
  3. Continuous Efficacy Monitoring: Real-world evidence replaces the 8.2-year pre-market efficacy delay

This eliminates the post-safety efficacy lag (the Phase II/III portion) while preserving Phase I safety testing, by enabling real-world evidence collection during trials.

Quantified Benefits (Efficacy Lag Component Only)

What happens when you give people medicine 8 years earlier: they live longer and economists get to make bigger numbers.

The elimination of the post-safety efficacy lag by such a framework achieves a one-time 8.2-year timeline acceleration (the smaller component of the total 212-year shift):

\[ DALYs_{lag} = YLL_{lag} + YLD_{lag} = 7.07B + 873M = 7.94B \]
where:
\[ \begin{gathered} YLL_{lag} \\ = Deaths_{lag} \times (LE_{global} - Age_{death,delay}) \\ = 416M \times (79 - 62) \\ = 7.07B \end{gathered} \]
where:
\[ \begin{gathered} Deaths_{lag} \\ = T_{lag} \times Deaths_{disease,daily} \times 338 \\ = 8.2 \times 150{,}000 \times 338 \\ = 416M \end{gathered} \]
where:
\[ \begin{gathered} YLD_{lag} \\ = Deaths_{lag} \times T_{suffering} \times DW_{chronic} \\ = 416M \times 6 \times 0.35 \\ = 873M \end{gathered} \]

\[ \begin{gathered} Value_{lag} \\ = DALYs_{lag} \times Value_{QALY} \\ = 7.94B \times \$150K \\ = \$1190T \end{gathered} \]
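The chain of equations above can be reproduced directly; all inputs, including the 338 scaling factor, are the document's own parameters:

```python
# Document parameters for the efficacy-lag calculation.
T_LAG = 8.2             # years of post-safety efficacy delay
DEATHS_DAILY = 150_000  # daily deaths from disease
SCALE = 338             # document's scaling factor in the deaths formula
LIFE_EXPECTANCY, AGE_AT_DEATH = 79, 62
T_SUFFERING, DW_CHRONIC = 6, 0.35  # years of illness, chronic disability weight
VALUE_PER_DALY = 150_000

deaths = T_LAG * DEATHS_DAILY * SCALE               # ~416M deaths
yll = deaths * (LIFE_EXPECTANCY - AGE_AT_DEATH)     # ~7.07B years of life lost
yld = deaths * T_SUFFERING * DW_CHRONIC             # ~873M years lived with disability
dalys = yll + yld                                   # ~7.94B DALYs
value = dalys * VALUE_PER_DALY                      # ~$1,190T

print(f"{deaths/1e6:.0f}M deaths, {dalys/1e9:.2f}B DALYs, ${value/1e12:.0f}T")
```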

Efficacy Lag Elimination - Uncertainty Analysis

Monte Carlo Distribution: Total DALYs Lost from Disease Eradication Delay (10,000 simulations)

Simulation Results Summary: Total DALYs Lost from Disease Eradication Delay

| Statistic | Value |
| --- | --- |
| Baseline (deterministic) | 7.94 billion |
| Mean (expected value) | 8.05 billion |
| Median (50th percentile) | 7.89 billion |
| Standard Deviation | 2.31 billion |
| 90% Range (5th-95th percentile) | [4.43 billion, 12.1 billion] |

The histogram shows the distribution of Total DALYs Lost from Disease Eradication Delay across 10,000 Monte Carlo simulations. The CDF (right) shows the probability of the outcome exceeding any given value, which is useful for risk assessment.

This represents the health benefits from eliminating the post-safety efficacy lag only. The efficacy lag (8.2 years) is one of two components of the total timeline shift:

  1. Efficacy lag elimination: 8.2 years (this section)
  2. Trial capacity/discovery acceleration: 204 years (from 12.3x more trials)

The combined total timeline shift is 212 years, averting 10.7 billion deaths and 565 billion DALYs. See The Receipts for the combined totals.

For detailed methodology and assumptions on efficacy lag, see https://invisible-graveyard.warondisease.org.

Safety and Risk Management

Common concern: Won’t faster trials with lower costs compromise safety?

Traditional safety monitoring versus actually paying attention. One watches 2,000 people for a few months. The other watches everyone forever.

The evidence indicates the opposite. The proposed system provides superior safety monitoring compared to traditional trials across multiple dimensions.

Current System Limitations: Dangerously Blind to Real-World Harms

Current drug safety: please voluntarily report if you die. Proposed drug safety: computers notice when you die.

Caution

The current system is not safe - it just appears safe because harms go undetected.

The FDA’s voluntary adverse event reporting system (MedWatch) captures only 1-10% of actual adverse events. Long-term harms that develop gradually over years - the most insidious and deadly kind - are virtually invisible:

  • Vioxx (rofecoxib): Caused 38,000-55,000 cardiovascular deaths over 5 years before detection through voluntary reporting. With automated EHR pharmacovigilance, the elevated MI risk would have been detected within 6-12 months.
  • Hormone Replacement Therapy: Prescribed for decades before the Women’s Health Initiative revealed increased cancer and cardiovascular risk - risks invisible to voluntary reporting
  • Opioids: The overdose crisis killed 500,000+ Americans; the addiction signal was undetectable in short trials with cherry-picked populations
  • Avandia (rosiglitazone): 83,000 excess heart attacks estimated before restrictions; signal emerged years post-approval
  • Thalidomide (1950s): The disaster that prompted regulatory reform - yet the current system would still miss a thalidomide-like harm if it manifested gradually rather than as obvious birth defects

The current “safety” system doesn’t prevent harm - it delays detection until bodies accumulate. A global automated pharmacovigilance system with continuous EHR monitoring would detect these signals in months, not years or decades.

Specific limitations of the current system:

  • Voluntary adverse event reporting captures only 1-10% of actual events
  • Traditional Phase III trials test 300-3,000 patients for 3-12 months, then monitoring stops
  • Approximately 50% of trial results go unpublished, with publication bias favoring positive findings 3:1
  • 86.1% of patients excluded due to age, comorbidities, or medications - safety signals in these populations go undetected
  • Long-term effects (>1 year) rarely captured in pre-approval trials
  • No systematic mechanism to detect gradual harms that develop over years

Proposed System Safety Advantages

  1. Preserved Phase I Safety Testing: Rigorous Phase I safety testing (~2.3 years) is maintained. What changes is eliminating the 8.2-year efficacy delay after safety is verified.

The old way makes you wait 8 years to find out if your medicine works. The new way tells you immediately and keeps checking. Revolutionary.

  2. Continuous Population-Scale Monitoring: Pragmatic trials with 10,000-100,000+ participants monitored continuously through EHR integration detect safety problems faster than small, time-limited traditional trials. The RECOVERY trial identified both effective treatments (dexamethasone) and harmful ones (hydroxychloroquine) in under 100 days with 47,000 patients.

  3. Universal Data Collection: The system automatically collects and publishes outcome data on all treatments, eliminating the publication bias that currently hides negative results.

  4. Faster Adverse Event Detection: Automated EHR pharmacovigilance detects safety signals in months rather than the years required by voluntary reporting systems.

  5. Immediate Mass Notification: When safety signals are detected, all patients taking the drug receive automated alerts through patient portals, enabling immediate clinical review.

Detection Timeline Comparison

The following table quantifies how automated EHR monitoring would have changed outcomes for major drug safety disasters:

| Drug | Harm | Voluntary Reporting Detection | Projected EHR Detection | Deaths During Delay |
| --- | --- | --- | --- | --- |
| Vioxx | Cardiovascular events | 5 years (1999-2004) | 6-12 months | 38,000-55,000 |
| Avandia | Heart attacks | 8 years post-approval | 12-18 months | ~83,000 excess events |
| Opioids | Addiction/overdose | Decades | 2-3 years | 500,000+ deaths |
| HRT | Cancer/cardiovascular | 40+ years | 3-5 years | Unknown (millions affected) |

Why the difference? Automated EHR surveillance compares treated patients to matched controls continuously. The Vioxx cardiovascular signal would trigger statistical alerts after ~5,000 prescriptions (1.2% elevated event rate vs. expected background). Voluntary reporting requires doctors to notice, remember, and file paperwork, capturing only 1-10% of events.
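A minimal sketch of how continuous surveillance flags a signal: a two-proportion z-test comparing treated patients to matched controls. The event counts and cohort sizes below are illustrative assumptions, not Vioxx's actual figures.

```python
from math import sqrt

def detect_signal(events_treated: int, n_treated: int,
                  events_control: int, n_control: int,
                  z_threshold: float = 2.58) -> tuple[float, bool]:
    """Two-proportion z-test; flags a safety signal at ~99% confidence."""
    p1, p2 = events_treated / n_treated, events_control / n_control
    p_pool = (events_treated + events_control) / (n_treated + n_control)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_treated + 1 / n_control))
    z = (p1 - p2) / se
    return z, z > z_threshold

# Illustrative: 60 events among 5,000 treated vs. 30 among 5,000 matched controls
# (a doubled event rate) already crosses the alert threshold.
z, flagged = detect_signal(60, 5_000, 30, 5_000)
print(f"z = {z:.2f}, signal: {flagged}")
```

A production system would use sequential testing with corrections for repeated looks at the data, but the core comparison is this simple.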

Comparative Safety Surveillance

| Safety Dimension | Traditional Trials | Pragmatic Trials + EHR Monitoring |
| --- | --- | --- |
| Sample size | 300-3,000 patients | 10,000-100,000+ patients |
| Patient selection | 86.1% excluded | All volunteers (real-world populations) |
| Monitoring duration | 3-12 months (then stops) | Continuous via EHR (indefinite) |
| Publication rate | ~50% unpublished | 100% automatically published |
| Adverse event detection | Voluntary reporting (1-10% capture) | Automated surveillance (100% capture) |

Pooled Liability Insurance

The framework includes pooled liability coverage for sponsors, reducing individual company risk while ensuring patient compensation for adverse events. This removes a major barrier to trial participation for smaller sponsors while maintaining accountability.

Type II Error Dominance: For every person protected from an unsafe drug (Type I error prevention), 3,070 people die from delayed access to beneficial treatments (Type II errors). The current system prevents harm from unsafe drugs but causes roughly 3,070 deaths through delay for every death it prevents. Phase I safety testing is preserved in this framework; the ratio quantifies the cost of the efficacy lag, not safety testing. (See Historical Validation for the thalidomide context.)

ROI Analysis for a Decentralized FDA

Monte Carlo Distributions

Monte Carlo Distribution: ROI from Decentralized Framework for Drug Assessment R&D Savings Only (10,000 simulations)

Simulation Results Summary: ROI from Decentralized Framework for Drug Assessment R&D Savings Only

| Statistic | Value |
| --- | --- |
| Baseline (deterministic) | 637:1 |
| Mean (expected value) | 653:1 |
| Median (50th percentile) | 645:1 |
| Standard Deviation | 58.4:1 |
| 90% Range (5th-95th percentile) | [569:1, 790:1] |

The histogram shows the distribution of ROI from Decentralized Framework for Drug Assessment R&D Savings Only across 10,000 Monte Carlo simulations. The CDF (right) shows the probability of the outcome exceeding any given value, which is useful for risk assessment.

Monte Carlo Distribution: NPV Net Benefit (R&D Only) (10,000 simulations)

Simulation Results Summary: NPV Net Benefit (R&D Only)

| Statistic | Value |
| --- | --- |
| Baseline (deterministic) | $389B |
| Mean (expected value) | $390B |
| Median (50th percentile) | $383B |
| Standard Deviation | $50.7B |
| 90% Range (5th-95th percentile) | [$326B, $484B] |

The histogram shows the distribution of NPV Net Benefit (R&D Only) across 10,000 Monte Carlo simulations. The CDF (right) shows the probability of the outcome exceeding any given value, which is useful for risk assessment.

Monte Carlo Distribution: Decentralized Framework for Drug Assessment Total NPV Cost (10,000 simulations)

Simulation Results Summary: Decentralized Framework for Drug Assessment Total NPV Cost

| Statistic | Value |
| --- | --- |
| Baseline (deterministic) | $611M |
| Mean (expected value) | $609M |
| Median (50th percentile) | $595M |
| Standard Deviation | $127M |
| 90% Range (5th-95th percentile) | [$415M, $853M] |

The histogram shows the distribution of Decentralized Framework for Drug Assessment Total NPV Cost across 10,000 Monte Carlo simulations. The CDF (right) shows the probability of the outcome exceeding any given value, which is useful for risk assessment.

Probability of Exceeding Threshold: ROI from Decentralized Framework for Drug Assessment R&D Savings Only

This exceedance probability chart shows the likelihood that ROI from Decentralized Framework for Drug Assessment R&D Savings Only will exceed any given threshold. Higher curves indicate more favorable outcomes with greater certainty.

Research Acceleration Mechanism

The 12.3x research acceleration transforms our ability to explore the vast therapeutic space where undiscovered cures already exist.

If you test 12 drugs at once instead of one drug at a time, you learn about 12 drugs. Mathematics.

The Unexplored Therapeutic Frontier

The fundamental problem isn’t that cures are hard to discover. It’s that we’re barely looking:

  • 9.5 million plausible drug-disease pairings exist (9.5 thousand safe compounds × 1 thousand diseases)
  • Only 0.342% of these combinations have been tested - 99.7% remains unexplored
  • Only 12% of the human interactome has ever been targeted by drugs
  • 30% of approved drugs gain new indications, proving undiscovered uses exist

\[ \begin{gathered} Ratio_{explore} \\ = \frac{N_{tested}}{N_{combos}} \\ = \frac{32{,}500}{9.5M} \\ = 0.342\% \end{gathered} \]
where:
\[ \begin{gathered} N_{combos} \\ = N_{safe} \times N_{diseases,trial} \\ = 9{,}500 \times 1{,}000 \\ = 9.5M \end{gathered} \]
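The exploration ratio above, and the time needed to clear the full combination space, follow from the document's inputs; a brief sketch (the 12.3x multiplier applied to the full 9.5M space is a derived illustration, not a figure stated in the text):

```python
N_SAFE = 9_500          # proven-safe compounds (document estimate)
N_DISEASES = 1_000      # diseases amenable to trials
N_TESTED = 32_500       # drug-disease pairings tested to date
TRIALS_PER_YEAR = 3_300 # current annual trial capacity

n_combos = N_SAFE * N_DISEASES              # 9.5M plausible pairings
ratio = N_TESTED / n_combos                 # ~0.342% explored
years_current = n_combos / TRIALS_PER_YEAR  # ~2,880 years to test everything
years_dfda = years_current / 12.3           # same space at 12.3x capacity

print(f"explored: {ratio:.3%}; full space: {years_current:,.0f} yrs "
      f"-> {years_dfda:,.0f} yrs at 12.3x")
```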

The cures likely already exist among tested-safe compounds. We just haven’t looked. See The Untapped Therapeutic Frontier for detailed analysis of this exploration gap, and The Discovery Capacity Model for the quantitative framework showing how 12.3x trial capacity produces the 212 years timeline shift.

We’ve tested 0.342% of possible treatments. The remaining 99.658% might cure everything or nothing, but we’ll never know at this rate.

Addressing the Returns Question: Diminishing, Linear, or Compounding?

A common objection is that “more trials won’t produce proportionally more cures” - the diminishing returns hypothesis. This deserves serious consideration, but the evidence suggests the opposite may be true.

Why Diminishing Returns Is Unlikely (We Haven’t Started Looking)

The diminishing returns objection assumes we’ve exhausted low-hanging fruit. But we’ve barely begun:

  1. Single compounds alone: 9.5 million possible combinations of known safe compounds × diseases. At current trial capacity, systematically testing these would take ~2,880 years - we would not finish until roughly the year 4900.

All the medicine we’ve discovered so far, next to all the medicine we haven’t bothered looking for yet. It’s mostly the second one.

\[ \begin{gathered} T_{explore,safe} \\ = \frac{N_{combos}}{Trials_{ann,curr}} \\ = \frac{9.5M}{3{,}300} \\ = 2{,}880 \end{gathered} \]
where:
\[ \begin{gathered} N_{combos} \\ = N_{safe} \times N_{diseases,trial} \\ = 9{,}500 \times 1{,}000 \\ = 9.5M \end{gathered} \]

  2. Combination therapies expand the space: Modern medicine relies on multi-drug regimens (oncology, HIV, cardiology). Pairwise combinations of safe compounds create 45.1 billion possibilities, requiring 13.7 million years at current pace - longer than Homo sapiens has existed.

  3. Repurposing success proves cures exist: 30% of approved drugs gain new indications, demonstrating the unexplored space contains discoveries.

  4. Most biology is untargeted: Only 12% of the human interactome has been targeted. We’re ignoring 88% of our own biology.

  5. RECOVERY found treatments in months: The Oxford trial discovered multiple effective COVID treatments rapidly because it looked systematically.

You cannot have diminishing returns when you’ve barely started.

Mathematical Framework: When Would Diminishing Returns Dominate?

We can formalize the competing models to identify when diminishing returns would actually matter.

Model 1: Linear (Baseline)

\[ T_{discovered} = k_0 \cdot N_{trials} \]

Where \(k_0\) is the constant discovery rate (effective treatments per trial). This assumes the therapeutic space is sampled uniformly at random.

Model 2: Diminishing Returns (Pessimistic)

As we exhaust the therapeutic space, the hit rate decreases:

\[ k_{dim}(s) = k_0 \cdot (1 - s) \]

Where \(s = S_{explored}/S_{total}\) is the fraction of therapeutic space already tested. At current exploration (\(s < 0.01\)), this gives \(k_{dim} \approx 0.99 \cdot k_0\), virtually identical to linear.

Model 3: Learning/Compounding (Optimistic)

Each trial improves our biological models, increasing future hit rates:

\[ k_{learn}(n) = k_0 \cdot \left(1 + \alpha \cdot \ln(1 + n)\right) \]

Where \(\alpha\) is the learning coefficient and \(n\) is cumulative trials completed. Even modest learning (\(\alpha = 0.1\)) with 100,000 trials yields \(k_{learn} \approx 2.15 \cdot k_0\).

Model 4: Combined (Realistic)

Both effects operate simultaneously:

\[ k_{combined}(s, n) = k_0 \cdot (1 - s) \cdot \left(1 + \alpha \cdot \ln(1 + n)\right) \]

The Crossover Point: When Does Depletion Dominate Learning?

Diminishing returns dominates when the depletion factor exceeds the learning factor. Solving for the critical exploration fraction:

\[ s_{crossover} = 1 - \frac{1}{1 + \alpha \cdot \ln(1 + n)} \]

| Learning Coefficient (\(\alpha\)) | Trials Completed (\(n\)) | Crossover Exploration (\(s_{crossover}\)) |
| --- | --- | --- |
| 0.05 (weak) | 100,000 | 37% |
| 0.10 (modest) | 100,000 | 53% |
| 0.15 (strong) | 100,000 | 63% |

Interpretation: Even with weak learning effects, diminishing returns only dominates after exploring 37%+ of therapeutic space. With modest learning, the crossover occurs at 53%+ exploration.

Timeline to Crossover:

At current exploration of 0.342% (<1%), reaching the 53% crossover would require ~1,500 years at current pace or ~125 years with the framework. For combination therapies (45.1 billion combinations), reaching 53% exploration would take millions of years.
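The crossover table above can be reproduced directly from the combined model; the α and n values are the scenario assumptions from the table:

```python
from math import log

def crossover_exploration(alpha: float, n: int) -> float:
    """Exploration fraction s at which the depletion factor (1 - s)
    outweighs the learning factor (1 + alpha * ln(1 + n))."""
    return 1 - 1 / (1 + alpha * log(1 + n))

for alpha in (0.05, 0.10, 0.15):
    s = crossover_exploration(alpha, 100_000)
    print(f"alpha={alpha:.2f}: crossover at {s:.1%} explored")
```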

Conclusion: For any plausible planning horizon, learning effects dominate. Diminishing returns is a theoretical concern for civilizations operating on multi-century timescales, not a practical constraint for the next 100+ years of medical research.

The Conservative Default: Linear Assumption

Given genuine uncertainty about whether returns are diminishing or compounding, our analysis assumes a linear relationship between trial capacity and treatment discoveries. This is the conservative choice because:

  1. It’s the neutral prior: Without strong evidence for either diminishing or compounding returns, linearity is the least assumptive model
  2. It may underestimate benefits: If platform technologies and learning effects produce compounding returns, our projections are conservative
  3. It’s empirically defensible: The RECOVERY trial’s success (multiple treatments found with increased search) is consistent with linear or better returns
  4. It avoids both failure modes: Assuming diminishing returns would justify inaction; assuming compounding returns might overstate benefits. Linearity is the responsible middle ground

Bottom line: Even under the conservative linear assumption, 12.3x more trials produces 12.3x more discoveries from a space that is 99%+ unexplored. The expected value calculation remains overwhelmingly positive.

Three guesses about whether cures get harder to find over time, easier to find, or stay the same. We picked the boring one.

Funding Level vs. Cost-Effectiveness

While the analysis above addresses whether trials produce proportionally more cures, a separate question is how funding level affects cost per DALY averted. The acceleration formula \(T_{accel} = T_{baseline} \times (1 - 1/k)\), where \(k\) is the trial capacity multiplier, produces natural diminishing returns: each additional dollar buys less acceleration as \(k\) grows. Figure 48.1 shows cost per DALY rising with funding, while Figure 48.2 shows total DALYs approaching an asymptotic ceiling.
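A sketch of the acceleration formula at the proposed funding level. Interpreting \(T_{baseline}\) as the mean treatment arrival time (half the 443-year queue) is an assumption on our part; it reproduces the verification figures to within rounding.

```python
K = 12.3              # trial capacity multiplier at $21.8B/yr
QUEUE_BASELINE = 443  # years to clear the testing backlog at current capacity
EFFICACY_LAG = 8.2    # years saved by post-Phase-I access (fixed baseline benefit)

queue_new = QUEUE_BASELINE / K     # ~36 years to clear the queue
# Assumption: T_baseline is the mean treatment arrival time (half the queue).
t_baseline = QUEUE_BASELINE / 2
accel = t_baseline * (1 - 1 / K)   # treatment acceleration, ~204 years
total_shift = accel + EFFICACY_LAG # combined timeline shift, ~212 years

print(f"queue: {queue_new:.1f} yrs; acceleration: {accel:.0f} yrs; "
      f"total shift: {total_shift:.0f} yrs")
```

The small residual versus the document's 203.7-year figure comes from rounding of the 443-year baseline.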

Figure 48.1: Cost per DALY averted as a function of annual funding level. Uses undiscounted total cost (upfront platform build + annual funding over queue clearance period) because annual government appropriations are not a discountable capital allocation. The curve is monotonically decreasing: more funding always improves efficiency, but with strongly diminishing returns.

Verification at proposed funding ($21.8B/yr):
  Trial capacity multiplier: 12.3x
  Queue clearance: 36.0 years
  Treatment acceleration: 203.7 years
  Total timeline shift: 211.9 years
  DALYs averted: 565.2B
  Upfront cost: $270M
  Total undiscounted cost: $784.2B
  Cost per DALY: $1.39
  Asymptotic floor: $1.28/DALY (total queue cost: $783B)
Figure 48.2: Total DALYs averted as a function of annual funding level. The curve flattens as funding approaches the asymptotic ceiling where all avoidable diseases receive accelerated treatment. Efficacy lag elimination (8.2 years) provides a fixed baseline benefit independent of funding scale.

Verification at proposed funding ($21.8B/yr):
  Trial capacity multiplier: 12.3x
  Queue clearance: 36.0 years
  Treatment acceleration: 203.7 years
  Total timeline shift: 211.9 years
  DALYs averted: 565B
  Ceiling DALYs: 613B
  Utilization: 92.2% of ceiling
  Efficacy lag baseline: 22B DALYs

The proposed funding level ($21.8B/year) sits in the steep part of the DALY curve, where each marginal dollar buys the most additional health impact. Even at much higher funding levels, the cost per DALY remains far below the bed nets benchmark ($89/DALY).
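Under the stated assumptions, the funding-level curve can be reproduced in a few lines. Parameter values are copied from the verification blocks above; the function name and structure are illustrative, not taken from any official model code.

```python
# Sketch of the cost-per-DALY curve described above (undiscounted,
# as in Figure 48.1). All constants come from the verification
# blocks; this is a check of the published arithmetic, not a model.

COST_PER_PATIENT = 929          # $/patient, ADAPTABLE benchmark
CURRENT_SLOTS = 1.9e6           # current trial participants/year
QUEUE_SQ = 6_650 / 15           # ~443 years to clear backlog today
DALYS_ANNUAL = 2.88e9 * 0.926   # eventually avoidable DALYs/year
EFFICACY_LAG = 8.2              # years, fixed baseline benefit
UPFRONT = 270e6                 # platform build cost
OPEX = 40e6                     # annual operations

def cost_per_daly(funding):
    """Undiscounted total cost per DALY at a given annual funding."""
    k = (funding - OPEX) / COST_PER_PATIENT / CURRENT_SLOTS
    t_shift = (QUEUE_SQ / 2) * (1 - 1 / k) + EFFICACY_LAG
    queue_years = QUEUE_SQ / k
    total_cost = UPFRONT + funding * queue_years
    return total_cost / (DALYS_ANNUAL * t_shift)

for f in (5e9, 21.8e9, 100e9):
    print(f"${f / 1e9:.1f}B/yr -> ${cost_per_daly(f):.2f}/DALY")
# At the proposed $21.8B/yr this reproduces the ~$1.39/DALY figure.
```

Higher funding clears the (roughly fixed-cost) queue faster while adding acceleration years, so the curve declines monotonically toward the asymptotic floor.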

Data Sources and Methodological Notes

  1. Cost of Current Drug Development:

    • Tufts Center for the Study of Drug Development estimates are often cited at $1.0 - $2.6 billion per approved drug.
    • Journal articles and industry reports (IQVIA, Deloitte) also highlight $2+ billion figures.
    • Oxford RECOVERY trial: $500/patient (exceptional NHS/COVID conditions). ADAPTABLE trial: $929/patient (typical US pragmatic trial). Our projections use $929/patient based on ADAPTABLE; confidence interval captures uncertainty.
  2. ROI Calculation Method:

    • Simplified approach comparing aggregated R&D spending to potential savings.
    • Does not account for intangible factors (opportunity costs, IP complexities, time-value of money) beyond a basic Net Present Value (NPV) perspective.
  3. Scale & Adoption Rates:

    • The largest uncertainties revolve around uptake speed, regulatory harmonization, and participant willingness.
    • Projections assume widespread adoption by major pharmaceutical companies and global health authorities.
  4. Secondary Benefits:

    • Quality-of-life improvements, lower healthcare costs from faster drug innovation, and potentially fewer adverse events from earlier detection.
    • These are positive externalities that can significantly enlarge real ROI from a societal perspective.

Conclusion

A decentralized FDA replaces the centralized regulatory approach with a global, distributed model, reducing per-patient clinical trial costs 44.1-fold ($41K to $929), accelerating approval timelines, and expanding therapeutic coverage to neglected diseases. The 10-year NPV total cost is $611M (upfront plus discounted annual operations), generating $389B in net R&D savings. Given that the pharmaceutical industry collectively spends $60B annually on clinical trials, a 97.7% reduction yields an ROI of 637:1 at scale.

Beyond direct savings, the effects on medical progress are substantial: expanded therapeutic exploration, real-time treatment effectiveness rankings, and research on off-patent treatments that currently lack commercial incentives. With appropriate privacy protections and international coordination, this framework enables evidence-based personalized medicine at global scale.

Disclaimer

All figures in this document are estimates based on publicly available information, industry benchmarks, and simplifying assumptions. Real-world costs, savings, and ROI will vary greatly depending on the scope of implementation, the speed of adoption, regulatory cooperation, and numerous other factors. Nonetheless, this high-level exercise illustrates the substantial potential gains from a global, decentralized, continuously learning clinical trial and regulatory ecosystem.

Verification: Complete Derivation Chains

For economist verification, this section provides complete derivation chains for all headline figures. Each metric traces back to primary data sources.

Trial Capacity Multiplier Derivation

If you spend more money on trials, you can run more trials. This required a diagram.


Result: 12.3x

Step 1: Current trial capacity

Step 2: Capacity with $21.8B/year

Step 3: Calculate multiplier

\[ \begin{gathered} k_{capacity} \\ = \frac{N_{fundable,dFDA}}{Slots_{curr}} \\ = \frac{23.4M}{1.9M} \\ = 12.3 \end{gathered} \]
where:
\[ \begin{gathered} N_{fundable,dFDA} \\ = \frac{Subsidies_{dFDA,ann}}{Cost_{pragmatic,pt}} \\ = \frac{\$21.8B}{\$929} \\ = 23.4M \end{gathered} \]
where:
\[ \begin{gathered} Subsidies_{dFDA,ann} \\ = Funding_{dFDA,ann} - OPEX_{dFDA} \\ = \$21.8B - \$40M \\ \approx \$21.8B \end{gathered} \]
where:
\[ \begin{gathered} OPEX_{dFDA} \\ = Cost_{platform} + Cost_{staff} + Cost_{infra} \\ + Cost_{regulatory} + Cost_{community} \\ = \$15M + \$10M + \$8M + \$5M + \$2M \\ = \$40M \end{gathered} \]

Timeline Shift Derivation

Result: 212 years

Components:

| Component | Value | Source |
| --- | --- | --- |
| Efficacy Lag Elimination | 8.2 years | FDA drug approval timeline data |
| Discovery Acceleration | 204 years | Capacity vs. backlog model |
| Combined Total | 212 years | Sum of components |

\[ T_{accel,max} = T_{accel} + T_{lag} = 204 + 8.2 = 212 \]
where:
\[ \begin{gathered} T_{accel} \\ = T_{first,SQ} \times \left(1 - \frac{1}{k_{capacity}}\right) \\ = 222 \times \left(1 - \frac{1}{12.3}\right) \\ = 204 \end{gathered} \]
where:
\[ \begin{gathered} T_{first,SQ} \\ = T_{queue,SQ} \times 0.5 \\ = 443 \times 0.5 \\ = 222 \end{gathered} \]
where:
\[ \begin{gathered} T_{queue,SQ} \\ = \frac{N_{untreated}}{Treatments_{new,ann}} \\ = \frac{6{,}650}{15} \\ = 443 \end{gathered} \]
where:
\[ \begin{gathered} N_{untreated} \\ = N_{rare} \times 0.95 \\ = 7{,}000 \times 0.95 \\ = 6{,}650 \end{gathered} \]
where:
\[ \begin{gathered} k_{capacity} \\ = \frac{N_{fundable,dFDA}}{Slots_{curr}} \\ = \frac{23.4M}{1.9M} \\ = 12.3 \end{gathered} \]
where:
\[ \begin{gathered} N_{fundable,dFDA} \\ = \frac{Subsidies_{dFDA,ann}}{Cost_{pragmatic,pt}} \\ = \frac{\$21.8B}{\$929} \\ = 23.4M \end{gathered} \]
where:
\[ \begin{gathered} Subsidies_{dFDA,ann} \\ = Funding_{dFDA,ann} - OPEX_{dFDA} \\ = \$21.8B - \$40M \\ \approx \$21.8B \end{gathered} \]
where:
\[ \begin{gathered} OPEX_{dFDA} \\ = Cost_{platform} + Cost_{staff} + Cost_{infra} \\ + Cost_{regulatory} + Cost_{community} \\ = \$15M + \$10M + \$8M + \$5M + \$2M \\ = \$40M \end{gathered} \]
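The chain above collapses into a few lines of arithmetic. This is a verification of the published figures, not model code; every constant is copied from the derivation.

```python
# Arithmetic check of the timeline-shift derivation chain.

N_RARE = 7_000                 # known rare diseases
UNTREATED_FRAC = 0.95          # share lacking approved treatments
NEW_TREATMENTS_PER_YEAR = 15   # current discovery rate
K_CAPACITY = 12.3              # trial capacity multiplier
EFFICACY_LAG_YEARS = 8.2       # post-safety efficacy delay

n_untreated = N_RARE * UNTREATED_FRAC               # 6,650 diseases
t_queue_sq = n_untreated / NEW_TREATMENTS_PER_YEAR  # ~443 years
t_first_sq = t_queue_sq * 0.5                       # ~222 years (mean wait)
t_accel = t_first_sq * (1 - 1 / K_CAPACITY)         # ~204 years
t_total = t_accel + EFFICACY_LAG_YEARS              # ~212 years

print(f"queue {t_queue_sq:.0f}y, accel {t_accel:.0f}y, total {t_total:.0f}y")
```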

Lives Saved Derivation

When you cure diseases 212 years early, you save everyone who would have died in those 212 years. It adds up.


Result: 10.7 billion deaths

Step 1: Daily mortality from eventually avoidable causes

  • Global disease deaths: 150,000/day4
  • Eventually avoidable percentage: 92.6%

Step 2: Timeline shift period

Step 3: Calculate lives saved

\[ \begin{gathered} Lives_{max} \\ = Deaths_{disease,daily} \times 365 \times Pct_{avoid} \times T_{accel,max} \\ = 150{,}000 \times 365 \times 92.6\% \times 212 \\ = 10.7B \end{gathered} \]
where:
\[ T_{accel,max} = T_{accel} + T_{lag} = 204 + 8.2 = 212 \]
where:
\[ \begin{gathered} T_{accel} \\ = T_{first,SQ} \times \left(1 - \frac{1}{k_{capacity}}\right) \\ = 222 \times \left(1 - \frac{1}{12.3}\right) \\ = 204 \end{gathered} \]
where:
\[ \begin{gathered} T_{first,SQ} \\ = T_{queue,SQ} \times 0.5 \\ = 443 \times 0.5 \\ = 222 \end{gathered} \]
where:
\[ \begin{gathered} T_{queue,SQ} \\ = \frac{N_{untreated}}{Treatments_{new,ann}} \\ = \frac{6{,}650}{15} \\ = 443 \end{gathered} \]
where:
\[ \begin{gathered} N_{untreated} \\ = N_{rare} \times 0.95 \\ = 7{,}000 \times 0.95 \\ = 6{,}650 \end{gathered} \]
where:
\[ \begin{gathered} k_{capacity} \\ = \frac{N_{fundable,dFDA}}{Slots_{curr}} \\ = \frac{23.4M}{1.9M} \\ = 12.3 \end{gathered} \]
where:
\[ \begin{gathered} N_{fundable,dFDA} \\ = \frac{Subsidies_{dFDA,ann}}{Cost_{pragmatic,pt}} \\ = \frac{\$21.8B}{\$929} \\ = 23.4M \end{gathered} \]
where:
\[ \begin{gathered} Subsidies_{dFDA,ann} \\ = Funding_{dFDA,ann} - OPEX_{dFDA} \\ = \$21.8B - \$40M \\ \approx \$21.8B \end{gathered} \]
where:
\[ \begin{gathered} OPEX_{dFDA} \\ = Cost_{platform} + Cost_{staff} + Cost_{infra} \\ + Cost_{regulatory} + Cost_{community} \\ = \$15M + \$10M + \$8M + \$5M + \$2M \\ = \$40M \end{gathered} \]
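As a sanity check on the lives-saved arithmetic: the avoidability percentage enters as 365 × 0.926 ≈ 338 effective (avoidable-death) days per calendar year.

```python
# Lives-saved arithmetic check, using the derivation's constants.

DEATHS_PER_DAY = 150_000       # global disease deaths/day
AVOIDABLE_FRAC = 0.926         # eventually avoidable share
TIMELINE_SHIFT_YEARS = 212     # T_accel,max from the derivation

lives = DEATHS_PER_DAY * 365 * AVOIDABLE_FRAC * TIMELINE_SHIFT_YEARS
print(f"{lives / 1e9:.1f} billion lives")   # -> 10.7 billion lives
```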

Cost per DALY Derivation

Result: $0.842

Step 1: Total protocol infrastructure cost (10-year NPV)

Step 2: DALYs averted

Step 3: Calculate cost per DALY

\[ \begin{gathered} Cost_{direct,DALY} \\ = \frac{NPV_{direct}}{DALYs_{max}} \\ = \frac{\$476B}{565B} \\ = \$0.842 \end{gathered} \]
where:
\[ NPV_{direct} = Funding_{ann} \times \frac{1 - (1+r)^{-T}}{r}, \quad T = T_{queue,dFDA} \]
where:
\[ \begin{gathered} T_{queue,dFDA} \\ = \frac{T_{queue,SQ}}{k_{capacity}} \\ = \frac{443}{12.3} \\ = 36 \end{gathered} \]
where:
\[ \begin{gathered} T_{queue,SQ} \\ = \frac{N_{untreated}}{Treatments_{new,ann}} \\ = \frac{6{,}650}{15} \\ = 443 \end{gathered} \]
where:
\[ \begin{gathered} N_{untreated} \\ = N_{rare} \times 0.95 \\ = 7{,}000 \times 0.95 \\ = 6{,}650 \end{gathered} \]
where:
\[ \begin{gathered} k_{capacity} \\ = \frac{N_{fundable,dFDA}}{Slots_{curr}} \\ = \frac{23.4M}{1.9M} \\ = 12.3 \end{gathered} \]
where:
\[ \begin{gathered} N_{fundable,dFDA} \\ = \frac{Subsidies_{dFDA,ann}}{Cost_{pragmatic,pt}} \\ = \frac{\$21.8B}{\$929} \\ = 23.4M \end{gathered} \]
where:
\[ \begin{gathered} Subsidies_{dFDA,ann} \\ = Funding_{dFDA,ann} - OPEX_{dFDA} \\ = \$21.8B - \$40M \\ \approx \$21.8B \end{gathered} \]
where:
\[ \begin{gathered} OPEX_{dFDA} \\ = Cost_{platform} + Cost_{staff} + Cost_{infra} \\ + Cost_{regulatory} + Cost_{community} \\ = \$15M + \$10M + \$8M + \$5M + \$2M \\ = \$40M \end{gathered} \]
where:
\[ \begin{gathered} DALYs_{max} \\ = DALYs_{global,ann} \times Pct_{avoid,DALY} \times T_{accel,max} \\ = 2.88B \times 92.6\% \times 212 \\ = 565B \end{gathered} \]
where:
\[ T_{accel,max} = T_{accel} + T_{lag} = 204 + 8.2 = 212 \]
where:
\[ \begin{gathered} T_{accel} \\ = T_{first,SQ} \times \left(1 - \frac{1}{k_{capacity}}\right) \\ = 222 \times \left(1 - \frac{1}{12.3}\right) \\ = 204 \end{gathered} \]
where:
\[ \begin{gathered} T_{first,SQ} \\ = T_{queue,SQ} \times 0.5 \\ = 443 \times 0.5 \\ = 222 \end{gathered} \]
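The NPV arithmetic can be verified directly. The discount rate is not stated in the chain, but r = 3% over the 36-year queue-clearance period reproduces the $476B present value, so r below is an inference rather than a documented parameter.

```python
# NPV and cost-per-DALY check (r = 3% is assumed/backed out).

FUNDING_ANNUAL = 21.8e9        # $/year
T_QUEUE = 36                   # years: 443 / 12.3
R = 0.03                       # assumed discount rate

annuity = (1 - (1 + R) ** -T_QUEUE) / R        # ~21.8 years of value
npv_direct = FUNDING_ANNUAL * annuity          # ~$476B
dalys_max = 2.88e9 * 0.926 * 212               # ~565B DALYs
print(f"NPV ${npv_direct / 1e9:.0f}B -> ${npv_direct / dalys_max:.3f}/DALY")
```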

Comparison: Malaria bed nets cost $89/DALY. This framework operates at vastly greater scale while achieving competitive cost-effectiveness.

This costs 84 cents per year of healthy life saved. Mosquito nets cost $89. We give medals to the mosquito net people.


ROI Derivation

Result: 637:1

Step 1: Calculate benefits

  • Annual R&D savings: $58.6B
  • 10-year NPV of savings: $389B

Step 2: Calculate costs

  • 10-year NPV total cost: $611M

Step 3: Calculate ROI

\[ \begin{gathered} ROI_{RD} \\ = \frac{NPV_{RD}}{Cost_{dFDA,total}} \\ = \frac{\$389B}{\$611M} \\ = 637 \end{gathered} \]
where:
\[ \begin{gathered} NPV_{RD} \\ = \sum_{t=1}^{10} \frac{Savings_{RD,ann} \times \frac{\min(t,5)}{5}}{(1+r)^t} \end{gathered} \]
where:
\[ \begin{gathered} Savings_{RD,ann} \\ = Benefit_{RD,ann} - OPEX_{dFDA} \\ = \$58.6B - \$40M \\ \approx \$58.6B \end{gathered} \]
where:
\[ \begin{gathered} Benefit_{RD,ann} \\ = Spending_{trials} \times Reduce_{pct} \\ = \$60B \times 97.7\% \\ = \$58.6B \end{gathered} \]
where:
\[ \begin{gathered} Reduce_{pct} \\ = 1 - \frac{Cost_{pragmatic,pt}}{Cost_{P3,pt}} \\ = 1 - \frac{\$929}{\$41K} \\ = 97.7\% \end{gathered} \]
where:
\[ \begin{gathered} OPEX_{dFDA} \\ = Cost_{platform} + Cost_{staff} + Cost_{infra} \\ + Cost_{regulatory} + Cost_{community} \\ = \$15M + \$10M + \$8M + \$5M + \$2M \\ = \$40M \end{gathered} \]
where:
\[ \begin{gathered} Cost_{dFDA,total} \\ = PV_{OPEX} + Cost_{upfront,total} \\ = \$342M + \$270M \\ = \$611M \end{gathered} \]
where:
\[ PV_{OPEX} = OPEX_{total} \times \frac{1 - (1+r)^{-T}}{r} \]
where:
\[ \begin{gathered} OPEX_{total} \\ = OPEX_{ann} + OPEX_{DIH,ann} \\ = \$18.9M + \$21.1M \\ = \$40M \end{gathered} \]
where:
\[ \begin{gathered} Cost_{upfront,total} \\ = Cost_{upfront} + Cost_{DIH,init} \\ = \$40M + \$230M \\ = \$270M \end{gathered} \]
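The non-discounting parts of the ROI chain are directly checkable; the NPV savings and total cost are taken as given from the derivation.

```python
# Direct arithmetic from the ROI chain above.

COST_PRAGMATIC_PT = 929        # $/patient, pragmatic trial
COST_P3_PT = 41_000            # $/patient, traditional Phase 3
TRIAL_SPEND_ANNUAL = 60e9      # industry trial spending/year
NPV_SAVINGS = 389e9            # 10-yr NPV with 5-yr ramp (given)
COST_TOTAL = 611e6             # document's 10-yr NPV total cost

reduce_pct = 1 - COST_PRAGMATIC_PT / COST_P3_PT     # ~97.7%
benefit_annual = TRIAL_SPEND_ANNUAL * reduce_pct    # ~$58.6B/yr
roi = NPV_SAVINGS / COST_TOTAL                      # ~637:1
print(f"{reduce_pct:.1%} reduction, ${benefit_annual / 1e9:.1f}B/yr, "
      f"ROI {roi:.0f}:1")
```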

Verification Summary

| Metric | Value | Primary Inputs | Data Sources |
| --- | --- | --- | --- |
| Trial Capacity | 12.3x | Funding, trial costs | ADAPTABLE trial, ClinicalTrials.gov |
| Timeline Shift | 212 years | Efficacy lag, backlog model | FDA approval data, disease registry |
| Lives Saved | 10.7 billion deaths | Mortality rates, timeline | WHO GBD, mortality statistics |
| Cost/DALY | $0.842 | NPV costs, DALYs | ROM estimates, DALY calculations |
| ROI | 637:1 | Costs, savings | NPV analysis with 5-year ramp |

All parameters, confidence intervals, and Monte Carlo distributions are documented in Parameters and Calculations.

Key Analytical Assumptions

This analysis rests on several core assumptions that should be made explicit for academic transparency:

Linear Scaling Assumption

Assumption: Each additional dollar of trial funding produces proportional additional discoveries.

Justification: This is actually conservative - network effects in data aggregation and platform economics often produce increasing returns. We assume linear to avoid overstating benefits.

Sensitivity: If returns are sublinear (diminishing), health impact estimates would be reduced. However, as documented in Addressing the Returns Question, diminishing returns are unlikely when <1% of therapeutic space has been explored.

Adoption Rate Assumptions

Assumption: Framework adoption follows a 5-year ramp (20%, 40%, 60%, 80%, 100%) before reaching full capacity.

Justification: Based on historical technology adoption curves in healthcare (EHR adoption, telemedicine during COVID). The ramp is built into NPV calculations.

Sensitivity: Slower adoption delays benefits but doesn’t change eventual steady-state impact. NPV is reduced with slower adoption due to discounting.

Cost Reduction Assumptions

Assumption: Pragmatic trials cost $929/patient versus $41K/patient for traditional trials.

Justification: Based on ADAPTABLE trial ($929/patient) and systematic review of 64 pragmatic trials (median $97/patient). RECOVERY achieved $500/patient under exceptional NHS/COVID conditions.

Sensitivity: The tornado diagrams show ROI remains strongly positive even at 30% cost reduction (vs. baseline 97.7%).

Eventually Avoidable Mortality Assumption

Assumption: 92.6% of disease deaths are eventually avoidable with sufficient biomedical research.

Justification: Historical trend shows ~70% reduction in age-adjusted mortality since 1900. Most major disease categories have known biological mechanisms amenable to intervention. See Why “Eventually Avoidable” Matters.

Sensitivity: Health impact scales linearly with this assumption. At 46.3% avoidability (half the estimate), health benefits are halved. R&D savings are unaffected.

Counterfactual Baseline Specification

This cost-effectiveness analysis uses the status quo as the baseline counterfactual: current clinical trial infrastructure continues operating at current efficiency ($41K/patient) and capacity (1.9 million participants/year). Under this baseline, the $21.8B/year allocated to pragmatic trials would not exist.

Why status quo is the appropriate baseline:

  1. No comparable interventions exist: There is no competing proposal that would achieve similar trial cost reductions at scale
  2. Historical trend supports it: Trial costs have increased, not decreased, over the past 50 years (105x since 1962)
  3. Incremental improvements are marginal: Ongoing digitization efforts (DCT platforms, EHR integration) produce 10-20% efficiency gains, not the 97.7% reduction from pragmatic trial design

Alternative counterfactual scenarios:

  1. Organic efficiency improvement: Clinical trial costs decrease 2-3% annually through technology adoption. Under this scenario, the marginal impact of the framework is reduced by the amount of improvement that would occur anyway. At 3%/year organic improvement over 10 years, approximately 26% of the cost reduction would occur regardless, reducing the framework’s attributable benefit to ~74% of projections.

  2. Alternative government priorities: Funds are allocated to other health investments (NIH grants, hospital infrastructure, insurance subsidies). Each alternative use would require separate cost-benefit analysis. However, none of these alternatives address the core trial cost problem; they operate within the existing high-cost paradigm.

  3. Return to taxpayers: Funds are returned via tax cuts, enabling private consumption and investment. Under this scenario, the opportunity cost equals the weighted average return on private capital (approximately 3% annually). The framework ROI of 637:1 substantially exceeds this threshold.
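The 26% figure in scenario 1 is simple compounding of the assumed 3%/year organic cost reduction:

```python
# Scenario 1 check: 3%/year organic improvement compounded over
# 10 years accounts for ~26% of the cost reduction.

organic = 1 - 0.97 ** 10
print(f"{organic:.0%} occurs anyway; attributable share ~{1 - organic:.0%}")
```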

Methodological note: The analysis uses the neutral status quo baseline to avoid biasing results in either direction. Sensitivity analysis (tornado diagrams) demonstrates robustness across baseline assumptions.


Methodology Validation Against Accepted Benchmarks

This analysis uses standard health economics methodology identical to that employed by EPA, DOT, GiveWell, NICE, WHO-CHOICE, and CBO:

| Our Method | Equivalent Standard | Institution Using It |
| --- | --- | --- |
| Value of Statistical Life ($10M) | VSL for regulatory impact | EPA, DOT, FDA |
| Cost per DALY ($0.842/DALY) | ICER thresholds | GiveWell, NICE, WHO-CHOICE |
| Monte Carlo uncertainty propagation | Probabilistic sensitivity analysis | ICER, Cochrane, HTA agencies |
| NPV with discount rate | Standard cost-benefit analysis | CBO, OMB Circular A-94 |
| Long-horizon cumulative impact | Social cost of carbon | EPA, IPCC, Stern Review |

The large headline figures reflect the cumulative value of permanent infrastructure, the same methodology used to value smallpox eradication ($300M program valued at total future lives saved) and climate economics (multi-trillion dollar damage estimates that exceed annual GDP). Our 95% confidence intervals span nearly an order of magnitude, wider than most published health economics studies.

Appendix Calculation Frameworks and Detailed Analysis

This appendix provides the detailed models and data used in the cost-benefit analysis.

Calculation Framework - NPV Methodology

Uses 10-year NPV horizon (standard business practice). See Verification: Complete Derivation Chains for full methodology.

Financial Analysis Summary

Health Impact Uncertainty Analysis

The Monte Carlo distributions below show the range of health impact estimates across 10,000 simulations, accounting for uncertainty in timeline shift, mortality rates, and avoidable percentages:

Lives Saved Distribution:

Monte Carlo Distribution: Total Lives Saved from Elimination of Efficacy Lag Plus Earlier Treatment Discovery from Higher Trial Throughput (10,000 simulations)


Simulation Results Summary: Total Lives Saved from Elimination of Efficacy Lag Plus Earlier Treatment Discovery from Higher Trial Throughput

| Statistic | Value |
| --- | --- |
| Baseline (deterministic) | 10.7 billion |
| Mean (expected value) | 11.7 billion |
| Median (50th percentile) | 11.7 billion |
| Standard Deviation | 2.45 billion |
| 90% Range (5th-95th percentile) | [7.4 billion, 16.2 billion] |

The histogram shows the distribution of Total Lives Saved from Elimination of Efficacy Lag Plus Earlier Treatment Discovery from Higher Trial Throughput across 10,000 Monte Carlo simulations. The CDF (right) shows the probability of the outcome exceeding any given value, which is useful for risk assessment.
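A minimal sketch of this style of simulation is below. The distribution families and spreads are illustrative assumptions for demonstration only; the documented parameter distributions live in Parameters and Calculations.

```python
# Illustrative Monte Carlo sketch: propagate uncertainty in the
# three lives-saved inputs. Distributions are assumed, not the
# document's actual parameter distributions.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

timeline = rng.normal(212, 30, N)             # years (assumed sigma)
deaths_daily = rng.normal(150_000, 10_000, N) # deaths/day (assumed sigma)
avoidable = rng.uniform(0.85, 0.95, N)        # assumed range

lives_b = deaths_daily * 365 * avoidable * timeline / 1e9  # billions
p5, p50, p95 = np.percentile(lives_b, [5, 50, 95])
print(f"median {p50:.1f}B, 90% range [{p5:.1f}B, {p95:.1f}B]")
```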

Economic Value Distribution:

Monte Carlo Distribution: Total Economic Benefit from Elimination of Efficacy Lag Plus Earlier Treatment Discovery from Higher Trial Throughput (10,000 simulations)


Simulation Results Summary: Total Economic Benefit from Elimination of Efficacy Lag Plus Earlier Treatment Discovery from Higher Trial Throughput

| Statistic | Value |
| --- | --- |
| Baseline (deterministic) | $84.8 quadrillion |
| Mean (expected value) | $87.8 quadrillion |
| Median (50th percentile) | $92.9 quadrillion |
| Standard Deviation | $11.5 quadrillion |
| 90% Range (5th-95th percentile) | [$62.4 quadrillion, $97.3 quadrillion] |

The histogram shows the distribution of Total Economic Benefit from Elimination of Efficacy Lag Plus Earlier Treatment Discovery from Higher Trial Throughput across 10,000 Monte Carlo simulations. The CDF (right) shows the probability of the outcome exceeding any given value, which is useful for risk assessment.

Cost-Utility Framework

We present a cost-utility analysis using the quality-adjusted life years (QALYs) and disability-adjusted life years (DALYs) metrics. This approach is the US and global standard for evaluating the value of health interventions143.

  • QALY: One year of life in perfect health. Gains are calculated as:

    \[ \text{QALYs Gained} = (Q_1 \times T_1) - (Q_0 \times T_0) \]

    Where \(Q_0\)/\(Q_1\) = quality of life (0-1) before/after, \(T_0\)/\(T_1\) = years of life before/after.

  • Cost-Effectiveness: A decentralized FDA achieves cost-effectiveness through dual pathways:

    1. R&D Savings: $58.6B+ annual savings from 97.7% trial cost reduction
    2. Health Gains: 565 billion DALYs averted from the full timeline shift (~212 years from 12.3x trial capacity + efficacy lag elimination)

    This combination creates a dominant intervention: simultaneously saves money and improves health outcomes.

  • US Willingness-to-Pay Threshold: Typically $100,000–$150,000 per QALY for interventions that add costs. Dominant interventions that both save money and improve health are favorable regardless of this threshold.

  • Sources for Context:

    • QALY methodology and standards143: “The quality-adjusted life year (QALY) is the academic standard for measuring how well all different kinds of medical treatments lengthen and/or improve patients’ lives…”
    • Health economic evaluation94: Standard health economic analysis considers cost-effectiveness across intervention types.
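A worked instance of the QALY-gain formula above, using hypothetical patient values for illustration:

```python
# Hypothetical example of the QALY-gain formula (values invented).

def qalys_gained(q0, t0, q1, t1):
    """(Q1*T1) - (Q0*T0): quality-weighted life-years added."""
    return q1 * t1 - q0 * t0

# Untreated: 10 years at 0.5 quality; treated: 16 years at 0.75.
print(qalys_gained(0.5, 10, 0.75, 16))   # -> 7.0
```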

QALY Benefit Streams Breakdown

The total DALY impact (565 billion DALYs) derives from three distinct benefit streams with varying levels of empirical support:

A. Accelerated Development of Existing Pipeline Drugs

Health gains from bringing effective treatments to patients faster through shortened development and approval timelines:

  • Baseline: Research shows treatment delays significantly increase mortality. Cancer studies indicate approximately 10% increased mortality risk per month of delay.
  • Mechanism: 12.3x trial capacity reduces average development time
  • Confidence: High. Well-documented costs of delayed access (84,000 life-years lost per year of delay in cancer therapies alone)

B. Improved Preventative Care via Real-World Evidence

Value of using comprehensive data to optimize preventative care and treatment effectiveness:

  • Baseline: Cancer screenings alone have saved millions of life-years; significant untapped potential remains
  • Mechanism: Large-scale identification of at-risk populations and real-world effectiveness measurement enables personalized prevention
  • Confidence: Medium. Preventative care benefits are well-established, but scale of improvement from comprehensive data remains uncertain

C. Enabling Research for Previously Untreatable Diseases

Transformative potential to create viable research pathways for conditions ignored due to high trial costs:

  • Baseline: 7,000+ rare diseases, 95% of which lack FDA-approved treatments
  • Mechanism: Radically lower per-patient costs ($929 vs $41K) make rare disease R&D economically feasible
  • Confidence: Lower. Economic viability of rare disease research is theoretically sound but empirically unproven at scale

Conservative Approach: Base case uses median estimates across all three streams; sensitivity analysis demonstrates positive returns even when using R&D savings alone (637:1 ROI independent of health benefits).

Three different ways this saves 565 billion years of human suffering. We made them into a pie chart.


DALY Sensitivity Analysis

The following auto-generated sensitivity analyses show how cost-effectiveness varies based on uncertainty in input parameters. These use Monte Carlo simulation with uncertainty propagation from parameter distributions.

Key DALY Outcomes:

Sensitivity Indices for Total DALYs from Elimination of Efficacy Lag Plus Earlier Treatment Discovery from Higher Trial Throughput

Regression-based sensitivity showing which inputs explain the most variance in the output.

| Input Parameter | Sensitivity Coefficient | Interpretation |
| --- | --- | --- |
| dFDA Average Total Timeline Shift (years) | 0.8999 | Strong driver |
| Eventually Avoidable DALY Percentage (percentage) | 0.4866 | Moderate driver |
| Global Annual DALY Burden (DALYs/year) | 0.0432 | Minimal effect |

Interpretation: Standardized coefficients show the change in output (in SD units) per 1 SD change in input. Values near ±1 indicate strong influence; values exceeding ±1 may occur with correlated inputs.

Comparative Cost-Effectiveness - A Decentralized Framework vs Other Interventions

To provide context for the impact of a global pragmatic trial system’s infrastructure, the chart below visualizes its cost-effectiveness against other well-understood public health programs. The metric used is Quality-Adjusted Life Years (QALYs) Gained per $1 Million of Spending. A higher number signifies greater cost-effectiveness.

For standard interventions, this value is calculated as $1,000,000 / ICER, where the ICER (Incremental Cost-Effectiveness Ratio) is the cost to gain one QALY. For dominant interventions that are both more effective and less expensive, the ICER is negative, and this metric isn’t strictly applicable. For these cases, an illustrative range is used to represent their high value.

All data used in the chart is derived from the table and sources below.

How much healthy life you buy per million dollars spent on various health things. Some are better than others. We fund the others.


The following table provides the data and sources that support the chart. The list is ordered to match the chart’s presentation.

| Intervention | QALYs Gained per $1M Spending¹ | Typical ICER Range (Cost per QALY Gained) | Classification | Source / Evidence |
| --- | --- | --- | --- | --- |
| Decentralized Framework for Drug Assessment | Dominant | Cost-Saving + Health Gain | Dominant | This analysis’s Sensitivity Analysis. Based on $18.9M-$40M annual costs generating 565 billion DALYs from ~212 years timeline shift. |
| Smallpox Eradication | 100,000+³ | Dominant (Cost-Saving) | Dominant | The $300M program (1967-1980) prevents 5M annual deaths. Benefit-cost ratio exceeds 100:1. Standard ICER calculation is impractical due to its uncommon scale. (WHO, 2010144) |
| Childhood Vaccinations | 22+³ | Often Dominant to ~$100,000 | Dominant / Highly Cost-Effective | CDC estimates routine childhood vaccinations prevent 32M hospitalizations and 1.1M deaths among 1994-2023 US birth cohorts, with $2.9T in societal cost savings. (CDC, 2023145) |
| Clean Water Programs | 100 | ~$1,000 - $10,000 | Highly Cost-Effective | WHO estimates household water treatment costs $100-$500/DALY averted. Community water supply improvements cost $200/DALY. (WHO, 2004146) |
| Hypertension Screening | 30 - 50 | ~$20,000 - $33,000 | Highly Cost-Effective | Recent US studies show pharmacist-led hypertension management has ICERs in the $20,000-$33,000 range per QALY gained, falling within standard willingness-to-pay thresholds. (JAMA Netw Open, 2023147) |
| Generic Drug Substitution | +³ | Dominant (Cost-Saving) | Dominant | By definition cost-saving when therapeutic equivalence is maintained, with typical savings of 30-80% versus brand-name drugs. (WHO, 2015148) |
| Statins / Polypill | 67+³ | Cost-Saving to ~$15,000 | Dominant / Highly Cost-Effective | Cost-saving in high-risk populations. ICERs range from dominant to $15k/QALY in lower-risk groups. (eClinicalMedicine, 2022149) |
| Pragmatic Trials (RECOVERY model) | ~250,000 | $4/QALY | Highly Cost-Effective | UK RECOVERY trial: $20M spent, saving 1 million lives globally via dexamethasone discovery. 44.1x cheaper per patient than traditional Phase 3 trials. (Note: RECOVERY’s $500/patient benefited from NHS infrastructure; ADAPTABLE achieved $929/patient in US settings.) |
| NIH Standard Research Portfolio | ~20 | $50K/QALY | Inefficient Baseline | Standard NIH-funded research. Represents current status quo efficiency.69 |

Methodology Notes

¹ QALYs per $1M Calculation:

  • For a decentralized framework: (Annual QALYs Gained) / (Annual Cost in Millions)
  • Ranges reflect conservative to optimistic scenarios accounting for parameter uncertainties

² Cost-Dominant Interpretation:

  • All scenarios for the framework show extremely low cost per DALY while generating net economic benefits that exceed costs
  • The framework is “dominant” - more effective and less costly than the status quo

³ Dominant Interventions:

  • For cost-saving (dominant) interventions, standard QALY/$1M calculations are not applicable
  • Values shown are illustrative to demonstrate relative cost-effectiveness
  • Upper bounds represent the exceptional value of these interventions

Data Limitations

  • Historical interventions (e.g., smallpox) use retrospective analyses
  • Direct comparisons between interventions should consider contextual differences
  • All costs are in 2023 USD, adjusted using appropriate health inflation indices
  • QALY calculations use standard health state utility weights where available

Comparison to Other Major Public Investments

To provide context for the estimated costs of a global pragmatic trial system, it is useful to compare them to other significant U.S. government investments in health and technology. The projected ‘Lean Ecosystem’ cost for the framework of approximately $40M per year (covering Core framework operations plus medium-scope broader initiatives) is modest in comparison to other major federal projects.

| Initiative / Project | Approximate Cost / Budget (Annualized) | Comparison to Framework’s Annual Cost | Source / Note |
| --- | --- | --- | --- |
| Decentralized Framework (Lean Ecosystem) | ~$40M / year | 1x (Baseline) | This analysis |
| Cancer Moonshot Initiative | ~$257 Million / year150 ($1.8B over 7 years) | ~6.4x | 21st Century Cures Act151 |
| NIH “All of Us” Research Program | ~$500M / year (FY23 Approx. Budget) | ~12.5x | NIH Budget49 |
| HealthCare.gov (Initial Build) | ~$1.7 - $2.1 Billion152 (Total Upfront Cost) | ~42x - 52x (of one year’s cost) | GAO Reports / Public Reporting152 |
| National Cancer Institute (NCI) | ~$7.2 Billion / year153 (FY25 Budget) | ~180x | NCI Budget Data153 |

Translation for Budget Committees: The estimated annual cost of this initiative is an order of magnitude smaller than the budgets for other major national health priorities like the “All of Us” program or the Cancer Moonshot. It represents approximately 0.55% of the NCI’s annual budget (calculated from the system’s annual cost and NCI budget153). This comparison underscores that such an infrastructure is not only a high-leverage investment (due to its massive ROI) but also a remarkably cost-effective one relative to the scale of federal health and technology spending.

Why This Differs from Failed Megaprojects

Large-scale interventions face legitimate skepticism. The development economics literature documents numerous failures: infrastructure megaprojects that exceed budgets by 50-100%, foreign aid programs with negative or negligible returns, and “grand challenges” that fail to materialize promised benefits.

Why giant health IT projects fail (they’re giant) versus why small connected systems work (they’re small).


This intervention differs in five critical ways:

  1. Proven Technology: Unlike speculative moonshots, pragmatic trials using existing EHR infrastructure have been validated. The RECOVERY trial enrolled 47,000+ patients at $500/patient. ADAPTABLE achieved $929/patient in routine US healthcare settings. This isn’t “we hope this works” - it’s “we’ve proven this works, now scale it.”

  2. Marginal Extension, Not Novel System: The framework extends existing clinical trial infrastructure rather than replacing it. Hospitals already have EHRs. Patients already take medications. We’re adding a coordination layer and outcome tracking, not building from scratch.

  3. Self-Correcting Feedback: Unlike infrastructure projects where failures compound, the protocol has built-in error correction. If a treatment doesn’t work, the data shows it. If costs exceed projections, we can adjust scope. The framework generates its own performance metrics.

  4. Historical Precedent: Smallpox eradication (280:1 ROI) and childhood vaccination programs demonstrate that systematic health interventions can achieve extraordinary returns. The difference: those targeted specific diseases. This targets the discovery process itself, potentially even higher leverage.

  5. Modern Infrastructure Makes This Possible Now: The convergence of electronic health records (major EHR systems covering 57% of US hospitals), consumer wearables (billions of devices tracking health metrics), federated data networks (querying 300M+ patient records without moving data), and AI-powered analysis enables systematic outcome tracking at scale that wasn’t feasible even a decade ago. This isn’t speculation about future technology - it’s deployment of existing, proven infrastructure.