The 2025 JAMA AI Report: A Wake-Up Call for Healthcare Leaders

A new JAMA Special Communication from Derek C. Angus, MD, MPH, Rohan Khera, MD, MS, Tracy Lieu, MD, MPH, and colleagues delivers one of the most grounded takes on artificial intelligence in healthcare so far. It’s not a technical paper. It’s a blueprint for how to move forward responsibly.

The authors open with a line that captures both the urgency and the opportunity:

“AI will disrupt every part of health and health care delivery in the coming years. Given the many long-standing problems in health care, this disruption represents an incredible opportunity. However, the odds that this disruption will improve health for all will depend heavily on the creation of an ecosystem capable of rapid, efficient, robust, and generalizable knowledge about the consequences of these tools on health.”

That last part — rapid, efficient, robust, and generalizable — could just as easily describe what healthcare leaders have been trying to build all along. The difference now is that AI forces the issue.

Not Just Better Care — Better Operations

Too often, AI gets framed as a clinical tool. But this report makes it clear that its impact will be much broader. It’s not just about improving patient outcomes. It’s also about improving how healthcare works — from scheduling and documentation to claims, authorizations, and resource planning.

That’s exactly what we’ve seen in our work at Productive Edge. Most of the early AI wins in healthcare haven’t come from diagnosing disease. They’ve come from automating administrative work, coordinating tasks, and surfacing insights that help people do their jobs faster and more accurately.

The JAMA authors are pointing to the same reality: the real transformation will come when AI is built into the infrastructure of healthcare, not bolted on as a novelty.

The Case for a Learning System

The paper argues that AI isn’t static — it’s a living system that learns and changes. That means evaluation can’t be one-and-done. It has to be continuous.

We’ve said the same thing all year as we’ve built Boost Health AI and the AI Agents for Healthcare framework. Static models won’t work in dynamic environments. Healthcare organizations need feedback loops that measure performance in the real world, retrain when needed, and show whether the technology is actually improving outcomes or efficiency.
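To make that idea concrete, here is a minimal, illustrative sketch of what one piece of such a feedback loop can look like in code: compare live performance on adjudicated cases against the baseline established at validation, and flag the model for review or retraining when it drifts. The record layout, thresholds, and function names are hypothetical assumptions for illustration only; they are not drawn from the JAMA report or from any specific product.

```python
# Illustrative sketch of a continuous-evaluation check (assumptions:
# record layout, thresholds, and names are hypothetical). A real
# deployment would read from production logs and label feeds and push
# results into governance dashboards and retraining pipelines.

from dataclasses import dataclass
from statistics import mean


@dataclass
class ScoredCase:
    prediction: int  # model output (e.g., 1 = flagged for review)
    outcome: int     # ground truth captured later from the workflow


def accuracy(cases: list[ScoredCase]) -> float:
    """Share of cases where the model matched the observed outcome."""
    return mean(1.0 if c.prediction == c.outcome else 0.0 for c in cases)


def evaluate_window(recent: list[ScoredCase],
                    baseline_accuracy: float,
                    tolerance: float = 0.05) -> str:
    """Compare live performance against the validated baseline.

    Returns a simple action string; a production system would route
    this to monitoring, governance review, and retraining triggers.
    """
    current = accuracy(recent)
    if current < baseline_accuracy - tolerance:
        return f"DEGRADED ({current:.2%}): trigger review and retraining"
    return f"OK ({current:.2%}): keep monitoring"


# Example: a weekly batch of adjudicated cases vs. a 92% validation baseline.
recent_cases = [ScoredCase(1, 1), ScoredCase(0, 0), ScoredCase(1, 0), ScoredCase(1, 1)]
print(evaluate_window(recent_cases, baseline_accuracy=0.92))
```

The specific metric and threshold matter less than the habit the loop enforces: performance is measured against real outcomes on a schedule, and degradation has a defined, accountable response.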

The authors describe this as building a learning health ecosystem. We call it operationalizing AI responsibly. The language differs, but the goal is identical: use data, oversight, and feedback to make AI better over time.

Governance as Infrastructure

One of the report’s strongest messages is that AI governance can’t be an afterthought. The authors recommend treating AI like any medical intervention — with defined oversight, monitoring, and accountability.

That aligns perfectly with what we’ve been helping clients establish this year: AI governance frameworks that balance innovation with safety, compliance, and transparency. These frameworks don’t slow progress down; they enable it. Once clinicians, compliance officers, and executives know that governance is in place, adoption accelerates.

As we’ve learned, the fastest way to build confidence in AI isn’t a better demo. It’s better governance.

Augmentation, Not Automation

The report draws a sharp distinction between artificial and augmented intelligence. That distinction matters.

Healthcare doesn’t need systems that make decisions for people. It needs systems that make people more effective. The authors emphasize that AI should amplify, not replace, human judgment — and that performance must be measured by how well humans and machines work together.

That mirrors how we’ve designed our AI Agent Accelerators. Whether it’s a care manager, utilization reviewer, or claims processor, the goal isn’t to remove the human. It’s to take away the noise — the manual searches, the copy-paste work, the buried data — so people can focus on decisions that matter.

Integration: The Real Test

The JAMA team also calls out a familiar problem: most AI fails not because the models are bad, but because they’re disconnected from the workflow.

That line could have been written by anyone working inside a hospital or health plan right now. We’ve seen it firsthand — great tools that never reach production because they require people to switch systems, change habits, or trust data they can’t see.

The report argues for designing AI around the user and the workflow. That’s exactly why we’ve focused on embedding AI into existing ecosystems — not replacing EHRs, CRMs, or data platforms, but enhancing them.

Efficiency as a Health Outcome

While the authors focus on improving health, their definition of “health” is broad. They connect it to efficiency, equity, and sustainability — all things that drive system-level performance.

That’s where the alignment with Productive Edge is clearest. We’ve said for months that efficiency is a health outcome. When care teams spend less time on manual work, when claims move faster, when prior authorizations are automated, patients get what they need sooner. Everyone wins.

The JAMA report reinforces that efficiency, done right, is part of improving care. It’s not just an operational metric — it’s a quality one.

Preparing for 2026

For healthcare leaders, this paper reads like a call to get serious. The experimentation phase is ending. The accountability phase is here.

The organizations that will lead in 2026 are the ones that:

  • Build AI governance early and treat it as a core capability.

  • Focus on augmentation and workflow fit, not flashy automation.

  • Measure both care impact and operational efficiency.

  • Create learning systems that evolve as models and data change.

The JAMA authors have given healthcare a roadmap. It looks a lot like the one many of us have already started following — grounded, cautious, and practical.

A Shared Direction

The report’s closing message mirrors what we’ve been hearing from clients across the country: AI will change everything, but progress depends on trust, measurement, and structure.

At Productive Edge, we’re encouraged to see academic leaders echo what health system leaders are already living — that AI is both an operational tool and a care improvement catalyst. The opportunity isn’t just to use AI; it’s to use it responsibly, at scale, and for everyone’s benefit.

That’s not hype. That’s the real work ahead.
