What the Smartest Healthcare Teams Are Doing With AI Right Now
Every week in 2025, I talked with people who are actually building and using AI in healthcare. Not pitch decks. Not future visions. Real work, inside real organizations, with real constraints.

After 17 episodes of Health/Tech Edge, a pattern became hard to ignore. The teams making progress are not chasing novelty. They’re focused on friction. They’re putting AI into the flow of work. And they’re building trust early enough that solutions can actually scale.

Here’s what stood out across the season, and what it tells us about where healthcare AI is headed next.

AI is finally moving from insight to action

For years, AI in healthcare was mostly about insight. Dashboards. Predictions. Risk scores. Useful, but incomplete.

In 2025, that started to change. Several guests described AI agents that don’t just surface information, but help teams act on it.

Jessica Richmond from Google Cloud talked about agentic experiences becoming the front door for how teams interact with data and systems. Her point was simple. Once AI is embedded in the workflow and actually saves time, adoption spreads fast. Organizations need to be ready for that moment, not surprised by it.

That same shift showed up in conversations with Toryn Slater from Qventus, who described AI operational assistants that coordinate work across hospital teams, and Murali Gandhirajan from Snowflake, who framed agents as the next layer sitting on top of a strong data foundation.

The common thread was clear. Insight is no longer the finish line. Action is.

Governance is not slowing teams down. It’s helping them scale

As agents become more capable, the stakes go up. That came through clearly in our conversations on governance.

In multiple episodes with Raheel Retiwalla, including a full deep dive on AI governance, we talked about why agents change the risk profile. A bad dashboard is annoying. A bad agent can trigger a denial, send the wrong outreach, or influence care decisions.

Shannon Kennedy added an important perspective here. Healthcare moves cautiously for a reason. Responsible AI practices aren’t red tape. They’re what allow organizations to move faster with confidence, especially in regulated environments.

The teams getting this right are not waiting until after a pilot to think about governance. They’re profiling risk up front, protecting PHI, monitoring outputs, and defining when humans stay in the loop. That clarity is what allows pilots to grow instead of stall.

Data readiness still decides who wins

Even in a year dominated by talk of agents, the basics never went away.

In my conversations with Ryan Leurck and Murali Gandhirajan, the message was consistent. Agents assume the data beneath them is clean, connected, and trusted. If it’s not, they don’t fix the problem. They amplify it.

What has changed is who can use data. AI lowers the barrier. You no longer need to be a specialist to ask good questions. But that only works if the underlying data layer has already been built right.

The takeaway is not new, but it’s more urgent. Agents raise the ceiling, but they also raise the cost of weak foundations.

The best teams are augmenting people, not replacing them

No one I spoke with this year was selling an AI replacement story. The real progress was about giving people their time back.

Ben Tasker emphasized that adoption lives or dies on trust. If AI feels like a black box second-guessing clinicians or operators, it won’t stick. If it supports judgment and reduces cognitive load, it spreads naturally.

That theme showed up again with Chris Darland, who talked about explainability in clinical AI. Clinicians don’t need magic. They need to understand why a model surfaced a recommendation so they can apply it in context.

Across operations and clinical care, the message was consistent. AI works best as a teammate, not a referee.

The fastest ROI is still operational

If there was one area everyone agreed on, it was this. The biggest near-term wins are operational.

Michael Docktor from Dock Health described task management as a hidden root cause of burnout. Healthcare still runs on fragmented handoffs and mental sticky notes. Making work visible and trackable unlocks immediate productivity gains.

Aqeel Shahid from IntelePeer shared how conversational AI is helping practices answer every call, reduce no-shows, and resolve billing questions. These aren’t flashy use cases, but they remove friction patients and staff feel every day.

Adam Block reinforced the broader lesson. The highest value starting points are often the least glamorous. Fix the work that slows people down, prove value fast, then build from there.

Revenue cycle pressure is accelerating AI adoption

One area where urgency was especially clear was revenue cycle.

Michael Riley from Gabeo framed denial management as an escalating operational arms race. Payers are getting more complex. Rules are changing. Manual processes can’t keep up. His view was direct. Scalable AI agents are becoming necessary just to stay afloat, not as a future optimization.

This wasn’t about experimentation. It was about survival in an environment where administrative complexity keeps rising.

Personalization and prevention are starting to scale

Some of the most hopeful conversations this year focused on care beyond the clinic.

Kirsten Karchmer from Conceivable described an AI-supported virtual care team model in women’s health and fertility. Always-on, personalized support at lower cost changes who can access care in the first place.

James Wallace brought a precision medicine lens to the same idea. Better synthesis of complex signals enables earlier intervention and more personalized care paths, moving healthcare from reactive treatment to prevention.

These conversations pointed to the long arc. AI that helps people earlier, more personally, and more consistently.

Healthcare doesn’t need to reinvent everything

A quieter but important theme came from Makoto Kern, who reminded us that healthcare can borrow proven playbooks from other industries. Product design, autonomy, and trust in human-machine systems are not new problems. Healthcare just experiences them under higher stakes.

Learning sideways instead of starting from scratch can accelerate progress without cutting corners.

Heading into 2026

After 17 episodes and 15 guests, here’s what feels clear to me.

Healthcare AI is getting practical. The teams making progress are focused on real workflows, measurable value, and trust from day one. They’re not waiting for perfect conditions. They’re building responsibly and learning fast.

There’s still hard work ahead. Data is messy. Change is slow. Trust takes time. But the direction is right, and the people doing this work are more grounded and more focused than the market was even a year ago.

Thanks to every guest who shared what they’re building and what they’ve learned. And thanks to everyone listening. I’m excited to keep these conversations practical and keep learning together in 2026.
