HIMSS 2026 felt different.
Not because there was more AI. There was plenty of that.
But because the conversation shifted.
This year wasn’t about whether healthcare should use AI. That debate is over. The real question now is where AI is actually working, what it’s worth, and how to scale it without creating new problems.
Across keynotes, panels, and conversations on the floor, a few themes came through clearly.
This post is a recap of what actually mattered from HIMSS 2026, and what it means for the work ahead.
One of the clearest signals from HIMSS was that AI is moving out of experimentation and into real operational environments.
Organizations like Mass General Brigham are not just testing AI tools. They’re building structured environments to use them safely. That includes governance, security, workforce training, and controlled access to models.
That shift matters.
Because the challenge is no longer access to AI. It’s the ability to operationalize it across workflows without introducing risk or chaos.
We see this every day in our work.
At HIMSS, I shared how much of the friction in healthcare comes from the way decisions are made, especially in payer organizations. Critical logic is buried in documents like policies, contracts, and benefit definitions.
“If I bring up a policy… it’s 10 pages long. If we all read it, we might come up with different responses. And that’s a problem.”
That inconsistency is one of the root causes of inefficiency, delays, and frustration across the system.
AI becomes powerful when it helps turn that fragmented logic into something consistent, repeatable, and scalable.
Another theme that stood out: not everyone is in the same place.
Some organizations are still early. Running pilots. Exploring use cases.
Others are investing heavily, but not always in ways that connect to outcomes.
And a smaller group is already seeing measurable impact.
That gap is growing.
At HIMSS, this showed up in everything from vendor messaging to executive panels. Some conversations were still about what AI could do. Others were about what it already is doing.
This creates a new kind of risk.
Not just moving too slowly. But moving quickly in the wrong direction.
The organizations that win won’t be the ones that adopt AI fastest. They’ll be the ones that connect it to real workflows and measurable results.
If there was one word that came up over and over again at HIMSS, it was ROI.
Leaders are under pressure to show results from their technology investments. Not just activity. Not just innovation. Results.
That’s changing the conversation.
Vendors are talking less about features and more about outcomes.
This aligns directly with what we see in the field.
Healthcare doesn’t need more technology for the sake of it. It needs fewer people doing repetitive work, faster decisions, and more consistent outcomes.
In our work with payers, we focus on one core idea:
Start with the decisions that drive cost and friction.
Then work backwards.
“Same source driving every decision… policies, benefits, contracts… how do we make that executable at scale?”
When you unlock that layer, you don’t just solve one use case. You unlock many.
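To make "executable at scale" concrete, here is a minimal, hypothetical sketch. Everything in it is illustrative, not our actual product or any payer's real policy: the idea is that a rule buried in a 10-page document becomes one function, so every reviewer gets the same answer from the same source.

```python
# Hypothetical illustration: a prior-authorization rule that would
# otherwise live in a long policy PDF, encoded as executable logic.
from dataclasses import dataclass

@dataclass
class AuthRequest:
    procedure_code: str
    member_age: int
    prior_conservative_therapy_weeks: int

def requires_prior_auth(req: AuthRequest) -> bool:
    """One rule, one source of truth: imaging needs prior auth unless
    the member completed 6+ weeks of conservative therapy (illustrative)."""
    if req.procedure_code.startswith("7"):  # imaging code range (illustrative)
        return req.prior_conservative_therapy_weeks < 6
    return False

# Two reviewers, one request, one answer: no room for interpretation.
req = AuthRequest("72148", 54, 4)
print(requires_prior_auth(req))  # -> True
```

The point is not this particular rule; it is that once the logic lives in code rather than prose, "if we all read it, we might come up with different responses" stops being possible.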
Another major shift: governance is no longer a side conversation.
At HIMSS, there was a clear recognition that AI introduces new risks.
Organizations are starting to treat governance as part of the foundation, not an afterthought.
And that’s exactly right.
Because scaling AI without governance creates more problems than it solves.
This is where many initiatives stall.
It’s not the model that fails. It’s the lack of structure around it.
Who can use it?
What data can it access?
How are decisions monitored?
How do you ensure consistency?
These are not technical details. They are operational requirements.
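Those four questions can be answered in code rather than in a policy binder. A minimal sketch, with all names and rules invented for illustration (not a specific product's API):

```python
# Hypothetical sketch of "structure around the model": an access layer
# that answers the governance questions above in code.
import time

ALLOWED_ROLES = {"care_manager", "utilization_reviewer"}   # who can use it?
ALLOWED_SOURCES = {"policy_docs", "benefit_definitions"}   # what data can it access?
audit_log = []                                             # how are decisions monitored?

def call_model(prompt: str) -> str:
    return "stubbed response"  # stand-in so the sketch runs end to end

def governed_query(user_role: str, data_source: str, prompt: str) -> str:
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{user_role}' is not approved for AI use")
    if data_source not in ALLOWED_SOURCES:
        raise PermissionError(f"data source '{data_source}' is out of scope")
    # Every call leaves the same logged record: consistency is auditable.
    audit_log.append({"ts": time.time(), "role": user_role,
                      "source": data_source, "prompt": prompt})
    return call_model(prompt)
```

None of this is sophisticated, and that is the point: the failures described above come from skipping this layer, not from the model itself.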
One of the most thought-provoking ideas from HIMSS was this:
At what point does it become unethical not to use AI?
For years, the focus has been on the risks of AI.
But now there’s another side.
If AI can improve diagnosis, reduce delays, and support better decisions, what happens if it’s not used?
At what point does that become a gap in care?
Leaders like John Halamka pointed out that AI-augmented care may soon become part of the standard.
That’s a meaningful shift.
It moves the conversation from optional adoption to expected capability.
The most powerful moment from HIMSS didn’t come from a vendor or a panel.
It came from the closing keynote.
Jeremy Renner shared his experience recovering from a near-fatal accident.
Multiple surgeries. Multiple care teams.
And one consistent challenge:
Coordination.
Different doctors ordering the same tests. Teams not communicating. Data not flowing between care settings.
Even when the care was excellent, the system still felt fragmented.
That’s the reality behind all of this.
AI can help. But only if the system itself works better.
Interoperability, data sharing, and workflow alignment are still foundational problems.
If there’s one takeaway from HIMSS 2026, it’s this:
Healthcare is moving from AI experimentation to AI execution.
The opportunity now is not to add more tools.
It’s to make the system work better.
At Productive Edge and Boost Health AI, our focus is simple.
We believe that starts by addressing one of the most overlooked problems in healthcare:
How decisions are made.
Not just building AI on top of workflows, but rewiring the logic that drives those workflows.
“Don’t start with the use case. Start with the documents that inform the decisions… unlock those, and you unlock everything downstream.”
That’s how you create leverage.
That’s how you move from isolated pilots to real impact.
And that’s how we start to make meaningful progress on the problems we all came to HIMSS to solve.
If you had to improve one workflow in your organization this year using AI and better data flow…
What would it be?
And what’s actually stopping you?