Forrester’s new Technology and Security Predictions report offers a useful look at where AI and enterprise technology are heading. The themes speak directly to the pressure healthcare leaders feel as AI adoption accelerates and expectations rise. Many organizations want a clearer path from pilots to real operational value.
Healthcare sits at the center of this shift. Administrative work keeps growing, margins stay tight, and leaders are trying to use AI to relieve pressure without adding risk or complexity. The predictions in this report help explain why some efforts stall and what it will take to move forward with confidence.
1. AI failures will land back on the CIO
Forrester predicts that a quarter of CIOs will be asked to step in and rescue AI projects that were started in the business but failed to deliver.
This is familiar territory for healthcare. AI experiments often start within departments that are trying to solve immediate pain points. Care management teams test assistants. Revenue cycle teams try tools for summarizing or sorting claims. Digital groups experiment with chatbots or outreach models.
But without shared guidelines for risk, data quality, and system integration, many of these projects hit natural limits. They can’t scale. They create new operational work. Or they simply produce outputs that teams don’t trust.
What healthcare leaders can take from this:
AI needs a basic structure around it. Not heavy bureaucracy—just clear agreement on how tools are selected, how they connect to source systems, how performance is checked, and who owns the outcome. The more consistent that structure is, the easier it becomes to expand from early pilots to real adoption.
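One lightweight way to make that agreement concrete is to keep a short record for every AI effort. The sketch below is a minimal illustration in Python; the field names and the example pilot are assumptions for illustration, not a standard template.

```python
# A minimal sketch of an AI project "intake record" -- the fields and the
# example below are illustrative assumptions, not a prescribed framework.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIProjectRecord:
    """Captures the basic agreements a pilot needs before it scales."""
    name: str                        # what the tool does
    business_owner: str              # who owns the outcome
    technical_owner: str             # who owns integration and upkeep
    source_systems: list[str] = field(default_factory=list)  # where data comes from
    success_metric: str = ""         # how performance is checked
    next_review: date | None = None  # when results are re-examined

# Hypothetical revenue cycle pilot, for illustration only
pilot = AIProjectRecord(
    name="Claims denial summarizer",
    business_owner="Revenue Cycle Director",
    technical_owner="Data Engineering Lead",
    source_systems=["claims_warehouse", "payer_portal_extracts"],
    success_metric="Average touches per denied claim",
    next_review=date(2026, 3, 1),
)
print(pilot.name, "->", pilot.success_metric)
```

Even a record this small forces the conversation about ownership and measurement before a pilot goes live, which is most of what the structure needs to do.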
2. Many organizations will delay a quarter of their planned AI spending
Forrester expects companies to slow or pause some AI investments until they can show measurable financial impact.
Healthcare has been moving in this direction already. Leaders want confidence that an AI solution will reduce rework, shorten turnaround times, or improve throughput in a way they can track and explain.
This shift isn’t a setback. It pushes organizations toward clearer priorities. When budgets tighten, the work with the strongest operational impact rises to the top—things like claims automation, utilization management support, care manager efficiency, and administrative tasks that currently drain staff time.
For healthcare leaders, the lesson is simple:
Start with the workflows that create the most friction or the highest cost. Focus the first phase of AI work there. Prove value in a short window, then build from that foundation.
3. New AI-focused cloud providers will gain ground
The report highlights fast growth in “neoclouds”—providers built specifically for GPU-heavy AI workloads.
This doesn’t mean healthcare organizations need to move away from their current cloud platforms. But it does show how quickly the ecosystem around AI is changing. Leaders will face new questions about where models should run, how data flows across environments, and which hosting choices support long-term flexibility.
The takeaway is straightforward:
Build AI in a way that avoids locking the organization into one platform or tool. A flexible architecture will matter more as new options continue to appear.
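In code, flexibility can be as simple as keeping workflow logic behind a small interface so the underlying model provider can be swapped later. The sketch below is a rough illustration; the class names and the stubbed vendors are assumptions, not any specific product's API.

```python
# A rough sketch of keeping model providers swappable behind one interface.
from abc import ABC, abstractmethod

class SummarizerClient(ABC):
    """Any hosted model the organization uses must fit this small contract."""
    @abstractmethod
    def summarize(self, text: str) -> str: ...

class VendorASummarizer(SummarizerClient):
    def summarize(self, text: str) -> str:
        # Vendor A's API call would go here; stubbed for illustration.
        return f"[vendor-a summary of {len(text)} chars]"

class VendorBSummarizer(SummarizerClient):
    def summarize(self, text: str) -> str:
        # Vendor B's API (or a self-hosted model) would go here; stubbed.
        return f"[vendor-b summary of {len(text)} chars]"

def build_discharge_summary(note: str, client: SummarizerClient) -> str:
    # Workflow code depends only on the interface, so the provider can
    # change without rewriting the workflow.
    return client.summarize(note)

print(build_discharge_summary("Patient note text...", VendorASummarizer()))
```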
4. Hiring AI talent will get harder
Forrester predicts a sharp rise in time-to-fill for developer and engineering roles.
Healthcare already feels this strain. Finding people who can manage data pipelines, build machine learning workflows, or maintain large cloud systems is difficult for most organizations. And talent shortages make it even harder to support multiple one-off AI projects across different teams.
The message for leaders is clear:
Plan for an environment where internal teams won’t build every AI capability themselves. Reusable components, shared frameworks, and predictable processes will matter more than ever. They reduce the burden on limited engineering teams and make it easier to support AI at scale.
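A shared evaluation helper is one example of a reusable component: built once, then used by any team to check a tool against a small reviewed sample before it goes further. The sketch below is a simplified illustration; the claims-routing example and the scoring rule are assumptions.

```python
# A minimal sketch of a shared evaluation helper any team can reuse.
from typing import Callable

def evaluate(predict: Callable[[str], str],
             labeled_sample: list[tuple[str, str]]) -> float:
    """Return the fraction of reviewed examples the tool gets exactly right."""
    if not labeled_sample:
        return 0.0
    correct = sum(1 for text, expected in labeled_sample if predict(text) == expected)
    return correct / len(labeled_sample)

# Hypothetical usage: a claims-routing pilot checked against reviewed examples.
def route_claim(text: str) -> str:
    return "resubmit" if "missing modifier" in text else "appeal"

sample = [
    ("denied: missing modifier 25", "resubmit"),
    ("denied: not medically necessary", "appeal"),
    ("denied: missing modifier 59", "resubmit"),
]
print(f"Accuracy on reviewed sample: {evaluate(route_claim, sample):.0%}")
```

The point is less the code than the pattern: one small, predictable way of checking results that every project uses, instead of each team inventing its own.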
5. Quantum security spending will rise
The report expects quantum-related security investments to become a growing share of overall IT security budgets.
Healthcare won’t be the first sector to face quantum threats, but it handles some of the most sensitive data. As AI adoption grows, so does the need for stronger protections—secure data flows, modern encryption practices, and better visibility into how models use internal information.
The broader point:
AI and security can’t be separated. Any serious AI plan needs clear controls around data handling, access, encryption, monitoring, and auditing. This is especially true in environments that manage PHI.
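In practice, some of those controls can live in a thin wrapper around every model call. The sketch below illustrates two of them, basic redaction and audit logging; the pattern matching is deliberately simplistic and the function names are assumptions, not a complete PHI de-identification approach.

```python
# A simplified sketch of two controls: redacting obvious identifiers before
# text reaches a model, and logging every call for audit. Illustrative only.
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

MRN_PATTERN = re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE)

def redact(text: str) -> str:
    """Strip obvious record numbers before text leaves internal systems."""
    return MRN_PATTERN.sub("[REDACTED-MRN]", text)

def call_model(user: str, purpose: str, text: str) -> str:
    """Wrap every model call with redaction and an audit trail."""
    safe_text = redact(text)
    audit_log.info("user=%s purpose=%s chars_sent=%d", user, purpose, len(safe_text))
    # The actual model call would go here; stubbed for illustration.
    return f"[model output for {len(safe_text)} chars]"

print(call_model("care_mgr_17", "discharge summary draft", "Patient MRN: 123456 ..."))
```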
The bigger picture for healthcare
Taken together, these predictions point to a simple reality:
AI success in healthcare depends on steady foundations, not speed alone.
Organizations that build a shared governance model, start with high-impact workflows, use flexible architectures, and modernize security will move faster and hit fewer blockers. They’ll also avoid the trap of launching multiple disconnected initiatives that never grow beyond proof-of-concept status.
Healthcare doesn’t need more experiments. It needs a way to turn the best ideas into real improvements in how people work and how care is delivered. The themes in this report help clarify what that path looks like and why some organizations will advance farther than others.