AI Budgets Soar, ROI Still Elusive: The Enterprise Reality Check
The honeymoon phase for enterprise AI investment appears to be over. While generative AI budgets have surged year over year, according to analysts at Forrester Research, most organizations still cannot demonstrate sustained return on investment. What started as an era of boundless enthusiasm has entered a more sobering chapter in which boards and CFOs are asking the hard question: what are we actually getting back for all this spending?
The Numbers Tell a Troubling Story
The data suggests a widening gap between investment and measurable returns. Early pilots often deliver promising results in controlled environments, but as systems scale into production, costs fluctuate unpredictably and benefits become harder to quantify. This isn't because the technology is failing technically; it's because organizations are applying traditional IT budgeting and accountability models to a fundamentally different cost structure.
Key Specifications:
- GenAI budgets increased substantially year over year
- Majority of organizations struggle to demonstrate sustained ROI
- Costs are consumption-based with unpredictable usage patterns
- Benefits often indirect or risk-adjusted rather than transactional
What the mainstream media is missing is that this ROI challenge represents a fundamental shift in how enterprises must think about technology investment. The convergence between IT and finance isn't just organizational—it's philosophical.
Analyst Perspective: Beyond Cost Control
Greg Zorella, lead principal analyst at Forrester covering IT financial management, argues that high-performing IT organizations are moving away from treating finance as a gatekeeper focused on cost containment. Instead, IT finance is becoming a strategic capability for value delivery.
"IT finance isn't there because IT spends a lot of money," Zorella explains. "It's there because IT spend can really drive strategic outcomes for the enterprise." This framing matters enormously for AI, where traditional financial models break down. Consumption-based costs, unpredictable usage patterns, and indirect benefits require new approaches to value measurement.
Our internal analysis at NextCore suggests that organizations making progress are starting with narrow proof points that demonstrate how better financial visibility improves decision-making, rather than attempting comprehensive transformation all at once.
CIO Reality: Budgets Don't Expand Forever
Sumit Johar, CIO of BlackLine, describes a familiar cycle: initial AI enthusiasm gave way to peer pressure, and now that phase is ending as finance leaders ask harder questions. "If I tell my CFO that 95% of employees are using AI, that doesn't mean anything," Johar says. "It's like saying 100% of employees use email. Finance cares about impact on profitability, revenue, or risk—everything else falls flat."
The distinction Johar draws is critical: broad productivity platforms versus outcome-driven AI initiatives. While everyday AI tools can be culturally transformative, they're notoriously difficult to quantify. Engagement metrics and self-reported productivity rarely survive financial scrutiny.
What's changed most is that AI spending is no longer additive. CIOs aren't receiving incremental budget increases "because AI." Any additional investment must be funded by reallocating existing budgets. "Nobody is blindly throwing money at AI anymore," Johar notes. "If we want to spend more, we have to move things around."
Why ROI Collapses at Scale
According to Jim Olsen, CTO of AI lifecycle management platform ModelOp, the breakdown is rarely caused by a single flaw; it's structural. Early AI projects develop in controlled environments with limited data and predictable usage, so costs appear manageable and performance looks strong.
Production environments behave very differently. "You develop something locally and it looks very doable," Olsen explains. "But once it hits production, usage patterns change, contexts explode, and suddenly the true cost shows up." Generative AI amplifies this problem through unpredictable token consumption and widespread model reuse across workflows.
Without clear inventory and lifecycle tracking, enterprises end up managing AI spend in aggregate while value is created or lost at the margins. "If you don't know what's out there, you can't measure it, govern it, or tie it back to ROI," Olsen says.
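To make Olsen's point concrete, the minimal sketch below shows what tying consumption-based spend back to individual use cases might look like. Everything here is hypothetical for illustration: the `ModelRecord` inventory entry, the blended `PRICE_PER_1K_TOKENS` rate, and the example model names are assumptions, not ModelOp's product or any vendor's actual pricing.

```python
# Illustrative sketch (hypothetical names and rates): a minimal AI model
# inventory that attributes consumption-based spend to use cases, so ROI
# questions can be asked per use case rather than in aggregate.
from dataclasses import dataclass

PRICE_PER_1K_TOKENS = 0.01  # assumed blended rate in USD, not a real price


@dataclass
class ModelRecord:
    """One deployed model in the inventory, tagged with its business use case."""
    name: str
    use_case: str
    tokens_used: int = 0

    def log_usage(self, tokens: int) -> None:
        self.tokens_used += tokens

    @property
    def cost(self) -> float:
        # Consumption-based cost: spend grows with token usage, not licenses.
        return self.tokens_used / 1000 * PRICE_PER_1K_TOKENS


def spend_by_use_case(inventory: list[ModelRecord]) -> dict[str, float]:
    """Aggregate spend at the level where value is created or lost."""
    totals: dict[str, float] = {}
    for rec in inventory:
        totals[rec.use_case] = totals.get(rec.use_case, 0.0) + rec.cost
    return totals


inventory = [
    ModelRecord("summarizer-v2", use_case="contract review"),
    ModelRecord("support-bot", use_case="customer support"),
]
inventory[0].log_usage(2_500_000)
inventory[1].log_usage(800_000)
print(spend_by_use_case(inventory))
# {'contract review': 25.0, 'customer support': 8.0}
```

The point isn't the arithmetic; it's the shape of the record: without something like the `use_case` tag on every deployed model, spend can only ever be reported in aggregate, which is exactly the failure mode Olsen describes.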
The remedy is treating AI as industrial infrastructure rather than experimental tooling. Lifecycle management—covering development, deployment, monitoring, and retirement—isn't bureaucratic overhead; it's the only way to maintain accountability as models evolve.
Governance: When Value Must Be Proven
As AI investments face regulatory and board-level scrutiny, governance increasingly determines whether ROI can be defended at all. Anthony Habayeb, CEO of AI governance vendor Monitaur, argues that many AI initiatives fail under review not because they perform poorly, but because success was never clearly defined.
"We're running around with a hammer looking for a nail," Habayeb says. "If you don't know what success looks like at inception, you can't defend ROI later." Governance failures often surface only after deployment when organizations attempt to retroactively justify spend.
Regulatory frameworks like the EU AI Act are pushing organizations to formalize oversight, but the smartest enterprises are using regulation as a forcing function to build broader governance capabilities. "Governance shouldn't be a separate compliance line item," Habayeb argues. "It should be part of how you make AI work for the business."
The NextCore Edge: What's Actually Working
Our strategic tracking of this sector reveals that enterprises making genuine progress share several counterintuitive traits. They're aligning AI investment with business strategy rather than treating it as a standalone category. They're building financial models that accommodate consumption-based costs and indirect value. They're enforcing operational discipline across the AI lifecycle.
Most importantly, they're embedding governance early—not as a brake on innovation, but as a foundation for trust and sustainability. The era of AI as an experiment is ending. The era of AI as an accountable enterprise asset has begun.
Pro Tip: Building Defensible AI ROI
For CIOs planning 2026 budgets, the message is sobering but constructive: AI will not justify itself. Value must be designed, measured, and defended using tools and practices that many organizations are only now beginning to develop. Start with narrow, high-impact use cases where outcomes are clearly measurable. Build lifecycle management from day one, not as an afterthought. And most critically, ensure AI governance isn't just about compliance—it's about creating the visibility and accountability needed to defend every dollar of investment.
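One way to make "designed, measured, and defended" tangible is to put a confidence factor on claimed benefits before computing ROI, so the number survives the financial scrutiny Johar describes. The sketch below is a simplified illustration; the function name, the dollar figures, and the 60% confidence factor are all assumptions, not a standard from any of the firms quoted here.

```python
# Illustrative sketch (hypothetical figures): risk-adjusting a claimed
# benefit before computing ROI, rather than reporting the headline number.
def risk_adjusted_roi(annual_benefit: float,
                      confidence: float,
                      annual_cost: float) -> float:
    """ROI = (benefit * confidence - cost) / cost."""
    adjusted_benefit = annual_benefit * confidence
    return (adjusted_benefit - annual_cost) / annual_cost


# A pilot claiming $500k in annual benefit, discounted to 60% confidence,
# against $200k in annual consumption-based spend:
roi = risk_adjusted_roi(annual_benefit=500_000,
                        confidence=0.60,
                        annual_cost=200_000)
print(f"{roi:.0%}")  # 50%
```

The headline claim here would be 150% ROI; the defensible claim is 50%. Agreeing on the confidence factor at inception, as Habayeb suggests, is what makes the figure survivable in a later review.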
According to our analysis, organizations that treat AI as a strategic capability rather than a technology experiment are the ones most likely to survive the current ROI reckoning and emerge with sustainable competitive advantage.
NextCore | Empowering the Future with AI Insights
Bringing you the latest in technology and innovation.