From pilots to platforms: why enterprise AI investment needs a rethink

By Sujatha S Iyer, Head of AI Security at Zoho Corp.

Following the initial hype around AI, we can now start to see the results of its early deployment, and the results are not always pretty. The first wave of generative AI enthusiasm has played out inside large organisations, with many early pilots failing to translate into meaningful returns. This is not because the technology has fallen short, but because of enterprises' funding and governance decisions.

Now, as attention shifts rapidly toward agentic AI (systems designed to act, not just generate), the same structural weaknesses are becoming even more exposed.

It starts with a mismatch that most leadership teams have been slow to acknowledge: AI no longer behaves like a software tool. It behaves like infrastructure. And infrastructure requires a different kind of investment logic entirely.

When project budgets meet infrastructure reality

Enterprise technology investment has long operated on a familiar project model: defined scope, fixed budget, delivery milestone, handover to operations. For software that is deployed once and then maintained, that works well enough. But AI does not fit that pattern.

A single large language model deployment may simultaneously touch document generation, contract review, customer interaction, internal search and compliance reporting. It requires continuous data curation, ongoing model refinement and active governance oversight. If an organisation uses a project funding model, it is, in effect, starting from scratch each time it wants to extend or improve the capability.

Organisations that have successfully moved AI beyond the pilot stage have made one common shift: replacing project-based budgets with a product or platform funding model. Rather than a one-off spend tied to a specific deliverable, this means a sustained budget covering the full capability stack, including data pipelines, integration layers, security controls, and the people required to keep everything running. The AI team stops functioning as a project squad and starts operating more like an internal platform function, responsible for something the rest of the business depends on day to day.

The cost layers most business cases overlook

Most AI business cases are built around two numbers: model licensing costs and anticipated efficiency savings. Neither tells the full story.

The first commonly underestimated layer is infrastructure and compute. Inference at scale, particularly for real-time customer-facing applications, routinely exceeds training costs by a significant margin.

The second is data readiness. A March 2026 report from Cloudera and Harvard Business Review Analytic Services found that only 7% of enterprises consider their data fully prepared for AI adoption, with more than a quarter reporting their data is not meaningfully ready at all. The remediation work required to make enterprise data usable by AI systems is substantial, and it rarely appears in initial proposals.

The third layer is governance: the policies, controls, monitoring systems, and human review processes needed to deploy AI responsibly and maintain regulatory compliance. In regulated industries, this is not discretionary overhead. It is a legal prerequisite.

The fourth, and most chronically underfunded, is workforce adoption. AI-driven change is harder to manage than conventional technology change because the implications for established roles are more direct and more visible. Organisations that invest heavily in the model without considering the people using it consistently find that technically capable AI delivers modest real-world results. Employees who do not understand a system, do not trust it, or have quietly found workarounds will not unlock its potential, regardless of what the underlying model can do.

Legacy systems: from backlog item to strategic blocker

Legacy modernisation has sat on IT backlogs for years, consistently deprioritised in favour of more immediate demands. The rise of agentic AI has changed its strategic status considerably.

AI agents designed to work autonomously across enterprise systems, querying data, triggering workflows, and surfacing decisions, cannot function effectively when the underlying architecture is fragmented, poorly documented, or inaccessible via modern APIs. Legacy debt does not simply slow deployment. It actively degrades output quality and makes it near-impossible to establish the audit trails that regulators and boards now require.

The organisations currently achieving the strongest AI outcomes are not always those running the most advanced models. They are the ones that have done the less glamorous groundwork: rationalising data estates, consolidating tooling, and decommissioning systems that were never designed to participate in a connected, AI-driven environment.

Getting the investment logic right

How senior leadership frames AI investment has a direct bearing on what gets delivered. Organisations that treat AI as a compounding, enterprise-wide capability, rather than a series of discrete technology purchases, tend to make substantially better decisions about where and how to commit resources.

AI investment compounds in a way that most project budgets are not designed to accommodate. Data infrastructure built for one use case becomes progressively cheaper to apply to the next. Governance frameworks, once established, can be extended more rapidly across subsequent deployments. Teams that have delivered AI projects carry forward capability that reduces both cost and risk on future programmes. The first deployment may be expensive and slow. The tenth is faster and cheaper, but only if the investment logic is built for continuity rather than repeated cold starts. Research backs this up: 95% of AI ROI leaders allocate more than 10% of their technology budget to AI, treating it as a core transformation priority rather than an isolated line item.

Building for the long term

Talking about AI as infrastructure is easy. Funding it that way is harder, particularly when boards still expect project-style accountability: a deliverable, a cost, a close-out date.

The practical shift starts with being honest about what is holding programmes back. In most organisations, it is not the model. It is the data that feeds it, the legacy systems it cannot connect to, and the workforce that has not been brought along. These are not peripheral concerns to be addressed after deployment. They are the conditions that determine whether deployment succeeds at all.

IT leaders who can make that case clearly, and secure a budget that reflects the full picture, are the ones who will have something durable to show for it. Those who keep accepting project budgets for infrastructure problems will keep arriving at the same destination: a promising pilot that goes nowhere.
