"What's the ROI on this AI project?"

It is the question every executive asks. It is the question every team dreads. And it is, more often than not, the wrong question — or at least, a question being asked with the wrong tools.

I have sat in enough strategy rooms to know how this plays out. A team builds something with AI. It works. People use it. Things improve. And then someone in finance asks for a number. A clean, defensible, spreadsheet-friendly number. And suddenly, the team that built something genuinely valuable finds itself unable to explain why.

The problem is not that AI lacks value. The problem is that the frameworks we use to measure value were designed for a different kind of investment — one where inputs are predictable, outputs are linear, and timelines are short. AI does not behave that way. And when we force it into those frameworks, we either overstate the return with inflated projections, or understate it by measuring only what is easy to count.

The ROI question is not wrong. The measurement model is. We are applying industrial-era accounting to a capability that compounds over time, reshapes how work gets done, and creates value in ways that do not appear on a cost-reduction spreadsheet.

The Numbers That Feel Right But Aren't

There are three ROI metrics that appear in almost every AI business case. They are not wrong, exactly — but they are routinely misused in ways that make AI investments look either miraculous or pointless, depending on who is doing the maths.

False Metric #1: Hours Saved

The most common AI ROI claim: 'This tool saves each employee two hours per week.' Multiply by headcount, multiply by average salary, and you have a number that looks compelling. The problem is that saved time rarely converts to saved money unless you reduce headcount — which most organisations are not willing to do, and probably should not do. What actually happens is that people fill the recovered time with other work. That is not a bad outcome. But it is not the cost reduction that was promised. The honest version of this metric is not 'hours saved' but 'capacity unlocked' — and that requires a different conversation about what the organisation does with that capacity.
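The multiplication behind the pitch, and the gap it hides, is easy to see in a few lines. Every figure below is invented for illustration; nothing here comes from a real business case:

```python
# Hypothetical figures for illustration only -- not from any real business case.
headcount = 400               # employees using the tool
hours_saved_per_week = 2      # the vendor's per-employee claim
loaded_hourly_cost = 60       # fully loaded cost per hour (USD)
weeks_per_year = 48

# The naive business-case number: treats every saved hour as cash returned.
naive_annual_saving = (headcount * hours_saved_per_week
                       * loaded_hourly_cost * weeks_per_year)
print(f"Naive 'savings': ${naive_annual_saving:,}")  # $2,304,000

# The honest framing: with no headcount change, no cash leaves the cost base.
# What the organisation actually gains is capacity, measured in hours.
annual_capacity_unlocked = headcount * hours_saved_per_week * weeks_per_year
print(f"Capacity unlocked: {annual_capacity_unlocked:,} hours/year")  # 38,400 hours
```

The first number only becomes real if roles are eliminated; the second is real either way, but it demands a follow-up decision about where the hours go.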

False Metric #2: Productivity Increase

Productivity metrics are seductive because they are measurable. Tickets closed per day. Code lines per sprint. Documents processed per hour. But productivity at the task level does not automatically translate to productivity at the outcome level. A team that closes twice as many support tickets is not necessarily delivering twice the customer satisfaction. A developer who writes more code is not necessarily shipping better software. When AI accelerates the execution of tasks, it can mask whether those tasks were the right ones to begin with. The metric goes up; the outcome may not.

False Metric #3: Cost Per Transaction

In operational contexts — customer service, document processing, data entry — AI genuinely does reduce cost per transaction. This is real and measurable. But organisations often project these savings across the entire business without accounting for the fact that not all work is transactional. Knowledge work, creative work, strategic work, and relationship work do not have a 'transaction cost' in any meaningful sense. Applying cost-per-transaction logic to these domains produces numbers that are technically calculable but strategically meaningless.
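Within a genuinely transactional process, the saving is straightforward arithmetic. The figures below are invented for illustration:

```python
# Illustrative figures only -- not drawn from any real deployment.
monthly_transactions = 50_000
cost_before = 4.20   # fully loaded cost per transaction, human-handled (USD)
cost_after = 1.10    # cost with AI handling plus human oversight (USD)

monthly_saving = monthly_transactions * (cost_before - cost_after)
print(f"Monthly saving: ${monthly_saving:,.0f}")  # $155,000

# The trap is the next step: extrapolating this rate across the whole business
# assumes all work is transactional. Knowledge, creative, and relationship
# work have no per-unit cost to cut, so the projection is meaningless there.
```

The calculation is sound for the process it describes; the strategic error is in the scope it gets applied to.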

What the Business Case Leaves Out

If false metrics inflate the return side of the equation, hidden costs deflate the investment side — or rather, they make the true investment much larger than the initial budget suggests.

The most underestimated cost of AI implementation is not the technology. It is the organisational change required to make the technology work. A new AI tool does not slot into existing workflows. It reshapes them. And reshaping workflows means retraining people, redesigning processes, updating governance, managing resistance, and absorbing a period of reduced productivity while the organisation adapts.

None of this appears in the vendor's pricing. Almost none of it appears in the initial business case. And yet it is often the determining factor in whether an AI investment succeeds or fails.

The Hidden Cost Stack

Change management & training — Typically 2–4× the tool cost in large organisations
Integration & data quality work — Often the longest phase; rarely scoped accurately upfront
Governance & compliance overhead — Grows with scale; invisible until something goes wrong
Productivity dip during transition — Real but temporary; almost never modelled in ROI projections
Ongoing model maintenance — Model behaviour drifts as data, usage, and underlying models change; continuous human oversight is required
Opportunity cost of attention — Leadership bandwidth spent on AI is not spent on other priorities

There is also a subtler cost that almost never gets measured: the cost of misaligned AI. When an AI system is deployed without sufficient context, governance, or human oversight, it produces outputs that look correct but are subtly wrong. The organisation acts on those outputs. Decisions get made. And the error compounds quietly until it becomes visible — by which point the damage is much harder to quantify and much harder to reverse.

The Returns That Actually Materialise

None of this means AI does not deliver value. It does — often substantial value. But the value tends to arrive in forms that are harder to measure, slower to appear, and more strategic than operational.

Real Return #1: Decision Quality at Scale

The most durable AI return is not speed — it is the ability to make better decisions, more consistently, across a larger surface area than human attention can cover. AI that surfaces the right signal at the right moment, flags anomalies before they become crises, or synthesises information across domains that humans cannot hold simultaneously — this creates compounding strategic value. It is hard to put a number on it. But organisations that have it make fewer expensive mistakes.

Real Return #2: Capability Expansion Without Headcount Growth

The genuine productivity story is not that AI replaces people — it is that AI allows a team of ten to do what previously required a team of twenty-five. Not by working harder, but by working on higher-leverage tasks while AI handles the cognitive overhead of lower-leverage ones. This return is real, but it only materialises if the organisation is deliberate about what it does with the reclaimed capacity. Teams that use AI to do more of the same work will see modest gains. Teams that use AI to do fundamentally different work will see transformational ones.

Real Return #3: Organisational Learning Velocity

AI accelerates the feedback loop between action and insight. When AI can process outcomes, identify patterns, and surface learning faster than human analysis can, the organisation gets smarter faster. This is perhaps the least visible ROI — and the most important one over a five-year horizon. The organisations that will lead in 2030 are not necessarily the ones with the most AI today. They are the ones that are learning fastest about what works, what does not, and how to adapt.

Real Return #4: Talent Retention and Attraction

This one surprises people, but the data is consistent: knowledge workers increasingly want to work in environments where AI is used thoughtfully. Not because they want to be replaced, but because they want to do meaningful work — and AI, when deployed well, removes the tedious work that drains meaning from a role. Organisations that use AI to elevate their people's work will find it easier to attract and retain the talent that matters most.

The organisations that measure AI ROI correctly are not the ones with the best spreadsheets. They are the ones that have decided, in advance, what kind of value they are trying to create — and built measurement systems that can actually see it.

Measuring What Actually Matters

If the standard ROI framework is inadequate for AI, what should replace it? Not a single number — but a portfolio of measures that together tell a more honest story.

I think about AI measurement across three time horizons, each requiring different metrics and different levels of tolerance for ambiguity.

| Time horizon | What to measure | Tolerance for ambiguity |
|---|---|---|
| 0–6 months | Adoption rate, error reduction, cycle time, user satisfaction | Low — you need early signals that the tool is working |
| 6–18 months | Capacity reallocation, decision quality, process redesign depth | Medium — directional evidence is sufficient |
| 18 months+ | Strategic capability expansion, learning velocity, competitive positioning | High — these are portfolio-level bets, not line items |
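The three horizons above can be held as a simple structure rather than a spreadsheet, which makes it easy for a team to extend with owners, targets, and review cadences. This is a sketch, not a prescribed tool; the field names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Horizon:
    """One row of the measurement portfolio (field names are illustrative)."""
    window: str
    metrics: list[str]
    ambiguity_tolerance: str  # how much directional (vs precise) evidence suffices
    owners: list[str] = field(default_factory=list)  # to be filled in per team

# The three-horizon portfolio from the table above, encoded directly.
portfolio = [
    Horizon("0-6 months",
            ["adoption rate", "error reduction", "cycle time", "user satisfaction"],
            "low"),
    Horizon("6-18 months",
            ["capacity reallocation", "decision quality", "process redesign depth"],
            "medium"),
    Horizon("18 months+",
            ["strategic capability expansion", "learning velocity",
             "competitive positioning"],
            "high"),
]

for h in portfolio:
    print(f"{h.window}: {len(h.metrics)} measures, ambiguity tolerance {h.ambiguity_tolerance}")
```

The point of encoding it at all is the discipline: each horizon gets named measures and a stated tolerance for ambiguity before the first number is ever reported.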

The short-term metrics are the ones most organisations already track — and they matter. But they are not the story. They are the early chapters. The real return on AI investment is written over years, not quarters, and it shows up in the organisation's ability to do things it could not do before — not just in the cost of things it was already doing.

This requires a different conversation with finance and leadership. Not "here is the ROI" but "here is the theory of value, here is how we will know if it is working, and here is the time horizon over which we expect to see it." That is a harder conversation. It is also a more honest one.

What to Say When Someone Asks for the Number

So what do you say when the CFO asks for the ROI on AI?

You say: "It depends on what we're trying to return."

If the goal is cost reduction in a specific, transactional process — you can measure that, and you should. The number will be real and defensible. If the goal is to build a more capable, faster-learning, better-deciding organisation — you can measure the leading indicators of that, and you should. But the final number will only be visible in hindsight, and it will be much larger than anything you projected upfront.

The organisations that get AI right are the ones that resist the pressure to justify every investment with a precise, near-term return. They invest in capability, they measure what they can, they stay honest about what they cannot, and they build the organisational muscle to learn faster than the competition.

That is not a financial model. It is a strategic posture. And in the current moment, it may be the most important competitive advantage available.

The question is not "what is the ROI on AI?" The question is "what kind of organisation do we want to become — and is AI the right path to get there?" Answer that first. The numbers will follow.

Daniela Santos is an Engineering Manager at Mercedes-Benz.io and the author of HumanAI — a newsletter on humans, AI, and the future of work.