The Question
"What's the ROI on this AI project?"
It is the question every executive asks. It is the question every team dreads. And it is, more often than not, the wrong question — or at least, a question being asked with the wrong tools.
I have sat in enough strategy rooms to know how this plays out. A team builds something with AI. It works. People use it. Things improve. And then someone in finance asks for a number. A clean, defensible, spreadsheet-friendly number. And suddenly, the team that built something genuinely valuable finds itself unable to explain why.
The problem is not that AI lacks value. The problem is that the frameworks we use to measure value were designed for a different kind of investment — one where inputs are predictable, outputs are linear, and timelines are short. AI does not behave that way. And when we force it into those frameworks, we either overstate the return with inflated projections, or understate it by measuring only what is easy to count.
The ROI question is not wrong. The measurement model is. We are applying industrial-era accounting to a capability that compounds over time, reshapes how work gets done, and creates value in ways that do not appear on a cost-reduction spreadsheet.
False Metrics
The Numbers That Feel Right But Aren't
There are three ROI metrics that appear in almost every AI business case. They are not wrong, exactly — but they are routinely misused in ways that make AI investments look either miraculous or pointless, depending on who is doing the maths.
False Metric #1
Hours Saved
The most common AI ROI claim: "This tool saves each employee two hours per week." Multiply by headcount, multiply by average salary, and you have a number that looks compelling. The problem is that saved time rarely converts to saved money unless you reduce headcount — which most organisations are not willing to do, and probably should not do. What actually happens is that people fill the recovered time with other work. That is not a bad outcome. But it is not the cost reduction that was promised. The honest version of this metric is not "hours saved" but "capacity unlocked" — and that requires a different conversation about what the organisation does with that capacity.
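The arithmetic behind this claim can be made explicit. The sketch below uses entirely hypothetical figures — headcount, hourly cost, and working weeks are assumptions for illustration, not drawn from any real business case — to show why the two framings produce such different numbers.

```python
# Illustrative sketch of the "hours saved" calculation described above.
# All figures are hypothetical, chosen only to make the arithmetic visible.

HOURS_SAVED_PER_WEEK = 2    # claimed saving per employee (assumed)
HEADCOUNT = 500             # employees using the tool (assumed)
AVG_HOURLY_COST = 60.0      # fully loaded cost per hour (assumed)
WEEKS_PER_YEAR = 46         # working weeks, net of leave (assumed)

# The spreadsheet-friendly version: hours x headcount x cost x weeks.
naive_annual_saving = (
    HOURS_SAVED_PER_WEEK * HEADCOUNT * AVG_HOURLY_COST * WEEKS_PER_YEAR
)
print(f"Naive 'hours saved' ROI: {naive_annual_saving:,.0f} per year")

# The honest version: no cash is saved unless headcount or spend falls.
# What the tool actually creates is capacity, whose value depends entirely
# on the work the organisation chooses to fill it with.
capacity_hours = HOURS_SAVED_PER_WEEK * HEADCOUNT * WEEKS_PER_YEAR
print(f"Capacity unlocked: {capacity_hours:,} hours per year")
```

Note that both versions multiply the same inputs; the difference is the final step, where the naive model converts hours into money that never actually leaves the cost base.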
False Metric #2
Productivity Increase
Productivity metrics are seductive because they are measurable. Tickets closed per day. Lines of code per sprint. Documents processed per hour. But productivity at the task level does not automatically translate to productivity at the outcome level. A team that closes twice as many support tickets is not necessarily delivering twice the customer satisfaction. A developer who writes more code is not necessarily shipping better software. When AI accelerates the execution of tasks, it can mask whether those tasks were the right ones to begin with. The metric goes up; the outcome may not.
False Metric #3
Cost Per Transaction
In operational contexts — customer service, document processing, data entry — AI genuinely does reduce cost per transaction. This is real and measurable. But organisations often project these savings across the entire business without accounting for the fact that not all work is transactional. Knowledge work, creative work, strategic work, and relationship work do not have a "transaction cost" in any meaningful sense. Applying cost-per-transaction logic to these domains produces numbers that are technically calculable but strategically meaningless.
What Is Real
The Returns That Actually Materialise
None of this means AI does not deliver value. It does — often substantial value. But the value tends to arrive in forms that are harder to measure, slower to appear, and more strategic than operational.
Real Return #1
Decision Quality at Scale
The most durable AI return is not speed — it is the ability to make better decisions, more consistently, across a larger surface area than human attention can cover. AI that surfaces the right signal at the right moment, flags anomalies before they become crises, or synthesises information across domains that humans cannot hold simultaneously — this creates compounding strategic value. It is hard to put a number on it. But organisations that have it make fewer expensive mistakes.
Real Return #2
Capability Expansion Without Headcount Growth
The genuine productivity story is not that AI replaces people — it is that AI allows a team of ten to do what previously required a team of twenty-five. Not by working harder, but by working on higher-leverage tasks while AI handles the cognitive overhead of lower-leverage ones. This return is real, but it only materialises if the organisation is deliberate about what it does with the reclaimed capacity. Teams that use AI to do more of the same work will see modest gains. Teams that use AI to do fundamentally different work will see transformational ones.
Real Return #3
Organisational Learning Velocity
AI accelerates the feedback loop between action and insight. When AI can process outcomes, identify patterns, and surface learning faster than human analysis can, the organisation gets smarter faster. This is perhaps the least visible ROI — and the most important one over a five-year horizon. The organisations that will lead in 2030 are not necessarily the ones with the most AI today. They are the ones that are learning fastest about what works, what does not, and how to adapt.
Real Return #4
Talent Retention and Attraction
This one surprises people, but the pattern is consistent: knowledge workers increasingly want to work in environments where AI is used thoughtfully. Not because they want to be replaced, but because they want to do meaningful work — and AI, when deployed well, removes the tedious work that drains meaning from a role. Organisations that use AI to elevate their people's work will find it easier to attract and retain the talent that matters most.
The organisations that measure AI ROI correctly are not the ones with the best spreadsheets. They are the ones that have decided, in advance, what kind of value they are trying to create — and built measurement systems that can actually see it.
A Better Frame
Measuring What Actually Matters
If the standard ROI framework is inadequate for AI, what should replace it? Not a single number — but a portfolio of measures that together tell a more honest story.
I think about AI measurement across three time horizons, each requiring different metrics and different levels of tolerance for ambiguity.
| Time Horizon | What to Measure | Tolerance for Ambiguity |
|---|---|---|
| 0–6 months | Adoption rate, error reduction, cycle time, user satisfaction | Low — you need early signals that the tool is working |
| 6–18 months | Capacity reallocation, decision quality, process redesign depth | Medium — directional evidence is sufficient |
| 18 months+ | Strategic capability expansion, learning velocity, competitive positioning | High — these are portfolio-level bets, not line items |
The short-term metrics are the ones most organisations already track — and they matter. But they are not the story. They are the early chapters. The real return on AI investment is written over years, not quarters, and it shows up in the organisation's ability to do things it could not do before — not just in the cost of things it was already doing.
This requires a different conversation with finance and leadership. Not "here is the ROI" but "here is the theory of value, here is how we will know if it is working, and here is the time horizon over which we expect to see it." That is a harder conversation. It is also a more honest one.
The Honest Answer
What to Say When Someone Asks for the Number
So what do you say when the CFO asks for the ROI on AI?
You say: "It depends on what we're trying to return."
If the goal is cost reduction in a specific, transactional process — you can measure that, and you should. The number will be real and defensible. If the goal is to build a more capable, faster-learning, better-deciding organisation — you can measure the leading indicators of that, and you should. But the final number will only be visible in hindsight, and it will be much larger than anything you projected upfront.
The organisations that get AI right are the ones that resist the pressure to justify every investment with a precise, near-term return. They invest in capability, they measure what they can, they stay honest about what they cannot, and they build the organisational muscle to learn faster than the competition.
That is not a financial model. It is a strategic posture. And in the current moment, it may be the most important competitive advantage available.
The question is not "what is the ROI on AI?" The question is "what kind of organisation do we want to become — and is AI the right path to get there?" Answer that first. The numbers will follow.
Daniela Santos is an Engineering Manager at Mercedes-Benz.io and the author of HumanAI — a newsletter on humans, AI, and the future of work.

