The most expensive AI investment is the one that doesn't work.
Enterprises are pouring $30 to $40 billion into generative AI annually. The reward for this investment? According to MIT's 2025 State of AI in Business report, 95% of enterprise AI pilots deliver zero measurable return. S&P Global found that 42% of companies abandoned most of their AI initiatives in 2025, up from just 17% the year before. The average organization scraps nearly half of its AI proofs of concept before they ever reach production.
Meanwhile, CFOs find themselves in an impossible position. RGP research shows 66% expect significant AI ROI within two years, but only 14% report meaningful value today. Even more telling, 71% remain skeptical about quantifying returns at all. They're being told to invest aggressively in a technology they can't measure, using a procurement model designed for a completely different era.
The problem isn't AI technology. It's AI procurement. Traditional capex models ask enterprises to bet millions on technology with an 80% failure rate, then figure out value later. Cloud providers compound the problem: enterprises typically sign multi-year deals, paid upfront, long before any value materializes. Outcome-based models flip this equation entirely, and they're changing how smart enterprises buy AI.
The traditional enterprise software playbook doesn't work for AI, and the reasons go deeper than most vendors want to admit.
AI depreciates differently than enterprise software. ERP systems have seven- to ten-year lifecycles. You buy them, implement them, and amortize the investment over a decade. AI models become obsolete in months. GPT-3.5 was state of the art in 2022. By 2024 it was legacy technology.
Implementation timelines are fantasy. Gartner reports that only 48% of AI projects make it to production, taking an average of eight months from prototype to deployment. Gartner also predicts that at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025, due to poor data quality, escalating costs, or unclear business value. These aren't edge cases. This is the norm.
Value is impossible to predict upfront. AI use cases evolve as organizations learn what actually works. The applications that deliver the most value often emerge from experimentation, not planning. Capex models require you to define value before you've discovered it. Which is exactly backwards for a technology where learning happens through deployment.
But the deepest problem is incentives. Traditional pricing models, like per-seat, per-license, or per-user, pay vendors whether their solution delivers value or not. Once the contract is signed, the vendor's incentive shifts from ensuring your success to minimizing their support costs. You bear 100% of the implementation risk while they bear none.
CFOs are caught between competing pressures that make rational AI investment nearly impossible. The board mandate is real. AI mentions on S&P 500 earnings calls hit record highs in 2025, with 306 calls citing AI in Q3 alone, according to FactSet analysis. The pressure to demonstrate AI leadership is intense. Boards want to see initiatives, pilots, strategies. Nobody wants to explain why they're falling behind.
But so is fiduciary duty. CFOs can't justify investments they can't measure. When RGP surveyed finance leaders, 35% cited data trust as their top barrier to AI ROI, yet investment in data foundations remains limited. The disconnect between recognition and resourcing means AI initiatives stall before they scale. And 48% of CFOs now say they're ultimately responsible for ensuring AI delivers measurable value, more than any other C-suite role. The accountability is landing squarely on finance.
The result is paralysis. Pilot projects that never scale. Shadow AI proliferating because official initiatives move too slowly. McKinsey's 2025 survey found that only 6% of organizations report meaningful enterprise-wide bottom-line impact. Everyone else is stuck between the mandate to invest and the inability to prove it worked.
What CFOs actually need isn't another vendor promising transformation. They need a procurement model that de-risks AI investment entirely. They need to pay for performance, not potential.
The shift is already happening, faster than most enterprises realize. Instead of paying per seat, per user, or per license, outcome-based pricing charges for results delivered. A customer service AI charges per resolved conversation, not per agent seat. A document processing solution charges per successful extraction, not per API call. The vendor only makes money when you get value.
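To make the mechanics concrete, here's a minimal sketch of the two billing models for that customer service example. Every price and volume below is hypothetical, chosen only to show how cost tracks seat count under one model and results under the other.

```python
# Hypothetical comparison of per-seat vs. outcome-based billing for a
# customer service AI. All numbers are illustrative, not real prices.

SEAT_PRICE_PER_MONTH = 150.00   # hypothetical per-agent license fee
PRICE_PER_RESOLUTION = 0.90     # hypothetical fee per resolved conversation

def per_seat_cost(seats: int) -> float:
    """Traditional licensing: cost is fixed by seat count, regardless of results."""
    return seats * SEAT_PRICE_PER_MONTH

def outcome_cost(resolved: int) -> float:
    """Outcome-based: cost scales only with verified resolutions."""
    return resolved * PRICE_PER_RESOLUTION

# A stalled pilot: 200 seats licensed, almost nothing actually resolved.
print(per_seat_cost(200))    # 30000.0 -- owed whether the AI works or not
print(outcome_cost(500))     # 450.0   -- you pay only for what resolved

# A successful rollout: same deployment, now resolving at real volume.
print(outcome_cost(40_000))  # 36000.0 -- spend rises only as value does
```

The point isn't the specific rates. It's that under the first model the failed pilot and the successful rollout cost exactly the same, while under the second your spend is a direct function of results.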
When vendors only get paid for successful outcomes, they're incentivized to ensure your deployment actually works, not just to close the deal and move on. They can't collect revenue from failed implementations, so they stop shipping failed implementations. Outcome-based models also require clear definitions of success before deployment begins. This forces both parties to agree on what "value" means, eliminating the ambiguity that lets vendors claim victory while customers see nothing.
Most importantly, outcome-based pricing transfers risk. The vendor absorbs the technology risk. If their AI doesn't perform, they don't get paid. This shifts the burden of proof from the buyer hoping the technology works to the seller ensuring it does.
Outcome-based pricing is necessary but not sufficient. The structure of the engagement matters as much as the commercial terms. Deployment speed also functions as a de-risking mechanism. Which means solutions that deploy in days rather than months limit your exposure.
If something isn't working, you find out quickly and adjust. Which is a much better option than discovering failure six months and several million dollars later. Modular architecture, pre-built integrations, and rapid iteration cycles all reduce the time between investment and validation.
Verifiability is the other requirement: outcome-based pricing only works with clear metrics both parties can verify. Beware vendors who define success in ways that are convenient for them but meaningless for you. Real-time dashboards, auditability, and third-party validation options indicate a vendor confident enough in their performance to let you see it clearly.
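As a sketch of what a verifiable metric can look like, here's a hypothetical, machine-checkable definition of a "resolved conversation." The field names and criteria are illustrative, not any vendor's actual schema; what matters is that buyer and vendor can run the identical check against the same logs.

```python
# A hypothetical, machine-checkable definition of a billable resolution.
# Field names and criteria are illustrative only.

from dataclasses import dataclass

@dataclass
class Conversation:
    customer_confirmed_resolution: bool  # customer explicitly confirmed the fix
    reopened_within_7_days: bool         # did the same issue come back?
    escalated_to_human: bool             # was a human agent required?

def is_billable_resolution(c: Conversation) -> bool:
    """Only conversations meeting every agreed criterion generate a charge."""
    return (c.customer_confirmed_resolution
            and not c.reopened_within_7_days
            and not c.escalated_to_human)

# Vendor and buyer audit the same record and get the same answer,
# which is what keeps the metric from being vendor-defined.
print(is_billable_resolution(
    Conversation(customer_confirmed_resolution=True,
                 reopened_within_7_days=False,
                 escalated_to_human=False)))  # True -> billable
```

A definition this explicit closes the loophole where a vendor counts every deflected ticket as a "resolution" while your customers quietly churn.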
The buy versus build trend reinforces this logic. Menlo Ventures data shows 76% of AI use cases are now purchased rather than built internally, up from 47% in 2024. Despite continued investment in internal builds, ready-made AI solutions reach production faster and demonstrate value sooner. Buying shifts risk to vendors who specialize in making AI work. Outcome-based pricing makes that risk transfer explicit and contractual.
The question is whether to invest before or after you know AI works.
Traditional procurement asks you to place a bet. Spend millions upfront, hope the technology performs, measure value years later. Given that 80 to 95% of AI projects fail, depending on whose research you trust, this isn't a calculated risk. It's a gamble dressed up in enterprise terminology.
Outcome-based models invert the equation. You pay for results, not promises. The vendor proves value before you commit at scale. If it doesn't work, you don't pay. If it does, you scale confidently.
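A rough expected-value calculation shows how much that inversion is worth. The sketch below assumes a hypothetical $2M commitment and uses the 80 to 95% failure rates cited above; it simplifies by treating outcome-based fees as accruing only when the deployment delivers.

```python
# Back-of-envelope expected waste on a failed AI project under each model.
# The $2M deal size is hypothetical; failure rates come from the research
# cited above.

UPFRONT_SPEND = 2_000_000  # hypothetical capex commitment, paid before results

def expected_waste_capex(p_failure: float) -> float:
    """Capex model: the full upfront spend is sunk if the project fails."""
    return p_failure * UPFRONT_SPEND

def expected_waste_outcome(p_failure: float) -> float:
    """Outcome model: fees accrue only on delivered results, so a failed
    deployment generates (approximately) no wasted spend."""
    return 0.0

for p in (0.80, 0.95):
    print(f"{p:.0%} failure rate: "
          f"capex waste ${expected_waste_capex(p):,.0f}, "
          f"outcome waste ${expected_waste_outcome(p):,.0f}")
# 80% failure rate: capex waste $1,600,000, outcome waste $0
# 95% failure rate: capex waste $1,900,000, outcome waste $0
```

Real contracts have pilot costs and integration effort on both sides, so the outcome model's waste is never literally zero. But the asymmetry holds: most of the downside moves to the party best positioned to prevent it.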
This isn't just better economics. It's better discipline. When payment follows performance, both parties are forced to define what success looks like before writing any code. That clarity alone eliminates most of the ambiguity that kills AI projects. It forces honest conversations about what's actually achievable, what data is actually available, and what outcomes actually matter.
The enterprises winning with AI aren't the ones spending the most. They're the ones who've figured out how to spend only on what works.
See how Unframe's outcome-based approach means you pay for AI that delivers, not AI that promises. Check out how it works.