Most enterprises I talk to have the same story:
- The CEO has an AI mandate.
- There are pilots everywhere.
- And yet, very little has made it into the hardened, boring, P&L-moving part of the business.
The data backs up the discomfort. BCG's latest research finds that only about a quarter of companies are seeing tangible value from AI, and just 4% have truly cutting-edge capabilities at scale. The remaining roughly 74% haven't yet turned AI into measurable impact.
Other analyses put the failure rate of AI projects between 70% and 85%, once you account for initiatives that never leave the lab or get abandoned before they hit production.
At the same time, when AI does reach production, it works: McKinsey reports that 63% of companies using AI in a business unit see revenue uplift, and 44% see cost savings in that unit.
So the problem isn't that AI doesn't work. The problem is that most organizations don't have a reliable way to turn AI into systems that work.
That's why we created the Ultrathink AI Maturity Curve.
Why another AI maturity model?
If you've spent time in this space, you've seen a lot of frameworks already:
- Google Cloud's AI Adoption Framework maps maturity across six themes (Lead, Learn, Access, Scale, Automate, Secure), mostly focused on building cloud-native capabilities.
- Gartner's AI Maturity Model scores organizations on strategy, governance, engineering, data, and more, and shows that high-maturity orgs keep more AI projects running for three years or longer.
These are useful. But most of them share two blind spots:
- They treat capabilities as the destination, not as a means to business impact.
- They ignore the delivery & commercial model—the "Execution Gap" between strategy decks and production systems.
We built our curve because we kept seeing the same pattern:
- Huge investments in strategy work and pilots.
- A Trust Deficit in the consulting model (you get billed by the hour, not by the outcome).
- Very few systems that anyone would bet a quarterly KPI on.
Our version is designed to answer one blunt question:
How reliably can you turn AI into production systems that move your P&L—again and again?
Everything else—skills, tools, data, governance—is in service of that.
What makes the Ultrathink curve different
Three big things:
1. Business-impact first, capabilities second
We don't start by asking "How many models are you running?" or "How AI-ready is your data?" We start with:
- Which workflows drive your revenue, cost, and risk?
- Where would AI actually change those numbers if it worked?
- How many of those workflows are supported by reliable AI systems today?
Capabilities matter—but only as enablers of that map.
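To make that map concrete, here is a minimal sketch of how you might score workflows in code. Everything below (the field names, the numbers, the coverage metric) is an illustrative assumption, not Ultrathink tooling:

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    """One P&L-relevant workflow, scored for potential AI impact."""
    name: str
    value_at_stake: float        # annual revenue, cost, or risk exposure ($)
    ai_leverage: float           # 0-1 estimate: how much AI could move that number
    in_production: bool = False  # served by a reliable AI system today?

def impact_coverage(workflows: list[Workflow]) -> float:
    """Share of AI-addressable value already covered by production systems."""
    addressable = sum(w.value_at_stake * w.ai_leverage for w in workflows)
    covered = sum(w.value_at_stake * w.ai_leverage
                  for w in workflows if w.in_production)
    return 100 * covered / addressable if addressable else 0.0

workflows = [
    Workflow("claims triage", 12_000_000, 0.30, in_production=True),
    Workflow("invoice matching", 5_000_000, 0.50),
    Workflow("churn outreach", 8_000_000, 0.20),
]
print(f"{impact_coverage(workflows):.0f}% of addressable value covered")
```

The point is the shape of the question: value at stake times plausible AI leverage, and how much of that is actually served by something running in production.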
2. The modern AI application stack is explicit
Most frameworks talk vaguely about "platforms." We don't.
We model maturity against a concrete Modern AI Application Stack:
- Foundation (compute & models)
- Data & Persistence
- MLOps & Automation
- Application & Logic (your workflows, your business rules)
- Access & Presentation (UIs, APIs, integrations)
With cross-cutting security, governance, observability, and continuous evaluation.
Maturity is partly a question of how much of this stack is real, unified, and reusable across use cases, not how many SaaS tools you've bought.
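One rough way to test that is to treat the stack as a checklist and ask, for each use case, which layers run on a shared platform versus a one-off build. The layer names follow the list above; the use cases and values are made-up assumptions:

```python
from enum import Enum

class Layer(Enum):
    FOUNDATION = "Foundation"
    DATA = "Data & Persistence"
    MLOPS = "MLOps & Automation"
    APPLICATION = "Application & Logic"
    ACCESS = "Access & Presentation"

# True = this use case runs the layer on the shared platform;
# False = it was built (or bought) as a one-off. Values are invented.
use_cases = {
    "support copilot": {Layer.FOUNDATION: True, Layer.DATA: True,
                        Layer.MLOPS: False, Layer.APPLICATION: True,
                        Layer.ACCESS: True},
    "fraud scoring":   {Layer.FOUNDATION: True, Layer.DATA: False,
                        Layer.MLOPS: False, Layer.APPLICATION: False,
                        Layer.ACCESS: False},
}

# Per-layer reuse rate: a crude proxy for "real, unified, and reusable".
for layer in Layer:
    shared = sum(flags[layer] for flags in use_cases.values())
    print(f"{layer.value:24} shared by {shared}/{len(use_cases)} use cases")
```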
3. We challenge the "AI-ready data or bust" myth
A lot of vendors still tell executives: "You're not AI-ready because your data isn't AI-ready." That conveniently turns every AI initiative into a five-year data program. It's also not how the leaders behave.
Even high-maturity organizations say data quality and availability are challenges—but they still manage to keep AI systems in production for years by choosing the right use cases and building robust governance and engineering around them.
Our view:
- You don't need perfect data to start.
- You do need a stack and a partner that can contain the mess, make the assumptions explicit, and improve data in lockstep with delivering value (see the sketch below).
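One way to make those assumptions explicit is a lightweight data contract at the boundary of each AI workflow, so imperfect data fails loudly instead of silently. A minimal sketch, with hypothetical field names and rules:

```python
from dataclasses import dataclass

@dataclass
class InvoiceRecord:
    """A hypothetical upstream record feeding an AI workflow."""
    invoice_id: str
    amount: float
    currency: str

# Explicit, reviewable assumptions about the messy source data.
ASSUMED_CURRENCIES = {"USD", "EUR", "GBP"}

def violated_assumptions(r: InvoiceRecord) -> list[str]:
    """Return every assumption this record breaks (empty list = clean)."""
    problems = []
    if not r.invoice_id:
        problems.append("invoice_id is missing")
    if r.amount <= 0:
        problems.append(f"amount must be positive, got {r.amount}")
    if r.currency not in ASSUMED_CURRENCIES:
        problems.append(f"unexpected currency {r.currency!r}")
    return problems

# Quarantine violating records for review instead of silently feeding
# them to the model; clean records flow on, and the contract tightens
# as data quality improves alongside delivery.
batch = [InvoiceRecord("INV-1", 120.0, "USD"), InvoiceRecord("", -5.0, "XYZ")]
clean = [r for r in batch if not violated_assumptions(r)]
print(f"{len(clean)} clean, {len(batch) - len(clean)} quarantined")
```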