
If you’ve sat through any executive AI conversation in the last six months, somebody almost certainly opened it by quoting MIT’s 95% number. It is a useful number, but it does not tell the full story on its own. When viewed alongside research from RAND, Gartner, BCG, McKinsey, and CIO.com, the picture becomes more useful and much less about technology.

The numbers everyone is quoting

I’ve stopped counting how many AI conversations I’ve been in this year that opened with the same statistic. MIT’s NANDA initiative published it last summer in their State of AI in Business 2025 report: 95% of generative AI pilots return zero measurable P&L impact. RAND, working from a different dataset, finds that more than 80% of AI projects fail outright, roughly twice the rate of regular IT work. Gartner expects companies to abandon 60% of AI projects through 2026 because the data underneath isn’t ready and predicts another 40% of agentic AI projects will be canceled by 2027. BCG, after surveying more than 1,250 firms, says only about 5% are pulling real value out of AI. McKinsey’s annual report puts the financial reality in starker numbers: 88% of organizations use AI somewhere, but only 39% see any EBIT impact, and just ~6% qualify as “AI high performers.”

Different studies. Different methods. Same conclusion.

The most recent piece in CIO.com, “CIOs struggle to find clarity in their organizations’ AI strategies,” names the problem squarely. In CIO.com’s 2026 State of the CIO survey, 31% of CIOs flagged a lack of clarity on corporate AI strategy as a top challenge. Twenty-four percent can’t say which department even owns AI ROI. Forty percent don’t have the in-house expertise. The headlines say AI is failing. The data says many organizations are failing to set the work up correctly.

That’s the part worth holding onto. The 5% who are succeeding aren’t running better models. They’re running stronger operating models around the same AI tools available to everyone. Once that distinction is clear, the playbook becomes more focused and more repeatable.

The failure landscape, in one place

It’s worth laying these studies out side by side, because each one is looking at a slightly different slice of the same problem and reaching the same verdict.

Study: MIT NANDA, State of AI in Business 2025
Headline number: 95% of GenAI pilots return zero measurable P&L impact
What it indicates: Failure traces back to a “learning gap”: generic tools don’t adapt to enterprise workflows. Vendor partnerships succeed about 67% of the time; internal builds about a third as often. Most budgets target sales and marketing, but the highest ROI sits in back-office automation.

Study: RAND, Root Causes of Failure for AI Projects
Headline number: >80% of AI projects fail, about 2x the rate of non-AI IT projects
What it indicates: 33.8% are abandoned before production. 28.4% ship but deliver no value. 18.1% deliver some value, but not enough to justify the cost. Only ~19.7% meet or beat their goals. Top root cause: business leaders set the project up wrong from day one.

Study: Gartner, multiple 2025–2026 reports
Headline number: 60% of AI projects to be abandoned through 2026; 40%+ of agentic AI canceled by 2027
What it indicates: Only 28% of AI use cases in infrastructure & operations fully meet ROI expectations. Two-thirds of organizations don’t have data management practices ready for AI. Most failures come back to poor data quality, unclear value, or weak risk controls.

Study: BCG, The Widening AI Value Gap (Sept 2025)
Headline number: Only ~5% of firms capture meaningful value; 60% see little to none
What it indicates: “Future-built” firms invest 2x what laggards do and get 2x the revenue lift and 40% more cost savings. They don’t just deploy tools; they redesign the workflow and reskill the people doing the work.

Study: McKinsey, State of AI 2025
Headline number: 88% use AI in at least one function; ~6% are AI high performers
What it indicates: Only 39% see any EBIT impact, mostly under 5%. Two-thirds of companies are still in pilot or PoC mode without an enterprise scaling plan. The $2.6–$4.4T value pool sits in customer ops, marketing & sales, software engineering, and R&D, but only if the workflow is rewired.

Study: CIO.com, 2026 State of the CIO survey
Headline number: 31% of CIOs say their company has no clear AI strategy
What it indicates: 24% don’t know who owns AI ROI. 40% lack in-house AI talent. 32% have no clear ROI metrics. 28% face too many competing AI demands. CIOs, COOs, CFOs, CROs, and CHROs all claim ownership of AI strategy in the same companies.

What jumps out reading these back-to-back is how little disagreement there is. The numbers move because the methodologies do. The list of explanations doesn’t. Strategy isn’t clear. Ownership is murky. Data isn’t ready. Workflows haven’t been redesigned. The money is going to the wrong use cases. The same five problems show up everywhere.

Why most AI projects fail

The studies argue about percentages but agree on causes. Three causes keep showing up.

Nobody owns the strategy.

The CIO.com piece quotes Aman Mahapatra, CIO at Tribeca Softech, on what he calls a “nearly universal” pattern: “Every C-suite executive believes AI is strategic, but nobody has agreed on who owns the strategy.” His follow-up is the line I keep coming back to: “‘Distributed’ is a polite word for ‘nobody’.”

Shubhradeep Guha at Publicis Sapient frames the same problem from another angle: “A lot of companies do not have an AI strategy as much as they have an AI activity list.” Lots of pilots. Lots of demos. Not a lot of agreement on which decisions AI is supposed to improve, or how anyone will know it worked. RAND’s research lands in the same place from a different door — the most common root cause of AI failure is leadership setting the project up wrong from day one.

AI is treated as a tech project instead of a business priority.

Guha’s sharpest line is also his most accurate: “AI strategy dies quickly when it is treated as a tech project instead of a business priority.” MIT’s GenAI Divide research shows what that looks like in spend. More than half of GenAI budgets go to sales and marketing tools that demo well, even though MIT found the biggest measurable ROI in back-office automation: replacing BPO contracts, cutting agency spend, streamlining ops. The pilots that attract the most attention are not always the initiatives most likely to deliver measurable value.

Gartner reinforces this from the other side. About half of GenAI projects get terminated after PoC because the business value never crystallized or the costs got out of hand. McKinsey shows the macro version: two-thirds of firms still in pilot mode, no enterprise scaling plan attached.

The data and the workflow aren’t ready.

Underneath the strategy and ownership problems is a more concrete one. Most enterprises don’t have AI-ready data, and they haven’t redesigned the workflows the AI is supposed to live in. Gartner says 63% of organizations either don’t have the right data management practices for AI or aren’t sure if they do. MIT calls the symptom “the learning gap” — generic AI tools that perform well for individual users often stall in enterprise environments because they don’t adapt to established workflows. BCG’s leaders distinguish themselves not by buying more models, but by reshaping the workflow around them and reskilling the people inside it.

“AI strategy dies quickly when it is treated as a tech project instead of a business priority.”  — Shubhradeep Guha, Publicis Sapient

For BlueAlly, this is where AI strategy becomes practical. The goal is not to help organizations launch more pilots. It is to help clients identify the right business outcomes, prepare the data and operating model, and execute against a portfolio that can be measured.

What good AI strategy looks like

Strategy is where most companies lose this game before execution even starts. Read the research as one body of work and a handful of attributes show up repeatedly among the firms getting it right.

The strategy belongs to the CEO.

Mahapatra’s argument is that AI is too cross-cutting for any single functional leader to arbitrate. “AI touches strategy, operations, risk, talent, and culture simultaneously. No single functional leader has the authority to arbitrate across all of those.” The CIO translates that ambition into an execution model. The CFO validates returns. Each initiative gets a named business owner, not the CIO or CFO, who is accountable for the financial outcome. When that ownership is missing, the project drifts.

It’s anchored to the earnings plan, not innovation theater.

Successful CIOs, Mahapatra says, treat AI investments the way the company evaluates any other capital project, “by tying AI initiatives to the earnings plan before a line of code is written.” He calls this “shockingly rare.” Most firms still budget AI as innovation spend, “corporate shorthand for ‘we do not require this to pay for itself.’” RAND’s data tells you what happens when you skip this step: more than a quarter of all AI projects (28.4%) ship but deliver no value, because the value was never specified up front.
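
To make “tied to the earnings plan” concrete, here is a minimal sketch of the kind of capital-project gate that discipline implies. Every number, name, and threshold in it is a hypothetical illustration, not a figure from any study cited here.

```python
# Hypothetical capital-project gate for an AI initiative: the bet only
# proceeds if the P&L impact specified up front pays back its cost within
# an agreed window. All figures are invented for illustration.

def payback_months(one_time_cost: float,
                   monthly_run_cost: float,
                   monthly_pnl_impact: float) -> float:
    """Months until cumulative net benefit covers the up-front cost."""
    net_monthly = monthly_pnl_impact - monthly_run_cost
    if net_monthly <= 0:
        # Negative unit economics: RAND's "ships but delivers no value" bucket.
        return float("inf")
    return one_time_cost / net_monthly

# Example: a back-office automation bet that replaces part of a BPO contract.
months = payback_months(
    one_time_cost=400_000,      # build + integration (assumed)
    monthly_run_cost=25_000,    # inference, orchestration, support (assumed)
    monthly_pnl_impact=90_000,  # BPO spend avoided per month (assumed)
)
print(f"Payback in {months:.1f} months")  # ~6.2 months on these numbers
```

If no one can fill in that third argument before the project starts, that is the gap Mahapatra is pointing at.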

Fewer bets, deeply funded.

Mahapatra’s counterintuitive advice to CIOs is to say no to most AI proposals. “The CIOs getting this right fund fewer initiatives with more resources, clearer financial targets, and direct business ownership from day one.” BCG’s data backs him up. The leaders concentrate AI investment in a handful of high-leverage functions — R&D, sales and marketing, manufacturing, IT — and outpace laggards on revenue and cost savings precisely because they refused to spread the budget thin.

Built to evolve.

Rishi Kaushal, CIO at Entrust, points out that the underlying technology is shifting month to month. “It takes you time to figure out if that’s good enough to get going, so the strategy is not a one-and-done deal. This is something that must evolve as AI shifts.” Good AI strategies have a re-evaluation cycle baked in, usually quarterly, and they keep the durable bets (the business outcomes you’re chasing) clearly separate from the perishable ones (which model, which vendor, which architecture this quarter).

Cross-functional by construction.

Kaushal again: “This strategy falls apart if you cannot enable the AI capabilities, and the only way you can enable AI capabilities at scale is if you leverage the talent you have across the organization.” Translation: the chief HR officer endorses the plan and underwrites the training. Legal and risk understand the exposure and the controls. The CIO is wired into the LOB conversations that decide what the AI is for. Skip those conversations and you ship a tool nobody trusts and few people use. In my opinion, this is one of the most commonly underestimated barriers to AI adoption.

What good AI execution looks like

If strategy is where companies fail quietly, execution is where they fail expensively. The good news, again, is that the research points at a tight set of disciplines.

Buy and partner before you build.

MIT’s number on this is one of the cleanest signals in any of these reports: vendor partnerships and specialized tools succeed about 67% of the time, internal builds about a third as often. Most enterprises don’t have a structural advantage in foundation-model engineering. They do have an advantage in their proprietary data and workflows. Channel your build effort there. Partner everywhere else.

Aim at the back office before the front office.

More than half of GenAI budgets are pointed at sales and marketing, but MIT found the highest measurable ROI in back-office automation. These initiatives may be less visible, but they often provide a clearer path to measurable value. Winning organizations follow the business case, not the demo.

Treat data readiness as an entry requirement, not a parallel workstream.

Gartner’s 60%-abandoned-through-2026 number is blunt for a reason. If the data, governance, and access controls aren’t in place, the project doesn’t start. Calling AI-readiness “a parallel program” is one of the more reliable ways I’ve seen to demolish the timeline.

Redesign the workflow. Reskill the people.

BCG’s leaders are distinguished by what they do after the model is in place. They restructure the work around it, and they invest in the humans working alongside it. Drop a model into an unchanged process and you get the ship-and-no-value outcome RAND quantifies. Workflow redesign and reskilling are slower. They’re also the actual mechanism by which AI converts into EBIT.

Name a business owner per initiative — and write it down.

Mahapatra’s prescription is precise. Every AI initiative is connected to a business owner, not the CIO or CFO, who is accountable for the financial outcome. “The CIO owns the technical platform and governance. The CFO validates returns against the balance sheet. The CEO arbitrates when priorities conflict. Joint ownership works, but only when each party’s specific accountability is written down, reviewed quarterly, and tied to compensation. Otherwise, joint ownership becomes shared neglect.”

Watch the cost curve from day one.

Gartner’s prediction that 40%+ of agentic AI projects will be canceled by 2027 cites escalating costs as a primary driver. Inference, orchestration, and integration costs grow non-linearly when usage scales. Wire cost telemetry into AI workloads from the first deployment, and run unit economics (cost per resolved ticket, per generated draft, per closed task) alongside accuracy and net promoter score (NPS). One without the other is how a useful pilot becomes an unaffordable program.
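
As a concrete illustration of what that telemetry might look like, here is a minimal Python sketch that computes unit economics for a hypothetical support-ticket workload. The token prices, field names, and volumes are all assumptions made up for this example, not figures from the studies above.

```python
from dataclasses import dataclass

# Illustrative per-token prices; real prices vary by model and vendor.
PRICE_PER_1K_INPUT_TOKENS = 0.005   # USD, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # USD, assumed

@dataclass
class WorkloadMonth:
    """One month of telemetry for an AI-assisted workflow (hypothetical fields)."""
    input_tokens: int        # prompt tokens sent to the model
    output_tokens: int       # completion tokens returned
    integration_cost: float  # orchestration, retrieval, middleware, support (USD)
    tickets_resolved: int    # the business outcome actually delivered

    def inference_cost(self) -> float:
        return (self.input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
                + self.output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS)

    def cost_per_resolved_ticket(self) -> float:
        total = self.inference_cost() + self.integration_cost
        return total / self.tickets_resolved if self.tickets_resolved else float("inf")

# A pilot month vs. a scaled month: ticket volume grows roughly 16x, but
# integration overhead grows faster, so the unit cost can rise instead of fall.
pilot = WorkloadMonth(input_tokens=4_000_000, output_tokens=1_000_000,
                      integration_cost=2_000, tickets_resolved=1_500)
scaled = WorkloadMonth(input_tokens=80_000_000, output_tokens=20_000_000,
                       integration_cost=45_000, tickets_resolved=24_000)

for label, month in (("pilot", pilot), ("scaled", scaled)):
    print(f"{label}: ${month.cost_per_resolved_ticket():.2f} per resolved ticket")
```

On these invented numbers the pilot runs about $1.36 per resolved ticket and the scaled program about $1.90. Watching that ratio month over month is how a team catches the useful-pilot-to-unaffordable-program curve before it sets in.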

Why some AI projects succeed

Pull all of it together and the success pattern is surprisingly small. The 5% capturing real value tend to do roughly the same handful of things at the same time:

  • They start from a business decision, not a technology. BCG’s leaders pick three to five high-value functions and rebuild the work around AI inside them, instead of letting every team run its own pilot.
  • They put the CEO behind the strategy and a named operator behind every initiative. The CIO.com survey shows how rare this remains. The firms that follow this rule stop arguing about ownership and start measuring outcomes.
  • They invest at scale in the few bets they do make. BCG’s future-built firms spend more than 2x what laggards spend, and that concentration is what produces the 2x revenue lift and 40% cost savings advantage.
  • They build only what’s proprietary, and partner for the rest. MIT’s finding that vendor-led builds succeed about 67% of the time, while internal builds succeed about a third as often, is a hard signal to ignore.
  • They redesign the work, not just the tooling. This is the single biggest behavioral difference between leaders and laggards in both BCG’s data and McKinsey’s scaling-gap analysis.
  • They tie AI to the earnings plan and review it every quarter. Mahapatra’s line that this “sounds obvious but is shockingly rare” is the strongest single explanation for the gap between the 5% and the 95%.

None of this requires a frontier model. Most of it is organizational discipline that pre-dates AI by decades — capital allocation, accountability, workflow design, talent investment. The reason 95% of pilots are failing isn’t that AI is uniquely hard. It’s that AI is exposing how many organizations have been getting these basics wrong on every transformation and have been hiding it behind quarterly software wins.

The bottom line

The MIT 95%, the RAND 80%, the BCG 5%, the Gartner abandonment forecasts, and the McKinsey scaling gap aren’t five separate stories. They’re one story told from different angles. AI fails when it’s treated as a technology pilot floating outside the operating model. It works when it’s funded, governed, and measured like any other capital investment that must pay for itself.

The CIO.com piece arrives at the same conclusion through the daily reality of the CIO’s job. Guha’s line is the one to take with you: “Too many organizations are confusing experimentation with strategy.” The fix isn’t a bigger AI budget. It’s a smaller portfolio of bets, owned by named business leaders, anchored to the earnings plan, executed on AI-ready data, and reviewed at a cadence that matches how fast the technology underneath is changing.

That’s what good AI strategy looks like. That’s what good AI execution looks like. And that, more than any model choice, is why some projects keep working when most don’t.

How BlueAlly Helps Clients Move From AI Activity to AI Strategy

We built our AI services around the same conclusion the research keeps reaching: get the strategy right before you start executing. In practice that means we don’t lead with a use case, a model, or a platform. We lead with a workshop.

Our AI Strategy Workshop has one job: make the strategy real on paper. We sit with the leadership team and work through a specific set of questions. Which decisions and processes is AI meant to improve? What does success look like in the earnings plan? Who is the named business owner for each initiative? How does any of it sit alongside the company’s data, security, and operating model? The output is a short, defensible portfolio of AI bets, each one tied to a business outcome and a named person.

From the workshop we move into structured analysis, not deployment. We assess data readiness against the specific use cases the workshop produced, not in the abstract. We map the workflows AI is supposed to live inside and identify the redesign work that must happen for the AI to convert into value. We size cost-to-serve at scale, not just at PoC. And we surface the cross-functional dependencies across HR, legal, risk, and the LOBs that determine whether a deployment will hold up once it leaves the lab.

Only then do we move to execution. Build, partner, or buy decisions get made against the strategy, not the other way around. Every initiative has a named business owner before a line of code is written. The portfolio gets reviewed every quarter against the earnings plan and against where the technology has moved underneath us.

It isn’t the fastest path to “we have an AI.” It is, based on every study referenced in this piece, the path that leads to AI you can point to in the EBIT line. If your organization is sitting on an AI activity list rather than an AI strategy, or if you’re honestly not sure which one you have, that’s the conversation we’d like to be in.


Ready to turn AI activity into AI strategy?

BlueAlly helps organizations move beyond scattered AI pilots with a practical, outcome-driven approach. Our AI Strategy Workshop brings the right leaders together to define business priorities, assess readiness, identify ownership, and build a focused portfolio of AI initiatives tied to measurable value.

Let’s start the conversation.