How to roll out AI implementations

Are there techniques that high-performing organizations use to roll out their AI implementations? Let's explore whether there is a recipe that any organization can follow.


Eighty-eight percent of organizations now use AI, but only about 6% qualify as high performers.[1] That gap is not a technology problem. It is a deployment problem. The same AI tools produce radically different outcomes depending on how organizations roll them out, and the data now shows why: companies that succeed allocate roughly 70% of their AI resources to people, processes, and culture, while the average company spends 93% on technology and 7% on everything else.[2][3] The organizations stuck in pilot purgatory are not using worse AI. They are deploying it backwards.

Key Points

  • 88% of organizations use AI, but only ~6% are high performers and 5% qualify as "future-built," achieving 3.6× higher total shareholder returns.[1][4]

Lessons Learned

  • Treat AI rollout as an operating-model transformation, not a technology implementation. Budget accordingly: for every $1 on the model, plan $3 for change management.[9]

Why does AI adoption keep growing while AI results stay flat?

The concept people keep circling is "pilot purgatory," a term used by MIT researchers to describe the state where organizations run AI experiments that never graduate to production.[13] McKinsey, BCG, and Gartner each describe the same phenomenon with different labels: stuck in experimentation, abandoned after proof of concept, scaling gap. The convergent finding is that nearly two-thirds of organizations sit in this state, running pilots that accumulate cost without delivering returns.[1]

This is different from previous technology waves. ERP, cloud, and mobile had their own adoption problems, but AI has a structural peculiarity that makes conventional deployment playbooks fail. AI is probabilistic. It orchestrates less than 1% of core enterprise processes because companies keep trying to insert probabilistic systems into deterministic architectures.[14] Traditional IT implementations worked because the old system and the new system were both deterministic: predictable inputs, predictable outputs, testable edge cases. AI does not behave this way. A system that is right 95% of the time and wrong 5% of the time in unpredictable ways cannot be deployed with the same methodology as an ERP migration.
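
The difference is easiest to see in testing terms. Below is a minimal sketch, in Python with hypothetical function names and an invented acceptance threshold, contrasting the exact assertions that work for deterministic systems with the statistical acceptance criteria that probabilistic systems force on you:

```python
import random

# Deterministic system: same input, same output. One assertion proves
# correctness, and edge cases are enumerable.
def tax_rate(region: str) -> float:
    return {"US": 0.07, "EU": 0.20}[region]

assert tax_rate("US") == 0.07  # passes forever

# Probabilistic system, stubbed here as a 95%-accurate classifier.
# No single assertion can prove it correct; you can only measure an
# error *rate* over a sample and accept or reject statistically.
def ai_classify(ticket: str) -> str:
    truth = "refund" if "refund" in ticket else "other"
    return truth if random.random() < 0.95 else "misrouted"

sample = ["refund please"] * 1_000
accuracy = sum(ai_classify(t) == "refund" for t in sample) / len(sample)

ACCEPTANCE_THRESHOLD = 0.93  # hypothetical gate value, not a standard
print(f"measured accuracy: {accuracy:.3f}, ship: {accuracy >= ACCEPTANCE_THRESHOLD}")
```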

The cost of getting this wrong is accelerating. The 42% abandonment rate reported by S&P Global is roughly 2.5× the rate from the year prior.[5] BCG's research divides organizations into three tiers: the 5% "future-built" firms generating 1.7× revenue growth and 3.6× shareholder returns, a middle tier experimenting without clear results, and the 60% lagging behind.[4] The future-built firms plan to more than double their AI investment in 2025, creating a self-reinforcing divide. Organizations that delay fixing their deployment methodology do not stay in place. They fall further behind.

Funnel chart showing 88% AI adoption narrowing to approximately 6% high performers and 5% future-built firms, with corresponding value multipliers at each tier.
The AI adoption funnel: 88% of organizations adopt AI, but value concentrates in the top 5-6%.

Credit: Tricky Wombat made with Google Gemini 3.1 Flash Image, Mar 2026

What do the measurements actually show?

The failure rate data is stark, but it requires precision. RAND's widely cited finding that over 80% of AI projects fail comes with a qualifier the secondary sources drop: the RAND researchers write "by some estimates, more than 80 percent of AI projects fail," attributing the figure to external sources, not to their own 65-person interview study.[7] What RAND's own research confirmed is more useful: they identified five root causes, and four of the five are organizational and methodological. Misaligned stakeholders, insufficient data quality, poor requirements definition, and inadequate testing and validation are organizational problems. The fifth, applying AI to problems too difficult for it to solve, is a technical limitation. The pattern is clear even with the technical exception: most AI failure traces to how projects are run, not what technology they use.

McKinsey's data sharpens this. Across 25+ attributes tested, workflow redesign is the single strongest correlate of EBIT impact from AI. High performers are nearly 3× more likely to have redesigned workflows. Yet only 21% of organizations have redesigned any workflows around AI capabilities.[1] The math is simple: almost everyone buys AI tools, almost nobody changes how work gets done, and then almost everyone wonders why the tools did not produce results.

The production gap confirms the pattern from a different angle. ISG found that only 31% of enterprises have even one top use case in production.[15] Menlo Ventures reports that 76% of AI use cases are now purchased rather than built internally, up from 53%.[16] Organizations are buying more and building less, which should make deployment easier. It does not, because deployment methodology, not technology procurement, is the bottleneck.

Side-by-side bar chart comparing approximately 80% AI project failure rate to approximately 40% non-AI IT project failure rate, with five root causes listed beneath the AI bar.
AI projects fail at roughly double the rate of non-AI IT projects. Four of five identified root causes are organizational.

Credit: Tricky Wombat made with Google Gemini 3.1 Flash Image, Mar 2026

What are people on the ground reporting?

The perception gap between leadership and frontlines is the most underappreciated data point in AI deployment. A BCG and Columbia Business School survey of 1,400 U.S.-based employees found that 76% of executives believe their employees are enthusiastic about AI. Only 31% of individual contributors actually are.[11] That 45-point gap does not just mean leadership is out of touch. It means rollout plans built on leadership assumptions about employee readiness are designed for an organization that does not exist.

The downstream effects compound. Gartner's survey of 2,986 employees found that 37% of those with AI access do not use it because their coworkers are not using it.[17] This is a coordination failure: willing employees wait for social proof that never arrives because everyone else is also waiting. Rushed top-down deployment, without peer adoption infrastructure, creates a stalemate among the exact population that was supposed to adopt.

When organizations fail to provide sanctioned tools and adequate training, employees solve the problem themselves. WalkMe's survey found that 78% of employees using AI at work bring unapproved tools.[18] BCG's global survey of over 10,000 employees reported that more than half would use unauthorized AI tools if sanctioned options were inadequate.[19] The training deficit is severe: only 7.5% of employees have received what they consider extensive AI training.[18] Shadow AI is not a failure of employee discipline. It is a symptom of deployment plans that addressed technology procurement but not human readiness.

What does enterprise AI implementation look like in practice?

The case studies are more instructive than the survey data because they show the same technology producing opposite outcomes depending on deployment methodology. Three cases from the same two-year window illustrate the pattern.

Klarna: fast deployment, slow reversal

Klarna's AI assistant processed 2.3 million customer service conversations in its first month after launch. Resolution time dropped from 11 minutes to under 2 minutes. Repeat inquiries fell 25%. The company projected $40 million in annualized savings and publicly declared that AI could do "all of the jobs that we, as humans, do."[20] The deployment methodology was big-bang replacement: scale fast, measure cost reduction, reduce headcount by 40% through attrition. Then quality deteriorated on complex interactions. By mid-2025, Klarna reversed course and began rehiring human agents at competitive hourly rates through an "Uber-type setup" for flexible staffing.[21] The AI worked for high-volume, low-complexity interactions. The deployment methodology, which treated all customer service as a single undifferentiated workload, did not account for the complexity gradient. The technology was not the failure. The rollout design was.

McDonald's and IBM: the pilot that never graduated

McDonald's partnered with IBM in 2021 to deploy AI-powered drive-thru ordering across more than 100 U.S. locations. The pilot ran for approximately two-and-a-half years. It never progressed beyond pilot status. Customers reported the system adding 260 chicken McNuggets to a single order and ringing up a $264.75 charge. Franchisees noted the system achieved about 95% accuracy, which sounds high until you calculate what 5% error rates mean across millions of daily drive-thru transactions.[22] McDonald's ended the partnership in June 2024. The failure was not that the AI could not take orders. It was that the pilot had no defined accuracy threshold, no explicit exit criteria, and no staged gate structure for deciding when to expand or shut down. It ran on momentum, not methodology.
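
That calculation is worth making explicit. A back-of-envelope sketch, where the per-location order volume is an illustrative assumption rather than a reported figure:

```python
# Back-of-envelope: what a ~5% error rate means at drive-thru scale.
# Volumes below are illustrative assumptions, not reported figures.
error_rate = 0.05                     # ~95% accuracy, as reported
orders_per_location_per_day = 1_000   # hypothetical drive-thru volume
pilot_locations = 100                 # size of the IBM pilot

daily_errors = error_rate * orders_per_location_per_day * pilot_locations
print(f"{daily_errors:,.0f} wrong orders per day across the pilot")  # 5,000
print(f"{daily_errors * 365:,.0f} wrong orders per year")            # ~1.8M

# Every one of those is a customer interaction a human must catch and
# repair, which is why "95% accurate" is not a deployment threshold.
```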

PepsiCo: infrastructure before applications

PepsiCo took the opposite sequencing. Before deploying AI applications, the company spent years consolidating data infrastructure, bringing over 6 petabytes of data worldwide into a unified platform with 1,500+ users across 30+ teams.[23] A senior strategy and transformation executive described the philosophy as creating "room to play while having a very focused agenda."[24] Early production results include a 20% increase in factory throughput and 90% of potential plant issues identified through simulation before they occur on the production line.[24] PepsiCo's approach treats data infrastructure as a prerequisite, not a parallel workstream. The AI applications sit on top of a foundation that was already built. The methodology is slower to start and harder to justify in quarterly earnings calls. It also produces measurable outcomes at scale.

What patterns emerge across these cases?

The pattern across all three cases, and across the broader survey data, is consistent: deployment methodology predicts outcomes better than technology selection. Klarna and Octopus Energy (discussed below) both used GPT-based customer service AI. One deployed big-bang and reversed. The other deployed in phases and scaled. McDonald's and Wendy's (discussed below) both attempted drive-thru voice AI. One ran a 2.5-year zombie pilot. The other staged from 1 location to 500+ with explicit gates.

The quantitative research confirms what the cases illustrate. BCG found that only 26% of organizations have moved beyond pilots. The ones that have share a resource allocation pattern: roughly 70% on people and processes, 20% on technology and data, 10% on algorithms.[2] The organizations stuck in pilot purgatory invert this ratio. The technology is a fraction of the investment. Deployment methodology is the dominant cost and the dominant success factor.

Two-column comparison showing Klarna's big-bang replacement approach with reversal outcome versus Octopus Energy's phased co-pilot approach with sustained scaling outcome.
Same AI technology domain, opposite deployment methodologies, opposite outcomes.

Credit: Tricky Wombat made with Google Gemini 3.1 Flash Image, Mar 2026

What separates organizations that succeed with AI from those that do not?

The organizational pattern data is the most consistent finding across every major research firm studying AI deployment. Gartner surveyed 432 organizations and found that 45% of those with high AI maturity keep projects operational for at least three years, compared to 20% of low-maturity organizations.[25] BCG found that 62% of AI leaders have deployed AI across functions, compared to 12% of laggards.[4] The gap is not closing. It is widening.

The MIT research team behind the NANDA report identified the core barrier to scaling, and it is not what most organizations assume. It is not infrastructure, regulation, or talent. It is learning: most tools do not learn from feedback, integrate with existing workflows, or adapt to the organizational context.[26] The report found an enterprise paradox: organizations with the most resources and the highest pilot counts convert to production at the lowest rates. Mid-market companies moved faster. The advantage of having less infrastructure was that there was less to work around.

What do practitioners consistently say about AI rollout?

Across BCG's survey of more than 10,000 employees, frontline AI usage has stalled at 51%, even while leadership usage climbs to 85% and manager usage reaches 80%.[19] The enthusiasm gap is not uniform: employees who have received training and who work alongside peers who also use AI adopt at dramatically higher rates. The 37% peer effect from Gartner's data, where employees do not use available AI because colleagues are not using it, is a coordination problem, not a motivation problem.[17]

The shadow AI data adds urgency. Menlo Security, citing a TELUS Digital survey, reported a 68% surge in shadow generative AI usage.[27] IBM's breach data puts a cost figure on it: organizations with high levels of shadow AI usage pay an average of $670K more per breach.[12] Among the 13% of organizations that reported breaches of AI models or applications, 97% lacked proper AI access controls.[12] Shadow AI is the predictable outcome of putting employees in AI-capable roles without AI-capable tools.

What creates the gap between strong and weak AI outcomes?

The gap compounds in both directions. On the upside, McKinsey studied 440 customer care leaders and found that 40% of AI leaders reported improved customer experience scores, versus 12% of laggards.[28] Both groups access the same AI vendor ecosystem. The difference: 67% of leaders had invested in foundational AI use cases at scale, versus 16% of laggards.[28] Same tools, different results, traceable to deployment depth.

On the downside, the compounding works against you. Organizations that skip structured rollout lose employees to frustration (the 31% enthusiasm rate), lose data to shadow AI (78% bringing unapproved tools), lose money to breach costs ($670K incremental), and lose competitive position as the 5% future-built firms pull further ahead with more than double the planned investment. Each failure reinforces the next. The cost of inaction is not flat. It accelerates.

Two-panel visualization showing leadership versus employee AI enthusiasm on the left and shadow AI consequences on the right.
The perception gap drives the shadow AI problem: executives assume enthusiasm that does not exist, employees self-serve with unapproved tools.

Credit: Tricky Wombat made with Google Gemini 3.1 Pro Image, Mar 2026

Why do conventional IT deployment playbooks fail for AI?

This is where the data converges on one finding. The root cause is not the AI technology. It is how organizations allocate resources around the AI technology. BCG surveyed 1,000 CxOs and found that organizations that have successfully scaled AI beyond pilots follow a consistent resource allocation pattern: roughly 70% of resources go to people and processes, 20% to technology and data, and 10% to algorithms.[2] These are the organizations achieving 50% higher revenue growth and 60% higher total shareholder returns.

The average company does the inverse. Deloitte's chief technology officer told Fortune that enterprise AI spending runs approximately 93% on technology and 7% on people.[3] McKinsey quantified the ratio differently but reached the same conclusion: for every $1 spent developing a model, organizations need $3 for change management. The model itself accounts for only about 15% of total project effort. Scaling it accounts for the rest.[9]
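
Turned into a budgeting rule, the two ratios look like this. A minimal sketch, with a hypothetical $2 million model budget:

```python
# Budget sanity check: McKinsey's $1-on-the-model : $3-on-change-management
# rule versus the ~93/7 technology-heavy split Deloitte describes.
# The $2M model budget is a hypothetical illustration.
model_budget = 2_000_000

change_management = 3 * model_budget  # the $3-per-$1 rule
planned_total = model_budget + change_management
print(f"model share of planned total: {model_budget / planned_total:.0%}")  # 25%

# Run the ratio in reverse: if technology is 93% of spend, the implied
# people budget is a rounding error on the same model investment.
implied_total = model_budget / 0.93
print(f"93/7 split leaves ${implied_total * 0.07:,.0f} for people")  # ~$150,538
```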

This is not an argument that technology does not matter. It does. But technology selection is a solved problem for most enterprise use cases. The vendor ecosystem is mature, purchased solutions succeed at roughly twice the rate of internal builds, and model capabilities improve quarterly without any action from the buyer.[26] What is not solved, and what the data says determines outcomes, is the organizational infrastructure around the technology: governance, training, workflow redesign, champion networks, measurement systems, and change management. Organizations that treat AI rollout as a technology procurement exercise are allocating 93% of resources to the 15% of the problem that is already closest to solved.

Stacked bar chart comparing BCG's 70/20/10 resource allocation for AI leaders to Deloitte's 93/7 split for average companies.
AI leaders and average companies allocate resources in almost exactly opposite proportions.

Credit: Tricky Wombat made with Google Gemini 3.1 Flash Image, Mar 2026

What does a successful AI implementation look like?

Two transformation cases demonstrate the resource allocation principle in practice. They succeed not because they use better AI, but because they invest in everything around the AI.

Octopus Energy deployed a GPT-based customer service tool called Magic Ink. The company did not replace agents. It started with AI-generated draft emails that agents could edit. Then it expanded to call summaries. Then to handling complex cases. Then to authorized actions. At each stage, humans remained in the loop. The results: AI-assisted emails score 80% customer satisfaction versus 65% for human-only emails. The system handles the equivalent of 250 full-time employees' workload, has generated over 9.4 million messages, and has summarized over 6.2 million calls. The company did not lay off a single customer service employee. Instead, it grew the team 30× as it scaled the business to 7.8 million customers. Its Trustpilot rating sits at 4.8 out of 5.[29] The framing: AI handles volume so humans can handle complexity. The methodology is phased co-pilot deployment, not replacement.

DBS Bank in Singapore has been building AI infrastructure for over a decade. The bank established a PURE framework (Purposeful, Unsurprising, Respectful, Explainable) for responsible AI governance, created a Responsible AI Council, and upskilled more than 9,000 employees.[30] The bank now runs over 370 AI use cases with more than 1,500 models in production. Its Customer Service Officer Assistant reduced handle time by 20% with 95%+ accuracy. Its nudge engine has delivered 1.2 billion personalized nudges to over 13 million customers.[30] The economic value trajectory tells the story: S$180 million in 2022, S$370 million in 2023, S$750 million in 2024, and over S$1 billion in 2025.[31] A senior data and transformation executive put it plainly: "The tech is the easiest bit. The processes, the structure, the people, the culture, that's the hard bit."[30] DBS won the "World's Best AI Bank 2025" award from Global Finance, not for its technology stack, but for the organizational infrastructure around it.[32]

Line chart showing DBS Bank's AI economic value rising from S$180M in 2022 to over S$1B in 2025, with methodology milestones annotated along the timeline.
DBS Bank's AI economic value compounded over a decade of infrastructure investment, reaching S$1B+ in 2025.

What are the real financial stakes of getting AI rollout wrong?

The economics cut both ways, and both sides are larger than most organizations estimate.

On the cost side, IBM and the Ponemon Institute found that 1 in 5 data breaches now involves shadow AI, adding an average of $670K to breach costs.[12] The average breach involving shadow AI costs $4.63 million.[33] These are not theoretical risks. They are the downstream costs of the 78% of employees who bring unapproved tools to work because their employer did not provide sanctioned alternatives.

On the return side, EY measured a 14% productivity gain when 81 hours of training was combined with role redesign.[34] Training alone was insufficient. The compound variable was training plus workflow change. Deloitte's survey of 1,854 executives across 14 European and Middle Eastern countries found that among AI ROI leaders, 40% mandate AI training as a requirement, not an option.[35] Prosci's research across 2,668 data points in 101 countries found that projects with excellent change management are 7× more likely to meet their objectives. When senior sponsors actively support the change, success rates rise from 27% to 79%.[36]

The math connects the cost and return sides. An organization that skips structured rollout faces the shadow AI cost ($670K+ per breach), the abandonment cost (42% of initiatives written off), and the opportunity cost (the compounding gap between the 5% future-built firms and everyone else). An organization that invests in the deployment infrastructure, budgeting $3 for change management for every $1 on the model, faces higher upfront costs but captures the 7× improvement in project success and the productivity gains that compound across every employee who actually uses the tools.
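
A rough expected-value sketch makes the comparison concrete. It uses cited figures where they exist (the $670K breach premium, and Prosci's 27% versus 79% success rates as a proxy for weak versus strong change management) and loudly hypothetical placeholders everywhere else:

```python
# Rough expected-value comparison: skip versus fund rollout infrastructure.
# Cited inputs: $670K incremental shadow-AI breach cost; Prosci's 27% vs 79%
# success rates, used here as a proxy for weak vs strong change management.
# Every other number is a hypothetical placeholder.
project_value   = 10_000_000      # hypothetical value if the program succeeds
model_cost      = 1_000_000       # hypothetical model/technology spend
change_mgmt     = 3 * model_cost  # the $3-per-$1 budgeting rule
breach_premium  = 670_000
p_shadow_breach = 0.10            # hypothetical annual breach probability

skip = 0.27 * project_value - model_cost - p_shadow_breach * breach_premium
fund = 0.79 * project_value - model_cost - change_mgmt

print(f"expected value, skipping rollout infrastructure: ${skip:>12,.0f}")  # ~$1.6M
print(f"expected value, funding rollout infrastructure:  ${fund:>12,.0f}")  # ~$3.9M
# Under these assumptions, the $3M change-management line item pays for itself.
```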

PwC's 2026 Global CEO Survey of 4,454 CEOs across 95 countries found that only 12% report AI delivered both cost and revenue benefits.[37] That 12% figure is not evidence that AI does not work. It is evidence that 88% of organizations have not yet built the deployment infrastructure that makes AI work.

Bar chart comparing AI adoption and project success rates across organizations with and without formal strategy, change management, and peer champion programs.
Structured deployment programs produce measurable adoption gains: formal strategy, change management, and peer champions each shift outcomes.

Credit: Tricky Wombat made with Google Gemini 3.1 Flash Image, Mar 2026

How do you fix enterprise AI implementation?

The evidence points to one conclusion: the organizations getting value from AI treat deployment as an operating-model transformation, not a technology purchase. They invest in what goes around the model, not just the model itself. The pipeline, the governance, the workflow redesign, the measurement infrastructure, the training, the champion networks. That is where Tricky Wombat's approach starts.

The pipeline that delivers AI into an organization has to get three things right. Most get all three wrong.

1. Structured rollout over big-bang deployment

Most AI rollouts launch broadly and hope for organic adoption. This is the Klarna pattern: early volume metrics look strong, quality problems emerge later, and reversal is expensive. The correct approach is staged deployment with explicit gates, as Wendy's demonstrated. You start narrow, define measurable success criteria before expanding, and use company-controlled environments as proving grounds before wider rollout. Tricky Wombat builds phased deployment sequences with gate criteria built into the rollout timeline, not appended after problems surface.
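
One way to make "explicit gates" concrete is to encode each stage's scope and exit criteria as data, so expansion decisions are checks against predefined thresholds rather than momentum. A minimal sketch with hypothetical stage names and threshold values:

```python
from dataclasses import dataclass

@dataclass
class Gate:
    """Exit criteria a stage must meet before the rollout expands."""
    min_accuracy: float    # task-level quality floor
    min_adoption: float    # share of eligible users actively using the tool
    max_escalation: float  # share of interactions handed back to humans

@dataclass
class Stage:
    name: str
    scope: int             # e.g., number of locations or teams
    gate: Gate

# Hypothetical staged plan in the Wendy's pattern: one site, then wider.
PLAN = [
    Stage("proving ground", 1,   Gate(0.90, 0.50, 0.20)),
    Stage("regional pilot", 25,  Gate(0.92, 0.60, 0.15)),
    Stage("wide rollout",   500, Gate(0.95, 0.70, 0.10)),
]

def decide(stage: Stage, accuracy: float, adoption: float, escalation: float) -> str:
    g = stage.gate
    if accuracy >= g.min_accuracy and adoption >= g.min_adoption and escalation <= g.max_escalation:
        return "ADVANCE"
    # A defined shutdown condition is exactly what the McDonald's pilot lacked.
    return "HOLD" if accuracy >= g.min_accuracy - 0.05 else "SHUT DOWN"

print(decide(PLAN[0], accuracy=0.91, adoption=0.55, escalation=0.18))  # ADVANCE
```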

2. People infrastructure before technology scaling

Most organizations budget for licenses, compute, and integration. They do not budget for the 81 hours of training that EY found necessary to unlock productivity gains, or for the champion networks that Citi used to drive 70%+ adoption at scale (champions spending 3-5 hours per week supporting peers).[34][10] Tricky Wombat's implementation pipeline allocates resources according to the BCG 70/20/10 pattern by default: people and process investment leads, technology investment follows.

3. Measurement infrastructure that catches problems early

Google Cloud's AI KPI framework identifies three measurement pillars: model quality, system quality, and business impact.[38] Most organizations measure only the first and wonder why business results lag. The 30% adoption threshold identified by Worklytics as the tipping point, after which adoption becomes self-reinforcing, requires active measurement to detect and accelerate through.[39] Tricky Wombat builds monitoring into the pipeline from day one: adoption tracking, workflow impact metrics, and sentinel indicators for shadow AI that signal when sanctioned tools are failing to meet employee needs.
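
The adoption-tracking half of that pipeline can be surprisingly small. A minimal sketch, with invented usage events and the 30% tipping point from the Worklytics finding:

```python
# Hypothetical weekly usage events: (user_id, tool). "sanctioned" is the
# provided AI tool; anything else is a shadow-AI signal.
events = [("u1", "sanctioned"), ("u2", "sanctioned"), ("u3", "shadow_tool_x"),
          ("u4", "sanctioned"), ("u5", "shadow_tool_x"), ("u2", "sanctioned")]
eligible_users = 12  # hypothetical headcount with access to the sanctioned tool

active = {user for user, tool in events if tool == "sanctioned"}
adoption = len(active) / eligible_users

TIPPING_POINT = 0.30  # Worklytics' self-reinforcing adoption threshold
status = "self-reinforcing" if adoption >= TIPPING_POINT else "needs intervention"
print(f"adoption {adoption:.0%} -> {status}")

# Sentinel indicator: a rising shadow share means sanctioned tools are
# failing employee needs (the pattern behind the 78% unapproved-tools figure).
shadow_share = sum(tool != "sanctioned" for _, tool in events) / len(events)
print(f"shadow AI share of usage events: {shadow_share:.0%}")
```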

The system improves over time because each deployment generates data about what works in specific organizational contexts. The 21% of organizations that have redesigned workflows are not starting from scratch. They are iterating on what the measurement infrastructure tells them. That feedback loop, from deployment to measurement to adjustment, is what separates organizations that compound AI value from those that compound AI costs.

The bottom line

The same AI tools, deployed with different methodologies, produce different outcomes. This is the consistent finding across McKinsey, BCG, Gartner, RAND, MIT, and every case study in this analysis. Klarna and Octopus Energy used GPT-based customer service AI. One reversed course and rehired humans. The other scaled to 7.8 million customers with a 4.8-star rating and no layoffs. McDonald's and Wendy's both deployed drive-thru voice AI. One ran a zombie pilot for two-and-a-half years. The other staged to 500+ locations with 86% autonomous accuracy. DBS Bank did not buy better AI than its competitors. It spent a decade building the organizational infrastructure around the AI.

The 5% of organizations capturing outsized returns from AI are not smarter about technology. They are investing in the right ratio: 70% people and process, 20% technology and data, 10% algorithms. The other 95% are spending 93% on the part of the problem that accounts for 15% of project effort.

The organizations that fix this ratio in the next 18 months will compound their advantage. The ones that do not will compound their costs. There is no third option.

References (39)
  1. McKinsey & Company, "The State of AI," November 5, 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
  2. BCG, "Where's the Value in AI?," October 2024. https://www.bcg.com/publications/2024/wheres-value-in-ai
  3. Fortune, "Deloitte CTO: What Really Scares CEOs About AI," December 15, 2025. https://fortune.com/2025/12/15/deloitte-cto-bill-briggs-what-really-scares-ceos-about-ai-human-resources/
  4. BCG, "Are You Generating Value from AI? The Widening Gap," 2025. https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap
  5. S&P Global Market Intelligence, "AI Experiences Rapid Adoption but with Mixed Outcomes," May 2025. https://www.spglobal.com/market-intelligence/en/news-insights/research/ai-experiences-rapid-adoption-but-with-mixed-outcomes-highlights-from-vote-ai-machine-learning
  6. Gartner, "Survey Finds All IT Work Will Involve AI by 2030," October 20, 2025. https://www.gartner.com/en/newsroom/press-releases/2025-10-20-gartner-survey-finds-all-it-work-will-involve-ai-by-2030-organizations-must-navigate-ai-readiness-and-human-readiness-to-find-capture-and-sustain-value
  7. RAND Corporation, "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed," 2024. https://www.rand.org/pubs/research_reports/RRA2680-1.html
  8. Writer/Workplace Intelligence, "Enterprise AI Adoption Survey," 2025. https://writer.com/blog/enterprise-ai-adoption-survey/
  9. McKinsey & Company, "Moving Past Gen AI's Honeymoon Phase: Seven Hard Truths for CIOs," 2024. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/moving-past-gen-ais-honeymoon-phase-seven-hard-truths-for-cios-to-get-from-pilot-to-scale
  10. Lead with AI, "AI Champion Programs," 2026. https://www.leadwithai.co/guides/ai-champion-programs
  11. HBR/BCG Henderson Institute/Columbia Business School, "Leaders Assume Employees Are Excited About AI. They're Wrong," November 26, 2025. https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong
  12. IBM/Ponemon Institute, "Cost of a Data Breach Report," July 30, 2025. https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications,-97-of-which-reported-lacking-proper-ai-access-controls
  13. Fortune, "MIT Report: 95 Percent of Generative AI Pilots at Companies Failing," August 18, 2025. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
  14. ServicePath (citing Forrester), "AI Integration Crisis: Enterprise Hybrid AI," September 2025. https://servicepath.co/2025/09/ai-integration-crisis-enterprise-hybrid-ai/
  15. ISG via BusinessWire, "AI Use Cases Double Though Business Outcomes Lag Ambition," September 15, 2025. https://www.businesswire.com/news/home/20250915236466/en/AI-Use-Cases-Double-Though-Business-Outcomes-Lag-Ambition-ISG-Study
  16. Menlo Ventures, "2025: The State of Generative AI in the Enterprise," 2025. https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/
  17. Gartner, "HR Survey Finds 65 Percent of Employees Are Excited to Use AI at Work," December 16, 2025. https://www.gartner.com/en/newsroom/press-releases/2025-12-16-gartner-hr-survey-finds-65-percent-of-employees-are-excited-to-use-ai-at-work
  18. WalkMe via GlobeNewsWire, "Employees Left Behind in Workplace AI Boom," August 27, 2025. https://www.globenewswire.com/news-release/2025/08/27/3140046/0/en/Employees-Left-Behind-in-Workplace-AI-Boom-New-WalkMe-Survey-Finds.html
  19. BCG, "AI at Work: Momentum Builds but Gaps Remain," 2025. https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain
  20. Klarna, "Klarna AI Assistant Handles Two-Thirds of Customer Service Chats in Its First Month," 2024. https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/
  21. Bloomberg, "Klarna Turns from AI to Real Person Customer Service," May 8, 2025. https://www.bloomberg.com/news/articles/2025-05-08/klarna-turns-from-ai-to-real-person-customer-service
  22. CNBC, "McDonald's to End IBM AI Drive-Thru Test," June 17, 2024. https://www.cnbc.com/2024/06/17/mcdonalds-to-end-ibm-ai-drive-thru-test.html
  23. Databricks, "How PepsiCo Established Enterprise-Grade Data Intelligence Platform," 2025. https://www.databricks.com/blog/how-pepsico-established-enterprise-grade-data-intelligence-platform-powered-databricks-unity
  24. CIO Dive, "PepsiCo Generative AI Pilot Scale Strategy," 2025. https://www.ciodive.com/news/PepsiCo-generative-AI-pilot-scale-strategy-agents/750095/
  25. Gartner, "Survey Finds 45% of Organizations with High AI Maturity Keep AI Projects Operational for at Least Three Years," June 30, 2025. https://www.gartner.com/en/newsroom/press-releases/2025-06-30-gartner-survey-finds-forty-five-percent-of-organizations-with-high-artificial-intelligence-maturity-keep-artificial-intelligence-projects-operational-for-at-least-three-years
  26. MIT NANDA, "AI Report 2025," July 2025. https://www.artificialintelligence-news.com/wp-content/uploads/2025/08/ai_report_2025.pdf
  27. Menlo Security (citing TELUS Digital), "2025 Report Uncovers 68% Surge in Shadow Generative AI Usage," 2025. https://www.menlosecurity.com/press-releases/menlo-securitys-2025-report-uncovers-68-surge-in-shadow-generative-ai-usage-in-the-modern-enterprise
  28. McKinsey & Company, "Building Trust: How Customer Care Leaders Pull Ahead with AI," 2025. https://www.mckinsey.com/capabilities/operations/our-insights/building-trust-how-customer-care-leaders-pull-ahead-with-ai
  29. techUK, "Case Study: Kraken Tech's Generative AI Tool for Customer Service," 2025. https://www.techuk.org/resource/case-study-kraken-tech-s-generative-ai-tool-for-customer-service.html
  30. Google Cloud Blog, "How DBS, Singapore's Largest Bank, Builds AI with Confidence," 2025. https://cloud.google.com/transform/how-dbs-singapores-largest-bank-builds-ai-with-confidence
  31. CNBC, "CEO Southeast Asia's Top Bank DBS Says AI Adoption Already Paying Off," November 14, 2025. https://www.cnbc.com/2025/11/14/ceo-southeast-asias-top-bank-dbs-says-ai-adoption-already-paying-off.html
  32. DBS Newsroom, "DBS Named World's Best AI Bank 2025," 2025. https://www.dbs.com/newsroom/DBS_named_Worlds_Best_AI_Bank_2025
  33. VentureBeat, "IBM: Shadow AI Breaches Cost $670K More," 2025. https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls
  34. CIO.com, "The Davos Reality Check on AI ROI: Why Tools Don't Pay Off Until Work Changes," 2025. https://www.cio.com/article/4145044/the-davos-reality-check-on-ai-roi-why-tools-dont-pay-off-until-work-changes.html
  35. Deloitte, "AI ROI, OBM, RAI," 2025. https://www.deloitte.com/nl/en/issues/generative-ai/ai-roi-obm-rai.html
  36. Prosci, "The Correlation Between Change Management and Project Success," 2023. https://www.prosci.com/blog/the-correlation-between-change-management-and-project-success
  37. PwC, "2026 Global CEO Survey," 2026. https://www.pwc.com/gx/en/news-room/press-releases/2026/pwc-2026-global-ceo-survey.html
  38. Google Cloud, "Gen AI KPIs: Measuring AI Success Deep Dive," November 2024. https://cloud.google.com/transform/gen-ai-kpis-measuring-ai-success-deep-dive
  39. Worklytics, "Top 10 KPIs: AI Adoption Dashboard 2025," 2025. https://www.worklytics.co/resources/top-10-kpis-ai-adoption-dashboard-2025-dax-formulas

By Tricky Wombat

Last Updated: Mar 31, 2026