AI in banking

4 barriers to AI execution in commercial banking and how to overcome them

28 April 2026
9 mins read

Commercial banking feels the AI implementation challenge more acutely than any other segment. Four structural barriers explain why most pilots stall before they scale - and each one is solvable.

This blog is based on our full report, Pragmatic AI Strategies for Commercial Bank Growth in 2026, which covers four high-impact AI use cases across onboarding, relationship management, fraud prevention, and payments.

According to Deloitte, AI implementation within banks is often challenged by "brittle and fragmented data foundations", mounting compliance demands, and outdated legacy systems. Commercial banking feels that more acutely than any other segment.

Unlike retail, commercial banking runs on complex multi-entity hierarchies, granular credit authorities, and multi-stakeholder approval cycles. Banks trying to force off-the-shelf AI models through that infrastructure are discovering why most pilots stall before they scale.

Four structural barriers explain why commercial banking specifically struggles to move from AI ambition to AI at scale. We break each one down below, along with ways to address it.

Barrier 1: the data accessibility gap in commercial banks

In most commercial banks, data is scattered across decades of accumulated systems. Those include core banking platforms, CRM tools, credit engines, treasury systems, and compliance databases that were never designed to talk to each other.

According to Deloitte's 2024 Banking & Capital Markets Data and Analytics Survey, more than 90% of banking data users report that the data they need is often unavailable or takes too long to retrieve, and 81% cite data quality as a top challenge.

As a result, customer records exist in multiple versions across multiple systems, product data carries inconsistent definitions across lines of business, and transaction history sits in ledgers that predate modern data architecture.

When an AI model is trained or run on fragmented, inconsistent data, its outputs reflect that fragmentation: recommendations that business users can't trust, audit trails that regulators can't follow, and decisions that can't be explained or defended. This is why so many well-funded AI pilots fail to graduate to production.

How to overcome the data accessibility gap

The prerequisite for AI at scale is a single, continuously updated view of every client - one that connects transaction history, credit exposure, relationship signals, and compliance data in real time.

Without it, every AI model deployed on top of fragmented infrastructure inherits that fragmentation, producing outputs that reflect the gaps in the data rather than the reality of the client relationship.

Banks that treat data unification as a pre-condition for AI - rather than a parallel workstream - close the accuracy gap and move from pilot to production far faster. When every system draws from the same source of truth, model outputs become trustworthy, audit trails become defensible, and the data accessibility problem stops compounding at every layer of the stack.
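To make the "single source of truth" idea concrete, here is a minimal sketch of the unification step. Everything in it is illustrative: the system names, record fields, and ID mapping are hypothetical, and in practice the canonical ID mapping is produced by an entity-resolution process rather than hand-written.

```python
from typing import Any

# Hypothetical extracts from three siloed systems, each keyed by its own local ID
core_banking = {"C-001": {"client": "Acme Corp", "balance": 1_200_000}}
crm = {"acme-corp": {"client": "Acme Corp", "rm_owner": "J. Smith"}}
credit_engine = {"ACME01": {"client": "Acme Corp", "exposure": 850_000}}

# The crux of unification: every system's local ID maps to one canonical
# client ID (entity resolution would build this table in a real bank)
id_map = {
    ("core", "C-001"): "client-42",
    ("crm", "acme-corp"): "client-42",
    ("credit", "ACME01"): "client-42",
}

def unify(sources: dict[str, dict[str, dict[str, Any]]]) -> dict[str, dict[str, Any]]:
    """Fold every system's records into one view per canonical client ID."""
    unified: dict[str, dict[str, Any]] = {}
    for system, records in sources.items():
        for local_id, fields in records.items():
            canonical = id_map[(system, local_id)]
            unified.setdefault(canonical, {}).update(fields)
    return unified

view = unify({"core": core_banking, "crm": crm, "credit": credit_engine})
# view["client-42"] now holds balance, rm_owner, and exposure in one record
```

The point of the sketch is the shape of the fix, not the mechanics: once every record resolves to one canonical client ID, any model or audit trail built on top draws from the same view.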

Still running AI pilots that don't make it to production? Talk to a Backbase specialist and find out what's holding you back

Barrier 2: the AI literacy gap

According to BCG, while 89% of organizations say their workforce needs improved AI skills, only 6% have begun upskilling in "a meaningful way." The tools are being purchased faster than the people using them can keep up. Meanwhile, Wessel Oosthuizen of Deloitte Africa noted on the Banking Reinvented podcast that 84% of companies haven't redesigned a single job around AI.

When the workforce can't engage confidently with AI, one of two things happens: the tools get abandoned, or they get misused. In commercial banking, that looks like relationship managers handed AI-generated client briefings they don't know how to interrogate, and credit officers asked to act on model outputs they can't explain to a regulator or a client.

Additionally, fear plays a larger role than most banks acknowledge. According to KPMG's November 2025 survey of over 2,100 U.S. workers, 52% now fear job displacement due to AI - nearly double the level from the previous year. In an environment where RMs have spent decades building expertise and client trust, that anxiety is real, and it creates passive resistance that no software rollout plan accounts for.

Banks whose senior leadership openly engages with AI tools, challenges their outputs, and talks transparently about where AI helps and where it doesn't signal to the rest of the organization that this is safe ground to explore.

Related: Separating hype from reality for AI in banking, with Marcus Martinez, Kanishka Bhattacharya, and Dave Murphy

How to overcome the AI literacy gap

Role-specific training closes the gap faster than general AI literacy programs. Show RMs how AI prepares a client briefing. Show credit analysts how to stress-test a model output. Show operations staff when to escalate an AI decision. Keep humans accountable for the judgment call, with AI doing the preparation. The goal is confident collaboration - not passive consumption of outputs nobody fully trusts.

Barrier 3: the risk and trust barrier

Commercial banking has higher stakes than retail. Credit decisions carry direct financial and legal consequences, client-facing AI interactions are subject to regulatory scrutiny, and a single misfire - whether a hallucinated credit ratio or a compliance contradiction - creates real liability.

Hallucinations are the most visible flashpoint. When a model generates incorrect financial ratios, misinterprets credit reports, or produces false signals about bankruptcy risk, the consequences can trigger regulatory penalties and client disputes.

Banks understand this risk; the problem is that most haven't built governance frameworks to manage it before deploying. Even though 60% of companies are considering agentic AI, over half have yet to undertake any form of risk assessment. That gap between ambition and governance discipline is where liability accumulates.

That complexity is compounded by unclear internal ownership. When AI initiatives are driven alternately by technology teams, compliance functions, and business units without a unified mandate, governance fragments.

How to overcome the risk and trust barrier

The answer to governance fragmentation isn't more process - it's clearer architecture. Decision Authority needs to be embedded directly into AI workflows from the start, defining what AI can act on autonomously, what requires human review, and what stays firmly with a credit officer or compliance team. AI-driven workflows built this way are auditable, explainable, and defensible to regulators across jurisdictions.
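As a sketch of what "embedded decision authority" can mean in practice, the tiering can be expressed as an explicit routing policy that every AI action passes through. The action kinds, thresholds, and tier names below are hypothetical illustrations, not Backbase's implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Authority(Enum):
    AUTONOMOUS = "autonomous"      # AI may act without review
    HUMAN_REVIEW = "human_review"  # AI proposes, a human approves
    HUMAN_ONLY = "human_only"      # decision stays with a credit officer

@dataclass(frozen=True)
class AIAction:
    kind: str          # e.g. "draft_briefing", "flag_transaction", "approve_credit"
    amount: float      # monetary exposure of the action, 0 if none
    confidence: float  # model confidence score in [0, 1]

# Hypothetical policy: which action kinds may ever run unattended, and the
# exposure/confidence thresholds that force escalation to a human
AUTONOMOUS_KINDS = {"draft_briefing", "flag_transaction"}
REVIEW_THRESHOLD_AMOUNT = 50_000.0
MIN_CONFIDENCE = 0.9

def decision_authority(action: AIAction) -> Authority:
    """Route an AI action to the tier that must own the decision."""
    if action.kind == "approve_credit":
        return Authority.HUMAN_ONLY  # credit decisions stay with a human, always
    if action.kind not in AUTONOMOUS_KINDS:
        return Authority.HUMAN_REVIEW
    if action.amount > REVIEW_THRESHOLD_AMOUNT or action.confidence < MIN_CONFIDENCE:
        return Authority.HUMAN_REVIEW  # large exposure or low confidence escalates
    return Authority.AUTONOMOUS
```

Because the policy is a single explicit function rather than logic scattered across systems, every routing decision can be logged against it - which is what makes the resulting workflow auditable and defensible to a regulator.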

When risk committees can see exactly where human judgment sits in every AI-assisted decision, approval stops being a blocker and becomes a formality.

Become the trusted partner for commercial clients with Backbase

Barrier 4: overreliance, skill atrophy, and eroding trust in AI

This is the barrier most banks haven't put on their risk registers yet.

According to Microsoft Research's Literature Review on Overreliance on AI, "overreliance on AI occurs when users start accepting incorrect AI outputs," leading to errors that ultimately erode trust in AI systems. In commercial banking, those errors surface in credit committees, client meetings, and regulatory reviews, where the consequences are financial, legal, and reputational.

There's also skill atrophy. RMs who defer to AI-generated briefings without stress-testing them lose the instinct to challenge. Credit teams that accept model outputs without interrogating assumptions gradually lose the interpretive capability that differentiates a commercial bank from a transaction processor.

By 2026, Gartner forecasts that 50% of organizations will introduce "AI-free" assessments to address the decline in critical thinking, recognizing that over-reliance on AI weakens decision-making skills. The numbers reflect growing institutional awareness of the problem - but awareness alone doesn't protect an RM who has spent years building advisory expertise from quietly becoming dependent on algorithmic guidance.

How to overcome overreliance and skill atrophy

Design AI as augmentation with hard constraints on automation. Keep humans accountable for decisions, particularly in complex or ambiguous client situations. Structure workflows so that AI prepares and surfaces, while bankers interpret and own. Build in periodic human-only assessments for credit and relationship decisions to keep judgment sharp. The banks that get this balance right will have RMs who are faster, better informed, and sharper - not dependent.

Streamline sales and servicing operations with a single, AI-native banking OS designed to scale across your entire bank

Execution over ambition

The window for treating commercial banking AI as an experiment is closing. McKinsey reports that most banks have not yet delivered revenue growth or efficiency gains at scale from AI. Those that have, however, are pulling ahead in speed-to-decision, loss rate performance, and customer experience.

Each of the four barriers is solvable, and none requires a wholesale technology overhaul. What they require is an architecture built for AI that unifies data, governs decisions, and keeps human expertise at the center of every client relationship.

Every bank can say they're 'AI-first,' but most are bolting models onto mainframe-era silos. Transformation isn't about algorithms, it is about an architecture that operationalizes insights at scale. Without that, AI stays in pilot.
Jouk Pleiter, CEO & Founder, Backbase
About the author
Backbase
Backbase pioneered the Unified Frontline category for banks.

Backbase built the AI-native Banking OS - the operating system that turns fragmented banking operations into a Unified Frontline. Customers, employees, and AI agents work as one across digital channels, front-office, and operations.

Backbase was founded in 2003 by Jouk Pleiter and is headquartered in Amsterdam, with teams across North America, Europe, the Middle East, Asia-Pacific, Africa and Latin America. 120+ leading banks run on Backbase across Retail, SMB & Commercial, Private Banking, and Wealth Management.
