AI in banking

Why fragmentation is the enemy of AI in banking

29 April 2026
4 mins read

The AI models aren't the problem. The fragmented architecture is.


Banks spent over $73 billion on AI in 2025 - a 17% year-on-year increase - and most of them have little to show for it. Only 25% of institutions have woven AI into their strategic playbook, while the other 75% remain stuck in fragmented pilots that never reach production.


53% of banks on legacy core systems struggle to scale due to data silos and production bottlenecks. Banks have built their technology stacks purchase by purchase, integration by integration, year by year - loan origination in one system, KYC in another, fraud detection in a third. The result is a fragmented frontline where AI hits a wall at every turn.

For decades, humans papered over the gaps in fragmented banking architecture. AI can't do that, and when you hand it that same infrastructure to operate on, it doesn't compensate. Instead, it scales the inconsistency.

What fragmentation looks like

Ask a banker to describe their technology stack and the word "integrated" comes up fast. Ask a technologist at the same bank, and you'll hear something closer to the truth: a collection of point solutions stitched together over decades, each solving one specific problem and speaking a slightly different language.

Banks that build AI on top of that foundation are stacking intelligence on an unstable base. The instability runs across three layers.

Layer 1: The technology stack

Large enterprise banks run an average of 897 systems, and 95% of them struggle to integrate data across those systems. The problem isn't any single purchase - it's what they become together.

Each individual piece made sense when it was bought - the fraud detection tool, the CRM, the digital onboarding layer, the compliance module. But none of them were designed to work together. Every connection between them is a custom integration, and every data handoff between them is a place where decisions slow down.

According to a 2024 PYMNTS Intelligence report, 71% of bankers describe their own core systems as a "spaghetti of legacy systems that are difficult to untangle and update." AI models can't navigate this spaghetti.

Layer 2: The data problem

AI has plenty of data, just no reliable version of it. This goes back to legacy cores, which were designed during the branch-centric era, prioritizing uptime and stability over flexibility. They operate on batch processing models and overnight jobs rather than the continuous data streams modern digital experiences require.

When a customer calls the contact center at 9 AM about a transaction that happened at 11 PM, the system often hasn't caught up yet. A human agent can explain that, but an AI model working from stale data simply fails.

Many banks compound the problem by replicating datasets across systems, creating inconsistent truths and higher storage costs.
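The "no reliable version" problem can be made concrete. Here is a minimal sketch - field names, thresholds, and values are all hypothetical - of the freshness-and-consistency check an AI pipeline would need before trusting any one copy of a replicated record:

```python
from datetime import datetime, timedelta

# Two copies of the same customer record, replicated across systems
# (hypothetical values; the CRM copy lags behind the core).
core_record = {"balance": 12_450.00, "updated_at": datetime(2025, 4, 28, 23, 0)}
crm_record = {"balance": 11_900.00, "updated_at": datetime(2025, 4, 27, 18, 30)}

MAX_STALENESS = timedelta(hours=12)  # illustrative tolerance

def reliable_balance(records, now):
    """Return a balance only if at least one copy is fresh and all
    fresh copies agree; otherwise signal that no reliable version exists."""
    fresh = [r for r in records if now - r["updated_at"] <= MAX_STALENESS]
    if not fresh:
        return None  # every copy is stale: the batch jobs haven't caught up
    balances = {r["balance"] for r in fresh}
    if len(balances) > 1:
        return None  # fresh copies disagree: inconsistent truths
    return balances.pop()

now = datetime(2025, 4, 29, 9, 0)
print(reliable_balance([core_record, crm_record], now))  # 12450.0
```

A human agent applies this judgment implicitly; an AI pipeline has to encode it explicitly, for every replicated field, in every system.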

Layer 3: The organizational layer

This is the one most banks overlook. The team that manages the mortgage stack rarely talks to the team managing the mobile app. The fraud team's data doesn't flow automatically to the relationship manager's workspace. Banks maintain fragmented, siloed approaches instead of enterprise-wide strategies, and that organizational structure mirrors the technology beneath it.

People learn to work around it - through manual reconciliation, direct messages, and institutional memory. AI has no access to any of that workaround infrastructure. When you remove the humans who hold the context together, the gaps become visible fast.

Why AI can't inherit the workarounds

Human beings are good at navigating broken systems. A veteran relationship manager knows the system of record runs 24 hours behind, so they check the secondary tool before a client call. A branch teller knows the fraud flag doesn't update until end-of-day, so they apply judgment. A compliance officer knows which data source to trust when two systems disagree.

As long as that knowledge lives in people's heads, AI inherits the fragmented architecture without the workarounds.

What AI sees and what it misses

AI reads what's in the system, acts on what's available, and decides based on what it can see. Feed an AI agent a fragmented data environment and it doesn't compensate intelligently; instead, it scales the inconsistency. A bad signal that a human would quietly discount becomes an input shaping thousands of decisions per day.

Three scenarios show exactly where fragmentation breaks AI in practice.

Commercial lending. An AI model tasked with credit risk assessment pulls from three separate systems with different update frequencies. The balance sheet data is current, the behavioral transaction data is 48 hours old, and the covenant compliance records are updated weekly. The model builds a recommendation on a patchwork of temporal mismatches - an output no credit committee should trust.

Fraud detection. An AI system watching for suspicious patterns needs a unified view of customer behavior across channels. When mobile, branch, and telephone banking data live in separate systems, the AI sees fragments, not patterns. It misses the cross-channel behavior that a human analyst, with access to all three feeds and some experience, would flag immediately.

Customer onboarding. An agentic AI designed to automate KYC and document verification runs into a workflow where document intake happens in one system, identity verification in a second, and risk scoring in a third. Each handoff requires a human to copy data across. The agent stalls at every boundary.
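The onboarding scenario can be sketched in a few lines. This is an illustrative mock - every system name, key, and field is invented - showing why an agent stalls the moment a key translation between systems is missing:

```python
# Illustrative mock of the three-system KYC flow: intake, identity
# verification, and risk scoring each key the customer differently.
intake_system = {"doc_id": "D-981", "name": "A. Nguyen", "passport_no": "X123"}
identity_system = {"X123": {"verified": True, "customer_ref": "CU-55"}}
risk_system = {"CU-55": {"score": "low"}}

def automate_kyc(intake):
    # Boundary 1: intake -> identity. The systems share no common
    # customer key, so this lookup only works if the passport number
    # was already mapped across systems (normally by a human).
    ident = identity_system.get(intake.get("passport_no"))
    if ident is None or not ident["verified"]:
        return "stalled at identity handoff"
    # Boundary 2: identity -> risk. Another bespoke key translation
    # ("customer_ref") that a human normally copies across.
    risk = risk_system.get(ident["customer_ref"])
    if risk is None:
        return "stalled at risk handoff"
    return f"onboarded ({risk['score']} risk)"

print(automate_kyc(intake_system))                           # onboarded (low risk)
print(automate_kyc({"doc_id": "D-982", "name": "B. Tran"}))  # stalled at identity handoff
```

Each `get` call stands in for a cross-system handoff: remove the human who maintains the key mapping and the agent halts at the first boundary.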

Pilot purgatory is an architectural symptom

The industry has a name for what happens when AI meets a fragmented bank: pilot purgatory.

A bank runs a successful proof of concept for AI-driven loan decisioning. It works in the test environment - the data is clean, the scope is narrow, and the integration is manageable. Then the program tries to go live across the full portfolio and hits the real architecture:

  • Dozens of systems that don't talk to each other
  • Batch data that's a day old
  • Compliance rules that live in a separate platform
  • Workflow approvals that require a human because no system can trigger the next step automatically

Why pilots don't become products

Decision-makers remain focused too narrowly on simple use cases rather than seeking to transform more complex workflows and end-to-end journeys. The architecture was never designed to support reuse, so it doesn't.

Every new AI use case demands a new set of custom integrations, a new data pipeline, and a new governance wrapper. Meanwhile, many banks are still missing the skills, frameworks, and operational architectures they need to implement AI successfully.

As the cost of each deployment grows and the marginal return on each AI investment shrinks, work stays a project and never becomes a product.

Why AI fails without a unified frontline

An AI agent needs to know the state of the customer to make a decision. When that state is split across dozens of disconnected systems, the agent can't function. Agentic AI and frontline architecture have to evolve together - intelligence without a unified foundation is automation waiting to fail.

With the right data architecture, banks could cut AI implementation time in half and lower costs by 20% - but that requires a clear direction, not just a better tool.

When all execution surfaces - branch, mobile, contact center, relationship manager workspace - draw from the same Customer State Graph, something different becomes possible. An AI agent working a loan renewal can see the customer's full transaction history, their recent service interactions, their risk profile, and their product holdings. The recommendation it makes is trustworthy, and the human banker reviewing it can act with confidence.
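As a sketch - the schema below is hypothetical and not Backbase's actual data model - the difference is that a loan-renewal agent assembles its full context from a single read instead of joining across disconnected systems:

```python
# Hypothetical unified customer-state record that every execution
# surface (branch, mobile, contact center, RM workspace) reads from.
customer_state = {
    "CU-55": {
        "transactions": [{"amount": -230.0, "ts": "2025-04-28T23:00"}],
        "service_interactions": ["disputed card charge, 2025-04-27"],
        "risk_profile": "low",
        "product_holdings": ["current account", "term loan"],
    }
}

def loan_renewal_context(customer_id):
    """Assemble the full context an AI agent needs for a renewal
    recommendation from one source - no cross-system joins, no stale copies."""
    state = customer_state[customer_id]
    return {
        "recent_spend": sum(t["amount"] for t in state["transactions"]),
        "open_issues": state["service_interactions"],
        "risk": state["risk_profile"],
        "holdings": state["product_holdings"],
    }

print(loan_renewal_context("CU-55")["risk"])  # low
```

The point of the sketch is structural: every field the agent needs lives behind one key, so consistency and auditability come from the architecture rather than from per-use-case integration work.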

The pattern is consistent: banks that consolidate their frontline onto a single operating layer scale AI faster, deploy new use cases with less custom integration, and generate governance trails that hold up to regulatory scrutiny.

To learn more about what a unified frontline in banking looks like, visit the dedicated blog exploring it here.

Frequently asked questions

What does fragmentation mean in the context of AI in banking?

Fragmentation in banking refers to disconnected point solutions, siloed data systems, and isolated teams that can't share information in real time. When AI is applied to this kind of architecture, it inherits the inconsistencies rather than overcoming them. Data locked in overnight batch updates and systems that don't communicate make it impossible for AI to operate reliably at scale. Learn more at Backbase's guide to banking legacy systems.

Why does fragmentation stop banks from scaling AI?

AI models make decisions based on the data they can access. When customer data is spread across a dozen disconnected systems with different update frequencies, AI can't build a reliable picture of reality. Humans compensate through experience and institutional knowledge - AI cannot. The result is that pilots work in controlled conditions but stall the moment they touch the real architecture, a pattern known as pilot purgatory.

How does a unified frontline help banks scale AI?

A unified frontline means every execution surface - mobile, branch, contact center, relationship manager workspace - draws from the same real-time customer data layer. AI agents working across that unified foundation see consistent, current information, produce trustworthy decisions, and generate clean audit trails for regulators. It removes the structural barriers that keep AI trapped in proof-of-concept mode and lets banks deploy intelligence at enterprise scale.

What are the real costs of fragmented AI in banking?

The costs compound across multiple dimensions: each new AI use case requires custom integrations that increase technical debt, explainability breaks down across disconnected systems creating regulatory liability, and duplicated data creates inconsistent signals that reduce model accuracy. According to BCG, only one in four banks is actively using AI competitively - and fragmented architecture is the primary reason the other three are stuck.

What is the difference between an AI pilot and AI at scale in banking?

A pilot works in a controlled environment with clean, scoped data and manageable integrations. Scaling means deploying AI across the full operational architecture - all channels, all customer segments, all workflow dependencies. Fragmentation is the gap between the two. Banks that treat AI as a connected strategic capability built on unified data and shared governance cross that gap. Banks that treat AI as a collection of one-off projects don't.

About the author
Backbase
Backbase pioneered the Unified Frontline category for banks.

Backbase built the AI-native Banking OS - the operating system that turns fragmented banking operations into a Unified Frontline. Customers, employees, and AI agents work as one across digital channels, front-office, and operations.

Backbase was founded in 2003 by Jouk Pleiter and is headquartered in Amsterdam, with teams across North America, Europe, the Middle East, Asia-Pacific, Africa and Latin America. 120+ leading banks run on Backbase across Retail, SMB & Commercial, Private Banking, and Wealth Management.
