In banking boardrooms, there’s a conversation that keeps surfacing. Someone raises a competitor - a bank that launched a new lending product in weeks, absorbed a volume surge without adding headcount, or delivered a genuinely coherent customer experience across channels.
The question is always the same: what do they know that we don't?
According to BCG, only 1 in 4 banks worldwide uses AI to gain any real competitive advantage. The rest aren't stalling because their models are inadequate or because their data science teams lack vision. They are stalling because the underlying architecture is fundamentally fragmented.
The technology budgets and AI programs are real - but so is the gap between banks that got AI right and those stuck in pilot purgatory. The weak link is a structural problem that bigger AI budgets cannot solve.
Level 1 was about the app. Level 2 is about how the business runs.
To understand what separates the leading banks from the rest in the era of AI, it helps to think about the past decade of banking transformation in two distinct phases.
The first phase - Level 1 - was about the customer-facing surface. Banks invested in digital channels: better mobile applications, streamlined onboarding, online self-service, and improved interfaces for customers who no longer wanted to visit a branch.
Those investments delivered genuine results. The front of the banking operation was modernized in ways that were visible and meaningful. Digital adoption increased, and customer satisfaction improved in measurable ways.
What those investments left largely untouched, however, was the operational reality behind the customer-facing surface.
Behind every digital interaction sits a set of processes that, in most banks, runs across dozens of disconnected systems - completing a loan application, resolving a dispute, onboarding a new customer, or handling a service request. Each system carries its own version of the customer's data, its own rules, and its own view of what has happened and what needs to happen next.
The work that bridges those systems - gathering information, checking policies, coordinating between teams, resolving exceptions - falls to people, running through manual handoffs that no single system tracks or owns.
Modernizing the front of the operation did not change this. In many cases it made the gap between the digital experience customers see and the operational reality behind it more visible, not less.
The banks leading in the AI era have already moved past the app layer to Level 2. They changed how the entire frontline operates. This affects not just what the customer sees, but how an interaction moves from first touchpoint to resolution - across digital channels, branches, contact centers, and the operations that sit behind all of them.
That is a categorically different transformation goal, and it requires a foundation that Level 1 investments were not designed to provide.
What the operational gap looks like in practice
When a customer submits a loan application through a digital channel in a bank still operating at Level 1, the experience they see may be seamless, but what happens behind the scenes is typically anything but.
In banks stuck at Level 1, data sits in systems that do not share a common model. On this fragmented infrastructure, data has to be gathered and reconciled manually before anyone can act on it. For example:
- Policy checks are performed by people working across separate tools that do not communicate.
- Documents move between teams through email threads that carry no audit trail.
- Exceptions accumulate in queues that no dashboard captures end to end.
The customer does not see any of this, but it is where the time goes, where the cost accumulates, and where the risk compounds.
This is the operational whitespace - the work that lives between systems rather than inside them. It is also, critically, where AI cannot function effectively when the foundation beneath it is fragmented.
Every AI initiative deployed on a fragmented foundation requires its own bespoke integration to access the data and context it needs. Every use case is essentially a new project rather than an extension of a working operational model. This is why BCG research found that 75% of institutions remain stuck in siloed pilots and proofs of concept despite sustained investment in AI.
In a nutshell, AI pilots are failing because each one is built on a foundation that was never designed to support enterprise-wide execution.
Three actors, and what happens when they don't share a foundation
For most of banking's history, every bank's frontline involved two participants: the employee and the customer. Growth meant adding more of the first to serve more of the second.
AI introduces a third participant, with the potential to absorb significant operational work at scale - but the conditions under which that potential can be realized matter enormously.
In a bank that has reached Level 2, all three participants work from the same operational foundation. For example:
- A customer's situation is understood consistently by the mobile application, the contact center agent picking up the call, and the AI tool handling a related service task.
- A policy change propagates across every channel and workflow simultaneously, without a separate implementation effort per system.
- An employee picking up a case mid-process sees the same context the customer saw and the full history of every prior interaction.
An AI agent operating within that environment inherits all of it without requiring a custom integration to access any of it.
In a bank still running on a Level 1 foundation, each of those participants operates with a different version of the same reality. The customer sees their situation through the app. The contact center agent sees a different version through their system. The AI tool for the relevant workflow may have access to neither.
This is not a technology problem that a better AI model would solve. It is a structural problem that determines what any AI model can do once it is deployed.
Why the distance between the two tiers grows over time
The structural difference also determines how each type of foundation behaves as new capabilities are built on top of it.
The financial data reflects this dynamic directly. According to BCG's Widening AI Value Gap research, the companies that have built the right foundation for AI are achieving 1.7 times higher revenue growth and 3.6 times greater three-year total shareholder returns compared to those that have not. Here's how:
In a bank operating at Level 2 - with a unified foundation - each new investment builds on what came before. A new product reaches every channel from a single configuration. For example:
- A new AI deployment draws on the operational context accumulated across every previous customer interaction.
- A new policy is enforced consistently everywhere from the moment it is applied.
The effort required to deliver new capabilities diminishes as the foundation matures, because each new initiative starts from a working base rather than from scratch.
In a bank still on Level 1 - on a fragmented foundation - the overhead of each new initiative stays roughly constant - or grows. For example:
- Every new AI use case requires its own integration work.
- Every new channel creates another data silo to manage.
- Every new policy change has to be implemented across each disconnected system separately.
Even though the banks in this position are investing actively, running pilots in earnest, and adding genuine capabilities, those investments cannot compound because the foundation prevents it.
The leading group is also reinvesting those returns into further capability - planning to allocate 64% more of their IT budgets to AI than their slower-moving counterparts in 2025.
The question that determines which side of the divide you are on
Understanding where a bank sits on this divide comes from looking honestly at a few operational realities that are already visible:
- When a customer calls after submitting an online application, does the person who picks up see the same information the customer saw - or do they start by asking the customer to explain their situation again?
- When a policy changes, how many separate systems does that change have to be implemented in, and who owns making sure it has been applied consistently across all of them?
- When your last AI pilot succeeded in its test environment, how long did the integration work take before it could access what it needed in production - and how similar was that effort to every pilot before it?
The answers to those questions describe the foundation more accurately than any roadmap does.
The banks that have crossed to Level 2 did not do so by adding more technology to their existing structure. They changed the structure itself - building a foundation where customers, employees, and AI agents work from a shared operational model, and then building everything else on top of it.
For the institutions still completing Level 1, the most consequential strategic question is not which AI capability to prioritize next. It is whether the foundation underneath will allow that capability to compound - or require the same integration effort all over again.