Banks have spent years improving interfaces while leaving decision-making slow and fragmented. The next phase belongs to institutions that can use AI to act intelligently, govern confidently, and move at commercial speed.


For years, banks have spoken about becoming more customer-centric. They have invested in channels, interfaces, automation, and analytics. They have refined journeys, redesigned experiences, and modernized parts of the stack. Yet in too many institutions, one problem persists. The customer may move in real time, but the bank still responds in stages.
That gap is becoming harder to defend.
At the BIAN Conference in Cape Town, South Africa, I made the point that AI matters most when it helps banks close that gap. Not with more digital theatre, and not with another layer of clever language sitting on top of old processes, but with the ability to translate customer intent into commercial action quickly, responsibly and at scale.
What is changing in banking is not simply the technology. It is the nature of interaction. In the past, banks spoke to customers, gathered requirements, and then tried to interpret what those customers meant. Increasingly, that is no longer how the relationship works. Customers are becoming more explicit, more informed, and more demanding in how products and services should be configured. In many cases, they are effectively arriving with what I described as a no-code level of engagement. They are saying, in simple terms: this is what I need you to achieve. The bank’s task is no longer to infer demand slowly. It is to respond to it intelligently.


That is where AI becomes commercially meaningful.
In my view, the most useful role for AI in financial services is not only at the edge of customer experience, though that matters greatly. It is in connecting the front office to the decision layers behind it. Once a customer expresses a requirement, the bank should be able to assess, almost immediately, how that request fits within product parameters, pricing logic, regulatory obligations, and revenue protection guardrails. If the institution has the right architecture and intelligence in place, it should not have to disappear into a back-office rabbit hole before coming back with an answer. It should be able to negotiate, configure, and respond in near real time.
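To make the idea concrete, here is a deliberately simplified sketch of what such a single-pass assessment could look like. Every name, rate, and threshold below is invented for illustration; this is not SunTec's product logic or any bank's actual decision engine, only the shape of the idea.

```python
from dataclasses import dataclass

# Hypothetical illustration only: products, rates, and limits are invented,
# not drawn from any real bank's parameters or pricing logic.

@dataclass
class CustomerRequest:
    product: str
    requested_rate: float   # rate the customer is asking for
    amount: float

@dataclass
class Guardrails:
    floor_rate: float        # lowest rate that still protects revenue
    max_amount: float        # product parameter / risk limit
    permitted_products: set  # what regulation and policy allow today

def respond(request: CustomerRequest, rails: Guardrails) -> str:
    """Assess a stated requirement against product, pricing, regulatory, and
    revenue guardrails in one pass, so the answer comes back in near real time
    rather than after a trip into the back office."""
    if request.product not in rails.permitted_products:
        return "decline: product not available under current regulatory scope"
    if request.amount > rails.max_amount:
        return f"counter-offer: maximum amount is {rails.max_amount}"
    if request.requested_rate < rails.floor_rate:
        return f"counter-offer: best sustainable rate is {rails.floor_rate}"
    return "accept: request fits product, pricing, and revenue guardrails"

# A customer arriving with an explicit, no-code style requirement:
rails = Guardrails(floor_rate=4.5, max_amount=250_000, permitted_products={"term_loan"})
print(respond(CustomerRequest("term_loan", requested_rate=4.2, amount=200_000), rails))
```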
That changes the commercial dynamic. It moves the bank from being reactive and procedural to being responsive and precise. It also shifts AI from being an experimental add-on to being part of the operating discipline of the bank.
This matters because responsiveness without control is dangerous, but control without responsiveness is simply another form of failure. Much of banking has spent years oscillating between those two extremes. On one side sits the promise of personalization and agility. On the other sits the institutional instinct to slow everything down in the name of caution. Neither is sufficient on its own. Customers do not want a reckless bank. But they do not want an obstructive one either.
That is why I also spoke about the role of AI at the back end, especially in relation to fraud, anomaly detection, and behavior that sits outside the norm. Here again, the question is not whether protection is necessary. Of course it is. The question is whether protection becomes excessive. Too often, banks drift into what I call hyper-protection. The result is a customer experience that feels less like safety and more like suspicion. From the customer’s point of view, the sentiment is straightforward: it is my money, and I still want the freedom to use it, with sensible safeguards around me. AI gives institutions the chance to build those safeguards with more nuance. It allows them to distinguish between unusual behavior and dangerous behavior more intelligently than blunt legacy controls often can.
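The principle can be sketched in a few lines. The scores, thresholds, and response tiers below are invented purely to illustrate the distinction between unusual and dangerous; they are not a production fraud model.

```python
# Hypothetical sketch only: scores and thresholds are invented for illustration.

def triage(anomaly_score: float, risk_score: float) -> str:
    """Blunt legacy controls tend to block on anomaly alone. Here,
    unusual-but-low-risk activity is allowed or softly verified, and only
    activity that is both unusual and high-risk is stopped."""
    if anomaly_score < 0.5:
        return "allow"                      # ordinary behaviour
    if risk_score < 0.3:
        return "allow"                      # unusual, but nothing suggests harm
    if risk_score < 0.7:
        return "step-up verification"       # ask, do not accuse
    return "block and review"               # unusual and genuinely dangerous

# A customer spending out of pattern while travelling:
print(triage(anomaly_score=0.8, risk_score=0.2))   # -> allow
# The same pattern combined with strong fraud indicators:
print(triage(anomaly_score=0.8, risk_score=0.9))   # -> block and review
```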
That is a subtle but important difference. Good AI does not merely stop more things. It helps the bank stop the right things, while allowing legitimate activity to continue. In banking, that distinction is the difference between building trust and eroding it.
None of this, however, happens by accident. If organizations want meaningful AI outcomes, they need meaningful AI investment. At SunTec, we have invested in AI infrastructure. We run multiple large language models because we do not believe serious enterprise capability comes from locking oneself into a single approach. Different customer groups have different needs, and different use cases demand different responses. Building product capability in this environment means ensuring our teams can work across that complexity rather than be constrained by it.
That investment has another purpose as well. It is about talent.
A great many firms talk about attracting the next generation of AI professionals. Fewer create an environment in which that talent can actually learn. Our experience has been that people want access to current infrastructure, current models, and current problems. They want room to experiment. They want the freedom to test, fail, learn, and test again. That cycle matters. It sharpens capability, but it also signals something about the organization itself. People begin to recognize a brand not only for the products it sells, but for the seriousness of its commitment to innovation.
The same logic applies internally. One of the areas where we have seen comparatively quick returns is training and enablement. As a software company, we have looked closely at how AI can accelerate the journey of new developers and configurators, helping them become productive faster. We have also looked at how AI can assess proficiency, identify whether a user is operating at an expert, intermediate, or beginner level, and then recommend the training needed to help them use the software more effectively. In such areas, the return on investment can be relatively quick because the gains show up in time to competence, time to use, and speed of execution.
Other returns are slower, but no less real. In areas such as fraud management, anomaly detection and operational risk, the value may only become fully visible over time. It will show up on the balance sheet eventually, but rarely in a dramatic instant. That is one reason why banks need a more mature conversation about AI value. Not every benefit arrives at the same speed. Some use cases improve productivity immediately. Others strengthen resilience quietly over time. Both matter.
There is another issue that deserves more honesty in the industry, and that is trust. When we speak about moving AI from pilot to production, the discussion usually focuses on data, architecture, security, and governance. Those are all valid concerns. But there is also an emotional dimension to adoption that institutions often understate. People do not always resist AI because the models are weak. Often, they resist because trust has not caught up with capability.
I shared an example from a digital banking migration. We found a report that had been running inside a bank for 17 years, even though nobody could quite explain why it was still there. When we investigated, we discovered that it had originally been created as a parallel check when the bank moved from handwritten ledgers onto a new system. It was meant to provide assurance during transition. Nobody ever switched it off. That story says a great deal about how institutions operate. Legacy is not only technical. It is behavioral. Caution survives long after its original purpose has faded.
AI is now entering that same terrain. We have already moved from paper-based to server-based operations. The next transition is from human-based to AI-assisted, and in some cases AI-driven, decisions. The pace of that transition will depend not only on technical readiness, but on whether the organization can see, trace, and understand what the system is doing. This is one reason I believe standards and structured architecture matter so much. When services are transparent and traceability is built in, trust becomes easier to earn. The hurdle to adoption becomes lower. The speed to market becomes faster. AI stops feeling like a black box and starts feeling like managed capability.
That is also why I see real value in the role BIAN can play. In a world of intelligent services, standards are not abstract technical niceties. They provide semantic clarity and service transparency that make AI more governable and more usable inside complex financial institutions. Banks do not merely need intelligence. They need intelligence they can understand, trace and regulate.
And regulation will remain central. For firms like ours that work across countries, tax regimes, and banking environments, adaptation to regulatory variation is already part of daily reality. The future of AI in banking will not be shaped by technology alone. It will be shaped by how confidently institutions can align innovation with commercial logic, customer need, and regulatory discipline.


That is why I remain convinced that the next phase of banking AI will not be won by the firms with the flashiest demonstrations. It will be won by those that can turn customer intent into product action, price intelligently within guardrails, protect revenue, meet regulatory obligations, and build trust at the same time.
Banking has enough theatre already. What it needs now is less delay, less institutional guesswork, and far more real-time intelligence.


