AI Radar 2025: Why technology alone is not enough – insights from banks and IT providers

The first version of our AI Radar for the DACH region from fall 2024 clearly showed how intensively banks are already using artificial intelligence (LINK). We systematically analyzed the largest banks in the DACH region on the basis of publicly available information to determine which AI use cases they have implemented. The identified cases were structured according to the bank model (Gabler Banklexikon) based on the underlying banking processes. In addition, other criteria such as the model used, investment costs, and lessons learned were recorded where available.

But how has reality changed since then? In our current surveys, we focused on qualitative interviews with partner banks and IT providers of the Competence Center Future Financial Services in order to gain deeper insights into experiences and new strategic challenges in dealing with AI in the financial industry.

Insights from the interviews

The discussions clearly show that AI has now grown beyond pilot projects in many banks. Following the prominent release of generative AI for the consumer market in 2022, we have seen a significant leap in development over the last two years: banks are adopting such applications in practice. ChatGPT and other large language models are no longer just a field of experimentation for innovation departments but are actively used for specific tasks. Examples range from corporate customers supplementing their e-banking with AI-driven solutions for expense management to automated credit decisions in the back office, which, according to respondents, can reduce processing times by up to 90%. Other applications include the automated plausibility check of expert opinions, digital onboarding using AI-supported document analysis and the GDPR-compliant creation of reports in marketing and communication. We have recorded and described these and other use cases from the banks shown in the figure below in a structured manner as part of our AI Radar.

Figure 1: AI radar (status 05/2025)

In addition to the implementation of various AI use cases in banks, it was particularly noticeable in the interviews that there is currently no uniform understanding of artificial intelligence in practice: while some focus more on the technology (“machine learning + x”), others tend to describe its functionality (“simulation of human intelligence”).

Strategic patterns & decision paths

It is striking that almost all of the institutions surveyed now closely link their AI and data strategies with the overarching IT and corporate strategy. Nevertheless, governance remains a challenge in many cases.

In practice, banks and IT providers usually take a hybrid approach to sourcing the technology: they draw on existing foundation models, some of which are open source-based, and then refine them for specific use cases. Decision-making processes are often decentralized; use cases are usually identified via structured workshops and then prioritized according to criteria such as expected benefits, technical maturity and feasibility.
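The prioritization step described above can be sketched as a simple weighted scoring model over the named criteria (expected benefits, technical maturity, feasibility). All use case names, weights and ratings below are hypothetical illustrations, not figures from the interviews:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    expected_benefit: int    # rating 1 (low) .. 5 (high)
    technical_maturity: int
    feasibility: int

# Hypothetical weights; in practice these would emerge from the workshop itself.
WEIGHTS = {"expected_benefit": 0.5, "technical_maturity": 0.25, "feasibility": 0.25}

def score(uc: UseCase) -> float:
    """Weighted sum of the three prioritization criteria."""
    return (WEIGHTS["expected_benefit"] * uc.expected_benefit
            + WEIGHTS["technical_maturity"] * uc.technical_maturity
            + WEIGHTS["feasibility"] * uc.feasibility)

# Illustrative candidates, loosely inspired by the use cases mentioned above
candidates = [
    UseCase("Automated credit decision", 5, 2, 3),
    UseCase("AI-supported document analysis", 4, 4, 4),
    UseCase("Report drafting for marketing", 2, 5, 5),
]

# Highest-scoring use cases first
for uc in sorted(candidates, key=score, reverse=True):
    print(f"{uc.name}: {score(uc):.2f}")
```

The weighting makes the trade-off explicit: a high-benefit but immature use case can rank below a moderately beneficial one that is ready to implement, which is exactly the "quick win" logic reported in the interviews.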

Differences between banks and IT providers

There is an interesting contrast between banks and technology providers. Banks are naturally more cautious: regulatory requirements and internal coordination require them to implement planned applications in a deliberate, structured manner. At the same time, they are under pressure to realize efficiency gains and develop new services. Providers, on the other hand, are typically more agile, test new Large Language Models (LLMs) in short cycles and have more technical freedom. They usually develop new use cases in response to the banks' requirements. However, they often lack direct access to end-user feedback (from bank customers or bank employees), as feedback from day-to-day banking is typically filtered and passed on via the IT department.

Challenges in implementation

A key finding of our study concerns the complex challenges that banks face when using artificial intelligence. In particular, regulatory uncertainty – for example due to the GDPR/DSG, the AI Act in the EU as well as the planned implementation of the European AI Convention in Switzerland through sectoral adjustments to existing laws – is creating a considerable need for adaptation at both a technical and organizational level. At the same time, it is clear that many institutions are struggling with the acceptance of the technology within their own workforce: cultural and organizational hurdles make it difficult to introduce new AI solutions, which requires active expectation management and targeted change measures.

Added to this is the economic trade-off between costs and benefits. While the investment costs can often be clearly quantified, the potential efficiency gains, such as faster processing times or automated processes, often have yet to be reliably measured. Implementation is further complicated by a lack of standards in data formats, which hinders collaboration with technology providers and the integration of solutions.
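The cost-benefit trade-off can be made concrete with a simple payback calculation. All figures below are hypothetical placeholders, not values from the study; in practice the difficulty lies precisely in estimating the time savings reliably:

```python
# One-off and recurring costs (hypothetical, EUR)
investment_cost = 250_000          # implementation, integration, validation
annual_run_cost = 40_000           # licenses, hosting, maintenance per year

# Efficiency-gain assumptions (hypothetical)
cases_per_year = 12_000
minutes_saved_per_case = 25        # e.g. via automated document checks
hourly_rate = 80                   # fully loaded cost per employee hour, EUR

annual_saving = cases_per_year * minutes_saved_per_case / 60 * hourly_rate
net_annual_benefit = annual_saving - annual_run_cost
payback_years = investment_cost / net_annual_benefit

print(f"annual saving: {annual_saving:,.0f} EUR")
print(f"payback period: {payback_years:.2f} years")
```

The calculation is trivial once the inputs exist; the finding from the interviews is that `minutes_saved_per_case` is exactly the number most institutions cannot yet measure.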

Last but not least, there is a lack of clear governance structures in many organizations: who controls the AI portfolio, what priorities are set and how to measure the success of individual applications is not yet systematically regulated in many places. These open questions make it clear that technological feasibility alone is not enough – clear framework conditions, decision-making logics and responsibilities are also required.

One challenge that has become increasingly important in the discussions concerns geopolitical dependencies and the risk of vendor lock-in. Particularly when using large, mostly non-European foundation models and cloud infrastructures, banks can become dependent on individual providers in the long term – both technologically and in terms of regulatory and economic conditions. These concerns involve not only data sovereignty and compliance but also strategic flexibility in model selection, integration and further development.

In contrast, the problem of hallucinations with LLMs was comparatively minor in the interviews. Two companies reported that they had systematically compared the performance of LLMs with manual processing by employees – particularly when analyzing large volumes of files. The result: the error rate of LLM-supported analysis was lower than that of human processing.
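Such a comparison boils down to measuring each channel's deviation rate against a common reference set. A minimal sketch with purely illustrative data (the interviews report the comparison, not the underlying numbers):

```python
def error_rate(predictions, ground_truth):
    """Share of items where the extracted value deviates from the reference."""
    assert len(predictions) == len(ground_truth)
    errors = sum(p != g for p, g in zip(predictions, ground_truth))
    return errors / len(predictions)

# Illustrative data only: e.g. a key field extracted from each of 8 files
reference    = ["A", "B", "A", "C", "B", "A", "C", "B"]  # manually verified
llm_output   = ["A", "B", "A", "C", "B", "A", "B", "B"]  # 1 deviation
human_output = ["A", "B", "C", "C", "B", "A", "B", "B"]  # 2 deviations

print(error_rate(llm_output, reference))    # 0.125
print(error_rate(human_output, reference))  # 0.25
```

The caveat is that a trustworthy reference set must itself be verified independently; otherwise the comparison inherits the errors of whichever channel produced it.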

Recommendations for action from practice

Five key recommendations for action can be derived from the interviews.

  1. Prioritize use cases: With the help of criteria such as “Weighted Shortest Job First” or the Data Use Case Assessment Framework (LINK) developed in the Competence Center Future Financial Services, quick wins can be identified and implemented.
  2. Establish governance: Clearly defined responsibilities, AI guidelines and centralized portfolio management increase the speed of implementation.
  3. Establish feedback systems: Feedback from day-to-day operations should flow back systematically, especially when working with providers.
  4. Evaluate make-or-buy decisions transparently: Hybrid sourcing approaches should be critically reviewed with regard to the balance between control and speed.
  5. Promote skills development: When introducing AI, technology alone is not enough – targeted training, change management and leadership training are crucial for success.

Managing risks: Generative AI not only opens up new opportunities but also new attack surfaces – for example through deepfakes, identity forgery or automated fraud attempts. Security concepts must proactively counter this growing threat.

Future prospects – what’s next?

According to our partners, process-oriented agent systems that can orchestrate not just individual steps but entire workflows are particularly exciting. The use of small language models and multi-agent architectures will also increase. Looking to the future, our partners also expect to see the increasing establishment of adaptive AI systems that can automatically take regulatory changes into account.

Conclusion & outlook

The AI Radar 2025 shows that the successful use of AI in banks requires far more than just technology – the decisive factor is how AI is strategically embedded, organizationally managed, culturally anchored and implemented on a sustainable data foundation, despite sometimes fragmented structures and systemic legacy issues.

The question of how AI will fundamentally influence banking processes and IT architecture, and whether it will remain limited to financial topics and financial advice, e.g. in the form of personal AI assistants for customers, is still unanswered.

Stefanie Auge-Dickhut