Today, the topic of artificial intelligence (AI) permeates every conversation about the future of technology and the digital world. The biggest companies in the world have been using AI for years, with notable everyday examples such as the Face ID feature on Apple’s iPhones or Microsoft’s virtual assistant, Cortana. Alongside this, major industries such as healthcare, food, travel, manufacturing, banking and real estate have embraced large technological transformations built around AI.

Why has it taken off so dramatically? Because AI can be applied to many business and operational functions that power productivity and business growth, opening new opportunities for companies to become more efficient and less costly to run.

AI... explained

Simply put, AI is the development of intelligent systems capable of learning and solving problems much like humans do. To build an understanding of a situation and produce an answer, AI requires data and algorithms; the algorithms teach it how to analyse patterns and sequences. But much like the rise and spread of the Internet, the unknowns and potential harms of AI are piling up alongside its stated benefits.

How is the banking sector using AI?

The banking sector is an interesting case for AI deployment, as the industry has traditionally faced stringent regulation and pressure to improve cost-to-income ratios. Given how integral AI tools have become to everyday banking, from ATMs to online banking, studies forecast that the global banking industry could generate an additional $1 trillion in value per year if AI is fully leveraged through data collection, analytics, language processing models and generative AI. Cost savings for banks from AI applications could reach $447 billion by the end of 2023, and banks may also be able to handle two to five times more interactions and transactions with the same resources.

Where does the banking sector currently stand on AI deployment?

The Evident AI Index, created by the business intelligence firm Evident, benchmarks global banks against each other in terms of AI readiness. Across North America and Europe, JP Morgan Chase currently ranks first, followed by Royal Bank of Canada, Citigroup and UBS Group. The Index scores banks across Talent, Innovation, Leadership, and Transparency (which includes a company’s responsible AI policies).

The ability to leverage AI successfully can provide banks with wide-reaching benefits such as increased efficiency, stronger cybersecurity, cost savings, better customer service, and arguably better decision-making when it comes to risk assessments and loan management.

The following five use cases demonstrate how:

  • Increased operational efficiency: Banks can automate typically manual business processes such as middle-office tasks, marketing and sales campaigns, and document processing and verification. This reduces the risk of human error and lets banks reallocate employees to higher-value tasks that require human interaction, while generating cost savings on recruitment and making it easier to scale up resources when demand surges.
  • Stronger and more resilient cybersecurity: By monitoring the patterns and behaviours of customers, AI can enable faster and stronger prevention and detection of fraudulent activity such as identity theft and cybercrime in real time, making banking transactions safer and more secure. It can also strengthen a bank’s anti-money-laundering controls.
  • Improved and personalised customer experience: Many banks use chatbots today because of the elevated customer experience they provide, with fast, 24-hour service that puts informative material in front of customers in just a few clicks. Virtual assistants such as Capital One’s Eno and Bank of America’s Erica connect millions of customers daily with their personal finance accounts and provide heightened security measures. Fraud alerts are just one of many personalised services on offer: over time, AI can learn customers’ spending and saving habits to help them with financial planning and account management, as well as sending automatic push notifications when an upcoming payment is due or helping with money transfers. Santander Spain has rolled out such AI applications, with customers scoring the quality of AI-backed notifications 4.7 out of 5. With advanced conversational AI, banks will be able to use chatbots and virtual assistants to hold more lifelike conversations with their customers, offering even more personalised care and recommendations.
  • Omnichannel enhancement: As banking services become increasingly digital and customer interactions multiply across channels, AI will help banks manage and coordinate those interactions at a far bigger scale.
  • Decision-making: Banks can leverage AI to gather and monitor a far broader range of data and metrics, such as income, spending habits, average bank balance, level of debt and credit scores, to improve decision-making on aspects such as loan applications and customer risk (see the sketch after this list).
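
To make the decision-making use case concrete, the sketch below shows, in a purely illustrative way, how the metrics listed above (income, spending, balance, debt and credit score) might be combined into a single risk score that drives an approve / refer / decline outcome. The weights, thresholds and field names are assumptions for illustration only, not how any particular bank’s credit model works.

```python
from dataclasses import dataclass

# Illustrative only: the feature weights and thresholds below are assumptions,
# not calibrated values from any real bank's credit model.

@dataclass
class Applicant:
    monthly_income: float    # net income per month
    monthly_spending: float  # average spending per month
    average_balance: float   # average account balance
    total_debt: float        # outstanding debt
    credit_score: int        # bureau score, assumed range 300-850


def risk_score(a: Applicant) -> float:
    """Combine the metrics from the article into a single 0-1 risk score (higher = riskier)."""
    debt_to_income = a.total_debt / max(a.monthly_income * 12, 1.0)
    spend_ratio = a.monthly_spending / max(a.monthly_income, 1.0)
    balance_cushion = min(a.average_balance / max(a.monthly_spending, 1.0), 6.0) / 6.0
    score_norm = (a.credit_score - 300) / 550  # normalise 300-850 to 0-1

    return (
        0.35 * min(debt_to_income, 1.0)
        + 0.25 * min(spend_ratio, 1.0)
        + 0.25 * (1.0 - score_norm)
        + 0.15 * (1.0 - balance_cushion)
    )


def decide(a: Applicant, approve_below: float = 0.35, refer_below: float = 0.6) -> str:
    """Map the risk score to a decision; borderline cases go to a human reviewer."""
    r = risk_score(a)
    if r < approve_below:
        return "approve"
    if r < refer_below:
        return "refer to human underwriter"
    return "decline"


if __name__ == "__main__":
    applicant = Applicant(monthly_income=3200, monthly_spending=2100,
                          average_balance=4500, total_debt=12000, credit_score=710)
    print(decide(applicant))  # -> "refer to human underwriter"
```

Note the ‘refer’ band: keeping borderline cases with a human underwriter is one way of addressing the limits of fully automated decision-making discussed in the next section.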

What’s the catch?

However, with all these benefits come threats to data privacy, security, equality and human rights. Because humans choose what data they feed to the algorithms that instruct AI tools, a level of unconscious bias is inherent in AI decision-making. AI systems can learn and reproduce biased information and patterns, generating widespread discriminatory practices against already marginalised populations. In this regard, the limitations of using AI in banking relate to the absence of human interaction in decision-making, imperfect data quality, ethical considerations and cybersecurity risks from the sharing of sensitive customer data. For example, if malicious use of AI or biased algorithms leads to more credit and loan approvals for one demographic of the population whilst excluding or intentionally marginalising another to boost profit, the social implications will be huge.
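
To show how such bias can at least be made measurable, here is a minimal, hypothetical check that compares loan-approval rates across demographic groups and flags a large gap for review. The decision records, group labels and threshold below are invented for illustration; real fairness audits use richer metrics and are subject to legal and regulatory tests.

```python
from collections import defaultdict

# Hypothetical decision log: (demographic_group, approved) pairs. In practice
# these would come from a bank's decision records, with groups defined under
# its fairness and compliance policies.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Return the approval rate per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                          # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(f"approval-rate gap: {gap:.2f}")

# Assumed review threshold: flag the model for investigation if the gap is too wide.
if gap > 0.2:
    print("Gap exceeds threshold - escalate for bias review")
```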

Striking the right balance between boosting financial innovation and mitigating risks to both banks and customers is difficult: while AI regulation can help combat a lack of transparency in algorithmic decision-making and bias, ‘too much’ regulation may discourage banks from pursuing AI innovation.

Ethics, policies and regulations

Wavestone’s 2023 Data and Analytics Leadership Executive Survey found that many organisations are already far behind in their attention and commitment to data ethics and policies. As with the phenomenon of social media, concerned members of civil society are calling for regulation to come from the ‘top’. Governments around the world, as well as prominent tech leaders such as OpenAI’s CEO Sam Altman, are increasingly talking about ‘responsible tech’ and ‘responsible AI’: the idea that the use of AI should be guided by ethical and human-centred principles.

While new laws and regulations face polarisation and bureaucracy in the US, multiple regulatory bodies have voiced their opinions on AI. In March 2022, the Consumer Financial Protection Bureau reaffirmed the importance of upholding the Equal Credit Opportunity Act, which prohibits discrimination in credit applications, in the context of using AI in decision-making. Consumer safety and data privacy are central to recent proposals such as the Biden administration’s nonbinding ‘AI Bill of Rights’, which details five guiding principles for the development of AI, and the 2022 American Data Privacy and Protection Act (ADPPA), which promises greater transparency in the collection, use and sale of consumer data.

The ‘AI Act’

Across the Atlantic, the European Commission first proposed the regulation of AI in 2021, and the EU has committed to enshrining the world’s first comprehensive AI law. The so-called AI Act will classify AI uses based on the level of risk they pose to humans. For example, ‘unacceptable risk’ means that the threat to people from a use of AI is so high that it must be banned; this covers applications such as social scoring (classifying people based on socio-economic or personal characteristics) and cognitive behavioural manipulation. In the UK, the government has promoted an ‘agile and iterative process’ to find a ‘pro-innovation’ approach to AI regulation. Echoing the Biden administration’s approach, five core principles touching on safety, transparency and accountability will shape the use of AI in the UK, aiming to strengthen the UK’s position as a global AI leader and boost innovation whilst also protecting the consumer.

How can banks prepare?

With much of the literature and policy recommendations focusing on the ethical implications of AI, banks will have to ask themselves whether they can use AI to streamline processes while still guaranteeing customer data privacy and fair, unbiased decision-making. Establishing a regular review of AI-backed decision processes and applications, together with an auditing system, will help banks hold themselves and their developers accountable for their impacts on society and human rights (a minimal illustration of such an audit record follows below). Another question that will shape the industry is customer experience: are customers ready to let go of the human interaction that is already slipping away from local branches and telephone conversations, or will faster and more efficient AI chatbots be welcomed as a better substitute? And how will banks tackle employee fears of AI replacing their job functions?
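
As a small illustration of what such a review and auditing system could record in practice, the sketch below logs each AI-backed decision with the model version, a hash of the inputs, the outcome and whether a human reviewed it. The schema and field names are assumptions for illustration, not a prescribed standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionAuditRecord:
    """One reviewable entry per AI-backed decision (illustrative schema)."""
    model_name: str
    model_version: str
    input_hash: str      # hash of the inputs, so no raw customer data sits in the log
    decision: str
    confidence: float
    human_reviewed: bool
    timestamp: str

def log_decision(model_name, model_version, inputs: dict, decision, confidence, human_reviewed=False):
    record = DecisionAuditRecord(
        model_name=model_name,
        model_version=model_version,
        input_hash=hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        decision=decision,
        confidence=confidence,
        human_reviewed=human_reviewed,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice this would go to an append-only audit store; printing keeps the sketch self-contained.
    print(json.dumps(asdict(record)))

log_decision("loan_risk_model", "2023.1", {"income": 3200, "credit_score": 710},
             decision="refer", confidence=0.62, human_reviewed=True)
```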

As mentioned, the development of AI points to a future where large language models enable more human-like conversations and interactions, while automation enables higher productivity. As a recent Wavestone insight advises “the current state of AI represents just the beginning of what is possible, with the frontiers of the field advancing at an unprecedented rate”.

The banking industry should focus on implementing, expanding and strengthening their responsible use of data and AI policies for two reasons:

  1. Governments around the world are already writing legislation to regulate data privacy and how the development and use of AI will affect it.
  2. Stronger security and ethical policies will build trust with customers and ensure protection from harmful bias and data breaches.

The concept of ‘responsible’ technology is not new and should be applied by companies and governments alike when designing forward-looking, human-centred and equitable policies that govern the use of AI while allowing humanity to reap the many benefits.

Get in touch with our Data and AI experts