AI: risk and reward

The use of artificial intelligence (AI) in financial services may enable firms to offer better products and services to consumers and drive innovation… but also brings significant risk, according to a new discussion paper by the Bank of England.

According to the regulator, AI adoption within financial services is likely to continue to grow, driven by the increasing availability of data, improvements in computational power, and wider access to AI skills and resources.

However, it adds that although the use of AI may bring a range of benefits, it can also pose novel challenges for firms and regulators as well as amplify existing risks to consumers, the safety and soundness of firms, market integrity, and financial stability. One of the most significant questions, it suggests, is whether AI can be managed through clarifications of the existing regulatory framework, or whether a new approach is needed. How to regulate AI to ensure it delivers in the best interests of consumers, firms, and markets is the subject of a wide-ranging debate, both in the UK and in other jurisdictions around the world.

Benefits and risks related to the use of AI in financial services 

AI offers potential benefits for consumers, businesses, and markets. However, it also has the potential to create new or heightened risks and challenges. The benefits, risks, and harms discussed in the discussion paper (DP) are neither exhaustive nor applicable to every AI use case.

The primary drivers of AI risk in financial services relate to three key stages of the AI lifecycle: (i) data; (ii) models; and (iii) governance. Interconnected risks at the data level can feed into the model level, and then raise broader challenges at the level of the firm and its overall governance of AI systems. Depending on how AI is used in financial services, issues at each of the three stages (data, models, and governance) can result in a range of outcomes and risks that are relevant to the supervisory authorities’ remits.

Consumers

AI may benefit consumers in important ways – from improved outcomes through more effective matching to products and services, to an enhanced ability to identify and support consumers with characteristics of vulnerability, as well as increasing financial access. However, if misused, these technologies may potentially lead to harmful targeting of consumers’ behavioural biases or characteristics of vulnerability, discriminatory decisions, financial exclusion, and reduced trust.

Competition

There may be substantial benefits to competition from the use of AI in financial services, where these technologies may enable consumers to access, assess, and act on information more effectively. But risks to competition may also arise where AI is used to implement or facilitate harmful strategic behaviour, such as collusion; to create or exacerbate market features that hinder competition, such as barriers to entry; or to leverage a dominant position.

Firms

There are also many potential benefits for financial services firms, including enhanced data and analytical insights, increased revenue generation, greater operational efficiency and productivity, enhanced risk management and controls, and more effective combatting of fraud and money laundering. Equally, the use of AI can translate into a range of prudential risks to the safety and soundness of firms, which may differ depending on how the technology is deployed.

Financial markets

AI may benefit the broader financial system and markets in general through more responsive pricing and more accurate decision-making, which can, in turn, lead to increased allocative efficiency. However, AI may also lead to risks to system resilience and efficiency. For example, models may become correlated in subtle ways and add to risks of herding, or procyclical behaviour at times of market stress.

Insurance policyholder protection – PRA and FCA

AI in the insurance sector has the potential to improve the efficiency of data processing and decision-making in terms of both underwriting and claims processing, the paper suggests. In life insurance, which includes an investment component, firms could leverage AI to support the investment choices of policyholders. In general insurance, AI could be used for automating claims management. Firms in the insurance sector can also use AI to analyse new unstructured data sources, like telematics or data collected from wearable devices, to provide more tailored products and/or pricing.

The Bank of England notes that insurers use AI across a range of business areas, which could pose risks to policyholder protection. Risks related to underwriting could lead to inappropriate pricing and marketing. For example, AI models trained on historical data may not account for a breakthrough healthcare treatment, which could lead to mispriced policies. Similarly, risks such as concept drift and a lack of explainability in claims management AI systems could affect policyholders' ability to claim and their overall protection. Risks related to building AI models for cash-flow and capital reserve estimates could result in inaccurate predictions and reserve levels that could, in turn, impair insurers' ability to meet future liabilities.

Third party vulnerabilities?

A further key challenge for firms lies in their ability to monitor operations and risk management activities that take place outside their organisations at third parties, the Bank of England paper suggests. 

Increased reliance on third parties, often outside the regulatory perimeter, for datasets, AI algorithms, and other IT outsourcing (such as cloud computing) may amplify systemic risks. For example, operational failures and cyberattacks at critical third parties could result in disruption to certain AI services and therefore lead to a single point of failure that could impact multiple firms and markets.