Some make automating compliance functions seem like a question of do or die. But anyone who hopes to have a seat at the table needs to understand the risks involved, along with the evolving set of AI regulations in finance they will need to follow.
AI is transforming financial services, and it now represents a key battleground between incumbents and digital-first challengers. Most major banks have some live AI use cases, particularly in risk and compliance, customer interactions and credit analysis. Many insurers have adopted AI in fraud and claims management, customer engagement and parts of the underwriting and pricing processes.
Societal anxiety around AI, however, has also grown. The high-profile Apple credit card launch became mired in allegations of sexist algorithmic bias, even though a subsequent regulatory review found no evidence of unfair bias. Online insurer Lemonade created a public relations disaster by claiming it used facial recognition technology to identify users’ emotions in order to detect fraudulent claims.
National AI Regulations in Finance
Some of these may be instances of early overreaction to a poorly understood new technology. However, very real risks lie underneath. While the financial sector has always used algorithms, the use of machine learning (an important subset of AI) is relatively new and introduces or accentuates several risks:
- Machine learning algorithms can be harder to understand than their traditional rule-based or statistical counterparts, which makes their decisions difficult to justify internally and to customers and regulators.
- Because such algorithms learn from patterns in historical data, they can be affected by quality and bias issues in the input data. This reduces their reliability and creates the potential for outcomes such as unfair refusal of credit or insurance (a simple way to quantify such disparities is sketched after this list).
- Lower human oversight can accentuate traditional risks to market functioning, such as threats to stability and resilience.
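To make the bias risk concrete, here is a minimal sketch of how a disparity in credit-approval outcomes across a protected group might be quantified. The data, group labels and metric below are hypothetical illustrations rather than a prescribed methodology; real validation work would use the institution’s own data and a broader set of fairness measures.

```python
# Minimal, hypothetical sketch: measuring an outcome disparity (demographic
# parity gap) in credit decisions. Data and groups are invented for illustration.
import pandas as pd

# Hypothetical approved/declined decisions with a protected attribute.
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Approval rate per group, and the gap between the best- and worst-treated group.
rates = decisions.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
```

A large gap is not proof of unfair treatment on its own, but it is exactly the kind of signal a model owner or independent validator would be expected to investigate and explain.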
Collectively, these can result in a significant trust deficit in AI, which financial regulators have already recognized. For example, regulators in Singapore (Nov 2018), Hong Kong (Nov 2019) and the Netherlands (Jan 2020) have all published specific guidelines on AI regulations in finance. The Bank of England and Financial Conduct Authority in the U.K. formed a public-private forum in October 2020 with the aim of gathering industry inputs. U.S. banking regulators have sought industry comments on a wide-ranging set of questions around AI risks.
So far, the focus has been on guidance and consultation rather than new binding regulation. However, as understanding of the risks and mitigation techniques grows, regulators are likely to become more explicit about their expectations of industry players. One early example is the European Commission’s draft AI Act, which designates credit provision as one of the high-risk AI use cases.
Current Practices and Gaps in Existing Frameworks
The good news is that the industry already has many of the building blocks necessary to address AI-related risks. Banks and insurers have used predictive models for a long time – for example, to manage capital, credit, liquidity or market risk, run stress test scenarios or make insurance underwriting and pricing decisions. Requirements around banking secrecy and the need to handle sensitive personal health information have sensitized them to customer data protection far more than peers in other industries. Requirements around fair treatment of customers, such as being transparent on the reasons for refusal of credit or demonstrating the suitability of an investment product sold to a customer, are also part of existing risk and compliance frameworks.
However, significant additional work is needed inside financial institutions to strengthen existing frameworks, enhance understanding and implement standard processes and tools to meet AI regulations in finance. While the ultimate responsibility will lie with the data science teams who build models and their business stakeholders, risk and compliance professionals also have a very important role to play:
- Teams that have traditionally set the standards for managing model risk – and for independently validating models created by the first line – will be expected to enhance their standards and practices to incorporate AI-specific requirements, such as those regarding explainability and bias (see the validation sketch after this list).
- Data risk owners will have to strengthen existing standards and practices to maintain the quality of and access to historical data used to train machine learning models.
- Compliance teams will have to assess whether existing customer protection and market stability standards, such as those around customer communication and fair treatment, product suitability and market competition, are adequate for a world with greater AI adoption.
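As one illustration of what an AI-specific validation check might look like, the sketch below uses permutation feature importance from scikit-learn, a model-agnostic way to see how much each input drives a model’s predictions. The model, synthetic data and feature names are placeholder assumptions, not a mandated approach; validation teams would apply whatever explainability tooling their own framework requires.

```python
# Hypothetical sketch of a transparency check an independent validator might run:
# permutation feature importance on a trained credit-style classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical application data.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out performance drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance drop = {importance:.3f}")
```

Features whose influence the business cannot explain are a prompt for challenge and documentation rather than an automatic failure, and such a review would typically be paired with bias and robustness testing.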
Ideally, all of this would be done in a way that does not add incremental risk management policies and standards, but is instead embedded into the institution’s existing enterprise risk management framework.
Risk and compliance professionals should embrace this mandate – not only as a way of supporting the digital transformation of their employers, but also as a means of continuing their own professional growth. To do so, they must start by learning more about AI and the regulations governing its use in finance, the technology’s potential and limitations, and the ways in which its risks can be addressed. Not everyone has to become a data scientist, but the ability to ask the right questions – supported by technology to assess AI systems’ transparency, fairness and robustness – will be critical.