The news: Four UK regulatory bodies, collaborating as the Digital Regulation Cooperation Forum (DRCF), issued a pair of reports identifying potential risks posed by the use of algorithmic systems, particularly machine-learning approaches. They also offered ideas on how to manage those risks and ensure algorithm use is fair and unbiased.
What’s the problem? AI is set to explode in banking, with the market projected to reach $64.03 billion by 2030 at a CAGR of 32.6%, per Research and Markets (a quick check of that growth math follows the list below). But the complexity of AI models can create a “black box” problem: models reach decisions with little transparency into how they arrived at their conclusions, making accountability and error detection difficult. Banks have already drawn criticism for their use of AI:
- Goldman Sachs faced claims in 2019 that technology it used to measure creditworthiness might be biased against women.
- An analysis of Wells Fargo’s refinancing practices found that the bank approved only 47% of Black homeowners’ mortgage refinancing applications in 2020.
- Three Federal Reserve economists found in a 2021 paper that algorithmic systems used in mortgage underwriting produced higher denial rates for minority borrowers.
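For context on the growth figures cited above: a 32.6% CAGR sustained over a decade implies roughly a 17x increase, which squares with the $64.03 billion projection if the market started from a 2020 baseline of about $3.9 billion. That baseline is not stated in this piece and is an assumption; the sketch below simply checks the arithmetic.

```python
# Sanity check of the cited projection: $64.03B by 2030 at a 32.6% CAGR.
# The 2020 baseline of ~$3.9B is an assumed value, not from this article.
base_2020 = 3.9   # assumed 2020 market size, in $ billions
cagr = 0.326      # compound annual growth rate cited above
years = 10        # 2020 through 2030

projected_2030 = base_2020 * (1 + cagr) ** years
print(f"Projected 2030 market size: ${projected_2030:.2f}B")
# ~ $65.5B, broadly in line with the cited $64.03B figure
```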
More on the UK reports: The regulators called the two papers a foundation for identifying areas where individual regulators should step in and where they should collaborate on oversight.
The first report lays out six focus areas in algorithmic processing: transparency, fairness, access to information, resilience of infrastructure, individual autonomy, and healthy competition.
It also offered an overview of the potential harms and benefits of algorithmic processing.
- Algorithms are beneficial but require responsible innovation because they have the potential to cause harm, both intentionally and inadvertently.
- Firms procuring and using algorithms often know little about their origins and limitations.
- The lack of visibility and transparency in algorithmic processing can undermine accountability.
- Adding a “human in the loop” is not a foolproof safeguard against causing harm.
- Regulators still need to conduct further studies on the risks associated with algorithmic processing.
What happens next? In the second report, the regulators outlined a tentative plan of action, which includes:
- Working to improve companies’ understanding of the impact algorithms can have, including identifying and promoting best practices.
- Supporting the development of algorithmic assessment practices to identify inadvertent harm, improve transparency, and give the public more confidence in algorithmic processing systems.
- Helping firms communicate with consumers about where and how they use algorithmic systems.
- Working with researchers on human-computer interaction to better understand issues with human-in-the-loop oversight, such as automation bias.
- Promoting further research on open questions, such as exploring futures methodologies (for example, horizon scanning and scenario planning) to identify trends in the development and adoption of algorithms.
The big takeaway: The collaborative effort of the four UK regulatory agencies has advanced the development of clear standards for the use of AI and machine learning. Their findings, along with a similar call in the US from the Consumer Financial Protection Bureau (CFPB) for deeper scrutiny of AI, have started the dialogue needed to get a handle on these powerful and easily misused tools.
- Clear, globally agreed-on standards must be developed to promote interoperability.
- Financial services firms must have a better understanding of the algorithms they use and their implications.
- Consumers need to know how their data is processed, as well as what data is used as input.
- Agencies, companies, and consumers should all encourage greater transparency in the use of these techniques and methodologies.