The news: The Consumer Financial Protection Bureau (CFPB) is signaling that it will clamp down on artificial intelligence (AI) practices in banking that can be used to discriminate against people, per American Banker.
- AI is a significant growth space in banking: A report from Research and Markets projects that the global market for AI in banking will soar from $3.88 billion in 2020 to $64.03 billion in 2030, a CAGR of 32.6%.
More on this: The publication cited warnings voiced in recent months by senior CFPB officials about potential misuse of AI:
- Director Rohit Chopra has warned that AI could be abused to advance “digital redlining” and “robo discrimination.”
- At an October 2021 press conference, Chopra addressed the opacity of the data underlying algorithms, blasting what he called “black box underwriting algorithms [that] are not creating a more equal playing field and only exacerbate the biases fed into them.”
- Chopra tweeted last month that the bureau “will be taking a deeper look at how lenders use artificial intelligence or algorithmic decision tools.”
- CFPB Assistant Director for Supervision Policy Lorelei Salas gave a similar warning in a blog post published the same day.
- Eric Halperin, the bureau’s enforcement leader, warned in a December 2021 speech about AI being coupled with “unfair, deceptive and abusive acts and practices.”
Under the Biden Administration, the US regulator has pivoted away from its Trump-era encouragement of AI adoption, American Banker noted, pointing to a 2020 post as a historical contrast. The remarks follow a broader request for public comment on AI that the bureau jointly issued with other federal regulators in March 2021.
A key component of the AI discussion is the incorporation of alternative data, or information that hasn’t traditionally been used in underwriting, the publication reported, summarizing the pros and cons of using it:
- Supporters view AI as a way to deliver more accurate decisions for underwriting because of greater data inclusion.
- However, critics have voiced concerns about data accuracy and worry that some applicants could be unfairly excluded by the underwriting process.
Prudence is warranted: The CFPB’s scrutiny of AI usage could uncover misuses and give banks clearer guidance to follow. Implementing guardrails for the technology is crucial: there’s already evidence of AI perpetuating biases in finance and beyond:
- Three Federal Reserve economists found in a 2021 paper that algorithmic systems for mortgage underwriting produced higher denial rates for minority borrowers. They said their paper was the first to document this disparity.
- A 2020 Harvard Business Review article cited cases where algorithms produced biased outcomes: UK college admissions decisions were tied to the past performance of the schools students attended, and Amazon’s recruiting algorithm penalized applications that included the word “women’s.”
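Disparities like the one the Fed economists documented are typically surfaced by comparing outcome rates across demographic groups. A minimal sketch of that audit step, using entirely hypothetical application data (the group labels, field names, and numbers below are illustrative, not from the paper):

```python
# Hypothetical loan applications; "group" is a demographic label
# and "denied" is the underwriting model's decision.
applications = [
    {"group": "A", "denied": False},
    {"group": "A", "denied": True},
    {"group": "A", "denied": False},
    {"group": "A", "denied": False},
    {"group": "B", "denied": True},
    {"group": "B", "denied": True},
    {"group": "B", "denied": False},
    {"group": "B", "denied": False},
]

def denial_rate(apps, group):
    """Share of applications from `group` that were denied."""
    subset = [a for a in apps if a["group"] == group]
    return sum(a["denied"] for a in subset) / len(subset)

rate_a = denial_rate(applications, "A")  # 1 of 4 denied -> 0.25
rate_b = denial_rate(applications, "B")  # 2 of 4 denied -> 0.50
print(f"Denial-rate gap (B minus A): {rate_b - rate_a:.2f}")
```

A persistent gap like this doesn’t prove discrimination on its own, but it is the kind of signal that triggers the deeper fair-lending review regulators are describing.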
Clear guidance will help banking players employ AI in ways that lower bias risks, earn customers’ trust, and still capture the technology’s advantages. Beneficial examples include, per our 2020 AI in Banking report:
- Delivering insights to customers via personal financial management (PFM) tools.
- Boosting credit-analysis quality.
- Making targeted product offerings to people.
- Deploying virtual assistants.
- Preventing and detecting payments fraud.