The news: In a report released this week, the UK government outlines a framework for governing AI while also promoting innovation.
What’s driving this? Excitement around the potential uses of generative AI has kicked regulators into high gear as they try to figure out how to promote its use while keeping consumers safe. UK regulators set out three main objectives they hope the framework will achieve.
The framework: Five principles set the stage for the framework, informing how AI should be developed and used across all sectors: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The report lays out the framework in four elements.
How will they achieve this? The report also highlights support functions that will assist regulators in implementing the framework while maintaining an innovation-friendly environment.
How will this affect financial services? Most countries using AI in financial services rely on existing regulatory frameworks to address AI-related risks and protect consumers. But major gaps remain, especially around consumer data privacy and sourcing large quantities of unbiased model input data.
The bottom line: It’s a refreshing change that UK regulators are taking a stab at creating cohesive, comprehensive AI-focused regulation while promoting development. The picture looks different in the US, where calls for AI regulation are growing louder but regulators are dragging their feet. Either way, banks have plenty of work ahead: Retail banks are forecast to spend $4.9 billion on AI platforms by 2024, per GlobalData research.
This article originally appeared in Insider Intelligence’s Banking Innovation Briefing—a daily recap of top stories reshaping the banking industry.