UK policymakers publish an AI regulatory framework that’s flexible enough to cover all sectors

The news: In a report released this week, the UK government outlines a framework for governing AI while also promoting innovation.

What’s driving this? Excitement around the potential uses of generative AI has kicked regulators into high gear as they try to figure out how to promote its use while keeping consumers safe. The government listed three main objectives it hopes the framework will achieve:

  • Drive growth and prosperity: Regulators want to promote responsible innovation by reducing the uncertainty around regulations.
  • Increase public trust in AI: Many risks accompany the benefits of AI. Regulators want to ensure they protect consumers properly.
  • Strengthen the UK’s position: The UK is working hard to become a tech leader, and regulators hope the framework will cement its status as a global leader in AI.

The framework: Five principles set the stage, informing how AI should be developed and used across all sectors: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The framework itself comprises four elements:

  • A complete definition of AI that covers its unique characteristics and allows regulators to coordinate on policymaking.
  • A context-specific approach that avoids blanket rules for entire sectors and instead weighs the outcomes that the use of AI will generate.
  • A set of cross-sectoral principles, including standards for strong governance, that help regulators respond to AI’s risks and empower them to enforce the framework.
  • A set of support functions that allows regulators to apply and refine the framework iteratively.

How will they achieve this? The report highlights support functions that will assist regulators in implementing the framework while preserving an innovation-friendly environment. These include:

  • Monitoring risks that arise from using AI across all sectors.
  • Conducting gap analyses to identify emerging AI trends and any regulatory gaps they expose.
  • Supporting sandbox initiatives to foster ideation and speed up time-to-market for new products.
  • Promoting interoperability with international regulatory frameworks.

How will this affect financial services? Most countries that are using AI in financial services are relying on existing regulatory frameworks to address the risks associated with AI and to protect consumers. But big gaps remain, especially around consumer data privacy and sourcing large quantities of unbiased model input data.

  • In the UK, lawmakers are prepared to work with financial regulators to ensure AI offerings within the sector meet current consumer protection requirements, such as those introduced in the Financial Services and Markets Act.
  • The iterative nature of implementation will also allow regulators to update policies that prove too loose or too rigid.

The bottom line: It’s a refreshing change to see UK regulators take a stab at cohesive, comprehensive AI-focused regulation while promoting development. The picture looks different in the US, where calls for AI regulation are growing louder but regulators are dragging their feet. Either way, banks have plenty of work ahead: Retail banks are forecast to spend $4.9 billion on AI platforms by 2024, per GlobalData research.

This article originally appeared in Insider Intelligence’s Banking Innovation Briefing—a daily recap of top stories reshaping the banking industry.