By Bronagh Duggan, 29 May 2024

Decoding the FCA's Approach to AI Regulation

Delve into the FCA's strategic approach to regulating AI in the financial sector and understand the implications for firms.

AI: tomorrow's regulation, today?

The Financial Conduct Authority (FCA) is taking a pragmatic approach to the regulation of artificial intelligence (AI) in the financial sector. Instead of developing a separate regulatory framework exclusively for AI, the FCA is applying and adapting existing rules and principles to manage AI-related risks. This strategy aims to ensure effective governance of AI technologies while fostering innovation in financial services.

The FCA is actively seeking to understand the current deployment strategies of AI within the firms it regulates. By learning from these firms, the FCA plans to develop strategies to ensure safe and responsible AI adoption in the financial sector.

The FCA's pragmatic approach to AI regulation

The FCA's approach to AI regulation involves interpreting and applying the government's five pro-innovation regulatory principles for AI. These principles include:

  1. Safety, security and robustness,
  2. Fairness,
  3. Appropriate transparency and explainability,
  4. Accountability and governance, and
  5. Contestability and redress.

To align with the FCA's interpretation of these principles, firms need to take several key steps:

  1. Conduct regular audits and reviews of AI systems to identify potential security and safety risks,
  2. Develop robust business continuity and incident response plans,
  3. Ensure operational resilience in the face of AI-related disruptions,
  4. Conduct thorough due diligence on AI providers,
  5. Provide regular training for staff on the security, safety and regulatory aspects of AI, and
  6. Establish cross-functional teams involving legal, compliance, technical and risk management staff to review and address AI-related issues.

The Government’s expectations

In March 2023, the UK government set out five pro-innovation regulatory principles for AI as noted above. The FCA and other regulators are expected to interpret and apply these principles within their remits. The FCA has interpreted these principles and established a preliminary framework for firms to follow. The framework integrates AI considerations into existing regulations and provides recommendations for compliance and the adoption of best practices. This framework may evolve as AI technologies and their applications grow.


By complying with these principles, firms can demonstrate their commitment to safe and responsible AI adoption in the financial sector.