Regulating AI in the EU and the UK – a legal view

  • 21 February 2024
  • Technology
  • Sarah Pearce, Partner at Hunton Andrews Kurth

We saw significant regulatory developments worldwide in the field of AI during the course of 2023. To date, privacy laws have been a key source of AI regulation, particularly in the EU and the UK. Both have highly developed privacy rules, specifically the General Data Protection Regulation (GDPR) and the GDPR as incorporated into UK law (the UK GDPR). However, privacy laws alone are not sufficient to fully regulate the use of AI. In 2023, significant advances were seen in the approaches proposed in the EU and the UK for regulating the use of AI, and in 2024, this looks set to continue.

The EU and the UK, in particular, have taken divergent approaches to regulating AI. While the EU opted for prescriptive legislation through the AI Act, the UK’s preferred approach is a non-statutory, principles-based framework.

EU Approach

2023 saw major progress on the AI Act, with political agreement reached on 8 December 2023. However, technical discussions are still ongoing and the final text of the AI Act is therefore not yet available. It is likely that the AI Act will be formally approved in early 2024. Although the official text has not been published, a copy of the proposed final version has been leaked, so the key provisions and approach are known:

  1. The AI Act will introduce a risk-based legal framework for AI governance in the EU, meaning obligations will vary in accordance with the risk level given to a use of AI. Most obligations will fall on providers of AI systems, with a more limited set applying to those deploying AI and other players such as importers.
  2. The AI Act prohibits the deployment in the EU of certain harmful AI practices, including, for example, AI systems used for social scoring for both public and private purposes.
  3. High-risk AI systems are subject to detailed obligations, including an obligation on providers to perform a conformity assessment to ensure that the systems they place on the market comply with the provisions of the Act.
  4. AI systems that may give rise to transparency risks are subject to lighter obligations, and AI systems that are not considered prohibited, high-risk or a transparency risk are not regulated under the Act.
  5. General purpose AI systems are also subject to risk-based requirements. All such systems, and the models they are based on, must adhere to transparency requirements, with a set of more stringent requirements only applicable to the most advanced systems.
  6. Non-compliance with the AI Act may lead to significant fines of up to €35 million or seven percent of an organization’s annual global turnover, whichever is higher.
  7. Following formal approval, the AI Act will become applicable after a transition period, the length of which will vary depending on the type of AI system.

UK Approach

The UK Government announced its “pro-innovation approach” to regulating AI last year and issued further details in its Policy Paper. The UK proposes to develop a framework of principles to guide and inform responsible development and use of AI in all sectors. It does not, at this stage, propose to enact legislation. The principles will be issued on a non-statutory basis and implemented by existing regulators, allowing their “domain-specific expertise” to tailor implementation to the specific context in which AI is used. Regulation will be based on the outcomes of AI as opposed to any specific sector or technology. Existing regulators will be expected to implement the framework underpinned by the following principles: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

While the UK is primarily proposing a non-statutory approach, the draft Artificial Intelligence (Regulation) Bill was introduced to UK Parliament in late 2023. It is limited in scope: its key provisions propose the creation of a new body, the “AI Authority,” whose functions are defined in the Bill, and place several obligations on the UK Secretary of State to issue further regulations.

During 2024, it is expected that the UK will progress further with the “pro-innovation approach.” The Policy Paper sets out a range of next steps, such as further engagement with industry, issuing principles to regulators and publishing an AI Regulation Roadmap, which are likely to take place during 2024. In addition, while in the early stages of review, the Bill will also likely progress during 2024.

Bletchley Declaration

In addition to country-specific activity, 2023 also saw nations working together to regulate the use of AI, and we will likely see similar collaborations and the development of international standards continue into 2024 and beyond. In November 2023, 29 signatories, including the UK and the EU, reached a world-first agreement known as the Bletchley Declaration at the AI Safety Summit 2023. The Declaration reflects a shared understanding of the opportunities and risks posed by AI and of the need for governments to work together to meet the most significant challenges.


In 2024, for us and others in our sector, the focus will be on supporting clients to ensure they are compliant with the AI Act. For some, this will be a case of leveraging data privacy compliance programmes already in place. Others will need to start from scratch to ensure they have the right policies and procedures in place. And it does not stop there: approaches to regulating AI are still being discussed and debated by big tech companies, governments and other stakeholders. This is bound to play out further over the next 12 months. We will continue to monitor approaches taken in the EU, the UK and globally.