The European Union’s landmark AI Act formally went into effect Aug. 1, changing the way artificial intelligence is regulated across Europe — and, indeed, around the world. This first-ever comprehensive legal framework aims to ensure that AI systems released in the EU market and used in the EU are safe. Punter Southall partner Jonathan Armstrong explores the details of the regulation and what corporations around the globe need to know.
The first thing to say is that even before the passing of the EU AI Act, AI was not completely unregulated in the EU thanks to the GDPR. Previous enforcement activity against AI under GDPR has included:
- An Italian ban of the ReplikaAI chatbot
- The temporary suspension by Google of its Bard AI tool in the EU after intervention Irish authorities
- Italian fines for Deliveroo and a food delivery start-up over AI algorithm use
- Clearview AI fines under GDPR
But this regulation sets out the following risk-based framework:
Minimal risk
Most AI systems present only minimal or no risk for citizens’ rights or safety. There are no mandatory requirements, but organizations may nevertheless voluntarily commit to additional codes of conduct for these if they wish. Minimal risk AI systems are generally simple automated tasks with no direct human interaction, such as an email spam filter.
High risk
Those AI systems identified as high risk will be required to comply with strict requirements, including: (i) risk-mitigation systems; (ii) obligation to ensure high quality of data sets; (iii) logging of activity; (iv) detailed documentation; (v) clear user information; (vi) human oversight; and (vii) a high level of robustness, accuracy and cybersecurity.
Providers and deployers will be subject to additional obligations regarding high-risk AI. Providers of high-risk AI systems (and general-purpose AI model systems discussed below) established outside the EU will be required to appoint an authorized representative in the EU in writing. In many respects this is similar to the data protection representative (DPR) provisions in GDPR. There is also a registration requirement for high-risk AI systems under Article 49.
Examples of high-risk AI systems include:
- Some critical infrastructures, for example, for water, gas and electricity
- Medical devices
- Systems to determine access to educational institutions or for recruiting people
- Some systems used in law enforcement, border control, administration of justice and democratic processes; in addition, biometric identification, categorization and emotion recognition systems
Unacceptable risk
AI systems considered a clear threat to the fundamental rights of people will be banned outright early next year, including:
- Systems or applications that manipulate human behavior to circumvent users’ free will, such as toys using voice assistance encouraging dangerous behavior of minors, or systems that allow so-called “social scoring” by governments or companies and some applications of predictive policing
- Some uses of biometric systems will be prohibited, for example, emotion recognition systems used in the workplace and some systems for categorizing people or real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, subject to some narrow exceptions.
Specific transparency risk
Also called limited-risk AI systems, which must comply with transparency requirements. When AI systems like chatbots are used, users need to be aware that they are interacting with a machine. Deepfakes and other AI-generated content will have to be labeled as such, and users will have to be informed when biometric categorization or emotion recognition systems are being used.
In addition, service providers will have to design systems in a way that synthetic audio, video, text and images content is marked in a machine-readable format and detectable as artificially generated or manipulated.
Systemic risk
Systemic risk is:
- Specific to the high-impact capabilities of general purpose AI models
- Has a significant impact on the EU market due to reach or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or society as a whole
- Can be propagated at scale
General-purpose AI
The EU AI Act introduces dedicated rules for so-called general-purpose AI (GPAI) models aimed at ensuring transparency. Generally speaking, this means an AI system that is intended by the service provider to perform generally applicable functions like image and speech recognition, audio and video generation, pattern detection, question answering, translation and others.
For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing — a bit like red teaming to test for information security issues. These obligations will come about through codes of practice developed by a number of interested parties.
AI Is the Wild West, but Not for the Reasons You Think
As Europe moves closer to blanket rules regarding its use, CCI’s Jennifer L. Gaskin explores the evolving compliance and regulatory picture around artificial intelligence, the technology everyone seems to be using (but that we’re also all afraid of?).
Read moreWhat about enforcement?
Market surveillance authorities (MSAs) will supervise the implementation of the EU AI Act at the national level. Member states are to designate at least one MSA and one notifying authority as their national competent authorities. Member states need to appoint their MSA at national level before Aug. 2, 2025, for the purpose of supervising the application and implementation of the EU AI Act. It is by no means guaranteed that each member will appoint its DPA as the in-country MSA but the European Data Protection Board pushed for them to do so in its plenary session in July 2024.
In addition to in-country enforcement across the EU, a new European AI Office within the European Commission will coordinate matters at the EU level, which will also supervise the implementation and enforcement of the EU AI Act concerning general purpose AI models.
With regard to GPAI, the European Commission, and not individual member states, has the sole authority to oversee and enforce rules related to GPAI models. The newly created AI Office will assist the Commission in carrying out various tasks.
In some respects, this system mirrors the current regime in competition law with in-country enforcement together with EU coordination. But this could still lead to differences in enforcement activity across the EU as we’ve seen with GDPR, especially if the same in-country enforcement bodies have responsibility for both GDPR and the EU AI Act.
In certain circumstances, dawn raids may be possible in enforcement action. The first is in relation to testing high-risk AI systems in real-world circumstances. Under Article 60 of the Act, MSAs will be given powers of unannounced inspections, both remote and on-site, to conduct checks on that type of testing.
The second is that competition authorities may perform dawn raids as a result of this act. MSAs will report annually to national competition authorities any information identified in their market surveillance activities that may be of interest to the competition authorities. Competition authorities have had the power to conduct dawn raids under antitrust laws for many years now. As such, competition authorities might conduct dawn raids based on information or reports received.
Penalties
When a national authority or MSA finds that an AI system is not compliant, they have the power to require corrective actions to make that system compliant and to withdraw, restrict or recall the system from the market.
Similarly, the Commission may also request those actions to enforce GPAI compliance.
Noncompliant organizations can be fined under the new rules, as follows:
- €35 million or 7% of global annual turnover of the preceding year for violations of banned AI applications
- €15 million or 3% for violations of other obligations, including rules on general purpose AI models
- €7.5 million or 1.5% for supplying incorrect, incomplete or misleading information in reply to a request
Lower thresholds are foreseen for small and mid-sized companies and higher thresholds for other companies.
Applicability outside the EU
The AI Act’s extraterritorial application is quite similar to that of the GDPR; as such, these rules may affect organizations in the UK and elsewhere,including the U.S. Broadly, the EU AI Act will apply to organizations outside the EU if their AI systems or AI-generated output are on the EU market or their use affects people in the EU, directly or indirectly.
For example, if a U.S. company’s website has a chatbot function that is available for people in the EU to use, that U.S. business will likely be subject to the EU AI Act. Similarly if a non-EU organization does not provide AI systems to the EU market but does make available AI system-generated output to people in the EU (such as media content), that organization will be subject to this Act.
The UK, the U.S., China and other jurisdictions are addressing AI issues in their own particular ways.
The UK government published a whitepaper on its approach to AI regulation in March 2023, which set out its proposed “pro-innovation” regulatory framework for AI, and subsequently had a public consultation on the proposals. The government response to the consultation was published in February 2024.
Since then, the UK government has changed thanks to a shocking election result, and we’ve seen the government’s position on AI change, too. The position of the new Labour Government was set out in the King’s Speech in July 2024 with the new government saying it would, “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” No details are as yet available on what this bill would look like.
What happens next?
The AI Act formally entered into force Aug. 1 and will become fully applicable in two years, apart from some specific provisions. Prohibitions will already apply after six months, and rules on general purpose AI will apply after 12 months.
Date | Applicable elements | Corresponding sections of regulation |
2/2/2025 | Prohibitions on unacceptable risk AI will apply, so-called Prohibited Artificial Intelligence Practices | Chapters I and II |
5/2/2025 | Codes of practice must be ready by this time. The plan is for providers of GPAI models and other experts to jointly work on a code of practice. | Article 56 |
8/2/2025 | The main body of rules start to apply: Notifying authorities, GPAI models, governance, penalties, confidentiality (except rules on fines for GPAI providers). MSAs should also be appointed by member states. | Chapter III section 4, Chapter V, Chapter VII, Chapter XII, Article 78 (except Article 101). |
8/2/2026 | The remainder of the act will apply except Article 6(1). | |
8/2/2027 | Article 6(1) and the corresponding obligations in this regulation will apply; these relate to some high-risk AI systems covered by existing EU harmonization legislation (Annex I systems e.g. those covered by existing EU product safety legislation) and GPAIs that have been on the market before Aug. 2, 2025. However, some high-risk AI systems already subject to sector-specific regulation (listed in Annex I) will remain regulated by the authorities that control them today (e.g. for medical devices). | Article 6(1) |
What is the AI pact?
Before the EU AI Act becomes generally applicable, the European Commission will launch a voluntary AI pact aimed at bringing together AI developers from Europe and around the world to commit on a voluntary basis to implement key obligations of the EU AI Act ahead of the legal deadlines.
The European Commission has said that over 550 organizations have responded to the first call for interest in the AI pact, but whether that leads to widespread adoption remains to be seen. The Commission has published draft details of the pact to a select group outlining a series of voluntary commitments and is currently aiming to launch the AI pact in October.