The EU AI Act: enforcement overview

The EU Artificial Intelligence Act (the "AI Act") is a legal framework for the regulation of AI in the EU. While it became law on 1 August 2024, many of its provisions will not come into effect until 2025–2026.

Ahead of the majority of the AI Act's provisions coming into force, this article sets out how the Act will be enforced and the penalties for non-compliance with the legislation.

Regulatory bodies and their roles and responsibilities 

EU AI Office 

The AI Office was unveiled on 29 May 2024. It sits within the European Commission (the "Commission") and carries out the Commission's functions of supervising the use of AI systems and general-purpose AI models and taking enforcement action against non-compliance.

To that end, the AI Office will play a constructive role in developing model contract terms, guidelines and templates, and in facilitating the drawing up of codes of practice, particularly in relation to general-purpose AI models (see our article here on the AI Office's consultation on the first general-purpose AI Code of Practice, and the published first draft here). It will also receive reports of serious incidents and collaborate closely with the competent bodies in Member States. The AI Office is further responsible for fostering the development and use of trustworthy AI across the EU: it will provide advice on best practices and access to AI sandboxes, as well as other European support structures for AI uptake (such as the European Digital Innovation Hubs), and will support research and innovation activities in the fields of AI and robotics.

Crucially, the AI Office will have the exclusive power to monitor, supervise and enforce against providers of general-purpose AI models. These exclusive powers include the power to:

  • request documentation and information;
  • conduct evaluations of general-purpose AI models to assess compliance or investigate systemic risks;
  • request access to the model, including the source code; and
  • where necessary and appropriate, request that providers take measures to comply with their obligations, implement mitigation measures, or restrict the making available on the market of the model, or withdraw or recall it.

Failure to comply with any of these powers could result in a fine (see the Penalties section below).

Notably, Article 75 also provides that where an AI system is based on a general-purpose AI model, and the model and the system are developed by the same provider, the AI Office will also have powers to monitor and supervise compliance of that AI system and will have all powers of a market surveillance authority (outlined under the National Competent Authorities section below).

AI Board

The AI Office is supported by the AI Board (the "Board"), which will be composed of one representative per Member State. Article 66 sets out that it is the Board's role to advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. The Board can be thought of as playing a role similar to the European Data Protection Board ("EDPB") under the GDPR.

Its particular tasks include:

  • contributing to coordination between national competent authorities and market surveillance authorities, including in conducting joint investigations;
  • collecting and sharing technical and regulatory practices and expertise among Member States;
  • providing advice on the implementation of the AI Act and enforcement of the rules on general-purpose AI models;
  • supporting the promotion of AI literacy and public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems.

Advisory Forum and Scientific Panel

An advisory forum will also be established to provide technical expertise and advise the Board and the Commission, and to contribute to their tasks under the Act.

The Commission will also be supported by a scientific panel of independent experts, which will assist with enforcement activities under the Act.

National Competent Authorities

National Competent Authorities ("NCAs") are responsible for implementing the AI Act on a national level.

Each Member State must, by 2 August 2025, designate at least one of each of the following bodies to act as NCAs in its jurisdiction:

  • notifying authority – the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring; and

  • market surveillance authority – the national authority responsible for overseeing compliance with the AI Act in the Member State and acting as a single point of contact for that Member State. Each Member State may designate one or more market surveillance authorities, which will act as the market surveillance authority under the EU product safety regime (Regulation (EU) 2019/1020) and will exercise powers in accordance with that regime in addition to the powers granted under the AI Act. The powers under the EU product safety regime include the power to request documentation and information, to carry out unannounced on-site inspections, and to prohibit or restrict the making available of an AI system. For high-risk AI systems related to products covered by the EU harmonisation legislation listed in Section A of Annex I of the AI Act, the authority responsible for market surveillance activities will remain the authority designated under those legal acts.

Member States must ensure that their NCAs are provided with adequate technical, financial and human resources, and with infrastructure to fulfil their tasks effectively under the Act. By 2 August 2025 and thereafter once every two years, each Member State must report to the Commission the status of the financial and human resources of the NCAs, with an assessment of their adequacy. The NCAs may provide guidance and advice on the implementation of the Act.

The EDPB has suggested that each Member State's current data protection authority should be appointed as the market surveillance authority for high-risk AI systems that are likely to impact "natural persons' rights and freedoms with regard to the processing of personal data". Please see our article here for more information on the EDPB's statement.

Notified Bodies

Notified bodies are conformity assessment bodies notified in accordance with the Act and other relevant EU harmonisation legislation. They are responsible for performing the conformity assessment activities for high-risk AI systems, including testing, certification and inspection, according to the procedures set out in Article 43. They will be monitored by the notifying authorities referred to above.

Notified bodies must be independent of the provider of a high-risk AI system in relation to which they perform conformity assessment activities, of any other operator that has an economic interest in the high-risk AI systems assessed, and of any competitors of the provider.

Penalties
Breach: Non-compliance with the prohibition on certain AI practices. Note that while the provisions in relation to these prohibitions come into effect on 2 February 2025, the penalties will apply from 2 August 2025.

Maximum penalty – whichever is the higher of:

  • €35 million; or
  • 7% of worldwide annual turnover.

Breach: Non-compliance of an AI system with any of the provisions related to operators or notified bodies (other than the prohibited AI practices). Those provisions are:

  • obligations of high-risk AI system providers pursuant to Article 16;
  • obligations of authorised representatives of providers of high-risk AI systems pursuant to Article 22;
  • obligations of high-risk AI system importers pursuant to Article 23;
  • obligations of high-risk AI system distributors pursuant to Article 24;
  • obligations of high-risk AI system deployers pursuant to Article 26;
  • requirements and obligations of notified bodies for high-risk AI systems pursuant to Articles 31, 33(1), 33(3), 33(4) or 34; or
  • transparency obligations for providers and deployers pursuant to Article 50.

These penalties apply from 2 August 2025, although most obligations in relation to high-risk AI systems or in respect of transparency do not come into force until 2 August 2026.

Maximum penalty – whichever is the higher of:

  • €15 million; or
  • 3% of worldwide annual turnover.

Breach: Supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request. These penalties apply from 2 August 2025.

Maximum penalty – whichever is the higher of:

  • €7.5 million; or
  • 1% of worldwide annual turnover.

When deciding whether to impose a fine, and the amount of that fine, all relevant circumstances of the specific situation must be taken into account, and the national competent authorities must also consider:

  • the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as the number of people affected by the infringement and the level of damage they suffered;
  • whether the operator has already received a fine for the same infringement from another market surveillance authority of another Member State;
  • whether the operator has already received a fine from other authorities for breaches of other EU or national laws, when the breaches result from the same activity or omission that constitutes an infringement of the AI Act;
  • the size, annual turnover and market share of the operator committing the breach;
  • any other aggravating or mitigating factor applicable to the circumstances of the breach (for example, if any financial benefits were gained as a result of the breach);
  • the degree of cooperation with the national competent authorities in order to remedy the breach;
  • the degree of responsibility of the operator (taking into account the technical and organisational measures it has implemented);
  • the manner in which the breach became known to the national competent authorities, for instance whether the operator notified the authorities of the breach itself;
  • the negligent or intentional character of the infringement; and
  • any action taken by the operator to mitigate the harm any affected people suffered due to the breach.

In the case of SMEs, each fine referred to above will be capped at the lower of the two figures – the fixed amount or the percentage of turnover.
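The ceilings above, together with the SME rule, reduce to a simple comparison: compute the turnover-based figure for the relevant tier, then take the higher of the two caps (or, for an SME, the lower). A minimal sketch of that arithmetic, assuming whole-euro integer amounts (the function name and signature are illustrative, not taken from the Act):

```python
def fine_ceiling(fixed_cap: int, turnover_pct: int, turnover: int,
                 is_sme: bool = False) -> int:
    """Maximum fine for one penalty tier of the AI Act.

    fixed_cap    -- the fixed ceiling in euros (e.g. 35_000_000)
    turnover_pct -- the turnover ceiling as a whole percentage (e.g. 7)
    turnover     -- worldwide annual turnover in euros
    For SMEs the lower of the two figures applies; otherwise the higher.
    """
    turnover_cap = turnover * turnover_pct // 100  # integer arithmetic, exact
    return (min if is_sme else max)(fixed_cap, turnover_cap)

# Prohibited-practices tier (EUR 35m / 7%) for a company with EUR 1bn turnover:
print(fine_ceiling(35_000_000, 7, 1_000_000_000))            # 70000000
# The same breach by an SME with EUR 10m turnover:
print(fine_ceiling(35_000_000, 7, 10_000_000, is_sme=True))  # 700000
```

For a large company the turnover percentage usually dominates; for an SME the same rule works in its favour, since the lower figure applies.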

Fines for providers of general-purpose AI

Article 101 sets out that providers of general-purpose AI models may be fined where they:

  • breach the general-purpose AI model provisions of the AI Act;
  • fail to comply with a request for a document or information, or supply incorrect, incomplete or misleading information;
  • fail to comply with a measure requested by the Commission under Article 93; or
  • fail to provide the Commission with access to the model in order to conduct an evaluation in accordance with Article 92.

An infringement will attract a fine of up to €15 million or 3% of worldwide annual turnover, whichever is the higher.

While the provisions in relation to general-purpose AI will come into effect on 2 August 2025, the penalty provisions will not apply until 2 August 2026.

Key Takeaways

Given this complex network of regulatory bodies, each with broad powers, organisations should begin establishing an AI governance strategy and practices now to ensure that they are compliant with the law.