EU: Obligations on providers of GPAI models under the EU AI Act

EU: Obligations on providers of GPAI models under the EU AI Act

Summary

The EU AI Act introduces obligations for providers of general-purpose AI (GPAI) models, including those with systemic risk, defining GPAI models as AI capable of performing a wide range of tasks and requiring integration into downstream systems. Providers must adhere to documentation, cooperation, and risk mitigation requirements, with additional obligations for models with systemic risk, such as adversarial testing and cybersecurity measures. The Act, effective from August 2, 2025, with a grace period for fines until August 2, 2026, grants the European Commission exclusive enforcement powers, including fines up to 3% of annual worldwide turnover or €15 million. A scientific panel will support compliance monitoring, and providers can demonstrate compliance through codes of practice and harmonized standards.

The EU Artificial Intelligence Act (the AI Act) is set to become a landmark regulation governing artificial intelligence (AI). It introduces requirements and responsibilities for providers (and those treated as providers) of generalpurpose AI (GPAI) models. With respect to GPAI models, a provider is a natural or legal person or body that develops or has a GPAI model developed and places that model on the market under its own name or trademark, whether for payment or free of charge. This includes organizations that outsource the development of a GPAI model and then place it on the market. The concept of GPAI models was not in the original text of the AI Act when it was first proposed in 2021. But articles and soon a chapter on GPAI models were added after the proliferation of models like OpenAI's GPT-3, generating much debate during negotiations for the Act. In this Insight article, Katie Hewson and Eva Lu, from Stephenson Harwood LLP, examine the definition of GPAI models under the Act, as well as the sub-category of GPAI models with systemic risk and the obligations of providers of these models.

Definition of GPAI models

A GPAI model is defined as an 'AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development, or prototyping activities before they are placed on the market.'

From this definition, the key characteristics of these models include:

  • Generality - These models can competently perform a wide range of distinct tasks, such as text synthesis, image manipulation, and audio generation. Models with at least a billion parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinct tasks.
  • Training data - The models are typically trained on large datasets through various methods, such as self-supervised, unsupervised, or reinforcement learning.
  • Integration - While essential, these models alone do not constitute AI systems. They require additional components, such as user interfaces, to be integrated into various downstream systems or applications.
  • Placed on the market - These models can be placed on the market in various ways, through libraries, APIs, as direct download, as a physical copy, or after being integrated into an AI system, if that AI system is made available on the market or put into service. However, this excludes models used for purely internal processes that are not essential for providing a product or a service to third parties and the rights of natural persons are not affected.

Prominent examples of such models are GPT-4, DALL-E, Google BERT, or Midjourney 5.1.

It is likely that models that are modified or fine-tuned into new models could also constitute a separate GPAI model. More difficult questions of definition are also likely to arise as large language models (LLMs) are replaced by small language models that may not be said to display 'significant generality,' and which perform a narrower range of tasks in specific contexts or applications.

Classification of GPAI models with systemic risk

In addition, a GPAI model will be classified as a GPAI model with systemic risk if it meets one of the requirements in Article 51(1), namely:
 

  • it has high-impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks. Under Article 51(2), a GPAI model will be presumed to have high-impact capabilities when the cumulative amount of computation used for its training measured in floating point operations is greater than 10. A provider of a GPAI model with systemic risk under this condition must notify the European Commission without delay and in any event within two weeks after the requirement is met or it becomes known that it will be met; or
  • based on a decision of the European Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out above, having regard to the criteria set out in Annex XIII, which are:
      
     

    • onumber of parameters of the model;
    • quality or size of the dataset;
    • amount of computation used for training the model;
    • the input and output modalities of the model;
    • the benchmarks and evaluations or capabilities of the model;
    • whether it has a high impact on the internal market due to its reach (presumed to be met when made available to at least 10,000 registered business users established in the EU); and
    • number of registered end users

    Systemic risk is defined under the Act as 'a risk that is specific to the high-impact capabilities of GPAI models, having a significant impact on the EU market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.'

    Some of the systemic risks identified include any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety, any actual or reasonably foreseeable negative effects on democratic processes, public and economic security, and the dissemination of illegal, false, or discriminatory content.

    'High-impact capabilities' is defined under the Act to mean 'capabilities that match or exceed the capabilities recorded in the most advanced GPAI models.'

    The provider of a GPAI model can present arguments that the model should not be classified as a GPAI model with systemic risk, but the European Commission will make the conclusion. The European Commission will also publish a list of GPAI models with systemic risk, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets under EU laws.

    Integration with AI systems

    It is worth noting that while GPAI models appear to be a separate category to other risk-based AI systems under the AI Act, GPAI models, when integrated into an AI system, will be regulated under the Act according to the risk of that AI system, for example, prohibited, high-risk, or those with specific transparency risk.

    Specifically, underArticle 25(4) as part of the AI value chain of a high-riskAI system, providers of GPAI models will need to assist and enable the provider of a high-risk AI system to fully comply with the obligations of the Act.

    Further, underArticle 50(2), providers of AI systems, including GPAI systems, generating synthetic audio, image, video, or text content must ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.

    A GPAI system is defined under the Act as 'an AI system which is based on a GPAI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in otherAI systems.'

    Obligations on providers of GPAI models

    The obligations of providers of GPAI models are set out in Article 53 of the AI Act, which are to:

    • draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which must contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities;
    • draw up, keep up-to-date, and make available information and documentation to providers of AI systems who intend to integrate the GPAI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets under EU laws, the information and documentation must enable providers of AI systems to have a good understanding of the capabilities and limitations of the GPAI model and to comply with their obligations under the Act and contain, at a minimum, the elements set out in Annex XII;
    • put in place a policy to comply with EU law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790, the Directive on Copyright in the Digital Single Market;
    • draw up and make publicly available a sufficiently detailed summary about the content used for training of the GPAI model, according to a template provided by the AI Office; and
    • cooperate as necessary with the European Commission and the national competent authorities in the exercise of their competencies and powers pursuant to the Act.

    The first two documentation obligations do not apply to GPAI models (but not those with systemic risk) that are released under a free and open-source license that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available.

    Additional obligations on authorized representatives of providers of GPAI models are set out in Article 54.

    Obligations on providers of GPAI models with systemic risk

    In addition to the obligations in Articles 53 and 54 of the Act, providers of GPAI models with systemic risk must also comply with Article 55, which targets the specific risks associated with these models. The obligations are to:

    • perform model evaluations in accordance with standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;
    • assess and mitigate possible systemic risks at the EU level, including their sources, that may stem from the development, the placing on the market, or the use of GPAI models with systemic risk;
    • keep track of, document, and report, without undue delay, to the AI Office and national competent authorities as appropriate, relevant information about serious incidents and possible corrective measures to address them; and
    • ensure an adequate level of cybersecurity protection for the GPAI model with systemic risk and the physical infrastructure of the model

    Regulating GPAI models

    Providers of GPAI models that have been placed on the market before August 2, 2025, must take the necessary steps to comply with the obligations in the Act by August 2, 2027. For new models, obligations come into force on August 2, 2025. There will be a one-year grace period for fines on providers of GPAI models, which will not commence until August 2, 2026.

    Unlike other AI systems under the AI Act, underArticle 88 the European Commission (via the newly formed AI Office) has exclusive powers to supervise and enforce obligations of GPAI models. The AI Office may take the necessary actions to monitor the effective implementation and compliance with the Act by providers of GPAI models, including:

    • their adherence to approved codes of practice as well as having extensive documentation and information request powers (Article 91);
    • power to conduct evaluations (Article 92), and power to request measures to comply with the Act, mitigation measures; and
    • to restrict the making available on the market, withdraw, or recall the model (Article 93)

    A scientific panel of independent experts will also be formed to support the monitoring activities and provide qualified alerts to the AI Office.

    The European Commission may impose fines on providers of GPAI models up to 3% of their annual total worldwide turnover in the preceding financial year or €15 million, whichever is higher.

    Providers of GPAI models can rely on codes of practice and harmonized standards to demonstrate compliance with obligations under the Act. On July 30, 2024, the AI Office opened a call for expression of interest to participate in the drawing up of the first GPAI Code of Practice as well as a multi-stakeholder consultation on trustworthy GPAI models under the Act.