The roles of the provider and deployer in AI systems and models

The EU Artificial Intelligence Act ("AI Act", "Act") is set to become a landmark regulation governing AI. It introduces stringent requirements and responsibilities for various actors in the AI value chain. Two key actors in that value chain are AI providers and deployers.

Determining whether an entity is a provider or deployer is crucial, as the roles carry distinct obligations under the Act. As with the General Data Protection Regulation and its distinction between controller and processor, the classification of provider or deployer will always require an assessment of the facts, rather than simply being allocated through contractual arrangements.

In practice, as businesses increasingly seek AI solutions trained on their proprietary data and materials to achieve outputs that are more tailored to their needs, the line between a provider and deployer may become blurred. It may even be possible for an entity to be both provider and deployer for the same AI system, depending on how it is implemented.

This article examines the definition of provider and deployer under the Act, the differences in obligations and risk exposure between the two, and some steps organizations can take to mitigate those risks. There are several other operators in the AI value chain - product manufacturer, importer, distributor, and authorized representative - which are not covered in this article.

AI Act's risk-based approach

The AI Act takes a “risk-based approach” – the higher the risk of an AI system or model, the stricter the rules. Providers and deployers will have different obligations under the AI Act depending on the risk level of the system or model involved.

AI systems that pose an “unacceptable risk” will be prohibited, while stringent regulatory requirements will be imposed on “high-risk” AI systems and general-purpose AI models. High-risk AI systems are classified in Article 6 and include AI used in product safety, certain biometric technologies, recruitment and employment, essential public infrastructure (utilities), and the insurance and banking sectors.

A separate layer of obligations applies to general-purpose AI models, which are defined as AI models that display significant generality and are capable of competently performing a wide range of distinct tasks. These are likely to include foundation models. Additional obligations apply to general-purpose AI models that pose "systemic risks", due to their high impact capabilities. Certain AI systems are also subject to transparency obligations. This article covers provider and deployer obligations in each of these contexts.

Definition of provider and deployer

A provider under the AI Act is defined by Article 3(3) as a natural or legal person or body that:

  • develops an AI system or general-purpose AI model or has an AI system or general-purpose AI model developed; and
  • places that system or model on the market, or puts that system into service, under the provider's own name or trademark, whether for payment or free of charge.

A deployer, as defined under Article 3(4) of the AI Act, is a natural or legal person or body using an AI system under its authority, except in the course of a personal non-professional activity.

Territorial scope of the AI Act for providers and deployers

Under Article 2(1) of the AI Act, providers will be within the scope of the Act if they are:

  • placing on the market or putting into service AI systems in the EU (regardless of where the provider is located);
  • placing on the market general-purpose AI models in the EU (regardless of where the provider is located); or
  • established or located outside of the EU, where the output produced by the AI system is used in the EU.

Deployers will be within the scope of the Act if they are:

  • established or located in the EU; or
  • established or located outside of the EU, where the output produced by the AI system is used in the EU.

Allocating roles of providers and deployers

Providers

Under the AI Act, providers bear overall responsibility for ensuring the compliance and safety of AI systems. Traditionally, it would be the entity that designs, builds, or develops an AI model or system, such as an AI developer or machine learning ("ML") specialist company, which would be seen as the provider. However, under the AI Act definition, even if an entity outsources the development of the AI system or general-purpose AI model but is responsible for placing it onto the market or into service, that entity would be the provider.

For example, where an entity procures the services of a third-party ML specialist to design or develop the AI system for it, using the entity's own data, materials, or even bespoke algorithms, that entity is likely to be the provider if it then places the system on the market or puts it into service under its own name or trademark.

In this example, it is possible that the AI developer or ML specialist, because it does not place the AI system on the market or put it into service, plays no role and therefore has no obligations under the AI Act. It is also possible that the procuring entity and its AI developer or ML specialist are both providers, for example where components of the AI system, such as a general-purpose AI model it uses, are placed on the market by the developer or specialist company.

Deployers

Deployers under the AI Act have the critical responsibility of ensuring the safe and compliant use of AI systems when they are rolled out. In the example above, the entity procuring the services, which may be classed as a provider, will also be the deployer if it puts the newly designed AI system into service and uses it under its authority.

Becoming providers or deployers under the AI Act for output used in the EU

In addition, as noted in the territorial scope section above, an entity established or located outside the EU that develops or uses an AI system may also become a provider or deployer under the AI Act if the output produced by the AI system is used in the EU, even where the AI system itself is not placed on the market, put into service, or used in the EU.

While Recital 22 of the Act narrows this to situations where the output is intended to be used in the EU, what constitutes such "intention" remains open to interpretation pending further guidance. The example given in Recital 22 concerns a provider or deployer established in the EU that contracts certain services to a provider or deployer established in a third country, in relation to an activity to be performed by an AI system that would qualify as high-risk. The AI system used in the third country could process data lawfully collected in and transferred from the EU and provide the resulting output to the contracting provider or deployer in the EU, without the AI system itself being placed on the market, put into service, or used in the EU. The Act contemplates that, in such circumstances, the provider or deployer of the AI system in the third country would be a provider or deployer under the EU AI Act.

It would therefore be prudent for entities established or located outside the EU that develop or use AI systems to include clear contractual provisions with their downstream third parties, particularly users of their AI systems or of the outputs those systems produce, specifying that outputs of the AI system are not intended for use in the EU and requiring the third party to ensure that outputs are not used in the EU.

Deployers becoming providers of high-risk AI systems

The AI Act sets out certain conditions under which a deployer could become a provider. This will only apply in connection with high-risk AI systems, as classified under Article 6 of the Act.

Article 25(1) provides that a deployer or other third party will be considered to be a provider of a high-risk AI system and will therefore assume all the relevant obligations of a provider, if it:

  • puts its name or trademark on a high-risk AI system placed on the market or put into service (although contractual arrangements may stipulate how the provider obligations are allocated);
  • makes a substantial modification to a high-risk AI system placed on the market or put into service such that it remains a high-risk AI system; or
  • modifies the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and is on the market or in service, in such a way that the AI system becomes a high-risk AI system.

In these circumstances, the initial provider may no longer be considered a provider of that specific AI system. However, Article 25(2) stipulates that it will need to cooperate closely with the new provider, make available the necessary information, and provide the reasonably expected technical access and other assistance required for the new provider to fulfil its obligations under the AI Act. This is particularly important where the new provider is only putting its name or trademark on the high-risk AI system and has had little involvement in its design or development; as noted above, clear contractual provisions should be in place to this effect.

Article 25(2) also provides that the initial provider is not obliged to undertake such cooperation if it has clearly specified that its AI system is not to be changed into a high-risk AI system. It is therefore particularly important for providers of AI systems to include a clear and enforceable contractual provision to this effect with their downstream third parties, including deployers, that have the capability of modifying the AI system.

Equally, a deployer of in-scope AI systems may want to include a clear provision in its third-party supplier contracts requiring suppliers not to take, on the deployer's behalf, any of the actions set out in Article 25(1) that could turn the deployer into a provider of a high-risk AI system.

Obligations of providers and deployers

The distinction between the definitions of provider and deployer is crucial because the bulk of the obligations under the AI Act are imposed on providers. As mentioned, these will vary depending on the risk-level posed by the AI system or model.

Provider obligations for high-risk AI systems

Providers of high-risk AI systems must perform a prior conformity assessment to ensure the system complies with the requirements set out in Chapter III, Section 2 of the Act. The high-risk AI system must then be registered, a declaration of conformity drawn up, and the CE marking affixed. The Chapter III, Section 2 requirements cover:

  • a risk management system;
  • data and data governance;
  • technical documentation;
  • record keeping;
  • transparency;
  • human oversight; and
  • accuracy, robustness, and cybersecurity.

Providers of high-risk AI systems must also:

  • establish a quality management system;
  • establish a post-market monitoring system;
  • keep logs and documentation;
  • take corrective actions if the system presents a risk to the health or safety, or to the fundamental rights of persons;
  • report serious incidents;
  • appoint an authorized representative if not established in the EU; and
  • cooperate with and provide information to competent authorities.

The Act also recognizes that, along the AI value chain, multiple parties often supply not only AI systems, tools, and services but also components or processes, with various objectives, that are incorporated by the provider into the AI system. These parties have an important role to play in the value chain. Article 25 of the Act therefore also requires the provider of a high-risk AI system to enter into detailed written contractual terms with any third-party suppliers of other AI systems, tools, services, components, or processes that are used or integrated into the high-risk AI system. These terms must specify the necessary information, capabilities, technical access, and other assistance, based on the generally acknowledged state of the art, to enable the provider of the high-risk AI system to fully comply with the obligations set out in the Act. The Act also provides that the AI Office may develop and recommend voluntary model contractual terms for this purpose, to facilitate cooperation along the AI value chain.

Deployer obligations for high-risk AI systems

Deployers of high-risk AI systems are responsible for:

  • using the AI system in accordance with instructions;
  • assigning human oversight;
  • keeping logs; and
  • monitoring the performance and compliance of the AI system.

As deployers are more likely to have direct interaction with individual end users, they are also responsible for:

  • informing individuals that they are subject to the use of a high-risk AI system;
  • conducting fundamental rights impact assessments, where required;
  • explaining decisions to individuals;
  • reporting if the system presents a risk to the health or safety, or to the fundamental rights of persons;
  • reporting serious incidents; and
  • cooperating with and providing information to competent authorities.

Provider and deployer obligations for certain AI systems

Under Article 4 of the Act, both providers and deployers of any AI systems within the scope of the Act must take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff. This should take into account the staff's technical knowledge, experience, education, and training, as well as the context in which the AI systems are to be used, including the persons on whom the AI systems are to be used.

Regardless of the risk level, providers and deployers also both have a range of transparency obligations under Article 50 of the Act to provide information in a clear and distinguishable manner, at the latest at the time of the first interaction with, or exposure to, an AI system.

Providers, who have more responsibility for the design and development of an AI system, must ensure that:

  • AI systems intended to interact directly with individuals are designed and developed in a way that the individual is informed that they are interacting with an AI system unless this is obvious; and
  • outputs generated or manipulated by an AI system are marked in a machine-readable format and detectable as such.

Deployers, who have more direct interaction with individuals, must:

  • inform individuals when they are exposed to the operation of an emotion recognition or biometric categorization system;
  • ensure deep fakes generated or manipulated by an AI system are disclosed as such; and
  • ensure text generated or manipulated by an AI system which is published with the purpose of informing the public on matters of public interest is disclosed as such.

AI systems authorized by law to detect, prevent, investigate, or prosecute criminal offences are, subject to appropriate safeguards, exempt from these obligations.

Provider obligations for general-purpose AI models

Providers are the only operators that bear responsibilities for general-purpose AI models, including where they pose systemic risk.

Providers of general-purpose AI models must:

  • draw up detailed technical documentation of the model;
  • provide information and documentation that enable downstream providers integrating the model into their AI systems to understand its capabilities and limitations;
  • put in place a policy to comply with copyright law;
  • draw up and make publicly available a sufficiently detailed summary about the content used for training; and
  • appoint an authorized representative if not established in the EU.

In addition, providers of general-purpose AI models posing systemic risks must also:

  • notify the European Commission;
  • conduct model evaluations;
  • assess and mitigate possible systemic risks;
  • keep track of, document, and report, without undue delay, serious incidents and corrective measures to address them; and
  • ensure an adequate level of cybersecurity protection.

Providers of general-purpose AI models will be subject to a higher level of scrutiny by the European Commission.

Deployer obligations for general-purpose AI models

Deployers do not have any obligations in relation to general-purpose AI models alone. However, they may have obligations in relation to any AI system of which a general-purpose model forms part, and these obligations will depend on the risk level of the AI system.

Tips for managing risk

Given that providers bear most of the responsibilities and compliance requirements under the AI Act, correct pre-contract and contractual classification of the parties' relative roles as provider and deployer is vital for managing legal and reputational risk exposure in the event of regulatory challenges or litigation. Entities should not assume that, because they are engaging a third party to help them design and develop the AI system, the third party will be the provider and will bear all the responsibilities of compliance with the AI Act. In some cases, such a third party may not be a provider at all, or it may be a provider with its client acting as a (separate) provider.

Aside from the Act's specific requirements (for example, as noted above, the requirement for providers of high-risk AI systems to enter into detailed written contractual terms with any third-party suppliers of other AI systems, tools, services, components, or processes that are used or integrated into the high-risk AI system), it will be crucial to specify contractually the parties' respective roles and responsibilities with respect to the AI system. It will also be essential, where necessary, to ensure that suppliers provide sufficient cooperation and assistance to support their clients (whether acting as provider or deployer) in complying with the AI Act. The parties will also need to allocate liability contractually between them, covering, among other areas, non-compliance with the AI Act, as well as claims, losses, and damages arising from the application of the Act and the law more generally.

As between a provider and deployer, where they are two different entities, deployers may have less onerous obligations under the AI Act, but they are not necessarily exposed to less risk. This is because deployers will be responsible for verifying their provider's compliance with the AI Act and the AI system's performance. Given the current reluctance of many leading AI and ML developers to reveal the workings of their models and systems, this could be a challenge in practice, at least in the early days of the AI Act's operation. Deployers should also ensure that the provider offers sufficient cooperation and assistance to support the deployer's compliance with the AI Act.

No doubt, over time, we shall see a wide range of new forms of contract develop to reflect the allocation of roles, risks, and liabilities between providers and deployers.

Authors

Katie Hewson, partner, katie.hewson@shlegal.com

Eva Lu, associate, eva.lu@shlegal.com