Decoding the AI Act: Part 2

Key players of the AI Act

The AI Act is set to apply to a range of players: providers, deployers, importers, distributors, product manufacturers, and authorised representatives of providers of AI systems. Each of these roles carries a different level of compliance responsibility relating to, for example, data governance, transparency, and technical documentation. Because the AI Act follows a risk-based approach, the requirements imposed on a player also depend on the type of AI system involved. For instance, providers of high-risk AI systems will face the most stringent obligations, while those dealing with limited-risk AI systems will be subject only to lighter transparency requirements. It is therefore crucial to understand what your role is under the Act and how your use of the AI system is to be classified. In this article, we dive further into the scope of the AI Act and its key players. For information on what constitutes an AI system under the Act, please see Part 1 of our series on the AI Act (“Unravelling the definition of AI system”).

Providers

Providers are subject to most of the obligations under the AI Act. It is therefore important to understand whether an organisation falls under the legal definition of a provider. According to the AI Act, a “provider” is:

A natural or legal person, public authority, agency, or other body that:

(i) develops an AI system or a general-purpose AI model, or

(ii) has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

At the same time:

  • “Placing on the market” is to be understood as “the first making available of an AI system or a general-purpose AI model on the EU market”, and
  • “Putting into service” is to be understood as “the supply of an AI system by the provider for first use directly to the deployer or for its own use in the EU for its intended purpose”.

Deployers

The second important actor under the AI Act, with its own set of applicable obligations, is the deployer of AI systems.

According to the AI Act, a “deployer” is “a natural or legal person, public authority, agency, or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”

In other words, deployers are, for example, businesses that integrate AI solutions developed by another party into their own business operations (see the “household exception” for deployers below).

The AI Act applies to deployers of AI systems that have their place of establishment or are located within the EU.

Importers and distributors

The AI Act further applies to importers and distributors of AI systems.

An “importer” means “a natural or legal person located or established in the EU that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country”.

A “distributor” means “a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market.” 

At the same time, “making available on the market” is to be understood as “the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.”

In contrast to importers, distributors are not required to be established or located in the EU.

Importers and distributors may also contribute to the development of AI systems. In such situations, these operators act in more than one role at the same time and must cumulatively fulfil all obligations associated with each of those roles.
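To make the idea of cumulative roles concrete, here is a minimal Python sketch. It is purely illustrative: the `Operator` fields and the tests below are hypothetical simplifications of the Act's definitions, not a substitute for a case-by-case legal assessment.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()

@dataclass
class Operator:
    # Hypothetical, simplified facts about an operator.
    develops_or_commissions_ai: bool       # develops, or has developed, an AI system/GPAI model
    markets_under_own_name: bool           # places it on the EU market / puts it into service under its own name or trademark
    uses_ai_professionally: bool           # uses an AI system under its authority (non-household)
    located_or_established_in_eu: bool
    places_third_country_system_on_market: bool  # system bears a third-country entity's name or trademark
    makes_available_in_supply_chain: bool        # supplies the system on the EU market

def classify_roles(op: Operator) -> set[Role]:
    """Collect every role the operator holds; obligations attach cumulatively."""
    roles: set[Role] = set()
    if op.develops_or_commissions_ai and op.markets_under_own_name:
        roles.add(Role.PROVIDER)
    if op.uses_ai_professionally:
        roles.add(Role.DEPLOYER)
    if op.located_or_established_in_eu and op.places_third_country_system_on_market:
        roles.add(Role.IMPORTER)
    # A distributor is anyone else in the supply chain making the system available.
    if op.makes_available_in_supply_chain and not {Role.PROVIDER, Role.IMPORTER} & roles:
        roles.add(Role.DISTRIBUTOR)
    return roles

# Example: an EU importer that also deploys the system internally holds two roles.
importer_and_deployer = Operator(
    develops_or_commissions_ai=False, markets_under_own_name=False,
    uses_ai_professionally=True, located_or_established_in_eu=True,
    places_third_country_system_on_market=True, makes_available_in_supply_chain=False,
)
print(classify_roles(importer_and_deployer))  # {Role.DEPLOYER, Role.IMPORTER}
```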

Product manufacturers

The AI Act also applies to product manufacturers placing on the market or putting into service an AI system together with their product and under their name or trademark.

In the case of high-risk AI systems that are safety components of products covered by certain EU product safety laws, the product manufacturer assumes the role of the provider of the high-risk AI system and must comply with the applicable obligations of providers.

Extraterritorial scope of the AI Act

It should be noted that the AI Act adopts a model of extraterritorial scope similar to that of the GDPR. This means that the AI Act applies not only to providers and deployers established or located in the EU, but also to providers and deployers of AI systems that have their place of establishment or are located in a third country (that is, any country outside the EU), where the output produced by the AI system is used in the EU.

Specifically, this addresses situations where an operator established in the EU purchases services involving an AI system operated in a third country. This prevents regulatory circumvention and ensures protection for individuals in the EU by applying the Act to third-country providers and deployers of AI systems when their output is intended for use in the EU.
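As a rough illustration of this territorial logic, consider the following one-function sketch. The parameter names are hypothetical simplifications; in particular, whether output is "used in the EU" is a factual question that requires careful assessment.

```python
def territorially_in_scope(located_or_established_in_eu: bool,
                           output_used_in_eu: bool) -> bool:
    """First-pass check of the AI Act's territorial reach for providers and
    deployers: EU establishment or location brings an operator in scope, and
    so does use of the AI system's output in the EU, even for third-country
    operators. Illustrative only; not legal advice."""
    return located_or_established_in_eu or output_used_in_eu

# A third-country deployer whose system's output is used in the EU is in scope.
assert territorially_in_scope(located_or_established_in_eu=False,
                              output_used_in_eu=True)
```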

To create a level playing field for operators, the AI Act has also introduced the concept of “authorised representatives” of providers that are not established in the EU. An “authorised representative” means “a natural or legal person located or established in the EU who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by the AI Act.”

Authorised representatives shall, in particular, serve as contact persons for providers not established in the EU and ensure the compliance of AI systems placed on the market or put into service by the non-EU provider. The role is similar to the GDPR notion of the “EU Representative” for controllers and processors without a presence in the EU/EEA, set forth in Article 27 GDPR.

Exclusions from application

The AI Act sets forth several cases in which it does not apply:

  1. In areas outside the EU's competence, such as national security, and to AI systems that are placed on the market, put into service, or used, with or without modification, exclusively for military, defence, or national security purposes (regardless of the type of entity carrying out those activities).
  2. To AI systems or models (including their output) that are specifically developed and put into service for the sole purpose of scientific research and development.
  3. To any research, testing, or development activity regarding AI systems or models before they are placed on the market or put into service. Testing in real-world conditions, however, is not covered by this exclusion.
  4. To deployers who are natural persons using AI systems in the course of a purely personal, non-professional activity. This exception is similar to the “household exception” set forth in the GDPR.
  5. To public authorities in a third country or international organisations (captured in Article 1 of the AI Act), where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of the fundamental rights and freedoms of individuals.
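Purely as an illustration, these exclusions can be pictured as a short checklist. In the sketch below, the flag names are hypothetical and deliberately oversimplify the Act's actual conditions; the function returns a label for the first exclusion that applies, or None if none does.

```python
from typing import Optional

def first_exclusion(military_or_national_security_only: bool,
                    sole_purpose_scientific_rnd: bool,
                    pre_market_rnd_not_real_world: bool,
                    purely_personal_use: bool,
                    third_country_coop_with_safeguards: bool) -> Optional[str]:
    """Return the first exclusion that applies, or None if the AI Act
    remains applicable. Flag names are hypothetical simplifications."""
    if military_or_national_security_only:
        return "military / defence / national security"
    if sole_purpose_scientific_rnd:
        return "scientific research and development"
    if pre_market_rnd_not_real_world:
        return "pre-market research, testing, development (not real-world testing)"
    if purely_personal_use:
        return "household exception (personal, non-professional use)"
    if third_country_coop_with_safeguards:
        return "international law-enforcement / judicial cooperation with safeguards"
    return None

# Example: a natural person's purely private use falls under the household exception.
print(first_exclusion(False, False, False, True, False))
```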

Key Takeaways and Next Steps

In summary, the AI Act applies to organisations in all sectors that develop, distribute, use, or import AI systems on the EU market, and to those whose AI systems generate output used in the EU. Despite the differences in obligations under the Act, each operator plays an important role in managing the risks of AI.

To determine which requirements of the AI Act apply to your organisation, begin by identifying your role under the Act and how your use of AI is classified. If your organisation is not established in the EU, it is crucial to assess whether and to what extent the output of your AI systems or GPAI models will be used within the EU.

Stay tuned, as this series of articles will further clarify the different risk classifications of AI systems covered by the Act.
