EU AI Act: what it means for you

On December 9th 2023, after 36 hours of extended negotiations, a political agreement was reached among the European Parliament, the Council of the European Union, and the European Commission on the EU AI Act – a groundbreaking AI regulation covering both EU and non-EU organizations.

The final text of the AI Act was formally approved by the Committee of Permanent Representatives (COREPER) on February 2nd 2024, and is expected to go through the last stage of voting in the European Parliament in April of this year. Afterwards, the regulation will be published in the Official Journal of the European Union, which starts the clock on the law’s enforcement deadlines. This will impact all AI systems whose outputs are used in the EU, even if the developer is not based in the EU.

In this article, we will cover the key things you need to know about the AI Act, and how it may impact your business.

What is the AI Act?

The EU AI Act is the world’s first mandatory large-scale regulation on AI usage, and has the potential to be the international baseline for non-EU regions as well. We can draw parallels to the GDPR, which set the gold standard of data privacy even for non-EU companies, and spawned similar regulations such as the CCPA in California.

The AI Act takes a “risk-based” approach, scaling the stringency of requirements based on the inherent risk of an AI system. You can learn more about how this approach differs from other frameworks and standards in our previous blog post on the state of AI regulation.

As explained below, AI systems categorized as high-risk will feel the most impact from this regulation. Developers of such high-risk AI systems must go through a third-party conformity assessment in order to obtain a CE mark and access the EU market.

Do the requirements differ for developers and users of AI systems?

The most recent version of the AI Act, approved by the Committee of Permanent Representatives on February 2nd, provides additional information on the roles and their responsibilities across the AI value chain. The law defines six key roles:

  1. Provider. An entity (e.g. company) that develops an AI system and “places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge”.
  2. Deployer. An entity that uses an AI system under its authority, except in personal non-professional activities. Note that this does not refer to the affected end users.
  3. Importer. An EU entity that places a non-EU AI system on the EU market.
  4. Distributor. An entity in the supply chain, other than the provider or the importer, that makes an AI system available in the EU market.
  5. Authorized Representative. An EU entity that carries out obligations on behalf of a Provider.
  6. Operator. An entity that falls into any of the five categories above.

Operators could act in more than one role at the same time and should therefore fulfill cumulatively all relevant obligations associated with those roles. For example, an operator could act as a distributor and an importer at the same time.

Recital 56a

Providers bear the majority of responsibilities under the AI Act. Providers of high-risk AI systems will need to comply with Article 16, and by extension:

  • Article 9. Implementing risk management processes.
  • Article 10. Data and Data Governance. Using high-quality training, validation and testing data.
  • Article 11. Establishing documentation as defined by Annex IV.
  • Article 12. Implementing automatic logging.
  • Article 13. Ensuring an appropriate level of transparency with Deployers.
  • Article 14. Ensuring human oversight measures.
  • Article 15. Ensuring robustness, accuracy and cybersecurity.
  • Article 17. Setting up a Quality Management System (e.g. based on ISO 42001).
  • Article 18. Keeping documentation for 10 years at the disposal of the national competent authorities.
  • Article 20. Storing the automatically generated logs for at least 6 months.
  • Article 21. Corrective actions and duty of information when the system is not in conformity.
  • Article 23. Providing the necessary information to competent authorities.
  • Article 51. Registering the high-risk model in the EU Database.

Deployers of high-risk AI systems have lighter obligations, and will need to comply with Article 29. This includes exercising due diligence in using the system; monitoring and logging the system in use; and, under specific conditions, carrying out a fundamental rights impact assessment.

However, a Deployer, Importer, or Distributor is considered a Provider of a high-risk AI system if they:

  • Change the name or a trademark of the AI system
  • Make a substantial modification to the AI system
  • Modify the intended purpose of a non-high-risk AI system already on the market, in such a manner that makes the AI system high-risk

In the cases above, the initial Provider is no longer subject to the usual obligations, but must provide “the reasonably expected technical access” to the new Provider.

Importers and Distributors are subject to the obligations in Article 26 and Article 27, respectively. These include verifying that the Provider has complied with their obligations, and that the system has gone through the relevant conformity assessment and bears the CE mark.

Authorized Representatives are subject to the obligations in Article 25, which outlines cooperation with the relevant authorities and submission of relevant documents.

The following diagram can be useful to understand your entity’s (“operator’s”) position in the AI system supply chain as defined by the AI Act. Note that this is a simplified version that makes some assumptions; for example, that the AI system’s output is located or used within the EU. We also recommend using the interactive AI Act Compliance Checker tool developed by the Future of Life Institute. 

What are the requirements for different types of AI systems?

The Act defines several categories of AI systems with different obligations. In principle, these categories are not mutually exclusive, and the same AI system may fall under a mix of obligations. For example, a foundation model that’s used in a medical device and directly interacts with end users might need to fulfill High-Risk, GPAI, and Transparency obligations all at once.

Prohibited AI

According to Article 5, these use cases are considered unacceptable and will be prohibited, with some exceptions for law enforcement activities. Examples: real-time biometric identification, social scoring, manipulative techniques.

High-Risk AI

These use cases are considered highly harmful if misused, and will need to go through a third-party conformity assessment before being placed on the market. According to Article 6, an AI system is considered high-risk under either of the following conditions:

  1. It’s used as a safety component of a product, or is itself a product that’s already regulated by existing EU laws listed in Annex II, and is required to undergo a third-party conformity assessment under those Annex II laws; or,
  2. It falls under Annex III, unless it can be shown not to pose a significant risk of harm to health, safety, or fundamental rights.

We will now cover the two points above in more detail.


Annex II

One of the major points of deliberation during the AI Act’s development was alignment with existing EU harmonisation legislation that regulates specific product categories. This is covered by Annex II.

Annex II, Section A: List of Union Harmonisation Legislation Based on the New Legislative Framework

Includes AI systems, or the products for which the AI system is a safety component, that fall under the following categories. As per Article 28(2a), the manufacturer of such products is considered the Provider of the included high-risk AI system if it’s placed on the market under the manufacturer’s name or trademark:

  • Machinery
  • Toys
  • Recreational craft and personal watercraft
  • Lifts
  • Equipment and protective systems intended for use in potentially explosive atmospheres
  • Radio equipment
  • Pressure equipment
  • Cableway installations
  • Personal protective equipment
  • Appliances burning gaseous fuels
  • Medical devices
  • In vitro diagnostic medical devices

According to Article 6(1), if an AI system is covered by the above categories, and already needs to go through third-party assessment under the relevant legislation, it’s considered high-risk – with all the relevant obligations applicable to the Provider.

However, the exact interplay between the AI Act and existing regulations, e.g. the Medical Devices Regulation (MDR), is not yet finalized and remains a point of discussion. Obligations for AI systems in Annex II will be enforced a year later than other high-risk systems to provide sufficient time to make the requirements compatible.

Annex II, Section B: Other devices that fall under the Union Harmonisation Legislation

Includes AI systems, or the product for which the AI system is a safety component, that fall under the following categories:

  • Civil aviation security
  • Two- or three-wheel vehicles and quadricycles
  • Agricultural and forestry vehicles
  • Marine equipment
  • Interoperability of the rail system
  • Motor vehicles and their trailers
  • Civil aviation

According to Article 2(2), if an AI system is covered by the above categories, and already needs to go through third-party assessment under the relevant legislation, it’s categorized as high-risk. However, unlike for use cases under Annex II Section A above, only Article 84 shall apply: the Provider is only obligated to comply with the existing harmonised legislation, and is exempt from the regular high-risk AI system rules.


Annex III

Annex III provides the list of other high-risk use cases that are not covered by existing harmonisation legislation:

  • Biometrics (insofar as permitted under relevant Union or national law)
  • Critical infrastructure (e.g. safety component in water, gas, heating or electricity infrastructure)
  • Education and vocational training (e.g. determining admission to training institutions)
  • Employment, workers management and access to self-employment (e.g. monitoring and evaluating performance; targeted job ads)
  • Access to and enjoyment of essential public and private services (e.g. assessing eligibility to benefits; evaluating creditworthiness; pricing in health insurance)
  • Law enforcement (e.g., assessing an individual using past criminal behavior)
  • Migration, asylum and border control management (e.g. visa application examination)
  • Administration of justice and democratic processes (e.g. assisting a judicial authority to research and interpret the law)

The Commission will assess the need to amend the lists in Annex III (high-risk uses) and Article 5 (prohibited practices) once a year, and report findings to the Parliament and Council.

Article 84

However, Article 6(2a) provides an exemption: AI systems under Annex III will not be considered high-risk if they “do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons”. In such cases, the Provider must submit a notification to its national supervisory authority, including a self-assessment demonstrating that the use of the AI system is limited to one of the following tasks:

  • Performing narrow procedural tasks (excluding profiling of natural persons);
  • Improving the results of previously completed human activities;
  • Detecting decision-making patterns without replacing or influencing human assessments; or
  • Performing preparatory tasks to an assessment relevant for the purpose of the use cases listed in Annex III.

Notwithstanding the above conditions, an AI system shall always be considered high-risk if it performs profiling of natural persons.

The Commission shall, after consulting the AI Board, and no later than 18 months after the entry into force of this Regulation, provide guidelines specifying the practical implementation of this article (Article 6) completed by a comprehensive list of practical examples of high-risk and non-high-risk use cases on AI systems.

Article 6(2c)

If you’re the Provider of a high-risk AI system, the obligations outlined in the previous section will apply to you, and a conformity assessment according to Article 43 will need to be carried out before the system can be placed on the market.
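As a rough, non-authoritative summary of the classification flow described in this section, the sketch below (in Python, with hypothetical field names; the legal test is considerably more nuanced) screens an AI system against the two Article 6 conditions and the Article 6(2a) exemption.

```python
# Minimal sketch (hypothetical helper, not legal advice): screening an AI system
# against the high-risk conditions of Article 6 as summarized in this section.
from dataclasses import dataclass


@dataclass
class SystemProfile:
    annex_ii_safety_component: bool        # safety component of / product under Annex II, Section A
    requires_third_party_assessment: bool  # already subject to third-party conformity assessment
    listed_in_annex_iii: bool              # falls under an Annex III use case
    performs_profiling: bool               # performs profiling of natural persons
    exemption_task_only: bool              # limited to the narrow tasks listed in Article 6(2a)


def is_high_risk(p: SystemProfile) -> bool:
    # Condition 1: Annex II, Section A product requiring third-party assessment.
    if p.annex_ii_safety_component and p.requires_third_party_assessment:
        return True
    # Condition 2: Annex III use case, unless the Article 6(2a) exemption applies.
    if p.listed_in_annex_iii:
        if p.performs_profiling:           # profiling is always high-risk
            return True
        return not p.exemption_task_only   # the exemption also requires notifying the authority
    return False
```

For example, a CV-screening tool that profiles job candidates (an Annex III employment use case) would be flagged as high-risk regardless of the exemption tasks.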

Transparency-Obligated AI

These use cases are considered potentially deceptive, but not to the level of the High-Risk category. Examples: systems that interact directly with natural persons, or systems (including GPAI) that generate synthetic image, audio, or video content. According to Article 52, the Providers and Deployers of such systems will need to fulfill transparency requirements, such as informing the user that they’re interacting with AI (chatbots) or AI-generated content (“deep fakes”), and provide a code of conduct.

General Purpose AI (GPAI)

One of the latest additions to the legislation, Title VIIIa covers obligations for General Purpose AI (GPAI) Models. A GPAI model is defined as one that “displays significant generality and is capable to competently perform a wide range of distinct tasks, including when trained with a large amount of data using self-supervision at scale”.

According to Article 52c, Providers of GPAI models must:

  • Draw up technical documentation, including training and testing process and evaluation results.
  • Enable other Providers that intend to integrate the GPAI model into their own AI system with sufficient documentation and information.
  • Put in place a policy to respect EU copyright law.
  • Based on a template provided by the AI Office, publish a sufficiently detailed summary about the content used for training.
  • Develop and adhere to a code of practice.

One of the reasons for the recent delays in the AI Act’s development was a debate on the two-tiered categorization of GPAI models. Currently, models trained using a total computing power of more than 10^25 FLOPs are considered to carry “systemic risk”. This places most models on the market, excluding GPT-4 and potentially Gemini Ultra, in the non-systemic-risk subcategory. However, the threshold may be updated in the future by the European Commission and the AI Office as the technology progresses.
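As an illustration of the compute-based trigger described above, here is a minimal sketch (a hypothetical helper, not an official tool) that flags a GPAI model against the 10^25 FLOP threshold; note that models may also be designated as systemic-risk on other grounds, and the threshold itself may change.

```python
# Minimal sketch (hypothetical helper): classify a GPAI model against the
# 10^25 FLOP "systemic risk" compute threshold described above.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, per the current draft


def gpai_risk_tier(training_flops: float) -> str:
    """Return the GPAI subcategory implied by total training compute alone."""
    if training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD:
        return "GPAI model with systemic risk"
    return "GPAI model (non-systemic risk)"


# Example: a model trained with ~2 x 10^25 FLOPs would cross the threshold.
print(gpai_risk_tier(2e25))  # GPAI model with systemic risk
print(gpai_risk_tier(5e24))  # GPAI model (non-systemic risk)
```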

Providers of GPAI models with systemic risk are subject to additional obligations to:

  • Perform model evaluations, including adversarial testing
  • Assess and mitigate possible systemic risks
  • Document and report issues to the AI Office and relevant authorities

However, there are also major exemptions for Providers of AI systems for research and academic use, as well as for AI models (including GPAI) that are free and open-source. These models are not subject to any obligations until they are placed on the market by a Provider as part of an AI system that has obligations under the AI Act. Military and law enforcement uses are excluded from the majority of obligations, but have some special considerations that we will not cover.

You can learn more details about each risk category in the official FAQ published by the European Commission.

What are the consequences of not complying with the EU AI Act?

As defined in Article 71, administrative fines act as the main penalty. The actual fine amount will be decided on a case-by-case basis depending on the severity of the infringement (based on rules to be further developed by individual EU Member States), but the AI Act does outline maximum penalties for different types of non-compliance:

  • Prohibited practices or non-compliance related to requirements on data: up to €35m or 7% of the total worldwide annual turnover
  • Non-compliance with any of the other requirements: up to €15m or 3% of the total worldwide annual turnover
  • Incorrect, incomplete or misleading information to notified bodies and national competent authorities: up to €7.5m or 1.5% of the total worldwide annual turnover

For each category of infringement, the applicable cap is the lower of the two amounts for small and medium-sized enterprises (including startups), and the higher of the two for other companies.
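To make the arithmetic concrete, here is a minimal sketch (hypothetical helper names, not legal advice) of how the maximum possible fine for the most serious category would be derived from a company’s worldwide annual turnover, following the SME rule above.

```python
# Minimal sketch (hypothetical, not legal advice): maximum administrative fine
# for prohibited practices / data-related non-compliance, using the caps above:
# EUR 35m or 7% of total worldwide annual turnover.
def max_fine_prohibited_practices(annual_turnover_eur: float, is_sme: bool) -> float:
    fixed_cap = 35_000_000.0                   # EUR 35 million
    turnover_cap = 0.07 * annual_turnover_eur  # 7% of worldwide annual turnover
    # SMEs (including startups) face the lower of the two caps; other companies the higher.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


# A large company with EUR 1bn turnover: max(35m, 70m) = EUR 70m.
print(max_fine_prohibited_practices(1_000_000_000, is_sme=False))
# A startup with EUR 10m turnover: min(35m, 0.7m) = EUR 0.7m.
print(max_fine_prohibited_practices(10_000_000, is_sme=True))
```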

What is the current legislative status and timeline?

The EU AI Act is not yet law, but it’s on a fast track to be published in the Official Journal in early-to-mid 2024, and most of its requirements will be phased in over the following 24 months.

The timeline for enforcement is as follows:

  1. After 6 months: Prohibition of AI applications of unacceptable risk becomes applicable.
  2. After 9 months: Codes of practice for GPAI providers must be prepared.
  3. After 12 months (approximately Q2 2025 onwards): Provisions on GPAI become applicable.
  4. After 24 months (approximately Q2 2026 onwards): Obligations for high-risk AI systems under Annex III become applicable. Each EU Member State is required to establish or participate in at least one regulatory sandbox.
  5. After 36 months: Obligations for high-risk AI systems under Annex II become applicable.
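For planning purposes, the sketch below turns the offsets above into concrete calendar milestones, assuming a purely hypothetical entry-into-force date (the real date will only be known once the Act is published in the Official Journal).

```python
# Minimal sketch: convert the month offsets above into calendar dates, given a
# hypothetical entry-into-force date. Standard library only.
import calendar
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day to the month's length."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))


ENTRY_INTO_FORCE = date(2024, 6, 1)  # hypothetical; replace with the real date once known

MILESTONES = [
    ("Prohibited practices become applicable", 6),
    ("GPAI codes of practice must be ready", 9),
    ("GPAI provisions become applicable", 12),
    ("Annex III high-risk obligations become applicable", 24),
    ("Annex II high-risk obligations become applicable", 36),
]

for label, months in MILESTONES:
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, months)}")
```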

In practical terms, this means that the next two years will see a flurry of activity in the field of AI auditing, both in government and in the private sector. Relevant offices and systems will need to be set up in EU member states; standards-making and certification bodies will be under extra pressure to fill the gap and provide clear assessment frameworks. It is possible that for specific industries, like medical devices incorporating AI, the actual rollout of standards and their enforcement may take longer than the nominal deadline of 2026.

It must also be noted that per Article 83(2), the operators of high-risk AI systems that were already on the market or in use before the date of applicability of the Act are out of scope, as long as the systems were not subject to significant changes in their designs since then. In other words, the AI Act does not retroactively apply to existing high-risk AI systems until a substantial modification is made.

The legislators encourage organizations to conduct anticipatory, voluntary compliance checks and gap analyses as soon as possible.

How Citadel AI tools help with AI Act preparation

The EU AI Act is the complex result of numerous drafts and revisions since April 2021; as such, its scope covers multiple levels of AI usage in an organization, from process governance to technical validation.

Citadel AI has been an early participant in the AI standards ecosystem, and strives to help organizations streamline their AI testing and governance processes and automate compliance with AI standards and regulations. Our products, Lens and Radar, can:

  1. Automatically fulfill some of the most technically demanding requirements of the EU AI Act
  2. Help engineering teams validate their models and data, in a hands-on way, against the international standards that will form the basis of the AI Act’s technical requirements
  3. Provide easy reporting and guidance to get on the compliance track as quickly as possible

Citadel AI is trusted by world-leading organizations such as the British Standards Institution. At this critical time, when AI standards and regulations are maturing, we believe we can help you streamline compliance, improve AI reliability, and navigate this evolving landscape.

Contact us if you’re interested in learning more.
