Executive Summary
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024, establishes the world’s first comprehensive legal framework for AI. It aims to foster trustworthy, human-centric AI while safeguarding health, safety, fundamental rights, and democracy, and supporting innovation across the EU. The Act introduces a risk-based approach, categorizing AI systems into unacceptable, high, limited, and minimal risk, with strict prohibitions on certain harmful practices (e.g., manipulative techniques, social scoring, and certain biometric uses). High-risk AI systems face stringent requirements for risk management, data quality, transparency, human oversight, and post-market monitoring. The Act also regulates general-purpose AI models, especially those with systemic risks, mandating transparency, copyright compliance, and risk mitigation. Governance is ensured through the European AI Office, national authorities, and a coordinated Board. The Act includes innovation support measures, such as regulatory sandboxes, and provides for significant penalties for non-compliance. Application is phased: prohibitions apply from February 2025, general-purpose AI rules from August 2025, most provisions from August 2026, and rules for high-risk systems embedded in regulated products from August 2027.
Characteristics
- The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence, aiming to ensure trustworthy, human-centric AI while protecting health, safety, fundamental rights, democracy, and the environment, and supporting innovation and the internal market.
- It uses a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited practices), high risk (subject to strict requirements), limited risk (specific transparency obligations), and minimal/no risk (no specific rules).
- Prohibited AI practices include manipulative or deceptive techniques, exploitation of vulnerabilities, social scoring, certain biometric uses (e.g., untargeted facial recognition scraping, emotion recognition in workplaces/education), and real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions).
- High-risk AI systems (e.g., in critical infrastructure, education, employment, essential services, law enforcement, migration, justice, and democratic processes) must meet requirements for risk management, high-quality data, technical documentation, transparency, human oversight, robustness, cybersecurity, and post-market monitoring.
- General-purpose AI models, especially those with systemic risk (e.g., large generative models), are subject to transparency, copyright compliance, risk assessment, and mitigation obligations. Open-source models have some exemptions unless they present systemic risks.
- The Act establishes governance structures: the European AI Office, national competent authorities, a Board, a Scientific Panel, and an Advisory Forum. It provides for regulatory sandboxes, support for SMEs, conformity assessment, CE marking, market surveillance, penalties, and regular review and adaptation mechanisms. Most provisions apply from August 2026, with some (e.g., prohibitions, general-purpose AI model rules) effective earlier.
Actors
Civil Society Actors
Economic Actors
Political Actors
Research and Innovation Actors
Practical Applications
- The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, establishing the first comprehensive legal framework for AI in the EU, with full applicability by 2 August 2026 and phased implementation of certain provisions starting from 2 February 2025.
- The European Commission has launched the AI Pact, a voluntary initiative inviting AI providers and deployers to comply with the key obligations of the AI Act ahead of its full application.
- The European AI Office has been established by the European Commission to implement, supervise, and enforce the AI Act, including facilitating the development of a Code of Practice for general-purpose AI models.
- Member States are required to establish at least one AI regulatory sandbox at national level by 2 August 2026, providing a controlled environment for the development, training, testing, and validation of innovative AI systems under regulatory supervision.
- The EU has set up an EU database for high-risk AI systems, requiring providers and certain deployers to register their high-risk AI systems before placing them on the market or putting them into service.
- The Act mandates the creation of a European Artificial Intelligence Board (the Board), a Scientific Panel, and an Advisory Forum to steer and advise on the governance and enforcement of the AI Act.
- The Act establishes a risk-based classification system for AI, with specific prohibitions on certain AI practices (e.g., social scoring, untargeted facial recognition scraping, and emotion recognition in workplaces and education) effective from 2 February 2025.
- High-risk AI systems are subject to mandatory requirements, including risk management, data governance, technical documentation, record-keeping, transparency, human oversight, robustness, accuracy, and cybersecurity, with conformity assessment procedures in place.
- Providers of high-risk AI systems must implement a post-market monitoring system and report serious incidents to market surveillance authorities.
- The Act requires the marking of high-risk AI systems with the CE mark to indicate conformity, and sets up procedures for conformity assessment, including the involvement of notified bodies.
- The Act amends several existing EU regulations and directives (e.g., on machinery, medical devices, and transport) to ensure alignment with the new AI requirements.
- The Act provides for the establishment of Union AI testing support structures to assist Member States with technical and scientific advice for enforcement and market surveillance.
- The Act mandates Member States to develop initiatives targeted at SMEs and startups, including priority access to AI regulatory sandboxes, awareness-raising, training, and dedicated communication channels.
- The Act requires the development and implementation of codes of practice for general-purpose AI models, with the AI Office facilitating their creation and approval.
- The Act introduces transparency obligations for providers and deployers of certain AI systems, including requirements for labelling AI-generated content (e.g., deepfakes) and informing individuals when interacting with AI systems.
- The Act establishes a system of administrative fines and penalties for non-compliance, with specific provisions for SMEs and public authorities.
- The Act provides for the right of affected persons to obtain explanations for decisions made by high-risk AI systems that produce legal or similarly significant effects.
- The Act includes provisions for the protection of whistleblowers reporting infringements of the regulation, in line with Directive (EU) 2019/1937.
- The Act requires annual reporting and review mechanisms, including the evaluation of the list of high-risk AI systems and prohibited practices, and the effectiveness of the governance and enforcement system.
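The staggered application dates noted throughout this section can be captured in a simple lookup. The following sketch is purely illustrative: the group names are labels chosen here for readability, not terms from the Act, while the dates follow the Act's published timeline.

```python
from datetime import date

# Key application dates of the AI Act.
# Group names are illustrative labels, not terminology from the Regulation.
APPLICATION_DATES = {
    "entry_into_force": date(2024, 8, 1),
    "prohibitions_and_ai_literacy": date(2025, 2, 2),
    "gpai_and_governance_rules": date(2025, 8, 2),
    "most_provisions": date(2026, 8, 2),
    "embedded_high_risk_products": date(2027, 8, 2),
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the groups of provisions already applicable on the given date."""
    return [name for name, d in APPLICATION_DATES.items() if as_of >= d]
```

For example, a check run in mid-2025 would show the prohibitions and AI literacy obligations already applicable, but not yet the bulk of the Regulation.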
Resulting Commitments
- The AI Act entered into force on 1 August 2024 and will be fully applicable two years later, on 2 August 2026, with some exceptions.
- Prohibitions and AI literacy obligations enter into application from 2 February 2025.
- Governance rules and obligations for general-purpose AI models become applicable on 2 August 2025.
- Rules for high-risk AI systems embedded into regulated products have an extended transition period until 2 August 2027.
- Codes of practice for general-purpose AI models should be ready by 2 May 2025.
- Providers of general-purpose AI models that have been placed on the market before 2 August 2025 must comply with obligations by 2 August 2027.
- AI systems which are components of large-scale IT systems established by the legal acts listed in Annex X and placed on the market or put into service before 2 August 2027 must be brought into compliance with the Regulation by 31 December 2030.
- Providers and deployers of high-risk AI systems placed on the market before 2 August 2026 and intended to be used by public authorities must bring them into compliance with the requirements and obligations of the Regulation by 2 August 2030.
- Member States must ensure that their competent authorities establish at least one AI regulatory sandbox at national level, operational by 2 August 2026.
- Member States must, by 2 August 2025, communicate to the Commission the identity of the notifying authorities and the market surveillance authorities, and make publicly available information on how competent authorities and single points of contact can be contacted.
- Member States must report to the Commission on the status of the financial and human resources of the national competent authorities by 2 August 2025 and every two years thereafter.
- Member States must lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure they are properly and effectively implemented by the date of application of the Regulation; provisions on penalties apply from 2 August 2025.
- The Commission must assess the need to amend the list set out in Annex III and the list of prohibited AI practices once a year following the entry into force of the Regulation, until the end of the period of the delegation of power.
- By 2 August 2028 and every four years thereafter, the Commission must evaluate and report to the European Parliament and the Council on the need for amendments to Annex III, transparency measures, and the effectiveness of the supervision and governance system.
- By 2 August 2029 and every four years thereafter, the Commission must submit a report on the evaluation and review of the Regulation to the European Parliament and the Council.
- By 2 August 2028 and every four years thereafter, the Commission must submit a report on the review of the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models.
- By 2 August 2028 and every three years thereafter, the Commission must evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of requirements for AI systems other than high-risk AI systems.
- By 2 August 2031, the Commission must carry out an assessment of the enforcement of the Regulation and report to the European Parliament, the Council, and the European Economic and Social Committee.
- Providers must keep documentation for a period ending 10 years after the high-risk AI system has been placed on the market or put into service.
- Importers must keep a copy of the certificate issued by the notified body, instructions for use, and the EU declaration of conformity for 10 years after the high-risk AI system has been placed on the market or put into service.
- Providers and deployers must keep automatically generated logs for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless otherwise provided by applicable law.
- Certificates for high-risk AI systems are valid for a period not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III; validity may be extended for further periods, each not exceeding five or four years respectively, based on reassessment.
- Administrative fines for non-compliance with prohibited AI practices can be up to EUR 35,000,000 or 7% of total worldwide annual turnover, whichever is higher.
- Administrative fines for non-compliance with other specified provisions can be up to EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher.
- Administrative fines for supplying incorrect, incomplete, or misleading information can be up to EUR 7,500,000 or 1% of total worldwide annual turnover, whichever is higher.
- For SMEs, including start-ups, each fine shall be up to the percentage or amount specified, whichever is lower.
- Administrative fines for non-compliance by Union institutions, bodies, offices, and agencies can be up to EUR 1,500,000 for prohibited practices and up to EUR 750,000 for other requirements.
- The Commission is empowered to impose fines on providers of general-purpose AI models up to 3% of annual total worldwide turnover or EUR 15,000,000, whichever is higher, for specified infringements.
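The fine ceilings above follow a simple "higher of" rule between a fixed amount and a percentage of worldwide annual turnover, inverted to "lower of" for SMEs. A minimal Python sketch of that arithmetic; the function name and parameters are illustrative, not taken from the Act.

```python
def fine_cap(fixed_cap_eur: float, turnover_pct: float,
             worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine: the higher of a fixed cap and a
    percentage of total worldwide annual turnover; for SMEs, the lower
    of the two applies instead. turnover_pct is a whole-number percent.
    """
    pct_cap = worldwide_turnover_eur * turnover_pct / 100
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice infringement, EUR 2 billion turnover:
# 7% of turnover (EUR 140m) exceeds the EUR 35m fixed cap, so EUR 140m applies.
cap = fine_cap(35_000_000, 7, 2_000_000_000)
```

For an SME with EUR 100 million turnover committing the same infringement, the lower figure governs: 7% of turnover (EUR 7 million) is below the EUR 35 million fixed cap, so the ceiling is EUR 7 million.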