The rapid advancement of artificial intelligence (AI) technologies has ushered in a new era of innovation and challenges across society and the economy. In response, the European Union has introduced the Artificial Intelligence Act (AIA), a comprehensive legal framework for regulating AI systems. Published in the Official Journal of the EU on July 12, 2024, and in force since August 1, 2024, the AIA represents the EU’s most ambitious effort to date to create a balanced approach to AI governance.
At its core, the AIA adopts a risk-based approach, categorizing AI systems according to their potential impact and use cases. This tiered structure determines the level of regulatory oversight and the obligations imposed on operators, ranging from outright prohibitions for unacceptable-risk practices to minimal requirements for low-risk systems. The Act’s scope rests on a broad definition of AI systems: machine-based systems designed to operate with varying levels of autonomy and adaptiveness, generating outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The AIA categorizes AI systems into four main risk levels: unacceptable risk (explicitly prohibited), high risk (subject to stringent regulatory requirements), limited risk (with specific transparency obligations), and minimal risk (no specific obligations under the AIA). This risk-based categorization forms the foundation of the Act’s regulatory framework.
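To make this tiering concrete, the following sketch models the four categories and the headline consequence attached to each. It is a minimal illustration in Python, not a compliance tool; the enum names and one-line obligation summaries are our own shorthand rather than the Act’s wording.

```python
from enum import Enum

class RiskTier(Enum):
    """The AIA's four risk tiers, ordered from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # stringent requirements (Chapter III)
    LIMITED = "limited"            # transparency obligations (Article 50)
    MINIMAL = "minimal"            # no specific obligations under the AIA

# Headline consequence per tier (shorthand summaries, not legal text).
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Conformity assessment, risk management, documentation, oversight.",
    RiskTier.LIMITED: "Transparency: users must be told they are dealing with AI.",
    RiskTier.MINIMAL: "No specific obligations under the AIA.",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the one-line regulatory consequence for a given tier."""
    return TIER_OBLIGATIONS[tier]
```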
One of the most significant aspects of the AIA is its explicit prohibition of certain AI applications deemed to pose unacceptable risks to fundamental rights and societal values. These prohibitions include AI systems that deploy subliminal techniques or exploit vulnerabilities to materially distort human behavior in ways that cause or are likely to cause significant harm, social scoring systems that evaluate or classify individuals based on social behavior or personal characteristics and lead to detrimental or disproportionate treatment, certain uses of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions), and AI systems that infer emotions in the workplace and educational institutions (except for medical or safety reasons).
The AIA places particular emphasis on regulating high-risk AI systems (HRAIS), which are subject to the most stringent requirements. HRAIS include systems used in the critical areas listed in Annex III, such as critical infrastructure, educational and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control management, and the administration of justice and democratic processes. Providers of HRAIS are subject to extensive obligations, including implementing comprehensive risk management systems, establishing data governance measures, creating and maintaining technical documentation, enabling automatic logging of events, ensuring transparency through detailed instructions for use, implementing human oversight mechanisms, ensuring accuracy, robustness, and cybersecurity, establishing quality management systems, and conducting post-market monitoring.
Additionally, HRAIS providers must fulfill procedural obligations such as conducting conformity assessments, obtaining CE marking, registering the HRAIS in an EU-wide database, and reporting serious incidents or malfunctions to relevant authorities within 15 days. These requirements aim to ensure that high-risk AI systems are developed, deployed, and monitored with the utmost care and transparency.
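As a rough mental model of these cumulative duties, the sketch below treats the provider obligations as a checklist that must be fully satisfied before a high-risk system could plausibly be placed on the market. It is illustrative only; the field names are our paraphrases of the obligations summarized above, not terms from the Act.

```python
from dataclasses import dataclass, fields

@dataclass
class HraisProviderChecklist:
    """Paraphrased provider obligations for a high-risk AI system (HRAIS)."""
    risk_management_system: bool = False       # continuous, lifecycle-wide
    data_governance: bool = False              # quality of training/testing data
    technical_documentation: bool = False      # created and kept up to date
    event_logging: bool = False                # automatic record-keeping
    instructions_for_use: bool = False         # transparency toward deployers
    human_oversight: bool = False              # designed-in oversight measures
    accuracy_robustness_cybersecurity: bool = False
    quality_management_system: bool = False
    conformity_assessment_passed: bool = False
    ce_marking_affixed: bool = False
    eu_database_registration: bool = False

def ready_for_market(checklist: HraisProviderChecklist) -> bool:
    """The obligations are cumulative: every item must be satisfied."""
    return all(getattr(checklist, f.name) for f in fields(checklist))
```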
The AIA also introduces specific provisions for general-purpose AI (GPAI) models, which can serve as the foundation for a wide range of AI applications. These provisions apply regardless of the specific use case and include transparency requirements, compliance with EU copyright law, and publishing sufficiently detailed summaries of the content used for training. For GPAI models with high-impact capabilities, presumed where the cumulative compute used for training exceeds 10^25 floating-point operations (termed “GPAI models with systemic risk”), additional obligations apply, such as conducting model evaluations, assessing and mitigating potential systemic risks, reporting serious incidents to regulators, ensuring adequate cybersecurity measures, and documenting energy consumption.
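The compute-based presumption lends itself to a one-line check. The sketch below applies the Act’s 10^25 FLOP threshold; the function name and inputs are illustrative.

```python
# Article 51(2): a GPAI model is presumed to have high-impact capabilities
# (and hence systemic risk) when the cumulative compute used for its
# training, measured in floating-point operations, is greater than 10^25.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Apply the AIA's compute-based presumption for GPAI systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with 5 x 10^25 FLOPs triggers the presumption.
assert presumed_systemic_risk(5e25)
```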
For AI systems not falling into the prohibited or high-risk categories, the AIA imposes limited obligations focused primarily on transparency. Providers of these limited-risk systems must ensure that users are aware they are interacting with an AI system, for instance when using chatbots, and that AI-generated content is identifiable as such.
The implementation of the AIA’s provisions will occur in phases. Most provisions will apply from August 2, 2026, following a two-year transition period. However, the prohibitions and AI literacy requirements will apply from February 2, 2025, six months after the Act’s entry into force, while the GPAI requirements will take effect on August 2, 2025, one year after entry into force. During the transition period, supporting delegated legislation, guidance, and harmonized standards will be published to assist with AIA compliance.
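The staggered dates can be expressed as a small lookup that answers which groups of provisions already apply on a given day; the labels are our own shorthand for the provision groups described above.

```python
from datetime import date

# Staggered applicability dates under the AIA (labels are shorthand).
APPLICABILITY = [
    (date(2025, 2, 2), "prohibitions and AI literacy requirements"),
    (date(2025, 8, 2), "GPAI model obligations"),
    (date(2026, 8, 2), "most remaining provisions, incl. high-risk rules"),
]

def provisions_in_force(on: date) -> list[str]:
    """Return the provision groups already applicable on a given date."""
    return [label for start, label in APPLICABILITY if on >= start]

# Example: by September 1, 2025, the prohibitions and GPAI rules both apply,
# but the high-risk requirements do not yet.
print(provisions_in_force(date(2025, 9, 1)))
```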
To ensure adherence to its provisions, the AIA establishes a robust enforcement mechanism with significant financial penalties for violations. The fines are tiered according to the nature of the infringement: supplying incorrect, incomplete, or misleading information to authorities can attract fines of up to €7.5 million or 1% of global annual turnover (whichever is higher), non-compliance with most other obligations up to €15 million or 3%, and violations of the prohibited practices up to €35 million or 7% (for SMEs, the lower of the two amounts in each tier applies). These substantial penalties underscore the EU’s commitment to enforcing the AIA and ensuring that organizations take their obligations under the Act seriously.
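Because each tier is capped at the higher of a fixed amount and a share of worldwide turnover, an organization’s maximum exposure reduces to a simple maximum calculation, sketched below with illustrative figures.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Upper bound of an AIA fine: the higher of the fixed cap and the
    percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Example: a company with EUR 2 billion turnover violating a prohibition
# faces up to max(EUR 35m, 7% of EUR 2bn) = EUR 140 million.
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```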
The implications of the AIA for organizations developing, deploying, or using AI systems are far-reaching, both within the EU and globally. Organizations will need to conduct thorough assessments of their AI systems to determine which category they fall under and what obligations apply. Implementing robust risk management processes will be crucial, particularly for providers of high-risk AI systems. Maintaining comprehensive technical documentation and ensuring transparency in AI system operations will be essential for compliance, as will designing AI systems with human oversight capabilities and ensuring that staff have sufficient AI literacy to manage these systems effectively.
Stricter data governance measures will be required, particularly for training and testing datasets used in high-risk AI systems. Organizations will also need to establish post-market monitoring and incident reporting mechanisms to comply with the AIA’s requirements. While the AIA is an EU regulation, its effects will likely be felt globally due to the interconnected nature of AI development and deployment.
The EU Artificial Intelligence Act represents a landmark in the regulation of AI technologies. By adopting a risk-based approach, the AIA aims to strike a balance between fostering innovation and protecting fundamental rights and societal values. As the first comprehensive legal framework of its kind, the AIA is likely to set a global standard for AI regulation.
As the implementation of the AIA unfolds, organizations must stay informed about evolving requirements and guidance. Proactive compliance efforts will be crucial to navigating the new regulatory landscape successfully. Moreover, the AIA’s influence may extend beyond the EU, potentially shaping AI governance approaches worldwide.
The coming years will be critical in determining the effectiveness of the AIA in achieving its goals of promoting trustworthy AI while supporting innovation. As regulators, industry stakeholders, and civil society engage with the Act’s provisions, ongoing dialogue and potential refinements to the regulatory framework can be expected. For legal practitioners, policymakers, and AI developers alike, the EU AI Act marks the beginning of a new era in the governance of artificial intelligence.
In conclusion, the European Union Artificial Intelligence Act represents a significant step forward in the regulation of AI technologies. Its comprehensive approach, balancing innovation with the protection of fundamental rights, sets a new standard for AI governance. As organizations prepare for compliance and the global AI community grapples with the implications of this landmark legislation, the AIA is poised to shape the future of AI development and deployment far beyond the borders of the European Union.