EU AI Act Implementation: Consultation Process and Code of Practice
The European Union is taking significant steps to implement the AI Act, launching targeted stakeholder consultations and developing a Code of Practice for general-purpose AI models. Key focus areas include transparency requirements, risk assessment, and safety frameworks for powerful AI models. The consultation process, open until December 11, 2024, seeks input from various stakeholders to refine guidelines and ensure effective regulation. While the AI Act aims to balance innovation with human rights protection, concerns persist regarding potential loopholes in AI technology exports. This comprehensive approach reflects the EU’s commitment to responsible AI development and deployment, with implications for businesses, citizens, and AI developers worldwide.
Introduction to EU AI Act Implementation
The European Union has embarked on a crucial phase in the implementation of its landmark AI Act, which entered into force on August 1, 2024. As part of this process, the European Commission has initiated targeted stakeholder consultations and is developing a Code of Practice for general-purpose AI models. These efforts are aimed at refining the definition of AI systems, clarifying prohibited practices, and establishing guidelines for compliance with the Act’s provisions.
The consultation period, which runs until December 11, 2024, invites input from a wide range of stakeholders, including AI providers, businesses, national authorities, academia, and civil society. This inclusive approach underscores the EU’s commitment to creating a regulatory framework that balances innovation with the protection of human rights and addresses the complex challenges posed by rapidly advancing AI technologies.
Defining AI Systems and Challenges
One of the primary objectives of the consultation process is to refine the definition of AI systems. This task is complicated by the need to distinguish between complex AI and simpler software applications. The European Commission is seeking input on several key aspects:
- The importance of features such as machine learning and neural networks in defining AI systems
- Practical examples of software that may or may not fall under the AI definition
- Strategies for balancing innovation with effective regulation
The consultation aims to ensure that the regulations target only high-impact AI systems while giving stakeholders clear guidelines. As stated in the consultation document, “The legal concepts regarding the AI system definition and prohibited AI practices are established in the AI Act. This consultation seeks additional practical examples from stakeholders to feed into the guidelines and provide further clarity on practical aspects and use cases.” ¹
Prohibited AI Practices and Unacceptable Risks
A critical aspect of the AI Act implementation is clarifying which AI uses pose “unacceptable risks” and should be banned. The consultation focuses on several high-risk AI applications, including:
- Government social scoring systems (already prohibited)
- Emotion recognition technologies
- Predictive policing systems
- Live facial recognition in public spaces
- Untargeted data scraping for facial recognition
- Biometric categorization for sensitive information
- Manipulative AI techniques
- Crime risk assessment tools
The EU is seeking specific examples and insights on these high-risk AI uses to shape fair and effective regulations. Certain practices, such as governmental social scoring systems that rank citizens based on their behavior, have already been banned; for the remaining categories, the consultation aims to clarify where the line between acceptable and unacceptable risk should be drawn.
Draft Code of Practice for General-Purpose AI Models
In parallel with the consultation process, the EU has published a first draft of the Code of Practice for providers of general-purpose AI models (GPAIs). This 36-page document outlines guiding principles and objectives for powerful foundation models and is likely to affect major AI companies such as OpenAI, Google, Meta, and Anthropic.
Key aspects of the Code of Practice include:
- Transparency requirements, including copyright-related information
- Systemic risk mitigation for powerful AI systems
- A safety and security framework
- Serious incident reporting
The Code allows for alternative compliance measures and seeks input on various aspects, including open-source models and risk prioritization. Important timelines include ²:
- Transparency requirements to be implemented by August 1, 2025
- Risk assessment and mitigation for the most powerful GPAIs by August 1, 2027
As stated in the draft, “This first draft of the Code is the result of a preliminary review of existing best practices by the four specialised Working Groups, stakeholder consultation input from nearly 430 submissions, responses from the provider workshop, international approaches, and, most importantly, the AI Act itself.” ²
Scope and Content of the Code of Practice
The Code of Practice covers several major focus areas, addressing key concerns related to powerful AI systems:
- Transparency Requirements: The Code sets out how GPAIs must comply with information provisions, particularly regarding copyrighted material. For example, “Sub-Measure 5.2” commits signatories to providing details of the web crawlers used for developing the GPAI and their relevant robots.txt features (a minimal robots.txt check is sketched at the end of this section).
- Systemic Risk Mitigation: The Code identifies various risk types that signatories must treat as systemic risks, including:
- Offensive cybersecurity risks
- Chemical, biological, radiological, and nuclear risks
- “Loss of control” scenarios
- Persuasion and manipulation risks
- Large-scale discrimination
- Safety and Security Framework: GPAI makers are expected to detail their risk management policies and continuously identify potential systemic risks. The Code emphasizes the need for “best-in-class evaluations” of models’ capabilities and limitations.
- Serious Incident Reporting: The Code commits signatories to identify, track, and report serious incidents originating from their general-purpose AI models with systemic risk.
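The draft stops short of prescribing any particular reporting format. Purely as a hypothetical illustration of what “identify, track, and report” might look like operationally, the Python sketch below models an internal incident record; every field name and value here is an assumption, not something drawn from the Code:

```python
# Hypothetical sketch of an internal record for tracking serious incidents.
# The draft Code prescribes no schema; all field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    model_id: str                        # identifier of the GPAI model involved
    description: str                     # what happened and how it was detected
    detected_at: datetime                # when the incident was identified
    reported_to_authority: bool = False  # whether a formal report has been filed
    corrective_actions: list[str] = field(default_factory=list)

# Example usage with made-up values:
record = IncidentRecord(
    model_id="example-gpai-v1",
    description="Model output enabled a large-scale phishing campaign.",
    detected_at=datetime.now(timezone.utc),
)
record.corrective_actions.append("Tightened safety filters; re-ran evaluations.")
```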
As noted in the draft, “The Code also seeks to provide guidance around identifying key attributes that could lead to models creating systemic risks, such as ‘dangerous model capabilities’ and ‘dangerous model propensities’.”
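The robots.txt files referenced in Sub-Measure 5.2 are the standard mechanism by which websites tell crawlers which paths they may access. As a minimal illustration of that mechanism (not an excerpt from the Code), the following Python sketch uses only the standard library to check whether a hypothetical crawler may fetch a page:

```python
# Minimal sketch of a robots.txt compliance check using the Python
# standard library. The user agent and URLs are hypothetical.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

user_agent = "example-gpai-crawler"  # hypothetical crawler name
url = "https://example.com/articles/page.html"

if parser.can_fetch(user_agent, url):
    print(f"{user_agent} may crawl {url}")
else:
    print(f"{user_agent} is disallowed from crawling {url}")
```

Disclosing which crawlers a provider runs, and which robots.txt directives they honor, is the kind of information the transparency measures ask signatories to make available.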
Impact on Stakeholders
The implementation of the AI Act and the associated Code of Practice will have significant implications for various stakeholders:
- Businesses: Companies may need to adjust their AI products and face potential compliance costs. However, this push for transparency could also drive responsible development. “For businesses, the new AI Act may require adjustments in how they design and sell AI products. Certain features may need to be changed or removed if they fall into banned categories.” ³
- Citizens: The public can expect stronger protections for privacy and fundamental freedoms. The Act aims to safeguard citizens against misuse of their data and against potential biases in AI systems.
- AI Developers: Developers will need to place greater emphasis on ethical development and on privacy considerations in their work, a shift that could ultimately inspire safer and more responsible AI solutions.
Criticism and Concerns
Despite the AI Act’s ambitious goals, there are concerns about potential loopholes and limitations:
- Export of Risky AI Technology: Critics argue that the Act allows for the export of risky AI technology to non-EU countries, potentially leading to the proliferation of dangerous AI systems globally.
- Lack of Effective Checks and Balances: There are concerns that EU institutions, including the AI Office, lack effective mechanisms to prevent harmful AI exports or gather data on overseas sales.
- Resource and Influence Issues: Some experts worry that the AI Office and other EU institutions may be under-resourced and influenced by industry lobbies, potentially compromising their ability to enforce regulations effectively.
As science policy advisor William Burns states, “It is implausible that the office would take on activist tasks outside the legislation, such as gathering data on harmful sales overseas. Perhaps it could dabble in lower-key horizon scanning activities that would meet some of these needs. But in the absence of export controls, there is no solid way to measure what is going on, let alone halt it.” ⁴
Conclusion
The EU’s implementation of the AI Act represents a significant step towards regulating AI technologies while fostering innovation. The ongoing consultation process and the development of the Code of Practice for general-purpose AI models demonstrate the EU’s commitment to creating a comprehensive and balanced regulatory framework.
As the process moves forward, stakeholders from various sectors will play a crucial role in shaping the final guidelines and ensuring that the AI Act effectively addresses the challenges and opportunities presented by artificial intelligence. The success of this regulatory effort will have far-reaching implications for the development and deployment of AI technologies not only in Europe but potentially worldwide.
Sources:
1. https://ec.europa.eu/commission/presscorner/detail/en/mex_24_5821
2. https://digital-strategy.ec.europa.eu/en/library/first-draft-general-purpose-ai-code-practice-published-written-independent-experts
3. https://autogpt.net/eu-invites-public-input-on-ai-regulations-what-could-be-banned-and-why-it-matters/
4. https://www.biometricupdate.com/202411/eu-launches-consultation-on-ai-definition-and-banned-practices