EU AI Act: A Comprehensive Rulebook for Artificial Intelligence
The European Union’s groundbreaking regulation on artificial intelligence (AI), the EU AI Act, has been officially published in the bloc’s Official Journal. The landmark legislation enters into force on August 1, 2024, 20 days after publication, with the rules fully applicable to AI developers by mid-2026. Implementation will be phased, however, with different deadlines for specific provisions.
In December 2023, EU lawmakers reached a political agreement on the EU’s first comprehensive rulebook for AI. The framework imposes different obligations on AI developers depending on use case and perceived risk. Most AI applications are considered low risk and will not be regulated at all. Certain high-risk use cases, such as biometric applications and AI used in law enforcement, employment, education, and critical infrastructure, remain permitted but are subject to obligations covering data quality and anti-bias measures.
A third tier of risk imposes lighter transparency requirements on tools such as AI chatbots. Makers of general-purpose AI (GPAI) models, including OpenAI’s GPT series, also face transparency requirements, and the most powerful GPAIs, identified by training-compute thresholds, can additionally be required to conduct systemic risk assessments.
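The Act ties the systemic-risk designation for GPAI models to training compute: a model trained with more than 10^25 floating-point operations is presumed to pose systemic risk. A minimal sketch of that presumption test, assuming an invented data model (the threshold value comes from the Act; the class and function names here are illustrative, not from any official tooling):

```python
from dataclasses import dataclass

# Presumption threshold set out in the AI Act for GPAI systemic risk.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

@dataclass
class GPAIModel:
    """Hypothetical record of a general-purpose AI model (illustrative only)."""
    name: str
    training_flops: float  # estimated cumulative training compute

def presumed_systemic_risk(model: GPAIModel) -> bool:
    """A model above the compute threshold is presumed to pose systemic
    risk and so faces the Act's additional assessment obligations."""
    return model.training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(GPAIModel("frontier-model", 3e25)))  # True
print(presumed_systemic_risk(GPAIModel("small-model", 1e23)))     # False
```

In practice the designation is not purely mechanical: the Commission can also designate models on other criteria, so the threshold is a presumption rather than the whole test.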
Despite the comprehensive nature of the EU AI Act, there has been heavy lobbying from certain elements of the AI industry, supported by a few Member States’ governments, aiming to dilute obligations on GPAIs. Concerns have been raised that strict regulations may hinder Europe’s ability to compete with AI giants from the US and China. However, the EU remains committed to striking a balance between regulation and fostering innovation in the AI sector.
Implementation will proceed in phases. First come the prohibitions on certain uses of AI, which take effect six months after the law enters into force, in early 2025. Banned practices include China-style social credit scoring and compiling facial recognition databases through untargeted scraping of the internet or CCTV footage. Real-time remote biometric identification by law enforcement in public places is also prohibited unless specific exceptions apply, such as searches for missing or abducted persons.
Nine months after entry into force, around May 2025, codes of practice will apply to developers of in-scope AI applications. Responsibility for providing these codes lies with the EU’s AI Office, an oversight body established by the law. However, concerns have been raised about the influence of AI industry players on shaping the rules, as the EU has been seeking consultancy firms to draft the codes. In response, the AI Office will launch a call for expressions of interest to select the stakeholders who will be involved in drafting the codes of practice for general-purpose AI models.
Another crucial deadline falls 12 months after the law comes into force, on August 1, 2025, when the regulation’s transparency requirements for GPAIs start to apply. Some high-risk AI systems have been given the most generous compliance deadline of 36 months, until 2027, while other high-risk systems must comply within 24 months.
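The phased deadlines above are all offsets from the entry-into-force date. As a rough sketch, assuming the August 1, 2024 start date reported here (the milestone labels are paraphrases, not official names), the timeline can be computed with a small month-arithmetic helper:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

ENTRY_INTO_FORCE = date(2024, 8, 1)  # per the article

# Offsets in months, as described in the article (labels are paraphrased).
milestones = {
    "Prohibited practices apply": add_months(ENTRY_INTO_FORCE, 6),
    "Codes of practice ready": add_months(ENTRY_INTO_FORCE, 9),
    "GPAI transparency rules apply": add_months(ENTRY_INTO_FORCE, 12),
    "Shorter high-risk compliance deadline": add_months(ENTRY_INTO_FORCE, 24),
    "Longer high-risk compliance deadline": add_months(ENTRY_INTO_FORCE, 36),
}

for label, deadline in milestones.items():
    print(f"{deadline.isoformat()}  {label}")
```

Running this yields the dates discussed above: prohibitions in February 2025, codes of practice in May 2025, GPAI transparency rules in August 2025, and the high-risk deadlines in 2026 and 2027.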
The EU AI Act represents a significant step towards regulating AI applications in a risk-based manner. It aims to strike a balance between promoting innovation and safeguarding fundamental rights. By taking a phased approach to implementation, the EU is giving developers time to adapt and comply with the regulations while ensuring that the most harmful practices are banned at an early stage.