We’ve read the EU AI Guidelines so you don’t have to
In early February 2025, the European Commission released Guidelines on prohibited artificial intelligence practices as well as on the definition of an artificial intelligence system under the AI Act (Regulation (EU) 2024/1689).
The EU AI Act is one of the world’s first attempts to set clear legal boundaries for artificial intelligence, ensuring that high-risk and prohibited AI systems are controlled while allowing safe AI applications to thrive. To achieve this, the EU has introduced guidelines to define what qualifies as an AI system and clarify which AI practices are outright banned.
However, these guidelines have sparked widespread criticism, ranging from dismissals of them as "useless" to warnings about legal uncertainty and severe liability risks for those affected. This article explores the key issues surrounding these guidelines and the implications for AI-related businesses and ventures.
Key Takeaways:
- What is the EU AI Act and its most important implications?
- Why is there a need for guidelines?
- What do the guidelines contain?
- What is the critique?
- Conclusion - Is this relevant to you?
The Most Important Regulations of the EU AI Act
Here is a quick recap of the most important provisions of the AI Act. Check out our blog article about the EU AI Act and how to comply with it.
1. Risk-Based Classification of AI Systems
The EU AI Act adopts a risk-based classification system, categorizing AI systems by their potential impact on fundamental rights, safety, and society. Higher-risk applications face stricter regulatory requirements, while lower-risk AI is left room to innovate.
- Unacceptable Risk: AI systems that pose a clear threat to safety, fundamental rights, or democracy are banned. This includes social scoring, emotion recognition in workplaces, real-time biometric surveillance, and manipulative AI techniques.
- High Risk: AI applications in critical infrastructure, law enforcement, education, and healthcare must comply with strict requirements, such as risk assessments, human oversight, and data quality measures.
- Transparency Risk: AI systems that interact with humans (e.g., chatbots, deepfakes, and generative AI) must be clearly disclosed and identifiable.
- Minimal or No Risk: AI applications like spam filters and AI-enhanced video games are largely unregulated.
2. Regulations for High-Risk AI Systems
- Must undergo risk assessments and implement mitigation systems.
- Require high-quality datasets to minimize bias and discrimination.
- Providers must ensure traceability, documentation, and human oversight.
- Systems must be robust, secure, and accurate before deployment.
3. Regulation of General-Purpose AI Models
- Transparency and copyright rules apply to foundational AI models.
- AI models with systemic risks must undergo risk assessments and mitigation procedures.
- The AI Office is developing a Code of Practice to guide compliance with these requirements.
4. Governance and Enforcement
- The European AI Office and national authorities will supervise and enforce compliance.
- The AI Board and Scientific Panel will provide regulatory guidance and oversight.
- Market surveillance authorities will ensure that high-risk AI systems comply with EU regulations.
5. Implementation Timeline
- February 2025: Bans on prohibited (unacceptable-risk) AI practices and AI literacy rules take effect.
- August 2025: Regulations for general-purpose AI models become applicable.
- August 2026: Full enforcement of the AI Act, including high-risk AI systems.
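As a quick orientation aid, here is a minimal Python sketch, a toy helper of our own (not legal advice, and all names are ours), that encodes these milestones and returns which sets of rules already apply on a given date. The exact application dates under the Act fall on the 2nd of each month listed above.

```python
from datetime import date

# Application milestones of the AI Act (labels paraphrase the list above).
MILESTONES = [
    (date(2025, 2, 2), "Bans on prohibited AI practices; AI literacy rules"),
    (date(2025, 8, 2), "Rules for general-purpose AI models"),
    (date(2026, 8, 2), "Full application, incl. high-risk AI systems"),
]

def rules_in_force(on_day: date) -> list[str]:
    """Return the milestone labels whose rules already apply on the given day."""
    return [label for start, label in MILESTONES if on_day >= start]

print(rules_in_force(date(2025, 9, 1)))
# ['Bans on prohibited AI practices; AI literacy rules',
#  'Rules for general-purpose AI models']
```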
The AI Act is designed to balance innovation with safety, ensuring trustworthy AI while fostering economic growth in the EU.
So … Why is there a need for guidelines?
The AI Act's approach to defining and regulating AI systems reflects a delicate balancing act. It strives to provide clear and effective oversight without hindering technological progress, all while remaining adaptable to the fast-paced evolution of AI technologies.
Central to this regulation is Article 3, which provides definitions for key terms, including the term "AI system". This article defines an AI system as a "machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
However, the breadth and ambiguity of this definition have sparked considerable debate and present a double-edged sword:
An overly expansive definition could:
- Include a wide range of technologies not traditionally considered AI.
- Lead to over-regulation, adding unnecessary compliance burdens.
- Stifle innovation and discourage AI development.
- Create an atmosphere of uncertainty and fear among businesses and developers.
A definition that is too narrow could:
- Fail to cover emerging AI technologies, leaving gaps in oversight.
- Lead to a lack of protection for fundamental rights.
- Allow unregulated AI systems to be used in ways that harm consumers.
Think about it: the rapid evolution of AI makes it hard to craft a definition that is both precise and future-proof. With new capabilities and applications emerging constantly, legislation risks becoming outdated, and keeping the AI Act in step with technological progress remains a formidable task. The Act therefore tries to balance the need for oversight with the imperative to foster innovation, so that AI systems are developed and deployed in a manner that respects fundamental rights and societal values without the law lagging behind the technology.
Now, what do the guidelines contain?
The AI System Definition Guidelines clarify, across 12 pages, what qualifies as an AI system under Article 3(1) of the Act, distinguishing regulated AI from simple software tools. The Guidelines on Prohibited AI Practices, meanwhile, interpret Article 5, which bans AI systems that pose unacceptable risks, such as manipulative AI, social scoring, and unauthorized biometric surveillance.
1. Definition of an AI System (Article 3(1) of the Act)
An AI system is a machine-based system that:
- Operates autonomously to some degree.
- May adapt after deployment.
- Infers from its inputs how to generate outputs (e.g., predictions, content, recommendations, or decisions).
- Influences environments, whether physical or digital.
Key Elements & Exclusions
- Includes both software and hardware, operating with varying autonomy and adaptiveness.
- Excludes basic software, simple statistical models, and rule-based systems that do not learn.
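To make that exclusion concrete, here is a hedged Python sketch contrasting a fixed rule-based check (outside the definition, since nothing is learned or inferred) with a trained classifier that infers its outputs from data. The spam-filter scenario and every name in it are our own illustration; whether a real system falls under Article 3(1) is a legal, case-by-case assessment, not something code alone can settle.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# A fixed, hand-written rule: nothing is learned or inferred from data.
# Rule-based systems like this fall outside the guidelines' reading of Article 3(1).
def rule_based_spam_check(subject: str) -> bool:
    banned = {"lottery", "winner", "free money"}
    return any(word in subject.lower() for word in banned)

# A trained model that infers, from the input it receives, how to generate
# outputs -- the "inference" element the definition hinges on.
def train_spam_classifier(subjects: list[str], labels: list[int]):
    model = make_pipeline(CountVectorizer(), MLPClassifier(max_iter=500))
    model.fit(subjects, labels)  # learns its behavior from data, not fixed rules
    return model
```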
2. Prohibited AI Practices (Article 5 of the Act)
Across 140 pages (!), the guidelines spell out the AI practices that pose inherent risks to fundamental rights and are therefore banned, including:
- Manipulation & deception affecting decision-making.
- Exploitation of vulnerabilities (age, disability, socio-economic status).
- Social scoring with discriminatory consequences.
- Predictive crime assessment based solely on profiling.
- Unauthorized facial recognition data scraping.
- Emotion recognition in workplaces & schools (except for safety/medical use).
- Biometric categorization based on sensitive characteristics (race, religion, etc.).
- Real-time biometric surveillance in public spaces, except for specific law enforcement cases.
Scope & Enforcement
- Exemptions apply to national security, defense, R&D, and open-source AI unless classified as high-risk or prohibited.
- Market Surveillance Authorities (MSAs) will oversee enforcement.
- Violations may lead to fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
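The "whichever is higher" rule matters in practice: for large companies, the 7% turnover prong, not the €35 million figure, sets the ceiling. A toy calculation (the function name and example figure are our own, based on the fine rule for prohibited practices in Article 99(3) of the AI Act):

```python
def max_fine_prohibited_practice(annual_turnover_eur: float) -> float:
    """Upper fine limit for prohibited-practice violations:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher (Article 99(3) AI Act)."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140 million,
# well above the EUR 35 million floor.
print(max_fine_prohibited_practice(2_000_000_000))  # 140000000.0
```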
Critics demand clarity: “Risky” cases could lead to severe liability risks
However, the Commission's guidelines have met with significant criticism over ambiguity, legal uncertainty, enforcement challenges, and potential negative effects on innovation.
1. AI System Definition Guidelines – Key Criticisms
- Ambiguous and Circular Definition
  - The guidelines define an AI system without clearly differentiating it from advanced software, leading to confusion for businesses and regulators.
  - The reasoning is circular, stating that the AI Act applies to AI systems as defined in the AI Act, without further clarification.
- Legal Uncertainty and Non-Binding Nature
  - The guidelines are not legally binding, meaning enforcement may vary across EU jurisdictions, creating legal and compliance risks.
  - Market Surveillance Authorities (MSAs) could interpret the rules inconsistently, leading to fragmented regulatory enforcement.
  - The final interpretation rests with the Court of Justice of the European Union (CJEU), which could override national authorities, forcing companies into risk-averse compliance strategies.
- Overregulation vs. Innovation Risk
  - The broad AI definition could lead to overregulation, forcing businesses to comply with rules meant for high-risk AI even if their systems pose minimal risks.
  - The uncertainty discourages innovation, as companies may avoid developing AI systems in Europe to escape unclear compliance obligations.
- Regulatory Capture and Arbitrary Exclusions
  - The exclusion of specific AI techniques (e.g., logistic regression) seems politically motivated rather than based on technical merit.
  - This could benefit large industry players, making compliance harder for startups and SMEs while limiting fair competition.
- Failure to Account for Rapid AI Evolution
  - AI technology is evolving faster than regulation, and the lack of a mechanism for dynamic updates means the definition could become obsolete quickly.
2. Prohibited AI Practices Guidelines – Key Criticisms
- Lack of Clear and Actionable Criteria
  - The guidelines provide vague descriptions of prohibited AI practices, making it unclear which specific AI systems are affected.
  - Key terms like "manipulative AI" or "exploitation of vulnerabilities" remain open to interpretation, creating legal uncertainty.
- Weak Enforcement Mechanisms
  - While certain AI applications are labeled "prohibited", the guidelines do not establish clear enforcement mechanisms.
  - Loopholes could allow high-risk AI to continue operating, as enforcement depends on how MSAs interpret the guidelines.
- Conflict Between Innovation and Regulation
  - The lack of binding legal weight could lead to inconsistent enforcement, while an overly strict interpretation could stifle AI research.
  - Businesses may fear liability if their AI applications fall into a gray area of prohibited practices, discouraging AI investment in the EU.
- Selective and Politically Influenced Bans
  - Some high-risk AI applications (e.g., real-time facial recognition) are not fully banned, raising concerns about incomplete consumer protection.
  - The focus on specific AI risks rather than comprehensive risk-assessment models suggests that lobbying influenced what was prohibited.
- Failure to Adapt to Emerging AI Threats
  - The guidelines lack flexibility, meaning they may not address future AI risks that emerge with technological advances.
  - The AI industry is changing rapidly, and static definitions of prohibited AI may fail to capture new, high-risk developments in the coming years.
Conclusion - Is this relevant to you?
The AI Act Guidelines aim to provide clarity but instead create ambiguity and enforcement inconsistencies. The definition of AI remains vague, making it unclear who falls under regulation, while the prohibited AI practices lack effective enforcement mechanisms.
However, the main focus is on high-risk and prohibited AI: if your system falls into neither category, your compliance burden is likely to be limited. Still, this debate shapes the future of AI governance, since poorly defined regulations could hinder innovation or fail to prevent harmful AI applications.
Ultimately, AI regulation must balance technological progress with ethical responsibility. A clear, adaptable framework is essential to foster innovation while ensuring accountability, because the way we regulate AI today will define its impact for years to come.
Do you want to find out more about the EU AI Act? Check out our other blog article: