Understanding the EU AI Act: Key Takeaways and How to Comply

Discover how the EU AI Act is reshaping AI regulations with a risk-based approach, impacting businesses of all sizes, including SMEs and startups. Learn about key compliance requirements, what the implementation looks like, and how GAIA can empower your legal team to navigate these new obligations with ease.

On July 12, 2024, the Artificial Intelligence Act, Regulation (EU) 2024/1689 ("EU AI Act"), was published in the Official Journal of the European Union, establishing the world's first comprehensive legal framework for AI regulation and compliance. Simply put, the approach is risk-based: the higher the risk associated with a particular AI system, the greater the legal obligations attached to it. The primary goal is to ensure AI technologies are safe, respect fundamental rights, and are trustworthy.

Key Takeaways:

  • Understanding the legal framework of the EU AI Act
  • What the implementation will look like
  • The impact on businesses, especially SMEs and startups
  • How GAIA can help your legal team while following the new EU AI Act

1. Regulatory Framework of the Artificial Intelligence Act

One of the most important features of the EU AI Act is the categorization of AI systems based on risk levels and the corresponding safety requirements.

  • Unacceptable Risk
    AI practices that fall under this category are prohibited due to their potential to cause significant harm to individual rights or societal values. For example, AI systems that manipulate human behaviour to bypass free will or exploit vulnerabilities based on age, social status, or disability fall into this category. These systems are banned under the EU AI Act.

  • High Risk
    AI systems in critical sectors—such as healthcare, law enforcement, or transportation—where failures could endanger human safety or fundamental rights, are considered high risk. These systems face stringent requirements for risk management, data governance, transparency, human oversight, and cybersecurity measures, including comprehensive documentation to ensure accountability.

  • Limited Risk
    AI applications that require some level of transparency are classified as limited risk. This includes, for example, AI systems that interact with humans, such as chatbots. Users must be informed when they are engaging with AI rather than another human being. Generative AI systems, like those that create text (e.g. ChatGPT) or images, must use machine-readable formats to indicate that the content is AI-generated, ensuring transparency with outputs like deepfakes.

  • Minimal Risk
    AI systems that do not fall into the above categories are considered minimal risk and are not subject to stringent requirements under the EU AI Act. These may include AI-driven video games or spam filters, which have minimal impact on user rights or safety. However, transparency obligations and existing laws still apply.

The regulations apply to providers, deployers, importers, distributors, and product manufacturers of AI systems that are placed on the EU market or whose outputs are used within the EU, regardless of where those actors are established.

Specifically, providers are individuals or organizations that develop or commission the development of an AI system or general-purpose AI (GPAI) model and place it on the market under their name or trademark.

An AI system is broadly defined as a system that, with some level of autonomy, processes inputs to generate outputs (e.g., predictions, recommendations, decisions, or content) that can influence physical or virtual environments. GPAI models—such as foundation models that can perform a variety of distinct tasks—are also subject to regulations, requiring transparency, accountability, and governance depending on their risk potential.
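
To make the tiering concrete, here is a minimal illustrative sketch in Python of how a legal team might run a first-pass triage of its AI systems. The tier labels, the EXAMPLE_OBLIGATIONS mapping, and the triage helper are simplified assumptions for internal bookkeeping, not the Regulation's wording or a complete list of duties.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers (labels simplified for illustration)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations"
    LIMITED = "transparency duties"
    MINIMAL = "no specific duties"

# Hypothetical example obligations per tier -- an internal triage aid,
# not a restatement of the Regulation's full requirements.
EXAMPLE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not place on the EU market"],
    RiskTier.HIGH: ["risk management system", "conformity assessment", "data governance",
                    "technical documentation", "human oversight", "cybersecurity"],
    RiskTier.LIMITED: ["inform users they are interacting with AI",
                       "machine-readable marking of AI-generated content"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def triage(system_name: str, tier: RiskTier) -> None:
    """Print a first-pass obligation list for an AI system in a given tier."""
    print(f"{system_name} ({tier.name}):")
    for obligation in EXAMPLE_OBLIGATIONS[tier]:
        print(f"  - {obligation}")

triage("customer-support chatbot", RiskTier.LIMITED)
```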

Learn all about the 25 most important AI terms in our AI vocabulary for legal professionals

2. Implementation of the EU AI Act

The EU AI Act includes clear compliance obligations for companies developing or using AI systems. For instance, there are pre-market and post-market monitoring requirements to manage risks and ensure ongoing compliance. National supervisory authorities and a European Artificial Intelligence Board will oversee compliance and enforcement across the EU.

The obligations for high-risk AI systems will be:

  • Risk Management: Implement processes to identify, assess, and mitigate potential risks.
  • Conformity Assessments: AI systems must be certified for compliance before entering the market.
  • Data Governance: Ensure high-quality, bias-free data is used in training and testing.
  • Transparency: Provide clear documentation explaining how the AI system works and what data it relies on.
  • Human Oversight: Establish processes for human intervention if the AI system poses risks.
  • Cybersecurity: Ensure robust security measures to prevent unauthorized access and protect data.

Limited-risk AI systems must comply with:

  • Transparency Requirement: Users must be informed when interacting with AI rather than humans.
  • Labeling: Outputs generated by AI (e.g., text or images) should be clearly marked as AI-generated to avoid confusion or deception, as illustrated in the sketch below.
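
As an illustration of the labeling duty, the following minimal Python sketch wraps AI-generated text in a machine-readable marker. The JSON envelope and the wrap_ai_output helper are hypothetical: the Act requires machine-readable marking but does not prescribe this format, and real deployments typically rely on established provenance or watermarking standards.

```python
import json
from datetime import datetime, timezone

def wrap_ai_output(text: str, model_name: str) -> str:
    """Attach a hypothetical machine-readable marker to AI-generated text."""
    return json.dumps({
        "content": text,
        "ai_generated": True,          # explicit machine-readable flag
        "generator": model_name,       # which system produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(wrap_ai_output("Here is your draft clause ...", "example-llm"))
```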

Minimal-risk AI has no specific regulatory obligations:

  • General Transparency: Some systems may still be required to disclose that AI is being used, though the overall compliance burden is low.

Non-compliance with the Act can result in significant penalties, depending on the nature of the violation (an illustrative cap calculation follows below):

  • Up to €35 million or 7% of global annual turnover (whichever is higher) for engaging in prohibited AI practices.
  • Up to €15 million or 3% of global annual turnover (whichever is higher) for violations of other obligations, including those applying to high-risk AI systems.
  • Up to €7.5 million or 1% of global annual turnover (whichever is higher) for supplying incorrect, incomplete, or misleading information to authorities.

Beyond financial penalties, enforcement actions may include suspending the use of AI systems, mandatory recalls, or even ceasing operations.
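
The "whichever is higher" wording means the applicable ceiling is simply the maximum of the fixed amount and the turnover-based amount. Here is a minimal Python sketch of that calculation, using an assumed turnover figure for illustration:

```python
def penalty_cap(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Return the fine ceiling: the fixed cap or the turnover-based cap, whichever is higher.

    Illustrative only -- actual fines are set case by case by the competent
    authorities and will usually fall below these ceilings.
    """
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Example: a company with EUR 800 million in global annual turnover facing the
# ceiling for prohibited AI practices (EUR 35 million or 7%, whichever is higher).
print(f"Maximum fine: EUR {penalty_cap(35_000_000, 0.07, 800_000_000):,.0f}")  # EUR 56,000,000
```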

Grace Period for Compliance

The Act entered into force on August 1, 2024, but its provisions apply only after transition periods designed to give businesses, SMEs, startups, and member states time to prepare for full compliance. Bans on prohibited practices apply from February 2, 2025, rules for general-purpose AI models from August 2, 2025, and most remaining obligations, including the bulk of the high-risk requirements, from August 2, 2026. Within this period, businesses should adjust their practices, especially those developing or using high-risk AI systems, to comply with the new regulations.

3. Impact on Businesses Worldwide

The EU AI Act will have far-reaching implications for companies operating in or exporting to the EU market. The Act emphasizes ethical AI development by focusing on transparency, safety, and non-discrimination. Businesses need to implement robust risk management and governance structures to meet these requirements, especially when deploying high-risk AI systems.

Generally, the EU AI Act is also seen as a blueprint for global AI regulation, potentially influencing international standards in much the same way as the EU's General Data Protection Regulation (Regulation (EU) 2016/679). Businesses worldwide must comply with EU regulations if they wish to access the European market, so a comparable global influence can be expected.

4. Special Regulations for SMEs and Startups

The EU AI Act acknowledges the unique challenges faced by SMEs and startups. To avoid overburdening smaller businesses, the Act includes targeted support and flexibility:

  • Reduced Administrative Burden: SMEs benefit from simplified conformity assessments for high-risk AI systems, and can test AI systems in regulatory sandboxes, which allow for innovation under more relaxed rules.
  • Access to Support: Financial incentives, EU funding programs, and training resources are available to help SMEs meet regulatory requirements while promoting innovation.
  • Longer Transition Periods: SMEs and startups may benefit from phased implementation with extended deadlines for compliance.
  • Proportionate Penalties: Penalties for non-compliance are adjusted based on the size and turnover of the business, preventing smaller companies from being disproportionately harmed.
  • Encouraging Innovation: The Act strikes a balance between regulation and innovation, ensuring that AI-driven startups can thrive without excessive regulatory pressure.
  • Harmonized Market: The single regulatory framework provides legal certainty for businesses looking to scale across the EU.

5. Implementation Steps for Compliance

To comply with the EU AI Act, companies should follow these steps (a minimal tracking sketch follows the list):

  • Identify Your AI System’s Risk Level: Determine whether the AI system you use falls under the high-risk category, as these will be subject to more stringent regulations such as certification requirements.
  • Prepare for Conformity Assessments: Undergo necessary assessments to ensure your AI system complies with EU standards.
  • Establish Governance Frameworks: Implement a robust governance system to ensure data governance, transparency, and human oversight, especially for high-risk systems.
  • Ensure Documentation and Transparency: Properly document all AI systems and inform users when interacting with AI.
  • Post-Market Monitoring: Continuously monitor your AI systems to detect and mitigate new risks.
  • Utilize Regulatory Sandboxes: SMEs and startups can leverage these environments to test AI systems under flexible regulatory conditions.
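
For teams that want to track these steps internally, here is a minimal Python sketch of a readiness checklist. The AISystemRecord structure and the step names mirror the list above but are our own illustrative assumptions, not a procedure defined in the Act.

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceStep:
    """One item in a hypothetical internal EU AI Act readiness checklist."""
    name: str
    done: bool = False
    notes: str = ""

@dataclass
class AISystemRecord:
    """Illustrative internal record tracking one AI system's readiness."""
    system_name: str
    risk_level: str
    steps: list[ComplianceStep] = field(default_factory=lambda: [
        ComplianceStep("Identify risk level"),
        ComplianceStep("Prepare conformity assessment"),
        ComplianceStep("Establish governance framework"),
        ComplianceStep("Document system and inform users"),
        ComplianceStep("Set up post-market monitoring"),
    ])

    def open_items(self) -> list[str]:
        """Return the names of steps that are not yet completed."""
        return [step.name for step in self.steps if not step.done]

record = AISystemRecord("resume-screening tool", risk_level="high")
record.steps[0].done = True
print(record.open_items())
```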

6. Streamlining Compliance with GAIA

At GAIA, we are committed to staying ahead of the curve of this technological revolution, ensuring that our clients benefit from the latest advancements in AI-enabled legal tech. Our solutions are tailored to the unique needs of each corporate legal department, ensuring personalized and effective implementation. From contract creation and management to building effective internal legal workflows, GAIA helps you automate your legal tasks. For businesses looking to streamline compliance with the EU AI Act, GAIA offers a full-suite legal management system that integrates seamlessly with your operations.

Ready to learn more about how you can benefit from GAIA's AI in your daily legal operations?
