⚖️ AI Regulation: European Policy

EU AI Act: Complete Guide to Europe's Revolutionary Artificial Intelligence Regulation

📅 February 19, 2026 ⏱️ 9 min read

What Is the EU AI Act?

On August 1, 2024, Regulation (EU) 2024/1689 — known as the EU AI Act — officially entered into force, making the European Union the first major jurisdiction in the world to adopt a comprehensive legislative framework for artificial intelligence. It is the most ambitious technology regulation since the GDPR — and many analysts consider it equally transformative.

The law covers virtually every AI system operating in the EU, regardless of its country of origin. Just as the GDPR forced tech giants to change practices globally, the AI Act is expected to create a "Brussels Effect" — a de facto global regulation through the power of the European market.

EU AI Act Timeline

  • February 2020: AI White Paper — European Commission
  • April 21, 2021: Official regulatory proposal
  • December 9, 2023: “Marathon” trilogue negotiations — political agreement
  • March 13, 2024: European Parliament vote (523 for, 46 against, 49 abstentions)
  • May 21, 2024: Unanimous Council approval
  • July 12, 2024: Published in the Official Journal
  • August 1, 2024: Entered into force
  • July 10, 2025: GPAI Code of Practice published

The 4+1 Risk Categories

The core of the AI Act is based on a risk-based classification system inspired by product safety standards. Every AI system is categorized into one of four basic levels, plus a special category for general-purpose models:

🚫 Unacceptable Risk

Complete ban — Social scoring, real-time biometric identification in public spaces, AI behavior manipulation

⚠️ High Risk

Strict obligations — Healthcare, education, recruitment, critical infrastructure, policing, justice. Requires Fundamental Rights Impact Assessment (FRIA)

ℹ️ Limited Risk

Transparency obligations — Deepfakes, chatbots: users must know they're interacting with AI

Minimal Risk

No regulation — Spam filters, AI in video games, low-risk chatbots. The majority of AI applications fall here

What's Completely Prohibited

Article 5 prohibitions were the first provisions to take effect — just 6 months after entry into force, meaning from February 2025. They include:

  • Social Scoring: Ranking citizens based on personal characteristics or socioeconomic status — a system inspired by China's Social Credit System
  • Real-time biometric identification: Live facial recognition in public spaces, except for targeted exemptions (terrorism, missing persons)
  • Manipulation: AI systems exploiting vulnerable groups or using subliminal techniques
  • Facial image scraping: Mass collection from internet/cameras for recognition databases
  • Emotion recognition: In workplace and educational settings

La Quadrature du Net noted significant gaps in the exemptions — for example, sector-specific social scoring (like benefit eligibility assessment systems in France) may still be permitted.

High-Risk AI: Strict Requirements

High-risk AI systems form the regulatory core. They cover critical sectors where AI can substantially impact citizens' lives:

High-Risk Sectors

  • Healthcare: Diagnostic tools, AI-powered medical devices
  • Education: Grading, student assessment, institutional access
  • Employment: Automated CV screening, hiring, termination
  • Critical infrastructure: Energy, transport, water supply
  • Policing: Predictive policing, evidence assessment
  • Migration: Asylum application assessment, border control
  • Justice: AI judicial assistants, evidence analysis

For each high-risk system, the following is required:

  • Conformity Assessment before market placement
  • Fundamental Rights Impact Assessment (FRIA) — mandatory ex ante analysis
  • Human oversight — human intervention capability must always exist
  • Technical documentation — complete recording of training data, algorithms, limitations
  • Continuous monitoring throughout the system's lifecycle
  • Right to explanation — citizens can request explanations of decisions affecting them

A significant criticism: the majority of high-risk systems are subject to self-assessment by providers themselves, rather than independent third-party auditing — something several legal scholars consider insufficient.

General-Purpose AI Models (GPAI)

One of the most significant additions, introduced in 2023 after ChatGPT's explosive popularity, is the regulation of General-Purpose AI models. These models — such as GPT-4, Claude, Gemini, LLaMA — didn't fit the original risk framework.

Two Tiers of GPAI Regulation

Basic obligations (all GPAI):

  • Publish training data summary
  • Copyright compliance policy
  • Technical documentation for downstream providers

Enhanced obligations (systemic risk: cumulative training compute >10²⁵ FLOPs):

  • Model evaluations and adversarial testing
  • Risk assessment and mitigation (bias, security)
  • Serious incident reporting
  • Adequate cybersecurity level
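The two-tier scheme above can be sketched as a simple threshold check. This is an illustrative simplification: the 10²⁵ FLOPs figure is the Act's presumption threshold for systemic risk, but the Commission can also designate models on other criteria, so real classification is not purely arithmetic.

```python
# Illustrative classifier for the two GPAI tiers described above.
# The threshold is the Act's *presumption* of systemic risk; actual
# designation can rest on additional criteria.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def gpai_tier(training_compute_flops: float) -> str:
    """Return the obligation tier for a general-purpose AI model."""
    if training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "systemic risk: enhanced obligations"
    return "baseline obligations"

print(gpai_tier(3e24))  # baseline obligations
print(gpai_tier(2e25))  # systemic risk: enhanced obligations
```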

Open-source models enjoy reduced obligations — they need only publish a training data summary and copyright policy. Closed-source models must meet broader transparency requirements.

The GPAI Code of Practice, published on July 10, 2025, contains three chapters: transparency, copyright, and safety. Participation is voluntary, but it represents the most practical way for providers to demonstrate compliance.

Fines: The Biggest Pressure Lever

The AI Act establishes a three-tier sanctions scale rivaling GDPR fines:

€35M or 7% of global turnover (whichever is higher)

For violating prohibitions (Article 5) — social scoring, biometric identification, manipulation

€15M or 3% of global turnover (whichever is higher)

For non-compliance with provider and deployer obligations — high-risk and GPAI systems

€7.5M or 1% of global turnover (whichever is higher)

For providing inaccurate, incomplete, or misleading information to authorities

For SMEs and startups, the lower amount between the percentage and the fixed threshold applies — a provision designed to avoid crushing emerging businesses.
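The fine logic can be expressed as a short calculation. A minimal sketch, assuming the three tiers and the SME rule described above; the tier names and function are illustrative, not terms from the regulation:

```python
def max_fine(global_turnover_eur: float, tier: str, is_sme: bool = False) -> float:
    """Upper bound of the administrative fine for a given violation tier."""
    caps = {
        "prohibition":    (35_000_000, 0.07),  # Article 5 violations
        "obligations":    (15_000_000, 0.03),  # provider/deployer duties
        "misinformation": (7_500_000,  0.01),  # misleading info to authorities
    }
    fixed, pct = caps[tier]
    turnover_based = global_turnover_eur * pct
    # Large companies face the higher of the two amounts;
    # SMEs and startups face the lower one.
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A company with €1bn global turnover violating a prohibition:
print(max_fine(1_000_000_000, "prohibition"))        # €70M (7% exceeds the €35M floor)
print(max_fine(1_000_000_000, "prohibition", True))  # €35M (SME rule: lower amount)
```

The asymmetry is the point: for large firms the percentage cap dominates, scaling the penalty with size, while the SME rule inverts the comparison to protect smaller companies.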

New Governance Bodies

The AI Act creates four new bodies at the European level:

  • AI Office: The “headquarters” — coordinates implementation across all member states, oversees GPAI providers, can request information or open investigations
  • European Artificial Intelligence Board: One representative per member state — advises the Commission, ensures consistent application
  • Advisory Forum: Representatives from industry, startups, SMEs, civil society, academia — provides expertise
  • Scientific Panel: Independent experts — technical evaluation, risk alerts, linking regulation to scientific findings

In parallel, each member state must designate national competent authorities for market surveillance and complaint handling. Together, these bodies form a multi-level oversight system.

Implementation Timeline

The AI Act doesn't apply all at once — it follows a phased introduction depending on the type of rule:

Implementation Phases

  • +6 months (Feb. 2025): “Unacceptable risk” bans — ALREADY IN EFFECT
  • +9 months (May 2025): Codes of practice
  • +12 months (Aug. 2025): General-purpose AI (GPAI) rules
  • +24 months (Aug. 2026): General rules — the majority of obligations
  • +36 months (Aug. 2027): Certain high-risk AI obligations
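The phase dates above are simple month offsets from entry into force. A minimal sketch of that arithmetic; note the Act's exact application dates fall on the 2nd of the month (e.g. 2 February 2025), so this rounds slightly:

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a number of calendar months."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

phases = [
    (6,  "Prohibitions (unacceptable risk)"),
    (9,  "Codes of practice"),
    (12, "GPAI rules"),
    (24, "General rules"),
    (36, "Remaining high-risk obligations"),
]
for offset, label in phases:
    print(add_months(ENTRY_INTO_FORCE, offset).isoformat(), label)
```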

At the time of writing (February 2026), the prohibitions are already in effect, the GPAI rules have applied since August 2025, and the majority of the regulation comes into full force within the next six months.

Extraterritorial Application: The “Brussels Effect”

Like the GDPR, the AI Act applies extraterritorially. This means any company — American, Chinese, or from any country — offering AI services to users within the EU must comply.

Professor Anu Bradford from Columbia University calls this the "Brussels Effect": European rules become the de facto global standard, as companies prefer a single set of rules rather than creating separate versions for each market. This is already happening: OpenAI, Google, Meta, and Anthropic are intensively preparing for full AI Act compliance.

Legal analysts note that the AI Act will serve as a reference point for governments and regulatory bodies outside the EU designing their own frameworks — from Brazil to Canada, from India to South Korea.

Regulatory Sandboxes: Room for Innovation

The AI Act isn't just a “rule book.” It includes provisions for regulatory sandboxes — controlled testing environments where developers can build and test AI systems under supervision before market release.

Member states are encouraged to create sandboxes, with priority access for SMEs and startups. The idea is to balance regulatory safety with technological innovation — a vital factor if Europe wants to remain competitive against the US and China.

Criticism and Reactions

The AI Act has drawn criticism from all sides:

From civil society: Amnesty International stated the law fails to account for basic human rights principles, leaving inadequate protection for migrants, refugees, and asylum seekers. La Quadrature du Net called it “tailor-made” for the tech industry and police forces. A broad coalition of 80+ organizations (EDRi) concluded it leaves significant gaps in privacy and non-discrimination protections.

From startups: While some welcomed the clarity, many European startups argue the additional regulation makes them less competitive compared to American and Chinese companies. Academics and analysts worry about compliance costs, especially for smaller companies.

From creators: In August 2025, 38 global creators' organizations (reported by Le Monde) condemned the Code of Practice and GPAI guidelines for inadequately protecting intellectual property. They called the outcome a “betrayal” of Europe's stated goals.

From academia: Legal scholars describe the AI Act as a “medley” of product safety and fundamental rights frameworks, noting that heavy reliance on provider self-assessment leaves enforcement gaps. A recent analysis (February 2026) introduced the concept of the "Synthetic Outlaw" — systems that satisfy formal rules while producing prohibited outcomes.

Exemptions: What It Doesn't Cover

  • Military / National Security: Full exemption — AI systems developed or used exclusively for defense and national security purposes fall outside the Act's scope
  • Research: Systems used exclusively for scientific R&D are exempt
  • Non-professional use: Personal AI projects don't fall under the Act
  • Minimal risk: Due to maximum harmonization, member states cannot impose stricter rules

These exemptions, especially for military use, are particularly contested — especially as AI is increasingly deployed in autonomous weapons systems.

What It Means for Greece

Greece, as a member state, is required to designate national competent authorities for the AI Act. This means:

  • Creating or assigning an existing body (e.g., the Data Protection Authority or a new agency) for enforcement oversight
  • Greek companies developing AI for healthcare, education, or public administration must implement FRIA and technical documentation
  • Greek sandboxes can help startups experiment without fear of fines
  • Greece's National AI Strategy will need alignment with AI Act requirements
  • Opportunity: companies that comply early gain a competitive advantage in the European market

What Comes Next

The AI Act isn't the end. It's the beginning. Next steps include:

  • August 2026: Full application of general rules — the biggest compliance test begins
  • August 2027: High-risk rules take effect — first significant fines expected
  • CEN/CENELEC JTC 21: Development of harmonized technical standards — critical for conformity assessments
  • High-risk list expansion: The Commission can add new applications without amending the regulation
  • Interaction with GDPR / DSA / DMA: How the EU's multiple regulations will work together
  • Global impact: Brazil, Canada, India, South Korea are designing their own AI frameworks inspired by the AI Act

Democratic legitimacy, practical enforcement, and the ability to adapt to AI's rapid evolution will determine whether the EU AI Act becomes the "gold standard" of regulation or a bureaucratic obstacle. For now, it is the most institutionalized AI framework in the world — and the world is watching.

Tags: EU AI Act · AI regulation · European Union · artificial intelligence · GDPR · compliance · technology policy · AI governance