☀️ AI Morning Minute: EU AI Act
The world's first comprehensive AI law, and it probably applies to you
For years, AI companies built whatever they wanted and figured out the rules later. The European Union decided that era is over. In 2024, it passed the first major law that tells companies what they can and can’t do with artificial intelligence, and the enforcement deadline for most of it is August 2026.
What it means
The EU AI Act is a law that regulates the development and use of artificial intelligence across all 27 European Union member states. It uses a risk-based system that sorts AI applications into four categories: unacceptable risk (banned outright), high risk (heavy regulation), limited risk (transparency requirements), and minimal risk (mostly unregulated).
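The four-tier system is essentially a lookup from risk category to legal treatment. Here is a minimal sketch of that mapping; the example applications are illustrative guesses for each tier, not the Act's official annex lists:

```python
# A minimal sketch of the EU AI Act's four risk tiers.
# The "examples" entries are illustrative assumptions, not official annex lists.
RISK_TIERS = {
    "unacceptable": {
        "treatment": "banned outright",
        "examples": ["government social scoring", "subliminal manipulation"],
    },
    "high": {
        "treatment": "heavy regulation",
        "examples": ["AI hiring tools", "credit scoring"],
    },
    "limited": {
        "treatment": "transparency requirements",
        "examples": ["chatbots that must disclose they are AI"],
    },
    "minimal": {
        "treatment": "mostly unregulated",
        "examples": ["spam filters", "video game AI"],
    },
}

def treatment_for(tier: str) -> str:
    """Return how the Act treats a given risk tier."""
    return RISK_TIERS[tier]["treatment"]

print(treatment_for("unacceptable"))  # banned outright
```

The point of the structure: the obligation attaches to the *use case*, not to the underlying technology, which is why the same model can land in different tiers depending on how it's deployed.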
It was proposed in April 2021, passed the European Parliament in March 2024, and published in the Official Journal in July 2024. The majority of its provisions take effect on August 2, 2026.
Why it matters
Some AI uses are now illegal in Europe. The law bans government-run social scoring systems (ranking citizens by behavior), real-time remote biometric identification, including facial recognition, in publicly accessible spaces (with narrow law enforcement exceptions), AI that manipulates people through subliminal techniques, and systems that exploit vulnerable groups like children or people with disabilities. These bans started applying on February 2, 2025.
It reaches beyond Europe’s borders. Any company that sells AI products or services to people in the EU has to comply, regardless of where the company is based. An American startup selling an AI hiring tool to a German company falls under the EU AI Act. The penalties are steep: up to 35 million euros or 7% of global annual revenue, whichever is higher. For context, GDPR (the EU’s data privacy law) works the same way, and it reshaped privacy practices worldwide.
It creates special rules for general-purpose AI models like GPT, Claude, and Gemini. Providers of these models must publish detailed technical documentation, comply with EU copyright law, and provide summaries of training data. Models classified as posing “systemic risk” face additional requirements including adversarial testing, incident reporting to the European Commission, and cybersecurity assessments.
Simple example
A city builds a new highway. Before the highway existed, anyone could drive anywhere at any speed. Once the road is built, the city adds speed limits, lane markings, traffic lights, and rules about who can drive what kind of vehicle. The highway didn’t stop people from driving. It set boundaries around how they drive.
The EU AI Act is the traffic code for artificial intelligence. AI companies can still build and sell products. They just can’t drive 120 in a school zone anymore. And the speed cameras are going live in August.