EU AI Act: What the First Wave of Enforcement Means for Operators

The European Union's AI Act formally entered its enforcement phase this month, marking the first time a major jurisdiction has imposed binding obligations on artificial intelligence systems. For companies operating in Europe, the implications are real and immediate — if somewhat narrower than the sweeping framework's headlines suggest.
The first obligations to bite are those on prohibited practices and high-risk AI systems. Prohibited practices — including real-time biometric surveillance in public spaces and AI-based social scoring — carry fines of up to €35 million or 7% of global annual turnover, whichever is higher.
High-risk systems, which span everything from CV-screening tools to credit scoring algorithms, must now undergo conformity assessments before deployment. This requires documented technical testing, human oversight measures, and registration in the EU database.
Legal teams across the continent are scrambling. The Act's definitions of "high-risk" are broader than many companies initially appreciated. A seemingly routine HR tool that uses algorithmic ranking, for instance, may qualify — particularly if it influences employment decisions at scale.
About the Author
Sarah Chen, Senior Technology Correspondent
Sarah is a senior technology correspondent with 12 years covering the AI and semiconductor industries. Previously at the Financial Times.