If hearing the word “prohibition” brings to mind the moonshine, speakeasies, and bootleg liquor of 1920s America, you’re not alone. It conjures images from ‘Boardwalk Empire’ or ‘The Untouchables’. But today’s prohibition isn’t about gin or whiskey – it’s about AI. With the EU’s new “prohibited AI” rules in force, entire categories of technology are effectively off-limits. And like the bootleggers of old, organisations might find themselves tiptoeing into illicit territory, facing hefty fines if regulators catch them ‘running’ banned algorithms.

Below, we explore how this new era of prohibition could impact your business and why these rules matter – even if you’re not based in the EU.

Why “prohibited AI” matters

Under these new rules, certain uses of artificial intelligence are banned outright. The rationale is to protect individuals from the most invasive and manipulative forms of AI. Offenders may face fines of up to €35 million or 7% of their annual worldwide turnover, whichever is higher (yes, you read that correctly – 7%!).

Even more eye-opening is that many prohibited AI scenarios will likely involve personal data processing, triggering the GDPR. So, companies could see a potential 4% fine on top of the 7% – totalling a jaw-dropping 11% of worldwide turnover. For a business turning over €1 billion, that is up to €110 million in combined exposure. Even harder to swallow than raw moonshine.

No point getting emotional about it…

A prime example of what could now be off-limits is AI intended to infer people’s emotions in the workplace. Think of call centres that use software to monitor employees’ tone of voice or facial expressions to see whether they’re “truly engaged” or “properly representing the brand.” Or systems that track typing speed to see if an employee is distracted or unhappy.
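
To make that concrete, here is a deliberately crude sketch of the kind of typing-cadence “engagement” scoring these rules are aimed at. Everything in it – names, fields, thresholds – is hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of typing-cadence "engagement" scoring.
# All names and thresholds are invented for illustration only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class KeystrokeSample:
    chars_per_minute: float
    longest_pause_seconds: float  # longest idle gap in the sampling window

def label_engagement(samples: list[KeystrokeSample]) -> str:
    """Crudely map typing cadence to an emotional-state label.

    This is exactly the sort of inference that is easy to code and hard
    to justify: a phone call, a thoughtful pause, or a slow afternoon
    all look identical to "disengaged" here.
    """
    avg_speed = mean(s.chars_per_minute for s in samples)
    avg_pause = mean(s.longest_pause_seconds for s in samples)
    if avg_speed < 120 or avg_pause > 90:  # arbitrary cut-offs
        return "disengaged"                # a label that can harm the employee
    return "engaged"
```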

Why the concern? Because this kind of real-time monitoring and inference can be overly intrusive and may create a chilling effect among staff (and interestingly, this is consistent with the approach taken by data protection authorities on employee monitoring via video surveillance). It also risks misinterpretation – AI may incorrectly label someone as “uninterested” or “lazy” based on flawed data or biased algorithms. Under the Prohibited AI rules, the EU wants to clamp down on such methods when they’re used in ways that can cause harm to workers’ well-being or autonomy.

Unconscious manipulation

Another area in the spotlight is AI that subliminally manipulates people or exploits their vulnerabilities to cause harm. You might have heard the term “dark patterns” to describe tricky user-interface designs that nudge people into buying more, sticking around on a gaming platform longer, or revealing personal data. Now, layer on AI that learns exactly which colour scheme, near-miss “I’ll beat it next time” game mechanic, or timing of pop-ups will best hook a specific user into constantly having one more go or spending more on the next pay-to-win upgrade – and you have an AI-based “dark pattern.”
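
To see how easily an ordinary engagement optimiser shades into this territory, here is a minimal sketch of a per-user bandit that learns which pop-up timing best converts. The class, arms, and reward signal are all assumptions for illustration, not any real product’s design:

```python
# Hypothetical sketch: an epsilon-greedy bandit that learns, per user,
# how long after a near-miss to show a "one more go?" offer.
import random
from collections import defaultdict

ARMS = [5, 30, 120]  # candidate delays in seconds after a near-miss

class PopupTimingBandit:
    """Reward = the user buys an upgrade or starts another game."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.pulls = defaultdict(int)   # times each delay was tried
        self.wins = defaultdict(int)    # times it "converted"

    def choose_delay(self) -> int:
        if random.random() < self.epsilon:
            return random.choice(ARMS)  # explore a random timing
        # exploit the delay with the best conversion rate so far
        return max(ARMS, key=lambda a: self.wins[a] / (self.pulls[a] or 1))

    def record(self, delay: int, converted: bool) -> None:
        self.pulls[delay] += 1
        self.wins[delay] += int(converted)
```

Nothing in that loop is “subliminal” in isolation – the legal exposure arises when it is tuned, user by user, to exploit a vulnerability such as loss-chasing.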

Under these new rules, any system that’s been designed to push users into harmful or excessive behaviour might stray over the line and be viewed as a prohibited “subliminal” use of AI. This is a fine line to tread and raises many questions, especially in retail, as to how “harmful or excessive behaviour” will be construed.

This doesn’t just cover what you do on your own platform; if the AI you use is recognised as manipulative or harmful in an EU country, that’s trouble – no matter where you are in the world.

But my company isn’t in the EU – why worry?

Here’s the catch: the EU AI Act has extra-territorial reach. That means if the output of your AI system is used in the EU – even if you’re based in, say, California or Singapore – you could be caught by these rules. Think of it like the GDPR’s global influence: if your AI touches EU users or employees, regulators can come knocking.

Congratulations: you’re all Europeans now!

Key takeaways

  1. Map Out Your AI Use: Pinpoint where and how your company uses AI, especially in HR, customer interactions, and user-interface design – a simple internal register helps (see the sketch after this list).
  2. Check for Potential “Emotion Detection” or “Subliminal” Features: If your company uses analytics to gauge staff morale or user emotions, or operates systems (social media, games, retail environments) where engagement is closely tied to revenue, evaluate and document the risk and the mitigation measures or guardrails you put in place.
  3. Review Vendors: Don’t forget that third-party tools could also land you in hot water. If you’re outsourcing to an AI vendor, ensure they’re aware of – and compliant with – these rules.
  4. Prepare for Overlap with GDPR: Confirm that your data processing aligns with privacy regulations. Where AI and personal data meet, you might face a double-fine scenario if something goes wrong.
  5. Stay Updated: The AI regulatory landscape is evolving fast. Keep an eye on the European Commission’s official guidance and any new clarifications around “Prohibited AI” definitions.
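
As a starting point for the first two takeaways, here is a minimal sketch of what such an internal register might look like. The fields, flags, and example entry are illustrative assumptions, not a legal checklist:

```python
# Hypothetical sketch of an internal AI-use register.
# Fields and risk flags are illustrative, not a legal checklist.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    vendor: str | None             # None if built in-house (takeaway 3)
    purpose: str
    processes_personal_data: bool  # GDPR overlap (takeaway 4)
    infers_emotions_at_work: bool  # potential prohibited practice
    personalises_engagement: bool  # potential "subliminal" exposure
    mitigations: list[str] = field(default_factory=list)

    def needs_review(self) -> bool:
        return self.infers_emotions_at_work or self.personalises_engagement

register = [
    AISystemRecord(
        name="CallCoach",                    # hypothetical system
        vendor="ExampleVendor Ltd",          # hypothetical vendor
        purpose="Agent call quality scoring",
        processes_personal_data=True,
        infers_emotions_at_work=True,
        personalises_engagement=False,
    ),
]
for record in register:
    if record.needs_review():
        print(f"Escalate for compliance review: {record.name}")
```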

Final thoughts

For many of us, AI has been both a game-changer and a regulatory minefield. With these new Prohibited AI rules, the stakes are higher than ever. If you’re using AI in ways that stray into emotion detection or subliminal manipulation, it’s time to do a top-to-bottom compliance review – because a huge fine is not a headline any of us want to see in the morning.