Amid the “flooding the zone” US policy shifts, with agencies and regulators that impact tech (and beyond) being dismantled, it may have slipped your mind that regulation in Europe is finally on the move.
Let’s rewind.
2 February came and went with a whimper for the EU AI Act. Some AI practices are now banned, and some obligations are now mandatory. Luiza’s newsletter sums up what’s in and out. And one regulation does not preclude another; she warns:
Even if the EU AI Act doesn’t prohibit an AI system, it might still be deemed unlawful by other EU laws, for example, if it lacks a lawful basis to process personal data according to the GDPR.
Let’s explore what the Act means for your business. And before your eyes glaze over at “We’re not based in the EU”, or “We’re not based in the EU… anymore” (unwitting Brexit benefit?), remember these regs apply if EU citizens access the outputs of your AI systems. Using GenAI content on non-geo-locked websites or social channels? That’s you then.
EU AI Act: What’s out and what’s still in?
Banned: Systems using subliminal manipulation techniques
Exceptions: None. Well, why would you do that?
Banned: Inferring likelihood of a crime
Minority Report-style systems that predict who may commit a future crime
Exceptions: Predicting crimes in specific locations is still acceptable.
Banned: Untargeted facial recognition systems
Exceptions: Creating lookbooks of specific people isn’t banned. Voice scraping is OK. Photo scraping from the internet is also OK. Bad news if your face was ever published online, with or without consent.
Banned: Systems that detect emotions in the workplace
Exceptions: Medical support or safety. Flagging an exhausted heavy-machinery operator: allowed. Call centres can also use it to identify furious customers (perhaps, like HP customers, they’ve been intentionally forced to wait 15 minutes). I haven’t deciphered this logic, as that’s neither medical nor safety-related. Have telcos and others lobbied hard?
Banned: Some biometric uses
Using biometrics (like iris recognition, fingerprints, facial recognition) to deduce characteristics like political leanings, union membership or religion.
Exceptions: Permitted for medical diagnosis (good) and to improve job interview chances for marginalized groups (also good).
Banned: Biometric recognition in public places
Exceptions: Biometrics in private and digital spaces, like using face recognition to unlock devices or workplace doors. Law enforcement can bypass the ban for certain cases involving missing persons and trafficking (mainly good, though both could be abused).
Banned: Manipulating children
Systems targeting children that encourage addiction or simulate human responses. Unclear how this affects language tutor Duolingo's passive-aggressive suicidal owl or Facebook/Instagram's general approach to luring teens onto the social dopamine wagon.
Confused? You will be
Clarity and working examples are good. But much of the AI Act still has governance experts head-scratching. It makes more sense when you consider the compromise soup it took to get here.
The Act has been years in the making. Critics believe it doesn’t protect human rights and has been watered down after corporate lobbying from big tech, including OpenAI. In the other camp, big tech and AI startups claim this over-regulation will “smother Europe’s digital technology development, stifling innovation and growth.” Harsh.
The clarifications could be seen as diluting the rules under pressure from Vance and Trump’s anti-regulation stance. Following Vance’s toys-from-pram rant at the AI Action Summit, the European Commission withdrew plans for the EU AI Liability Directive, which could have made system developers responsible for harmful AI outputs.
With the fog circling compliance and mixed messages on the importance of responsible AI, you may be waiting for others to act first. If implemented like GDPR, expect warning wrist-slaps followed by massive fines for the big boys. Penalties reach up to 35 million Euros or 7% of worldwide annual turnover, whichever is higher. Remember when British Airways got fined £20M for a data breach?
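To make the “whichever is higher” arithmetic concrete, here’s a minimal sketch (the function name and example turnover are mine, purely illustrative, and none of this is legal advice):

```python
# EU AI Act ceiling for the most serious breaches (per the figures above):
# EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# A firm with EUR 1 billion turnover: 7% is EUR 70 million, so that figure applies.
print(f"EUR {max_ai_act_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```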
AI Literacy isn’t what you think
Now, let’s talk about AI Literacy. Because as business leaders, you should be thinking about and acting on it, even if your fingers aren’t in the weeds of tech or data security.
Article 4 of the EU AI Act mandates training for organizations creating AI outputs affecting EU citizens. Training people about the right way to use potentially dangerous systems seems like a pretty vital thing to do from a corporate responsibility POV.
As Luiza reports, misconceptions and fake news about AI literacy abound. While consultants may like to spin it as productivity training on creative AI use (you really should be creating more GenAI images of a baby peacock🦚), it’s actually about protecting rights and complying with the AI Act.
Right now, you’re best off hiring a governance specialist to do a belt-and-braces review. And when you’re ready to ‘unleash’ AI tools, bring in a productivity or creative trainer to do it the right way.
A living repository for AI literacy
The Commission published a ‘living repository’ for AI literacy (Brussels-speak for an updatable webpage linking to a non-accessible PDF, though I’m glad its repositories are living rather than dead🪦).
Don’t be put off by the basic presentation: it’s a treasure trove of how orgs are planning and implementing training.
Good examples include insurance firm Assicurazioni Generali, which ran an AI academy with tiers from basic to advanced. 40% of those trained went into AI roles. That’s a neat way to fill your gaping AI skills gap.
Telefónica appointed Responsible AI Champions and folded AI literacy into its digital inclusion work, educating vulnerable groups on AI risks.
There are micro and SMB examples, too. Creative agency Studio Deussen focused on practical GenAI training, including ethics and copyright. They could then educate clients on when and when not to deploy AI.
My LLM summary tells me common methodologies include e-learning, role-specific training, AI academy collabs with universities and experts like the Alan Turing Institute, and interactive formats (podcasts, videos, games). And, above all, let people apply their knowledge on free-roaming live projects. Sort of everything you already knew about effective corporate learning.
What to do now for AI Literacy?
My trusty LLM produced a tedious list of techniques which would have had Sherlock constipated. To get started, I recommend thinking about these three things first:
1. Where are you now?
Assess what people know, don’t know, think they know, and might need to know. The last is the trickiest. You may want to dial in a compliance expert.
2. What’s high risk right now?
Upweight compliance and ethics in your all-hands training. That’s what the AI Act is there for, and this grounding will help you manage potential risks and unintended consequences of AI deployment.
3. Make AI literacy part of the culture
It’s not a one-and-done dark Tuesday afternoon classroom session. Use all the good techniques: online, bitesize, classroom and inspiration sessions, and bring in outside experts when you lack knowledge, enthusiasm or both.
Deep breath. Let’s get started.
First, see what compliance levels you need to achieve. Free tools include Future of Life’s compliance checker (ungated) and Hogan Lovells’ AI Act and HR systems questionnaire (gated).
As legal and governance experts are keen to emphasise, your organisation’s circumstances will likely need further interpretation, so instruct your legal counsel if you have one. I know a fair few UK and EU governance folks who can help with compliance. Contact me for recommendations.
And if your business doesn’t touch the sides of the EU, remember AI regulation is a global concern. Tech lawyer Raymond Sun, aka Techie Ray, has completed his AI regulation map with updates on 195 countries.
If you’re working out your AI literacy plan, I run an AI Literacy fast-track programme to pin the jelly to the wall and work out where you are and what you need to do next.
Don’t put your head in the sand on this one. Helping your team understand how AI works and how to use it responsibly improves employee engagement and reduces risk.
According to Slack’s Workforce Index, those trained in AI are up to 19 times as likely to report that it improved their productivity, yet most people have spent less than five hours learning it.
Get planning to get ahead of the pack.🐺