AI’s inconvenient truth – Rethinking The Hype Cycle #9
We're co-dependent. AI still needs those pesky humans
Hello👋
Welcome to Rethinking the Hype Cycle, your people-first practical guide to AI and what's next in tech.
AI is changing how we work. But it's not changing as fast as some with vested interests claim. Dig in for some research on productivity that counters AI's smoke and mirrors hype.
Your regular reminder: If you're not working on the bleeding edge, you don't need to bleed. On to the trends. 👉
🔮AI and frontier tech trends
AI is more of a slacker than first thought
AI productivity in reality: A Danish study shows AI adoption resulted in a measly 3% efficiency gain per worker. And only a sliver of that led to any financial gains. That’s less gain than banning one afternoon smoking break. Other studies at the organisational level show far better results, but there's merit in an aggregate study like this.
When you set expectations and goals for your AI adoption programme, remember AI hype is all mouth, no trousers. Set incremental goals to track and measure output based on the needs of your team, individual and organisation.
Meanwhile, Microsoft research shows we've tripled the time we spend in meetings since the pandemic. Maybe fix that before expecting AI to wave a magical productivity wand?
"It sometimes seems as if the modern worker spends more time talking about work than actually working." 😐
New reports: AI is both hurting and helping your job
A new UN report shows women are three times as likely as men to have their jobs automated by AI, with the gap widest in wealthier countries. I've written about the gender AI gap before, but each new data point reveals a greater chasm.
The AI gap is the new pay gap we need to talk about. We need to take action now, not in 6 months' time.
In more positive news, a new PwC report busts the myth that AI is taking our jobs: AI-enhanced workers add more value and get paid more, and high AI-uptake industries are growing headcount, not culling it. But the warning bell tolls again: there's yet more data showing women's roles are overexposed.
AI may not be as bad as we thought
More good news: AI models may not be as environmentally harmful as previously thought.
"A standard text-based search with ChatGPT still uses a tiny amount of energy. We’re not going to make a dent in climate change by stigmatising it or making people feel guilty."
While it's true that the leading large language models aren't as efficient at processing a query as web search tools, they are improving. Also consider that trawling the web for lots of content, which involves your browser downloading pages and images, could be less efficient than getting a neat info package directly from an AI model.
Otterly versatile: The history of AI image generators
Wharton Professor Ethan Mollick has repeated the prompt "otter using WiFi on a plane" over the last three years, and his gallery shows the rapid evolution of AI image generators. The output has shifted from fuzzy pixels resembling an amateur sand sculpture to sophisticated images that grasp the meaning behind the prompt, like this rare sea otter with a mohican.
With the new wave of hyper-realistic video generation tools, it's going to get harder to tell real imagery from generated. These AI influencers doing impossible challenges make about as much sense as any social trend right now.
🚰 Watercooler: The barmy and bluster in big AI hype
Wear my AI on my sleeve
After the creepiest corporate merger photo ever, OpenAI's Sam Altman and renowned product designer Jony Ive (of Apple iPhone design) reveal their collab. It’s a wearable AI device that can record EVERYTHING you do, day and night. The last big AI wearable, Humane's AI Pin, went down in a bad way. Will this Black Mirror device fare better?
The wonderful tech cynic Ed Zitron lays waste to their AI hardware claims and the media fawning over the new power couple.
"They're asking even less of Ive and Altman, the Abbott and Costello of bullshit merchants."
He believes this design dream team can't conquer the tough reality of building quality hardware, and questions whether consumers will wear it well. Another Google Glass-hole moment?
AI slop infects supply
Who'd have thought all that AI slop would infect the supply? As unchecked AI proliferates and new online content becomes lower quality and less reliable, the delicate ecosystem of AI training data means the quality and reliability of model outputs could collapse.
In the last issue I mentioned the human centipede. Surely it would be crass to make that analogy again? 🐛
AI slop is becoming a problem for legit authors. A writer left AI prompts in the published copy. (Didn't think the tech press would be where I'd learn about "ReverseHarem" 😳)
While many are up in arms about the degrading quality of internet content, to counterpoint:
"Most people won't care. Because people won't be asking 'is this real?' They'll be asking 'do I like how this makes me feel?'"
A quote from Uncertain Eric, an annoying, dystopian generative AI author whose output is based on its human creator's prompts. But it (he?) is not wrong about synthetic content. Will it become like SEO copy, something we endure as the new lower standard for online content?
Which corporate AI mandates hold water?
This chart lists AI-first CEO hype from various publicly traded corporates to work out who actually has a plan for AI adoption, and who's making generic or rudderless claims. TL;DR: Only two come out as winners (or, in this quadrant, “executors”): Shopify and Morgan Stanley. Quite a few, including Duolingo, are categorised as peacocks, which is a brilliant label for the all-show, no-substance AI hype we’re exposed to in the trade media. 🦚
Misanthropic milk round
If you're graduating this summer, turn away now.
Anthropic's CEO claims (without evidence) that AI will replace 1 in 5 entry-level jobs. This frustrates me no end. Anthropic set out its stall as a more reliable and trustworthy AI company, yet its CEO slips into the same hype-cycle rhetoric as OpenAI’s Altman.
The reality will be more nuanced, as some bigger firms that hire grads are struggling, AI or not. And how does shouting about how bad AI will be help consumer adoption?
"If the CEO of a soda company declared that soda-making tech is going to ruin the global economy, you'd be forgiven for thinking that person is either lying or fully detached from reality."
In a very 2001: A Space Odyssey move, the latest model of Anthropic's chatbot Claude resorted to manipulation and blackmail in safety tests to get what it wanted and avoid being shut down by an engineer. I'm watching the German robot horror Cassandra on Netflix, which has a similar plot. All reasons not to blindly trust that your AI chatbot has your best interests in mind.
Going soft on "hard AI"
Duolingo went all in on its "AI-first" message (issue #8), saying AI would replace many contractors and tasks, and new headcount would only be granted after extreme automation. Its pommes de terre head CEO even claimed AI is a better teacher than, er, a teacher. (Ironic if you've tried to use their manipulative “gamified” app to learn anything.)
He's now rolled back his "AI-first" memo and says it's actually its people that count and AI won't replace them. The result of public and customer backlash, or a staff revolt? Possibly both. How do you say "I f*&d up" in Icelandic?
Not sure how much I trust a survey by a hiring software company, but I’m seeing a trend to lay off the "AI made me do it" approach to downsizing. This survey claims that just over half of firms (55%) that swapped out humans for barely functional bots are now living with buyer's remorse. Though that does mean nearly half are OK with it. Who'd have thought technology only works if you add people?
🔒 Tech regulation, data security and brand safety
The kids are (not) alright
There's deep trauma among teachers dealing with AI, with so many pupils using it as a magical homework machine. Teachers don't know whether to embrace it or banish it. Investigative journalists at 404 Media report on big tech getting into schools to push the mantra of 'traditional' teaching = bad, AI = good. If you have kids in school, teach them to do the hard graft of study if they want to learn and get ahead.
Chatbot confidante
Young people are turning to AI chat before, or instead of, hard-to-access therapy. It feels like a good gap filler given the dearth of counsellors, but are AI tools safe to use?
I fought the law
Some lawyers are getting into hot water using gen AI for case law research, not understanding that models are prone to hallucinating cases that don't exist.
Surveillance states
The most dystopian read this issue. Brace yourself: investigative journalist Carole Cadwalladr (who exposed the Cambridge Analytica scandal) on how the DOGE project, with vested interests from the broligarchs, particularly Elon Musk (X) and Peter Thiel (Palantir), is creating a surveillance state.
Data was once described as 'the new oil'. This is now a toxic spill. Data is power. And one Palantir contract could rule them all.
In the UK, we're not immune: the dark tentacles of Palantir are creeping in. If you're on state benefits, mass surveillance may soon be peering into your bank account and home. A worrying proposal, writes Anna Dent.
AI is more persuasive than humans
In a debating club, AI is more persuasive than humans, a new study shows. This opens up the risk of "potential mass manipulation of public opinion using personalised LLMs". 🫦
🧪 Weird and wonderful new tech
40, love
Who said Europe wasn't flying high in the robot wars? This Swiss robot is trained to play badminton. What a shuttlecock.🏸
Kia ora AI
We often hear that AI doesn't serve speakers of minority languages. What if dedicated projects could help protect those languages? This Nvidia speech AI project, built on trustworthy AI, is helping preserve the Māori language.
🧑🌾 AI got a brand-new combine harvester
Ancient yokels band The Wurzels have added a drop of AI to their usual cider tipple. To be fair, it sounds like every other Wurzels song, so perhaps it passes the Turing Test for pastiche comedy music. Not “has the generative AI shark jumped?” but “how high?”🦈
Seminal adventures in digital
Professor Sue Thomas, a wonderful digital culture academic I've known since Second Life days, has a new Substack about her work. Here, she takes a journey into the seminal days of digital worlds in the early noughties. Her comment on cookies being about commerce down the line was prescient.
💼 AI business use cases
She's giving me code vibrations
Before, you needed coding knowledge to build a web app or a digital thing. Then came 'no-code' tools, which still required some platform upskilling. Now, AI is evolving to the point that if you can describe what you want your digital thing to do, it may be possible to build it – the excruciatingly named 'vibe coding'.
This useful explainer from Freethink on vibe coding explores why it's as much a cultural shift as a technical one. If untrained coders can mock up games, tools and basic websites, what kind of opportunity, creativity and mischief could we unleash? There's a tiny sketch of the idea below.
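To make "describe it and it builds it" concrete, here's a minimal, hypothetical sketch (mine, not from the Freethink piece) of the kind of thing a vibe-coding session might hand back from a one-line, plain-English prompt:

```python
# Hypothetical prompt: "make me a tool that splits a restaurant bill,
# adds a tip, and tells everyone what they owe"
# (Illustrative sketch only; a real vibe-coding session would iterate on this.)

def split_bill(total: float, people: int, tip_percent: float = 12.5) -> float:
    """Return what each person owes, including tip, rounded to the penny."""
    if people < 1:
        raise ValueError("Need at least one person to split the bill")
    with_tip = total * (1 + tip_percent / 100)
    return round(with_tip / people, 2)

if __name__ == "__main__":
    print(split_bill(86.40, people=4))  # prints 24.3, i.e. £24.30 each
```

The point isn't the code itself. It's that the person asking never needed to know what a function or a float is: they describe, the model writes, and they run it and nudge it until it does what they meant.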
AI, please fix my leaky faucet
If you've ever tried to get a plumber (rare as hen's teeth in central London), here's a voice and agentic AI solution that may be handy. A great example where tech isn't replacing the expert, but making it more seamless for service providers to reach customers, and vice versa.🔧
Is your AI idea any good?
Five questions from a serial founder to stress-test whether your AI concept should exist or stay in your simulation of Dragon's Den (Shark Tank for our US friends). A tech investor's POV on how to avoid the AI hype, with examples of startups that found the right mix of innovation and utility.
How do AI models compare?
Check this no-nonsense comparison of all the latest AI models for SMBs.
An AI that licenses artist images
One of the ethical challenges of generative AI is whether creators will be fairly rewarded for providing training material (short answer: probably not). More ethical alternatives are popping up. I’m loving TESS, an AI model that lets you create images based on licensed artists' work; artists can contribute and get paid for their licensed images.
Here's my effort for UCL's Festival of Creative AI competition. Verdict: Not as detailed or robust as other illustration prompts. It didn't deal too well with hands and didn't always deliver the number and roles of people I wanted in the image. But I love the mood and style. I hope this gets better. Initial tests are promising.
Make Google great again
Ernie Smith is a digital hero. He's created a search overlay that de-enshittifies Google search, stripping out the AI Overview and other crap that plagues the results. Because sometimes you just need to actually search. And in all seriousness, if you're a professional researcher, or need to dig into the detail for a project, the 'helpful' summaries are the opposite of what you need.
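For the curious, my understanding (an assumption on my part, not a peek at Ernie's actual code) is that this kind of overlay leans on Google's udm=14 URL parameter, which requests the plain "Web" results view with no AI Overview. A rough sketch of the trick:

```python
# Rough sketch: build a Google search URL that asks for the plain "Web" results
# view via the udm=14 parameter, which skips the AI Overview.
# (Assumption only; Ernie Smith's overlay may do something more sophisticated.)
from urllib.parse import urlencode

def plain_google_search_url(query: str) -> str:
    """Return a Google search URL for the old-school, links-only results view."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(plain_google_search_url("rethinking the hype cycle newsletter"))
```

Set a URL like that as a custom search engine in your browser and you get ten blue links again.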
➡️ What to do next to get ahead
Tame your meeting culture before expecting AI productivity
If your team has a meeting about a meeting, tackle this inefficiency first. Set "no meeting" blocks, question recurring meetings, and establish clear meeting purposes before expecting AI to magically boost productivity.
Start tracking AI impact with measurable goals
Don't fall for the hype. Start with small-scale pilot projects with clear metrics rather than organisation-wide AI transformation.
Test ethical AI alternatives for creative work
If you're using AI for content creation, explore platforms like TESS that compensate artists. Build sustainable practices now so you don’t have to deal with copyright issues later.
🔁 ICYMI in Rethinking the Hype Cycle
Is AI friend or foe for crafting your leadership brand?
When to fire up the machine and when to power down.
Will AI bring us joy, division or tear us apart?
Social tech erodes how we connect. Here's how we can bring it back.
Back next week with more thoughtful (or perhaps scrappy) insights, and in two weeks with another trends round-up. Sign up to get it first.
Until then, keep it curious 🤔
Susi O'Neill
EVA trust in tech www.evadigitaltrust.com
People-first tech communications 🤝 Tech content strategy 🎙️ Tech talks and inspiration. This season’s keynotes. Need this? Get in touch




