AI’s getting bigger but not always better – Rethinking the Hype Cycle #4
There is no AI in team. Until now.
Welcome to Rethinking the Hype Cycle, your practical guide to AI and what's next in tech for people-first leaders.
👋 Welcome new readers from the Responsible AI Leaders newsletter!
We live and work in radically uncertain times. Many ‘rules’ of business are being turned on their heads by brutal new overlords and broligarchs. Rethinking the Hype Cycle helps you survive the hype with a measured approach to tech adoption.
Your regular reminder: If you're not working on the bleeding edge, you don't need to bleed.
On to the trends. 👉
AI and frontier tech trends
There is no AI in team. Until now.
New research from Wharton professor Ethan Mollick and Procter & Gamble shows AI can become a teammate, not just a productivity tool. Teams worked better than individuals when using AI, and teams plus AI were less siloed and happier.
This is one of the most reported studies in recent times. It deepens understanding of AI as more than a productivity tool. When used right, this boring glue improves collaboration.
Case in point: academic publishing is complex, with peer review determining research accuracy. The process is fraught and driven by favours among academics who must 'publish or perish.' Enter AI, which will "push the human enterprise forward," Wiley's Head of AI claims. Setting aside this hideous way to describe the expansion of knowledge, could an academic enhanced with an AI reviewer be better at verifying accuracy and originality?
What do people think about AI?
The Ada Lovelace Institute and Alan Turing Institute's latest report has dropped. It shows disparities in AI adoption: minorities and people on lower incomes aren't seeing the benefits. AI awareness is high, and many know about the risks of self-driving cars, yet few understand AI's role in benefit assessments, where bias is most likely to be experienced firsthand.
Thanks to a savvy freedom of information request, we know what UK science and technology secretary Peter Kyle thinks about AI and searches for on ChatGPT.
It seems he’s short on policy ideas. Perhaps he should chat with Rachel Coldicutt. She’s asking critical questions the government needs to answer and understand about AI efficiency.
“It may sound like a silver bullet but we can’t just shoehorn it into existing services.”
This isn't knocking the use of AI for public services, as many ethical AI folks are wont to do. It's about getting the institutions investing our money in big tech to consider it in the round and avoid another failed big IT project.
We live in the long shadow of the Post Office Horizon (aka Fujitsu) scandal, which saw innocent people lose livelihoods, homes and in some cases their lives over failed tech they were convinced they weren't smart enough to understand. AI can't become the same 'black box'. Policymakers must understand the purpose and implementation of AI, and audiences (in this case, citizens) need to understand its benefits and how it works for us before it's foisted upon us. The same goes for bringing AI into your workplace.
AI gets you hired, then fired
This student is in hot water for building an LLM to cheat tech tests. He claimed, "Most human intelligence work will be obsolete in two years."
His LLM worked, and he got hired by Amazon. Surely this is the future-leaning skill big tech should value? As the job market tightens, it seems backward to forbid candidates from using automation in hiring processes already saturated with decision-making automation, especially when the job itself requires demonstrating your automation skills.
We need more lessons in AI
Meanwhile, a UK classroom in the foothills of a nuclear power station is helping kids plan for a high-tech future. They're using Character AI to reimagine Darwin and create visuals. This academic deep dive shows how GenAI can support education. Imagine classes where students can analyse data, create interactive timelines, and get real-time feedback.
AI education will be critical for students to compete globally. In Beijing, students in elementary school will have mandatory AI education covering basics, homework assistance and ethics. We're all going to need to play catch-up.
More juice from me last week on shifting education from chalkboards to chatbots to help the next generation succeed. And how not to get outshone by a robot cat. 😼
Investors want to see AI with staying power
Surprising stat: despite a $6 billion fundraising round, OpenAI wasn't last year's top-funded AI business (clue: the winner is a data platform hidden in plain sight). With the flood of AI-led SaaS services in the market, investors are now looking beyond solutions; they're seeking market staying power.
Moore's Law predicted computing power would double roughly every 18 months. A new study shows AI models are doubling the length of tasks they can complete every seven months. Longer tasks mean more complexity and end-to-end capability, aka the era of agentic AI.
We're in rapid adoption mode, but without measuring effectiveness, we'll hit deep disillusionment. Every person and organisation needs a nuanced plan for AI tool integration. There's no one-size-fits-all, and no button you can press to get all the efficiency benefits at once.
Watercooler: The barmy and the bluster in big AI hype
I've exhausted myself absorbing the endless tech-bro AI noise, as usual. Here are the tasty bits.