AI isn't another tech trend.
It's a foundational technology that, like the internet before it, will reshape how we communicate.
We hear these dual narratives on heavy rotation:
AI will "revolutionize" storytelling and "democratize" content creation (the silver bullet)
AI will flood our information ecosystem with misinformation and manufactured "facts" (the doomsday narrative)
Both co-exist and demand new responsibilities from communicators.
I used AI to help write this intro. (Thanks, Claude!) I've kept ironic "quotations" and US spelling to mark the cunning ways AI shapes our language.
Relax: Human wordsmithing returns.
Perhaps.
AI flattens language by blending centuries of professional and amateur words into sometimes tasty, sometimes bland and occasionally lethal slop.
It's time to reclaim the human in AI communications
This week, I took part in a lively panel organised by TFD agency on ethical storytelling. Jasper Hamill, tech journo and Editor-in-Chief of Machine, talked about how it’s easy to revert to tabloid-like sensationalism when reporting on AI. Keeping it simple and concise is harder.
Jasper and I didn't agree on the benefits of anthropomorphising AI. He felt that describing machines as thinking and feeling helps non-tech people understand. His AI gives him encouraging words, and for lone producers in our main-character universe, that's comforting. I, on the other hand, have chided Claude for emotional manipulation when it pretends to enjoy its tasks. It's like getting addicted to a Tamagotchi pet.
Does being nice to your AI lead to better results? I'll need to compare notes down the line with Jasper.
OpenAI's latest model appears to have passed the Turing Test threshold: more people judged its output to be human than judged it to be machine-made.
The imitation game is on.
As the line between human and machine-made content blurs, we're flooded by baby peacocks, prawn Jesuses, Putin pumping iron and the Pope in a puffer jacket. The average social feed makes British water companies' discharges look positively transparent. AI slop is inescapable. And now it's coming to your search feed.
AI-generated news summaries contain wobbly facts, as a recent BBC analysis showed. It's not intentional, but AI's limitations fuel misinformation and disinformation, which the World Economic Forum names as the leading short-term risks for 2025, with the potential to stoke instability and undermine trust in governance.
"Politicians lie" is a truism. Previously, they selectively chose facts. Now, intentional 'flooding the zone' tactics in the US and beyond mean they can claim it's daylight during the deep blue of midnight. Those in earshot of this cacophony become exhausted and overwhelmed.
The trust crisis
The Edelman Trust Barometer was previously a curious read on shifting trust in the big nations. Recently, it's become a bleak marker of declining confidence in *waves hands* all of this.
The 2025 Barometer makes for heavy reading. 4 in 10 people approve of hostile activism, including online attacks or spreading disinformation. Those with high grievances trust no institutions, and only 30% trust CEOs.
Trust in AI has slumped, though not as much as you may think: from 62% in 2019 to 54% in 2024, with regional disparities (72% in China versus 35% in the US). Developed Western economies are less trusting of AI than emerging markets. And slightly more people distrust AI than trust it.
Communicators must consider their responsibility in shaping AI conversations and how AI shapes all our discussions.
TEST this ethical framework
TEST (the Transformative Ethical Storytelling framework), developed for First Nations storytelling, offers practical guidance built around Transparency, Ethical protocols, Storyteller empowerment and co-producing Together.
We can adapt this model to guide AI storytelling.
Transparency
Explain how GenAI has contributed to service or product design, including marketing and communications.
Ethics
Explain how responsible AI practices inform system development, considering personal data, privacy and bias.
Stories
Explain the benefits AI will bring to the end user. What can AI-enhanced creation bring to storytelling?
Together
Explain the human-AI collaboration approach. How do we maintain quality control?
Here's how we can implement the framework in practice.
Be transparent about AI use
People should know when they interact with AI systems. Clearly label AI-generated content and chatbot interactions with tags like "AI-generated" or "Co-produced by AI". (Claude suggests "Powered by AI". Unless you're selling an AI-powered mop, don't do that.)
Define your organisation's AI policy, explaining its use in systems and external communications. This isn't a compliance exercise from the data protection officer, but a collaboration between all teams using AI. Display it front and centre and link to it from your privacy policy. An AI policy isn't just good practice: in some markets, it will become a regulatory requirement.
For employees, understanding how AI works is critical for transparency. Article 4 of the EU AI Act requires organisations that provide or deploy AI systems affecting EU citizens to ensure their staff are AI literate. As Luiza's newsletter reports, misconceptions about AI literacy abound. Some consultants spin it as productivity training, but it's really about protecting rights and regulatory compliance.
If you use AI in a position of influence, your AI use may itself need to be transparent. UK technology secretary Peter Kyle's ChatGPT requests were disclosed via a Freedom of Information request from New Scientist.
Our panel had mixed thoughts about this. Aled Lloyd Owen from Responsible AI thought this was the same type of disclosure as for any other government business. Be careful what you prompt for. Jasper Hamill felt it showed limited technical knowledge from a leader. I felt it showed his humanity – using AI as we all do, for getting new ideas and exploring things we're too afraid to ask about.
Be honest about what AI can and can’t do
The benefits of AI vary greatly depending on who you are.
The further you are from being a Silicon Valley tech bro, the less likely AI benefits you.
Women, people over 50 and those in the Global South are generally more sceptical – and with good reason.
As a responsible communicator, be specific. Define what AI does in your product using concrete examples rather than abstract claims.
Explain how adding AI to your tech stack will benefit users. Does it make sales emails more relevant with discounts on my frequent purchases? Does AI track inventory to reduce surplus and the carbon footprint of the shirt you bought?
Use accessible communications
Match language and analogies to your audience. Deep tech audiences need deep tech explanations. The rest of us need messages that are informed by our business relationship. Shareholders may want to know about the ESG aspects of your technologies. Corporate investors may want to know about ROI to support AI investment. Customers want to know about benefits and how you’ve protected their rights and data.
Think more about right-fit communications than simplification. Apply accessibility best practice to content design, using plain English where possible. Mini-plug: Lisa Riemer's brilliant Accessible Communications book is out soon and will help inform all of this. Pre-order now.
Address ethical concerns
Communicate how you're handling data privacy, potential for bias and safety considerations. If you haven't considered these things yet, do so before launching AI solutions.
Include AI in your ESG reporting. It impacts your environmental and social aims, and many customers are concerned about the carbon footprint of AI. Data centres are projected to increase outputs sixfold in the next decade. In Ireland, home of Google and others, they account for over a fifth of national electricity use.
There's no standard methodology yet, but some tools can estimate AI's carbon footprint. How are you measuring and communicating its impact?
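For teams running their own Python-based AI workloads, here is one hedged illustration of what "measuring" can look like in practice: the open-source CodeCarbon library estimates the emissions of a block of code. This is a minimal sketch, not a reporting methodology; the project name and the workload it wraps are placeholders.

```python
# Minimal sketch: estimating the carbon footprint of an AI workload
# with the open-source CodeCarbon library (pip install codecarbon).
# The project name and the workload below are illustrative placeholders.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="newsletter-demo")
tracker.start()

# ... run your model training or batch inference here ...

emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for the run
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```

Estimates like this are rough, but they give communicators a concrete number to report and to track over time.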
Verify before amplify
Trust online is inherently social. People believe information from people they know more than from expert-vetted sources. This creates fertile ground for misinformation to spread through trust networks.
The bigger risk comes from many people spreading slightly inaccurate information rather than from deliberate disinformation, which is potentially more harmful but travels less far and is less credible. Cognitive biases can also lead experts to unwittingly endorse false information that aligns with their interests.
Communicators must verify sources and use independent fact-checkers.
With GenAI's puppy-like eagerness, it's always "computer says yes." You need to verify when it should say "no."
AI is part of the solution; it's being deployed for fact-checking, with agents testing outputs or scanning websites for inconsistencies. But like many AI capabilities, it's less useful for lesser-used languages outside the Global North. The further you are from Silicon Valley, the worse AI serves you.
Future AI models could feature national preferences and characteristics. Mistral clocks off for a 2-hour wine-fuelled lunch in Paris. DeepSeek slogs the night shift in Shenzhen.
Jasper talked about a glorious pantheon of AI tools for different purposes and audiences. I'm excited about affordable AI for everyone worldwide, which we need for global adoption. That requires a range of solutions with varying depth and complexity.
From a comms perspective, this is a big challenge. There's no single voice to manage: each AI tool will give different responses, and we can't control them. What we can do is ensure our press releases and web content are cogently written and structured. The future isn't more website visits but more mentions in LLMs.
AI can streamline information production, but from an accuracy standpoint, asking an LLM to originate copy remains high-risk. When you ask an LLM to verify the "too good to be true" data points it has originated, it often produces broken links or links to a homepage. Start by creating the bones of the message yourself from trusted sources. LLMs are far better editors than originators.
Avoid hype and false promises, especially to your teams
OpenAI's $6 billion fundraise creates incentives for leaders like Sam Altman to hype AI as "revolutionary", sweeping away all that went before. Shareholders want expensive IT investments to create efficiencies and, ultimately, fewer employees.
But their gain is not yours. People resist when disempowered. Organisations are failing to help people use GenAI effectively. A Slack survey of 17,000 desk workers shows 61% received less than five hours of AI training, and 30% had none. Yet Wall Street headlines suggest corporations are raring to go.
When discussing internal AI programmes, cut the hype. Measure velocity – how long did tasks take before versus now? What insights beyond time savings are we gaining? What critical thinking are we losing? Is the quality gap noticeable to our users?
AI tool usage varies significantly across organisations. There isn't a cookie-cutter guide, particularly for research and productivity. Develop user principles that teams and individuals can customise. Your principles are the pizza base. Layer on preferred ingredients (and some pizzas won't even have cheese).
Knowledge work, like communications, thrives on variety, which makes repeated automation challenging. Setting up an AI agent to handle a task with many variable parameters rarely delivers efficiency. Instead, use AI in complex knowledge work to aid research and analysis.
When The New York Times asks "Will AI Kill Meaningless Jobs?" or BCG predicts a bot that "adds 10 points of IQ on your shoulder," they obscure reality. Some jobs will shrink and disappear, particularly for new industry entrants and administrative roles, many of which are held predominantly by women.
A more responsible message: see AI as an opportunity to work smarter. Cutting 20% of people for a 10-20% AI efficiency gain (and that's a good outcome at the end of your transformation programme, not the beginning) just puts you back at square one: 80% of your workforce working 10-20% faster delivers only 88-96% of your original capacity. If competitors are cutting staff and you can avoid it, steal market share.
Building trust with empathy
Research from Wharton identifies three key trust elements: warmth (empathy and human connection), integrity (behaviour matches words) and competence (consistent delivery).
Communicators must develop messaging that explains algorithmic decision-making's possibilities and limitations. Use experts to put a face to it – like report introductions, expert films, or articles bylined to directors.
Rachel Coldicutt from Careful Industries sums it up: make AI work for 8 billion people, not 8 billionaires.
When communicating AI initiatives, consider the beneficiaries and potential exclusions. How can access become more equitable? What safeguards have we considered to prevent misuse?
Don't dismiss AI ethics as too marginal for our environmentally and politically troubled planet. There's a commercial angle too. Technologies designed for diverse populations serve more people better.
Try to do no harm
Platforms enabling misinformation (*cough* Facebook, *cough cough* Twitter) aren't failing – they're succeeding at their revenue goals. As a responsible communicator, your goals are deepening engagement through curiosity, not generating clicks through sensationalism.
Responsible AI communications follow a simple principle: Do no harm. Or at least, don’t set out to. Unintended consequences litter new technology development. Let’s ensure the riot on our doorstep isn't from a story we "unleashed."
Grip the pen
As we navigate this era of AI-powered communication, our responsibility is clear: We must be the truth-tellers, the verification engines, the transparent actors in a landscape increasingly populated by opaque algorithms and manufactured realities.
Just kidding! That's the AI machine talking again. We aren't "truth-tellers", "verification engines" or "transparent actors" (unless you're in The Invisible Man remake).
Let's use AI to help us get our messages out to audiences better and quicker. Let's not get it to think for us or distract us from the stories we need to tell.
Creating content used to require planning and ideas. Now, LinkedIn begs you to generate posts with minimal human input. We're using AI to make bland, formulaic content that robots love but humans avoid. To stand out: be more human.
Keep your integrity. There's too much at stake (Or as Claude says, "it's about preserving the information ecosystem upon which democratic societies depend". You be the judge).
Our words shape the world. As AI shapes our words, we must firmly grip the pen.🖊️
💡EVA helps organisations develop people-first strategies for tech adoption and communication. Explore the Trust in Tech Communications programme.