By Everlyne W Muriithi
Like any beginner, I started out frustrated: the tools often seemed to give me everything except what I actually wanted.
On 17th January 2026, I joined fellow journalists for IAWRT's Digital Training (Module 6), a session that felt less like a class and more like a wake-up call. The training pushed us to confront a question many of us are already grappling with in our newsrooms and timelines.
The session, facilitated by Nelly Moraa, explored one of the most urgent questions in modern journalism: How do we protect truth in an era of artificial intelligence and viral misinformation? From the start, it was clear this wasn’t going to be just another technical training. It was a conversation about responsibility, ethics, and the future of our craft.
We unpacked AI policy, digital verification, and the evolving role of journalists in a world where content travels faster than facts. One powerful truth stood out: no tool, no matter how advanced, can verify user-generated content with 100% certainty. Yet relying on the "human eye" alone is no longer enough either. Verification today is a hybrid process, blending human judgment, digital tools, and AI-assisted analysis.
As we went deeper, we examined how misinformation is created and amplified online, from fake tweets and manipulated images to coordinated inauthentic behaviour (CIB), where networks of accounts push the same narratives in synchronized patterns. What struck me most was how convincingly these tactics can disguise themselves as organic public opinion.
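To make "synchronized patterns" a little less abstract, here is a minimal Python sketch of one signal investigators look for: near-identical wording posted by supposedly unrelated accounts. The sample posts and the 0.85 similarity threshold are invented for illustration; real CIB analysis also weighs posting times, account age, and network links.

```python
# Minimal sketch: flag near-duplicate wording across accounts, one
# simple signal of coordinated inauthentic behaviour (CIB).
# The sample posts and the 0.85 threshold are illustrative only.
from difflib import SequenceMatcher
from itertools import combinations

posts = [  # (account, text) pairs -- hypothetical data
    ("@acct_one", "Leaders must act NOW on the fuel levy scandal!"),
    ("@acct_two", "Leaders must act now on the fuel levy scandal."),
    ("@acct_three", "Lovely weather in Nairobi this afternoon."),
]

def similarity(a: str, b: str) -> float:
    """Return a ratio in [0, 1]; 1.0 means identical after lowercasing."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for (acct_a, text_a), (acct_b, text_b) in combinations(posts, 2):
    score = similarity(text_a, text_b)
    if score > 0.85:  # suspiciously similar wording
        print(f"Possible coordination: {acct_a} / {acct_b} (score {score:.2f})")
```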
Nelly guided us through the core questions every journalist must now ask:
Who created this content?
What is being claimed?
When and where did it originate?
Why is it being shared now?
From there, we explored practical ways to trace provenance, assess the authenticity of social media profiles, and use tools like reverse image search, metadata analysis, mapping platforms, and fact-checking databases such as Africa Check and PesaCheck. We learned how weather data, timestamps, and visual landmarks can help confirm whether a video or image truly belongs to the moment and place it claims.
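On the metadata point, a small sketch using the Pillow imaging library shows the kind of clues a photo can carry, such as capture time, camera model, and GPS coordinates. "photo.jpg" is a placeholder path, and since metadata can be stripped or forged, anything it yields is a lead to corroborate, never proof on its own.

```python
# Sketch: read EXIF metadata (capture time, camera, GPS) from an image
# with Pillow (pip install Pillow; ExifTags.IFD needs a recent version).
# "photo.jpg" is a placeholder. EXIF can be stripped or edited, so treat
# any value as a lead to corroborate against other evidence.
from PIL import Image
from PIL.ExifTags import TAGS, IFD

img = Image.open("photo.jpg")
exif = img.getexif()

# Basic tags (DateTime, Make, Model, ...) live in the main IFD;
# map numeric tag IDs to readable names before printing.
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# GPS data sits in its own sub-IFD, present only if the device recorded it.
gps = exif.get_ifd(IFD.GPSInfo)
if gps:
    print("GPS tags found:", dict(gps))
```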
AI, we discovered, plays a powerful supporting role. It can summarize large volumes of information, highlight repeated text or suspicious patterns, and surface inconsistencies across documents and timelines. But the session emphasized a critical boundary: AI is a guide, not a judge. It can point us toward clues, but it must never replace primary evidence, editorial judgment, or human verification. Every AI-assisted insight must end with one question: What do the original sources say?
We also explored Google Pinpoint, a tool that allows journalists to organize and search official documents over time. By comparing language across months or years, it becomes possible to spot contradictions, policy shifts, and the gaps between public statements and reality.
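Pinpoint itself is a hosted tool, but the underlying idea of comparing language across time can be sketched with Python's standard library alone. The two statements below are invented placeholders; the point is simply that a line-by-line diff makes shifts in wording jump out.

```python
# Sketch: surface wording changes between two versions of an official
# statement using only Python's standard library. The texts are invented
# placeholders standing in for documents gathered months apart.
import difflib

statement_2024 = """The ministry will complete the road project by December.
Funding is fully secured."""

statement_2025 = """The ministry intends to complete the road project in phases.
Funding is being sought from development partners."""

diff = difflib.unified_diff(
    statement_2024.splitlines(),
    statement_2025.splitlines(),
    fromfile="statement_2024",
    tofile="statement_2025",
    lineterm="",
)
print("\n".join(diff))
```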
What this training reaffirmed for me is that verification is not a single action; it is a process. A mindset. A discipline.
In a digital landscape where misinformation is engineered to look credible, the journalist’s role is more vital than ever. Our work is not just to report what is trending, but to slow the story down, interrogate it, and rebuild it on a foundation of evidence.
Truth still depends on human judgment.
AI simply gives us sharper tools to defend it.


