In today’s hyperconnected world, information is spreading faster and further than ever before. Unfortunately, the same goes for disinformation.
What exactly is disinformation? Disinformation is false or misleading information that is intentionally deceptive. If it were to go viral, disinformation could create widespread, potentially dangerous misunderstandings about key issues like science, politics and health.
While disinformation is certainly nothing new, tactics are becoming ever more sophisticated – and convincing – in the digital age, making it harder and harder for people to trust what they see online. Take for example the recent experiment on Channel 4’s Dispatches documentary, Can AI Steal Your Vote?, which demonstrated the power of targeted (and false) AI-generated material to sway the voting decisions of previously undecided voters. Scary stuff!
This means that, as technology rapidly advances, it’s all about how we use it. In the case of disinformation, technology such as AI is often leveraged to exacerbate the problem. But we’re also seeing growing efforts where these same technologies are being used to fight the good fight.
Let’s unpack some of the ways that technology can be harnessed as part of a strategy to stop the spread of disinformation online.
Using AI to fact-check in real time
Just as AI and machine learning are being co-opted to spread false information, they’re also at the forefront of the battle against it. Helpfully, these tools can analyse vast amounts of information in real time, scanning for false or misleading claims.
For example, we’re seeing a rise in platforms such as Google’s Fact Check Explorer and organisations like Full Fact, which are using AI to spot and debunk false information before it spreads like wildfire. The undeniable benefit of using AI to develop fact-checking systems is its ability to work much faster than any human fact-checkers ever could, cross-referencing information against reliable data sources and making corrections in real time.
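To make the cross-referencing idea concrete, here’s a minimal sketch of automated claim matching. Real systems like those used by Full Fact rely on large language models and curated databases; this toy version just compares an incoming claim against a small, made-up set of already-reviewed claims using string similarity from Python’s standard library. The claims, verdicts and threshold below are all illustrative assumptions, not any platform’s actual data or API.

```python
# Toy sketch of claim matching, the core idea behind automated fact-checking.
# Everything here (claims, verdicts, threshold) is illustrative only.
from difflib import SequenceMatcher

# Hypothetical database of claims already reviewed by human fact-checkers.
FACT_CHECKS = {
    "the earth is flat":
        "False: overwhelming evidence shows the Earth is a sphere.",
    "vaccines cause autism":
        "False: large-scale studies show no link between vaccines and autism.",
}

def check_claim(claim: str, threshold: float = 0.8) -> str:
    """Compare an incoming claim against known fact-checks and
    return the best verdict if the match is close enough."""
    claim = claim.lower().strip()
    best_verdict, best_score = None, 0.0
    for known, verdict in FACT_CHECKS.items():
        score = SequenceMatcher(None, claim, known).ratio()
        if score > best_score:
            best_verdict, best_score = verdict, score
    if best_score >= threshold:
        return best_verdict
    return "Unverified: no matching fact-check found."

print(check_claim("The Earth is flat"))
```

A production system would replace the string similarity with semantic matching (so that rephrased claims still hit), which is exactly where the training-data caveats below come in.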
That being said, we need to be wary of relying too heavily on AI for fact-checking. It’s crucial to remember that AI systems are only as good as the data they’re trained on. This means that, if an AI is trained on biased or incomplete data, its fact-checks will follow suit. So, while AI can be a valuable tool in combating disinformation, it’s not a standalone solution.
Is seeing believing?
With technology advancing at an exponential rate, deepfakes – videos or images that have been manipulated to show something that didn’t really happen – are becoming astonishingly convincing.
The results can be entertaining, funny and entirely harmless. But equally, there’s real potential for dangerous, reputation-destroying consequences, for example if deepfakes were used to make a political figure or celebrity appear to say or do something controversial.
In an effort to stop these manipulated videos in their tracks, major tech companies like Facebook and Microsoft are putting their hopes in AI, training it to scan for inconsistencies in facial movements, lighting and audio. And platforms like YouTube and Twitter are already using AI tools to flag and remove deepfakes before they have a chance to go viral.
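To give a feel for the “inconsistency scanning” idea, here’s a deliberately simple sketch. Real deepfake detectors are deep neural networks trained on huge datasets of real and fake footage; this toy version just flags frames whose average brightness jumps abruptly, a crude stand-in for the lighting inconsistencies detectors look for. The frame data and threshold are invented for illustration.

```python
# Toy illustration of inconsistency scanning for manipulated video.
# A frame is represented as a flat list of pixel brightness values (0-255).
def mean_brightness(frame):
    """Average pixel brightness of one frame."""
    return sum(frame) / len(frame)

def flag_inconsistent_frames(frames, max_jump=30):
    """Return indices of frames whose brightness jumps suspiciously
    compared with the previous frame."""
    flagged = []
    for i in range(1, len(frames)):
        jump = abs(mean_brightness(frames[i]) - mean_brightness(frames[i - 1]))
        if jump > max_jump:
            flagged.append(i)
    return flagged

# Synthetic "video": steady lighting, then one anomalous frame.
video = [[100] * 4, [102] * 4, [101] * 4, [180] * 4, [103] * 4]
print(flag_inconsistent_frames(video))  # → [3, 4]
```

The anomalous frame is flagged on entry (the jump up) and again as the video returns to normal (the jump back down). Real detectors apply the same principle across far subtler signals: facial landmarks, audio sync and lighting direction.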
But it’s still relatively early days, and the ongoing arms race between deepfake creators and detectors is likely to get more and more complicated as time goes on. For instance, it’s not only increasingly difficult for the human eye to detect deepfakes, but the same goes for AI systems too. What’s more, there’s also the risk that legitimate content could be wrongly flagged as fake, which brings us into dark and murky territory when it comes to the issue of censorship.
Education is the way forward
Ultimately, one of the best ways we can overcome disinformation is through education, and digital media literacy needs to be a key focus going forward. This means that designers and developers in this space have a huge opportunity to help equip people with the awareness and skills needed to scrutinise information with a critical eye.
By providing access to useful resources and interactive tools, we can empower people to identify disinformation, verify sources and carefully analyse what they read and share. We should also note that, as part of these educational initiatives, we need to shine a light on how personal values, political beliefs and overall worldview can cloud judgement too.
We love how educational games like Bad News create a fun and engaging way for people to better understand disinformation tactics, while also providing strategies to counter them.
All in all, there’s no silver-bullet solution for preventing disinformation from circulating online. But if we are to reduce its harmful impacts on our society in a rapidly evolving digital landscape, our solutions need to keep pace.
Looking to use AI to support a good cause?
Our in-house team of friendly developers are here to guide you on how to use AI to solve challenges within your business. No matter how big or small your organisation, you’ll feel supported at every stage, from our initial sit-down chat all the way to getting your digital solution out into the world.
Get in touch today to find out more about how we work and how we can help turn your idea into a digital reality.