From data scandals to AI deepfakes: How misinformation has come full circle
Article by: Jacob Mutton

Back when I was still a teenager in 2019, I watched the Netflix documentary The Great Hack, which exposed how data-driven political persuasion was shaping elections. The film detailed how companies like Cambridge Analytica harvested personal Facebook data to micro-target voters with tailored political messaging, influencing key elections like the 2016 US presidential race and Brexit.
Ironically, I always think back to this documentary as the initial spark that ignited my interest in communications.
Back then, the concern was about harvesting and misusing intimate personal data to manipulate swing voters. Now, in 2025, misinformation doesn’t just spread – it generates itself. When I saw the recent AI-generated video Trump posted on his social media, it felt like a strange full-circle moment in how technology is transforming political communications. The tools have changed, but the playbook remains the same: manipulate public perception, control the narrative, and blur the lines between reality and fiction.
The difference now is that AI doesn’t just amplify misinformation; it creates it from scratch. This makes false narratives more seamless, scalable, and harder to debunk.
The New Era of Misinformation
The Trump example may seem extreme – after all, the video was clearly AI-generated and not intended to be a legitimate campaign video. But that didn’t stop his campaign from hijacking it, using AI’s viral nature to fuel political discourse. And while this instance may have been more spectacle than substance, AI and deepfake technology’s potential for political interference is well established, as expertly demonstrated in Channel 4’s eye-opening Dispatches: Can AI Steal Your Vote?, which showed how AI can sway voters across the political spectrum.
The blurring of reality and fiction has reached new extremes with the rise of AI-generated content. Beyond misleading political ads, entire speeches and videos can now be fabricated in minutes.
Of course, AI is not inherently bad. It’s a powerful tool for generating ideas, analysing trends, and automating content. However, with every technological advancement, bad actors inevitably find ways to exploit these innovations.
The Political-Corporate Connection
What starts in political campaigns rarely stays there. The tactics refined in high-stakes electoral battles inevitably make their way into corporate strategies, especially within tech. Just as Cambridge Analytica’s methods transformed political marketing, today’s AI deepfakes will reshape corporate communications landscapes globally.
We’ve already seen Meta abandon the use of independent fact checkers on Facebook and Instagram, replacing them with an X-style Community Notes content verification system. Ultimately, political discourse around AI manipulation sets the regulatory agenda that tech companies must anticipate and navigate.
In this sense, the reputational stakes for tech companies are particularly high. With misinformation tools this democratised, tech PR professionals face a dual responsibility: advising clients on both defensive strategies against synthetic media attacks and ethical guidelines for their own AI communication tools.
But it’s equally important for brands to pivot from simply safeguarding their reputation to fostering lasting trust through authenticity. This is where the role of PR professionals shifts from managing risks to helping clients embrace communication strategies that reinforce credibility and human connection.
The Trust Imperative
AI is here to stay, and it’s changing PR, media, and political communications at breakneck speed. But if there’s one lesson from The Great Hack that still holds true, it’s that trust is everything. Whether it’s a political campaign, a business leader’s brand, or a company’s public messaging, credibility is the currency that matters most.
As AI-generated content floods the media landscape, brands that proactively build and authenticate trust will gain a significant competitive edge. Thought leadership, real interviews, and personal anecdotes stand out because they can’t be fabricated in the same way. While AI can enhance research and streamline processes, it can’t replicate genuine relationships, nuanced understanding, or the human touch that fosters lasting trust.
For PR professionals, the challenge isn’t just managing risks. It’s becoming guardians of authenticity and ensuring that, in an increasingly synthetic media landscape, the human voice still cuts through the noise.