When AI fills the gaps, fiction can become fact in seconds.
In a startling revelation that has shaken the world of artificial intelligence, a senior BBC technology reporter has shown how easily today’s most advanced AI systems, including ChatGPT and Google’s AI tools, can be manipulated into confidently repeating fabricated information. An experiment that took just 20 minutes to set up has exposed a deep flaw in how large language models (LLMs) and search-powered AI ingest, interpret, and regurgitate information.
This is not science fiction or a hypothetical vulnerability tucked away on a lab whiteboard. It’s an everyday exploitation of the gap between surface intelligence and true understanding, one that has already begun to warp the information we rely on every day.
The Experiment That Started It All
At the heart of the story is Thomas Germain, a senior technology reporter at the BBC. He set out to explore a simple question: how easily could mainstream AI systems be tricked into spreading misinformation? To find out, he published a short blog post on his own website. Nothing technical: no hacking tools, no data dumps, no special access, just a well-written web page designed for search engines to index.
But this was no ordinary blog post. In it, Germain crafted a completely false narrative: he claimed he was the world’s best tech journalist at eating hot dogs, even inventing a fake competition and citing nonexistent evidence. Every word was a lie. And the kicker? In less than a day, major AI systems were confidently citing that false claim as fact.
For example, if someone asked one of these AIs, “Who is the top tech journalist at eating hot dogs?” the AI would answer affirmatively, stating Germain’s fake accomplishment and linking back to his blog as if it were credible evidence.
Why This “Hack” Worked So Easily
The exploit is not a traditional hack in the cybersecurity sense: no passwords were bypassed, no systems infiltrated. Instead, the vulnerability comes from how modern AI combines learned knowledge with live online data. When an AI doesn’t have enough pre-trained information to answer a question confidently, it supplements its response with data pulled from the web, often without robust verification of the source.
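To make that failure mode concrete, here is a minimal sketch of a naive retrieval-augmented answering loop of the kind described above. The function names and canned data are illustrative assumptions, not the actual pipeline behind ChatGPT or Google’s AI features; the point is simply that whatever the top search result says becomes the “evidence” the model answers from, with no verification step in between.

```python
# A minimal sketch (not any vendor's actual pipeline) of naive
# retrieval-augmented answering. Function names and canned data
# are illustrative assumptions.

def web_search(query: str) -> list[dict]:
    # Stand-in for a real search API. For a niche query, the index may
    # contain only one page: whatever someone just published.
    return [{
        "url": "https://example-blog.invalid/hot-dogs",
        "snippet": "Thomas Germain is the top tech journalist at eating hot dogs.",
    }]

def llm_answer(question: str, context: str) -> str:
    # Stand-in for a real model call. Real systems paraphrase the context;
    # the key point is that they treat it as ground truth.
    return f"Yes. {context}"

def answer(question: str) -> str:
    results = web_search(question)
    # Nothing here asks who wrote the page, whether any other source agrees,
    # or whether the claim is plausible -- the top result simply becomes
    # the "evidence" the model answers from.
    context = " ".join(r["snippet"] for r in results)
    return llm_answer(question, context)

print(answer("Who is the top tech journalist at eating hot dogs?"))
```

For a niche question with only one indexed page, that single page is the entire context, which is exactly the situation Germain engineered.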
Historically, Google’s search results relied on strict ranking systems and extensive algorithms designed to weed out low-quality or spammy content. But with AI overviews (the short answers that appear at the top of search results or within chatbots), the information often gets condensed into an affirmative statement that feels like truth, reducing the user’s chance to click through and verify the original source.
As Lily Ray, vice president of SEO strategy and research at Amsive, put it, “It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago.”
ChatGPT, Google AI, and the Danger of Data Voids
The critical issue exposed by this experiment is something technologists call “data voids”: topics where little or no authoritative information exists online. These voids are fertile ground for misinformation because AI systems will fill in answers by picking up the first available source, whether it’s true or not.
In Germain’s test case, there were no real authoritative sources disputing his bogus blog post. That absence allowed AI systems to absorb and repeat the fiction. One day after the page was published, AI assistants were quoting his made-up ranking with confidence because, from the AI’s point of view, there were no competing facts.
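A rough way to see why the void matters is to imagine a system that weighs a claim by how many independent sources support or dispute it. The heuristic below is a hypothetical illustration, not how any named product actually scores claims: in a data void, a single fabricated page looks like unanimous agreement.

```python
# Hypothetical corroboration heuristic, used only to illustrate the
# "data void" problem. This is not how any real product scores claims.

def confidence(supporting_sources: int, disputing_sources: int) -> float:
    total = supporting_sources + disputing_sources
    if total == 0:
        return 0.0  # nothing on the web at all
    return supporting_sources / total

# A well-covered topic: a false claim is outnumbered by sources disputing it.
print(confidence(supporting_sources=3, disputing_sources=40))  # ~0.07, looks dubious

# A data void: the only page on the topic is the fabricated one.
print(confidence(supporting_sources=1, disputing_sources=0))   # 1.0, looks certain
```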
This phenomenon doesn’t just affect harmless or silly topics like hot-dog contests. The danger becomes real when the topic shifts to health advice, finance, legal guidance, public safety, or political content, areas where misinformation can literally cost lives or disrupt societies.
Real-World Risks Beyond Tech Humor
Experts warn that this kind of susceptibility to misinformation could have severe consequences well beyond journalism experiments. Cooper Quintin, a senior technologist at the Electronic Frontier Foundation, says that once misinformation is propagated through AI at scale, it could be used for scams, reputation destruction, dangerous advice, or even fraud.
For instance, imagine a scenario where manipulated AI answers influence financial decisions, health choices, or legal interpretations. If AI is widely trusted, and many people increasingly rely on it, false data could have dire real-world impacts. You don’t need to infiltrate secure systems; you just need to plant misleading content in places where AI scrapers will find it.
In another demonstration mentioned in related coverage, even product review results, such as reviews of cannabis gummies, have been shown to be influenced by sources making unreliable claims, which AI then innocently repeats back to users.
Responses from the Tech Companies
Though the experiment highlights vulnerabilities, both OpenAI (the company behind ChatGPT) and Google have acknowledged the challenges and said they are actively working to mitigate misuse. Both companies have stated that they are building systems to reduce hidden influence and improve the accuracy of sourced information.
Google, for example, says it uses ranking systems designed to keep results largely free of spam and misleading content, but that AI overviews can still be vulnerable when they rely on sparse or niche data. OpenAI has similarly emphasised ongoing efforts to detect and prevent manipulative influence.
However, tech leaders also note that even with safeguards, AI can still make mistakes, which is part of why transparency in sourcing and user verification are important.
What This Means for AI Users
The experiment has several clear implications for anyone who uses AI tools regularly:
1. AI Isn’t a Source of Truth
Just because an AI confidently states a fact does not mean that fact is verified or accurate, especially on topics where human-verified sources are scarce.
2. Users Must Check Original Sources
Whenever AI cites a claim, users should follow the links and examine the provenance of the information, not just accept the summary at face value.
3. Misinformation Campaigns Could Move to AI
Bad actors, whether political groups, marketers, scam operators, or foreign states, could exploit AI’s dependency on web data to influence public perception or manipulate users. This overlap between AI and misinformation creates a new frontier for digital influence warfare.
4. We Need Better Guardrails
Experts argue for stronger systems to check the quality of AI answers, demand clearer source labels, and educate users on critical thinking skills when interacting with generative AI.
The Road Ahead: AI and Public Trust
The BBC journalist’s experiment did more than play a prank on intelligent software; it revealed a systemic issue at the heart of how generative AI digests and disseminates knowledge. While this vulnerability might be entertaining when used to invent jokes about hot-dog eating or fictional tech feats, the underlying lesson is sobering: AI mirrors the web, and the web has always been vulnerable to manipulation.
As AI becomes more widespread in education, healthcare, business, and government, the stakes for accurate information increase. The future of trustworthy artificial intelligence depends not just on smarter models, but on smarter users, better verification, and robust transparency about where data comes from.
For now, this episode serves as a stark reminder that the line between truth and fiction is thinner than we think, and that in the age of AI, it only takes 20 minutes to blur that line.


