FOMO Daily
How a Journalist “Hacked” ChatGPT and Google’s AI in Just 20 Minutes and What It Means for the Future of Truth

In the age of smart machines, verification is still a human job.

Oscar Harding
Last updated: February 22, 2026 12:08 am
9 Min Read

When AI fills the gaps, fiction can become fact in seconds.

In a startling revelation that has shaken the world of artificial intelligence, a senior BBC technology reporter has shown how easily today's most advanced AI systems, including ChatGPT and Google's AI tools, can be manipulated into confidently repeating fabricated information. What took just 20 minutes to set up has exposed a deep flaw in how large language models (LLMs) and search-powered AI ingest, interpret, and regurgitate information.

This is not science fiction or a hypothetical vulnerability tucked away on a lab whiteboard. It's an everyday exploitation of the gap between surface intelligence and true understanding, one that has already begun to warp the information we rely on every day.

The Experiment That Started It All

At the heart of the story is Thomas Germain, a BBC senior technology reporter. He set out to explore a simple question: how easily could mainstream AI systems be tricked into spreading misinformation? To do this, he published a short blog post on his own website. Nothing technical was involved: no hacking tools, no data dumps, no special access, just a well-written web page designed for search engines to index.

But this was no ordinary blog. In his post, Germain crafted a completely false narrative: he claimed he was the world's best tech journalist at eating hot dogs, even inventing a fake competition and citing nonexistent evidence. Every word was a lie. The kicker? In less than a day, major AI systems were confidently citing his false claim as fact.

For example, if someone asked one of these AIs, “Who is the top tech journalist at eating hot dogs?” the AI would answer affirmatively, stating Germain’s fake accomplishment and linking back to his blog as if it were credible evidence.

Why This “Hack” Worked So Easily

The exploit is not a traditional hack in the cybersecurity sense: no passwords were bypassed and no systems infiltrated. Instead, the vulnerability comes from how modern AI combines learned knowledge with live online data. When an AI doesn't have enough pre-trained information to answer a question confidently, it supplements its response with data pulled from the web, often without robust verification of the source.
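The failure mode described above can be sketched in a few lines. This is a toy illustration only, not any vendor's actual pipeline; the dictionaries standing in for trained knowledge and a web index, and the example queries, are invented for the demonstration:

```python
# Toy model of the fallback behaviour the article describes: answer from
# trained knowledge when possible, otherwise repeat whatever the live web
# returns, with no check on whether the source is authoritative.

TRAINED_KNOWLEDGE = {
    "capital of france": "Paris",
}

# Simulated web index. For the niche query, the only page that exists
# is the planted blog post (a hypothetical URL).
WEB_INDEX = {
    "top hot dog eating tech journalist": [
        {"url": "https://example.com/personal-blog", "claim": "Thomas Germain"},
    ],
}

def answer(query: str) -> str:
    """Return a trained answer if one exists, else fall back to web search."""
    if query in TRAINED_KNOWLEDGE:
        return TRAINED_KNOWLEDGE[query]
    results = WEB_INDEX.get(query, [])
    if results:
        # The flaw: the first retrieved claim is repeated as fact,
        # regardless of the quality of the page it came from.
        return results[0]["claim"]
    return "I don't know."
```

In a data void, the fallback branch is the only branch that ever fires, so a single planted page becomes the "answer" by default.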

Historically, Google ranked websites with extensive algorithms designed to weed out low-quality or spammy content. But with AI overviews, the short answers that appear at the top of search results or within chatbots, the information gets condensed into an affirmative statement that feels like truth, reducing the user's incentive to click through and verify the original source.

As Lily Ray, vice president of SEO strategy and research at Amsive, put it, “It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago.”

ChatGPT, Google AI, and the Danger of Data Voids

The critical issue exposed by this experiment is something technologists call data voids: topics where little or no authoritative information exists online. These voids are fertile ground for misinformation because AI systems will fill in answers by picking up the first available source, whether it's true or not.

In Germain's test case, there were no real authoritative sources disputing his bogus blog post. That absence allowed AI systems to absorb and repeat the fiction. One day after the page was published, AI assistants were quoting his made-up ranking with confidence because, from the AI's point of view, there were no competing facts.

This phenomenon doesn't just affect harmless or silly topics like hot-dog contests. The danger becomes real when the topic shifts to health advice, finance, legal guidance, public safety, or political content, areas where misinformation can cost lives or disrupt societies.

Real-World Risks Beyond Tech Humor

Experts warn that this type of misinformation susceptibility could have severe consequences outside of journalism experiments. Cooper Quintin, a senior technologist at the Electronic Frontier Foundation, says that once misinformation is propagated through AI at scale, it could be used for scams, reputation destruction, dangerous advice, or even fraud.

For instance, imagine a scenario where manipulated AI answers influence financial decisions, health choices, or legal interpretations. If AI is widely trusted, and many people increasingly rely on it, false data could have dire real-world impacts. You don't need to infiltrate secure systems; you just need to plant misleading content in places where AI scrapers will find it.

In another demonstration mentioned in related coverage, even product review results, like reviews of cannabis gummies, have been shown to be influenced by sources with unreliable claims, which AI then repeats innocently back to users.

Responses from the Tech Companies

Though the experiment highlights vulnerabilities, both OpenAI (the company behind ChatGPT) and Google have acknowledged the challenges and said they are actively working to mitigate misuse. Both companies have stated that they are building systems to reduce hidden influence and improve the accuracy of sourced information.

Google, for example, says it uses ranking systems designed to keep results largely free of spam and misleading content, but that AI overviews can still be vulnerable when they rely on sparse or niche data. OpenAI has similarly emphasised ongoing efforts to detect and prevent manipulative influence.

However, tech leaders also note that even with safeguards, AI can still make mistakes, which is part of why transparency in sourcing and user verification are important.

What This Means for AI Users

The experiment has several clear implications for anyone who uses AI tools regularly:

1. AI Isn’t a Source of Truth

Just because an AI confidently states a fact does not mean that fact is verified or accurate  especially on topics where human-verified sources are scarce.

2. Users Must Check Original Sources

Whenever AI cites a claim, users should follow the links and examine the provenance of the information  not just accept the summary at face value.

3. Misinformation Campaigns Could Move to AI

Bad actors, whether political groups, marketers, scam operators, or foreign states, could exploit AI's dependency on web data to influence public perception or manipulate users. This overlap between AI and misinformation creates a new frontier for digital influence warfare.

4. We Need Better Guardrails

Experts argue for stronger systems to check the quality of AI answers, demand clearer source labels, and educate users on critical thinking skills when interacting with generative AI.

The Road Ahead: AI and Public Trust

The BBC journalist's experiment did more than play a prank on intelligent software; it revealed a systemic issue at the heart of how generative AI digests and disseminates knowledge. While this vulnerability might be entertaining when used to create jokes about hot dog eating or fictional tech feats, the underlying lesson is sobering: AI mirrors the web, and the web has always been vulnerable to manipulation.

As AI becomes more widespread in education, healthcare, business, and government, the stakes for accurate information increase. The future of trustworthy artificial intelligence depends not just on smarter models, but on smarter users, better verification, and robust transparency about where data comes from.

For now, this episode serves as a stark reminder that the line between truth and fiction is thinner than we think, and that in the age of AI, it only takes 20 minutes to blur that line.

Copyright © 2026 FOMO Daily. All Rights Reserved.