A new study has found that false information and conspiracy theories about extreme weather events are spreading faster on major social media platforms than official, life-saving alerts from emergency agencies.
The research, published by the Centre for Countering Digital Hate (CCDH), accuses platforms like Meta, X (formerly Twitter), and YouTube of failing to adequately moderate dangerous content during climate disasters. This failure, the report claims, has undermined disaster response, eroded public trust, and put lives at risk.
Disinformation Eclipses Official Warnings
CCDH researchers analyzed 300 viral posts during recent natural disasters—including the LA wildfires and Hurricanes Helene and Milton—and found that climate conspiracy content frequently outperformed factual updates. In some cases, they say, conspiracy posts were algorithmically promoted and monetized while emergency alerts were buried or ignored.
According to the report, false narratives were allowed to flourish without adequate checks:
- 98% of misleading posts on Meta had no fact-checks or warning labels
- 99% on X were similarly unflagged
- 100% on YouTube lacked any form of moderation or correction
"This is not just negligence—it's a business model," said CCDH CEO Imran Ahmed. "Social media companies are profiting from outrage, fear, and misinformation while people are dying or desperately trying to find reliable information during climate catastrophes."
Geoengineering and 'Government Lasers': The New Normal?
Among the most pervasive falsehoods were claims that hurricanes were deliberately engineered as weapons, and that wildfires were started by "government lasers." These conspiracy theories were widely shared, often by high-profile accounts, and frequently drowned out messages from emergency services.
Following Hurricanes Helene and Milton in late 2024, and the LA wildfires in January 2025, misleading posts quickly went viral. Some falsely alleged that migrants were being prioritized for aid over citizens. Others impersonated disaster relief programs, luring survivors into scams.
"These weren't fringe voices in dark corners of the internet," said Sam Bright, UK deputy editor at DeSmog, a climate misinformation watchdog. "Many of the worst offenders had blue checkmarks and massive followings."
Verified Accounts Fuel the Misinformation Surge
The study found that most of the viral disinformation came from verified accounts—those granted greater visibility and, in many cases, financial incentives through monetization tools.
- On X, 88% of misleading posts were from verified users
- On YouTube, the figure was 73%
- On Meta (Facebook and Instagram), it was 64%
A standout example came during the January 2025 LA wildfires, when controversial American radio host Alex Jones posted a series of false claims—alleging food confiscation, government cover-ups, and "globalist weather manipulation." According to the report, Jones' posts received more views than the combined reach of FEMA, the LA Times, and 10 major emergency and news agencies during the same period.
Eroding Trust, Impeding Response
CCDH argues that the unchecked spread of disinformation is doing real harm. In the chaotic hours and days after a disaster, false claims can divert attention from urgent safety instructions or cause people to mistrust official aid.
"The spread of climate conspiracies online isn't accidental," Ahmed added. "It's hardwired into platforms that reward outrage and confusion. When the world is burning or flooding, people need facts—not fantasy."
What the Platforms Are (Not) Doing
Despite repeated pledges by tech companies to improve content moderation, the CCDH says current systems remain inadequate, particularly in the face of rapid-onset disasters.
Unlike public safety messages, which often come from local agencies with limited digital reach, conspiracy content is frequently pushed by influencers with large followings, often monetized through advertising or platform rewards.
Calls are now mounting for greater platform accountability, including legal requirements for fact-checking in emergencies and penalties for algorithmically amplifying harmful falsehoods.
"We need regulation," said Bright. "These platforms cannot continue to profit while allowing lies to endanger lives."