It was a Sunday afternoon in December 2016 when Edgar Maddison Welch, a twenty-eight-year-old father from Salisbury, North Carolina, loaded three guns into his car — an AR-15 assault rifle, a revolver, and a shotgun — and drove four hours to Washington D.C.
He had a mission. He had read, online, that a popular pizza restaurant called Comet Ping Pong was hiding child trafficking victims in a secret basement. He was going to rescue them.
He walked into the restaurant, fired his rifle three times, and terrorised dozens of families eating lunch. He found no victims. There was no basement. There had never been any evidence — because the story was entirely, completely, catastrophically false.
Edgar Welch is not a monster from a thriller novel. He is a real man who went to prison for four years because of a lie that spread on the internet like a brushfire in a drought. The story — known as “Pizzagate” — had been concocted from a wild misreading of hacked emails, amplified by fringe websites, turbo-charged by social media algorithms, and inhaled as gospel truth by thousands of people who genuinely believed they were seeing through a conspiracy that mainstream media refused to report.
Nobody made him believe it at gunpoint. No one had to. The internet did the work quietly, invisibly, with the cheerful efficiency of a recommendation engine just doing its job.
That was 2016. Today, in the age of artificial intelligence, the problem has not merely continued — it has mutated into something far more terrifying. The same machinery that convinced Welch that a pizza restaurant was a trafficking hub can now generate photorealistic images, clone voices, and produce convincing news articles in seconds, at scale, at almost zero cost. We are no longer just fighting rumours. We are fighting reality itself.
A Rumour With Legs, a Lie With Wings
To understand why Pizzagate worked — why any piece of misinformation works — you have to understand something uncomfortable about the human brain: it was not designed for the modern information environment. It was designed for a world where information came slowly, from people you knew and trusted, and where the cost of being wrong was immediate and physical. A bad berry. A misread predator. Survival.
In that world, fast pattern recognition was life-saving. Today, it is a vulnerability that bad actors have learned to exploit with startling precision.
“We do not see things as they are. We see things as we are — and we share things as we feel.”
— Adapted from Anaïs Nin, frequently misattributed. Fitting, really.

Psychologists call the two modes of human thinking System 1 and System 2. System 1 is fast, instinctive, emotional — it is the brain on autopilot. System 2 is slow, deliberate, analytical — it is the brain doing actual work. The uncomfortable truth, documented by Nobel laureate Daniel Kahneman, is that we spend most of our lives in System 1. And misinformation is almost always engineered to keep us there.
The Pizzagate story was not subtly misleading. It was obviously, risibly absurd — which is exactly why it worked on people who were already emotionally primed to believe it. Fear of child trafficking. Distrust of elites. Partisan fury. These were not logical entry points. They were emotional ones. And once you enter through an emotional door, logic rarely finds the key.
(Vosoughi, Roy & Aral, MIT — Science 2018)
The WhatsApp Murders: India’s Child-Kidnapper Panic
If Pizzagate showed what a single lie could do in America, India in 2018 showed what a thousand lies could do when handed a platform built for intimacy, trust, and speed. The platform was WhatsApp. The lie was about child kidnappers. And the body count was real.
July 2018. The village of Rainpada in Maharashtra’s Dhule district. A group of five men — members of a nomadic community — stopped near the village to rest during their travels. Someone in the village had recently received a WhatsApp forward: a voice message, in the local language, warning that gangs of child kidnappers were roaming the area, abducting children for their organs.
The five men were strangers. They were resting. Within minutes, a mob of several hundred people had gathered. The men were dragged out, beaten, and killed — all five of them — while bystanders filmed on their phones.
None of them had abducted anyone. There were no missing children. The warning message had been circulating for weeks, in multiple states, in multiple languages, and it was entirely fabricated.
This was not an isolated tragedy. Between 2017 and 2019, at least 30 people were killed across India in mob lynchings directly linked to child-kidnapping rumours spread via WhatsApp. Victims were attacked in Jharkhand, Telangana, Tripura, Karnataka, Assam, and Maharashtra. Many were daily labourers, migrant workers, and nomads — people whose unfamiliarity in a locality made them instantly suspicious to communities already primed by weeks of viral fear-mongering.
Women were particularly targeted. In multiple incidents, women who offered sweets or money to children — acts of ordinary kindness — were dragged out by crowds and beaten under the suspicion that they were luring children. Some survived. Several did not. In one case in Assam, a young woman and her aunt were beaten to death by a mob after someone mistook them for outsiders behaving suspiciously near a school.
Unlike Facebook or Twitter, WhatsApp is end-to-end encrypted — meaning the platform itself cannot read the content of messages, so false information cannot be identified, tracked, or removed before it causes harm. Messages spread through intimate social networks: family groups, neighbourhood circles, religious communities. When your uncle or your neighbour sends you a warning about child kidnappers, it carries far more credibility than a stranger’s tweet. The trusted messenger bypasses the scepticism the content might otherwise trigger.
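To see why this matters in practice, here is a minimal sketch of the end-to-end idea in Python, using the cryptography library. It is a toy model, not WhatsApp’s actual protocol (WhatsApp uses the Signal protocol, with key exchange and ratcheting); the only point it illustrates is that the relay in the middle never holds a readable copy of the message, so there is nothing for a platform-side content filter or fact-checker to inspect.

```python
# Toy model of end-to-end encryption: the relay server only ever sees ciphertext.
# NOT WhatsApp's real protocol (that is the Signal protocol, with key exchange
# and ratcheting) - this only illustrates why the platform cannot inspect or
# fact-check message content in transit. Requires `pip install cryptography`.
from cryptography.fernet import Fernet

# Assume sender and recipient already share this key; the server never has it.
shared_key = Fernet.generate_key()

def sender_encrypt(plaintext: str) -> bytes:
    return Fernet(shared_key).encrypt(plaintext.encode())

def server_relay(ciphertext: bytes) -> bytes:
    # All the platform can observe: an opaque blob plus metadata (who, when).
    # There is nothing here for a content filter to read.
    return ciphertext

def recipient_decrypt(ciphertext: bytes) -> str:
    return Fernet(shared_key).decrypt(ciphertext).decode()

msg = server_relay(sender_encrypt("Beware! Kidnappers seen near the school"))
print(recipient_decrypt(msg))  # only the endpoints ever see the words
```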
There was another dimension that made the India crisis unlike anything seen in the West: voice messages. A significant proportion of WhatsApp misinformation in rural India spread not as text — which requires literacy — but as audio clips recorded in local dialects, delivered in the urgent, breathless tone of a concerned community member passing on a genuine warning. The emotional authenticity of a human voice, combined with the perceived credibility of a trusted contact, proved catastrophically effective.
WhatsApp — owned by Meta — was so alarmed by the scale of the violence that it took an unprecedented step: it introduced message-forwarding limits specifically for India in 2018, capping forwards at five chats. The limit was later extended globally. It was the first time a major social platform had structurally altered its product in direct response to real-world deaths caused by misinformation on its platform. The Indian government, too, issued formal advisories asking people to verify information before forwarding it.
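Why does a forwarding cap help at all? A rough back-of-the-envelope model makes the intuition visible: if every recipient forwards a message once, to the maximum number of chats allowed, potential reach grows roughly like the fan-out raised to the number of hops. The numbers below are purely illustrative (an uncapped fan-out of 20 chats versus the capped 5), not WhatsApp’s real traffic.

```python
# Back-of-the-envelope model of why a forwarding cap slows viral spread.
# Purely illustrative: assumes every recipient forwards once, to the maximum
# number of chats allowed, and ignores group sizes and overlapping audiences.

def potential_reach(fanout: int, hops: int) -> int:
    """Upper bound on chats reached after `hops` rounds of forwarding."""
    return sum(fanout ** h for h in range(1, hops + 1))

for fanout in (20, 5):  # hypothetical uncapped vs. capped fan-out
    print(f"fan-out {fanout:>2}: ~{potential_reach(fanout, hops=4):,} chats after 4 hops")

# fan-out 20: ~168,420 chats after 4 hops
# fan-out  5: ~780 chats after 4 hops
```

Even in this crude model, the cap cuts four-hop reach by more than two orders of magnitude — friction of exactly the kind the product change was designed to introduce.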
“The message arrived from a trusted contact, in our own language, warning of a danger every parent feared. Nobody stopped to ask who had sent it first.”
— Survivor account, documented by India’s BOOM Fact-Check, 2018

The child-kidnapper rumours were not the product of a single malicious actor. They grew organically from a confluence of genuine parental anxiety, communal distrust simmering from older tensions, and the architecture of a platform designed to make sharing frictionless and fast. By the time the government and platforms scrambled to respond, the rumours had mutated into dozens of regional variations and were circulating in at least seven languages. The correction never caught the lie. It never does.
When the Lie Learned to Draw
In March 2023, a photograph began circulating online. It showed Pope Francis in a gleaming white puffer jacket, looking like a particularly devout fashion influencer. The image was shared millions of times. News anchors referenced it. Fashion journalists wrote about it. Tens of thousands of people believed, with complete sincerity, that the Pope had simply decided to wear a very nice coat.
The photograph was generated entirely by artificial intelligence. It had never happened. The Pope had never worn the jacket. There was no jacket.
That same month, AI-generated images of Donald Trump being arrested by police officers spread across social media. In May came fake images of an explosion near the Pentagon — images convincing enough to briefly rattle the US stock market before they were debunked. No explosion had occurred. The Pentagon was fine. The market shudder was real.
A deepfake is any AI-generated or AI-manipulated image, audio, or video in which a person is depicted doing or saying something they never did. The term comes from “deep learning” — the AI technique used to create them. In January 2024, a deepfake of President Joe Biden’s voice was used in robocalls urging New Hampshire voters to stay home ahead of the state’s primary. The technology is no longer the exclusive province of sophisticated state actors. Anyone with a decent laptop can make one.
This is the new frontier of misinformation — and it is not a distant dystopia. It is the information environment of right now. AI image generators like Midjourney and DALL-E can produce photorealistic images of events that never happened in seconds. Voice-cloning tools can replicate a person’s voice from a three-second audio sample. Video deepfake software, once requiring specialist knowledge, is now available as a mobile app.
The problem is not just that fake content is getting better at looking real. The problem is something researchers call the liar’s dividend: even when a real piece of footage is authentic, bad actors can now credibly claim it is fake. The very existence of deepfakes has given anyone accused of anything a new line of defence. “That video? Could be AI.” Reality itself is becoming deniable.
The Brain Bugs That Make Us Believe
Here is a question worth sitting with: are you immune to misinformation? Most people reading this article believe they are above-average at spotting fake news. Statistically, most of us must be wrong about that. But it is a feeling almost all of us share — and that overconfidence is itself one of the cognitive biases that makes us most vulnerable.
The Cognitive Biases at Play
Confirmation bias is the granddaddy of them all — the tendency to seek, interpret, and remember information that confirms what we already believe. If you distrust a politician, a story portraying them as corrupt feels more credible, requires less evidence, and sticks more firmly in memory. Psychologist Peter Wason identified this in 1960. Six decades later, we have not evolved out of it — we have just built social media algorithms that know how to feed it.
Then there is the illusory truth effect, which is perhaps even more alarming. Research by Hasher, Goldstein, and Toppino in 1977 — and replicated many times since — demonstrates that repeated exposure to a claim makes it feel more true, regardless of its actual accuracy. The brain mistakes familiarity for credibility. This is why misinformation that circulates for weeks across social feeds becomes increasingly difficult to dislodge even after it is factually corrected. The correction is new. The falsehood feels old and settled.
Telling someone a claim is false can, in some circumstances, make them believe it more strongly. This “backfire effect” is especially potent when the false claim is tied to personal or group identity. Fact-checkers have learned, painfully, that leading with the myth in order to debunk it can simply reinforce it. The corrected truth needs to arrive first, clean and clear — not as an afterthought after the false claim has been re-aired.
The most unsettling finding of all comes from Yale Law professor Dan Kahan, who studied what he called “identity-protective cognition.” His research found that highly intelligent, analytically capable people are not less susceptible to misinformation when it threatens their group identity — they are often more susceptible, because they are better at constructing sophisticated rationalisations for beliefs they were always going to reach anyway. Intelligence, in other words, can be weaponised against truth when the emotional stakes are high enough.
Who Builds the Pipeline?
Misinformation does not emerge from a void. It moves through an ecosystem with identifiable parts: creators (state actors, troll farms, partisan agents, attention-seeking individuals), amplifiers (social platforms and their algorithms), vectors (influencers, WhatsApp groups, community networks), and audiences primed by exactly the cognitive biases we have been discussing.
The engine running underneath all of it is simple: attention is money. Platforms profit from engagement. Emotionally arousing content — outrage, fear, moral disgust — generates the most engagement. And misinformation, almost by design, is extraordinarily emotionally arousing. A landmark 2017 study by Brady et al. found that each moral-emotional word in a tweet increases its retweet rate by about 20 per cent. Bad actors have known this intuitively for years. Now they know it empirically too.
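Because that effect is multiplicative, it compounds quickly. Here is a minimal sketch, taking the roughly 20 per cent per-word figure at face value (in Brady et al. the effect is an average that varies by topic, not a law):

```python
# How a ~20% boost per moral-emotional word compounds (a simplified reading of
# Brady et al., 2017 - the real effect is an average and varies by topic).
BOOST_PER_WORD = 1.20

for n_words in range(5):
    multiplier = BOOST_PER_WORD ** n_words
    print(f"{n_words} moral-emotional words -> ~{multiplier:.2f}x expected retweets")

# 0 words -> ~1.00x, 3 words -> ~1.73x, 4 words -> ~2.07x
```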
Shoshana Zuboff, the Harvard Business School professor emerita who coined the term “surveillance capitalism,” puts it bleakly: the platforms are not neutral pipes. They are prediction engines that harvest human behaviour as raw material. They are not incentivised to make you well-informed. They are incentivised to keep you clicking. In that economy, an outrageous lie and an outrageous truth perform about the same — but lies are cheaper to manufacture.
“False news is more novel than true news — and humans are wired to share novelty. The algorithm did not create this instinct. It merely gave it rocket fuel.”
— Adapted from Vosoughi, Roy & Aral, Science (2018)

The Politicians Who Learned to Love the Lie
Misinformation is not merely a problem that politicians are trying to solve. For a significant number of them, around the world, it is a tool they have deliberately picked up, sharpened, and used. The weaponisation of false information for political gain is not new — propaganda is as old as power. What is new is the scale, the speed, and the surgical precision with which it can now be deployed.
The Industrial Manufacture of Political Lies
In 2018, the world learned about Cambridge Analytica — a British political consulting firm that had harvested the data of 87 million Facebook users without their knowledge, built detailed psychological profiles of voters, and used them to micro-target politically polarising content during the Brexit referendum and the 2016 US presidential election. The company boasted of having worked in elections across 44 countries. Its methods, a former employee told the UK Parliament, were designed not to persuade people toward a candidate but to depress the opposition’s turnout and inflame existing cultural anxieties to the point of paralysis.
Russia’s Internet Research Agency — a St. Petersburg-based troll farm operating under Kremlin direction — took this logic further. During the 2016 US election, it ran simultaneous fake accounts supporting Black Lives Matter and anti-BLM groups, orchestrated fake rallies for opposing sides, and flooded social media with contradictory, outrage-generating content on immigration, religion, and race. The goal was not to elect one candidate. It was to convince American voters that their country was locked in an irreconcilable internal war. Division itself was the product.
In the Philippines, Rodrigo Duterte’s 2016 presidential campaign pioneered what researchers now call “computational propaganda” — a paid network of social media trolls operating at industrial scale. Journalist Maria Ressa, who investigated and documented the operation and subsequently won the Nobel Peace Prize, described an ecosystem in which Facebook algorithms rewarded outrage and harassment so richly that political operatives found it cheaper and more effective than traditional advertising. Ressa herself became a target: she received an average of 90 hate messages per hour over a sustained period.
Hungary’s Viktor Orbán has taken a different route — not viral manipulation but structural media capture. His government and allied oligarchs now control the overwhelming majority of Hungarian media outlets. The result is a population that is not bombarded with competing lies, but simply deprived of alternative truths. Brazil, Turkey, and Venezuela have pursued variations of the same model. As Oxford Internet Institute researcher Samantha Bradshaw has noted, the world’s most sophisticated political misinformation operations no longer need to be loud. They just need to be everywhere.
India: Political parties across the spectrum operate organised WhatsApp broadcast networks — sometimes called “WhatsApp Universities” — pushing partisan content to millions of subscribers. India’s 2019 general election saw coordinated sharing of fabricated quotes, doctored videos, and AI-altered images at a scale researchers had never previously documented in a democratic election.
USA: Pew Research found that in the run-up to the 2020 election, the most widely shared misinformation on Facebook came from a small number of highly partisan pages with audiences in the millions — not from fringe actors, but from mainstream political media ecosystems.
Myanmar: Facebook’s own internal research, leaked in 2021, acknowledged that the platform had played a “role” in amplifying anti-Rohingya hate speech that contributed to ethnic cleansing. The military junta used fake accounts to spread fabricated stories about Rohingya violence for years before the 2017 massacres.
Brazil: A 2022 study found that Jair Bolsonaro’s political movement had the most sophisticated WhatsApp disinformation infrastructure ever documented in Latin America — hundreds of thousands of automated message broadcasts reaching tens of millions of voters with fabricated content weekly.
Old Lies, Fresh Wounds: The Weapon of Recycled History
There is a species of misinformation more insidious than the outright fabrication — and far harder to debunk, because it is built from something real. It is the deliberate resurrection of old footage, old incidents, and old wounds, stripped of their original context and redeployed at a moment chosen for maximum political effect.
Call it temporal misinformation: the weaponisation of time itself.
The mechanics are straightforward. A video of communal violence from five years ago is clipped, cropped, stripped of its original caption, and recirculated — sometimes with a new caption placing it in a different city, a different community, a different decade. The footage is real. The event happened. But the meaning being assigned to it — the who, the when, the why — is entirely fabricated. And because fact-checkers must now locate not just the claim but the original source in order to debunk it, this kind of misinformation is extraordinarily difficult to counter quickly.
During the violence that engulfed parts of Delhi in early 2020, videos began circulating on WhatsApp and Twitter showing graphic scenes of people being attacked. Some showed stone-pelting. Some showed buildings burning. Fact-checkers at Alt News and BOOM, two of India’s most diligent verification organisations, traced many of the most widely shared clips to incidents that had occurred years earlier — some from communal riots in other states, some from footage originating outside India entirely.
But by the time each clip was traced, verified, and debunked, it had already been viewed millions of times. The debunking reached thousands. The original lie, dressed in the borrowed clothes of a real atrocity, had already done its work — inflaming communities, deepening distrust, and giving partisans on both sides fresh fuel for a fire that needed no additional kindling.
India has become a particularly vivid case study in temporal misinformation, partly because of the depth and complexity of its communal fault lines — between Hindu and Muslim communities, between castes, between linguistic groups — and partly because of the sheer penetration of WhatsApp into rural and semi-urban India, where verification resources are scarce and trust in official media is low. Videos from the Muzaffarnagar riots of 2013 have been recirculated multiple times in subsequent years, each time positioned as “new” evidence of ongoing violence. Footage from communal incidents in one state has appeared — reframed — as evidence of atrocities in an entirely different state months or years later.
The pattern is consistent: the recycled content always arrives in proximity to an election, a politically sensitive verdict, or a moment of existing communal tension. It is never accidental. Researchers at the Digital Forensic Research Lab and India’s DataLEADS have documented the clear correlation between election cycles and spikes in the circulation of old communal content. The timing is the tell.
“The most dangerous misinformation is not the outright lie. It is the true thing, shown at the wrong time, to the wrong people, for the wrong reasons.”
— Claire Wardle, Information Disorder Researcher, Brown University

The United States has its own version of this phenomenon. In the years following the killing of George Floyd, researchers documented repeated instances of old police-related footage — some from entirely unrelated incidents, some from other countries — being circulated as “new” evidence in ongoing racial justice debates. The footage was not always false. Its framing was. The emotional weight of a real incident, transplanted into a new moment, served to escalate rather than illuminate.
Political actors understand something that most ordinary users do not: the date on a piece of content is irrelevant to its emotional impact. A shocking image of violence produces identical levels of outrage whether it happened yesterday or a decade ago. The brain does not timestamp its emotional responses. It simply feels. And politicians who traffic in temporal misinformation are farming that feeling — harvesting grief, fear, and fury from wounds that may have partially healed, splitting them open again precisely when it is most electorally useful.
Always check the date. Before reacting to any video or image of a violent or provocative incident, run it through a reverse image or video search (the InVID/WeVerify browser extension is built for this). Search the location visible in the footage. Check whether the clothing, vehicles, or signage are consistent with the claimed time and place. If a video is circulating with high urgency around a political flashpoint, treat that urgency as a reason to slow your sharing down, not to speed it up.
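For readers curious what a reverse image search is actually doing under the hood, here is a minimal sketch of the core idea — perceptual hashing — using Python’s imagehash and Pillow libraries. The filenames are hypothetical, and real services such as Google Images, TinEye, and InVID compare against billions of indexed images rather than one local file.

```python
# Minimal sketch of the idea behind reverse-image matching: perceptual hashes
# change very little when an image is re-cropped, recompressed, or re-captioned,
# so near-duplicates of old footage can be flagged. Filenames are hypothetical.
# Requires `pip install imagehash pillow`.
from PIL import Image
import imagehash

def looks_like_same_image(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """True if the two images are probable near-duplicates."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold  # Hamming distance between hashes

# e.g. a frame grabbed from a "new" viral clip vs. a still from 2013 archive footage
print(looks_like_same_image("viral_frame.jpg", "archive_2013_still.jpg"))
```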
What makes temporal misinformation particularly resistant to correction is the “kernel of truth” defence. When fact-checkers label a piece of recycled footage as misleading, supporters of the narrative can truthfully say: “But the violence in that video really happened.” It did. Somewhere. To someone. The debunker is then forced into the uncomfortable position of appearing to minimise a real atrocity in order to contest its false framing — a rhetorical trap that skilled political propagandists have learned to set deliberately.
The antidote, as with all misinformation, begins with a question: Why am I seeing this now? Who benefits from this content circulating at this particular moment? What emotion is it designed to produce in me, and what action does that emotion make me more likely to take? These are not the questions of a cynic. They are the questions of a citizen.
Can We Actually Fight Back?
Here is the hopeful part — and it is genuinely hopeful, if incomplete. We are not helpless. The same cognitive science that explains why misinformation works also points toward practical defences. And researchers have spent the last decade stress-testing them.
The most promising approach is called prebunking — and it draws from an unlikely analogy: vaccination. Social psychologist Sander van der Linden at Cambridge University has spent years demonstrating that you can build psychological resistance to misinformation the same way a vaccine builds immunity to a virus: by exposing people to a weakened form of the manipulation technique, along with a refutation, before they encounter the full-strength version.
His team created a browser game called Bad News in which players take on the role of a fake news creator, learning the tricks of the trade — emotional manipulation, false experts, conspiracy framing — from the inside. Studies showed that playing the game made people significantly better at spotting those same techniques in the wild. Google subsequently partnered with van der Linden’s team to turn these insights into YouTube pre-roll advertisements reaching millions of users across multiple countries.
For the rest of us — in the thick of daily life, scrolling at 11pm — the most practical tool is a four-step habit called SIFT, developed by digital literacy educator Mike Caulfield:
Stop. Before you react, share, or even finish reading — pause. Misinformation is engineered to trigger a fast emotional response. The pause is the whole game.
Investigate the source. Don’t evaluate the article itself first. Google the outlet. Who are they? What is their track record? What do others say about their credibility?
Find better coverage. Is any credible, independent outlet reporting this? If a remarkable claim is only appearing in one place — especially a fringe or partisan one — treat that as a loud red flag.
Trace claims to the original context. Many fake stories use real images in false contexts. Reverse-image search a photo. Read the study actually being cited. Does the original source actually say what the article claims?
Specifically for the AI Era: What to Look For
The SIFT method was designed before AI image generation went mainstream. In 2025, it needs a companion checklist specifically for synthetic media. Here is what experts currently recommend, with a small illustrative check after the list:
Hands and fingers: AI still struggles with human hands — look for extra digits, fused fingers, or unnatural positioning.
Text in images: AI-generated images often contain garbled, nonsensical text on signs, labels, or backgrounds.
Jewellery and accessories: Earrings that don’t match, glasses frames that morph mid-arc, asymmetrical details.
Background weirdness: Repeated patterns, objects that don’t obey physics, floors or walls that seem to warp.
Reverse image search: Google Images and TinEye can locate where an image has appeared before — if an “exclusive” photo shows up on a stock image site, something is wrong.
AI detection tools: Platforms like Hive Moderation, Illuminarty, and Google’s SynthID watermarking (for Gemini outputs) can help identify AI-generated content — though no tool is infallible.
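As a very rough first triage step, not a verdict, you can also inspect an image’s metadata. The sketch below uses Python’s Pillow library and a hypothetical filename: genuine camera photos usually carry EXIF fields such as the camera model and capture time, while many AI-generated or screenshot-laundered images carry none. Metadata is trivial to strip or forge, so treat the result as one weak signal alongside the visual checks and detection tools listed above.

```python
# Rough first-pass triage, not a verdict: a missing or odd EXIF block is only
# a weak hint, since metadata is trivial to strip or forge.
# Requires `pip install pillow`; the filename is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def summarise_metadata(path: str) -> dict:
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

meta = summarise_metadata("suspicious_photo.jpg")
if not meta:
    print("No EXIF metadata at all - worth a closer look, but proof of nothing.")
else:
    for field in ("Model", "DateTime", "Software"):
        print(field, ":", meta.get(field, "<absent>"))
```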
Edgar Welch Got Out of Prison in 2020
He served his sentence. He is a free man. The restaurant he shot up — Comet Ping Pong — is still open. Its owner, James Alefantis, spent years receiving death threats over a story that was entirely invented. His staff were harassed, doxxed, and terrified. Some of them left the industry entirely. Real damage, flowing from pure fiction.
In a 2016 interview before his sentencing, Welch said he had realised the story was false once he was inside the restaurant. He had found no victims, no basement. He told the reporter he was “truly sorry.” He also said he had “done the best I could with the information I had.”
That sentence deserves to sit with us for a moment. The best I could with the information I had. This is what the misinformation ecosystem produces — not monsters, but ordinary people who have been epistemically poisoned: given contaminated information, stripped of the tools to evaluate it, and then handed a grievance that feels righteous and urgent.
The antidote is not cynicism — the belief that nothing is true, that everything is fake, that all media lies equally. That path leads to its own paralysis and is, itself, a goal of state-sponsored disinformation campaigns. The antidote is what researchers call calibrated scepticism: the disciplined, curious habit of asking not “is this false?” but “how do I know? What is the evidence? Who benefits from me believing this?”
It takes practice. It takes a pause. In a world designed to make you react before you think, that pause — that single, deliberate second before you share — is a small act of resistance against a very large machine.
Take it.
