Deepfakes—highly realistic, AI‑generated fake videos, images and audio—are rapidly reshaping how we decide what’s real. Once, a photo or recording felt like solid proof; now, almost anything we see or hear online could be digitally fabricated or manipulated.
This article explores how deepfake technology works, the real harms already happening (from non‑consensual sexual content to financial fraud and political disinformation), and the more subtle damage it does by eroding public trust and fueling the “everything is fake” mentality.
It also looks at emerging laws, detection tools, watermarking and content‑provenance standards, and offers practical steps for individuals, companies and policymakers. In a world of synthetic media, the future of truth won’t depend on what looks convincing on a screen, but on how well we verify sources, protect people’s likenesses and rebuild shared standards for evidence.
Key takeaways
- Deepfakes are AI-generated synthetic media (video, audio, images) that depict events or statements that never happened, often with highly realistic faces and voices, according to Encyclopaedia Britannica.
- The worst harms already happening are non‑consensual sexual deepfakes, fraud and scams, and politically weaponized disinformation—not just sci‑fi “perfect fakes.”
- Research suggests deepfakes may not always directly brainwash people, but they undermine trust in all digital evidence and fuel what scholars call the “liar’s dividend”—the ability to dismiss real evidence as “just AI.”
- Governments are starting to respond with deepfake‑specific laws (e.g., Texas and California in the U.S., China’s deep synthesis rules, the EU AI Act, new national laws in Italy, Spain, Denmark).
- Technical solutions—AI detection, watermarking, and content provenance standards like C2PA—are emerging, but current platform enforcement is uneven.
- The future of truth will depend on media literacy, strong provenance systems, and clear regulation, not on any single “magic” detector.
1. What exactly is a deepfake?
A deepfake is a type of synthetic media created using artificial intelligence—especially deep learning—to generate or manipulate images, video, or audio so that they convincingly depict something that never happened. Encyclopaedia Britannica describes deepfakes as AI‑generated media that portray non‑existent events or people; the term itself blends “deep learning” and “fake.”
Key points:
- Usually involve faces or voices being swapped, cloned, or generated.
- Often built with neural networks such as GANs (Generative Adversarial Networks) or modern diffusion models that learn from large amounts of training data.
- Can be fully fabricated (a politician giving a speech that never happened) or partially altered (changing what someone appears to say in real footage).
Many experts view deepfakes as a subset of synthetic media, which also includes AI‑generated text, music, and images.
2. How do deepfakes work?
At a high level, most deepfake systems do three things:
1. Learn a face or voice
  - A network is trained on many images or audio clips of a specific person.
  - The model learns the patterns in their facial structure, expressions, or vocal features.
2. Generate or transform content
  - Video deepfakes: a model maps one face onto another frame‑by‑frame, or animates a synthetic face based on a target performance.
  - Audio deepfakes: a voice‑cloning model takes text (or another voice) and outputs speech that sounds like the target person.
3. Refine for realism
  - GAN‑style approaches pit a generator (creates fake content) against a discriminator (tries to spot fakes) until the generator becomes very convincing.
  - Modern diffusion models iterate from noise to a realistic image or video, guided by learned patterns.
Because tools are increasingly easy to use, any reasonably motivated non‑expert can now produce deepfake‑style media, sometimes directly in a browser.
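To make the generator‑versus‑discriminator idea above concrete, here is a minimal, illustrative sketch in PyTorch. It trains on random placeholder vectors rather than real face data, and every layer size and hyperparameter is an arbitrary choice for the example, so it demonstrates only the training loop, not an actual face‑swapping model.

```python
# Minimal GAN training loop (illustrative only; it stands in for the
# generator-vs-discriminator dynamic described above, not a real face model).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # arbitrary sizes for the sketch

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # "Real" samples -- in a genuine deepfake pipeline these would be
    # preprocessed face crops or voice clips of the target person.
    real = torch.randn(32, data_dim)
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```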
3. A brief history of deepfakes
The term “deepfake” itself appeared in late 2017, when a Reddit user with the handle deepfakes shared AI‑generated pornographic videos that swapped celebrity faces into adult scenes.
A few milestones:
- 2014–2016: Researchers introduce GANs, and academic teams show early examples of realistic face re‑animation and voice synthesis.
- 2017–2018: Deepfake pornography spreads on Reddit and fringe sites, prompting bans from major platforms and the first wave of public concern.
- Late 2010s: Proof‑of‑concept political deepfakes appear (e.g., Barack Obama “saying” things he never said) in research and satire, demonstrating the technology’s propaganda potential.
- 2020s: Deepfakes move into mainstream culture—used in films for de‑aging actors, reviving historical figures for campaigns, and powering both beneficial and malicious synthetic media applications.
So deepfakes didn’t appear overnight—they’re the result of a decade of progress in generative AI and computer graphics.
4. Deepfakes in the real world: harms we can measure today
4.1 Non‑consensual sexual deepfakes and gendered abuse
The most pervasive use of deepfakes so far is not political, but sexual abuse:
- An industry analysis of 14,678 online deepfake videos found that around 96% were non‑consensual intimate content and that the top deepfake porn sites targeted women in nearly all cases.
- A 2024 survey of more than 16,000 people in 10 countries focused on “deepfake pornography” as a form of non‑consensual synthetic intimate imagery, documenting widespread exposure and normalization.
- Qualitative research highlights serious psychological and social impacts on victims and frames sexual deepfakes as a form of image‑based sexual violence, not just “pranks.”
A recent police‑commissioned survey in the UK found 7% of respondents had been victims of intimate deepfakes, and about a quarter of people expressed either neutrality or acceptance of creating and sharing such content—an alarming gap between harm and social norms.
4.2 Fraud, financial crime, and deepfake scams
Deepfakes are increasingly used in fraud and identity crime, especially through voice cloning and “CEO scams”:
- In 2019, criminals used AI voice mimicry to trick a company into wiring €220,000 after impersonating its CEO in a phone call.
- In 2020, a bank in the UAE reportedly lost $35 million after a director’s voice was convincingly deepfaked to authorize transfers.
- In 2024, a finance worker in Hong Kong was duped into transferring $39 million during a video meeting where the CFO and colleagues were deepfaked in real time.
- In 2025, Italian police froze nearly €1 million after scammers used an AI‑cloned voice of the defence minister to target wealthy business leaders.
Deloitte’s Center for Financial Services projects that generative‑AI‑enabled fraud in U.S. financial institutions could reach roughly $40 billion by 2027, up from about $12.3 billion in 2023. While not all of this will come from deepfakes, voice and video impersonation are central to these threats.
4.3 Elections, geopolitics, and disinformation
Legal scholars Robert Chesney and Danielle Citron warned early that deepfakes could be used to fabricate scandals about officials, incite ethnic or religious violence, or trigger panic with fake emergency messages—posing risks for democracy and national security.
Empirical work on political deepfakes is growing:
- Studies in Social Media + Society have explored how synthetic political videos can increase uncertainty, reduce trust in news, and make audiences more skeptical of authentic footage.
- Experimental work on deepfake depictions of infrastructure failures found increased distrust in government among U.S. participants, though effects varied by country and context.
At the international level, a 2025 UN/ITU report flags deepfakes as a major driver of election interference and financial fraud and calls for global standards in detection and digital verification.
5. Deepfakes and the crisis of truth
5.1 Why “seeing is believing” is breaking
For most of modern history, photo and video evidence carried a strong presumption of authenticity. Deepfakes directly challenge that presumption.
Research by the UK’s Academy of Social Sciences and partner scholars highlights two key problems:
- We are not very good at detecting deepfakes.
People often struggle to distinguish high‑quality fakes from real footage.
- We’re overconfident about our abilities.
Many people believe they can spot deepfakes, even when they can’t.
A recent scoping review in PLOS ONE maps existing experiments on deepfakes’ impact on beliefs, memory, and behavior and concludes that while the technology’s harms are clear, empirical evidence on large‑scale opinion manipulation is still limited and mixed.
So deepfakes don’t simply hypnotize societies into believing anything. Instead, they contribute to a more subtle and dangerous outcome: a pervasive sense that “nothing online can be trusted.”
5.2 The “liar’s dividend”
Chesney and Citron introduced the concept of the “liar’s dividend” to describe how deepfakes can help dishonest actors, not just by faking evidence, but by undermining real evidence.
The logic:
- Once people know deepfakes exist,
- a politician, CEO, or abuser can claim that authentic video or audio is “just a deepfake,” and
- some supporters will find this plausible, especially in a polarized environment.
Follow‑up studies in political science support this concern. Experiments with over 15,000 U.S. adults found that false claims that real scandals are “fake news” or deepfakes can help politicians maintain support, especially for text‑based scandals and, to a lesser extent, video.
Recent reporting shows this dynamic moving from theory into practice, as high‑profile politicians increasingly dismiss embarrassing footage as “probably AI,” even when their own staff confirm it’s real.
5.3 What current research actually says
Based on available research as of late 2025, we can say with reasonable confidence:
- Direct persuasion effects of individual deepfakes are real but not yet proven to be overwhelming at scale; some studies show modest shifts, others show more nuanced results.
- Indirect effects on trust and uncertainty are more clearly documented—deepfakes add to a broader environment where people are less sure what to believe and more open to dismissing inconvenient truths.
In other words: deepfakes may be less about one fake video changing your mind, and more about a rising tide of epistemic chaos.
6. Law, policy, and platform responses
Regulation is playing catch‑up, but a patchwork of laws is emerging:
- U.S. states
- Texas SB 751 (2019) criminalizes creating and sharing deceptive deepfake videos intended to influence an election within 30 days of voting.
- California AB 602 (2019) gives victims of non‑consensual deepfake pornography a civil cause of action; AB 730 targeted political deepfakes during campaigns (and has since sunset).
- Newer California election‑focused deepfake laws have faced constitutional challenges; a federal judge recently struck down one such law on Section 230 and free‑speech grounds.
- European Union & member states
- The EU AI Act imposes transparency obligations: AI systems that generate synthetic content must clearly label outputs as AI‑generated.
- Spain’s 2025 bill mandates strict labeling of AI‑generated content with heavy fines for non‑compliance, aligned with AI Act requirements.
- Italy’s 2025 AI law includes prison sentences for harmful misuse of AI, such as deepfake‑driven fraud or identity abuse.
- Denmark is pursuing a law granting people rights over their own face, voice, and body as a way to fight non‑consensual deepfakes and unauthorized digital doubles.
- China
- China’s Administrative Provisions on Deep Synthesis (effective 2023) require labeling of AI‑generated content, user consent for biometric use, and impose obligations on providers and platforms offering “deep synthesis” services.
At the international level, the 2025 ITU report calls for shared watermarking standards and provenance tools to track AI‑generated video, noting that video already accounts for about 80% of internet traffic.
Platform action remains patchy. A recent investigation found that when a deepfake video containing tamper‑evident C2PA provenance metadata was uploaded to eight major platforms, only YouTube surfaced any label at all, and even that was buried in the description—most platforms stripped the metadata entirely.
7. Technical defenses: detection, watermarking, and provenance
No single technology “solves” deepfakes, but three main strategies are emerging.
7.1 AI detection
Dozens of research papers and products now focus on detecting deepfakes by spotting subtle artifacts in pixels, sound, or motion. Systematic reviews of detection research note rapid progress but also a cat‑and‑mouse dynamic: as generators improve, detectors must constantly retrain.
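As a toy illustration of what “spotting subtle artifacts” can mean in practice, the sketch below measures how much of an image’s energy sits in high spatial frequencies. The feature, cutoff, and random test frame are all invented for the example; real detectors are trained neural networks evaluated against labeled datasets.

```python
# Toy "artifact" check: compares how much of an image's energy sits in
# high spatial frequencies. Real deepfake detectors are trained neural
# networks; this hand-rolled heuristic only illustrates the idea of
# looking for statistical fingerprints in pixels.
import numpy as np

def high_freq_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy outside a low-frequency disc."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    energy = np.abs(spectrum) ** 2
    h, w = gray_image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = min(h, w) // 8  # arbitrary cutoff chosen for the sketch
    low_mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    return float(energy[~low_mask].sum() / energy.sum())

# Example usage with a random "frame"; a real pipeline would load and
# grayscale actual video frames and compare against calibrated baselines.
frame = np.random.rand(256, 256)
print(f"high-frequency energy ratio: {high_freq_ratio(frame):.3f}")
```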
Limitations:
- Detectors can be brittle—models trained on one generation method may fail on new ones.
- Attackers can intentionally optimize against known detectors.
- False positives risk delegitimizing real evidence, which is especially sensitive in journalism and courts.
7.2 Watermarking and cryptographic provenance
Rather than only trying to recognize fakes, many experts argue we need strong provenance for genuine content:
- The Coalition for Content Provenance and Authenticity (C2PA) develops standards to cryptographically sign media at the time of capture or creation, storing verifiable metadata about where and how it was made.
- Companies including major camera manufacturers, news organizations, and AI labs support this model, and some AI systems now embed authenticity markers in outputs—though tools for public verification are still limited.
In principle, this lets you ask, “Can I verify this video came from a trusted camera or newsroom?” rather than just asking whether it looks real.
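To show the general cryptographic idea behind provenance (a simplified sketch, not the real C2PA manifest or signing workflow), the example below hashes a file’s bytes and signs the digest with an Ed25519 key from the widely used cryptography package; any later edit to the bytes makes verification fail.

```python
# Simplified content-signing sketch: illustrates the cryptographic idea
# behind provenance standards such as C2PA, NOT the real C2PA format.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A camera, newsroom, or AI tool would hold this key; only the public
# half is shared with verifiers.
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

media_bytes = b"...raw video bytes would go here..."
digest = hashlib.sha256(media_bytes).digest()
signature = signing_key.sign(digest)  # would travel with the file as metadata

def is_authentic(content: bytes, sig: bytes) -> bool:
    """Verify that the content matches what the trusted source signed."""
    try:
        public_key.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))             # True
print(is_authentic(media_bytes + b"edit", signature))   # False: any change breaks it
```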
In practice, as the Washington Post experiment showed, platforms are not yet consistently preserving or surfacing provenance data, which greatly reduces its protective value.
7.3 Watermarks in generative AI
The EU AI Act and related codes of practice push providers of AI systems to mark AI‑generated content—for example with invisible watermarks in audio or video.
The UN’s ITU report likewise highlights watermarking for AI‑generated video as a key pillar of future global standards.
The challenge is adoption: watermarks must be robust, standardized, and widely honored across platforms and jurisdictions, or they risk becoming just one more inconsistent signal in a noisy environment.
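As a toy demonstration of why robustness is the hard part, the sketch below hides a bit pattern in the least significant bits of pixel values. A mark this naive is wiped out by ordinary re-compression or resizing, whereas production watermarks for AI outputs use much more resilient (often learned) schemes; every value here is made up for the illustration.

```python
# Toy least-significant-bit watermark. Real AI-output watermarks are far
# more robust, but this shows the core problem: almost any lossy
# processing wipes a naive mark out.
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark bits into the lowest bit of the first pixels."""
    flat = image.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)

marked = embed(image, mark)
print(np.array_equal(extract(marked, 128), mark))  # True: survives untouched

# Simulate lossy processing (re-compression adds small pixel noise).
noise = rng.integers(-2, 3, marked.shape)
noisy = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print(np.array_equal(extract(noisy, 128), mark))   # almost certainly False
```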
8. How individuals and organizations can respond
Even without perfect detectors, there are practical steps you can take.
For individuals
- Slow down with shocking content.
Deepfakes often rely on emotional reaction. If a clip seems outrageous, especially about a public figure, treat it as a claim, not a fact.
- Check the source, not just the clip.
- Is it coming from a reputable news outlet, or an anonymous account?
- Can you find the same video covered by trusted organizations?
- Search for corroboration.
Use reverse image search or search engines to see if the event is reported elsewhere. Deepfakes often lack independent verification.
- Be careful what you share.
Many harms—especially sexual deepfakes—are amplified by resharing. If you suspect something is abusive or synthetic, don’t forward it, and report it where possible.
- Protect your own likeness.
You can’t prevent all misuse, but you can:
- Keep intimate images offline as much as possible.
- Lock down privacy settings.
- Monitor for impersonation accounts or obviously faked images of you.
For companies and institutions
- Implement strong verification for high‑risk actions.
Especially around finance:
- Never rely on voice or video alone to authorize large transfers.
- Require out‑of‑band confirmation via known channels (e.g., call a known number, require written approval, or multi‑person sign‑off); a simple policy sketch follows this list.
- Train staff about deepfake risks.
Use real case studies (CEO voice scams, deepfake video conferences) to show that “seeing and hearing” is not enough for security.
- Use detection and provenance tools where appropriate.
- For newsrooms, courts, and law enforcement, consider tools that can verify C2PA signatures or flag likely manipulations.
- For brands, monitor for fake endorsements and impersonation.
- Establish clear policies for synthetic media.
If you use AI‑generated content—say, for training or marketing—be upfront about it, and avoid using synthetic content in contexts where authenticity is critical (e.g., compliance footage, audits, legal evidence).
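Here is a minimal sketch of the “never rely on voice or video alone” rule from the list above, written as a simple policy check: a large transfer goes through only if at least two independent, out‑of‑band confirmation channels have signed off. The threshold, channel names, and function are invented for illustration; real controls belong inside payment and approval systems.

```python
# Illustrative transfer-approval check: large payments require at least two
# independent confirmations outside the call itself. Threshold and channel
# names are invented for this sketch.
HIGH_RISK_THRESHOLD = 10_000  # example threshold, in whatever currency applies
OUT_OF_BAND_CHANNELS = {
    "callback_to_known_number",
    "written_approval",
    "second_approver",
}

def may_execute_transfer(amount: float, confirmations: set[str]) -> bool:
    """Allow a transfer only if enough policy-approved confirmations exist."""
    if amount < HIGH_RISK_THRESHOLD:
        return True
    # Voice or video on the requesting call never counts as a confirmation.
    independent = confirmations & OUT_OF_BAND_CHANNELS
    return len(independent) >= 2

# A convincing "CFO on video" alone is not enough:
print(may_execute_transfer(2_000_000, {"video_call_request"}))                           # False
print(may_execute_transfer(2_000_000, {"callback_to_known_number", "second_approver"}))  # True
```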
9. Possible futures: what might “truth” look like in 2030?
No one can predict this with certainty, but based on current trends, three broad scenarios are plausible:
1. Chaotic mistrust (worst case).
Deepfakes become cheap, ubiquitous, and hyper‑realistic:
- Politicians, corporations, and criminals routinely claim “it’s AI” when caught on camera.
- Courts and the public start doubting almost all digital evidence.
- The liar’s dividend becomes routine political strategy.
2. Provenance‑first ecosystem (optimistic case).
We successfully scale up provenance frameworks, watermarking, and legal requirements:
- Authentic sources (news, courts, official body cams) become thoroughly signed and traceable.
- “Unsigned” media is treated more like anonymous rumor.
- Deepfakes still exist, but they’re easier to ignore in serious decision‑making.
3. Hybrid adaptation (most likely).
We get better tools and better habits, but no perfect solution:
- People become more skeptical—but also more skilled at source checking.
- Deepfakes are used both for harm and for widely accepted positive uses in entertainment, accessibility, and education.
In all scenarios, truth will depend less on “how real it looks” and more on “who stands behind it and with what evidence.” The social, legal, and technical systems around media will matter as much as the pixels themselves.
10. Frequently asked questions (FAQ)
What is a deepfake in simple terms?
A deepfake is an AI‑generated or heavily AI‑edited image, video, or audio clip that makes it look like someone did or said something they never actually did. It’s created using deep learning models trained on real footage of that person’s face or voice.
Are deepfakes always harmful?
No. Deepfake and synthetic media techniques can be used ethically—for example, de‑aging actors in films with consent, localizing educational content by realistically dubbing teachers’ speech, or reviving historical figures for museums and classrooms. The harm comes when they’re used without consent or to deceive.
How can I tell if a video is a deepfake?
There is no foolproof visual trick, especially as quality improves. Warning signs include inconsistent lighting, strange blinking, mismatched earrings/tattoos, or audio slightly out of sync. But the safest approach is not to rely on your eyes alone—check the source, look for coverage by reputable outlets, and use verification tools where available.
Can deepfakes change election outcomes?
We do not yet have definitive evidence that a specific deepfake has flipped a major election. However, research and policy reports show that deepfakes can fuel disinformation, increase uncertainty, and erode trust in legitimate information, especially in tightly contested or polarized contexts.
What laws protect me if someone makes a deepfake of me?
This depends heavily on where you live.
- Some U.S. states (e.g., Texas, California) have specific laws targeting political deepfakes or non‑consensual deepfake pornography.
- The EU AI Act and emerging national laws in countries like Italy, Spain, and Denmark are creating new rights and obligations around AI‑generated content and misuse of likeness. If you’re affected, local legal advice is essential; the global legal landscape is evolving quickly.
11. Recommended books, research, and reports
Below are some key sources you can cite or explore further, grouped by theme; the summaries are based on the referenced material.
Foundational books and overviews
- Nina Schick (2020). Deepfakes: The Coming Infocalypse.
A journalist’s early, accessible account of how synthetic media could trigger a crisis of misinformation (“infocalypse”), especially for politics and public trust.
- Nina Schick (forthcoming/updated editions).
Later editions and talks expand on geopolitical uses of deepfakes and the broader synthetic media ecosystem.
Legal and policy analysis
- Robert Chesney & Danielle Citron (2019). “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review, 107, 1753–1819.
Classic law‑review treatment of deepfakes’ risks and policy options, introducing concepts like truth decay and the liar’s dividend.
- Brennan Center for Justice (2021). “Deepfakes, Elections, and Shrinking the Liar’s Dividend.”
Focuses on elections and proposes strategies to reduce the liar’s dividend in democratic contexts.
- China’s “Administrative Provisions on Deep Synthesis in Internet-Based Information Services” (2023).
One of the first comprehensive regulatory frameworks explicitly targeting synthetic media labeling and platform responsibility.
- Analyses of the EU AI Act’s deepfake rules (e.g., Reality Defender explainer, Imatag, EU official summaries).
Empirical research on impact and trust
- Hancock et al. (2024). “Can deepfakes manipulate us? Assessing the evidence via a scoping review.” PLOS ONE.
Reviews experimental studies on how deepfakes affect beliefs, memories, and behaviors; finds concerning potential but limited, mixed empirical evidence so far.
- Vaccari & Chadwick (2020). “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News.” Social Media + Society.
Examines how people evaluate political deepfakes versus genuine videos, emphasizing uncertainty and trust rather than simple persuasion.
- Academy of Social Sciences (ACSS). “Trust in Evidence in an Era of Deepfakes” project and related reports.
Explores how awareness of deepfakes affects trust in user‑generated audio‑visual evidence.
- Studies on deepfakes and distrust in government, e.g., deepfake infrastructure‑failure experiments showing increased distrust among U.S. participants.
Sexual deepfakes and gender‑based abuse
- Industry and academic work on non‑consensual intimate deepfakes (NCID / NSII):
- Industry analysis showing ~96% of deepfake videos as non‑consensual intimate content, mostly targeting women.
- Large‑N survey (16,000+ respondents in 10 countries) on deepfake pornography as a form of non‑consensual synthetic intimate imagery.
- Research framing sexual deepfakes as a form of image‑based sexual abuse and sexual violence.
Fraud, security, and technical countermeasures
- Deloitte Center for Financial Services. “Deepfake banking and AI fraud risk on the rise” (2024/2025 analyses).
Provides projections of AI‑enabled fraud losses (up to $40B in the U.S. by 2027) and discusses financial deepfake risks.
- Recent legal‑technology articles on deepfake detection and provenance, including the role of C2PA.
- UN/ITU 2025 report on deepfakes and AI for Good.
Calls for global standards on watermarking, detection, and content verification in response to deepfakes’ role in misinformation and fraud.