The New Illusion Problem
Seeing used to mean believing. Not anymore.
AI can now generate convincing videos, images, and audio that appear completely real. Politicians saying things they never said. Celebrities in fake scandals. Even friends or family appearing in fake calls.
These “deepfakes” and hoaxes spread fast, damaging reputations, manipulating opinions, and scamming people out of money. Your best defense isn’t technical; it’s awareness.
By the end of this module, you’ll be able to:
- Spot clues that reveal a deepfake or hoax.
- Check sources before you share or believe.
- Stay calm when media seems shocking or “too real.”
What Deepfakes Look Like in the Wild
- Video Hoaxes
  - Example: a fake video of a world leader declaring war.
  - Signs: awkward blinking, mismatched lip movements, unnatural lighting.
- Audio Hoaxes
  - Example: a cloned voice leaves a voicemail or robocall.
  - Signs: flat tone, odd pauses, robotic “edges” in words.
- Image Hoaxes
  - Example: viral “photos” of disasters, events, or celebrities.
  - Signs: hands with too many fingers, odd reflections, blurry backgrounds.
The 5-Point Deepfake Test
Before trusting or sharing any shocking media, ask:
- Origin: Who posted it first? Was it from a verified source?
- Details: Are eyes, teeth, or hands slightly off?
- Context: Does it appear suddenly with no coverage elsewhere?
- Audio: Do words feel slightly delayed or unnatural?
- Confirmation: Can you find the same story on a trusted outlet?
If any point fails, hold off on sharing until you can confirm the media is real.
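If it helps to see the test written out step by step, here is a minimal sketch of the same checklist expressed in Python. The function name run_deepfake_test and the sample answers are made up purely for illustration; this is not a real detection tool, just the five questions above in code form.

```python
# The 5-Point Deepfake Test as a simple checklist (illustrative only).
CHECKLIST = [
    ("Origin", "Was it first posted by a verified, identifiable source?"),
    ("Details", "Do eyes, teeth, and hands look normal?"),
    ("Context", "Is the story also being covered elsewhere?"),
    ("Audio", "Do the words sound natural and stay in sync?"),
    ("Confirmation", "Can you find the same story on a trusted outlet?"),
]

def run_deepfake_test(answers):
    """answers maps each point name to True (passes) or False (fails)."""
    failures = [name for name, _question in CHECKLIST if not answers.get(name, False)]
    if failures:
        return "Do not share yet. Failed points: " + ", ".join(failures)
    return "All five points pass. It is still wise to pause before sharing."

# Hypothetical example: a clip from an unknown account with slightly off lip sync.
example_answers = {
    "Origin": False,
    "Details": True,
    "Context": True,
    "Audio": False,
    "Confirmation": True,
}
print(run_deepfake_test(example_answers))
# Prints: Do not share yet. Failed points: Origin, Audio
```

The point of the sketch is simply that a single failed check is enough to stop and verify; you never need all five to fail before you pause.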
Other aspects to consider while evaluating:
- Lighting
- Background
- Speed of objects and people
- Lip movement
- Facial expressions
Interactive: Can You Tell the Fake?
Look at these two video clips:
- Clip A: A real news segment.
- Clip B: An AI-generated “announcement.”
Ask yourself:
- Do the facial movements match the voice?
- Do the shadows and lighting look natural?
- Does the source channel look legitimate?
Reveal: Clip B was the deepfake. Notice the lip-sync lag and strange blinking pattern.
Why This Matters to You
- Hoaxes can trick you into scams (fake calls from “family” or “banks”).
- They can manipulate communities (fake political or health information).
- Sharing them without checking can damage your credibility.
Watch & Learn
Video of how journalists and experts detect deepfakes (2 minutes):
Example of a convincing deepfake that fooled millions (1-2 minutes):
Quick Reference Guide
Download and keep:
Safe & Smart AI Promise
Our goal isn’t to scare you — it’s to empower you. With a few simple habits, you can spot hoaxes before they fool you, and protect yourself and others from false information.
Up Next
Module 5: Using AI Tools to Help Protect Personal Data. You’ll learn how scammers use these hoaxes in real life to steal information — and how to block them.
Disclaimer: The information in this lesson is provided for educational purposes only. It is not legal, financial, medical, or professional advice. Results may vary depending on individual use. While we update content regularly, AI tools and risks can change over time. Always use your own judgment and consult a qualified professional if you need specific advice.