
Beware of Deepfakes: A New Age of Deception
Caught off Guard: Steve’s Story
Steve was at his desk when he received a frantic video call from his manager, Bela. She looked stressed, her voice hurried. “I need you to send the confidential client report to this new email right away!” she insisted. Seeing her familiar face and hearing her distinct voice, he didn’t hesitate. He sent the report.
Hours later, Bela walked into his office and asked about the report. Confused, Steve mentioned the video call. Bela’s expression turned to shock: she hadn’t called him. The person he saw on the video wasn’t Bela. It was a deepfake, created by a cybercriminal to trick him.
Steve couldn’t believe how real the fake call seemed. The face, the voice—everything matched his boss perfectly. He had fallen victim to a growing cyber threat where criminals use Artificial Intelligence (AI) to create highly convincing fakes.
What is a Deepfake?
AI can create images, audio, or videos that look real. While this technology has legitimate uses, such as in marketing, filmmaking, and education, it can also be misused by cybercriminals.
A deepfake is an AI-generated fake image, audio, or video designed to deceive others. The term “deepfake” combines “deep learning” (a type of AI) and “fake.” These forgeries can be used to manipulate public opinion, impersonate trusted individuals, or spread misinformation.
Three Types of Deepfakes
- Image Deepfakes: AI-generated photos of non-existent people or altered images of real individuals engaged in activities they never did. These are often used to spread misinformation or manipulate emotions.
- Audio Deepfakes (Voice Cloning): Attackers collect voice samples from podcasts or videos and use AI to replicate someone’s voice. They then use these cloned voices in scams, such as impersonating a company executive or a distressed family member asking for money.
- Video Deepfakes: These involve AI-generated videos where a person’s voice and actions are altered or entirely fabricated. Cybercriminals can create fake live video calls or misleading political statements to deceive people.
How to Detect Deepfakes: Focus on Context
- Trust Your Instincts: If something feels “off,” even if the image or video looks real, be cautious. Deepfake scams often create urgency or fear to push victims into making quick decisions.
- Watch Out for Emotional Manipulation: If a video, message, or call evokes panic, take a step back and verify before acting.
- Verify Through Another Method: If you receive a suspicious video or call, contact the person through a different method, such as their official email or a trusted phone number.
- Establish a Code Word or Phrase: Families and teams can create a shared code phrase that verifies urgent requests before acting on them.
Guest Editor
Dhruti Mehta is an Information Security Analyst at Physicians Health Plan of Northern Indiana and President of WiCyS Northern Indiana. She is passionate about building a diverse cybersecurity workforce and bridging educational and skill gaps in the field. Connect with her on LinkedIn.
About OUCH!
OUCH! is a monthly security awareness newsletter for everyone. It is published by SANS Security Awareness and is distributed under the Creative Commons BY-NC-ND 4.0 license. You are free to share or distribute this newsletter as long as you do not sell or modify it.
Editorial Board: Walter Scrivens, Phil Hoffman, Alan Waggoner, Leslie Ridout, Princess Young.