Welcome to 2025, where asking your CFO to turn sideways or stick out their tongue during a video call isn’t weird — it’s just good risk mitigation. Perry Carpenter of KnowBe4 explores the increasingly absurd cat-and-mouse game between deepfake creators and defenders, offering practical (if occasionally awkward) verification techniques to prevent your organization from joining the ranks of companies that have been scammed out of millions.
Deepfake technology is advancing rapidly. Whether it’s a viral video of a celeb, a CEO’s urgent message or a pundit’s bizarrely controversial statement, the possibility that some piece of media could be a deepfake is no longer science fiction — it’s something very real. Deepfakes are increasingly being weaponized for misinformation, online scams, identity fraud and cyber attacks. By 2027, deepfake-related losses are expected to reach $40 billion, up from $12 billion in 2023.
Defining deepfakes
A deepfake is a type of synthetic media — AI-generated imagery, video or audio — but deepfakes come in a variety of forms:
- Face swapping: Replacing one person’s face with another in a video or image
- Lip-syncing: Changing someone’s mouth movements to match a different audio track
- Voice cloning: Replicating someone’s voice by capturing their tone, accent, cadence and vocal mannerisms
- Full-body: Creating an entirely new video of someone performing an action they never did
- Live: A real-time, rather than pre-recorded, digitized version of a real person on a video call, broadcast or online stream
Identify visual oddities
One of the most effective ways to identify deepfake videos is to closely examine visual cues in the media, as this can reveal tell-tale signs that something is off.
Commonly identifiable visual oddities include:
- Unnatural face movements, such as irregular eye and eyebrow movement, odd blinking patterns, lip-sync issues, stiff facial expressions or missing features such as the tongue.
- Inconsistent backgrounds where objects appear or disappear or where other sudden changes happen in the background.
- Blurring, including unexplainable imperfections and pixelations around the edges of a face and body.
- Side adhesion, where a video loses facial integrity when the subject turns their head to the side.
- Masking problems, such as when a hand or another object passes in front of or occludes the face; the underlying image may reveal a mask.
Search for hidden clues in audio
Modern AI technology only needs about 3 to 5 seconds of audio to clone someone’s voice. It’s no surprise that threat actors are increasingly employing voice cloning in social engineering and extortion scams.
The key to detecting audio deepfakes is listening carefully. AI-generated voices often have a flatter delivery and awkward pauses, and they lack the emotional nuance of human speech.
Other times, the vocal delivery may be more fluid than the known vocal patterns of the person whose speech was cloned. If the real individual has unique vocal quirks, such as an accent, audible breaths, nasal sounds or mouth clicks, their absence can also help identify a potential deepfake.
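The “flatter delivery” cue above can be illustrated with a toy heuristic. The sketch below (an illustration only, not a reliable detector) uses frame-level zero-crossing rate as a rough proxy for pitch and compares its spread between a monotone signal and one whose pitch varies; the signal names and thresholds are invented for the example.

```python
import numpy as np

def zcr_per_frame(signal, frame_len=1024):
    """Zero-crossing rate per frame -- a crude proxy for pitch."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)

def pitch_variation(signal):
    """Std-dev of frame-level ZCR: low values suggest a flat delivery."""
    return float(np.std(zcr_per_frame(signal)))

sr = 16_000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
# Monotone "voice": constant 150 Hz tone.
flat = np.sin(2 * np.pi * 150 * t)
# "Lively" voice: the same tone with its pitch modulated over time.
lively = np.sin(2 * np.pi * (150 + 40 * np.sin(2 * np.pi * 2 * t)) * t)

# The monotone clip shows far less pitch variation.
print(pitch_variation(flat) < pitch_variation(lively))
```

Real detectors use far richer features (prosody, spectral artifacts, learned embeddings), but the principle is the same: unusually low variation in delivery is a signal worth a closer listen.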
Perform real-time tests
A finance worker transferred $25 million to scammers after a video call with a deepfake CFO. Threat actors are using deepfakes in job interviews to infiltrate organizations. To detect such real-time deepfakes, there are some tests people can run.
A side profile test, for example, can help identify a deepfake by asking the person to move their head from side to side. Deepfakes often struggle to maintain integrity at sharp angles.
Similarly, a hand-interaction test, where you ask the individual to place a hand or a finger in front of their face, can often reveal a deepfake, as facial occlusion is a common weakness of deepfakes.
Other real-time tests focus on the mouth. Ask the subject to stick out their tongue; if you’re looking at a deepfake, the result likely won’t look quite right. Also watch whether mouth movements stay synchronized with the audio. A code-word test, where you ask the subject for a secret word or the answer to a question that only the real person would know, can also help you sniff out a real-time deepfake.
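The code-word test works best when the agreed word is never stored in plain text, so a leaked helpdesk database can’t hand it to an attacker. A minimal sketch, assuming a passphrase agreed offline in advance (the passphrase and iteration count here are illustrative):

```python
import hashlib
import hmac
import os

def enroll(passphrase: str) -> tuple[bytes, bytes]:
    """Store only a salted hash of the agreed code word, never the word itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify(passphrase: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time check of a spoken code word against the stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("blue-heron-42")        # agreed in person beforehand
print(verify("blue-heron-42", salt, digest))  # True for the real caller
print(verify("bluejay-42", salt, digest))     # False for an impostor
```

Note that a code word defeats a deepfake of the face and voice, but not an attacker who has separately compromised the real person’s accounts, so it should complement rather than replace other checks.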
Ask yourself some critical questions
An obvious question is “Is this a deepfake?” but that’s not always going to give you a straightforward answer. Instead, pose the following questions:

- Why am I seeing this? Why is this content visible to me, and what is the context?
- Who created it? Can I identify the author, the source or the creator?
- What is the intent? Is someone trying to sell me something, influence my opinions or get my personal info?
- What emotions are being manipulated?
- Is the content consistent with known behavior or facts? Does the content align with the person’s known behavior, location or timeline?
Deepfake defense strategies for organizations
In addition to training and “spot the deepfake” exercises, several AI-based tools can be used for deepfake detection, including DuckDuckGoose, Sensity AI, Deepware, Resemble AI, TrueMedia.org and FakeCatcher. Using methods like digital watermarking or Content Authenticity Initiative (CAI) standards to authenticate, verify and trace digital content back to its original source can also help.
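At its simplest, tracing content back to a source means checking a file against a registry of known-good originals. The sketch below is a deliberately simplified stand-in for full provenance standards (which embed cryptographically signed manifests in the media itself); the registry contents are invented for illustration.

```python
import hashlib

# Hypothetical registry: SHA-256 digests of media files published
# through official channels, recorded at release time.
OFFICIAL_MEDIA = {
    hashlib.sha256(b"official CEO statement video bytes").hexdigest(),
}

def is_registered(media_bytes: bytes) -> bool:
    """True only if the file is byte-identical to a registered original."""
    return hashlib.sha256(media_bytes).hexdigest() in OFFICIAL_MEDIA

print(is_registered(b"official CEO statement video bytes"))  # True
print(is_registered(b"tampered bytes"))                      # False
```

A plain hash breaks on any re-encoding or legitimate edit, which is exactly why standards like CAI attach signed provenance metadata instead; the registry approach above only works for exact-copy verification.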
Organizations can also adopt practical safeguards, such as implementing robust verification protocols for sensitive communications that go beyond visual confirmation. Incorporating multi-factor authentication and additional verification steps, such as secure passphrases or secondary approval channels, can further safeguard against deepfake manipulation and unauthorized access.
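The secondary-approval idea can be made concrete as policy logic: above a threshold, a transfer request is released only after confirmations arrive over independent out-of-band channels, so a convincing video call alone is never sufficient. The threshold, channel names and data model below are assumptions for the sketch, not a prescribed implementation.

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000                  # assumed policy limit, in dollars
REQUIRED_CHANNELS = {"callback", "ticket"}   # hypothetical out-of-band channels

@dataclass
class TransferRequest:
    amount: float
    requester: str
    approvals: set = field(default_factory=set)

def record_approval(req: TransferRequest, channel: str) -> None:
    """Log a confirmation received over a named out-of-band channel."""
    if channel in REQUIRED_CHANNELS:
        req.approvals.add(channel)

def may_execute(req: TransferRequest) -> bool:
    """Above the threshold, video confirmation alone is never sufficient."""
    if req.amount < APPROVAL_THRESHOLD:
        return True
    return REQUIRED_CHANNELS <= req.approvals

req = TransferRequest(amount=25_000_000, requester="cfo-video-call")
print(may_execute(req))            # False: no out-of-band confirmation yet
record_approval(req, "callback")   # e.g. phone call to a known-good number
record_approval(req, "ticket")     # e.g. approval in the finance system
print(may_execute(req))            # True: both channels confirmed
```

The design point is that the channels must be independent of the channel the request arrived on: a callback to a number the attacker supplied defeats the purpose.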