Deepfake voice scams are more sophisticated than ever: How to keep your family safe

The world of online scams is constantly evolving, and a particularly insidious new threat is emerging: deepfake voice scams. These scams use artificial intelligence to clone the voices of victims' loved ones, producing eerily realistic audio designed to trick people into handing over money or sensitive information.
Recently, Sharon Brightwell, a Florida mother, experienced this firsthand. She received a panicked call from what she believed to be her daughter, April, who claimed to have been in a car accident that injured a pregnant woman. But the voice wasn't April's; it was a deepfake that convincingly mimicked her. The fake April spun a tale of reckless driving, a confiscated phone (explaining the unfamiliar number), and an urgent need for bail money.
Following the initial call, a man posing as a public defender contacted Sharon, demanding $15,000 to secure April's release. Desperate and believing her daughter was in danger, Sharon complied, gathering the cash and handing it over to an Uber courier. Only after a subsequent call demanding a further $30,000 for the "injured pregnant woman" did Sharon's grandson realize the family had fallen victim to a sophisticated scam. The emotional toll on the family was immense, highlighting the devastating consequences of these advanced scams.
"To tell you the trauma that my mom and son went through that day makes me nauseous and has made me lose more faith in humanity," April wrote on a GoFundMe page. "Evil is too nice a word for the kind of people that can do this."
This case isn't unique. Deepfake scams are increasingly prevalent, targeting not just seniors but people across all demographics. This year, for example, the voice of Secretary of State Marco Rubio was faked using AI in an attempt to contact foreign and domestic officials. Even President Trump's chief of staff, Susie Wiles, was impersonated in a similar deepfake voice scheme.
The surge in these crimes is directly linked to advances in AI voice cloning technology. According to Matthew Wright, PhD, a professor and the chair of the Department of Cybersecurity at Rochester Institute of Technology, the technology is not only improving but also becoming far easier to access.
"This is increasing because the technology is getting easier to use," explains Wright. "A few years ago, this technology existed but required significant technical skill. Now, with services like ElevenLab and others, it's remarkably simple to create convincing deepfakes."
ElevenLabs is a leading software company providing realistic voice cloning for legitimate applications such as customer service. However, a March report from Consumer Reports raised concerns about lax safeguards against non-consensual voice cloning at ElevenLabs and similar companies like Lovo and Speechify. The report noted that these companies rely on a simple checkbox where users attest they have authorization, leaving a significant opening for misuse.
Behind these sophisticated AI-powered scams are organized criminal networks, often operating on an international scale. Wright explains, "A lot of it is organized crime. There are reports of organizations kidnapping individuals in one country and transporting them to another, like Malaysia, holding them in secure facilities and essentially forcing them into creating these deepfakes."
These criminal enterprises often begin by sourcing voice samples from the target's social media presence. As little as 30 seconds of audio can be enough to generate a convincing deepfake voice.
Keeping Yourself Safe from Deepfake Voice Scams
While public figures like Rubio can have their voices cloned from publicly available content, private individuals are often targeted through their social media activity. Wright emphasizes the importance of privacy settings on social media platforms.
"Even videos of you with friends and family, seemingly harmless moments, can provide the material needed to create a deepfake," says Wright. "Ensure your settings are private. This also applies to your friends and family who might have posted videos featuring you. Scammers don't need a lot of audio; 30 seconds is often enough."
The deceptive nature of deepfakes makes detection difficult, even for close friends and family. Wright warns against trusting your own ability to distinguish a real voice from a deepfake.
"Studies show that people struggle to detect deepfakes, even with short audio clips. This is true even when the voice claims to be a close friend or family member," he explains.
Heightened skepticism is therefore crucial when dealing with calls from unknown numbers. Wright advises adopting standard scam-prevention habits, in particular being wary of calls that manufacture urgency or emotional distress to pressure you into acting immediately.
Any call demanding immediate payment should raise significant red flags. If money must change hands, Wright emphasizes using secure payment methods such as bank transactions, since banks are generally adept at identifying and flagging fraudulent activity.
"Banks are often trained to recognize these types of scams and can help you identify them," says Wright. "They can provide an extra layer of protection."
Establishing a secret password or code word with friends and family, particularly those who may be more vulnerable to these scams (such as older adults or people who are less tech-savvy), is another effective preventative measure. The code word should be something that can't be gleaned from social media and is known only to the individuals involved.
"Having a secret code word or phrase allows for a quick verification of identity. If someone calls claiming to be your loved one and asks for money or information, asking for this code word instantly reveals whether the caller is legitimate or a scammer," Wright suggests.
Have a story to share about a scam or security breach that impacted you? Tell us about it. Email [email protected] with the subject line "Safety Net" or use this form. Someone from Mashable will get in touch.