Picture this: you connect with someone new online, perhaps through a dating app or social media. The conversation flows naturally, and the person seems so genuine that you agree to move the chat to Telegram or WhatsApp. Things progress quickly; you exchange photos and even video chat. Just as your comfort level increases, they unexpectedly bring up money.
Out of nowhere, they might ask for help paying their Wi-Fi bill, or urge you to invest in a new cryptocurrency. Before you realize it, the truth emerges: your online acquaintance was never real. You have been talking to a scammer hiding behind a sophisticated AI-generated deepfake.
This scenario may sound like science fiction, yet incidents like it are becoming alarmingly common. As generative AI advances, scammers are harnessing these tools to create increasingly realistic fake identities, complete with convincing faces and voices. Experts caution that deepfakes are amplifying a range of online scams, from romance fraud to fake job listings and even tax fraud.
David Maimon, who leads fraud insights at identity verification firm SentiLink and serves as a criminology professor at Georgia State University, has closely monitored the trajectory of AI-driven scams—including those focused on romance—for several years. “There has been a significant uptick in the prevalence of deepfakes, especially when comparing data from 2023 and 2024,” Maimon states.
Previously, deepfake-related incidents were relatively rare, with reports averaging four to five cases a month. That number has now climbed into the hundreds, a shift that marks a new era in online fraud.
Deepfake technologies have already infiltrated a spectrum of online scams. In Hong Kong, a finance professional transferred $25 million to fraudsters after joining what appeared to be an authentic video call with a scammer posing as the firm's CFO. Elsewhere, scammers have publicly posted tutorial videos, claiming the content is intended solely for "pranks and educational purposes." Many of these tutorials demonstrate opening scam calls with AI-generated personas posing as romantic interests.
Traditional deepfakes, pre-rendered videos of public figures, have also surfaced more regularly. In New Zealand, a retiree lost approximately $133,000 to a cryptocurrency scam built around a manipulated video of the country's prime minister promoting an investment opportunity.
Maimon says SentiLink has begun to identify cases where deepfakes were used to open fraudulent bank accounts, secure leases, or carry out tax refund scams. He adds that deepfakes are also appearing in job recruitment, with fake candidates showing up in video interviews, a sign of how sophisticated these schemes have become.
“Any online interaction that allows for face-swapping poses a risk for fraud exploitation,” Maimon warns. Given the evolving nature of deepfake technologies, the implications for cybersecurity are profound, presenting new challenges for business owners striving to protect themselves from increasingly complex threats.