Business leaders often dismiss the possibility of becoming victims of cyberattacks until they are caught in the crosshairs themselves, at which point the consequences can be dire. Recently, I watched three companies fall prey to cybercrimes built on social engineering, which prompted a stark realization about the threats facing executives. A pressing question surfaced: if substantial organizations could be compromised, what would that mean for my private equity firm, which handles millions of dollars in transactions every month?
With 25 years of experience in IT and cybersecurity, including patented innovations and a track record of building and successfully exiting a Managed Service Provider (MSP) that generated $65 million in annual revenue, I felt compelled to devise a solution. I recognized that cultivating awareness at every level of corporate communication was vital: a system that could visually alert end users to threats could greatly reduce the risk posed by deepfake-enabled social engineering attacks.
This initiative led me to draft patents, build software, and develop a business model. What took me by surprise, however, was the staggering frequency of cyberattacks occurring daily across corporate America.
In recent months, discussions with numerous large companies have revealed a troubling trend. Chief Technology Officers (CTOs) and Chief Information Security Officers (CISOs) have quietly admitted to suffering breaches. The pattern is clear: social engineering has become the predominant attack vector, and artificial intelligence is significantly amplifying these campaigns, transforming them from blatant scams into remarkably credible impersonations.
For instance, in one recent incident, a company director in Dubai was misled by a cloned voice into orchestrating the transfer of $35 million. Similarly, another company disclosed that AI-generated video calls impersonating its CFO nearly led to fraudulent transfers totaling $25 million. These incidents signal a critical shift in the cybersecurity landscape, one that many organizations, and most individuals, still struggle to comprehend.
Traditional cybersecurity measures have emphasized safeguarding infrastructure through firewalls, intrusion detection systems, and endpoint protection. While these defenses remain necessary, they are no longer sufficient. The most adept attackers do not bother breaching technical barriers; they go straight to the humans, impersonating executives and manipulating finance departments into approving urgent wire transfers.
The rise of generative AI has dramatically expanded the scale and sophistication of such attacks. Where social engineering once required skilled operators to place convincing calls or craft compelling emails, AI now allows the rapid generation of thousands of personalized, contextually relevant messages, whether by email, phone, or video, that can appear entirely legitimate.
The speed of this evolution is astonishing. A Midwestern company remarked that its phishing simulation exercises from just 18 months ago look laughably crude next to the sophisticated attacks now being launched against it. The telltale grammatical errors and awkward phrasing that once gave these messages away have disappeared, replaced by polished communications that mimic the targeted executive's style with uncanny precision.
This crisis is particularly dangerous because of its stealth. Unlike ransomware, which announces itself with encrypted files and ransom notes, successful social engineering often leaves no immediate trace until the financial losses surface. And because organizations fear reputational damage, they rarely disclose these incidents unless legally obligated, creating a veil of silence around the issue and leaving victims, in effect, "robbed blind."
The financial repercussions of such cyberattacks are staggering. According to the FBI’s Internet Crime Complaint Center, Business Email Compromise (BEC) incidents have led to billions in reported losses, but industry experts estimate that actual costs could be 5 to 10 times higher when accounting for unreported cases. The magnitude of this threat demands an urgent reevaluation of cybersecurity strategies within companies.
What steps should companies take? Technology is a critical part of the answer: the system we are developing uses AI to detect AI, analyzing communication patterns to flag anomalies and warn users in time to act (a simplified sketch of the idea appears below). Regulatory bodies also have a significant role to play; compliance auditors and cyber insurance firms can steer organizations toward technologies that foster shared awareness and non-repudiation. Finally, existing disclosure mandates rarely capture the real nature and impact of social engineering attacks; stronger reporting requirements would expose the scale of the problem and encourage appropriate corporate responses.
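To make the idea of communication-pattern anomaly detection concrete, here is a minimal, purely illustrative sketch in Python. Everything in it, including the KNOWN_EXECUTIVES directory, the URGENCY_MARKERS list, and the risk_flags function, is a hypothetical stand-in rather than our product's implementation; a real system would weigh many more signals, including voice and video analysis, and learn each organization's normal patterns over time.

```python
"""Toy heuristic for flagging suspicious messages.

Illustrative sketch only; names, markers, and thresholds are
hypothetical, not a production implementation.
"""

from dataclasses import dataclass

# Hypothetical directory of known executive identities.
KNOWN_EXECUTIVES = {
    "Jane Smith": "jane.smith@example.com",
}

# Phrases typical of business email compromise (BEC) lures.
URGENCY_MARKERS = ("urgent", "wire transfer", "immediately", "confidential")


@dataclass
class Message:
    display_name: str
    sender_address: str
    body: str


def risk_flags(msg: Message) -> list[str]:
    """Return human-readable warnings for anomalous messages."""
    flags = []

    # Flag display-name impersonation: the name matches a known
    # executive but the sending address does not.
    expected = KNOWN_EXECUTIVES.get(msg.display_name)
    if expected and msg.sender_address.lower() != expected:
        flags.append(
            f"Display name '{msg.display_name}' does not match "
            f"the known address {expected}"
        )

    # Flag high-pressure payment language.
    lowered = msg.body.lower()
    hits = [m for m in URGENCY_MARKERS if m in lowered]
    if hits:
        flags.append(f"High-pressure language detected: {', '.join(hits)}")

    return flags


if __name__ == "__main__":
    suspect = Message(
        display_name="Jane Smith",
        sender_address="jane.smith@examp1e.com",  # look-alike domain
        body="Urgent: process this wire transfer immediately.",
    )
    for warning in risk_flags(suspect):
        print("WARNING:", warning)
```

Even heuristics this simple catch the two hallmarks of a BEC lure, identity mismatch and manufactured urgency; the promise of an AI-driven approach is in generalizing such checks across email, voice, and video rather than relying on a fixed list of rules.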
As artificial intelligence advances, the line between genuine and synthetic communication will blur further. Attackers exploit human psychology, leveraging our inherent trust in familiar voices and faces. This crisis is real, growing, and largely hidden from public view. In today's cybersecurity ecosystem, the weakest point is not the firewall but human psychology, and strengthening that link demands tools, training, and a level of vigilance unprecedented in the field.