Human Misuse Could Increase the Dangers of Artificial Intelligence

OpenAI CEO Sam Altman predicts that artificial general intelligence (AGI), meaning AI that can surpass human performance across a wide range of tasks, may arrive as early as 2027 or 2028. Elon Musk forecasts an even earlier emergence, speculating that true AGI could arise in 2025 or 2026, and has openly voiced concern about the technology's potential dangers, saying he is “losing sleep” over the associated risks. Many experts, however, believe these predictions are premature: the limitations of current AI systems suggest that simply scaling existing models will not yield AGI.

As we look toward 2025, the threats posed by AI are likely to stem not from superintelligence but from human misuse of the technology. Instances of unintentional misuse are already surfacing, most visibly among lawyers who use AI to draft court documents without understanding its tendency to fabricate information. Lawyers in British Columbia and New York, among other jurisdictions, have faced penalties for submitting filings that contained fictitious AI-generated case citations, and a lawyer in Colorado was suspended for referencing case law invented by a chatbot. These errors underscore the need for legal professionals to remain vigilant and informed about the reliability of AI tools.

More concerning than inadvertent misuse is the deliberate abuse of AI capabilities. A striking example occurred in January 2024, when sexually explicit deepfake images of Taylor Swift, generated with an AI tool, circulated widely on social media. Such non-consensual deepfakes are becoming more prevalent, due in part to the ready availability of open-source software for creating them. Governments worldwide are pursuing legislation to combat the problem, but the effectiveness of such measures remains uncertain.

As we advance into 2025, distinguishing reality from AI-generated fabrication will become increasingly difficult. The remarkable quality of AI-generated audio, images, and text will soon extend to video, creating societal risks such as the “liar’s dividend”: individuals in positions of power can dismiss credible evidence of their wrongdoing by labeling it manipulated or fake. This dynamic is already visible in political and legal contexts, where defendants and litigants have argued that incriminating viral video evidence could be a deepfake, complicating accountability.

The commercial landscape is also feeling the impact of AI-driven confusion, as companies exploit public misperceptions to market questionable products under the “AI” label. This poses serious risks when such tools are used in high-stakes settings, including hiring. The AI hiring firm Retorio, for example, came under scrutiny when researchers found that its system for evaluating candidates could be swayed by superficial cues, such as whether an applicant wore glasses, raising ethical concerns about relying on flawed automated assessments in critical decisions.

Numerous sectors, from healthcare to finance, are deploying AI, sometimes with detrimental consequences. A notable incident involved the Dutch tax authority, which used an algorithm to detect childcare benefits fraud and wrongly accused thousands of parents; the ensuing public outcry culminated in the resignation of the Prime Minister and his cabinet.

In summary, the imminent challenges posed by AI in 2025 will stem not from autonomous AI behavior but from how people use it. Critical issues will arise as professionals over-rely on AI systems, misuse them deliberately, or deploy them inappropriately in sensitive contexts. Addressing these risks demands concerted effort from companies, policymakers, and society at large. Without careful attention and strategic mitigation, the potential for significant harm, amplified by the allure of advanced AI capabilities, remains a profound concern for business leaders and cybersecurity experts alike.