Discontent about the integration of artificial intelligence into online platforms is emerging from an unexpected source. A disgruntled individual has expressed frustration regarding a cybercrime forum’s plans to enhance its features with generative AI. In an anonymous online comment, the user stated, “No one is asking for this—we want you to improve the site, stop charging for new features.” This sentiment reflects a growing unease among various factions of cybercriminals regarding the influence of AI within their circles.
Unlike the typical outcry from regular internet users, these complaints come from a community that was previously enthusiastic about AI's utility in hacking. Recent research led by Ben Collier, a senior lecturer at the University of Edinburgh, indicates that low-level cybercriminals are increasingly voicing disapproval of generative AI in underground forums. The study highlights a notable shift within these communities from initial optimism to skepticism about AI's implications.
Analyzing nearly 98,000 discussions on cybercrime forums since the advent of ChatGPT in late 2022, the researchers uncovered widespread grievances. Users lamented the influx of superficial AI-generated content, such as generic cybersecurity explanations, and raised concerns about the quality of posts, sparking fears that AI-driven summaries might reduce community engagement.
Traditionally, these cybercrime forums, many of them Russian in origin, have served as informal marketplaces for illicit activity, where stolen data is traded and hacking services are advertised. The significance of community dynamics within these platforms cannot be overlooked. Participants build reputations for reliability, and forum owners encourage active engagement through competitions. The influx of AI-generated content disrupts the social fabric on which these spaces depend.
Collier emphasizes the communal aspect of these forums, stating that the presence of AI-generated posts fundamentally alters the social interactions that members value. Many forum users feel that the rise of AI erodes their perceived expertise, undermining their identity as skilled individuals. This ambivalence toward AI is further illuminated by user posts on Hack Forums, where members vocalize their irritation at AI-generated content that lacks genuine personal input. Comments like “Stop posting AI shit” underline the frustration that has become common in these discussions.
Moreover, the desire for authentic human interaction is evident among users, as illustrated by one post expressing, “If I wanted to talk to an AI chatbot, there are many websites for that. I come here for human interaction.” This underscores the community’s yearning for personal connection within a space increasingly threatened by automated content.
Since the emergence of ChatGPT, interest in the intersection of AI and cybercrime has escalated. Both seasoned hackers and novices are seeking ways to harness AI in their schemes. While advanced fraudsters utilize sophisticated AI techniques for impersonation and social engineering, widespread focus has been directed toward generative AI’s capability to craft malicious code and identify security vulnerabilities.
Viewed through the MITRE ATT&CK framework, the tactics these criminals rely on, such as initial access, persistence, and privilege escalation, are likely to evolve as they fold AI into their workflows. The growing frustration within these forums speaks to a larger debate about the role of AI in a landscape where quality, authenticity, and community engagement are increasingly at risk. Whether that frustration slows further AI integration remains to be seen.