Recent findings by cybersecurity researcher Jeremiah Fowler reveal alarming content associated with the GenNomis website, which left an unsecured database of disturbing AI-generated imagery exposed online before the site was taken down. The material identified included not only questionable adult images but also apparent child sexual abuse material (CSAM) produced with face-swapping technology. Fowler said several files appeared to be photographs of real people that had been manipulated into explicit content, and he expressed concern that these images were generated by swapping the faces of real individuals onto sexually explicit AI art.
Before its shutdown, the GenNomis site allowed users to generate explicit adult imagery, which was prominently featured on its homepage. This included a dedicated section showcasing AI “models,” where sexualized depictions of women, ranging from photorealistic to animated styles, were prevalent. Users also had access to an “NSFW” gallery and a marketplace for sharing, and potentially selling, AI-generated media. The site’s promotional claim that users could “generate unrestricted” images raises serious questions about its oversight of harmful content.
While GenNomis published guidelines prohibiting “explicit violence” and “hate speech,” questions remain about how those policies were enforced, particularly with respect to AI-generated CSAM. Some users complained that moderation filters interfered even with non-sexual prompts, suggesting inconsistent enforcement. Fowler argued that rigorous safeguards were likely absent: if he could open this material simply by following URL links, the site was not implementing adequate access controls.
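Fowler’s observation that files opened from bare URLs points to a familiar misconfiguration: storage objects served without any authentication. As a rough illustration of the kind of audit a site operator could run against their own infrastructure (this is a minimal sketch, not Fowler’s actual methodology; the URLs are hypothetical placeholders and the Python `requests` library is assumed), the script below flags any object that answers an unauthenticated request with HTTP 200:

```python
# Minimal exposure check: probe object URLs without credentials and
# report any that are publicly readable. URLs below are hypothetical.
import requests

SAMPLE_URLS = [
    "https://storage.example.com/bucket/images/0001.png",  # placeholder
    "https://storage.example.com/bucket/images/0002.png",  # placeholder
]

def check_public_access(urls, timeout=5):
    """Return the subset of urls that serve content with no authentication."""
    exposed = []
    for url in urls:
        try:
            # HEAD avoids downloading the object body; fall back to a
            # streamed GET for servers that reject HEAD requests.
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            if resp.status_code == 405:
                resp = requests.get(url, timeout=timeout, stream=True)
                resp.close()
        except requests.RequestException:
            continue  # unreachable hosts are not an exposure concern
        if resp.status_code == 200:
            exposed.append(url)  # readable without credentials: misconfigured
    return exposed

if __name__ == "__main__":
    for url in check_public_access(SAMPLE_URLS):
        print(f"PUBLICLY READABLE: {url}")
```

A properly secured store should answer such unauthenticated probes with 401, 403, or 404; any 200 response means the object is open to anyone who discovers or guesses its link.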
Henry Ajder, an expert in deepfake technology, called it disconcerting that the platform is operated by a South Korean entity at a time when the country is contending with a surge of nonconsensual deepfake content. He pointed to ongoing legislative efforts in South Korea to curb deepfake abuse, a sign of rising awareness and urgency around the issue, and argued that GenNomis’s branding, particularly its promise of “unrestricted” content creation, signals likely shortcomings in safety measures for such sensitive material.
Investigation of the files also revealed generation prompts containing troubling keywords and references to inappropriate scenarios, including sexual acts involving minors and celebrities. Fowler said this raises significant concerns that the technology is advancing faster than the legal frameworks meant to govern it: child sexual exploitation is already prohibited by law, yet AI capabilities continue to be used to circumvent those prohibitions.
As generative AI tools improve, creating and disseminating explicit material has become increasingly simple. Derek Ray-Hill, interim CEO of the Internet Watch Foundation, reported a dramatic rise in AI-generated CSAM, which has quadrupled since 2023. That surge, combined with the growing photorealism of the generated content, underscores the urgent need for regulatory action.
The GenNomis case highlights a broader issue in the cybersecurity landscape: adversaries exploiting readily available technology to produce harmful material at scale. Viewed through the MITRE ATT&CK framework, the incident reads less as a sophisticated intrusion than as a basic misconfiguration; storage left readable via direct URLs offers a trivial initial access path to sensitive material. The case underscores the need for business owners and technology professionals to remain vigilant about such evolving threats, maintaining robust preventative measures and compliance with legal standards to protect against exploitation.