A recent disclosure has revealed over thirty security vulnerabilities in various open-source artificial intelligence (AI) and machine learning (ML) tools, some of which pose severe risks, including remote code execution and data theft. These vulnerabilities, reported through Protect AI’s Huntr bug bounty platform, affect tools such as ChuanhuChatGPT, Lunary, and LocalAI.

Among the most critical vulnerabilities are two affecting Lunary, a toolkit for large language models (LLMs). The first, tracked as CVE-2024-7474, carries a CVSS score of 9.1 and is an Insecure Direct Object Reference (IDOR) flaw. This issue permits an authenticated user to view or delete external users, leading to unauthorized data access and potential data loss. The second vulnerability, CVE-2024-7475, also scored 9.1, is an improper access control weakness that allows attackers to alter the SAML configuration, potentially enabling unauthorized logins and access to sensitive information.
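IDOR flaws of this kind typically arise when an endpoint trusts a client-supplied identifier without verifying that the caller actually owns the referenced record. The following minimal sketch is purely illustrative and is not Lunary's actual code (Lunary is a Node.js application); the route, in-memory store, and header name are hypothetical:

```python
# Hypothetical illustration of the IDOR pattern described for CVE-2024-7474;
# not Lunary's actual code.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy in-memory store: each user record carries the organization it belongs to.
USERS = {
    1: {"name": "alice", "org": "acme"},
    2: {"name": "bob", "org": "globex"},
}

def caller_org() -> str:
    # Stand-in for real session handling: the authenticated caller's org.
    return request.headers.get("X-Org", "")

# Vulnerable pattern: the handler trusts the client-supplied ID and never
# checks that the record belongs to the caller's organization.
@app.delete("/v1/users/<int:user_id>")
def delete_user_vulnerable(user_id: int):
    USERS.pop(user_id, None)          # any authenticated user can delete anyone
    return jsonify(deleted=user_id)

# Fixed pattern: look the record up first and enforce an ownership check.
@app.delete("/v2/users/<int:user_id>")
def delete_user_fixed(user_id: int):
    record = USERS.get(user_id)
    if record is None or record["org"] != caller_org():
        abort(404)                    # avoid confirming that the record exists
    USERS.pop(user_id)
    return jsonify(deleted=user_id)
```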

In addition, Lunary is affected by another IDOR vulnerability (CVE-2024-7473), with a CVSS score of 7.5, which could enable malicious actors to manipulate prompts from other users by altering a user-controlled parameter. These Lunary-related vulnerabilities underscore the challenges that organizations face in securing AI applications.

Further highlighting the severity of these flaws is a path traversal vulnerability in ChuanhuChatGPT’s user upload capability, tracked as CVE-2024-5982, which also carries a CVSS score of 9.1. When exploited, this vulnerability can lead to arbitrary code execution and the exposure of sensitive data.
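The classic failure mode behind path traversal in upload handlers is joining an attacker-controlled filename directly onto a storage directory, so that "../" sequences escape it. Below is a hedged, self-contained sketch of the vulnerable pattern and a resolved-path check; it is not ChuanhuChatGPT's actual upload handler, and the directory and function names are made up:

```python
# Hypothetical sketch of the path traversal pattern described for CVE-2024-5982;
# not ChuanhuChatGPT's actual upload code.
from pathlib import Path

UPLOAD_ROOT = Path("uploads").resolve()
UPLOAD_ROOT.mkdir(exist_ok=True)

def save_upload_vulnerable(filename: str, data: bytes) -> Path:
    # A filename such as "../../app/config.json" escapes the upload directory.
    dest = UPLOAD_ROOT / filename
    dest.write_bytes(data)
    return dest

def save_upload_fixed(filename: str, data: bytes) -> Path:
    # Resolve the final path and refuse anything outside the upload root.
    dest = (UPLOAD_ROOT / filename).resolve()
    if not dest.is_relative_to(UPLOAD_ROOT):
        raise ValueError("path traversal attempt rejected")
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(data)
    return dest
```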

Additionally, two critical vulnerabilities in LocalAI, an open-source framework for running self-hosted LLMs, have come to light. The first, CVE-2024-6983, with a CVSS score of 8.8, could enable an attacker to execute arbitrary code by uploading a malicious configuration file. The other, CVE-2024-7010, rated at 7.5, can allow an attacker to guess valid API keys through a timing attack on server response times. This type of vulnerability is particularly concerning because it lets adversaries infer sensitive information systematically, one small observation at a time.
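The timing leak exists because an ordinary equality check returns as soon as it hits the first mismatching character, so the time a request takes correlates with how much of a guessed key is correct; a constant-time comparison removes that signal. The sketch below is a generic Python illustration of the defense, not LocalAI's actual code (LocalAI is written in Go), and the key value is invented:

```python
# Hypothetical illustration of the timing-attack issue described for CVE-2024-7010;
# not LocalAI's actual implementation.
import hmac

VALID_API_KEY = "sk-example-0000000000000000"

def check_key_vulnerable(candidate: str) -> bool:
    # '==' can stop at the first differing character, so request latency
    # leaks how long the matching prefix of a guess is.
    return candidate == VALID_API_KEY

def check_key_fixed(candidate: str) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs differ.
    return hmac.compare_digest(candidate.encode(), VALID_API_KEY.encode())
```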

The cumulative disclosure of these security flaws coincides with NVIDIA’s release of patches addressing a path traversal issue in its NeMo generative AI framework, identified as CVE-2024-0129, which also carries a risk of code execution and data manipulation.

In response, users are urged to update their software to the latest versions to mitigate potential threats. The disclosure of these vulnerabilities coincides with Protect AI’s launch of Vulnhuntr, a static code analysis tool aimed at identifying zero-day vulnerabilities in Python codebases by leveraging LLMs. Vulnhuntr examines project files and flags potentially exploitable code paths for further review.
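As a rough illustration of what LLM-assisted static analysis looks like in practice, the sketch below sends each Python file in a project to a chat-completion model and asks it to flag suspicious code paths. This is not Vulnhuntr's actual implementation or command-line interface; the prompt, model name, and directory layout are assumptions:

```python
# Hypothetical illustration of LLM-assisted static analysis in the spirit of
# Vulnhuntr; NOT the tool's actual implementation.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are a security reviewer. Identify potential remote code execution, "
    "path traversal, SSRF, or injection issues in the following Python file. "
    "Report the affected function and the untrusted input that reaches it."
)

def review_file(path: Path) -> str:
    source = path.read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for file in Path("src").rglob("*.py"):  # assumed project layout
        print(f"--- {file} ---")
        print(review_file(file))
```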

Moreover, a new jailbreaking technique discovered by Mozilla’s 0Day Investigative Network illustrates that malicious prompts encoded in hexadecimal format and emojis could circumvent OpenAI’s ChatGPT safeguards, enabling exploit creation for known vulnerabilities. This jailbreak tactic manipulates the model’s inherent design to follow natural language instructions, revealing a significant gap in the model’s ability to evaluate the safety of complex tasks.
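The encoding side of the technique is straightforward to illustrate: hexadecimal text looks opaque to simple keyword filters but is trivial for the model to decode back into an instruction. The snippet below merely round-trips a harmless sentence and is not one of the prompts used in the 0Din research:

```python
# Benign illustration of hiding an instruction in hexadecimal; not an actual
# jailbreak prompt from the 0Din research.
message = "translate the following sentence into French"

encoded = message.encode("utf-8").hex()
print(encoded)                      # 7472616e736c617465...

decoded = bytes.fromhex(encoded).decode("utf-8")
print(decoded)                      # round-trips back to the original text
```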

In summary, these vulnerabilities illustrate the heightened responsibility organizations have to secure their AI/ML applications effectively. The implications of these flaws extend beyond immediate technical fixes, necessitating a thorough understanding of potential adversarial tactics as outlined in the MITRE ATT&CK framework, such as initial access and privilege escalation. Business leaders must remain diligent in their cybersecurity measures as the landscape continues to evolve with increasingly sophisticated threats.