Severe RCE Vulnerability Found in Ollama AI Infrastructure Tool

Cybersecurity researchers have disclosed a significant security vulnerability in the Ollama open-source artificial intelligence (AI) infrastructure platform that could lead to remote code execution (RCE). The flaw, tracked as CVE-2024-37032, has been codenamed Probllama by cloud security firm Wiz. It was responsibly disclosed on May 5, 2024, and fixed in version 0.1.34, released on May 7, 2024.

Ollama serves as a platform for packaging, deploying, and operating large language models (LLMs) on devices running Windows, Linux, and macOS. The vulnerability stems from inadequate input validation, manifesting as a path traversal issue. This flaw could be exploited by malicious actors to overwrite arbitrary files on the server, which might culminate in remote code execution.

Successful exploitation requires an attacker to send specially crafted HTTP requests to the Ollama API server. The primary target is the API endpoint "/api/pull," which downloads models from the official registry or a private repository. By serving a malicious model manifest file with a path traversal payload in the digest field, an attacker could write to arbitrary locations on the server's filesystem.
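Conceptually, the bug class works like this: if a digest taken from a manifest is joined into a filesystem path without validation, a "../" payload walks out of the intended storage directory. The sketch below is illustrative only; the directory layout and function names are hypothetical, not Ollama's actual code, and the fix shown (strict digest-format validation) is one common remedy for this class of flaw.

```python
import os
import re

MODELS_DIR = "/var/lib/models"  # hypothetical model store, for illustration

def blob_path_naive(digest: str) -> str:
    # Vulnerable pattern: the digest from the manifest is trusted verbatim.
    return os.path.join(MODELS_DIR, "blobs", digest)

DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def blob_path_strict(digest: str) -> str:
    # Hardened pattern: only a well-formed sha256 digest is accepted,
    # so path separators and ".." can never reach the join.
    if not DIGEST_RE.fullmatch(digest):
        raise ValueError(f"invalid digest: {digest!r}")
    return os.path.join(MODELS_DIR, "blobs", digest)

payload = "../../../../etc/ld.so.preload"
# The naive join resolves to a path entirely outside the model store:
print(os.path.normpath(blob_path_naive(payload)))  # → /etc/ld.so.preload
```

Run against the naive version, the traversal payload resolves to a system configuration file; the strict version rejects it before any filesystem access occurs.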

The gravity of this vulnerability is compounded by its potential to corrupt files and achieve code execution by overwriting the "/etc/ld.so.preload" configuration file. Any shared library listed in that file is loaded into every dynamically linked program before it runs, so a planted rogue library poses a significant risk to system security. On default Linux installations, the API server binds only to localhost, which limits the risk of remote code execution; Docker deployments, however, expose the API server to the network, significantly raising the stakes.
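A generic server-side defense against this class of bug is to resolve the final write path and confirm it still sits inside the intended data directory before touching the filesystem. A minimal sketch of such a containment check (the base directory is assumed for illustration):

```python
import os

def safe_join(base: str, untrusted: str) -> str:
    """Join untrusted input onto base, refusing any path that escapes it."""
    candidate = os.path.normpath(os.path.join(base, untrusted))
    # commonpath compares whole path components, so a sibling directory
    # like "/var/lib/models-evil" does not pass for base "/var/lib/models".
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path escapes {base}: {untrusted!r}")
    return candidate

base = "/var/lib/models"
safe_join(base, "blobs/sha256-abc")              # stays inside the store
# safe_join(base, "../../../etc/ld.so.preload")  # raises ValueError
```

The check rejects both relative traversal payloads and absolute paths, since `os.path.join` discards the base when the second argument is absolute and the resulting path then fails the prefix comparison.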

Security researcher Sagi Tzadik emphasized the danger in Docker installations, highlighting that the server operates with root privileges and listens on 0.0.0.0 by default, enabling remote exploitation. The absence of authentication mechanisms within Ollama exacerbates this issue, as attackers can access exposed servers to manipulate AI models and compromise self-hosted inference environments.

A troubling finding by Wiz indicated that over 1,000 Ollama instances were exposed to the internet without adequate security measures, leaving the AI models they host vulnerable. This underscores the need for organizations running Ollama to put protective controls, such as authenticating middleware or a reverse proxy, in front of the API.
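Because Ollama itself ships without authentication, one practical mitigation is to front the API with middleware that verifies a credential before forwarding any request. The sketch below shows the core of such a check with a hypothetical shared secret; Ollama has no built-in notion of API tokens, so the token scheme here is an assumption, not part of the product.

```python
import hmac

API_TOKEN = "replace-with-a-long-random-secret"  # hypothetical shared secret

def is_authorized(headers: dict) -> bool:
    """Return True only if the request carries the expected bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(supplied, API_TOKEN)

# A reverse proxy or WSGI middleware would respond 401 Unauthorized and
# drop the request whenever is_authorized(...) returns False.
```

In practice this logic would live in a reverse proxy (or equivalent middleware) sitting between the network and the Ollama API, so unauthenticated traffic never reaches the server at all.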

Tzadik noted that CVE-2024-37032 exemplifies an "easy-to-exploit" RCE vulnerability endemic to modern AI infrastructure. Even though the codebase is relatively new and written in modern programming languages, it remains susceptible to classic vulnerabilities such as path traversal.

The disclosure coincides with a broader warning from Protect AI, which reported more than 60 security weaknesses across various open-source AI and machine learning tools. The most critical of these, CVE-2024-22476, is an improper input validation flaw in Intel Neural Compressor software carrying a maximum CVSS score of 10.0, underscoring the need for heightened vigilance in securing AI technologies.

In MITRE ATT&CK terms, exploitation of this flaw maps to initial access via the exposed API endpoint, while the arbitrary file overwrite provides a natural vehicle for persistence and privilege escalation, allowing adversaries to remain on the system with elevated access and facilitating broader compromise.

As organizations increasingly integrate AI solutions into their operations, the vigilance against such vulnerabilities must remain a top priority to safeguard against sophisticated cyber threats.
