Researchers Introduce AI Tool for Identifying Zero-Day Vulnerabilities

New Vulnerability Tool Uncovers Flaws in OpenAI and Nvidia APIs Used in GitHub Projects

Protect AI utilizes Anthropic’s Claude LLM to operate the vulnerability detection tool. (Image: Shutterstock)

In a significant advance for cybersecurity, researchers have unveiled an autonomous artificial intelligence tool capable of identifying remotely exploitable vulnerabilities and zero-day flaws in software. Despite some inconsistencies in its findings, the tool reportedly produces fewer false positives than traditional methods.

Developed by Protect AI, the Python static code analyzer, known as Vulnhuntr, leverages Anthropic's Claude 3.5 Sonnet large language model to discover coding weaknesses and generate proofs of concept for potential exploits. The tool has revealed critical vulnerabilities in GitHub projects that use the OpenAI, Nvidia, and YandexGPT APIs, drawing attention to a specific OpenAI file that contained a server-side request forgery flaw capable of redirecting API requests to malicious endpoints.
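The article does not reproduce the vulnerable code, but an SSRF flaw of the kind described typically arises when a user-controlled value ends up in the base URL of an outbound API request. The following is a minimal, hypothetical Python sketch of that pattern and one common mitigation; the function names and allow-list are illustrative assumptions, not the actual OpenAI file:

```python
from urllib.parse import urlparse
from urllib.request import Request

# Assumption for illustration: the only intended upstream endpoint.
ALLOWED_HOSTS = {"api.openai.com"}

def build_request(base_url: str, prompt: str) -> Request:
    """SSRF-prone pattern: the API base URL comes straight from user
    input, so a caller can redirect the request (and any attached
    credentials) to an arbitrary host they control."""
    return Request(f"{base_url}/v1/completions", data=prompt.encode())

def build_request_safe(base_url: str, prompt: str) -> Request:
    """Mitigation sketch: reject any host outside an explicit allow-list
    before constructing the outbound request."""
    if urlparse(base_url).hostname not in ALLOWED_HOSTS:
        raise ValueError("untrusted API host")
    return Request(f"{base_url}/v1/completions", data=prompt.encode())
```

The unsafe variant happily targets any host the caller supplies, which is exactly the request-redirection behavior described above; the safe variant fails closed on unknown hosts.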

According to the researchers, Vulnhuntr's confidence scoring system indicates how likely an identified vulnerability is to be genuine. A score of seven suggests a high likelihood of authenticity, scores of eight or higher strongly indicate valid vulnerabilities, and scores of one through six imply the finding is probably not a real flaw.
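The scoring thresholds described above can be summarized in a small helper. The function name is hypothetical; only the cutoffs and their interpretations come from the researchers' description:

```python
def interpret_confidence(score: int) -> str:
    """Map a Vulnhuntr-style confidence score (1-10) to the
    interpretation the researchers describe. Hypothetical helper;
    the thresholds follow the article."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score >= 8:
        return "strong indication of a valid vulnerability"
    if score == 7:
        return "likely valid; worth manual review"
    return "probably not a real flaw"
```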

To address typical limitations of large language models (LLMs), particularly their bounded context windows, Protect AI researchers employed a technique known as retrieval-augmented generation. This involved parsing large bodies of text into manageable tokens and refining the tool's capabilities with both pre-patch and post-patch code, alongside established vulnerability databases such as CVEFixes. By breaking code into smaller units, the tool can analyze the relevant sections more effectively.
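The chunking step described above can be sketched roughly as follows. This is a simplified, hypothetical illustration that uses whitespace splitting as a stand-in for a real tokenizer; it is not Protect AI's actual implementation:

```python
def chunk_source(text: str, max_tokens: int = 512) -> list[str]:
    """Naive token-bounded chunking: treat whitespace-separated words
    as tokens and group them into chunks that fit within a model's
    context window. Purely illustrative; real pipelines use the
    model's own tokenizer."""
    words = text.split()
    return [
        " ".join(words[i:i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]
```

In a real retrieval-augmented setup, chunks like these would be embedded and indexed so that only the sections relevant to a query are retrieved into the model's context.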

Protect AI emphasizes that this selective approach lets the tool focus on the code files most likely to handle user input, ultimately delivering a comprehensive analysis of potential vulnerabilities. The tool uses purpose-built prompts to shape responses and guide its logical reasoning, ensuring thorough evaluation of its output.
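The article does not publish Vulnhuntr's actual prompts, but a prompt designed to guide an LLM's reasoning over a code file in this way typically resembles the following hypothetical sketch:

```python
# Hypothetical prompt template; Vulnhuntr's real prompts are not
# reproduced in the article.
ANALYSIS_PROMPT = """You are a security auditor reviewing Python code.
Focus on how user input flows through the functions below.
1. Trace each user-controlled value from entry point to sink.
2. Name the vulnerability class, if any (e.g. SSRF, RCE).
3. Give a proof-of-concept input and a confidence score from 1 to 10.

Code under review:
{code}
"""

def build_prompt(code: str) -> str:
    """Fill the template with the code file selected for analysis."""
    return ANALYSIS_PROMPT.format(code=code)
```

Structuring the prompt as explicit numbered steps is one common way to steer a model toward traceable reasoning rather than a one-shot verdict.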

While Vulnhuntr represents a notable advancement in static code analysis, Protect AI acknowledges that the tool still faces challenges, including accuracy issues tied to its training data. Currently, it is limited to identifying seven types of flaws, and while it can be adapted to detect additional vulnerabilities, increased training may extend processing times significantly. Additionally, as the tool is tailored for Python, its effectiveness may diminish when analyzing code in other programming languages.

The non-deterministic nature of LLMs also poses a challenge, as repeated analyses of the same project can yield different results. Nonetheless, researchers assert that Vulnhuntr marks a significant improvement over existing static code analyzers, particularly in detecting complex vulnerabilities and minimizing false positives. Future plans include expanding the tool’s capabilities to encompass entire codebases, enhancing its overall utility in cybersecurity efforts.
