Google recently announced that OSS-Fuzz, its open-source fuzzing service, has used AI-generated fuzz targets to uncover 26 vulnerabilities across open-source code repositories, among them a medium-severity flaw in the widely used OpenSSL cryptographic library.
In a blog post shared with The Hacker News, Google's open-source security team said the findings mark a milestone for automated vulnerability discovery, since every one of the flaws was found with AI-generated fuzz targets.
The OpenSSL flaw, tracked as CVE-2024-9143 (CVSS score: 4.3), is an out-of-bounds memory write that can crash an application or, in the worst case, enable remote code execution. OpenSSL, which underpins encrypted communications across much of the internet, has fixed the issue in versions 3.3.3, 3.2.4, 3.1.8, 3.0.16, 1.1.1zb, and 1.0.2zl.
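The exact trigger for CVE-2024-9143 lies in OpenSSL's low-level field arithmetic, but the underlying bug class is simple to illustrate. The following sketch is a hypothetical example of an out-of-bounds write, not OpenSSL's actual code: an index derived from untrusted input is used without being checked against the buffer's size.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical illustration of the out-of-bounds write bug class
// (not OpenSSL's code): the index comes from untrusted input and is
// never validated against the size of the destination buffer.
void set_coefficient(uint8_t *coeffs, size_t ncoeffs, size_t index, uint8_t value) {
    // BUG: if index >= ncoeffs, this writes past the end of the buffer,
    // corrupting adjacent memory -- a crash at best, code execution at worst.
    coeffs[index] = value;
}

// The fix is a bounds check before the write.
bool set_coefficient_safe(uint8_t *coeffs, size_t ncoeffs, size_t index, uint8_t value) {
    if (index >= ncoeffs) {
        return false;  // reject out-of-range input instead of writing
    }
    coeffs[index] = value;
    return true;
}
```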
Google first integrated large language models (LLMs) into OSS-Fuzz in August 2023 to improve fuzzing coverage. The team noted that the OpenSSL vulnerability had likely been present in the codebase for roughly 20 years and would not have been discoverable with the existing human-written fuzz targets.
To date, AI-generated fuzz targets have expanded code coverage across 272 C/C++ projects, adding more than 370,000 lines of newly covered code. Google also explained why such bugs can go undetected for so long: line coverage alone does not guarantee that a function is free of bugs, because the same code can behave differently under different flags and configurations, and only certain combinations expose a flaw. A sketch of what such a fuzz target looks like follows below.
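To make the flag-and-configuration point concrete, here is a minimal libFuzzer-style harness of the kind OSS-Fuzz runs. The parse_record function and its STRICT_MODE flag are hypothetical stand-ins, not code from any real project; the point is that the harness uses part of the fuzzer's input to vary the configuration, so the same lines are exercised under different flags rather than merely being "covered" once.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical library code under test: a toy parser whose behavior
// depends on a caller-supplied flag.
enum ParseFlags : uint32_t { DEFAULT = 0, STRICT_MODE = 1 };

static int parse_record(const uint8_t *data, size_t size, uint32_t flags) {
    if (size == 0) return -1;
    // In strict mode, require a magic byte; in default mode, skip the check.
    if ((flags & STRICT_MODE) && data[0] != 0x7f) return -1;
    // ... further parsing would go here ...
    return 0;
}

// libFuzzer entry point: the fuzzer calls this repeatedly with mutated inputs.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    if (size < 1) return 0;

    // Use the first input byte to pick the configuration, so the fuzzer
    // exercises the same code under different flags -- covering a line once
    // does not mean every configuration of that line has been tested.
    uint32_t flags = (data[0] & 1) ? STRICT_MODE : DEFAULT;
    parse_record(data + 1, size - 1, flags);
    return 0;
}
```

A harness like this is typically built with a command along the lines of `clang++ -fsanitize=fuzzer,address harness.cc`, so that memory errors such as out-of-bounds writes are caught the moment an input triggers them.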
Google attributes the discoveries to the LLMs' ability to emulate a developer's fuzzing workflow end to end, automating steps that previously required manual effort. The news follows Google's recent disclosure that Big Sleep, another LLM-based system, helped uncover a zero-day vulnerability in the SQLite open-source database engine.
Separately, Google is continuing its transition to memory-safe programming languages such as Rust, while also retrofitting spatial memory safety into existing C++ codebases, including Chrome. The shift is part of a broader commitment to strengthening security across its code.
As part of this effort, Google is migrating Chrome to Safe Buffers and enabling hardened libc++, which adds bounds checking to standard C++ data structures and so reduces the risk of spatial safety vulnerabilities. The performance impact is minimal, averaging about 0.30% overhead. The hardened libc++, developed with contributions from the open-source community, catches bugs such as out-of-bounds accesses in production code, improving its reliability and security.
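To illustrate what those bounds checks do, consider the small program below. With an unhardened standard library, the out-of-range operator[] is undefined behavior and may silently read adjacent memory; with libc++'s hardening enabled at build time (for example via the _LIBCPP_HARDENING_MODE macro in recent LLVM releases), the invalid index is detected and the program traps immediately. The exact macro and build configuration vary by toolchain version, so treat this as a sketch rather than Chrome's actual build setup.

```cpp
#include <cstdio>
#include <vector>

// Sketch: an off-by-one read past the end of a vector.
// Without hardening, v[i] performs no bounds check (undefined behavior);
// with libc++ hardening enabled, the out-of-range access is caught and
// the process traps instead of touching adjacent memory.
int main() {
    std::vector<int> v = {1, 2, 3};
    int sum = 0;
    for (size_t i = 0; i <= v.size(); ++i) {  // BUG: should be i < v.size()
        sum += v[i];
    }
    std::printf("%d\n", sum);
    return 0;
}
```

The appeal of this approach, as the article notes, is that existing C++ code gets the protection without being rewritten: the check lives inside the standard library containers themselves.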
Overall, these advancements illustrate Google’s unwavering commitment to enhancing open-source security and leveraging cutting-edge technology to mitigate risks within its ecosystem. By employing AI for vulnerability discovery and adopting safer programming practices, the company is laying the groundwork for a more secure software environment.