Large language models (LLMs) like GPTs have gained notoriety for generating inaccurate information. However, for Erica Burgess, an artificial intelligence cybersecurity architect, these “hallucinations” can serve a beneficial role in threat modeling. “I prefer to view these hallucinations as untested ideas,” she remarked, highlighting their potential in cybersecurity applications.
In her recent presentation, “Never Break the Chain,” at Black Hat Europe in London, Burgess illustrated the concept with redacted examples from her red-teaming and penetration-testing work. She described how LLMs can swiftly identify and assemble low-severity vulnerabilities that seem inconsequential individually but, when chained together, can produce serious security failures, including actual server compromise.
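The chaining idea Burgess describes can be sketched as a graph search: each finding grants an attacker capability given a prerequisite, and a chain is a path from an unauthenticated starting point to compromise. The findings and capability names below are hypothetical illustrations, not examples from her talk.

```python
# Illustrative sketch (not Burgess's actual tooling): modeling how
# low-severity findings can chain into a high-impact compromise.
# Each hypothetical finding maps a prerequisite capability to a gained one.
from collections import deque

FINDINGS = {
    "verbose-error-page": ("unauthenticated", "internal-paths-known"),
    "open-redirect": ("unauthenticated", "phishable-session"),
    "session-fixation": ("phishable-session", "user-session"),
    "debug-endpoint": ("internal-paths-known", "config-readable"),
    "creds-in-config": ("config-readable", "admin-access"),
    "unsigned-plugin-upload": ("admin-access", "server-compromise"),
}

def find_chain(start, goal):
    """BFS over capabilities: return the list of findings linking start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for name, (pre, post) in FINDINGS.items():
            if pre == state and post not in seen:
                seen.add(post)
                queue.append((post, path + [name]))
    return None  # no chain exists

chain = find_chain("unauthenticated", "server-compromise")
print(" -> ".join(chain))
# Four individually low-severity findings combine into full compromise.
```

None of the four findings in the resulting chain would rate as critical on its own, which is precisely why such paths are easy to overlook in severity-sorted reports.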
“When I bill clients, my priority is delivering high-quality results,” Burgess stated. “The ability to expedite processes efficiently has proven invaluable.” By leveraging GPT technology, Burgess aims to spark innovative thinking and unconventional approaches to problem-solving. The non-judgmental nature of AI gives it a distinct advantage, especially in a field where setbacks are commonplace.
“Hacking fundamentally involves exploration and observation,” she noted. “It’s about identifying anomalous behavior and encouraging it to manifest in adverse ways.” In an interview with Information Security Media Group, Burgess elaborated on her work with GPTs, focusing on potential applications like rapidly uncovering obscure commands that would otherwise take extensive manual effort to discover.
Burgess emphasizes ongoing vigilance, routinely stress-testing vendor patches to confirm that vulnerabilities she previously uncovered have actually been fixed. Her experience shows that AI tools can generate counterintuitive solutions that defy standard engineering practice yet prove effective.
As businesses increasingly face sophisticated cyber threats, understanding the potential vulnerabilities in their systems is imperative. The MITRE ATT&CK framework helps elucidate the tactics and techniques that adversaries may employ, including initial access, persistence, and privilege escalation. This analytical framework not only helps businesses develop robust cybersecurity strategies but also equips them with the knowledge necessary to address evolving threats in a complex digital landscape.
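The tactics named above each have stable identifiers in the ATT&CK Enterprise matrix, so report findings are often tagged with the tactic and technique they map to. The tactic IDs and sample techniques below are real ATT&CK entries; the findings themselves and the mapping to them are hypothetical illustrations.

```python
# Illustrative sketch: grouping hypothetical report findings under the
# MITRE ATT&CK tactics mentioned in the article. TA/T identifiers are
# real ATT&CK entries; the finding-to-technique mapping is invented.
TACTICS = {
    "TA0001": "Initial Access",
    "TA0003": "Persistence",
    "TA0004": "Privilege Escalation",
}

# Hypothetical findings tagged with an ATT&CK technique and tactic ID.
FINDINGS = [
    {"finding": "phishable login flow",
     "technique": "T1566 Phishing", "tactic": "TA0001"},
    {"finding": "world-writable autostart service",
     "technique": "T1547 Boot or Logon Autostart Execution", "tactic": "TA0003"},
    {"finding": "vulnerable kernel driver",
     "technique": "T1068 Exploitation for Privilege Escalation", "tactic": "TA0004"},
]

def group_by_tactic(findings):
    """Group finding names under their ATT&CK tactic name."""
    grouped = {}
    for f in findings:
        grouped.setdefault(TACTICS[f["tactic"]], []).append(f["finding"])
    return grouped

for tactic, items in group_by_tactic(FINDINGS).items():
    print(f"{tactic}: {', '.join(items)}")
```

Organizing findings this way lets defenders see at a glance which stages of an intrusion their current gaps would enable.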