Transcript
This transcript has been streamlined for clarity.
Mathew Schwartz: Hello. I’m Mathew Schwartz from Information Security Media Group, and today I’m joined by Candid Wüest, a prominent security advocate at Xorlab. Candid, it’s a pleasure to have you here.
Candid Wüest: Thank you for having me.
Mathew Schwartz: You have extensive experience as a security researcher, probing all sorts of devices and applications across the hacking landscape. With that expertise in mind, I'd like your views on the current state of artificial intelligence, particularly its capacity to generate malware through large language models such as ChatGPT.
Candid Wüest: That’s a great question. With over 25 years in the antivirus and EDR/XDR industry, I’ve observed how tools like ChatGPT and others can indeed assist attackers in writing malware. Guardrails limit outright malicious code generation, but creative prompts can still yield dangerous results. A straightforward request for malware is denied, yet phrasing the same request in poetic form can sometimes bypass those restrictions.
Mathew Schwartz: So, using creative phrasing can help circumvent those guardrails?
Candid Wüest: Exactly. Asking for a program that "secures" files by encrypting them, under the pretense of building a security tool, can yield code that resembles ransomware, albeit a rudimentary variant. This highlights the need for a foundational understanding of malware mechanics: simply asking for sophisticated ransomware won't achieve the desired outcome.
Mathew Schwartz: That raises the question of how much expertise is required to effectively use these tools. It seems like potential users need an in-depth understanding before they can extract valuable results.
Candid Wüest: Unfortunately, that’s true. AI lowers the barrier for would-be attackers, but some expertise remains necessary. The malware-as-a-service ecosystem already lets actors with minimal technical skill rent powerful toolkits. A common shortcut for cybercriminals is to repurpose published threat reports from security firms, building new ransomware variants by following those templates rather than innovating.
In many cases, attackers reuse well-documented techniques catalogued in frameworks such as MITRE ATT&CK, which maps out known attack methods. Even new AI-assisted malware tends to stick to established tactics like persistence and privilege escalation. So the fundamental defensive strategies against ransomware remain relevant regardless of who, or what, wrote the code.
Mathew Schwartz: It appears that while AI can augment the creation of these threats, foundational security practices are crucial to thwart them. One ongoing concern is the evolution toward AI that can adapt on the fly to remain undetected. Are we at that point yet?
Candid Wüest: Thankfully, we’re not there yet. Existing AI models excel at rearranging established techniques, but the results still map to known detection patterns catalogued in frameworks like MITRE ATT&CK. We are seeing more sophisticated malware that modifies its behavior in real time, yet those adaptations tend to become recognizable as security solutions evolve. Effective defense still hinges on fundamental security hygiene, such as timely patching and robust password management.
Mathew Schwartz: You mentioned dynamic behavior. Are traditional techniques sufficient to counteract these more advanced threats, or must new strategies be employed?
Candid Wüest: Established security measures such as behavioral detection and reputation systems remain effective, but the landscape is shifting. AI-driven variants such as polymorphic malware do exist and create challenges for detection. However, our research shows that much of this AI-generated malware still depends on external models. That reliance on cloud-based services is itself a weakness: the malware stops working when providers shut down the accounts or models it depends on.
Mathew Schwartz: That reliance introduces a critical point of failure. Looking ahead, do you see evidence of emerging threats leveraging GPT models to develop cutting-edge malware?
Candid Wüest: Yes, augmentation rather than replacement is the prevailing theme. Reports describe instances where AI models such as Claude have been used to automate several stages of an attack, requiring less direct human intervention. As these AI systems improve, they may facilitate more advanced attacks while still relying on traditional tools for execution. Even so, they remain dependent on human oversight for validation and direction.
Mathew Schwartz: Your insights highlight the complexities inherent in this evolving landscape. The potential shift toward decentralized, locally run models raises new questions about detection and attribution. Thank you, Candid, for sharing your expertise today. I look forward to updating our audience on further developments in this critical field.
Candid Wüest: Thank you. I also hope for continued dialogue as the landscape of cybersecurity changes rapidly, ensuring our understanding evolves alongside it.
Mathew Schwartz: I’ve been speaking with Candid Wüest, security advocate at Xorlab. This is Mathew Schwartz from ISMG, and thank you for joining us today.