New Assault on ChatGPT Research Agent Exfiltrates Secrets from Gmail Inboxes
ShadowLeak Vulnerability Exposes Risks in Language Models Recent developments in the cybersecurity landscape have unveiled a significant vulnerability involving prompt injection attacks on large language models (LLMs), spotlighted by the alarming case of ShadowLeak. This method primarily utilizes indirect prompt injections embedded within untrusted documents and emails, enabling malicious actors…