Google Strengthens GenAI Security with Enhanced Multi-Layered Defenses Against Prompt Injection Threats
June 23, 2025
Artificial Intelligence / AI Security
Google has announced new safety measures aimed at fortifying its generative artificial intelligence (AI) systems against emerging threats such as indirect prompt injections. These attacks, unlike direct prompt injections that involve the submission of harmful commands, embed malicious instructions within external data sources like emails, documents, or calendar invites, potentially leading AI systems to leak sensitive information or execute harmful actions. In response, Google’s GenAI security team has developed a comprehensive “layered” defense strategy that raises the difficulty, cost, and complexity associated with executing successful attacks. This multifaceted approach includes model hardening and the introduction of specialized safeguards.