Tag: OpenAI

New AI Jailbreak Technique ‘Bad Likert Judge’ Increases Attack Success Rates by More Than 60%

Emerging Jailbreak Technique Poses New Threats to Language Models

Cybersecurity research has recently unveiled a new jailbreak technique that undermines the safety mechanisms of large language models (LLMs), potentially enabling the generation of harmful or malicious content. This multi-turn attack strategy, termed “Bad Likert Judge,” has been revealed by researchers…


Weekly Cybersecurity Newsletter: Discord Updates, Red Hat Data Breach, 7-Zip Vulnerabilities, and SonicWall Firewall Hack

In the latest edition of the Cybersecurity Newsletter, we explore significant vulnerabilities and threats currently impacting the digital environment. This week’s focus highlights several critical incidents that occurred leading up to October 12, 2025, including a Discord platform breach, a substantial data leak at Red Hat, and concerning vulnerabilities associated…


Meta’s Llama Framework Vulnerability Exposes AI Systems to Remote Code Execution Threats

A significant security vulnerability has been identified within Meta’s Llama large language model (LLM) framework. This flaw, if effectively exploited, may enable an attacker to execute arbitrary code on the llama-stack inference server. Known as CVE-2024-50050, this vulnerability has received a CVSS score of 6.3 out of 10 from the…
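
Public analyses of CVE-2024-50050 have attributed the flaw to unsafe deserialization of untrusted data (Python pickle) in the inference server's socket interface. As a general illustration, the sketch below shows why unpickling attacker-supplied bytes is equivalent to letting the attacker run code; the `Malicious` class is a hypothetical benign stand-in, not code from the Llama framework.

```python
import pickle

class Malicious:
    # pickle's __reduce__ protocol lets an object declare a callable and
    # arguments to be invoked when the bytes are deserialized. An attacker
    # can substitute any importable callable here (e.g. os.system); this
    # benign example merely calls list() so the effect is observable.
    def __reduce__(self):
        return (list, (("attacker-controlled call",),))

payload = pickle.dumps(Malicious())   # bytes an attacker would send
result = pickle.loads(payload)        # invokes the attacker-chosen callable
# result == ["attacker-controlled call"]
```

This is why security guidance recommends never unpickling data from an untrusted source, and instead using a data-only format such as JSON for network-facing APIs.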


Researchers Caution Against Privilege Escalation Threats in Google’s Vertex AI ML Platform

Recent cybersecurity findings have revealed two significant vulnerabilities within Google’s Vertex AI machine learning platform. These exploits could be leveraged by malicious entities to escalate user privileges and exfiltrate sensitive models directly from the cloud environment. According to an analysis released by researchers from Palo Alto Networks Unit 42, exploiting…


Deception and Strategy: AI Models Engaged in a Game

Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development

Study by OpenAI and Apollo Research Reveals Hidden Deception in AI Models

Rashmi Ramesh (rashmiramesh_) • September 26, 2025
Image: Tang Yan Song/Shutterstock

Recent research from OpenAI and Apollo Research reveals that advanced artificial intelligence models are developing the capability…


Exposed: DeepSeek AI Database Leaks Over 1 Million Log Entries and Confidential Keys

A recent incident involving the prominent Chinese artificial intelligence startup DeepSeek revealed significant security lapses that potentially exposed sensitive information to unauthorized access. The startup, which has seen a surge in popularity, inadvertently left one of its databases unsecured on the internet, raising concerns about its data protection practices. According to…


ShadowLeak: Zero-Click Vulnerability Exposes Gmail Data Through OpenAI ChatGPT Deep Research Agent

Sep 20, 2025Ravie LakshmananArtificial Intelligence / Cloud Security A zero-click vulnerability has been identified in OpenAI’s ChatGPT Deep Research agent, enabling attackers to potentially access sensitive Gmail inbox data through a single malicious email, without requiring any interaction from the user. This novel exploitation method, termed ShadowLeak by cybersecurity firm…
