Tag AI

Thrive Introduces Network Detection and Response Solutions

BOSTON, Aug. 21, 2025 (GLOBE NEWSWIRE) — Thrive, a prominent global provider of technology outsourcing specializing in cybersecurity, cloud services, and traditional managed services, has unveiled a new Network Detection and Response (NDR) service aimed at bolstering cybersecurity for businesses. This service will continuously monitor networks for potential security incidents,…

Read MoreThrive Introduces Network Detection and Response Solutions

Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction

June 12, 2025
Artificial Intelligence / Vulnerability

A new attack method called EchoLeak has been identified as a “zero-click” AI vulnerability, enabling malicious actors to extract sensitive data from Microsoft 365 (M365) Copilot without any user involvement. This critical vulnerability has been assigned CVE identifier CVE-2025-32711, with a CVSS score of 9.3. It requires no action from users and has already been addressed by Microsoft, with no reported instances of exploitation. According to a recent advisory, “AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network.” This vulnerability has been included in Microsoft’s June 2025 Patch Tuesday updates, bringing the total number of fixed vulnerabilities to 68. Aim Security, which discovered and reported the issue, noted that it exemplifies a large language model (LLM) Scope Violation that leads to indirect prompt injection risks.

Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction On June 12, 2025, cybersecurity experts disclosed a significant vulnerability known as EchoLeak, which has been classified as a “zero-click” artificial intelligence (AI) exploit. This flaw allows malicious actors to extract sensitive data from Microsoft 365 (M365) Copilot without…

Read More

Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction

June 12, 2025
Artificial Intelligence / Vulnerability

A new attack method called EchoLeak has been identified as a “zero-click” AI vulnerability, enabling malicious actors to extract sensitive data from Microsoft 365 (M365) Copilot without any user involvement. This critical vulnerability has been assigned CVE identifier CVE-2025-32711, with a CVSS score of 9.3. It requires no action from users and has already been addressed by Microsoft, with no reported instances of exploitation. According to a recent advisory, “AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network.” This vulnerability has been included in Microsoft’s June 2025 Patch Tuesday updates, bringing the total number of fixed vulnerabilities to 68. Aim Security, which discovered and reported the issue, noted that it exemplifies a large language model (LLM) Scope Violation that leads to indirect prompt injection risks.

New Flodrix Botnet Variant Takes Advantage of Langflow AI Server RCE Vulnerability for DDoS Attacks

Cybersecurity researchers have identified a new campaign that actively exploits a recently revealed critical security flaw in Langflow to deploy the Flodrix botnet malware. According to Trend Micro researchers Aliakbar Zahravi, Ahmed Mohamed Ibrahim, Sunil Bharti, and Shubham Singh in their technical report, attackers are leveraging this vulnerability to execute downloader scripts on compromised Langflow servers, which subsequently retrieve and install the Flodrix malware. This activity involves the exploitation of CVE-2025-3248 (CVSS score: 9.8), a missing authentication vulnerability affecting Langflow, a Python-based visual framework for creating AI applications. Successful exploitation allows unauthenticated attackers to execute arbitrary code through specially crafted HTTP requests. Langflow addressed this flaw with version 1.3.0, released in March 2025. Last month, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) highlighted…

New Variant of Flodrix Botnet Leverages Langflow AI Server RCE Vulnerability for DDoS Operations On June 17, 2025, cybersecurity professionals alerted the public to an ongoing campaign targeting vulnerabilities in Langflow, a Python-based platform for developing artificial intelligence applications. This campaign is primarily focused on delivering the Flodrix botnet malware,…

Read More

New Flodrix Botnet Variant Takes Advantage of Langflow AI Server RCE Vulnerability for DDoS Attacks

Cybersecurity researchers have identified a new campaign that actively exploits a recently revealed critical security flaw in Langflow to deploy the Flodrix botnet malware. According to Trend Micro researchers Aliakbar Zahravi, Ahmed Mohamed Ibrahim, Sunil Bharti, and Shubham Singh in their technical report, attackers are leveraging this vulnerability to execute downloader scripts on compromised Langflow servers, which subsequently retrieve and install the Flodrix malware. This activity involves the exploitation of CVE-2025-3248 (CVSS score: 9.8), a missing authentication vulnerability affecting Langflow, a Python-based visual framework for creating AI applications. Successful exploitation allows unauthenticated attackers to execute arbitrary code through specially crafted HTTP requests. Langflow addressed this flaw with version 1.3.0, released in March 2025. Last month, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) highlighted…

493 Cases of Child Sextortion Tied to Infamous Scam Networks

Research Highlights Dark Links Between Scam Operations and Sextortion Recent investigations into alleged sextortion activities reveal a concerning nexus involving organized crime and technology abuse. Heintz, a researcher in the field, noted, “While the data available has limitations, it accurately reflects the situation. If anything, it may even understate the…

Read More493 Cases of Child Sextortion Tied to Infamous Scam Networks

IBM Discovers Inadequate Controls in 97% of AI-Related Data Breaches

Recent research from IBM highlights a significant “AI oversight gap” among organizations that have experienced data breaches. According to findings from the company’s Cost of a Data Breach Report, an alarming 97% of these organizations reported a lack of adequate AI access controls, underscoring potential vulnerabilities in their cybersecurity frameworks.…

Read MoreIBM Discovers Inadequate Controls in 97% of AI-Related Data Breaches

Russia Intensifies Restrictions on End-to-End Encrypted Calls

A recent collaborative investigation by WIRED, The Markup, and CalMatters has unveiled that numerous data brokers are purposefully obscuring their opt-out and data deletion tools from Google Search results. This tactic complicates the ability of consumers to locate and utilize these privacy options, raising significant concerns about data privacy practices.…

Read MoreRussia Intensifies Restrictions on End-to-End Encrypted Calls

Your SSN Exposed Online, AI Data Breaches, and Bus Hacking: This Week’s Cybersecurity Chaos – PCMag

Major Cybersecurity Concerns: Data Exposure and Vulnerabilities on the Rise In the latest developments in cybersecurity, various incidents have highlighted growing vulnerabilities in digital infrastructures. Notably, social security numbers (SSNs) are increasingly becoming compromised, with significant amounts of personal data leaking online. The rise of artificial intelligence is exacerbating this…

Read MoreYour SSN Exposed Online, AI Data Breaches, and Bus Hacking: This Week’s Cybersecurity Chaos – PCMag

Critical Flaw in Anthropic’s MCP Poses Remote Exploitation Risk for Developer Systems

July 01, 2025
Vulnerability / AI Security

Cybersecurity experts have identified a severe security flaw in Anthropic’s Model Context Protocol (MCP) Inspector project, potentially enabling remote code execution (RCE) and granting attackers total access to affected systems. Identified as CVE-2025-49596, this vulnerability boasts a CVSS score of 9.4 out of 10, indicating a critical risk level. “This represents one of the first significant RCE vulnerabilities within Anthropic’s MCP framework, opening the door to a new wave of browser-based attacks targeting AI development tools,” stated Avi Lumelsky from Oligo Security in a recent report. “With the ability to execute code on a developer’s machine, attackers can compromise sensitive data, install malware, and navigate through networks—posing serious threats to AI teams, open-source initiatives, and enterprises utilizing MCP.” Introduced by Anthropic in November 2024, MCP is an open protocol aimed at standardizing large language model (LLM) applications…

Critical Flaw in Anthropic’s MCP Poses Severe Risks to Developer Systems July 1, 2025 In a significant cybersecurity revelation, researchers have identified a critical vulnerability within Anthropic’s Model Context Protocol (MCP) Inspector project, potentially permitting remote code execution (RCE) that could compromise developer machines. This vulnerability, cataloged as CVE-2025-49596, has…

Read More

Critical Flaw in Anthropic’s MCP Poses Remote Exploitation Risk for Developer Systems

July 01, 2025
Vulnerability / AI Security

Cybersecurity experts have identified a severe security flaw in Anthropic’s Model Context Protocol (MCP) Inspector project, potentially enabling remote code execution (RCE) and granting attackers total access to affected systems. Identified as CVE-2025-49596, this vulnerability boasts a CVSS score of 9.4 out of 10, indicating a critical risk level. “This represents one of the first significant RCE vulnerabilities within Anthropic’s MCP framework, opening the door to a new wave of browser-based attacks targeting AI development tools,” stated Avi Lumelsky from Oligo Security in a recent report. “With the ability to execute code on a developer’s machine, attackers can compromise sensitive data, install malware, and navigate through networks—posing serious threats to AI teams, open-source initiatives, and enterprises utilizing MCP.” Introduced by Anthropic in November 2024, MCP is an open protocol aimed at standardizing large language model (LLM) applications…