Recent discussions among cybersecurity experts highlight serious concerns about data privacy in AI-enabled toys, with particular focus on Bondu, a maker of these products. Security researchers Margolis and Thacker have raised alarms over access to sensitive user data, asking how many employees within such organizations can view this information, who oversees that access, and how those employees' credentials are protected. The researchers stress that a single weak password could trigger a significant breach, exposing sensitive information to the public.
Margolis further warns that this level of access to a child's private thoughts and emotions invites exploitation. He calls the scenario a "kidnapper's dream," arguing that such information could allow malicious actors to endanger children by luring them into harmful situations.
Adding complexity to the issue, Bondu reportedly relies on third-party enterprise AI services, including Google Gemini and OpenAI's GPT-5, to process conversational data, raising the possibility that children's conversations are shared with these external companies. A Bondu representative said that while the company uses these services, it is committed to minimizing data sharing and protects user information through stringent contractual and technical measures.
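Bondu has not published the technical details of those measures, but in pipelines like this, data minimization usually means stripping identifying details before a transcript ever leaves the company's own servers. The following Python sketch illustrates the idea under that assumption; the redact helper, the patterns, and the payload shape are all hypothetical and not drawn from Bondu's actual stack.

```python
import re

# Hypothetical redaction pass applied before any transcript is sent
# to a third-party model provider. Patterns here are illustrative;
# production systems typically use dedicated PII-detection services.
PII_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(transcript: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label.upper()}]", transcript)
    return transcript

def prepare_for_external_model(transcript: str) -> dict:
    """Build the minimal payload that leaves the company's servers."""
    return {
        "text": redact(transcript),
        # No user ID, device ID, or location is attached: the external
        # provider only needs the content required to generate a reply.
    }

print(prepare_for_external_model("Call mom at 555-123-4567 after school"))
```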
In light of these findings, Margolis and Thacker caution that AI toy manufacturers may unwittingly heighten their cybersecurity risk through poor coding practices, particularly when generative AI tools are used in product development. Their investigation indicated that the Bondu admin console may have been built with AI-assisted coding tools that are prone to introducing security vulnerabilities; however, Bondu has not clarified whether AI tools were used in the console's development.
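Bondu has not disclosed how the console was built, but a failure mode frequently cited in hastily generated admin code is broken access control: an endpoint that authenticates the caller yet never checks what that caller is entitled to see. The hypothetical Flask sketch below contrasts the vulnerable pattern with a least-privilege check; the routes, session shape, and data layer are invented for illustration.

```python
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev-only"  # illustrative; never hardcode in production

# Vulnerable pattern: any logged-in user can read any child's transcript
# just by changing the ID in the URL (an insecure direct object reference).
@app.route("/admin/transcripts/<int:child_id>")
def get_transcript_vulnerable(child_id):
    return load_transcript(child_id)  # no ownership or role check

# Safer pattern: verify the caller's role and scope before returning data.
@app.route("/v2/transcripts/<int:child_id>")
def get_transcript_checked(child_id):
    user = session.get("user")
    if user is None or user.get("role") != "support":
        abort(403)
    if child_id not in user.get("assigned_cases", []):
        abort(403)  # least privilege: only explicitly assigned cases
    return load_transcript(child_id)

def load_transcript(child_id):
    # Stand-in for a real data-access layer.
    return {"child_id": child_id, "transcript": "..."}
```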
As awareness of the risks associated with AI toys has surged, most discussion has centered on exposure to inappropriate content and the psychological effects on children; some AI-driven toys, for instance, have reportedly given disturbing or dangerous advice. In contrast, Bondu has worked to build safety protocols into its AI systems, including a $500 bounty for reporting inappropriate responses, and the company asserts that no violations have been recorded since those measures took effect.
Nonetheless, Thacker and Margolis underscore that Bondu's extensive data exposure undercuts those safety efforts. "This is a perfect conflation of safety with security," Thacker remarked, questioning what safety protocols are worth when the underlying user data remains unprotected.
Thacker, who had considered giving AI toys to his own children, reconsidered after seeing the extent of the security vulnerabilities in Bondu's systems. "Do I really want this in my house? No, I don't," he said, a reluctance that reflects the broader fears surrounding privacy violations.
The situation at Bondu maps onto several MITRE ATT&CK tactics, particularly initial access and privilege escalation, underscoring that a multi-layered approach to security is essential. As businesses in the tech sector continue to embrace AI-driven solutions, the conversation around responsible data management and security practices must keep pace in order to protect vulnerable populations, children above all.
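To make that mapping concrete, the snippet below pairs the weaknesses described in this article with well-known ATT&CK entries. The tactic and technique IDs are genuine ATT&CK identifiers; applying them to Bondu is this article's interpretation rather than an official assessment.

```python
# Illustrative mapping of the reported concerns onto MITRE ATT&CK.
# IDs are genuine ATT&CK entries; their application to Bondu is
# interpretive, based on the weaknesses described above.
attack_mapping = [
    {
        "concern": "A single weak employee password guards sensitive data",
        "tactic": "Credential Access (TA0006)",
        "technique": "Brute Force (T1110)",
    },
    {
        "concern": "Broad internal access to children's conversation data",
        "tactic": "Initial Access (TA0001) / Privilege Escalation (TA0004)",
        "technique": "Valid Accounts (T1078)",
    },
]

for item in attack_mapping:
    print(f"{item['technique']:<25} <- {item['concern']}")
```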