New Jersey Takes Legal Action Against Discord Over Child Safety Concerns
Discord, the widely used messaging platform, is under scrutiny following a lawsuit filed by the state of New Jersey alleging that the company has engaged in “deceptive and unconscionable business practices” that jeopardize the safety of its younger users. The suit, brought by the New Jersey Office of the Attorney General, follows a comprehensive investigation that the office says uncovered evidence that Discord’s policies, though presented as protecting children and teenagers, fall short in practice.
Attorney General Matthew Platkin announced the lawsuit, declaring New Jersey the first state to take legal action against Discord. He said the investigation was sparked by two incidents. The first was personal: a family friend discovered that his 10-year-old son had managed to sign up for the platform despite Discord’s registration policy barring children under 13 from creating accounts, raising alarm about the effectiveness of the company’s age verification.
The second catalyst was the tragic mass shooting in Buffalo, New York. The shooter reportedly used Discord to document his ideology and streamed the attack live on the platform. The incident highlighted the dangers associated with the app and prompted closer examination of its content moderation and user safety measures.
In its lawsuit, the New Jersey AG’s office contends that Discord has violated the state’s Consumer Fraud Act by failing to prevent children under 13 from using the platform and inadequately protecting teenagers from sexual exploitation and exposure to violent content. This legal action adds to a growing trend of states pursuing litigation against major social media companies, despite past attempts yielding limited results.
Discord publicly emphasizes its commitment to child and teen safety, prohibiting users under 13 from accessing the service and deploying various safety measures, including algorithmic filters designed to detect inappropriate content. Despite these assertions, the New Jersey Attorney General expressed skepticism regarding the effectiveness of Discord’s safety policies, stating that the platform’s practices “have fallen flat.”
According to the lawsuit, Discord offers users three settings that control whether direct messages are scanned for explicit content: “Keep me safe,” “My friends are nice,” and “Do not scan.” The AG argues that for many teenage users the default is “My friends are nice,” which leaves messages from accepted friends unscanned and may expose younger users to risks from adult interactions. The absence of rigorous age verification compounds these concerns, allowing minors to circumvent the platform’s restrictions.
The lawsuit acknowledges that Discord introduced new content filters in 2023 to combat unwanted sexual messages, but argues the company should have made the most protective option, “Keep me safe,” the default setting for all users. The complaint frames this as a gap in the company’s approach to safeguarding its young user base.
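To make the design choice at issue concrete, here is a minimal sketch of what a safe-by-default settings policy could look like. It is not Discord’s implementation, which is not public; the `DMScanLevel` enum, `Account` class, and `default_scan_level` function are hypothetical names, and only the three setting labels come from the lawsuit.

```python
from dataclasses import dataclass
from enum import Enum


class DMScanLevel(Enum):
    """Hypothetical stand-ins for the three settings named in the complaint."""
    KEEP_ME_SAFE = "keep_me_safe"                # scan all direct messages
    MY_FRIENDS_ARE_NICE = "my_friends_are_nice"  # skip messages from friends
    DO_NOT_SCAN = "do_not_scan"                  # scan nothing


@dataclass
class Account:
    age: int
    requested_level: DMScanLevel | None = None  # user's explicit choice, if any


def default_scan_level(account: Account) -> DMScanLevel:
    """Safe-by-default policy: minors always get the most protective
    setting; adults also start there and must explicitly opt down."""
    if account.age < 18:
        return DMScanLevel.KEEP_ME_SAFE
    return account.requested_level or DMScanLevel.KEEP_ME_SAFE
```

The contrast with the practice the complaint alleges is in the minor branch: under a policy like this, a teenage account can never silently land on a weaker setting.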
From a cybersecurity perspective, the case underscores the importance of robust safety defaults and age verification processes to prevent unauthorized access by underage users. Adversaries in such incidents might gain initial access by exploiting weak, self-attested registration checks, and then persist in contact with minors despite the platform’s safety protocols. As the legal landscape surrounding technology companies continues to evolve, it will be crucial for businesses like Discord to reassess their security measures in response to regulatory pressures and public concerns about user safety.
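For illustration only, the sketch below shows one common hardening for self-attested age gates: remembering rejected attempts so an underage user cannot simply re-enter an older birthdate. Everything here (the `register` function, the in-memory `_rejected_devices` store, the device-fingerprint parameter) is a hypothetical example, not a description of how Discord or any other vendor actually implements age verification.

```python
import hashlib
from datetime import date

MIN_AGE = 13
_rejected_devices: set[str] = set()  # illustration only; a real system needs a persistent store


def _age(birthdate: date, today: date | None = None) -> int:
    """Whole years elapsed since birthdate."""
    today = today or date.today()
    years = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1  # birthday has not occurred yet this year
    return years


def register(birthdate: date, device_fingerprint: str) -> bool:
    """Neutral age gate: once a device self-reports an under-13 birthdate,
    later attempts from the same device are refused, so a child cannot
    simply retry with an older date."""
    device_key = hashlib.sha256(device_fingerprint.encode()).hexdigest()
    if device_key in _rejected_devices:
        return False
    if _age(birthdate) < MIN_AGE:
        _rejected_devices.add(device_key)
        return False
    return True
```

Even with this hardening, device fingerprints are easy to reset, which is part of why self-attested age gates are widely regarded as weak on their own and why regulators are pressing platforms for stronger verification.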