Lawsuit Alleges LinkedIn Misused Private Messages for AI Training


Class Action Lawsuit Alleges LinkedIn Breached Privacy Laws and User Contracts


This week, a California-based LinkedIn user initiated a class action lawsuit against the professional networking platform, alleging violations of privacy laws and contractual obligations. The plaintiff, Alessandro De La Torre, a LinkedIn Premium subscriber, claims that the company allowed third parties to access sensitive user information, including private messages, to train artificial intelligence models.

The lawsuit, filed in federal court, contends that LinkedIn mishandled user data by disclosing "Premium customers' private and confidential communications to third parties to train generative AI models," including Microsoft subsidiaries, a breach that contradicts the enhanced privacy assurances LinkedIn purportedly provides its Premium users.

De La Torre asserts that from July 2021 to September 2024, he used LinkedIn extensively for professional communication on topics such as startup financing and job searches. He argues that exposure of such information poses significant risks to his professional relationships and career prospects. The lawsuit raises the concern that LinkedIn's actions could spread private discussions across other Microsoft products, further embedding personal data in AI systems without user consent.

The controversy surrounding LinkedIn's AI policies intensified last year when the company changed its default privacy settings to allow user data to be used for AI training unless users explicitly opted out. Although the company temporarily disabled this feature, the lawsuit alleges that LinkedIn made additional privacy-compromising policy changes without adequately notifying users, violating its prior assurances about protecting user privacy.

In the filing, De La Torre claims that LinkedIn's failure to adequately inform users of how their data would be used constitutes a breach of the Stored Communications Act and California's Unfair Competition Law, and he is pursuing damages of $1,000 under these statutes. A LinkedIn spokesperson rejected the allegations, calling them "false claims with no merit."

Implications for Privacy and AI Regulation

The growing scrutiny of AI data privacy practices comes amid wider discussions within the tech community about transparency in data management and processing. Industry experts have raised alarms about how companies handle data scraped from the internet for AI training, questioning whether these practices comply with privacy regulations.

Central to the debate is whether organizations are exploiting data without meeting current privacy standards, including implementing protective measures such as anonymization. High-profile AI companies, including OpenAI, have faced investigations into their privacy practices, revealing a sector increasingly pressed to justify its data usage methods.

With regulatory frameworks for AI still unsettled and recent federal efforts stalled, businesses are advised to reassess their data practices. The revocation of executive actions aimed at addressing AI risks points to a fragmented regulatory landscape in which individual states may impose their own varying rules.

Given the sensitive nature of the data involved and increasing public scrutiny, businesses should recognize that unauthorized sharing and use of personal communications constitute major infringements on user privacy. This evolving legal landscape could bring a significant rise in lawsuits over AI practices, underscoring the need for companies to prioritize robust data governance.
