LinkedIn Faces Allegations of Using Private Messages to Train AI

LinkedIn, the professional networking giant owned by Microsoft, is facing legal scrutiny after a new lawsuit alleges that the platform used private messages from its Premium subscribers to train generative AI models—without their consent.

The lawsuit, filed in California on behalf of Alessandro De La Torre and millions of other Premium users, accuses LinkedIn of breaching its contractual obligations and violating U.S. privacy laws.

At the heart of the controversy are LinkedIn’s 2024 policy changes, which permitted user data to be used for AI training. Users in regions with strict privacy protections, such as the UK, EU, and Canada, were automatically excluded, but U.S. users were enrolled by default unless they actively opted out. According to the lawsuit, LinkedIn went even further by incorporating the contents of private InMail messages into this data-sharing practice, even though these messages often contain highly sensitive personal and professional details.

The complaint warns that this could expose private conversations involving “life-altering information about employment, intellectual property, compensation, and other personal matters.” The plaintiffs argue that this directly contradicts the LinkedIn Subscription Agreement (LSA), which explicitly assures Premium users that their confidential information will not be shared with third parties. The lawsuit also contends that LinkedIn failed to adequately notify users of these changes, undermining trust and violating the U.S. Stored Communications Act.

In response, LinkedIn has dismissed the allegations as “false claims with no merit.” Many remain skeptical, however, given the platform’s handling of similar privacy concerns in 2024. In August of that year, LinkedIn introduced a setting that let users opt out of AI training data-sharing, but the sharing itself was enabled by default, raising concerns about informed consent. The company then updated its privacy policy in September, clarifying that opting out would not retroactively remove data already used to train its AI models.

Legal experts suggest this case could set a major precedent for how social media and tech companies manage user data in the AI era. As plaintiff attorney Rafey Balabanian put it, “This lawsuit underscores a growing tension between innovation and privacy,” adding that if LinkedIn’s actions are proven, they represent a “serious breach of trust,” given the sensitive nature of the data involved.

Beyond the courtroom, the case could also have financial and reputational consequences for LinkedIn. Premium subscribers, who pay as much as $169.99 per month for features like InMail and enhanced privacy, may reconsider their memberships if the allegations hold up. Furthermore, the lawsuit highlights the broader issue of corporate transparency regarding AI training data, a topic that has already drawn regulatory scrutiny. Notably, the UK’s Information Commissioner’s Office (ICO) previously pressured LinkedIn to cease using UK user data for AI training, a demand to which the company ultimately agreed.

For LinkedIn users, this lawsuit serves as a stark reminder to review privacy policies and settings closely. The plaintiffs are seeking damages, statutory penalties of $1,000 per affected user, and the deletion of AI models trained on the improperly obtained data. A ruling against LinkedIn could prompt broader industry changes toward greater transparency and accountability in AI-driven data usage. Whether the company’s actions stemmed from oversight or a calculated push to advance AI capabilities, the coming months could be pivotal in shaping the future of user privacy in the digital world.