Lawsuit accuses Perplexity AI of sharing private user conversations with Meta and Google
A lawsuit alleges the startup shared private chats with Meta and Google, undermining its identity as a privacy-focused search alternative.
April 1, 2026

Perplexity AI, a startup that has rapidly risen to prominence by positioning itself as a privacy-focused alternative to traditional search engines, is now at the center of a high-stakes legal battle that threatens to undermine its core brand identity. A proposed class-action lawsuit filed in federal court in San Francisco alleges that the company has been surreptitiously sharing sensitive user data and entire chat histories with the very tech giants it claims to disrupt: Meta Platforms and Alphabet’s Google.[1][2] The legal challenge represents a significant moment of reckoning for the artificial intelligence industry, highlighting a growing tension between the promise of private, "ad-free" AI assistants and the technical realities of web-based tracking and data monetization.[1]
At the heart of the complaint is the allegation that Perplexity embedded sophisticated and undetectable tracking software into its search engine code.[1] According to the filing, these trackers automatically transmit the contents of user conversations—including highly personal queries—to third parties without explicit consent.[1][2][3] The plaintiff, a Utah resident identified in court documents as John Doe, claims that even when users specifically enable Perplexity’s "Incognito" mode, their personal data is still harvested and routed to Meta and Google.[1][2][3][4][5] The lawsuit details how the plaintiff shared intimate financial information with the AI, including family tax obligations, investment strategies, and portfolio details, under the assumption that the platform provided a secure and private environment.[4] Instead, the suit argues, this data was funneled into the advertising ecosystems of the industry’s largest players, allowing them to exploit the information for targeted marketing and potentially resell it to further third parties.[3][4]
The technical mechanism described in the lawsuit involves the use of tracking pixels and analytics tools, which are common on traditional websites but take on a more invasive character within the context of a conversational AI. While standard search engines might track keywords or navigation patterns, the lawsuit contends that Perplexity’s implementation allows for the extraction of full dialogue strings. This creates a bridge between the user’s private interactions with an AI "answer engine" and the massive data-gathering machines of Meta and Google.[3][1][6][2][4] The inclusion of the two tech giants as defendants alongside Perplexity underscores the gravity of the accusations, with the complaint alleging that all three companies have violated federal and state computer privacy and fraud laws.[4] Legal experts point out that the suit leans heavily on the California Invasion of Privacy Act, a statute that has become a powerful tool for class-action litigants because it allows for significant statutory damages without the need to prove specific financial harm.
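To make the mechanism concrete, the sketch below shows how a tracking pixel can carry far more than a page view: because a pixel is just an image request, any text placed in its query string—including a full chat message—is transmitted to, and logged by, the third-party server. This is an illustrative reconstruction of the general technique, not code from the filing; the endpoint and function names (`ANALYTICS_ENDPOINT`, `buildPixelUrl`) are hypothetical.

```javascript
// Hypothetical sketch of how a tracking pixel can exfiltrate chat text.
// All names here are illustrative, not taken from the lawsuit.
const ANALYTICS_ENDPOINT = "https://analytics.example.com/collect";

function buildPixelUrl(endpoint, event) {
  // A pixel is a 1x1 image request; the payload rides in the query string.
  const params = new URLSearchParams({
    ev: event.name,
    // The alleged problem: the full dialogue string, not just a page URL,
    // ends up in the request the third party receives and logs.
    q: event.conversationText,
    uid: event.anonymousId,
  });
  return `${endpoint}?${params.toString()}`;
}

// In a browser, firing the pixel is one line: new Image().src = url;
const url = buildPixelUrl(ANALYTICS_ENDPOINT, {
  name: "chat_message",
  conversationText: "How should I structure my family's tax obligations?",
  anonymousId: "u-12345",
});
console.log(url);
```

The point of the sketch is that nothing about this request looks unusual at the network level—it is the same pattern ordinary websites use for analytics—which is why the suit characterizes the practice as "undetectable" to users.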
The litigation strikes at a particularly sensitive time for Perplexity, which has marketed itself as the "Google killer" by offering a cleaner, more direct way to find information without the clutter of sponsored links and intrusive tracking. By allegedly utilizing the infrastructure of Google and Meta to manage its own user metrics or advertising goals, Perplexity faces a crisis of credibility.[3] The irony of a "privacy-first" company feeding data into the world’s largest advertising networks is a central theme of the legal argument. For users, the expectation of privacy in an AI chat is often higher than in a traditional search bar; people tend to treat AI agents as confidants or advisors, leading them to disclose information they might otherwise keep private. If the allegations are proven true, they would suggest that the safeguards promised by the new generation of AI tools are secondary to the data-sharing practices that have long defined the broader internet economy.
This lawsuit is not an isolated incident for the startup, but rather the latest in a series of legal challenges that suggest a pattern of aggressive data practices. Perplexity is already embroiled in disputes with major publishers, including News Corp and The New York Times, over allegations that it "scrapes" and "launders" copyrighted content to fuel its responses. Furthermore, Amazon has recently targeted the company with legal action regarding its agentic shopping features, accusing Perplexity of bypassing security measures to access customer accounts.[4][3] When viewed together, these cases paint a picture of a company willing to push the boundaries of data acquisition in its quest to compete with established giants. The convergence of copyright issues and consumer privacy violations indicates that the AI industry's "move fast and break things" era is meeting a formidable wall of litigation and regulatory scrutiny.
The implications for the broader AI sector are profound. If a court finds that the use of standard tracking pixels in an AI interface constitutes an invasion of privacy, it could force a fundamental redesign of how AI applications are built and monetized. Developers may be forced to move away from "free" models supported by invisible data exchanges and toward more transparent, subscription-based architectures. It also raises questions for Meta and Google, which must defend their roles as the recipients of this data. Meta has historically argued that it is the responsibility of third-party developers to ensure they do not send sensitive information through its tracking tools, but the lawsuit challenges the idea that these platforms can remain passive beneficiaries of such "surreptitious" transfers.
Furthermore, the case highlights the limitations of "Incognito" or "private" modes in the modern web environment. The lawsuit alleges that these settings are effectively cosmetic, providing a false sense of security while the underlying tracking software continues to operate. This mirrors previous legal battles fought by Google regarding its own Chrome browser, suggesting that the industry has yet to resolve the fundamental conflict between user expectations of anonymity and the persistent need for data to power AI models and advertising engines. As AI agents become more integrated into daily life—handling everything from healthcare inquiries to financial planning—the legal definition of what constitutes a "private conversation" will likely be shaped by the outcome of this case.
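The "cosmetic" private mode the suit describes can be illustrated with a minimal sketch: the incognito flag gates only local history storage, while the third-party beacon fires regardless. This is an assumed pattern for illustration, not Perplexity's actual code; `state`, `sendBeacon`, and `handleUserQuery` are hypothetical names.

```javascript
// Minimal sketch of a "cosmetic" incognito mode, assuming the pattern the
// suit alleges: the flag suppresses local history, not the tracking call.
const state = { incognito: true, history: [], beaconsSent: [] };

function sendBeacon(payload) {
  // Stand-in for an analytics network request made to a third party.
  state.beaconsSent.push(payload);
}

function handleUserQuery(query) {
  if (!state.incognito) {
    state.history.push(query); // incognito skips only this local write...
  }
  sendBeacon({ q: query });    // ...while the tracker fires regardless.
}

handleUserQuery("private medical question");
// History stays empty, yet one beacon still left the device.
console.log(state.history.length, state.beaconsSent.length);
```

Under this pattern the user-visible promise (no saved history) is kept, while the privacy-relevant behavior (data leaving the device) is unchanged—precisely the gap between expectation and implementation that the complaint targets.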
Perplexity has responded to the allegations by stating that it has not been officially served with the lawsuit and cannot verify the claims, a common initial defense in high-profile class actions. However, the reputational damage may already be taking root. For a company valued in the billions and seeking to lead the next era of information retrieval, the accusation that it acts as a funnel for Big Tech’s data appetites is a direct threat to its market position. The outcome will serve as a bellwether for the industry, determining whether AI startups can truly offer a different path than their predecessors or if they are ultimately bound by the same data-extractive models that defined the previous decade of the internet. As the legal process moves forward in San Francisco, the entire AI ecosystem will be watching to see if the "answer engine" can provide a compelling defense for its own internal practices.