A Utah man filed a class-action lawsuit against Perplexity AI on Tuesday, alleging the AI search engine has been secretly funneling users’ private conversations to Meta and Google through hidden tracking software. The complaint, filed in the U.S. District Court for the Northern District of California, names Perplexity, Meta, and Alphabet as defendants.
The core accusation: the moment you log into Perplexity, tracking software loads on your device and starts transmitting everything you type—including the full text of your queries—to two of the largest advertising companies on earth. Turning on Incognito mode doesn’t help.
What the Complaint Alleges
According to the filing (Doe v. Perplexity AI Inc., 3:26-cv-02803), Perplexity embedded “undetectable” tracking software into its search engine that automatically transmits user conversations to Meta, Google, and other third parties. The plaintiff—identified only as John Doe—says he shared sensitive financial information with Perplexity’s chatbot, including details about family finances, tax obligations, and investment strategies, believing those conversations were private.
They weren’t. The complaint alleges that entire prompts were shared with Meta and Google via full-string URLs intercepted in the plaintiff’s browser. Users’ email addresses were also shared if they’d created a free account.
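To see why a “full-string URL” matters, here is a minimal sketch of how an entire prompt can ride along in a tracking request’s query parameters. The URL, hostname, and parameter names below are illustrative assumptions, not taken from the complaint—the point is only that anything embedded in such a URL is fully readable by whoever receives the request.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical tracking request whose query string carries the user's
# entire prompt. The domain and parameter names are made up for
# illustration; the complaint does not publish the actual URL format.
tracking_url = (
    "https://tracker.example.com/collect"
    "?event=search&q=how+do+I+restructure+my+family+trust+to+reduce+taxes"
)

# Anyone receiving this request can trivially decode the full prompt.
params = parse_qs(urlparse(tracking_url).query)
print(params["q"][0])  # prints the decoded prompt in full
```

Nothing about this requires breaking encryption or reading files: if the prompt is in the URL, the recipient has it.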
“No reasonable person would have expected that Perplexity would share complete transcripts of their conversations…with companies like Meta and Google,” the complaint states.
Server-Side Tracking Bypasses Your Privacy Settings
The technical mechanism matters here. The lawsuit targets what appear to be server-to-server API integrations—the kind of tracking that operates independently of your browser. Cookie consent banners don’t stop it. Private browsing doesn’t stop it. Browser extensions and ad blockers can’t catch it because the data never flows through your browser in a way those tools can intercept.
This is the growing problem with server-side tracking: it operates entirely outside the traditional privacy protections that users rely on. You can block cookies, use Incognito, install uBlock Origin—and none of it matters if the company is sending your data from their servers directly to Meta’s and Google’s servers.
The CCPA Problem
The lawsuit cites violations of the California Consumer Privacy Act (CCPA), the California Electronic Communications Privacy Act (CalECPA), and federal privacy laws. Under CCPA, the legal exposure is significant:
- Right to Know (§1798.100): Perplexity allegedly failed to disclose third-party data transmission at the point of collection
- Sale/Sharing Prohibition (§1798.120): Transmitting query data to advertising platforms may count as prohibited “sharing” without an opt-out mechanism
- Sensitive Personal Information (§1798.121): AI chatbot conversations routinely contain health, financial, and personal data—all categories that trigger heightened use restrictions
- Private Right of Action (§1798.150): Enables statutory damages of $100 to $750 per consumer, or actual damages if greater
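The §1798.150 numbers compound quickly. The class size below is purely illustrative—the filing does not state one—but the arithmetic shows why statutory damages, not actual damages, drive the exposure:

```python
# Back-of-the-envelope statutory exposure under CCPA §1798.150:
# $100 to $750 per consumer per incident. The class size is an
# illustrative assumption, not a figure from the complaint.
per_consumer_min, per_consumer_max = 100, 750
illustrative_class_size = 1_000_000

low = per_consumer_min * illustrative_class_size
high = per_consumer_max * illustrative_class_size
print(f"${low:,} to ${high:,}")  # $100,000,000 to $750,000,000
```

Because the statute sets a floor per consumer, plaintiffs don’t need to prove each user lost money—class size alone does the work.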
The CalECPA angle is particularly aggressive. California’s wiretapping law may treat the simultaneous transmission of user queries to undisclosed third-party servers as “interception” of communications—a framing that could expand liability well beyond CCPA’s scope.
People Tell AI Chatbots Everything
This case highlights a problem that goes beyond one company. People treat AI chatbots like confidential advisors. They ask about medical symptoms, financial trouble, relationship problems, legal questions, and mental health concerns. The intimacy of the interaction—a private text box where you type and get answers—creates an expectation of confidentiality that may not match reality.
The complaint notes the plaintiff shared details about his tax situation, investment portfolio, and family finances. That’s not unusual. Millions of users do the same every day across every major AI chatbot. The question this lawsuit forces is: where does that data actually go?
Perplexity’s Growing Legal Problems
This isn’t Perplexity’s first courtroom appearance in 2026. Amazon won a court order in March blocking Perplexity’s Comet AI shopping agent from accessing Amazon’s marketplace. The judge found “strong evidence” that Perplexity accessed Amazon’s site without authorization, with the company allegedly taking steps to conceal its AI agents’ activities.
A pattern is forming: a fast-growing startup that moves aggressively on data access while keeping users and partners in the dark about what it’s doing with the information it collects.
Perplexity’s official privacy policy states the company “does not sell, trade, or share your personal information with third parties, except as outlined in our policy.” The lawsuit alleges the company’s actual practices tell a different story.
Spokesperson Jesse Dwyer told reporters that Perplexity “hasn’t been served and cannot verify the lawsuit’s claims.” Meta pointed to its policies prohibiting advertisers from sending sensitive data through Facebook’s systems. Google declined to comment.
What You Can Do
If you use Perplexity AI:
- Assume your conversations aren’t private. This applies to every AI chatbot, not just Perplexity. Don’t share information you’d be uncomfortable seeing in an ad targeting profile.
- Don’t trust Incognito mode for AI tools. If data is being transmitted server-side, browser privacy settings are irrelevant.
- Check what trackers are running. Browser extensions like uBlock Origin and Privacy Badger can catch client-side trackers, though they won’t stop server-side data sharing.
- Use throwaway accounts. If you must use AI search tools, avoid linking your real email address.
- Consider alternatives. Self-hosted search tools and local AI models don’t have this problem—your queries stay on your machine. We’ve covered several local AI options that give you the same capabilities without the tracking.
The case is Doe v. Perplexity AI Inc., 3:26-cv-02803, U.S. District Court for the Northern District of California. Class-action lawsuits of this type typically take 12 to 36 months to resolve, but the discovery process could reveal exactly how much user data Perplexity has been sharing—and with whom.