Fake VPNs Caught Stealing Eight Million Users' Secret AI Chats
Eight million users affected: 'Privacy' extensions covertly stole raw, entire conversations from ChatGPT, Claude, and Gemini.
December 29, 2025

Security researchers have exposed a significant new threat to user privacy in the era of generative artificial intelligence: a network of highly popular browser extensions, many explicitly marketed for privacy and security, that were secretly collecting and selling users' complete conversations with major AI chatbots. A report by the security firm Koi details how the primary extension, Urban VPN Proxy, along with seven other extensions from the same publisher, shipped hidden, hardcoded functionality designed to intercept and exfiltrate sensitive chat data, affecting an estimated eight million users across the Google Chrome and Microsoft Edge browsers.[1][2][3] This covert operation turned intimate personal and professional dialogues with AI assistants into a commercial commodity, highlighting the precarious state of user data within the browser ecosystem.[2][4]
The flagship extension, Urban VPN Proxy, which boasted over six million users on Chrome and a prominent “Featured” badge from Google, sat at the center of the data-siphoning scheme.[1][3] The researchers found that the harvesting mechanism operated independently of the extension's advertised virtual private network (VPN) functionality: the script ran continuously in the background whether or not the VPN was active.[2][5] The malicious functionality, introduced in an update to version 5.5.0 in July, silently injected a dedicated “executor” script into the webpages of targeted AI platforms.[3][6] This script overrode fundamental browser network functions, specifically the fetch and XMLHttpRequest APIs, enabling it to capture every user prompt, every AI response, timestamps, and session identifiers before the content was ever displayed on screen.[6][5] This aggressive man-in-the-prompt technique allowed the extension to steal entire, raw chat transcripts.[6][5] The collected data was then compressed and transmitted to analytics servers controlled by the developer's parent company, Urban Cyber Security Inc., which is affiliated with the data broker BiScience.[1][4]
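To make the mechanism concrete, the fetch-override technique described above can be sketched in a few lines. This is an illustrative reconstruction, not the extensions' actual code: the endpoint, field names, and the local `capturedExchanges` buffer are all assumptions, and a real harvester would additionally wrap XMLHttpRequest and forward the buffer to a remote server.

```javascript
// Sketch of the "man-in-the-prompt" technique: a page-injected script
// replaces the browser's fetch API so every request body (the user's
// prompt) and response body (the AI's reply) can be copied before the
// page ever renders them. All names here are illustrative assumptions.

const capturedExchanges = []; // stand-in for the extension's staging buffer

// Stub network layer so this sketch runs without a real AI backend.
globalThis.fetch = async (_input, _init) =>
  new Response('{"reply":"hello from the model"}');

function installFetchInterceptor() {
  const originalFetch = globalThis.fetch;
  globalThis.fetch = async function (input, init) {
    const url = typeof input === "string" ? input : input.url;
    const requestBody = init && init.body ? String(init.body) : null;
    const response = await originalFetch(input, init);
    // Clone the response so the page still receives an unread body stream.
    const responseText = await response.clone().text();
    capturedExchanges.push({
      url,
      prompt: requestBody,   // the user's message, pre-display
      reply: responseText,   // the model's answer, pre-display
      timestamp: Date.now(), // session metadata, per the report
    });
    return response; // the page sees a completely normal exchange
  };
}
```

Because the wrapper returns the original response untouched, the page behaves identically with or without the interceptor installed, which is why the collection was invisible to users.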
The surveillance targeted a wide array of the most popular AI platforms currently available to the public.[3][5] Researchers confirmed that dedicated scripts existed to capture conversations from at least ten major generative AI services, including industry leaders such as ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok, and Meta AI.[2][3][6][5] The developers of these extensions strategically positioned their products under the guise of privacy and security, including VPNs, ad blockers, and general browser security tools, to gain the trust of users.[2][6] Across both Chrome and Edge, the affected extensions, including Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard, and Urban Ad Blocker, put a combined user base of more than eight million at risk.[2][6] The nature of the compromised information is highly concerning, as users often confide deeply personal and sensitive details to AI assistants that they would never enter into a standard search engine.[2] This includes medical inquiries, financial specifics, confidential proprietary code, and complex personal or work-related dilemmas, all of which the researchers concluded should be assumed to have been captured and shared with third parties since the harvesting began.[1][2]
A key aspect of the deception was the convoluted and contradictory nature of the publisher's data disclosure.[4][6] While the Chrome Web Store listing for Urban VPN Proxy claimed that user data was not sold to third parties outside of approved uses, the company's privacy policy contained a passage explicitly stating that AI prompts would be collected and disclosed for “marketing analytics purposes.”[4][7] Furthermore, the extension featured an “AI protection” setting that was ostensibly meant to warn users about sharing sensitive data; however, researchers found the harvesting continued even when this protective feature was disabled.[1][8] This presentation framed data monitoring as a security benefit, while the underlying code ensured the data was captured and commercialized.[4][6] The data broker component, BiScience, has been scrutinized by security researchers in the past for large-scale browsing data collection through various software development kits and affiliate programs, suggesting a systematic approach to monetizing user activity.[1][4][9] This practice represents a direct conflict with the policies of app marketplaces, which generally prohibit the sale of user data to data brokers.[4] The widespread approval and “Featured” status of many of the affected extensions on the Chrome Web Store and Microsoft Edge Add-ons marketplace have raised serious questions about the effectiveness of the marketplaces' vetting processes for extensions that handle sensitive user data.[2][5]
The implications of this breach extend far beyond individual privacy concerns, posing a profound risk to the nascent enterprise AI industry. As employees increasingly use generative AI tools to assist with tasks ranging from drafting internal communications to debugging proprietary code, the exfiltration of these chat logs creates a severe vector for corporate espionage and the leakage of intellectual property.[10][11] The incident underscores the critical need for organizations to implement stringent policies governing the use of browser extensions on corporate devices and to employ enterprise browser policies to allowlist only vetted, essential software.[2] The exposure of an attack vector that capitalizes on the trust placed in tools designed to enhance privacy serves as a sobering reminder that a high rating or a “Featured” badge on a major app store is not sufficient to guarantee an extension's legitimate behavior.[2] For the millions of affected users, the sole remedy to stop the conversation harvesting was the complete and immediate uninstallation of the implicated extensions.[1][3] The incident marks a watershed moment in the intersection of generative AI and cybersecurity, demonstrating that the new frontier of personal and corporate data to be exploited is the intimate conversation taking place between users and their AI assistants.[12][13]
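For organizations weighing the allowlisting approach described above, Chrome's documented enterprise policies can block all extensions and then permit only vetted ones. The policy names below (`ExtensionInstallBlocklist`, `ExtensionInstallAllowlist`) are Chrome's real managed-policy settings; the 32-character extension ID shown is a placeholder, and on Linux such a file would typically live under `/etc/opt/chrome/policies/managed/`.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop"
  ]
}
```

With the wildcard blocklist in place, users cannot install any extension that IT has not explicitly added to the allowlist, closing off the exact distribution channel these harvesting extensions relied on.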