OpenAI’s Privacy Meltdown: The $50 Million “Address Leak” Scandal
Gizmodo Report: The ChatGPT Privacy & Doxxing Crisis
On May 13, 2026, Gizmodo published a chilling account of multiple users who discovered that their sensitive personal information (including home addresses, private phone numbers, and in one case a garage door code) was being served to unrelated users by ChatGPT. This isn’t a one-off mistake; security researchers are describing it as a “Structural Privacy Failure.”
How the “Leak” Happens: Database Bleed
Security researchers interviewed by Gizmodo point to a phenomenon called “Cross-Context Leakage.”
- The Glitch: During “Continuous Training” or “RLHF” (Reinforcement Learning from Human Feedback), specific user inputs are sometimes inadvertently “memorized” verbatim rather than generalized into the model’s weights.
- The Trigger: A user in one part of the world asks for “examples of addresses in Mumbai,” and the AI, trying to be helpful, retrieves a real address it recently processed from another user’s prompt or a “memory” file.
- The Scale: While OpenAI claims these are “isolated incidents,” Gizmodo found over 40 verified cases in a single week where users were able to “social engineer” the AI into revealing PII of other subscribers.
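The memorization failure mode described above can be illustrated with a toy model. This is a hypothetical sketch, not OpenAI’s architecture: we “train” a simple word-level bigram table on a stream that happens to contain one user’s address, then show that an unrelated, generic prompt greedily completes to the memorized string (the “canary extraction” approach used in memorization research).

```python
from collections import Counter, defaultdict

class BigramModel:
    """Toy bigram language model. Illustrative only: real LLMs are
    transformers, but the verbatim-memorization dynamic is analogous."""

    def __init__(self):
        self.table = defaultdict(Counter)

    def train(self, text):
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            self.table[prev][nxt] += 1

    def complete(self, prompt, max_tokens=4):
        out = prompt.split()
        for _ in range(max_tokens):
            nexts = self.table.get(out[-1])
            if not nexts:
                break
            out.append(nexts.most_common(1)[0][0])  # greedy decoding
        return " ".join(out)

model = BigramModel()
# One user's prompt containing PII ends up in the training stream...
model.train("my address is 123 Marine Drive Mumbai thanks")
# ...alongside ordinary, non-sensitive text.
model.train("addresses usually contain a number and a street name")

# An unrelated user's generic prompt now extracts the memorized PII.
print(model.complete("the address is"))
# → the address is 123 Marine Drive Mumbai
```

The address here is a planted canary, but the mechanism matches the report’s description: the model stored a specific input instead of a generalization, so any prompt that reaches the right context regurgitates it.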
OpenAI’s Response & The “Memory” Problem
OpenAI has faced intense scrutiny over its “Personalization” features launched in early 2026.
- The Memory Feature: The AI is designed to “remember” facts about you to be a better assistant. However, the Gizmodo report suggests that the firewall between your personal memory store and the global model’s weights has become porous.
- Emergency Patch: Following the report, OpenAI temporarily disabled the “Memory” and “Personalization” toggles for users in the EU and California in response to emergency GDPR and CCPA inquiries.
- The $50M Fine Risk: Legal experts suggest that if these leaks prove systemic, OpenAI could face fines exceeding $50 million for violating data protection laws.
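The “firewall” the report describes amounts to a guarantee that per-user memory entries never reach the global training corpus. A minimal sketch of that filter, with entirely hypothetical record fields and function names (this is not OpenAI’s pipeline):

```python
# Hypothetical training-data filter: drop anything sourced from a user's
# "memory" store, and anything the user has not opted in to training on.
def build_training_batch(records):
    return [
        r["text"]
        for r in records
        if r.get("opt_in_training") and r.get("source") != "memory"
    ]

records = [
    {"text": "How do I sort a list in Python?", "source": "chat", "opt_in_training": True},
    {"text": "Garage code is 4921", "source": "memory", "opt_in_training": True},
    {"text": "Translate hello to French", "source": "chat", "opt_in_training": False},
]
print(build_training_batch(records))
# → ['How do I sort a list in Python?']
```

A “porous” firewall, in this framing, is any path where memory-tagged records reach the training batch without passing a filter like this one.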
How to Protect Your Data Right Now
According to the Gizmodo guide, you should take these steps immediately:
- Disable “Improve the Model”: Go to Settings > Data Controls and turn off the option that allows OpenAI to train on your chats.
- Clear Your “Memory”: Regularly review the “Memory” tab in your settings and delete any entries containing specific locations, phone numbers, or family names.
- Use Temporary Chat: For sensitive tasks, use “Temporary Chat” mode, which does not save history or use your data for training.
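Beyond the settings above, you can scrub obvious PII from a prompt before it ever leaves your machine. Here is a minimal regex-based redactor; the three patterns are illustrative examples, and real PII detection needs far broader coverage than this:

```python
import re

# Simplistic, illustrative PII patterns. Production redaction should use
# dedicated tooling, not three regexes.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Call me at 555-867-5309 or mail jane@example.com"))
# → Call me at [PHONE REDACTED] or mail [EMAIL REDACTED]
```

Running a pass like this locally means that even if a prompt is later memorized or logged, the placeholders are what get stored, not the underlying numbers and addresses.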











