On July 31, 2025, "private" ChatGPT conversations went very much public on Google search results pages (SERPs).
Any chat that had previously been shared via a link (similar to how you can share a Google Doc link with someone) wound up public on the internet and indexed in Google Search. In about five minutes of searching, I came across:
- Some pitch deck outlines
- Someone's movie script idea
- Personality analysis, including full names of users
- What someone was dreaming about last night
- Content about mood and emotions
- An analysis of some strangely old financial reports (if they were running a test using outdated information, well done, internet stranger!)
OpenAI and Google have since removed these chats from SERPs, and if OpenAI did the right thing, it has permanently blocked these shared chats from appearing in search indexes. (The company has also removed the specific link-sharing option that made chats publicly searchable. While a small warning noted this could happen when sharing a chat via link, I expect most people never saw it.)
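For context on how a site can block pages from search indexing: the standard web mechanisms are an `X-Robots-Tag: noindex` response header or a `<meta name="robots" content="noindex">` tag in the page. These are real, documented web conventions, but whether OpenAI uses exactly these is an assumption on my part. A minimal sketch of the header check:

```python
# Sketch of the standard "noindex" signal sites use to keep pages out of
# search results. These mechanisms are real web conventions; whether OpenAI
# uses them specifically is an assumption, not something the company has said.

# 1. As an HTTP response header:
NOINDEX_HEADER = ("X-Robots-Tag", "noindex")
# 2. As an HTML meta tag inside the page:
NOINDEX_META = '<meta name="robots" content="noindex">'

def is_indexable(headers: dict) -> bool:
    """Return False if the response headers forbid search indexing."""
    return "noindex" not in headers.get("X-Robots-Tag", "")

print(is_indexable({"X-Robots-Tag": "noindex"}))  # False
print(is_indexable({}))                           # True
```

A well-behaved crawler like Googlebot honors these signals, which is why adding them (plus requesting removal from Google) makes previously indexed chats disappear from SERPs.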
But that doesn't mean your chats are private. NONE of this is truly private, even when you have "improve the model for everyone" turned off or ChatGPT says "we don't use the contents of this chat to train our model."
There are two things to know about your chats with an AI:
- If your data is used to train or "improve" the model, the algorithm that powers the AI processes your chat data and may reuse elements of it in future outputs. It could pick up bits of your tone, your style, and even the exact words you've written, which is one of the reasons creative professionals are so concerned about AI and copyright.
- Your chats are not encrypted. Encryption scrambles the contents of a file so that it's unreadable to anyone who doesn't have the "key" to unlock it. (A lot of this happens silently between approved devices: iMessages are encrypted between iPhones, for example, and you never even notice.) ChatGPT conversations are essentially plain-text files sitting on one of OpenAI's servers. Even if OpenAI doesn't feed a chat into its AI training pipeline, anyone with access to the server could technically find and read it. Or, you know, accidentally push it out to Google search results for ✨everyone✨ to read!
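To make the plaintext-versus-encrypted distinction concrete, here's a toy sketch. The XOR "cipher" below is deliberately simplistic and not real-world cryptography; it only illustrates the point that encrypted bytes at rest are gibberish without the key, while plaintext is readable by anyone who can reach the file:

```python
# Toy illustration of plaintext vs. encrypted storage. This XOR one-time pad
# is NOT production crypto; it just shows that stored ciphertext is unreadable
# without the key. (Nothing here reflects OpenAI's actual storage internals.)
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the corresponding key byte; applying it twice
    # with the same key recovers the original data.
    return bytes(b ^ k for b, k in zip(data, key))

chat = b"My secret business plan"

# Plaintext at rest: anyone with server access reads it directly.
print(chat.decode())

# Encrypted at rest: a random key as long as the message (a one-time pad).
key = secrets.token_bytes(len(chat))
stored = xor_bytes(chat, key)       # what sits on disk: scrambled bytes
recovered = xor_bytes(stored, key)  # only the key holder can reverse it
assert recovered == chat
```

This is why end-to-end encryption matters: the server only ever holds the scrambled version, so a leak of the stored file reveals nothing readable.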
How to tweak your ChatGPT settings to make them a little safer
If you want to be just a little safer
- Open up your account settings
- Click on "data controls"
- Select "improve the model for everyone" and turn it off
- Click "manage" next to "shared links" and delete any shared chats
- Select the "security" section of your settings
- Turn on multi-factor authentication, which makes it harder (though not impossible) for someone else to sign into your account without you knowing
If you want to balance convenience and safety
- Do everything in the previous section
- Open up your account settings
- Click on "personalization" and do the following to reduce what ChatGPT stores about you:
- Toggle "reference saved memories" off
- Toggle "reference chat history" off
- Click on "manage memories" and clear your memories
If you want to prioritize safety over convenience
- Do everything in the previous two sections
- Open your account settings
- Click on "connected apps" and disconnect every connected app. This reduces the amount of extra information ChatGPT has about you and your work or personal life.
- Pay for a ChatGPT account. While it's not foolproof, paying for software generally makes the company a little less likely to use your data for other monetization purposes. No corporation is giving you a free service out of the goodness of its heart.
If you say "screw it, I want to be as safe as possible"
...Stop using ChatGPT. Or at the very least, stop using it for anything related to your personal life and business intellectual property.
I'm sorry. I know that sounds like a flippant answer, but it's not. Neither the free nor paid versions of ChatGPT are truly private. It's up to everyone to create their own threat model, but if you really want to be sure that your data won't end up in the wrong hands, the best way is to stop chatting with the AI.