Elon Musk’s artificial intelligence (AI) chatbot, Grok, has made users’ conversations publicly accessible without their knowledge, raising significant privacy concerns.

A Google search on Thursday revealed that nearly 300,000 Grok conversations had been indexed, prompting experts to describe AI chatbots as a “privacy disaster in progress.”

When Grok users choose to share a transcript of their conversation, a unique link is generated. That sharing function, however, also appears to have made the chats searchable online, exposing them to the public.

Forbes, which initially reported the story, found that more than 370,000 user conversations had been indexed by Google. Transcripts examined by the BBC included users asking Grok to create secure passwords, draw up meal plans for weight loss, and answer detailed medical questions. Some indexed chats showed users testing the limits of what the chatbot would say or do; in one instance, Grok provided detailed instructions on how to make a Class A drug in a lab.

This is not the first time AI chatbot conversations have been shared more widely than users expected. OpenAI recently faced a backlash when shared ChatGPT conversations began appearing in search results. A spokesperson said the company had been “testing ways to make it easier to share helpful conversations” while keeping users in control, emphasizing that user chats were private by default.

Earlier this year, Meta drew similar criticism for publishing users’ conversations with its chatbot, Meta AI, in a public “discover” feed on its app.

While users’ account details may be anonymized in shared transcripts, their prompts can still contain sensitive personal information, and experts are increasingly alarmed by the privacy risks AI chatbots pose.
“AI chatbots are a privacy disaster in progress,” said Prof. Luc Rocher, an associate professor at the Oxford Internet Institute. He noted that leaked conversations can reveal sensitive information, ranging from full names and locations to details about mental health, business operations, or relationships. “Once leaked online, these conversations will stay there forever,” he added.
Carissa Véliz, an associate professor in philosophy at Oxford University’s Institute for Ethics in AI, echoed these concerns, stating that users should be informed if shared chats will appear in search results. “Our technology doesn’t even tell us what it’s doing with our data, and that’s a problem,” she said.
