Subj : Meta patches worrying security bug which could have exposed user
To   : All
From : TechnologyDaily
Date : Wed Jul 16 2025 12:30:08

Meta patches worrying security bug which could have exposed user AI
prompts and responses - and pays the bug hunter $10,000

Date: Wed, 16 Jul 2025 11:18:00 +0000

Description: There was a way to read other people's prompts and Meta AI
responses, expert warns.

FULL STORY
======================================================================

- Meta AI was assigning unique identifiers to prompts and responses
- The servers were not checking who had access rights to these identifiers
- The vulnerability was fixed in late January 2025

A bug which could have exposed users' prompts and AI responses on Meta's
artificial intelligence platform has been patched.

The bug stemmed from the way Meta AI assigned identifiers to both prompts
and responses. When a logged-in user edits a previous prompt to get a
different response, Meta assigns both the prompt and the response a unique
identifier. By changing that number, an attacker could make Meta's servers
return someone else's queries and results.

No abuse so far

The bug was discovered by security researcher and AppSecure founder
Sandeep Hodkasia in late December 2024. He reported it to Meta, which
deployed a fix on January 24, 2025, and paid out a $10,000 bounty for his
troubles.

Hodkasia said the prompt numbers Meta's servers were generating were easy
to guess, but there was no evidence that any threat actor had thought of
this before it was addressed. In other words, Meta's servers were not
checking whether the requesting user was actually authorized to view the
contents.
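The flaw described above is the classic insecure direct object reference
(IDOR) pattern: the server hands back whatever record matches a guessable
identifier without verifying ownership. A minimal sketch of the vulnerable
lookup and the kind of authorization check that fixes it (hypothetical data
and function names, not Meta's actual code):

```python
# Toy prompt store with sequential, easy-to-guess IDs (assumption for
# illustration; the article only says the real IDs were guessable).
PROMPTS = {
    1001: {"owner": "alice", "text": "draft my contract"},
    1002: {"owner": "bob", "text": "summarize this report"},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    # IDOR: returns the record to anyone who supplies a valid ID,
    # ignoring who is actually asking.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id, requesting_user):
    # Server-side authorization: only the record's owner may read it.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        return None  # in a real API this would be a 403 or 404 response
    return record
```

With the vulnerable version, "bob" can read Alice's prompt simply by
incrementing the ID; the fixed version returns nothing unless the caller
owns the record.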
This is problematic in a number of ways, the most obvious one being that
many people share sensitive information with chatbots these days. Business
documents, contracts, reports, and personal information all get uploaded
to LLMs every day, and in many cases people use AI tools as
psychotherapists, sharing intimate life details and private revelations.

This information can be abused, among other things, in highly customized
phishing attacks that could lead to infostealer deployment, identity
theft, or even ransomware. For example, if a threat actor knows that a
person was prompting the AI for cheap VPN solutions, they could send them
an email offering a great, cost-effective product that is in fact nothing
more than a backdoor.

Via TechCrunch

======================================================================
Link to news story:
https://www.techradar.com/pro/security/bug-that-would-expose-user-ai-prompts-and-responses-patched-bug-hunter-paid-usd10-000-by-meta

--- Mystic BBS v1.12 A47 (Linux/64)
 * Origin: tqwNet Technology News (1337:1/100)