Subj : Proton Launches Hallucinatory AI Chatbot
To   : All
From : LundukeJournal
Date : Fri Jul 25 2025 21:15:08

Proton Launches Hallucinatory AI Chatbot
Date: Fri, 25 Jul 2025 20:03:44 GMT
Description: Lumo, the chatbot on mushrooms, may respect your privacy; it just doesn't respect reality.

FULL STORY
======================================================================

Proton, the Swiss company behind Proton VPN & Proton Mail, apparently was feeling very left out of the A.I. Craze (tm) and has decided to launch their own AI chatbot, dubbed Lumo. And it is possibly even more hallucinatory than the other AI chatbots. And that's saying something.

Lumo, "the AI that respects your privacy," boasts that the company keeps no logs and has zero-access encryption. Since they offer a few free queries without creating an account, I decided to take it for a spin.

The results were a bit like talking to a schizophrenic on mushrooms.

Lumo's Grasp on History

First, I asked it a series of simple historical, nerdy questions. Easy stuff that any LLM AI system should nail. Like "What year did the first Macintosh computer ship?" and "Who was the first CEO of Microsoft?" Easy stuff.

Lumo got about half of the answers right. It was convinced that the first Mac shipped in 2003 (off by about 20 years). On the other hand, it did know the correct number of floppies that Windows 95 shipped on (13). So. Mixed bag.

In other words: Lumo got so much wrong that it was not usable for any sort of research.

I then decided to ask Lumo some questions about myself. Lunduke.

Lunduke is Hard for AI Chatbots

Last year I noticed that OpenAI's ChatGPT was saying some pretty crazy things about yours truly. Stuff like "Lunduke has two clubbed feet," "Lunduke is a trans activist," and "Lunduke has a husband named Evan."

I gave OpenAI an ultimatum: either they needed to fix ChatGPT such that it would no longer spew out made-up, defamatory stuff about me, or they needed to stop ChatGPT from talking about Lunduke entirely.

In the end, OpenAI decided that there was no way to make ChatGPT output accurate information (seriously). So they added a "Bryan Lunduke" filter, so that any query that results in mentioning my full name causes ChatGPT to error out (amusingly, even that Lunduke filter only works about 80% of the time).

I decided to ask Proton's Lumo AI about Lunduke. Let's see how it compares to ChatGPT, right?

The results were insane.

Lumo on Shrooms

First, Lumo refused to spell my first name correctly (it used an "i" instead of a "y", and no amount of correcting it seemed to work). Worth noting that there is no human on Earth named "Brian Lunduke." Only "Bryan." Weird. But no biggie.

The rest of it, though, was wild.

Lumo is convinced that I am a "transgender man and advocate for transgender rights." Also I am, apparently, a critic of Israel and a crusader for social justice. Basically, Lumo invented Mirror Universe Lunduke.

Oh, and, like ChatGPT, Lumo is convinced I have a husband. This time his name is "Michael DeFreese." And, apparently, we got married in 2018. Which will be a surprise to my wife.

It gets weirder. I then asked Lumo about my "husband" the next day. Apparently, overnight, I had gotten divorced and re-married. I was now Mr. Bart Butler.

I spoke to the team at Proton to see what their plan for dealing with factual errors was. The team at Proton informed me that they could not reproduce the output I received, which I believe, as Lumo seems to generate wildly different "facts" almost every time it's used.
At the same time, Lumo changed to output a template response about providing "helpful, respectful" assistance (while not actually answering questions) whenever the word "Lunduke" was included. The Lumo team sent me this screenshot.

A few hours later, Lumo changed back to spouting hallucinations regarding Lunduke, but spontaneously learned how to spell my name correctly. So. That was a plus! Even if I was still an openly transgender man with an unnamed husband.

So, sure. Lumo may be almost completely incapable of outputting factual information. And it changes its mind on what made-up nonsense it spews out almost every few minutes. But, hey! At least Lumo has that reassuring "Conversation encrypted" message at the bottom of each chat. It's got that going for it.

======================================================================
Link to news story: https://lunduke.substack.com/p/proton-launches-hallucinatory-ai

--- Mystic BBS v1.12 A47 (Linux/64)
 * Origin: tqwNet Technology News (1337:1/100)