Subj : CRYPTO-GRAM, December 15, 2024 Part 6
To   : All
From : Sean Rima
Date : Mon Dec 23 2024 11:41 am

There is potential for AI models to be much more scalable and adaptable to more languages and countries than organizations of human moderators. But the implementations to date on platforms like Meta demonstrate that a lot more work [https://dig.watch/updates/the-consequences-of-metas-multilingual-content-moderation-strategies] needs to be done to make these systems fair and effective.

One thing that didn’t matter much in 2024 was corporate AI developers’ prohibitions on using their tools for politics. Despite market leader OpenAI’s emphasis on banning political uses [https://www.washingtonpost.com/technology/2023/08/28/ai-2024-election-campaigns-disinformation-ads/] and its use of AI to automatically reject a quarter-million requests [https://www.nbcnews.com/tech/chatgpt-rejected-image-generations-presidential-candidates-rcna179469] to generate images of political candidates, the company’s enforcement has been ineffective [https://www.washingtonpost.com/technology/2023/08/28/ai-2024-election-campaigns-disinformation-ads/] and actual use is widespread.

THE GENIE IS LOOSE

All of these trends -- both good and bad -- are likely to continue. As AI gets more powerful and capable, it is likely to infiltrate every aspect of politics. This will happen whether the AI’s performance is superhuman or suboptimal, whether it makes mistakes or not, and whether the balance of its use is positive or negative. All it takes is for one party, one campaign, one outside group, or even an individual to see an advantage in automation.

_This essay was written with Nathan E.
Sanders, and originally appeared in The Conversation [https://theconversation.com/the-apocalypse-that-wasnt-ai-was-everywhere-in-2024s-elections-but-deepfakes-and-misinformation-were-only-part-of-the-picture-244225]._

** *** ***** ******* *********** ************* **

DETECTING PEGASUS INFECTIONS
------------------------------------------------------------

[2024.12.06] [https://www.schneier.com/blog/archives/2024/12/detecting-pegasus-infections.html]

This tool [https://arstechnica.com/security/2024/12/1-phone-scanner-finds-seven-pegasus-spyware-infections/] seems to do a pretty good job.

> The company’s Mobile Threat Hunting feature uses a combination of malware signature-based detection, heuristics, and machine learning to look for anomalies in iOS and Android device activity or telltale signs of spyware infection. For paying iVerify customers, the tool regularly checks devices for potential compromise. But the company also offers a free version of the feature for anyone who downloads the iVerify Basics app for $1. These users can walk through steps to generate and send a special diagnostic utility file to iVerify and receive analysis within hours. Free users can use the tool once a month.
>
> iVerify’s infrastructure is built to be privacy-preserving, but to run the Mobile Threat Hunting feature, users must enter an email address so the company has a way to contact them if a scan turns up spyware -- as it did in the seven recent Pegasus discoveries.

** *** ***** ******* *********** ************* **

TRUST ISSUES IN AI
------------------------------------------------------------

[2024.12.09] [https://www.schneier.com/blog/archives/2024/12/trust-issues-in-ai.html]

_This essay was written with Nathan E. Sanders. It originally appeared as a response to Evgeny Morozov in _Boston Review_’s forum, “The AI We Deserve [https://www.bostonreview.net/forum/the-ai-we-deserve/].”_

For a technology that seems startling in its modernity, AI sure has a long history.
Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing -- and, often, on seed funding from the U.S. Department of Defense. But today’s tools are hardly the intentional product of the diverse generations of innovators that came before.

We agree with Morozov that the “refuseniks,” as he calls [https://www.bostonreview.net/forum/the-ai-we-deserve/] them, are wrong to see AI as “irreparably tainted” by its origins. AI is better understood as a creative, global field of human endeavor that has been largely captured by U.S. venture capitalists, private equity, and Big Tech. But that was never the inevitable outcome, and it doesn’t need to stay that way.

The internet is a case in point. The fact that it originated in the military is a historical curiosity, not an indication of its essential capabilities or social significance. Yes, it was created to connect different, incompatible Department of Defense networks. Yes, it was designed to survive the sorts of physical damage expected from a nuclear war. And yes, back then it was a bureaucratically controlled space where frivolity was discouraged and commerce was forbidden.

Over the decades, the internet transformed from military project to academic tool to the corporate marketplace it is today. These forces, each in turn, shaped what the internet was and what it could do. For most of us billions online today, the only internet we have ever known has been corporate -- because the internet didn’t flourish until the capitalists got hold of it.

AI followed a similar path. It was originally funded by the military, with the military’s goals in mind. But the Department of Defense didn’t design the modern ecosystem of AI any more than it did the modern internet. Arguably, its influence on AI was even less because AI simply didn’t work back then.
While the internet exploded in usage, AI hit a series of dead ends. The research discipline went through multiple “winters” when funders of all kinds -- military and corporate -- were disillusioned and research money dried up for years at a time. Since the release of ChatGPT, AI has reached the same endpoint as the internet: it is thoroughly dominated by corporate power. Modern AI, with its deep reinforcement learning and large language models, is shaped by venture capitalists, not the military -- nor even by idealistic academics anymore.

We agree with much of Morozov’s critique of corporate control, but it does not follow that we must reject the value of instrumental reason. Solving problems and pursuing goals is not a bad thing, and there is real cause to be excited about the uses of current AI. Morozov illustrates this from his own experience: he uses AI to pursue the explicit goal of language learning. AI tools promise to increase our individual power, amplifying our capabilities and endowing us with skills, knowledge, and abilities we would