commented: This is the same root complaint I have with those dumb things that generate slop-wikis of my open source work and docs. It's exactly the wrong use. It's an assault on the author and on the creative community. It replaces value-positive communication with value-negative bullshit.

commented: Totally; one of the reasons I've switched to paying for Kagi search is that they filter out sites like deepwiki that get in the way of me finding interesting content. It's easy enough to generate my own LLM summaries, based on my own prompts, if I feel I need them. It's much more difficult to locate other interesting human perspectives on the Internet while wading through slop.

commented: This seems like a great example of cluelessly saying "let's throw AI at it." One problem is that the AI doesn't seem to be very good. A second problem is that this seems like a really dodgy idea about how to use AI. As the post notes, authors already write abstracts. Unless the author has done a bad job, the value of having AI redo them is pretty low, even if the AI were good.

commented: I glanced through a summary myself, and it gets enough subtleties wrong to be possibly misleading; and yes, I agree with the author that in theory a paper's introduction should just be a better version of this. Although, let's be honest, there are many papers out there with, shall we say, sub-optimal writing.

Ignoring confabulations (because in my case there weren't too many important ones at a glance), I still took issue with the summary. While reading both it and the OP, I found myself reminded of The Monad Tutorial Fallacy, specifically this part:

... he finally has an "aha!" moment: everything is suddenly clear, and Joe Understands Monads! What has really happened, of course, is that Joe's brain has fit all the details together into a higher-level abstraction, a metaphor which Joe can use to get an intuitive grasp of monads ... The problem, of course, is that if Joe HAD thought of this before, it wouldn't have helped: the week of struggling through details was a necessary and integral part of forming Joe's ... intuition, not a sad consequence of his failure to hit upon the idea sooner.

I don't believe that papers are perfectly minimal ways of conveying ideas: certainly there is (perhaps lots of) room to summarize. But I do think that to be truly useful, a summary needs to align with whatever high-level abstractions work for our own brains, and the uncanny feeling I got from reading the AI summary came from a high-level abstraction that was out of alignment with my own way of thinking in several places.

What I fear is that LLM summaries are increasingly being used as shortcuts to intuition building, which requires either (1) puzzling through less-than-"perfect" explanations or (2) finding an explanation that is "perfect" for your brain. The danger I see is that the presentation is meant to always give the appearance of (2), even when it is off (again, ignoring confabulations), thereby building bad intuitions like "monads are just burritos."

commented: ACM is about to make papers free to read or download: yay, finally! A sizable swathe of all computer science research will soon be available without a subscription. Now they need a way to fill the revenue gap. They're a nonprofit doing useful things with the money, I assume. I'm no expert on the ACM's activities, but I imagine we'd all miss them if they were gone, so I support their attempt to make money from something besides paywalling papers.
Unfortunately, my first impression of this "ACM premium" subscription is that it isn't worthwhile for me, and I doubt it will be for anyone else. But the real harm here is that the ACM has made an effort to fill its revenue hole, and it looks like that effort will be wasted and the hole will remain. The harm is probably not that there will be some more bad AI summaries on the web (the summaries will be paywalled, and no one will pay to read them), nor is it that they're paywalled (again, because no one will want them, restricting them harms no one). Take all this with a grain of salt: I have no special knowledge of the ACM, and surely they've thought about this much more than I have, so I'm probably wrong.